As someone who works with corporate data, I can say that spotting bad data in a large data set is hard, and it costs significant money and time.
I can see why this technique would be effective at deterring people from stealing data to train AI. The options for the people building the AI become: pay for quality data (and get a good AI), avoid the dataset (a weaker AI, but not a seriously broken one), or use the poisoned data and end up with an AI so poor that no one is willing to pay for it.
I work in the space of marketing data. Poisoning the marketing data that data brokers trade in would be a big problem for them. There are records for over 1 billion people, and those companies all have an allergy to employing people to check quality, so they would not spot the data being degraded. Cleaning such a large data set would cost more than the data set is worth, and at a much lower poisoning rate than the examples in this article.
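To make the economics of that last point concrete, here is a minimal back-of-the-envelope sketch. The record count comes from the comment above; the per-record value and verification cost are purely illustrative assumptions, not real market prices.

```python
# Break-even sketch: when does cleaning a poisoned data set cost more
# than the data set is worth? All dollar figures are assumptions.

RECORDS = 1_000_000_000          # ~1 billion people, per the comment above
VALUE_PER_RECORD = 0.005         # assumed resale value per record (USD)
CHECK_COST_PER_RECORD = 0.01     # assumed cost to verify one record (USD)

dataset_value = RECORDS * VALUE_PER_RECORD
full_clean_cost = RECORDS * CHECK_COST_PER_RECORD

# If poisoned records are not identifiable without checking, every record
# must be verified, so the cleaning cost is fixed regardless of how small
# the poisoning rate actually is.
print(f"dataset value:   ${dataset_value:,.0f}")
print(f"full clean cost: ${full_clean_cost:,.0f}")
print(f"cleaning viable: {full_clean_cost < dataset_value}")
```

Under these assumed numbers, cleaning costs double what the data is worth, which is the asymmetry that makes even a small amount of poisoning an effective deterrent.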