
Users Beware: Anything Online Could be Fair Game for AI ‘Training’

Reddit just sold your data to an AI model. But that is relatively small news. Companies are quietly changing their terms of service to allow themselves to do exactly that.

Your only alternatives could be:

  1. Stop using the service and attempt to delete all of your data from it – assuming you even find out about it before it is a done deal
  2. Sue the company to stop them

Max Schrems, the Austrian attorney and privacy activist who is the bane of the existence of every tech company operating in Europe and the chief villain in the Schrems I and Schrems II EU high court decisions, is now going after Meta. Again.

Old Max has filed a complaint in 11 European countries arguing that a change to Meta’s privacy policy, due to go into effect on June 26, would permit the use of personal data – even data that is private – to train its AI.

This is different from using publicly posted data for that purpose.

It might include information posted by users years ago, possibly even by people who have since died (the dead can’t object to changes in Meta’s privacy policy).

Schrems’ complaint says that Meta wants to take all public and non-public data collected since 2007 and use it for some ill-defined purpose.

Facebook sent users an email last month allowing them to opt out. It is possible that I got one but I sure don’t remember it. It probably looked like another bit of Facebook spam – by design.

Assume for the moment that you licensed Meta’s soon-to-be-released model. Are you going to get sued? Well, as the saying goes, you can sue a turnip. But most likely the one who gets whacked is going to be Meta. Of course, all of the money you spent tuning the model for your purposes still comes out of your pocket.

Schrems wants the national privacy regulators to stop Facebook before it starts, because once it starts it is likely unstoppable.

Of course, customers should DEMAND that their vendors provide indemnification: pay any defense costs as well as the cost to rebuild their models on a non-infringing basis if they get sued. Good luck with that, although OpenAI is doing part of it.

Businesses will have to conduct periodic compliance audits of their models. That won’t be cheap, and what happens if you find a problem?

Multiple companies are in the spotlight, Slack and ChatGPT among them. If you are using one of these models and inputting your data into it, you need to understand your rights. And your responsibilities.

No easy answer here. Credit: CSO Online
