A proposed class action against Microsoft’s LinkedIn over the use of private messages to train generative AI has been dismissed.
Plaintiff Alessandro De La Torre filed notice of the dismissal without prejudice in San Jose, California federal court last Thursday, nine days after the lawsuit was filed.
LinkedIn had claimed the lawsuit had no merit.
The dismissal was filed the same day LinkedIn said in a blog post that it had not disclosed users’ private messages for AI training.
Private messages
“LinkedIn’s belated disclosures here left consumers rightly concerned and confused about what was being used to train AI,” said Eli Wade-Scott, managing partner at Edelson PC, the firm that had represented De La Torre, in an emailed statement.
“Users can take comfort, at least, that LinkedIn has shown us evidence that it did not use their private messages to do that. We appreciate the professionalism of LinkedIn’s team.”
Sarah Wright, a vice-president at LinkedIn, said in a Thursday post that LinkedIn had “never” disclosed users’ private messages for AI training.
The post came more than a week after De La Torre’s proposed class action was filed on the night of 22 January.
As companies have rushed to roll out generative AI tools, individuals, organisations and regulators have raised concerns over how firms acquire the vast amounts of data used to train their AI offerings, and whether they violate individuals’ privacy and copyright law in the process.
De La Torre’s case was originally filed on behalf of millions of LinkedIn Premium customers, alleging LinkedIn disclosed their private messages to third parties without permission to train generative artificial intelligence models.
Data sharing
The lawsuit alleged LinkedIn quietly introduced a privacy setting last August that allowed users to enable or disable the sharing of their personal data for AI training.
LinkedIn then discreetly updated its privacy policy on 18 September to say data could be used to train AI models, and in a “frequently asked questions” section said opting out “does not affect training that has already taken place”.
The complaint argued that this attempt to “cover its tracks” suggested LinkedIn was fully aware it had violated its promise to use personal data only to support and improve its platform, and had instead shared users’ data with third parties.
The New York Times sued ChatGPT developer OpenAI and backer Microsoft in December 2023 over allegedly using its copyrighted content to train OpenAI’s models.
OpenAI says it uses freely available content and that its actions are protected by fair use principles.