It should be fully legal because it’s still a person doing it. As Cory Doctorow said in this article:
Break down the steps of training a model and it quickly becomes apparent why it’s technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn’t exist, then you should support scraping at scale: pluralistic.net/…/how-to-think-about-scraping/
Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia: theguardian.com/…/agatha-christie-alzheimers-rese…
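This kind of quantitative observation is easy to sketch. A minimal illustration of measuring vocabulary across texts (the sample strings below are placeholders, not actual Christie novels):

```python
import re

def vocabulary_size(text: str) -> int:
    """Count the distinct words (case-insensitive) used in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words))

# Placeholder texts standing in for successive novels.
novels = {
    "early novel": "the clue was plain yet the detective pondered every subtle detail",
    "late novel": "the clue was plain and the detective saw the clue was plain",
}

for title, text in novels.items():
    print(title, vocabulary_size(text))
```

Run over real novels, a steady decline in this number across publication dates is exactly the kind of measurement the researchers made: an observation about the works, not a copy of them.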
The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software: eff.org/…/remembering-case-established-code-speec…
That’s all these models are: someone’s quantitative analysis of the training works in relation to one another, not the works themselves. I feel like this is where most people get tripped up. Once you understand how these things actually work, it all becomes obvious.
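The distinction above can be made concrete with a toy sketch: a "model" here is just aggregate statistics derived from texts, after which the originals can be discarded. (This is a deliberately simplified bigram counter, not how large language models are trained, but the principle that only derived measurements survive is the same.)

```python
from collections import Counter

def train(texts):
    """Build a bigram-count 'model' from a list of texts."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        counts.update(zip(words, words[1:]))
    return counts  # only derived statistics survive, not the texts

model = train(["the cat sat", "the cat ran"])
# `model` holds word-pair counts like ('the', 'cat'): 2 —
# relationships between the inputs, not the inputs themselves.
```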