My bet is: it’s going to vary on a case-by-case basis.
A large enough neural network can store, and then recover, a 1:1 copy of a work… but a large enough corpus contains more data than could ever fit in a neural network of a given size, even if some fragments of the input works remain recoverable… so it will come down to how big a recoverable fragment has to be before it counts as copyright infringement… but then again, reproducing even a whole work is considered fair use for some purposes… though not in every country.
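The capacity point above can be sketched with some back-of-the-envelope arithmetic (all numbers here are hypothetical, and counting raw parameter bits is only a loose upper bound on what a network can memorize, not a formal capacity result):

```python
# Illustrative sketch: a network's parameters put a hard ceiling on how many
# bits it could memorize verbatim, so a corpus far larger than that ceiling
# cannot be stored 1:1 in full -- but a single small work could still fit.

def max_memorized_bytes(num_params: int, bits_per_param: int = 32) -> int:
    """Loose upper bound on verbatim storage: total parameter bits, in bytes."""
    return num_params * bits_per_param // 8

# Hypothetical numbers: a 7-billion-parameter model vs a 10 TiB training corpus.
model_capacity = max_memorized_bytes(7_000_000_000)  # ~28 GB ceiling
corpus_size = 10 * 1024**4                           # 10 TiB of text

print(corpus_size > model_capacity)   # True: the whole corpus can't be copied 1:1

single_work = 2 * 1024**2             # a 2 MiB book
print(single_work < model_capacity)   # True: one work could fit verbatim
```

Which is exactly why the question shifts from "can it copy?" to "how much of any one work is actually recoverable?"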
Copyright laws are not necessarily wrong; just drop the “author’s death plus 70 years” term, go back to a more reasonable “4 years since publication”, and they make much more sense.