I’ve read the source Nature article (skimmed through the parts that were beyond my understanding) and I did not get the same impression.
I am aware that LLM service providers regularly use AI-generated text for additional training (from my understanding, this is done to “tune” the results toward a certain style). That is not a new development.
From my limited understanding, LLM model degeneracy is still a relevant concern in the medium to long term. If an increasing percentage of your net new training content is itself LLM-generated (and you have difficulty identifying LLM-generated content), it stands to reason that you would eventually encounter model degeneracy.
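To illustrate what I mean (a toy sketch of my assumed mechanism, not the paper’s actual experimental setup): if each “generation” of training data is sampled from the previous generation’s outputs, diversity can only shrink, because nothing outside the current corpus can ever reappear.

```python
import random

def degeneracy_sketch(n_items=1000, generations=20, seed=0):
    """Toy drift model: each 'generation' is drawn entirely from the
    previous generation's output. Track how many distinct items survive."""
    random.seed(seed)
    data = list(range(n_items))  # generation 0: a fully diverse corpus
    history = [len(set(data))]
    for _ in range(generations):
        # the new corpus is resampled from the old one, so its support
        # is a subset of the old support -- diversity never recovers
        data = [random.choice(data) for _ in range(n_items)]
        history.append(len(set(data)))
    return history

hist = degeneracy_sketch()
print(hist[0], hist[-1])  # distinct items at generation 0 vs. generation 20
```

Obviously real training mixes in fresh human data and the model generalizes rather than memorizing, so this overstates the effect; the point is just that a closed loop with no reliable filter for synthetic content loses information monotonically.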
I am not saying you’re wrong, just looking for more information on this issue.