I do not propose that, and it is not necessarily any output.
Their first question is: what do they want the AI to do? And if they want it to be perfect, then they need to use perfect training data, not human output.
Errors. I rewrote all my stuff up there with "Fuck you OpenAI" or something like that in spring and got banned for 3 months. So after a reminder in my calendar that the 3 months were up, I got back to work a little more sophisticatedly.
Most improvements in machine learning have been made by increasing the amount of data (and by using models that generalize better over larger datasets).
Perfect data isn’t needed as the errors will “even out”. Although now there’s the problem that most new content on the Internet is low quality AI garbage.
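As a toy illustration of the "even out" intuition (a minimal sketch; the noise model and numbers are my assumptions, not from the thread): if each training label carries independent, unbiased noise, the average error shrinks as the dataset grows, per the law of large numbers.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0

def noisy_label(true_value: float) -> float:
    # Unbiased noise: each label is off by up to +/-2, but the mean error is zero.
    return true_value + random.uniform(-2.0, 2.0)

for n in (10, 1_000, 100_000):
    labels = [noisy_label(TRUE_VALUE) for _ in range(n)]
    estimate = sum(labels) / n
    print(f"n={n:>7}  error of the mean: {abs(estimate - TRUE_VALUE):.4f}")
```

Note that this only holds for unbiased, independent noise. Systematic errors, like a flood of similarly flawed AI-generated content, are correlated and do not average out, which is exactly the objection raised below.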
Perfect data isn’t needed as the errors will “even out”.
That is an assumption.
I do not think that it is a correct assumption.
now there’s the problem that most new content on the Internet is low quality AI garbage.
This reminds me of a recommendation from some philosopher - I forget who it was - that you should only read books that are at least 100 years old.
15 years ago, people made fun of AI models for mistaking some detail in a bush for a dog. Over time the models became more robust to those kinds of errors. What changed was more data and better models.
It’s the same type of error as hallucination. The model is overly confident about a thing it’s wrong about. I don’t see why these types of errors would be any different.
That's not correct, btw. AI is supposed to be creative and come up with new text/images/ideas, even with perfect training data. That's what creativity means: we want it to come up with new text out of thin air, and perfect training data is not going to change anything about that. We'd need to remove the ability to generate fictional stories and lots of other kinds of answers too. Or come up with an entirely different approach.