I'm a reporter looking to interview freelancers who have seen demand for their work go down -- or shift -- in the wake of ChatGPT and all the AI image generators.
This is for a story about how freelancers, specifically, have seen demand for their labor change as use of AI has spread.
"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.
"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."
AI deception: A survey of examples, risks, and potential solutions
"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."
"All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information. LLMs are not appropriate for use on medical coding tasks without additional research."
ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds
"MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."
Every time you think it couldn't get any worse, a new revelation tops it off. As an author, I wonder how long it will take for the book market to be completely enshittified.
Thank you for the #giftArticle! ⬆️ @writers@bookstodon
The Language of Deception: Weaponizing Next Generation AI by Justin Hutchens
A penetrating look at the dark side of emerging AI technologies
In The Language of Deception: Weaponizing Next Generation AI, artificial intelligence and cybersecurity veteran Justin Hutchens delivers an incisive and penetrating look at how contemporary and future AI can and will be weaponized for malicious and adversarial purposes.
OMG, such doom! 😱 The first #podcasts are being scripted by a specialised #podcast #LLM. Why the hell am I doing all this work with research, individual ideas, scripting and working on my text???
I got that intro script in under 2 seconds and will NOT promote the #AI website. 🤢
And I promise: MY podcast will always be homespun, all my mistakes included!
But I see a wave of rubbish rolling in here too! 😭 @writers
Small update: 🤖⚔️ #ParzivAI - our #GenAI language model specialized in translating #MiddleHighGerman into modern #German, and explaining the #MiddleAges to students - is halfway done with another round of training...
Has any of you already received #peerreview that was obviously created by an #LLM? I can see how #predatorypublishers could speed up their publication cycles, get rid of costly human interaction, and deliver seemingly plausible text to authors by just throwing manuscripts at an LLM and letting it generate the reviews.
🤖⚔️ @fnieser.bsky.social and I built #ParzivAI: an #llm based on Mixtral-8x7B-Instruct-v0.1 and fine-tuned on Middle High German literature.
Still a proof of concept for now, but perhaps soon in classroom use?
"In this paper, we presented a proof of concept for an artificial intelligent agent system capable of (semi-)autonomously designing, planning and multistep executing scientific experiments."