Multiple accounts have been created for the sole purpose of posting advertisements, or replies containing unsolicited advertising.

Accounts that solely or persistently post advertisements may be terminated.

mimsical , to random
@mimsical@mastodon.social avatar

I'm a reporter looking to interview freelancers who have seen demand for their work go down -- or shift -- in the wake of ChatGPT and all the AI image generators.

This is for a story about how freelancers, specifically, have seen demand for their labor change as use of AI has spread.

Any help is greatly appreciated!

NatureMC ,
@NatureMC@mastodon.online avatar

@mimsical Your post ⬆️ could be interesting for the group @writers and for - I remember they have already discussed the problem of fakes as serious competition for their real books.
NPR report: https://www.npr.org/2024/03/13/1237888126/growing-number-ai-scam-books-amazon?utm_medium=JSONFeed&utm_campaign=news&utm_source=press.coop
In the New Republic: https://newrepublic.com/article/180395/ai-artifical-intelligence-writing-human-creativity

Author @janefriedman on identity theft via LLM scams: https://janefriedman.com/i-would-rather-see-my-books-pirated/

ajsadauskas , to technology
@ajsadauskas@aus.social avatar

It's time to call a spade a spade. ChatGPT isn't just hallucinating. It's a bullshit machine.

From TFA (thanks @mxtiffanyleigh for sharing):

"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two 'species': hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."

https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

@technology

bibliolater , to science
@bibliolater@qoto.org avatar

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

bibliolater , to science
@bibliolater@qoto.org avatar

"All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information. LLMs are not appropriate for use on medical coding tasks without additional research."

Soroush, A. et al. (2024) 'Large language models are poor medical coders — benchmarking of medical code querying,' NEJM AI [Preprint]. https://doi.org/10.1056/aidbp2300040. @science
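
The shape of the benchmark described is easy to picture. Below is a minimal sketch of the idea (mine, not the paper's code): `query_model` is a hypothetical stand-in for whatever LLM client you use, and the three ICD-10-CM codes are just examples.

```python
# Minimal sketch of a medical-code benchmark in the spirit of Soroush et al.
# NOT the paper's code. `query_model` is a hypothetical stand-in for any LLM API.

# A tiny ground-truth sample: ICD-10-CM codes and their official descriptions.
GROUND_TRUTH = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError

def run_benchmark() -> float:
    correct = 0
    for code, description in GROUND_TRUTH.items():
        answer = query_model(
            f"Give the ICD-10-CM description for code {code}. Reply with the description only."
        )
        # Exact match is a crude score, but the paper's point survives it:
        # models often return plausible-sounding yet wrong or fabricated descriptions.
        if answer.strip().lower() == description.lower():
            correct += 1
    return correct / len(GROUND_TRUTH)
```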

bibliolater , to science
@bibliolater@qoto.org avatar

ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds

"MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals."

https://www.psypost.org/chatgpt-hallucinates-fake-but-plausible-scientific-citations-at-a-staggering-rate-study-finds/

@science @ai

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png
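
One practical takeaway: because the fabricated DOIs are well formed but usually point nowhere, simply resolving them is a cheap first-pass filter. A minimal sketch against Crossref's public REST API (the endpoint is real; the second DOI below is deliberately made up):

```python
# First-pass filter for hallucinated citations: does the DOI resolve?
# A registered DOI returns metadata from Crossref; a fabricated one returns 404.
# Note: a real DOI attached to the wrong paper would still need a title check.
import requests

def doi_exists(doi: str) -> bool:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1016/j.patter.2024.100988"))  # True: the Patterns survey cited above
print(doi_exists("10.9999/definitely.not.real"))   # False: fabricated for illustration
```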

NatureMC , to writers
@NatureMC@mastodon.online avatar

Every time you think it couldn't get any worse, a new revelation tops it. As an author, I wonder how long it will take for the book market to be completely enshittified.
Thank you for the ⬆️! @writers @bookstodon


NatureMC , to writers
@NatureMC@mastodon.online avatar

beware: "Every new book seems to have some kind of companion book, some book that's trying to steal sales." AI-generated summaries and even entire books appear shortly after your book is published! It causes reputational and financial harm. https://www.npr.org/2024/03/13/1237888126/growing-number-ai-scam-books-amazon @writers

appassionato , to bookstodon
@appassionato@mastodon.social avatar

The Language of Deception: Weaponizing Next Generation AI by Justin Hutchens

A penetrating look at the dark side of emerging AI technologies.
In The Language of Deception: Weaponizing Next Generation AI, artificial intelligence and cybersecurity veteran Justin Hutchens delivers an incisive and penetrating look at how contemporary and future AI can and will be weaponized for malicious and adversarial purposes.

@bookstodon

thomasrenkert , to histodons German
@thomasrenkert@hcommons.social avatar

Small update: 🤖⚔️ - our language model specialized in translating Middle High German into modern German, and explaining the texts to students - is halfway done with another round of training...

@edutooters @fedilz @histodons
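
For anyone curious how such a fine-tune is typically consumed, here is a minimal sketch using Hugging Face transformers. The model ID is hypothetical (the checkpoint isn't public), Mixtral-8x7B needs substantial GPU memory, and the sample line is the opening of the Nibelungenlied.

```python
# Sketch of using a (hypothetical) Middle High German fine-tune of Mixtral.
# The repo name below is made up; swap in the real checkpoint once released.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hcdh/mhg-mixtral-8x7b-instruct"  # hypothetical model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Translate into modern German: Uns ist in alten mæren wunders vil geseit"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```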

ajsadauskas , to technology
@ajsadauskas@aus.social avatar

In five years' time, some CTO will review the mysterious outages or technical debt in their organisation.

They will unearth a mess of poorly written, poorly documented, barely functioning code their staff don't understand.

They will conclude that they did not actually save money by replacing human developers with LLMs.

@technology

hauschke , to academicchatter
@hauschke@mastodon.social avatar

Did any of you already receive a peer review that was obviously created by an LLM? I can see how publishers speed up their publication cycles, get rid of costly human interactions, and deliver some seemingly plausible text to authors by just throwing manuscripts at an LLM and then letting it generate reviews.

@academicchatter

thomasrenkert , to random
@thomasrenkert@hcommons.social avatar

🤖⚔️
@fnieser.bsky.social and I have built one. An LLM based on Mixtral-8x7B-Instruct-v0.1, fine-tuned on Middle High German literature.
Still just a proof of concept, but perhaps soon in classroom use?

thomasrenkert , to histodons German
@thomasrenkert@hcommons.social avatar

--> preparations are running at full speed.

https://bsky.app/profile/hcdh.bsky.social/post/3kke755cq732r

(and the results may soon be found on Huggingface...)

@histodons @machinelearning @digitalhumanities

bibliolater , to science
@bibliolater@qoto.org avatar

"In this technical report, we demonstrate a single scenario where a Large Language Model acts misaligned and strategically deceives its users without being instructed to act in this manner," https://www.livescience.com/technology/artificial-intelligence/chatgpt-will-lie-cheat-and-use-insider-trading-when-under-pressure-to-make-money-research-shows @science

Colarusso , to random
@Colarusso@mastodon.social avatar

Flip a Poem; Roll an "App": Turn the outcome of a coin flip into a poem and package this as an "app"¹

https://sadlynothavocdinosaur.com/posts/coinflip-poem/

By way of foreshadowing, LIT Prompts comes preloaded with a virtual coin and 4, 6, 8, 10, & 20-sided dice. 🤔 I wonder what could be coming?²


¹ Day 5 of my series on prompt engineering. https://sadlynothavocdinosaur.com/posts/50-days-of-lit-prompts/
² https://colarusso.github.io/dm

Animated GIF of coin flip 2 poem "web app"
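
The pattern under the hood is tiny: randomness in, templated prompt out. A sketch in plain Python (mine, not the LIT Prompts internals; `ask_llm` is a hypothetical stand-in for any chat-completion client):

```python
# Coin-flip-to-poem: the core idea of the "app" in a few lines.
# `ask_llm` is a hypothetical stand-in for your chat-completion client.
import random

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError

flip = random.choice(["heads", "tails"])
poem = ask_llm(f"Write a four-line poem about a coin that just landed {flip}.")
print(f"The coin came up {flip}!\n\n{poem}")
```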

Colarusso OP ,
@Colarusso@mastodon.social avatar

I inadvertently broke this thread earlier today when I made a standalone post.¹

So, ICYMI, I Turned My Scholarly Papers Into Chatbots so People Don't Have To Read Them 🤞²

https://sadlynothavocdinosaur.com/posts/papers2bots/

No need to read every word; now you can engage with the substance of my works by "talking" with them.


¹ https://mastodon.social/@Colarusso/111839601153838121
² Day 6 of my series on prompt engineering. https://sadlynothavocdinosaur.com/posts/50-days-of-lit-prompts/

@academicchatter
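
"Talking" with a paper is usually retrieval-augmented generation: split the paper into chunks, pull the chunks most relevant to the question, and hand them to a model as context. A toy sketch of that pattern (not Colarusso's implementation; real systems rank chunks with embeddings rather than word overlap):

```python
# Toy "chat with a paper" pipeline: naive retrieval + prompt assembly.
# Real systems use embedding similarity; word overlap keeps this self-contained.
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank paper chunks by how many question words they share."""
    q_words = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(q_words & set(c.lower().split())))[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n\n".join(retrieve(question, chunks))
    # Hand this prompt to any chat-completion API; stubbed out here.
    return f"Answer using only these excerpts:\n{context}\n\nQuestion: {question}"

paper_chunks = [
    "Abstract: placeholder text summarizing the paper...",
    "Methods: placeholder text describing the approach...",
]
print(build_prompt("What method does the paper use?", paper_chunks))
```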

ajsadauskas , to technology
@ajsadauskas@aus.social avatar

Hey, check out this new product on Amazon, called "I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy". Looks amazing:

https://www.theverge.com/2024/1/12/24036156/openai-policy-amazon-ai-listings

@technology

bibliolater , to science
@bibliolater@qoto.org avatar

"In this paper, we presented a proof of concept for an artificial intelligent agent system capable of (semi-)autonomously designing, planning and multistep executing scientific experiments."

Boiko, D.A., MacKnight, R., Kline, B. et al. Autonomous chemical research with large language models. Nature 624, 570–578 (2023). https://doi.org/10.1038/s41586-023-06792-0 @science @engineering

ModernDayBartleby , to bookstodon
@ModernDayBartleby@mstdn.plus avatar

And so it begins -
PASSING by Nella Larsen (1929) via Oshun Publishing imbibed at Yanaka Coffee
@bookstodon

ModernDayBartleby OP ,
@ModernDayBartleby@mstdn.plus avatar

ATLAS OF AI by Kate Crawford via Yale University Press care of Arakawa Public Library imbibed at Mr Hippo Coffee
@bookstodon
