catrionagold, to academicchatter
@catrionagold@mastodon.social

An academic/activist crowdsourcing request:

Who is critically researching, writing or doing cool activism on the environmental impacts of AI?

I’m particularly interested in finding UK-based folks, but all recommendations are appreciated 💕 🙏

@academicchatter

bibliolater, to psychology
@bibliolater@qoto.org

Are you 80% angry and 2% sad? Why ‘emotional AI’ is fraught with problems

Emotional AI’s essential problem is that we can’t definitively say what emotions are. “Put a room of psychologists together and you will have fundamental disagreements,” says McStay. “There is no baseline, agreed definition of what emotion is.”

Nor is there agreement on how emotions are expressed. Lisa Feldman Barrett is a professor of psychology at Northeastern University in Boston, Massachusetts, and in 2019 she and four other scientists came together with a simple question: can we accurately infer emotions from facial movements alone? “We read and summarised more than 1,000 papers,” Barrett says. “And we did something that nobody else to date had done: we came to a consensus over what the data says.”

The consensus? We can’t.

https://www.theguardian.com/technology/article/2024/jun/23/emotional-artificial-intelligence-chatgpt-4o-hume-algorithmic-bias

@ai @psychology

ChrisMayLA6, to bookstodon
@ChrisMayLA6@zirk.us

Tom Gauld is having a great run... today's cartoon for the Guardian is another corker...

Yes, of course.... the robot apocalypse is fiction, sure.... nothing to see here

@bookstodon

JustCodeCulture, to anthropology
@JustCodeCulture@mastodon.social

New review essay on @lmesseri's tremendous new book on ethnography & tech, social hopes, & the false dreams of tech solutionism. Also discussed: work by André Brock, Zeynep Tufekci & Kelsie Nabben on Black Twitter, Twitter & ethnographies of DAOs.

@histodons
@commodon
@anthropology
@sociology

https://z.umn.edu/EthnographicSublime

bibliolater, to science
@bibliolater@qoto.org

Backstabbing, bluffing and playing dead: has AI learned to deceive? – podcast

“Dr Peter Park, an AI existential safety researcher at MIT and author of the research, tells Ian Sample about the different examples of deception he uncovered, and why they will be so difficult to tackle as long as AI remains a black box.”

https://www.theguardian.com/science/audio/2024/may/14/backstabbing-bluffing-and-playing-dead-has-ai-learned-to-deceive-podcast

@science

attribution: Orion 8, Public domain, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Icon_announcer.svg

bibliolater, to science
@bibliolater@qoto.org

"A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information."

Agnes Mercedes Kloft, Robin Welsch, Thomas Kosch, and Steeven Villa. 2024. "AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 299, 1–24. https://doi.org/10.1145/3613904.3642633
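
A toy simulation of the mechanism the abstract describes (my own sketch, not the paper's cognitive model): if expecting AI help leads participants to gather more noisy evidence samples before deciding, accuracy rises with the number of samples, so an expectation-driven information-gathering bias can by itself produce better performance.

import numpy as np

rng = np.random.default_rng(0)

def decision_accuracy(n_samples, signal=0.3, trials=10_000):
    # Each trial averages n noisy evidence samples with true mean `signal`;
    # the decision counts as correct when the average has the correct sign.
    evidence = rng.normal(loc=signal, scale=1.0, size=(trials, n_samples))
    return (evidence.mean(axis=1) > 0).mean()

# Gathering more evidence -> higher accuracy, with no real "AI" involved.
for n in (2, 5, 10, 20):
    print(f"{n:>2} samples -> accuracy {decision_accuracy(n):.2f}")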

@science @technology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater, to science
@bibliolater@qoto.org

AI deception: A survey of examples, risks, and potential solutions

"Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test. AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems."

DOI: https://doi.org/10.1016/j.patter.2024.100988

@science

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

JustCodeCulture, to sociology
@JustCodeCulture@mastodon.social

Congratulations to Harvard University History of Science doctoral candidate Aaron Gluck-Thaler on the 2024-25 CBI Tomash Fellowship. We are thrilled to have Aaron as a fellow in the upcoming academic year!

@histodons
@sociology
@commodon

https://z.umn.edu/2024-25-Tomash

bibliolater, to science
@bibliolater@qoto.org

DeepMind’s AI can ‘predict how all of life’s molecules interact with each other’

"AlphaFold 3 is able to envision how the complex shapes and networks of molecules – present in every cell in the human body – are connected and how the smallest of changes in these can affect biological functions that can lead to diseases."

https://www.independent.co.uk/news/science/deepmind-dna-london-university-of-oxford-university-of-birmingham-b2541665.html

@science

bibliolater, to archaeodons
@bibliolater@qoto.org

‘Second renaissance’: tech uncovers ancient scroll secrets of Plato and co

"The project belongs to a new wave of efforts that seek to read, restore and translate ancient and even lost languages with cutting-edge technologies. Armed with modern tools, many powered by artificial intelligence, scholars are starting to read what had long been considered unreadable."

https://www.theguardian.com/books/article/2024/may/03/how-scholars-armed-with-cutting-edge-technology-are-unfurling-secrets-of-ancient-scrolls

@archaeodons

bibliolater, to psychology
@bibliolater@qoto.org

"Even though it was hoped that machines might overcome human bias, this assumption often fails due to a problematic or theoretically implausible selection of variables that are fed into the model and because of small size, low representativeness, and presence of bias in the training data [5.]."

Suchotzki, K. and Gamer, M. (2024) 'Detecting deception with artificial intelligence: promises and perils,' Trends in Cognitive Sciences [Preprint]. https://doi.org/10.1016/j.tics.2024.04.002.
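
One of these pitfalls fits in a few lines (a toy example of my own, not from the paper): on a small, unbalanced training sample, a "detector" that always predicts the majority class scores high accuracy while carrying no information about deception at all.

import numpy as np

rng = np.random.default_rng(1)

# Small, unrepresentative sample: roughly 90% of 50 statements are truthful.
is_truthful = rng.random(50) < 0.9

# A degenerate "lie detector" predicting the majority class every time
# still looks ~90% accurate, despite having zero signal.
majority_accuracy = max(is_truthful.mean(), 1 - is_truthful.mean())
print(f"majority-class accuracy: {majority_accuracy:.2f}")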

@science @psychology

attribution: Madhav-Malhotra-003, CC0, via Wikimedia Commons. Page URL: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_Word_Cloud.png

bibliolater, to science
@bibliolater@qoto.org

"While ChatGPT-4 correlates closely with established risk stratification tools regarding mean scores, its inconsistency when presented with identical patient data on separate occasions raises concerns about its reliability."

Heston TF, Lewis LM (2024) ChatGPT provides inconsistent risk-stratification of patients with atraumatic chest pain. PLOS ONE 19(4): e0301854. https://doi.org/10.1371/journal.pone.0301854
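
The test-retest check the study describes can be sketched in a few lines; query_model below is a hypothetical stand-in (a random stub here), not the study's code or a real API.

import random
import statistics

def query_model(vignette):
    # Stand-in for an LLM call: a real check would send the vignette to the
    # model and parse the returned risk score. Here, a simulated 0-10 score.
    return min(10.0, max(0.0, random.gauss(5.0, 1.2)))

vignette = "58-year-old, atraumatic chest pain, ..."  # identical input every call
scores = [query_model(vignette) for _ in range(20)]
print(f"mean={statistics.mean(scores):.2f}, sd={statistics.stdev(scores):.2f}")
# A nonzero spread on identical input is the inconsistency the authors flag.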

@science

bibliolater, to psychology
@bibliolater@qoto.org

The hidden risk of letting AI decide – losing the skills to choose for ourselves

"Making thoughtful and defensible decisions requires practice and self-discipline. And this is where the hidden harm that AI exposes people to comes in: AI does most of its “thinking” behind the scenes and presents users with answers that are stripped of context and deliberation."

https://theconversation.com/the-hidden-risk-of-letting-ai-decide-losing-the-skills-to-choose-for-ourselves-227311

@psychology

CultureDesk, to bookstodon
@CultureDesk@flipboard.social

AI-generated books on Amazon now have the potential to kill people, as they've moved into the realm of mushroom foraging. Guides have popped up like, well, mushrooms, packed with information that makes no sense and could easily be dangerous, illustrated with structures that are "the mycological equivalent of a picture of a hot blond with six fingers and too many teeth," writes Vox's Constance Grady. Here's more.

https://flip.it/ekbDMe

@bookstodon
