
autotldr Bot ,

This is the best summary I could come up with:


“We’re grateful for the progress leading companies have made toward fulfilling their voluntary commitments in addition to what is required by the executive order,” says Robyn Patterson, a spokesperson for the White House.

Without comprehensive federal legislation, the best the US can do right now is to demand that companies follow through on these voluntary commitments, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley.

After they signed the commitments, Anthropic, Google, Microsoft, and OpenAI founded the Frontier Model Forum, a nonprofit that aims to facilitate discussions and actions on AI safety and responsibility.

“The natural question is: Does [the technical fix] meaningfully make progress and address the underlying social concerns that motivate why we want to know whether content is machine generated or not?” he adds.

In the past year, the company has pushed out research on deception, jailbreaking, strategies to mitigate discrimination, and emergent capabilities such as models’ ability to tamper with their own code or engage in persuasion.

Meanwhile, Microsoft has used satellite imagery and AI to improve responses to wildfires in Maui and map climate-vulnerable populations, which helps researchers expose risks such as food insecurity, forced migration, and disease.


The original article contains 3,300 words, the summary contains 197 words. Saved 94%. I’m a bot and I’m open source!

Asafum ,

I swear the only legitimate response to a corporation saying “I’m going to self-regulate”

Is:

“HAHAHAHAHAHAHAHA HAHAHAHAHAHAHA!!! Ok anyway, here are the regulations we’re going to enforce.”

Alphane_Moon ,
@Alphane_Moon@lemmy.world avatar

The fact that this is not taken as a given speaks volumes about how deeply ingrained corruption is in our society.

I would almost argue that even “neutral” newswires like AP/Reuters should use language like “Companies A, B, and C have created a common PR organisation that will focus on self-regulation polemics …”.

Alwaysnownevernotme ,

Awww they promised?

That’s fucking adorable.

The government is supposed to be the triggerman, not a supplicant.

homesweethomeMrL ,

But but but . . . They have the money

Also, and more probably, no one in a decision-making capacity in The Government knows how AI works and really doesn’t want to find out.

j4k3 ,
@j4k3@lemmy.world avatar

Any government authoritarian intervention in AI is a joke. The US has a greatly diluted sense of relevance. AI research is international, and restrictions will only make the USA irrelevant. Even the names you think of as American are actually funding advancements coming from other countries, most of them in Germany and Asia. I’ve read several white papers on current AI research recently; none were from US-based schools or academics. The USA simply isn’t as central as some imperial nonsense narrative suggests. AI is a vital military technology. Limiting it through Luddite isolationism will massively reduce the experience pool and push the research further into places where fresh economic growth is happening without regressive stagnation and the corruption of toxic wealth consolidation.

0x0 ,

You’re kinda missing the point… the big American names are claiming AI should be limited because it’s so dangerous.

Who should control AI? Them.

Sp00kyB00k ,

Like the other SROs in the financial system. That worked out fine.

henfredemars ,

Self-regulation? Yeah, we got it, right next to trickle-down economics.

mox ,