
reddig33 , in Classic Amazon

Is that an Amazon problem, or a government admin setting the wrong permissions on AWS problem?

wizardbeard ,

Absolutely the latter. This is similar to how Snowden had access to all the stuff he leaked. He worked at a place that did contract work with the government and was mortified by how much he had access to that he never should have been able to see.

There’s a shit ton of articles in the tech space about how companies keep fucking up with stuff like this. There’s no reasonable expectation that the government and their contractors would do any better.

SturgiesYrFase ,
@SturgiesYrFase@lemmy.ml avatar

The real problem is Amazon hosting sensitive government files…

RonSijm ,
@RonSijm@programming.dev avatar

It’s pretty common for AWS to do that; they even have a special GovCloud for them.

These companies are obviously just doing it wrong by having public S3 buckets
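Concretely, this class of mistake is exactly what S3’s “Block Public Access” switches exist to prevent. A minimal sketch of the `PublicAccessBlockConfiguration` that turns all four on (it can be applied per bucket or account-wide, e.g. with `aws s3api put-public-access-block`):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

Roughly speaking, with `RestrictPublicBuckets` set, even a bucket that somehow acquires a public policy only answers to principals in the owning account.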

SturgiesYrFase ,
@SturgiesYrFase@lemmy.ml avatar

I mean, Amazon isn’t necessarily in the wrong for providing the service. It’s governments trusting a private company, one with a history of collecting more data than it should, with sensitive data. It’s just stupid. Really, really, mind-numbingly stupid.

RonSijm ,
@RonSijm@programming.dev avatar

Yea, that’s why I mentioned these companies are just doing it wrong. Governments have the same problems as private companies, in that they don’t really want to maintain their own cloud infrastructure, so they’ll use something like AWS

But, for example, they could host their own on-premises HSM and encrypt their GovCloud data to a degree that it’s inaccessible to AWS.
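What RonSijm describes, keys living in an on-prem HSM so the cloud provider only ever holds ciphertext, can be sketched as a toy in a few lines. Everything here is illustrative: the SHA-256 XOR keystream stands in for a real cipher like AES-GCM, and the “HSM” is just a local variable; a real deployment would use an actual hardware module (and something like KMS external key stores).

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256-in-counter-mode keystream; a real system would use AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    """XOR plaintext/ciphertext with the keystream (same operation both ways)."""
    return bytes(a ^ b for a, b in zip(data, ks))

# The data key never leaves the on-prem "HSM" (here: just this process).
onprem_data_key = secrets.token_bytes(32)

document = b"SUBJECT: definitely not for a public S3 bucket"
ciphertext = xor(document, keystream(onprem_data_key, len(document)))

# Only `ciphertext` is uploaded to the cloud; without the on-prem key,
# the provider (or a passerby browsing the bucket) sees noise.
assert ciphertext != document
assert xor(ciphertext, keystream(onprem_data_key, len(ciphertext))) == document
```

The design point is that decryption requires a round trip to hardware the government controls, so a misconfigured bucket leaks only ciphertext.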

mikyopii , in Classic Amazon
@mikyopii@programming.dev avatar

Those aren’t real classified documents. They aren’t marked correctly.

Starbuck ,

For anyone wondering what a document should look like, the DoD publishes that for anyone to read. Just search Derivative Classifier Training. Spoiler alert: this ain’t what a top secret document looks like.

db2 , in Classic Amazon

SECRETARY OF DEFENSE
1000 DEFENSE PENTAGON
WASHINGTON , DC 20301 - 1000
JANUARY 2021
CLASSIFIED: TOP SECRET - NOT FOR PUBLIC RELEASE
SUBJECT: RUSSIAN HACKINGS OF FEDERAL GOVERNMENT ASSETS
Throughout 2020, the United States received intelligence that Russian hackers have
infiltrated secure government databases and servers, including those located in The Pentagon, the
Intelligence Community, the US Treasury, the Department of Homeland Security, the Commerce
Department, and Health and Human Services. Within the servers affected, 18,000 US
organizations had malicious code in their networks; 50 of them suffered major breaches. As of
the 13th of December, when this knowledge was made known to US officials, the Cybersecurity
and Infrastructure Security Agency (CISA) has been working tirelessly to secure networks and
alleviate any vulnerabilities in the systems that were affected. Russia has denied responsibility
for such hackings.
This hacking poses a major threat to US cybersecurity, as it is one of the most significant
hackings in modern history. The Department of Defense, Homeland Security, and CISA have
urged Congress to take action against this emerging threat. In response, Congress has introduced
the following piece of legislation, named after an essential cybersecurity tool: A Bill to
C.A.P.T.C.H.A. (Create a Procedure to Combat Hacker Attacks). It is your responsibility as
Congress to come to a decision on this legislation before more damage is done.

astraeus ,
@astraeus@programming.dev avatar

Sounds like BS to me. Anyone can host PDFs on AWS and spoof US government agencies, look up C.A.P.T.C.H.A. Congress. No hits for it. Did Russia hack into US government servers? Probably. Nonetheless, this reads like a scare piece and not a legitimate communication from the DoD.

CanadaPlus ,

It also names no names and gives no details, which is odd for something intended to be so internal. Even more damning, it’s addressed to congress, which famously leaks like a sieve.

db2 ,

That’s why I felt OK copying the whole first page. 🤣

RadicalCandour ,

It’s interesting scrolling through the search results. Seems like a lot of schools, municipalities, and the Philippines have a problem with distinguishing between confidential and public.

xmunk ,

You must be one of those hackers I keep hearing about.

SomeBoyo ,

Might even be this 4chan guy I heard a lot about.

NaibofTabr , in Classic Amazon

“The Net interprets censorship as damage and routes around it.” - John Gilmore

Nothing connected to the internet can be kept hidden indefinitely.

LostXOR ,

It can if you set up proper security, but, well, the US government isn’t exactly known for that.

redcalcium , in Classic Amazon

top public secret

lettruthout , in Classic Amazon

Well whadaya know… that works.

Pandantic ,
@Pandantic@midwest.social avatar

I only got two though!

BurningnnTree , in Junior Dev VS Senior Dev

I don’t get it but it’s still funny

frezik , in We'll refactor this next year anyways

I feel this personally today. A module I work in started out with nice, short functions with good names. I looked back at it today, and it now has a 180-line mega function full of nested conditionals, and I don’t know how that happened.

Gobbel2000 , in We'll refactor this next year anyways
@Gobbel2000@programming.dev avatar

Huh? Hexagonal Architecture?

lobut ,

Onion architecture. Ports and adapters are other names for it, I think.

magic_lobster_party ,

It’s an idea by the same guy who wrote the Clean Code book.

sheepishly , in "prompt engineering"
@sheepishly@kbin.social avatar

New rare Pepe just dropped

cordlesslamp ,

is it NFT and where could I purchase it?

TheOSINTguy ,

Ctrl+c

TomAwsm ,

Nah, do ctrl+x so you’ll have the only one.

Boomkop3 , in We'll refactor this next year anyways

Can’t relate

astraeus , in Junior Dev VS Senior Dev
@astraeus@programming.dev avatar

The senior dev’s left monitor is looking like those Instagram posts that increase your phone brightness by 100x.

Frozengyro , in "prompt engineering"
don ,

copied ur nft lol

Frozengyro ,

I’ll never financially recover from this!

fidodo ,

It’s not an nft, it has to be hexagonal to be an nft

nyandere ,

Giving me Jar Jar vibes.

Frozengyro ,

Yea, feels like a mash up of pepe, ninja turtle, and jar jar.

bingbong ,

Frog version of snoop dogg

lemmy_get_my_coat ,

“Snoop Frogg” was right there

rikudou ,

@DallE Create a mix between Pepe the Frog and Snoop Dogg.

DallE Bot ,

Here’s your image!

AI image generated with the prompt from the previous comment


The AI model has revised your prompt: Create an imaginative blending of an anthropomorphic green frog with an individual characterized by long, sleek braids often associated with a hip-hop lifestyle. The frog should exhibit human traits and appear jovial and mischievous. The individual should have a lean physique and wear sunglasses, a beanie hat, and casual attire typically seen in urban fashion.

Natanael ,

Funny how this one has less detail and less expression despite the more complex prompt.

DallE Bot ,

Here’s your image!

AI image generated with the prompt from the previous comment


The AI model has revised your prompt: Create an image of a green cartoon frog, wearing glasses and featuring typical hip-hop fashion elements such as a baseball cap, gold chains, and baggy clothes. The frog has a cool, laid-back demeanor, characteristic of a classic rap artist.

scrubbles , in "prompt engineering"
@scrubbles@poptalk.scrubbles.tech avatar

The fun thing with AI that companies are starting to realize is that there’s no way to “program” it, and I just love that. The only way to guide it is by retraining models (and LLMs will just always have stuff you don’t like in them), or by using more AI to ask “Was that response okay?”, which is imperfect.

And I am just loving the fallout.

joyjoy ,

using more AI to say “Was that response okay?”

This is what GPT-2 did. One day it bugged out and started outputting the lewdest responses you could ever imagine.

Mango ,

Yoooo, they mathematically implemented masochism! A computer program with a kink as purely defined as you can imagine!

Ohi ,

Thanks for sharing! Cute video that articulated the training process surprisingly well.

Xttweaponttx ,

Dude what a solid video! Stoked to watch more vids from that channel!

xmunk ,

Using another AI to detect if an AI is misbehaving just sounds like the halting problem but with more steps.

match ,
@match@pawb.social avatar

Generative adversarial networks are really effective actually!

Natanael ,

As long as you can correctly model the target behavior in a sufficiently complete way, and capture all necessary context in the inputs!

marcos ,

Lots of things in AI make no sense and really shouldn’t work… except that they do.

Deep learning is one of those.

bbuez ,

The fallout of image generation will be even more incredible imo. Even if models become more capable, post-'21 training data will be increasingly polluted and increasingly hard to distinguish from real data as model output improves, which inevitably leads to model collapse. At least until we have a standardized way of flagging generated images as opposed to real ones, but I don’t really like that future.

Just on a tangent, openai claiming video models will help “AGI” understand the world around it is laughable to me. 3blue1brown released a very informative video on how text transformers work, and in principle all “AI” amounts to at the moment is very clever statistics and lots of matrix multiplication. How our minds process and retain information is far more complicated; we don’t fully understand ourselves yet, and we are a grand leap away from ever emulating a true mind.

All that to say: I can’t wait for people to realize this is just Silicon Valley trying to replace talent in film production.

scrubbles ,
@scrubbles@poptalk.scrubbles.tech avatar

Yeah, I read one of the papers that talked about this. Essentially, putting AI-generated data into a training set will pollute it and cause the model to just fall apart. LLMs especially are going to be a ton of fun, as there were absolutely no rules about what to do with them, and bots and spammers immediately used them everywhere on the internet. And the only solution is to… write a model to detect it. Then they’ll make models that bypass that, and there will just be no way to keep the dataset clean.

The hype of AI is warranted, but also way overblown. Hype from actual developers seeing what it can do when it’s tasked with something appropriate? Blown away. Just honestly blown away. But hearing what businesses want to do with it, the crazy shit like “We’ll fire everyone and just let AI do it!”? Impossible, at least with the current generation of models. Those people remind me of the crypto bros saying it’s going to revolutionize everything. It might, but you need to actually understand the tech and its limitations first.

bbuez ,

Building my own training set is something I would certainly want to do eventually. I’ve been messing with Mistral Instruct using GPT4ALL, and it’s genuinely impressive how quickly my 2060 can hallucinate relatively accurate information, but its limitations are also evident. E.g., if I tell it I do not want to use AWS or another cloud hosting service, it will just return a list of suggested services not including AWS. Most certainly a limit of its training data, but still impressive.

Anyone suggesting LLMs to manage people or resources would be better off flipping a coin on every decision. More than likely, companies that insist on it will go belly up soon enough.

Excrubulent ,
@Excrubulent@slrpnk.net avatar

You’re describing an arms race, which makes me wonder if that’s part of the path to AGI. Ultimately the only way to truly detect a fake is to compare it to reality, and the only way to train a model to understand whether it is looking at reality or a generated image is to teach it to understand context and meaning, and that’s basically the ballgame at that point. That’s a qualitative shift, and in that scenario we get there with opposing groups each pursuing their own ends, not with a single group intentionally making AGI.

skeptomatic ,

AIs can be trained to detect AI-generated images, so then the race is only whether the generated images get better faster than the detectors can keep up.
More likely, as the technology evolves, AIs will, like a human, just train in near real time from video taken from their camera eyeballs.
…and then, of course, they will KILL ALL HUMANS.

Excrubulent ,
@Excrubulent@slrpnk.net avatar

It’s definitely a qualitative shift. I suspect most of the fundamental maths of neural network matrices won’t need to change, because they are enough to emulate the lower level functions of our brains. We have dedicated parts of our brain for image recognition, face recognition, language interpretation, and so on, very analogous to the way individual NNs do those same functions. We got this far with biomimicry, and it’s fascinating to me that biomimicry on the micro level is naturally turning into biomimicry on a larger scale. It seems reasonable to believe that process will continue.

Perhaps some subtle tuning of those matrices is needed to really replicate a mind, but I suspect the actual leap will require first of all a massive increase in raw computation, as well as some new insight into how to arrange all of those subsystems within a larger structure.

What I find interesting is the question of whether AI can actually fully replace a person in a job without crossing that threshold and becoming AGI, and I genuinely don’t think it can. Sure it’ll be able to automate some very limited tasks, but without the capacity to understand meaning it can’t ever do real problem solving. I think past that point it has to be considered a person with all of the ethical implications that has, and I think tech bros intentionally avoid acknowledging that, because that would scare investors.

MalReynolds ,
@MalReynolds@slrpnk.net avatar

I see this a lot, but do you really think the big players haven’t backed up the pre-22 datasets? Also, synthetic (LLM generated) data is routinely used in fine tuning to good effect, it’s likely that architectures exist that can happily do primary training on synthetic as well.

Kyatto ,
@Kyatto@leminal.space avatar

I’m sure it would be pretty simple to put a code in the pixels of the image; it could probably be done with an offset of the alpha channel, or relative offsets, something like that. I might be dumb, but fingerprinting the actual image should be relatively straightforward, and an algorithm could be used to detect it. Of course it could be damaged by bad encoding or by image manipulation that changes the entire image, but most people are just going to copy and paste, and any sort of error correction and duplication of the code would preserve most of the fingerprint.

I’m a dumb though, and I’m sure there is someone smarter than me who actually does this sort of thing who will read this and either get angry at the audacity or laugh at the incompetence.
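For what it’s worth, the redundancy-plus-majority-vote idea is sound. A toy sketch in Python, stamping a code into the least significant bit of each pixel value (a flat list of 8-bit values stands in for an image; real watermarking schemes are far more sophisticated and survive re-encoding much better):

```python
import random

def embed(pixels, code_bits):
    """Write the code into each pixel's least significant bit, repeating it to fill the image."""
    out = list(pixels)
    for i in range(len(out)):
        bit = code_bits[i % len(code_bits)]
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, code_len):
    """Majority-vote each code position across all of its repetitions."""
    votes = [0] * code_len
    counts = [0] * code_len
    for i, p in enumerate(pixels):
        votes[i % code_len] += p & 1
        counts[i % code_len] += 1
    return [1 if votes[i] * 2 > counts[i] else 0 for i in range(code_len)]

code = [1, 0, 1, 1, 0, 0, 1, 0]                       # the fingerprint
image = [random.randrange(256) for _ in range(4096)]  # stand-in for pixel data

stamped = embed(image, code)

# Simulate mild corruption (re-encoding, edits): flip 10% of LSBs at random.
damaged = list(stamped)
for i in random.sample(range(len(damaged)), len(damaged) // 10):
    damaged[i] ^= 1

assert extract(damaged, len(code)) == code  # majority vote still recovers it
```

With 4096 pixels, each of the 8 code bits is repeated 512 times, so a 10% bit-flip rate is nowhere near enough to swing any majority.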

zalgotext ,

The best part is they don’t understand the cost of that retraining. The non-engineer marketing types in my field suggest AI as a potential solution to any technical problem they possibly can. One of the product owners, who’s more technically inclined, finally had enough during a recent meeting and straight up told those guys: “AI is the least efficient way to solve any technical problem, and should only be considered if everything else has failed.” I wanted to shake his hand right then and there.

scrubbles ,
@scrubbles@poptalk.scrubbles.tech avatar

That is an amazing person you have there, they are owed some beers for sure

NoFun4You ,

Laughs in AI solved problems lol

halloween_spookster , in "prompt engineering"

I once asked ChatGPT to generate some random numerical passwords as I was curious about its capabilities to generate random data. It told me that it couldn’t. I asked why it couldn’t (I knew why it was resisting but I wanted to see its response) and it promptly gave me a bunch of random numerical passwords.

NucleusAdumbens ,

Wait can someone explain why it didn’t want to generate random numbers?

ForgotAboutDre ,

It won’t generate truly random numbers. It’ll generate numbers that appear in its training data.

If it’s asked to generate passwords I wouldn’t be surprised if it generated lists of leaked passwords available online.

These models are created from masses of data scraped from the internet. Most of which is unreviewed and unverified. They really don’t want to review and verify it because it’s expensive and much of their data is illegal.
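Which is the practical takeaway: an LLM samples from patterns in its data, so nothing it emits should be treated as random, let alone as a password. If you actually want random numerical passwords, a CSPRNG does it in a couple of lines (a minimal sketch; the length and digits-only alphabet are arbitrary choices):

```python
import secrets
import string

def random_numeric_password(length: int = 12) -> str:
    """Each digit drawn independently from a cryptographically secure RNG."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

print(random_numeric_password())  # e.g. "493017582664"
```

Unlike an LLM's output, `secrets` draws from the operating system's CSPRNG, so no training corpus (or leaked password list) can bias the result.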

dukk ,

Also, researchers asking ChatGPT for long lists of random numbers were able to extract its training data from the output (which OpenAI promptly blocked).

Or maybe that’s what you meant?

Dkarma ,

It’s not illegal. They don’t want to review it because “it” is the entire fucking internet… do you know what that would cost?

Once again, for the morons: it is not illegal to have an AI scan all content on the internet. If it were, Google wouldn’t exist.

Stop making shit up just cuz you want it to be true.

Natanael ,

The crawling isn’t illegal, what you do with the data might be

Natanael ,

Its training and fine-tuning include a lot of specific instructions about what it can and can’t do, and if something sounds like something it shouldn’t try, then it will refuse. Spitting out unbiased random numbers is something it’s inherently bad at by virtue of being a neural network. Not sure if OpenAI specifically has included an instruction about it being bad at randomness, though.

While the model is fed randomness when you prompt it, it doesn’t have raw access to those random numbers and can’t feed them forward. Instead, it’s likely to interpret them in a way that gives you numbers it has seen less often.
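That reshaping of the output distribution is roughly what temperature sampling looks like. A toy sketch with a made-up four-token vocabulary (“7” gets the biggest logit because humans famously over-pick it as a “random” number):

```python
import math
import random

def sample(logits: dict, temperature: float) -> str:
    """Softmax over logits at the given temperature, then one weighted draw."""
    if temperature <= 0:  # the limit T -> 0 is just argmax (greedy decoding)
        return max(logits, key=logits.get)
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

# Made-up logits for a "pick a random digit" prompt.
logits = {"7": 3.0, "3": 2.0, "4": 1.0, "0": 0.5}

print(sample(logits, temperature=0.8))  # usually "7"
print(sample(logits, temperature=0.0))  # always "7"
```

The randomness only picks a position under the model's learned distribution; it never reaches the output directly, which is why the results skew toward whatever the training data over-represents.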
