
programmerhumor


avidamoeba , in question, When were programmers supposed to be obsolete?
@avidamoeba@lemmy.ca avatar

So far one of the best use cases for AI in software engineering has been identifying idiots and sociopaths.

MachineFab812 ,

Joke’s on AI. It’s harder to stop us from outing ourselves.

RegalPotoo , in question, When were programmers supposed to be obsolete?
@RegalPotoo@lemmy.world avatar

If only we lived in a world so simple as to allow the whims of managers, customers and third parties to be completely definable in UML

Phen , in question, When were programmers supposed to be obsolete?

If a tool were created that properly converted a UML diagram into a project without any need for code, all the programmers who lost their jobs to this tool would then be hired by the company that offered it, to maintain and support everything the customers want in their programs.

It would remove programmers from the payroll of some companies, but they would still be working for them, just further down the chain.

The same is true for AI. If AI could completely replace programmers in some area, it would need a lot of programmers itself to keep dealing with all the edge cases that would show up from being used everywhere that a programmer was needed before.

SzethFriendOfNimi ,
@SzethFriendOfNimi@lemmy.world avatar

Besides. Somebody has to convert customer needs into the diagram. Account for what they’re not saying, etc.

That’s the real essential skill in software dev, not spitting out lines of code.

MrPoopyButthole ,
@MrPoopyButthole@lemmy.world avatar

Yup. Business logic for things that cost millions or billions should not be run by an approximation machine.

corsicanguppy ,

Thanks. I’m remembering the relevant scene from Office Space. ;-)

SzethFriendOfNimi ,
@SzethFriendOfNimi@lemmy.world avatar

I HAVE PEOPLE SKILLS!!!

leisesprecher ,

To be fair, a lot of roles simply disappeared over the years.

Developers today are much more productive than 30 years ago, mostly because someone automated the boring parts away.

A modern developer can spin up a simple CRUD app, including infrastructure, in a day or so. That’s much, much more productive than in 1995. We just cram a lot more of the world into software, so we need 20x the number of developers we needed back then.

nasi_goreng , in spreading misinformation online (javascript??)
@nasi_goreng@lemmy.zip avatar

I wonder if people know that JavaScript is indirectly named after an ethnic group in Indonesia.

Javanese ethnic group -> Java Island -> Javanese coffee -> Java programming language -> JavaScript

driving_crooner , (edited )
@driving_crooner@lemmy.eco.br avatar

It’s based on Java island coffee, James Gosling’s preferred variety. This is why the Java logo is a cup of coffee.

Edit: I made that up, but looks like it’s true?

tetris11 ,
@tetris11@lemmy.ml avatar

I’m surprised there aren’t more languages named after highly addictive stimulants

SexyVetra ,

I heard Ritchie called it C after creating it during one of Bell Labs’ famous coke parties.

/s

tetris11 ,
@tetris11@lemmy.ml avatar

and Julia? Well we all know what Julia is slang for

Ironfacebuster ,

They tried making SpeedScript but big programming decided it was too fast for its time and shut it down

Midnitte ,

…C ???

CanadaPlus ,

Java was named after the Javanese, and not the other way around?

breadsmasher , in The C++ learning process
@breadsmasher@lemmy.world avatar

“Give it six months”

It only needed 3!

kirk781 ,
@kirk781@discuss.tchncs.de avatar
Ephera ,

Particularly unexpected, because 3! = 6.

boonhet ,

That’s because C++ is such a high performance language, it gets things done faster

dependencyinjection ,
breadsmasher ,
@breadsmasher@lemmy.world avatar

7m - 4m = 3m

dependencyinjection ,

My bad. I was going off OP’s first comment at 8 months and not the reply at 7 months.

I came back to be like actually no, but then I was like oh I’m the dumb dumb.

python , in question, When were programmers supposed to be obsolete?

Dude I WISH an AI would do all the dumb AWS crap for me so that I could just hang out and build React frontends all day

lord_ryvan ,

I wish it could build front ends for me so I could focus on the database, backend, and devops

Bishma , (edited )
@Bishma@discuss.tchncs.de avatar

The thing that made me laugh when I saw the article that OP mentions is that it was coming from AWS.

In my testing AWS’s Titan AI is the least useful for figuring out how to do things in AWS. It’s so terrible that Amazon just announced they’re using Claude for Alexa’s upcoming “AI” features.

litchralee , in question, When were programmers supposed to be obsolete?

I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden the question to include collectively the set of software engineers, programmers, and (from a mainframe era) operators – but will still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed as part of a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.

The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters to convert hand-written memos into typed documents. Helped by the availability of word processors – no, not the software but a standalone appliance – and then the personal computer, the expectation moved to where knowledge workers have to type their own documents.

If we look to some of the earliest analog computers, built to compute differential equations such as for weather and flow analysis, a small team of people would be needed to operate and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistics or data analyst expert, but they’re still working in R or Python, as opposed to a dedicated person or team that sets up the analysis program.

In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human-typing obsolete lol

leviticoh OP ,

@litchralee
Thank you!
I didn’t expect serious answers here, but this was a nice read.

So the various jobs around computers were kind of obsoleted, but the job description just shifted and the title remained valid most of the time.

Now I’m interested to see what we’ll do 20 years from now, rather than just being annoyed by the “don’t learn ${X}, it’s outdated” guys

AnarchistArtificer , in The C++ learning process

A friend of mine whose research group works on high-throughput X-ray crystallography had to learn C++ for his work, and he says that it was like “wrangling an unhappy horse”.

xthexder ,
@xthexder@l.sw0.com avatar

I’m not sure how I feel about someone controlling an X-ray machine with C++ when they haven’t used the language before… At least it’s not for use on humans.

humorlessrepost ,
xthexder ,
@xthexder@l.sw0.com avatar

Yep, I learned about this exact case when I got my engineering degree.

AnarchistArtificer ,

He doesn’t directly control anything with C++ — it’s just the data processing. The gist of X-ray crystallography is that we can shoot some X-rays at a crystallised protein, which will scatter them due to diffraction; then we can take the diffraction pattern formed and do some mathemagic to figure out the electron density of the crystallised protein and, from there, work out the protein’s structure: https://slrpnk.net/pictrs/image/51ac8404-4e22-486b-b220-efe33f75d69d.png

C++ helps with the mathemagic part of that, especially because by “high throughput”, I mean that the research facility has a particle accelerator that’s over 1km long, which cost multiple billions because it can shoot super bright X-rays at a rate of up to 27,000 per second. It’s the kind of place that’s used by many research groups, and you have to apply for “beam time”. The sample is piped in front of the beam and the result is thousands of diffraction patterns that need to be matched to particular crystals. That’s where the challenge comes in.

I am probably explaining this badly because it’s pretty cutting edge stuff that’s adjacent to what I know, but I know some of the software used is called CrystFEL. My understanding is that learning C++ was necessary for extending or modifying existing software tools, and for troubleshooting anomalous results.

xthexder ,
@xthexder@l.sw0.com avatar

Neat, thanks for sharing. Reminds me of old mainframe computers where students and researchers had to apply for processing time. Large data analysis definitely makes sense for C++, and it’s pretty low risk. Presumably you’d be able to go back and reprocess stuff if something went wrong? Or is it more of a live feed that’s not practical to store?

AnarchistArtificer , (edited )

The data are stored, so it’s not a live-feed problem. It is an inordinate amount of data that’s stored though. I don’t actually understand this well enough to explain it well, so I’m going to quote from a book [1]. Apologies for the wall of text.

“Serial femtosecond crystallography [(SFX)] experiments produce mountains of data that require [Free Electron Laser (FEL)] facilities to provide many petabytes of storage space and large compute clusters for timely processing of user data. The route to reach the summit of the data mountain requires peak finding, indexing, integration, refinement, and phasing.” […]

“The main reason for [steep increase in data volumes] is simple statistics. Systematic rotation of a single crystal allows all the Bragg peaks, required for structure determination, to be swept through and recorded. Serial collection is a rather inefficient way of measuring all these Bragg peak intensities because each snapshot is from a randomly oriented crystal, and there are no systematic relationships between successive crystal orientations. […]

Consider a game of picking a card from a deck of all 52 cards until all the cards in the deck have been seen. The rotation method could be considered as analogous to picking a card from the top of the deck, looking at it and then throwing it away before picking the next, i.e., sampling without replacement. In this analogy, the faces of the cards represent crystal orientations or Bragg reflections. Only 52 turns are required to see all the cards in this case. Serial collection is akin to randomly picking a card and then putting the card back in the deck before choosing the next card, i.e., sampling with replacement (Fig. 7.1 bottom). How many cards are needed to be drawn before all 52 have been seen? Intuitively, we can see that there is no guarantee that all cards will ever be observed. However, statistically speaking, the expected number of turns to complete the task, c, is given by c = n·(1 + 1/2 + ⋯ + 1/n) (https://slrpnk.net/pictrs/image/0fdbd723-cb7f-49bf-834f-e1fb634ee0bb.jpeg), where n is the total number of cards. For large n, c converges to n*log(n). That is, for n = 52, it can reasonably be expected that all 52 cards will be observed only after about 236 turns! The problem is further exacerbated because a fraction of the images obtained in an SFX experiment will be blank because the X-ray pulse did not hit a crystal. This fraction varies depending on the sample preparation and delivery methods (see Chaps. 3–5), but is often higher than 60%. The random orientation of crystals and the random picking of this orientation on every measurement represent the primary reasons why SFX data volumes are inherently larger than rotation series data.

The second reason why SFX data volumes are so high is the high variability of many experimental parameters. [There is some randomness in the X-ray pulses themselves]. There may also be a wide variability in the crystals: their size, shape, crystalline order, and even their crystal structure. In effect, each frame in an SFX experiment is from a completely separate experiment to the others.”

From “The Realities of Experimental Data”: “The aim of hit finding in SFX is to determine whether the snapshot contains Bragg spots or not. All the later processing stages are based on Bragg spots, and so frames which do not contain any of them are useless, at least as far as crystallographic data processing is concerned. Conceptually, hit finding seems trivial. However, in practice it can be challenging.”

“In an ideal case shown in Fig. 7.5a, the peaks are intense and there is no background noise. In this case, even a simple thresholding algorithm can locate the peaks. Unfortunately, real life is not so simple”

https://slrpnk.net/pictrs/image/83b1ed30-3120-4ded-a887-e7bc6c69cab2.webp

It’s very cool, I wish I knew more about this. A figure I found for the approximate data rate is 5 GB/s per instrument. I think that’s for the European XFEL.

Citation: [1]: Yoon, C.H., White, T.A. (2018). Climbing the Data Mountain: Processing of SFX Data. In: Boutet, S., Fromme, P., Hunter, M. (eds) X-ray Free Electron Lasers. Springer, Cham. doi.org/10.1007/978-3-030-00551-1_7
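The card-deck analogy in the quoted passage is the classic coupon-collector problem, and the “about 236 turns” figure is easy to check numerically. A quick stdlib-only sketch (the trial count and seed are arbitrary):

```python
from fractions import Fraction
import random

def expected_draws(n):
    # Expected number of draws, sampling with replacement, needed to
    # see all n distinct cards: c = n * (1 + 1/2 + ... + 1/n).
    return float(n * sum(Fraction(1, k) for k in range(1, n + 1)))

def simulated_draws(n, trials=2000, seed=0):
    # Monte Carlo estimate of the same quantity.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        while len(seen) < n:
            seen.add(rng.randrange(n))
            total += 1
    return total / trials

print(round(expected_draws(52)))  # 236, matching the quote
print(simulated_draws(52))        # close to 236
```

For n = 52 the exact expectation is ≈ 235.98 draws, versus 52 for sampling without replacement, which is the whole point of the quote.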

xthexder , (edited )
@xthexder@l.sw0.com avatar

That’s definitely a non-trivial amount of data. Storage fast enough to read/write that isn’t cheap either, so it makes perfect sense you’d want to process it and narrow it down to a smaller subset of data ASAP. The physics of it is way over my head, but I at least understand the challenge of dealing with that much data.

Thanks for the read!

TonyTonyChopper ,
@TonyTonyChopper@mander.xyz avatar

Probably makes 7 figures working for big pharma though

AnarchistArtificer ,

Unfortunately no. I don’t know any research scientists who even make 6 figures. You’re lucky to even break 50k if you’re in academia. Working in industry gets you better pay, but not by too much. This is true even in big pharma, at least on the biochemical/biomedical research front. Perhaps non-research roles are where the big bucks are.

MonkderVierte ,
Aceticon , in The C++ learning process

Reminds me of the joke about the guy falling from the top of the Empire State Building who, half way down, was heard saying: “Well, so far, so good”

foo ,

Reminds me of the start of La Haine.
youtu.be/4rD05HsmtIU

Aceticon ,

I suspect both variants indirectly come from the same source, or maybe La Haine is itself indirectly the source for my variant (though I learned this joke a long time ago, possibly before 1995).

By the way, that’s an excellent film intro.

itsmegeorge , in The C++ learning process

In my country C++ is taught as a base language along with Scratch (not a language, but you know what I mean). I recently started learning Kotlin with Jetpack Compose (the only sane way to learn Kotlin) and I realized I wasted two years of my life learning C++, with 5 more to come as it is mandatory in ICT classes… :((

huzzahunimpressively ,

Which programming language would you like to teach if you were a teacher? P.S. I also learned C++ as my first language

boonhet ,

I’m not a teacher, and I don’t want to become one tbh.

That said, something like Python is standard, and for good reason IMO. For OOP they usually teach Java here, though I’m not a huge fan. I think Kotlin would be better to teach nowadays. There are other OO languages of course, but I’m of the opinion that after messing around with Python, students should probably use something strongly typed, so that’s JavaScript out - I suppose TypeScript could be used, but IMO it’d be best to keep JS/TS in a web dev specific course.

kirk781 ,
@kirk781@discuss.tchncs.de avatar

C++ was my second programming language after BASIC, if that still qualifies as a programming language these days.

AI_toothbrush ,

Idk about other people, but learning C is just so logical. You do stupid shit, you get stupid results. Of course there are a lot of bad things about C, but at least when you sit down to understand how it works, it works, while most OOP languages are so detached from the hardware that it’s hard to understand anything. It might be just me, but OOP breaks my brain. Also, I’ve never coded in C++; I automatically avoided it. I heard Rust has very minimal OOP, and it’s just there to make things smoother, so I may try that.

AI_toothbrush ,

In Hungary it’s Python and C++ in the curriculum, but on the tests you can usually choose between a few languages.

n3cr0 , in The C++ learning process

Jokes aside, I struggle more with abominations like JavaScript and even Python.

mogoh ,

Do you have a minute for our lord and savior TypeScript?

n3cr0 ,

As long as it can distinguish between int and uint - yesss!

Zangoose ,

TypeScript is still built on JavaScript, all numbers are IEEE-754 doubles 🙃

Edit: Actually I lied, there are BigInts which are arbitrarily precise integers but I don’t think there’s a way to make them unsigned. There also might be a byte-array object that stores uint8 values but I’m not completely sure if I’m remembering that correctly.
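For what it’s worth, the 2^53 limit on doubles is easy to poke at from any language whose numbers are IEEE-754 binary64; Python floats are too, so a quick sketch:

```python
# 2**53 is the largest integer at which doubles still have unit spacing;
# above it, odd integers simply don't exist as floats.
big = 2.0 ** 53
print(big + 1 == big)  # True: 2**53 + 1 rounds back down to 2**53
print(big + 2 == big)  # False: 2**53 + 2 is representable

# Arbitrary-precision integers (Python ints, like JS BigInt) stay exact:
print(2 ** 53 + 1 == 2 ** 53)  # False
```

This is the same limit JavaScript exposes as `Number.MAX_SAFE_INTEGER`.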

flying_sheep ,
@flying_sheep@lemmy.ml avatar

Not only is there a Uint8Array, there’s also a bunch of others: developer.mozilla.org/en-US/docs/…/TypedArray#typ…

unionagainstdhmo OP ,
@unionagainstdhmo@aussie.zone avatar

Yeah, JavaScript is a bit weird, semicolons being optional and compulsory at the same time: I remember trying to build an Electron example ~5yrs ago and it didn’t work unless I put in the semicolons that the developers omitted.

Python is just glorified shell scripting. Libraries like numpy are cool but I don’t like the indentation crap, I’m getting used to it because University likes it.

flying_sheep ,
@flying_sheep@lemmy.ml avatar

Python is just glorified shell scripting

Absolutely not, python is an actual programming language with sane error handling and arbitrarily nestable data structures.

I don’t like the indentation crap

Don’t be so superficial. When learning something, go with the flow and try to work with the design choices, not against them.

Python simply writes a bit differently: you do e.g. more function definitions and list comprehensions.

unionagainstdhmo OP ,
@unionagainstdhmo@aussie.zone avatar

Yeah I meant for that to be a bit inflammatory. I actually don’t mind python apart from the execution speed, but the indentation I find makes it more difficult to read stuff that is extremely nested. I use it mostly for creating plots and basic stuff for my science degree but for any serious project I wouldn’t consider it

UndercoverUlrikHD ,

but I don’t like the indentation crap

Do you not use indentation in other languages?

unionagainstdhmo OP ,
@unionagainstdhmo@aussie.zone avatar

Yes but it’s difficult in a long program to tell which scope you are in or where one ends. I don’t know what is so unfriendly about { and }, my editor can highlight pairs of them, it’s just nicer to work with.

void_star ,

Python has its quirks, but it’s much much cleaner than js or c++, not fair to drag it down with them imo

tunetardis ,

I think the thing with C++ is they have tried to maintain backward compatibility from Day 1. You can take a C++ program from the 80s (or heck, even a straight up C program), and there’s a good chance it will compile as-is, which is rather astonishing considering modern C++ feels like a different language.

But I think this is what leads to a lot of the complexity as it stands? By contrast, I started Python in the Python 2 era, and when they switched to 3, I was like “Wow, did they just break hello world?” It’s a different philosophy and has its trade-offs. By reinventing itself, it can get rid of the legacy cruft that never worked well or required hacky workarounds, but old code will not simply run under the new interpreter. You have to hope your migration tools are up to the task.

0x0 ,

even a straight up C program

C++ is not a superset of C.

tunetardis ,

There were breaking changes between C and C++ (and some divergent evolution since the initial split) as well as breaking changes between different releases of C++ itself. I am not saying these never happened, but the powers that be controlling the standard have worked hard to minimize these for better or worse.

If I took one of my earliest ANSI C programs from the 80s and ran it through a C++23 compiler, I would probably need to remove a bunch of register statements and maybe check if an assumption of 16-bit int is going to land me in some trouble, but otherwise, I think it would build as long as it’s not linking in any 3rd party libraries.

jaybone ,

The terse indexing and index manipulation gets a bit Perl-ish and write-only to me. But other than that I agree.

_____ , in Average GitHub PR

When the CI takes longer than 10 minutes

ByteOnBikes ,

Envious. Ours averages 15-30 mins

residentmarchant ,

Are you like building a mobile app or have 100k tests or is it just super slow?

ByteOnBikes ,

Still waiting on approval for more resources. It’s not a priority in the company.

I swear we have like 4 runners on a raspberry pi.

thebestaquaman ,

My test suite takes quite a bit of time, not because the code base is huge, but because it consists of a variety of mathematical models that should work under a range of conditions.

This makes it very quick to write a test that’s basically “check that every pair of models gives the same output for the same conditions” or “check that re-ordering the inputs in a certain way does not change the output”.

If you have 10 models, with three inputs that can be ordered 6 ways, you now suddenly have 60 tests that take maybe 2-3 sec each.

Scaling up: It becomes very easy to write automated testing for a lot of stuff, so even if each individual test is relatively quick, they suddenly take 10-15 min to run total.

The test suite now is ≈2000 unit/integration tests, and I have experienced uncovering an obscure bug because a single one of them failed.
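A sketch of how those cross-checks multiply, with two hypothetical stand-in “models” (the real ones would be the mathematical models described above):

```python
from itertools import combinations, permutations

# Hypothetical stand-ins: two models that should agree for any input order.
def model_a(x, y, z):
    return x + y + z

def model_b(x, y, z):
    return z + y + x

MODELS = [model_a, model_b]
INPUTS = (1.0, 2.0, 3.0)

# One check per model pair per input ordering; with 10 models this
# would already be C(10, 2) * 3! = 270 generated cases.
cases = [(m1, m2, order)
         for m1, m2 in combinations(MODELS, 2)
         for order in permutations(INPUTS)]

for m1, m2, order in cases:
    assert m1(*order) == m2(*order), (m1.__name__, m2.__name__, order)

print(len(cases))  # 1 pair x 6 orderings = 6
```

In a real suite each case would be a parametrized test, which is exactly how a handful of one-line properties balloons into thousands of slow-but-valuable tests.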

spacecadet ,

I used to have to use a CI pipeline at work that had over 40 jobs and 8 stages, just for checking some SQL syntax and formatting, plus a custom Python ETL library that used pandas and constantly hit OOM errors.

They didn’t write any unit tests because “we can just do that in the CI pipeline”, and if you didn’t constantly pull the breaking changes into your branch, the pipeline was guaranteed to fail; if you were lucky, you only had to restart 30% of your jobs.

It was the most awful thing and killed developer productivity to the point people were leaving the team because it sucks to spend 40% of your time waiting for CI scripts to fail while you are being yelled at to deliver faster.

vrek , in Poor Bobby tables might mess up the return

This actually gave me an idea. Over break I wanted to practice DB design and Entity Framework. Designing a database and interface for Santa to track kids’ naughty or nice status could be a fun/interesting way of doing it.

AdamEatsAss ,

But is naughty/nice a binary value?

vrek ,

I think you would have a table of “activities” with a value for how good/bad each is. So like cleaning your room would be +5 but crying in a store because mommy wouldn’t buy you a toy would be -15. Then you have a table for children; each child starts with 0 in January, and for each activity the child does, their naughty/nice value gets adjusted. On December 24, Santa runs a query on the DB and gets a list of every child with a positive value.

Keep in mind I currently feel sick and put about 5 minutes of thought into this.

vrek ,

Actually I think there should be an intermediary table as a history of each child’s activities. Like the child table is id, name, age, address, and naughty/nice value; activities would be id, description, and good/bad value. Then a history table of id, child_id, activity_id. So Santa can recalculate a child’s naughty/nice value to “check it twice”
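A minimal sqlite3 sketch of that three-table layout (column names are my guesses at the schema described above), including the re-derivation that lets Santa “check it twice”:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE children   (id INTEGER PRIMARY KEY, name TEXT, score INTEGER DEFAULT 0);
CREATE TABLE activities (id INTEGER PRIMARY KEY, description TEXT, value INTEGER);
CREATE TABLE history    (id INTEGER PRIMARY KEY,
                         child_id INTEGER REFERENCES children(id),
                         activity_id INTEGER REFERENCES activities(id));
""")
conn.execute("INSERT INTO children (id, name) VALUES (1, 'Bobby')")
conn.executemany("INSERT INTO activities (id, description, value) VALUES (?, ?, ?)",
                 [(1, "cleaned room", 5), (2, "tantrum in store", -15)])
# Bobby cleaned his room twice and had one tantrum.
conn.executemany("INSERT INTO history (child_id, activity_id) VALUES (?, ?)",
                 [(1, 1), (1, 1), (1, 2)])

# "Check it twice": recompute the naughty/nice value from history
# instead of trusting the cached score column.
(score,) = conn.execute("""
    SELECT COALESCE(SUM(a.value), 0)
    FROM history h JOIN activities a ON a.id = h.activity_id
    WHERE h.child_id = 1
""").fetchone()
print(score)  # 5 + 5 - 15 = -5
```

Keeping the raw history and treating the per-child score as derived is exactly what makes the recalculation trivial.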

sawdustprophet ,

This is starting to sound like The Good Place with extra steps…

drbluefall ,
@drbluefall@toast.ooo avatar

North Pole Incorporated

bringing all the fun of HR and spreadsheet drudgery to the little boys and girls of the world

vrek ,

I mean, in a certain light, Christmas presents are a yearly bonus for children, and Santa checking his list is a management review of the child’s performance…

AdamEatsAss ,

Redundancy is key. There should always be two methods for calculating a child’s ANNS (Aggregate Naughty Nice Score). I propose a conventional method where activities are graded from -5 to +5 where a child’s ANNS is the sum of all their previous activities, this would be a child’s historical ANNS. And I propose a second system where only the activities performed within the last 365 days of calculation are considered, to account for children who have drastically changed behavior year to year, this would be a child’s current year ANNS. I think the current year ANNS would hold more weight in Santa’s judgement but looking at the historical ANNS in conjunction with the current year could help provide a better picture of a child’s character.
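The two proposed ANNS variants could be sketched like this (the activity log is made up):

```python
from datetime import date, timedelta

# (date, value) pairs: a hypothetical activity log for one child.
log = [
    (date(2022, 6, 1), -15),   # old tantrum
    (date(2024, 3, 10), 5),    # cleaned room
    (date(2024, 11, 2), 3),    # shared toys
]

def historical_anns(log):
    # Sum of every activity the child has ever logged.
    return sum(v for _, v in log)

def current_year_anns(log, as_of):
    # Only activities within the 365 days before `as_of`.
    cutoff = as_of - timedelta(days=365)
    return sum(v for d, v in log if d > cutoff)

as_of = date(2024, 12, 24)
print(historical_anns(log))           # -7
print(current_year_anns(log, as_of))  # 8: the 2022 tantrum has aged out
```

The divergence between the two numbers is the “drastically changed behavior” case the comment describes.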

jubilationtcornpone ,

Just FYI, LINQPad is a really neat tool for messing around with EF Core. I use it all the time for testing ideas or doing quick tasks that I don’t want to spin up a new project for.

magikmw ,

Good way to get yourself on an FBI tracker too.

ericbomb ,

Ooh this is actually a good learning example.

Kids will have their wish list, which is another table we want to reference. Then of course: do we put the names of the toys in that table, or simply reference a “Toys” table?

Also need an address table as some kids get Santa gifts at more than one house…

vrek ,

I didn’t even consider incorporating toy distribution… At what levels should kids get a small gift (a toy or game) vs a large gift (bike, game system, etc.)?

In a real-world scenario I would probably split this between 2 databases: one for kids (“with a nice score of 2 you get a toy of value 4 or less”) and one for toys (“the toys available with a value less than 4 are…”)

ericbomb ,

Gonna need a whole auto-converter thing to make sure that requests for “ps”, “play station” and “new play station” all get converted to the same thing!

vrek ,

Yeah… Which is 100x more complicated cause Microsoft has no idea how to name consoles
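The alias problem could start out as a plain lookup over normalized text (all entries hypothetical); real wish lists would of course need fuzzier matching:

```python
# Hypothetical alias table mapping wish-list free text to canonical toys.
ALIASES = {
    "ps": "PlayStation 5",
    "play station": "PlayStation 5",
    "new play station": "PlayStation 5",
    "xbox series x": "Xbox Series X",
}

def canonicalize(wish):
    # Normalize case and collapse whitespace before the lookup.
    key = " ".join(wish.lower().split())
    return ALIASES.get(key, wish)  # fall back to the raw text

print(canonicalize("  Play   Station "))  # PlayStation 5
print(canonicalize("lego set"))           # lego set (unmatched, kept as-is)
```

A lookup table like this is also where the console-naming chaos lands: every marketing variant becomes just another alias row.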

tiefling , in question, When were programmers supposed to be obsolete?

Programmers become obsolete when they stop evolving with technology

Steamymoomilk , in My debugging experience today: Quantum Debugging

Fear keeps the bits in line
