Assembly was my first language after BASIC - I know I'm weird, and I'm okay with that :-). Tbf it was for a calculator, so simplified. Any language ofc can go off the deep end in terms of complexity, or, if you stick to the shallows, it can be fairly simple to write a hello world program (though it took me a month to successfully do that for my calculator, learning on my own with limited time spent on the task :-)).
Stop calling out my commit messages… I swear “fixed bug x (for real this time)(ok now it should be fixed) added feature y” is a valid commit message scheme.
Maybe; but Valve has plenty of patch notes that say “X fixed,” but it’s not actually fixed and then the next update says “X fixed. No really this time.” lol
Researchers: “Look at our child crawl! This is a big milestone. We can’t wait to see what he’ll do in the future.”
CEOs: Give that baby a job!
AI stuff was so cool to learn about in school, but it was also really clear how much further we had to go. I’m kind of worried. We already had one period of AI overhype lead to a crash in research funding for decades. I really hope this bubble doesn’t do the same thing.
AI as a field initially started getting big in the 1960s with machine translation and perceptrons (super-basic neural networks), which started promising but hit a wall basically immediately. Around 1974 the US military cut most of their funding to their AI projects because they weren’t working out, but by 1980 they started funding AI projects again because people had invented new AI approaches. Around 1984 people coined the term “AI winter” for the time when funding had dried up, which incidentally was right before funding dried up again in the 90s until around the 2010s.
The sheer waste of energy and mass production of garbage clogging up search results alone is enough to make me hope the bubble will pop reeeeal soon. Sucks for research but honestly the bad far outweighs the good right now, it has to die.
Yeah search is pretty useless now. I’m so over it. Trying to fix problems always has the top 15 results be like:
“You might ask yourself, how is Error-13 on a Maytag Washer? Well first, let’s start with What Is a Maytag Washer. You would be right to assume washing clothes has been a task for thousands of years. The first washing machine was invented…” (Yes I wrote that by hand, how’d I do? Lol)
It’s the same as how I really stopped caring whether crypto was gonna “revolutionize money” once it became a gold rush to hoard GPUs and subsequently any other component you could store a hash on.
R&D and open source for the advancement of humanity is cool.
Building enormous farms and burning out powerful components that could’ve been used for art and science, to instead prove-that-you-own-a-receipt-for-an-ugly-monkey-jpeg hoping it explodes in value, is appalling.
I’m sure there was an ethical application way back there somewhere, but it just becomes a pump-and-dump scheme and ruins things for a lot of good people.
I’m… honestly kinda okay with it crashing. It’d suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don’t really have the corporate ethics or morals for it, nor do we have the economic structure for it.
AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What’s the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I’m not mistaken, normally having some form of hyper-specialization would be advantageous for the scientist because it means they can demand more for their expertise (so long as it’s paired with a general understanding of other fields).
However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, or an AI designed to do the hyper-specialized task with 2~3 human generalists to design the input and interpret the output, which do you go with?
So long as the output is the same or similar, the no-brainer would be to go with the 2~3 generalists and AI; it would require less funding and possibly less equipment - and that’s ignoring that, from what I’ve seen, AI tends to be better than human scientists in hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you’re basically guaranteed to replace humans with AI.
We just don’t have the society for that. We should be moving in that direction, but we’re not even close to being there yet. So, again, as much potential as AI has, I’m kinda okay if it crashes. There aren’t enough people who possess a brain capable of handling an AI-dominated world yet. There are too many people who see things like money, government, economics, etc as some kind of magical force of nature and not as human-made systems which only exist because we let them.
Creating drafts for white papers my boss asks for every week about stupid shit on his mind. It used to take a couple of days; now it’s done in one day at most, and I spend my Friday doing chores and checking my email and chat every once in a while until I send him the completed version before logging out for the weekend.
Writing boring shit is LLM dream stuff. Especially tedious corpo shit. I have to write letters and such a lot, and it makes it so much easier having a machine that can summarise material and write it in dry corporate language in 10 seconds. I already have to proofread my own writing, and there’s almost always 1 or 2 other approvers, so checking it for errors is no extra effort.
I understand this perspective, because the text, image, audio, and video generators all default to the most generic solution. I challenge you to explore past the surface with the simple goal of examining something you enjoy from new angles. All of the interesting work in generative AI is being done at the edges of the models’ semantic spaces. Avoid getting stuck in workflows. Try new ones regularly and compare their efficacies. I’m constantly finding use cases that I end up putting to practical use - sometimes immediately, sometimes six months later when the need arises.
The more you use generative AI, the less amazing it is. Don’t get me wrong, I enjoy it, but it really can only impress you when it’s talking about a subject you know nothing of. The pictures are terrible, though way better than I could do. The coding is terrible, although it’s amazingly fast for similar quality to a junior developer. The prose seems amazing at first, but as you use it over and over you realize it’s quite bland and it’s continually sort of reverting to a default voice even if it can write really good short passages (specific to ChatGPT-like instruct models here, not seen that with other models).
I’ve been playing with generative AI for about 5 years, and it has certainly gotten much better in some ways, but it’s still just a neat toy in search of a problem it can solve. There’s a lot of money going into it in the hope it will improve to the point where it can solve some of the things we really want it to, but I’m not sure it ever reliably will. Maybe some other AI technology, but not LLM.
It saves me 10-20 hours of work every week as a corpo video producer, and I use that time to experiment with AI - which has allowed our small team to produce work that would be completely outside our resources otherwise. Without a single additional breakthrough, we’d be finding novel ways to be productive with the current form of generative AI for decades. I understand the desire to temper expectations, and I agree that companies and providers are not handling this well at all. But the tech is already solid. It’s just being misused more often than it’s being wielded well.
I don’t have the experience to refute that. But I see the same things from developers all the time swearing AI saves them hours, but that’s a domain I know well and AI does certain very limited things quite well. It can spit out boilerplate stuff pretty quick and often with few enough errors that I can fix them faster than I could’ve written everything by hand. But it very much relies on me knowing what I’m doing and immediately recognizing the garbage for what it is.
It does make me a little bit faster at the stuff I’m already good at, at the cost of leading me down some wild rabbit holes on things I don’t know so well. It’s not nothing, but it’s not what I would call professional-grade.
Yes. It’s not wrong 100% of the time, otherwise you could make a fortune by asking it for investment advice and then doing the opposite.
What happened is like the current robot craze: they made the technology resemble humans, which drives attention and money. Specialized “robots” can indeed perform tedious tasks (CNC, pick-and-place machines) or work safely with heavier objects (construction equipment). Similarly, we can use AI to identify data forgery or fold proteins. If we try to make either human-like, they will appear to do a wide variety of tasks (which drives sales & investment) but not be great at any of them. You wouldn’t buy a humanoid robot just to reuse your existing shovel if excavators are cheaper. (And no, I don’t think a humanoid robot with digging capabilities will ever be cheaper than a standard excavator.)
It’s actually really frustrating that LLMs have gotten all the funding when we’re finally at the point where we can build reasonably priced purpose-built AI, and instead the CEOs want to push trashbag LLMs on everything.
Well, a conversational AI with sub-human abilities still has some uses. Notably scamming people en masse so human email scammers will be put out of their jobs /s
New Outlook is less functional but has much better UI design (it’s just Outlook Web Access, after all). Outlook hasn’t changed in forever because so many corporate higher-ups use it and think they know how it works. They always respond to emails that are already answered because they didn’t see the newer reply in their inbox. I suspect this resistance is why it’s a totally separate program from the old Outlook. Yes, there are settings to group threads in Outlook, but the interface is still pretty unintuitive, and the vast majority of these users don’t change their default settings anyway. In my experience the terrible defaults create more problems than Outlook solves. And the server syncing can be really slow at times. Personally, I’m very happy that MS is finally showing some interest in modernizing Outlook; the more people who use it, the easier my job will get.
Also ya the name is stupid. Teams (New) gets me the most. Idk who possibly thought this naming scheme was a good idea.
I work in a technical field, and the amount of bad work I see is way higher than you’d think. There are companies without anyone competent to do what they claim to do. Astonishingly, they make money at it and frequently don’t get caught. Sometimes they have to hire someone like me to fix their bad work when they do cause themselves actual problems, but that’s much less expensive than hiring qualified people in the first place. That’s probably where we’re headed with AIs, and honestly it won’t be much different from how things are now, except for the horrible dystopian nature of replacing people with machines. As time goes on, they’ll get fed the corrections competent people make to their output, and the number of competent people necessary will shrink and shrink, till the work product is good enough that nobody cares to get it corrected. Then there won’t be anyone getting paid to do the job, and because of AI’s black-box nature we will completely lose the knowledge to perform the job in the first place.
Hot take: C is better than C++. It really just has one unique footgun, pointers, which can be avoided most of the time. C++ has lots of (smart) pointer-related footguns, each with their own rules.
Yeah. My journey of love, loathing, hatred, adoration, and mild appreciation for C++ ended with the realization that 90% of the time I can get the job done in C with little hassle, and a consistent, predictable, trustworthy set of unholy abominations.
Preach brother, I don’t think that’s a hot take at all. I’ve become almost twice as productive since moving from c++ to c. I think I made the change when I was looking into virtual destructors and I was thinking, “at what point am I solving a problem the language is creating?” Another good example of this is move semantics. It’s only a solution to a problem the language invented.
My hot take: The general fear of pointers needs to die.
I’m not a fan of C++, but move semantics seem very clearly like a solution to a problem that C invented.
Though to be honest I could live with manual memory management. What I really don’t understand is how anyone can bear to use C after rewriting the same monomorphic collection type for the 20th time.
Maybe I’m wrong, but aren’t move semantics mostly to aid with smart pointers and move constructors an optimization to avoid copy constructors? Neither of which exist in c.
I’m not sure what collection type you’re referring to, but most c programmers would probably agree that polymorphism isn’t a good thing.
That’s what std::move does, and you’re right that it’s quite an ugly hack to deal with C++ legacy mistakes that C doesn’t have.
I say move semantics to refer to the broader concept, which exists to make manual memory management safer and easier to get right. It’s also a core feature of Rust.
Also I’m talking about parametric polymorphism, not subtype polymorphism. So I mean things like lists, queues and maps which can be specialised for the element type. That’s what I can’t imagine living without.
Hahaha. I knew I was wrong about the polymorphism there. You used big words and I’m a grug c programmer =]
We use those generic containers in c as well. Just, that we roll our own.
Move semantics, in the general sense of ownership, I can see more of a use for.
I would just emphasize that manual memory management really isn’t nearly as scary as it’s made out to be. So, it’s frustrating to see the ridiculous lengths people go to to avoid it at the expense of everything else.
I definitely agree on the last point. Personally I like languages where I can get the compiler to check a lot more of my reasoning, but I still want to be able to use all the memory management techniques that people use in C.
I remember Jonathan Blow did a fairly rambling stream of consciousness talk on his criticisms of Rust, and it was largely written off as “old man yells at clouds”, but I tried to make sense of what he was saying and eventually realised he had a lot of good points.
Just watched this. Thank you. I think I’d agree with most of what he says there. I like trying languages, and I did try Rust. I didn’t like fighting with the compiler, but once I was done fighting the compiler, I was somehow 98% done with the project. It kind of felt like magic in that way. There are lots of great ideas in there, but I didn’t stick with it. A little too much for me in the end. One of my favorite parts of C is how simple it is. Like you would never be able to show me a line of C I couldn’t understand.
That said, I’ve fallen in love with a language called Odin. Odin has a unique take on allocators in general. It actually gives you even more control than C while providing language support for the more basic containers like dynamic arrays and maps.
The only conceivable way to avoid pointers in C is by using indices into arrays, which have the exact same set of problems that pointers do because array indexing and pointer dereferencing are the same thing. If anything array indexing is slightly worse, because the index doesn’t carry a type.
Also you’re ignoring a whole host of other problems in C. Most notably unions.
People say that “you only need to learn pointers”, but that’s not a real thing you can do. It’s like saying it’s easy to write correct brainfuck because the language spec is so small. The exact opposite is true.
You start out with negative knowledge in C++, then as you just hear the name for the first time, you get your balls stepped on, jizz, and then get post-nut clarity.
I read a pretty convincing article title and subheading implying that the best use for so called “AI” would be to replace all corporate CEOs with it.
I didn’t read the article but given how I’ve seen most CEOs behave it would probably be trivial to automate their behavior. Pursue short term profit boosts with no eye to the long term, cut workers and/or pay and/or benefits at every opportunity, attempt to deny unionization to the employees, tell the board and shareholders that everything is great, tell the employees that everything sucks, …
Then some hackers get in and reprogram the AI CEOs to value long term profit and employee training and productivity. The company grows and is massively profitable until some venture capitalists swoop in and kill the company to feed from the carcass.
The graph goes up for me when I find my comfortable little subset of C++ but goes back down when I encounter other people’s comfortable little subset of C++ or when I find/remember another footgun I didn’t know/forgot about.
When I became a team leader at my last job, my first priority was making a list of parts of the language we must never use because of our high reliability requirement.
Sure, strtok is a terrible misfeature, a relic of ancient times, but it’s plainly the heritage of C, not C++ (just like e.g. strcpy). The C++ problems are things like braced initialization list having different meaning depending on the set of available constructors, or the significantly non-zero cost of various abstractions, caused by strange backward-compatible limitations of the standard/ABI definitions, or the distinctness of vector<bool> etc.
No you are right! Honestly it was several years ago and I struggled to remember exactly what I came up with before I left.
In our application we for example never use dynamic memory allocation. It has to be done very carefully so we don’t crash. Problem is there’s lots of sneaky ways one can accidentally do it from the standard library.
That’s one thing that always shocks me. You can have two people writing C++ and have them both not understand what the other is writing. C++ has soo many random and contradictory design patterns, that two people can literally use it as if it were 2 separate languages.
C is almost the perfect subset for me, but then I miss templates (almost exclusively for defining generic data structures) and automatic cleanup. That’s why I’m so interested in Zig with its comptime and defer features.
You may also like Odin if you haven’t already started zig. It’s less of a learning curve and feels more like what c should have always been. It has defer and simple generics, but doesn’t have the magic of comptime.
Me too. If I can use it, I prefer C#: if I’m not doing systems programming, don’t have to worry about legacy code, and am mainly supporting Windows, then it’s really quite cozy.
update: i just looked it up and they are not. Visual J++ is a predecessor to C#. Nevertheless, the name “Visual J++” in all its Microsoftian goodness(?) is as good a descriptor as any for what C# turned into
So more an iterative family member, which I suppose was more what I’d expect with how Microsoft historically handled programming languages. Still interesting! Thanks for the fact-check!
I like C# too. I feel like I shouldn’t because of how Microsoft it is, but I can’t help but see it as a better put together/structured Java when I use it.
I feel the same, but to me, it’s more understandable than the other C derivatives. I just understand it better. I’ve been thinking of diving into rust lately.
They put new AI controls on our traffic lights. Cost the city a fuck ton more money than fixing our dilapidated public pool. Now no one tries to turn left at a light. They don’t activate. We threw out a perfectly good timer no one was complaining about.
But no one from Silicon Valley is lobbying cities to buy pool equipment, I guess.
I’ve seen a video of at least one spa that does that. They mine bitcoin on rigs immersed in mineral oil, with a heat exchanger to the spa’s water system. I’m struggling to imagine that’s enough heat, especially piped a distance through the building, to run several hot tubs, and I’m kind of dubious about that particular load, but hey.
A large data centre can use over 100 MW at the high end. Certainly enough to power a swimming pool or three. In fact swimming pools are normally measured in kW not MW.
We are a small software company. We’re trying to find a useful use case. Currently we can’t. However, we’re watching closely; at the rate it’s improving, one may yet emerge.
Whilst it’s a shame this implementation sucks, I wish we would get intelligent traffic light controls that worked. Sitting at a light for 90 seconds in the dead of night without a car in sight is frustrating.
That was a solved problem 20 years ago lol. We made working systems for this in our lab at Uni, it was one of our course group projects. It used combinations of sensors and microcontrollers.
It’s not really the kind of problem that requires AI. You can do it with AI and image recognition or live traffic data but that’s more fitting for complex tasks like adjusting the entire grid live based on traffic conditions. It’s massively overkill for dead time switches.
Even for grid optimization you shouldn’t jump into AI head first. It’s much better long term to analyze the underlying causes of grid congestion and come up with holistic solutions that address those problems, which often translate into low-tech or zero-tech solutions. I’ve seen intersections massively improved by a couple of signs, some markings and a handful of plastic poles.
Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you can expect.
I can see the headlines now: “New social media trend where people are asking traffic light Ai to solve the traveling salesman problem is causing massive traffic jams and record electricity costs for the city.”
You need to really specify what is meant by “AI” here. Chances are it’s probably some form of smart traffic lights to improve traffic flow. Which is not all that special. It has nothing to do with LLMs
Honestly I’m not sure. We had circular sensors for a long time, about the size of a tall drinking glass; now there are rectangular sensors, about twice the size of a cell phone, with a bend (an arc) to them. I know they weren’t being used as cameras at all before; no one was getting tickets with pictures from them. It’s a small town. What exactly the new system is, I’m not sure. Our local news all went out of business, so it’s all word of mouth, or going to town hall meetings.
It’s funny because this is what I was afraid of with “AI” threatening humanity.
Not that we’d get super-intelligences running Terminators, but that we’d be putting black-box “I dunno how it does it, we just trained it and let it go” tech into civilization-critical applications because it sounded cool to people with more dollars than brain cells.
programmer_humor