From when this has come up in the past: it's a lucrative career path, but probably tricky to break into, since nobody maintaining a COBOL system can afford to put it into the hands of someone inexperienced.
The dudes earning half a million are able to do so because they've been at it since before their boss was born.
Yeah, and from what I understand, learning the language itself isn't the hard part. It actually has rather few concepts. What's difficult is learning how to program a computer correctly without all the abstractions and safety measures that modern languages provide.
Even structured programming (if/else, loops, and the like) had to be added to COBOL in a later revision.
It seems that back in the day, people effectively ran a simple compiler by hand on paper. It could work pretty well; Roller Coaster Tycoon was famously written in assembly.
Well, I only wrote simple exercises in Intel assembly at uni, though there were more of them in AVR assembly.
You can structure things nicely and understandably if you want.
It's an acquired skill, just like many others. It's just that today, writing something big fully in assembly isn't in demand, so the skill is usually found among embedded engineers and the like.
Sorry, I don't remember what I used as a tutorial back then (possibly nothing), and I don't write assembly often; it was just an opinion based on the experience from the beginning of my comment. That said:
You have call and return, so you can write procedures. You have compare and conditional-jump instructions. And you have timers and interrupts for scheduling. That allows for basic structure.
You split your program functionally into many files (say, one per procedure) and include those. That allows for basic complexity management.
To use OS syscalls you need to look up the relevant OS ABI reference, but it's not hard.
So all the usual. Similar to the dumber way of using C.
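The structure described above translates almost mechanically into C, which is maybe the clearest way to show it in a comment. Here's a hedged sketch (the `sum_to` function and the summing task are invented for illustration): compare plus conditional jump becomes `if` plus `goto`, and call/return becomes an ordinary function.

```c
/* Hypothetical illustration: sum the integers 1..n written the way an
 * assembler sees it, with an explicit counter, a compare, and
 * conditional/unconditional jumps (goto) instead of a while loop. */
int sum_to(int n) {
    int total = 0;
    int i = 1;
loop:                       /* a label is just a jump target, like an asm label */
    if (i > n) goto done;   /* cmp i, n ; jg done */
    total += i;
    i += 1;
    goto loop;              /* jmp loop */
done:
    return total;           /* ret, with the result in the "return register" */
}
```

A compiler effectively performs this translation in reverse: a structured `while` loop compiles down to exactly this compare-and-jump shape.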
In general, writing whole programs in assembly language (EDIT: emphasis on whole programs; assembly is used all the time in codecs and other DSP code, at the very least) is unpopular not because it's hard, but because it's very slow.
Once you get into it you'll wonder how you ever programmed without "divisions"! I mean honestly, just declaring variables anywhere? Who needs that. Give me a nice, defined data division any day.
A friend has a COBOL + IBM AIX combo going for him, and his on-call position, with at most 1 day/week of actual work, pays more than my full-time very senior dev role.
I know a person who does AIX consulting with COBOL. She works about 4-8 weeks a year, spread between 3 companies, and makes enough to raise a family and fund a massive hobby farm. It helps to be in an area with a large fintech presence, I imagine.
Very nice. Yeah, that's the problem. I broke into AIX in the wholesale industry in the early 2000s, so I have very few finance connections, which is where it all seems to be.
I have also been working from home for 7 years now and figured I'd have to go onsite for banks. That may have changed post-COVID. I will poke around and see what might be out there for me.
Idk what the AIX job market is like right now, but several years ago banks in central Europe poached employees back and forth just to reach the minimum required staffing.
Every time I hear this from one of the devs under me I get a little more angry. Such a meaningless statement: what are you gonna do, hand your PC to the fucking customer?
It's not actually meaningless. It means "I did test this and it did work under certain conditions." So maybe if you can determine what conditions are different on the customer's machine that'll give you a clue as to what happened.
The most obscure bug I ever created ended up being something that would work just fine on any machine that had, at any point, had Visual Studio 2013 installed on it, even if it had since been uninstalled (it left behind the library that my code change had introduced a hidden dependency on). It would only fail on a machine that had never had Visual Studio 2013 installed. This was quite a few years back, so the computers throughout the company had mostly had 2013 installed at some point; only brand-new ones that hadn't been used for much would crash when they happened to touch my code. That was a fun one to figure out, and the list of "works on this machine" vs. "doesn't work on that machine" was useful.
I've always, always been a documentation-only guy. Meaning I almost never use anything other than the documentation for the languages and libraries I use. I genuinely don't feel that I'm missing out on anything, I already write code faster than my peers and I don't feel the need to try to be some sort of 10x developer.
Learning the proper way to do things has nothing to do with being smart. I don't think I'm smarter than my peers, I just have a longer attention span (thanks adderall!)
Yeah, I don't buy it. I don't think I've ever once used documentation so good I didn't need to use a search engine at some point. Kind of odd you pretend you never use one lol… You and all the downvoters seem to have some kind of strange complex where you need to feel superior.
I don't get how you could think that's what I meant. Every person with an internet connection uses a search engine. I use search engines to find relevant documentation. I assumed that went without saying.
If you don't believe me, there isn't really anything I can do to change your mind. It doesn't matter to me either way, but no, I don't go on Stack Overflow, because it's toxic and unhelpful. I go on forums or IRC if I want to discuss the development of a library.
Honestly, I am pretty used to the "you can't do things your way, you have to do them our way or it won't work" shit, because practically every neurotypical thinks this way. All I'm saying is there's a reason I'm faster than my peers. I don't know how much of that is due to my avoidance of "crutches", but I'm certain it's nonzero.
The fact that you people pretend to only use documentation like some elitist boyscouts actually does say something about you.
"I don't believe you, you're lying, you just want to seem smart." I don't give a flying fuck if random Internet people think I'm smart or whatever the hell else you're suggesting. Just flat don't care. I know there is nothing wrong with using search engines and Stack Overflow, and that we all do it. Pretty weird you all pretend otherwise. Kind of sad, really, that your ego requires this of you.
It's not as dumb as you are suggesting. I've been programming in various languages since the 80s and I can say with confidence that your take is, at best, absurd. Go spend some time with GPT-4.
I'm IP-banned due to my VPN. If they don't want my business, that's fine. I'm not getting off my VPN just to interact with proprietary software.
I've always, always been an intuition-only guy. Meaning I almost never use anything other than blind guessing about how languages and libraries work. I genuinely don't feel I'm missing out on anything, my farts already smell better than my peers' and I just don't feel the need to learn the modern tools of my trade.
Almost every distro that follows the XDG specifications defaults to having folders like Downloads, Pictures, and Videos in $HOME. One of the first things I do as part of an installation is modify ~/.config/user-dirs.dirs and set a specific folder, say /data/downloads or ~/downloads, for every XDG user directory.
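For reference, user-dirs.dirs is just shell-style assignments read by the xdg-user-dirs tooling. A sketch of what that edit might look like (the /data paths here are examples, not defaults; values have to be "$HOME"-relative or absolute):

```shell
# ~/.config/user-dirs.dirs
XDG_DESKTOP_DIR="$HOME/desktop"
XDG_DOWNLOAD_DIR="/data/downloads"
XDG_DOCUMENTS_DIR="/data/documents"
XDG_MUSIC_DIR="/data/music"
XDG_PICTURES_DIR="/data/pictures"
XDG_VIDEOS_DIR="/data/videos"
XDG_TEMPLATES_DIR="$HOME/templates"
XDG_PUBLICSHARE_DIR="$HOME/public"
```

Most desktops pick this up on the next login; the same package also provides xdg-user-dirs-update, which generated the file in the first place.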
Something similar happened to me a while back. I was copying some code from a Mac to a remote Linux host. For some reason the Mac was using a thing called an "en dash" (–), which is slightly longer than a regular hyphen (-), and it was really fucking frustrating to figure out.
I don't know why I'm here commenting about this, but I love type, so:
Hyphen (-): the short one, used for hyphenated words. Fire-eaters. Close-up.
En-dash (–): slightly longer, traditionally the width of a lowercase "n" in the typeface. Used for things like ranges: 10–11:30, August–October.
Em-dash (—): the longest of the three, the width of a lowercase "m". Used as a punctuation mark to denote a side comment or to abruptly cut off a sentence. "It's a great punctuation mark—in fact I overuse it—but it's still useful." "Hey, where are you going with that giant—"
I didn't bother to double-check the definitions, so there might be more specific rules, but these are my rules of thumb.
Some Mac apps have quirks. The default Notes app probably wasn't meant for pasting code into, but when you do, it changes the quotes and makes them all fancy. Drives me up the wall, and there's nobody to blame but me.
I was looking for this. Some text from webpages ends up pasting that way too, even on non-Mac systems, and it is utterly infuriating. Nothing I hate more than having to paste something into Notepad++ so I can fix all the stupid quotes from some online tutorial that's giving you things to paste into a command prompt.
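If you'd rather not round-trip through an editor, straightening the quotes is mechanical enough to script. A minimal sketch in C (the function name is made up; it only handles the four common curly quotes U+2018/U+2019 and U+201C/U+201D, which encode in UTF-8 as the 3-byte sequences E2 80 98/99 and E2 80 9C/9D):

```c
/* Hypothetical sketch: rewrite UTF-8 curly quotes as plain ASCII
 * quotes, in place. Since each replacement shrinks the string, a
 * separate write pointer compacts it as we go. */
void straighten_quotes(char *s) {
    unsigned char *src = (unsigned char *)s;
    unsigned char *dst = (unsigned char *)s;
    while (*src) {
        if (src[0] == 0xE2 && src[1] == 0x80 &&
            (src[2] == 0x98 || src[2] == 0x99)) {
            *dst++ = '\'';          /* curly single quote -> ' */
            src += 3;
        } else if (src[0] == 0xE2 && src[1] == 0x80 &&
                   (src[2] == 0x9C || src[2] == 0x9D)) {
            *dst++ = '"';           /* curly double quote -> " */
            src += 3;
        } else {
            *dst++ = *src++;        /* everything else passes through */
        }
    }
    *dst = '\0';
}
```

This deliberately leaves dashes and every other character alone, so it's safe to run over a whole pasted tutorial before feeding it to a shell.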
Haha, they're just different ways of estimating the difficulty of a work task. Story points are a kind of estimate represented by a number: the larger the number, the more difficult the task.
My company uses WAG (wild-ass guess) time estimates, where we have to actually say how long we think a task is going to take, in days or weeks. It sounds fine, but programming tasks are notoriously hard to estimate since you have to consider so many different factors. I'm especially bad at it, so I'd much prefer just saying "this task is an 8 because it seems hard".
It'll take 20 hours. Unless it's harder than I thought. Or it's easier than I thought. Or it's exactly as hard as I thought, except there's one little thing that I get stuck on for 5 hours.
I recently estimated a task to take a couple hours, but it ended up taking a week. Hadn't considered having to update a bunch of other teams' services with new proto schemas and making sure they were deployed before our own service.
My team has been trying an approach where, instead of story-pointing, we break everything down into the smallest incremental tasks we reasonably can and use the overall number of tasks as the metric instead of story points.
In theory it's meant to be just as accurate on larger projects, because the larger-than-normal and smaller-than-normal tasks all average out, and it saves the whole headache of sitting around arbitrarily setting points on everything based mostly on gut feeling.
Huh. That's such a simple and obvious approach, I'm kinda mad I've never thought of it lmao. It seems like you're essentially breaking everything down to a 1 (or as close as you can get it), which is probably a more accurate measurement anyways. Neat.
I remember reading a study done across some large organizations that showed this approach was more accurate than other estimation techniques. Makes sense to me.
When I teach story points (not in an official Agile Scrum capacity, just as part of a larger course) I emphasize that the points are for conversation and consensus more than actual estimates.
Saying this story is bigger than that one, and why, and seeing people in something like planning poker give drastically differing estimates is a great way to signal that people don't really get the story or some major area wasn't considered. It's a great discussion tool. Then it also gives a really rough ballpark to help the PO reprioritize the next two sprints before planning, but I don't think they should ever be taken too seriously (or else you probably wasted a ton of time trying to be accurate on something you're not going to be accurate on).
Students usually start by using task-hours as their metric, and naturally get pretty granular with tasks. This is for smaller projects - in larger ones, amortizing to just number of tasks is effectively the same as long as itās not chewing away way more time in planning.
programmer_humor