I think it would be a worthwhile research project to find out how many users just click through these, accepting what the website wants you to accept by default. It effectively operates like a EULA for every single website, which produces overall fatigue and lack of care. When you’ve visited 20 sites in one day, you just start being irritated by having to constantly make a decision before you can view any content, and just mash whatever button you need to proceed.
I also live in Europe and almost all websites display a dialog that asks you to choose cookie preferences. However, a few websites, mostly German ones (spiegel.de, gutefrage), give you the option to either browse with ads and cookies or pay. I do not use those websites, and I imagine it is not legal.
My job was to organise the work between the workers, keep the business away from my subordinates, and only take up their time when the request came with complete information and a specific reason.
And if I wasn’t doing one of the things above, my job was to pick up the horrible tasks that no one else wanted or that I had the experience and domain knowledge for (e.g. accessibility testing).
Margaret Elaine Hamilton (née Heafield; born August 17, 1936) is an American computer scientist, systems engineer, and business owner. She was director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for NASA's Apollo program. She later founded two software companies—Higher Order Software in 1976 and Hamilton Technologies in 1986, both in Cambridge, Massachusetts.
Hamilton has published more than 130 papers, proceedings, and reports, about sixty projects, and six major programs. She invented the term "software engineering", stating "I began to use the term 'software engineering' to distinguish it from hardware and other kinds of engineering, yet treat each type of engineering as part of the overall systems engineering process."
On November 22, 2016, Hamilton received the Presidential Medal of Freedom from president Barack Obama for her work leading to the development of on-board flight software for NASA's Apollo Moon missions.
Huh, didn't know about her! She sounds like a badass lady!
People might be more familiar with this viral picture as well, if not the name.
“Margaret Hamilton shown in 1969 standing beside listings of the software developed by her and her team for the Apollo program’s Lunar Module and Command Module.”
My mom was a systems programmer who used assembly language and built a lot of the banking infrastructure!
Originally, programming was actually a woman-dominated field because it was considered a subset of secretarial work and “beneath men” (not for any good reason).
If you watch the recent Cumberbatch movie about Turing, the eagle-eyed observer will notice that nearly everyone who actually interacts with the computer is a woman.
Not to turn this into a sociology discussion, but for anyone unaware: this is a fairly common pattern.
Women often pioneer fields like this, but as soon as a field comes to be seen as something “important” or “respectable”, it suddenly becomes male-dominated.
The opposite also happens: when society deems something unimportant, a male-dominated field becomes female-dominated - see teaching for an unfortunate example of a field that used to be highly paid and respected, and is now largely looked down on.
Sorry, don’t mean to go off on a tangent - it just bugs me and I think more people should be aware of it.
Beer brewing was originally a field dominated by women.
The prestige associated with a position can also change the expected gender. Women traditionally cooked meals at home, but “chefs” are predominantly male, especially famous or celebrated chefs.
You can google “women in computing” for more details, or check out en.wikipedia.org/wiki/Women_in_computing - it’s amazing how much women contributed to this field and how little known that appears to be. (I only learned about it a few years ago myself.)
But the gist is:
Early on (i.e. the 1940s and 50s), men thought the prestige and honor was in building the giant machines (which back then could fill a classroom or more). Actually programming them was considered easier, “just like following a recipe”, so women got jobs as “computers” who did this part. To quote that wikipedia article: Designing the hardware was “men’s work” and programming the software was “women’s work.”
Fast forward to the 1970s and people had started realizing that programming was actually hard, and so it was promoted as a field boys should get educated in, while girls were encouraged to instead become nurses and teachers and such.
Using a computer was traditionally seen as a secretarial job, so the field was often dominated by women. It’s only since the rise of the consumer computer, and the large market it created, that a lot more men went into the field.
The secret to a healthy career in IT is to let things break just a little every once in a while. Nothing so bad as to cause serious problems. But just enough to remind people that you exist and their world would come crumbling down without you.
I get really fucking tired of justifying work. Like, I have delivered every single project I’ve ever been given ahead of schedule. But every time a new project comes up, higher level managers want all these update meetings to check up on the status, discuss risk factors that might prevent it from being delivered, and a bunch of other bullshit. You’re the risk factor, motherfucker, you and your meetings. Get the fuck out of my way and I’ll deliver it ahead of schedule just like literally every other project I’ve ever been in charge of. Quit feeling that you need to be involved! You don’t. You’re a road block that provides no value. Ugh!
If you’re ignoring all the risk factors and have no contingency plans or measurements against projected time and budget, then you’ve delivered everything on time and on budget by luck.
If you already have those, those meetings should absolutely be a 30 min weekend meeting to check on status and what else you may need to keep delivering.
I know they should be 30 minutes per week. But they’re not, and that’s the frustration. A weekend meeting though? I have a feeling that we may perceive work-life balance differently.
I don’t know how to spot a memory bug in an out-of-order elevator, but I once saw and reported a wiring error in a working elevator. It made for an interesting conversation at the reception desk, but since I could precisely describe what was wrong and its verifiable consequences, they took me seriously. And sent me a “Thank You” email later ;-)
You know how everything has LCD screens these days? And sometimes you’ll witness a good old crash? I know I’ve seen quite a few.
I’m guessing something like that.
Bro needs to go big. Why not all electronics and electronic systems in general? As it is he could still be “caught with his pants down” by another speculative execution bug.
“Usable” is a strong statement… It went from a “misery-inducing, insufferable machine” to an “extremely big annoyance”. I do concede that’s still progress.
Winget-UI specifically can run the upgrade tool automatically for you; that’s what I meant by “automation”. You could also add a scheduled task to run Winget by itself every day if you need to.
I really want to love the “everything is an object” model of PowerShell, but I just have zero uses for a shell on Windows. Granted, my Windows usage is like 15 minutes a week most of the time, but still. I also can’t be bothered to use it for work, because it’s exclusively Linux/Linux-ish over there, so it’s not worth the effort.
Either way, I like the idea, can’t really justify figuring out the details.
It’s a wonderful tool for me in a Windows environment/shop, especially with how it works well with all the Windows and Microsoft administration systems/tools we use.
Personally, I’m less interested in any language’s hypothetical merits than how it fits as a tool for what I need to accomplish and ease of future maintenance when the script/program/automation inevitably needs to be adjusted.
All that said, I can’t think of a legitimate reason to use PSCore on non-Windows hardware unless you’re just really familiar with PS and literally nothing else. Even then you’re better off taking time learning a better tool for that environment.
If you only have to use it 15 minutes every week it’s probably not worth getting to know.
I work in a Windows shop, so I love everything being an object, most of the time. At least for the things that are worked out completely.
It’s great for things you need to iterate on, or just for figuring out what you can do by piping a result to Get-Member. If you’re interested in getting better at PowerShell at some point, I highly recommend PowerShell in a Month of Lunches. (Also because I like Manning’s model of automatically offering the digital versions of books they sell, and of offering free previews of entire books, given enough time.)
I hope I didn’t come across as defending PS. PS sucks, and whoever decided to have function names use capitalized words with dashes in between needs to have their brain scanned.
I do a lot of work in PS and I don’t find it that bad. But you forgot what’s even dumber about their function naming conventions.
Function names are supposed to be a single word verb, then the dash, then the rest. But not any verb, you’re supposed to use one from PS’s list of acceptable ones which has some really weird omissions. And they break their own single word verb convention with “acceptable verbs” ConvertTo and ConvertFrom (ConvertTo-SecureString, ConvertFrom-Json), which are the only exception to one word verbs before the dash.
Function names are definitely one of my biggest peeves with it.
Additionally, their basic comparison operators are dumb as hell. How is “-le” better or clearer in meaning than “<=”? -ne instead of !=, but == isn’t just -e, it’s -eq. And you can’t slap an n in front of the other comparison operators for “not”: -nle isn’t a thing. You’ve got to wrap the whole comparison in parentheses and slap a ! on the front, or slap -not in front. But don’t try to do !-le, because that’s also not a thing. It’s not terrible, but I refuse to believe that -eq is more readable than ==.
Functionally speaking, PS is a really good shell language. It’s just minor things about it that I don’t enjoy. As you said, it feels like the language design includes some poor decisions.
I have only ever used simply “git push”. I feel like this is a “how to say that you barely know how to use git without saying that you barely know how to use git” moment:-D.
The first time you manually push like that, you can add the -u flag (git push -u origin master) to push and set the branch’s default upstream. Afterwards, a plain git push while that branch is checked out will push the branch to that default upstream. This is per-branch, so you can have a main branch that pulls from one repository and a patch branch that pulls and pushes to a different repository.
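A minimal sketch of that workflow, using a local bare repository as a stand-in for a hosted remote (all paths and names here are made up for illustration):

```shell
set -e
tmp=$(mktemp -d)

# A bare repo standing in for a hosted remote
git init --bare -b master -q "$tmp/origin.git"

# A working repo that will push to it
git init -b master -q "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "You"
git remote add origin "$tmp/origin.git"

echo hello > file.txt
git add file.txt
git commit -q -m "first commit"

# First push: -u records origin/master as this branch's upstream
git push -q -u origin master

# From now on, a plain `git push` on this branch just works
echo again >> file.txt
git commit -q -am "second commit"
git push -q

git rev-parse --abbrev-ref 'master@{upstream}'
```

The last command prints the recorded upstream (origin/master), which is exactly the per-branch setting the comment describes.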
My strategy is to just type git push and get some kind of error message about upstream not being set or something. That’s a signal for me to take a second to think about what I’m actually doing and type the correct command.
I can follow along re-typing the same commands told to me by a more senior dev just like any average monkey!
This reminds me of something I made a long time ago: [image]
Since I am calling myself dumb, I estimate my progress to be somewhere perhaps at the 20th percentile marker? :-D One of these days I’ll RTFM and rocket all the way up to be dumb enough to properly qualify for “below average”! :-P
It’s git push origin branch and then merge after submitting a pull request from branch to main after a successful lint check, build, deployment, and testing in a non-production environment, and PR approval. What kind of wild west operation allows pushing directly to main?
Our changes land in main at my workplace, once they’ve received a code review and all CI checks pass (which includes tests, E2E tests, etc). We use feature flags rather than feature branches, so all diffs / pull requests are against main. We use continuous deployment which means changes are automatically deployed once landed. Changes roll out just to employees, then to servers used by a small percentage of users (2% I think), then everywhere. All risky changes are gated by feature flags so they can be enabled or disabled without a code push.
We just had a customer escalation caused by exactly this. One group relied too heavily on tribal knowledge for code reviews and didn’t want any other process. Once the tribal elders were gone, no one knew all the things to look for, and there was no other way to catch issues.
As a Continuous Integration and Test guy, I was standing in the corner yelling, “I thought you were DevOps. Where’s the automation?” Fine, Puppet/YAML doesn’t work with a traditional build and test, but you could have done syntax validation and schema validation that would have caught it before the code review even happened (and I showed them how a year ago, even offered to do it for them), and set up some monitoring to discover when you break stuff, before customers discover it.
Do you not use a fork as your origin, separate from the production upstream repo? I’ll push to my fork’s main branch for small or urgent changes that will definitely be merged before anything else I’m working on.
If it’s a private repo I don’t worry too much about forking. Ideally branches should be getting cleaned up as they get merged anyway. I don’t see a great advantage in every developer having a fork rather than just having feature/bug branches that PR for merging to main, and honestly it makes it a bit painful to cherry-pick patches from other dev branches.
I never worked anywhere where they had this set up. I would push to branches and make pull requests, but always work in the production environment.
I was mainly working as a data engineer though so that’s probably why. It’s hard to have test environments since you can’t replicate all the enormous amounts of data between environments without huge costs.
There are many strategies for maintaining test environments for that kind of thing. Read-only replicas, sampling datasets for smaller replicas, etc. Plenty of organizations do it, so it’s not really an excuse, imo.
No I know. But it was “good enough” for the company and we never had any serious issues working that way either. If someone pushed a faulty commit, we just reverted it and reloaded the data from the source system.
A lot of companies have kind of bad solutions for this sort of stuff, but it’s not talked about and nobody is proud of it. But it keeps the environments simple to work with.
No kidding. Never push to main - and in most setups you can’t anyway. While I get the joke of the meme, I’d send someone packing if they didn’t understand branching, branch flow, rebasing, or reverting. Even if you look up the command or do it all through your IDE, understanding the git workflow is important.
Git itself does not use that standard yet, so at least now there are two competing standards.
I get that there are cultural reasons why the word master was loaded language, but still, it’s not like institutional racism will go away. Meanwhile, the rest of the world which doesn’t struggle with the remnants of slavery has to put up with US weirdness.
Just ran git init in a brand new empty directory, and while it did create a master branch by default, it also printed out a very descriptive message explaining how you can change that branch name, how you can configure git to use something else by default, and other standards that are commonly used.
Also, there’s nothing saying your local branch name has to match the upstream. That’s the beauty of git - you have the freedom to set it up pretty much however you want locally.
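For example, the default initial branch name can be overridden per invocation, or set persistently (the directory name here is just for illustration):

```shell
set -e
tmp=$(mktemp -d)

# Per-invocation override of the initial branch name
git -c init.defaultBranch=main init -q "$tmp/demo"
git -C "$tmp/demo" symbolic-ref --short HEAD   # the new repo starts on 'main'

# Or persistently, for all future `git init` runs:
#   git config --global init.defaultBranch main
```

This is the `init.defaultBranch` setting that git’s hint message points at.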
Yeah, that’s what I’m saying: there is no one standard now. The stupid thing is that most of the problems this causes exist because there used to be one, and stuff was written assuming master branches are eternal.
I worked at a company that had some automation built on git, underneath GitLab, that would not let you delete master branches. When main became a thing, they just started hard-protecting those by name as well. It’s for regulatory reasons, and they are very stingy about it.
So when I created a few dozen empty deployment repos with main as the default, and then had to change them over to master so that everything lined up nicer with the rest of our stuff, I ended up with a few dozen orphaned, undeletable, empty main branches lying around. A bit frustrating.
That said, the whole thing is just that. A bit frustrating. If it makes some people feel better about themselves, so be it. I am blessed in life enough to take “a bit frustrating”.
Yeah that’s fair, I can see how that would be annoying for sure. I think that frustration stems more from company policy though, not necessarily the standard changing. And you know what they say, there’s nothing certain in this world except for death, taxes, and standards changing
It is trash code for sure, but most of the world’s code is trash, so we do have to accommodate trash code when we design stuff. That said, they do need to do this to comply with laws and make sure code doesn’t get lost (it’s finance), and this was the easy way to do it. Doing it better would have taken time and attention away from other stuff.
And standards do change, but they usually change to accommodate new features, or a new software product displaces an old one. I don’t really know of any tech standard that changed for cultural reasons. Point is, change is a cost. It may be worth paying that cost, but here the benefit was US cultural sentiments that most of the world doesn’t care about.
And the stupid thing is that even when standards change, you are not usually labelled as culturally out of touch if you don’t follow it. Most big orgs don’t follow changes that they don’t need to. Nobody calls you a bigot for running COBOL mainframes in 2023, but they might if you predominantly have master branches.
I guess my perspective is that some people I know were mildly annoyed about it for one morning two years ago, since nobody here cares about US identity politics. My personal opinion is that if the US didn’t fill up its for-profit prisons with black people whom those prisons then profit off of (just as an example), the word master would not bite as hard, and the whole thing would be moot.
If you don’t have autocomplete set up for your shell, get it working. If someone has a different branch named ma…, ask if you can delete it, and get your team to adopt a decent branch naming convention.
I really wish to work in a team where people have naming conventions for branches that are concerned about stuff like that. Must’ve been a nice place to work at.
I think the reason was ridiculous - just the fact that people didn’t like the word master anymore. But I’m used to it now, so fine, let’s use main. It makes sensitive people feel better.
If the devs are really exhausted and sad you can’t go wrong with bringing them a Java while they’re dealing with their latest Brainf**k . Knowing various languages helps you to C#, as long as you take good care of your eyes!
C++ is an awful candidate for a first programming language to learn, at least nowadays - it is very powerful, but it’s also full of foot-guns and past a certain point the learning curve becomes a wall
It’s a great candidate. It was my first “real” language (i.e. the first language that isn’t PHP/JS).
You have a text file. You call the compiler on it, and then you have an exe file that you can run. It does exactly what it is supposed to do, without thinking about the browser, the web server, the JVM, or some other weirdness.
I get that doing “good C++” is difficult, and using all the weird language features is difficult. But as long as you stick to strings, ints, ifs, and fors, you should be fine. Just don’t use generics, templates, new (keep everything on the stack), multiple inheritance, or complex libraries, and it’s a nice beginner language.
Yeah. My intro programming classes used C and C++ and they were great for illustrating the fundamentals. Plus I think it’s important to learn the building-blocks/history
This std::cout << "hello world" bullshit is in no way intuitive. You’re using the bit-shift operator to output stuff to the console? WTF? Why two colons? What is cout? And then these guys go on to complain about JS being weird…
No, C is where it’s at: printf("hello world"); is just a function call, like all the other things you do in C.
C is no beginner heaven either, printf is its own can of “why can this function have any number of arguments and why does the compiler have to complain about the formatting every 25 milliseconds” worms
For non-programmers (who most definitely don’t know that >> and << are bit-shift operators), shoving something into something else is more intuitive than “calling a function with parameters”.
Also, don’t get me started on the unintuitiveness of first passing a string where text is mixed with funny codes signaling the places where values are going to be inserted, with the values passed afterwards - as opposed to just “shove some text into stdout, then shove a value into stdout, then shove some more text into it”.
Absolutely, once you are used to it, the “template” style of printf makes sense (plus it is naturally well suited for reuse). But when first exposed to it, people don’t really have any real-life parallel for situations where one first draws the final picture with some holes left in it and later fills in the holes with actual values - because in real life one typically does it all at once, at most by incremental composition as in C++, not by templating - so that style is not intuitive.
That’s not what this operator does normally, and if you try to “shove” something into anything else (an int into a variable? a function into an object?) you’ll get surprises… Basically it’s “special” and nothing else in the language behaves like it. Learning hello world in C++ teaches you absolutely nothing useful about the language, because it doesn’t generalize.
C, in contrast, has many instances of complex functions like printf (another commenter mentioned variable arguments), and learning to call a function is something very useful that generalizes well to the rest of the language. You also learn early enough that each different function has its own “user manual” of how to use it, but it’s still just a function call.
Pointers are almost always a bad idea - but you’ll probably get a lot of mileage out of having a handful of them in a large project. There’s an impulse with new C++ devs to do everything with pointers, use complex pointer arithmetic to do weird array offsets, abuse predictable layouts to access stack variables, etc. Pointers are fine when used in moderation.
Basic C++ isn’t really confusing (if you are not handwriting makefiles). It starts to get fucky when you get into memory handling, templates, etc. I’m assuming they are only using C++ over C for basic OOP (class/structs inheritance etc).
I actually found it a lot easier once I had learnt C. That way I know where all of the problems are and can use the high-level stuff to get around them, while still fundamentally understanding what is going on.
Oh yeah, if you know C, it can be way more convenient depending on the language features you care about (as long as you tread very carefully when doing type punning, which you would rarely want to do).
One of the classes that can be chosen is: 6.4400 Computer Graphics, which has a programming 101a/b class as a prereq (granted, it uses python instead of C++, but pretty sure they used C++ as their language-of-choice for the programming 101 language until recently).
Given the variety of digital art (video games, VTube avatars/VR avatars, more traditional-style digital art, etc), having the tools to make those kinds of things can be useful for making responsive/interactive digital art.
I’m currently working in game making and a ton of tools for things like 3D model creation (such as Houdini or Substance) use some form of procedural generation where at least understanding programming concepts is important and actual programming is required to do the more advanced stuff.
I’m very confused as well. Some universities do have ridiculous requirements though. I was planning on being a veterinarian and had to take politics classes. I switched to IT and was required to take general chemistry.
40 years ago at UCLA, I had to do my FORTRAN programs on punch cards submitted through the batch system. The Math department (there was no separate CS department then) offered only 1 section in FORTRAN alongside 40 others in PASCAL, and it was taught by an Engineering professor. Why would a Chemistry major take a computer science class? Remember all those shiny machines CSI uses to do forensic analysis? They came from chem labs.
There are lots of ways computers are used for making art. Not just video-games. For example, projection mapping, algorithmic music composition, live coding, etc.
You can look into openFrameworks for examples of C++ in arts.
This was the first serious creative-coding framework I learned, back around 2008 or 2010. I have been in this field since then. I have seen Java, JavaScript, and Kotlin creative frameworks, but not Python ones, and I am still as surprised as you are.
I think this is a good question and answer in the sense that it reveals a fundamental misunderstanding on the part of the student - exactly what you hope an exam would do! (Except for how this seems to combine javascript’s .length and python’s print statement - maybe there is a language like this though - or ‘print’ was a javascript function defined elsewhere).
This reminds me of when I was a TA in a computer science course in the computer lab. Students were working on a “connect 4” game - drop a token in a column, try to connect four. A student asked me, while writing the drop function, whether he would have to write code to ensure that the token “fell” to the bottom of the board, or whether the computer would understand what he was trying to do. Excellent question! Because the question connects to a huge misunderstanding that the answer has a chance to correct.
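To make the answer concrete: no, the computer has no idea about gravity - the “falling” has to be written out explicitly. A minimal sketch (the board layout and function name here are hypothetical, not from the actual assignment):

```python
def drop(board, col, token):
    """Place token in the lowest empty cell of column col.

    board is a list of rows, row 0 at the top; None means empty.
    The token "falls" only because we scan rows from the bottom up.
    Returns the row it landed in, or None if the column is full.
    """
    for row in range(len(board) - 1, -1, -1):  # bottom row first
        if board[row][col] is None:
            board[row][col] = token
            return row
    return None  # column is full


# A standard 6-row, 7-column board
board = [[None] * 7 for _ in range(6)]
drop(board, 3, "X")  # lands in the bottom row (row 5)
drop(board, 3, "O")  # stacks on top of it (row 4)
```

The loop direction is the whole trick: nothing “understands” falling, it’s just a bottom-up search for the first empty cell.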
Teaching complete “clean slates” is a great way to re-evaluate your understanding.
I’ve had to teach a few apprentices, and while they were perfectly reasonable and bright people, they had absolutely no idea how computers worked internally. It’s really hard to put yourself in the shoes of such a person if it’s been too long since you were at that point of ignorance.
Second this. I’m a teacher’s aide, and I get to fix code for students who are not technically inclined. It’s so much fun, and I’ve learned so much MacGyvering all that shitty mess together.
For reference the “language” used in the exam would probably be Exam Reference Language (OCR exam board specifically, which I believe this question is from) which is just fancier pseudocode.