I was originally a chip designer. Then I shifted into embedded development. Now I’m mainly a C# guy.
But when I shifted into embedded development, I also shifted into doing power engineering. I grabbed a couple of books on the topic at hand, taught myself a lot, and designed the electronics to meet the need. We sold the product to city utilities.
I remember one time I was in a room with probably 10 engineers from one of the utilities. After describing the product to them and going through a lot of our settings and such, I was explaining the difference between two of the algorithms we put in (because different utilities use different algorithms, and I just wanted one device that could do both). At some point I asked, “which of the two algorithms do you use?” and one responded “well, which do you recommend?” So I talked about why I thought one was better than the other.
They all started looking at each other and nodding and saying “Yeah, that’s the one we’re going to use.” I realized I could have said anything at that point and they would have agreed. They thought I was an expert. And that was my “last two frames of this comic” moment.
Now as a senior dev, I’ve seen enough shit to realize that most people have no idea what is going on, and are flying by the seat of their pants. So I figure my ignorance is a little less than theirs, and that gives me a lot of confidence, but I also realize that I can learn a lot from most people.
> Now as a senior dev, I’ve seen enough shit to realize that most people have no idea what is going on, and are flying by the seat of their pants.
What’s helpful in my industry is that new development happens so frequently that the absolute best answer today is probably the wrong one in the next few years. Since I’m never on the absolute cutting edge, I have to trust my team to pitch the plan and we roll with it.
What I think makes me a senior is me knowing that we don’t know anything, but being able to create a plan if/when we have to make changes.
I’ve always felt like I don’t deserve my role, I climbed the proverbial ladder quick and I am very young for my position (Principal Engineer). But I sleep fine at night because at the end of the day I was always honest with my skills, my intentions and my motivations, and I’m always sure to get full agreement from everyone before doing stuff. If after all that nobody figured out I’m a fucking idiot just making an informed guess, that’s on them.
I always fear the next company I join will have “real technical leaders” who will inevitably show me my place, but it hasn’t happened so far (3-4 massive companies in the last decade).
Maybe one day I will meet this person, but it is not this day… And so I try to teach the same to younger engineers: work through problems as a team and just do it until somebody stops you, because in a lot of cases nobody else has a clue either, and that’s what it means to lead.
When I was working stopgap jobs, such as retail after a move, my favorite go-to when asked “Whyyyy?” was “I have no idea. No one told me anything.” I sometimes miss those days.
You’re right though. Most people have enough knowledge to do the steps of the job or task. For many of them skipping a step shuts down that memory, if only temporarily. I’ve met only a handful of true experts. People who can do things forwards, backwards, upside-down, and mix things up on the fly. They are BY FAR the most uncommon.
I learned so much over the years abusing Cunningham’s.
I could have a presentation for the C-suite of a major company, post some tenuous claim related to what I intended to present on, and have people with PhDs in the subject citing papers and correcting me with nuances that would make it into the final presentation.
It’s one of the key things I miss about Reddit. The scale of Lemmy just doesn’t have the same rate and quality of expertise jumping in to correct random things as a site with 100x the users.
The major problem with Reddit is that you could never really trust the credentials of the person you were talking to. They might have been PhDs, or they might have been 13-year-olds who had just learned to Google. It amazes me how many times I saw a highly upvoted comment about a subject I knew a lot about that was just so blatantly wrong.
Only if it’s something controversial. If it’s something technical with no political affiliation, people vote for answers that sound right. Thankfully Cunningham’s usually comes to the rescue in time.
To be fair, this is not a Reddit thing; it can be found in the fediverse too. I can remember some such situations where a person posted wrong stuff, but in a very confident way. I was able to prove him wrong later, but nobody cared anymore.
I always kind of felt like those voices began to be drowned out the more and more popular reddit became. You’re correct about Lemmy’s scale, but there is certainly a sweet spot. I’m happy knowing Lemmy hasn’t yet reached its own, and reddit’s is long gone. I’m happier here and it’s likely only going to get better.
It’s a type of fiber optic cable where the center of the cable is literally hollow. Normal fiber uses a glass core. Light passing through glass travels at only about 2/3 the speed of light, since the speed of light is constant only in a vacuum. With hollow core, light is no longer passing through glass, so its speed is much closer to the actual speed of light.
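To put rough numbers on that claim (a back-of-the-envelope sketch, assuming a typical refractive index of about 1.46 for a conventional glass core and a 1000 km link):

$$
t_{\text{glass}} = \frac{nL}{c} = \frac{1.46 \times 10^{6}\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 4.9\ \text{ms},
\qquad
t_{\text{hollow}} \approx \frac{L}{c} \approx 3.3\ \text{ms}
$$

Roughly 1.5 ms saved each way, which is exactly the margin the low-latency crowd pays for.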
High Group Velocity, Low Latency Signal Transmission
The group velocity of guided light is usually close to the vacuum velocity of light. This implies substantially lower latency for signal transmission through hollow-core fibers.
I don’t know the physics of it. I posted some info for the parent you responded to. My understanding is the applied physics is different from traditional fiber.
The main physical principle behind propagation of light in conventional optical fibers is total internal reflection (TIR). However, engineering of optical materials with features on the scale of the wavelength of light offers many new possibilities for manipulating light. In particular, some microstructured fibres make it possible to guide light by a mechanism different from total internal reflection. In these fibres, light is trapped in the core by an out-of-plane band-gap, which appears over a range of axial wavevectors and prevents propagation of light in the microstructured cladding [Cregan (1999)], allowing guided modes to form in the central hollow core.
Eh, sometimes they’re right about this one though. It’s true that a request traveling near light speed is as fast as it can possibly be, but what if it’s 17 requests? Sometimes you can fix latency by doing fewer transactions.
edit: love a downvote with no reply. Just “No!” [stomps feet]
I actually kind of like the error handling. Code should explain why something was a problem, not just where it was a problem. You get a huge string of “couldn’t foobar the baz: target baz was not greebleable: no greeble provider named fizzbuzz”, and while the strings are long as hell they are much better explanations for a problem than a stack trace is.
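For anyone unfamiliar, that chain of explanations falls out of Go’s error-wrapping idiom. A minimal sketch (the foobar/baz/greeble names are just the joke names from the comment above, not a real API):

```go
package main

import (
	"errors"
	"fmt"
)

// Innermost failure: no provider was found.
var errNoProvider = errors.New("no greeble provider named fizzbuzz")

// Each layer wraps the error below it with its own context via %w.
func greeble(target string) error {
	return fmt.Errorf("target %s was not greebleable: %w", target, errNoProvider)
}

func foobar(target string) error {
	if err := greeble(target); err != nil {
		return fmt.Errorf("couldn't foobar the %s: %w", target, err)
	}
	return nil
}

func main() {
	// Prints the whole chain:
	// couldn't foobar the baz: target baz was not greebleable: no greeble provider named fizzbuzz
	fmt.Println(foobar("baz"))
}
```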
A desperate fear of modular code that provides sound and safe abstractions over common patterns. That the language failed to learn from Java and was eventually forced to add generics anyway (a lesson from 2004) says everything worth saying about the language.
The language was designed to be as simple as possible, as to not confuse the developers at Google. I know this sounds like something I made up in bad faith, but it’s really not.
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike
The infamous if err != nil blocks are a consequence of building the language around tuples (as opposed to, say, sum types like in Rust) and treating errors as values like in C. Rob Pike attempts to explain why it’s not a big deal here.
Having a Result[T, Err] monad that could represent either the data from a successful operation or an error. This can be generalised to the Either[A, B] monad too.
It’s how Rust does error handling, for example: you have to test a return value for “something or nothing”, but you can pass the monadic value around and handle the error later. In Go you have to handle the error explicitly (almost) all the time.
Here’s an example (first in Haskell, then in Go). Let’s say you have some types/functions:
```haskell
type Possible a = Either String a

data User = User { name :: String, age :: Int }

validateName :: String -> Possible String
validateAge :: Int -> Possible Int
```
then you can make
```haskell
mkValidUser :: String -> Int -> Possible User
mkValidUser name age = do
  validatedName ← validateName name
  validatedAge ← validateAge age
  pure $ User validatedName validatedAge
```
for some reason <- inside code blocks shows up escaped as &lt;- on Lemmy, so I used the left arrow unicode in the above instead
in Go you’d have these
(no Possible type alias, Go can’t do generic type aliases yet, there’s an open issue for it)
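The Go snippet didn’t survive the formatting here; a minimal sketch of what the equivalent would look like (the validators’ exact behavior is assumed from the surrounding discussion):

```go
package main

import (
	"errors"
	"fmt"
)

type User struct {
	Name string
	Age  int
}

// Each validator returns a value and an error, Go's stand-in for Either.
func validateName(name string) (string, error) {
	if name == "" {
		return "", errors.New("name must not be empty")
	}
	return name, nil
}

func validateAge(age int) (int, error) {
	if age < 0 {
		return 0, errors.New("age must not be negative")
	}
	return age, nil
}

// Every step has to check and propagate the error by hand.
func mkValidUser(name string, age int) (User, error) {
	validatedName, err := validateName(name)
	if err != nil {
		return User{}, err
	}
	validatedAge, err := validateAge(age)
	if err != nil {
		return User{}, err
	}
	return User{Name: validatedName, Age: validatedAge}, nil
}

func main() {
	fmt.Println(mkValidUser("Ada", 36))
}
```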
In the Haskell, the fact that Either is a monad is saving you from a lot of boilerplate. You don’t have to explicitly handle the Left/error case, if any of the Eithers end up being a Left value then it’ll correctly “short-circuit” and the function will evaluate to that Left value.
Without using the fact that it’s a functor/monad (e.g you have no access to fmap/>>=/do syntax), you’d end up with code that has a similar amount of boilerplate to the Go code (notice we have to handle each Left case now):
```haskell
mkValidUser :: String -> Int -> Possible User
mkValidUser name age =
  case (validateName name, validateAge age) of
    (Left nameErr, _) -> Left nameErr
    (_, Left ageErr) -> Left ageErr
    (Right validatedName, Right validatedAge) ->
      Right $ User validatedName validatedAge
```
Swift and Rust have a far more elegant solution. Swift has a pseudo throw / try-catch, while Rust has a Result<> and if you want to throw it up the chain you can use a ? notation instead of cluttering the code with error checking.
The exception handling question mark, spelled ? and abbreviated and pronounced eh?, is a half-arsed copy of monadic error handling. Rust devs really wanted the syntax without introducing HKTs, and admittedly you can’t do foo()?.bar()?.baz()? in Haskell so it’s only theoretical purity which is half-arsed, not ergonomics.
It’s not a half-arsed copy, it’s borrowing a limited subset of HKT for a language with very different goals. Haskell can afford a lot of luxuries that Rust can’t.
It’s a specialised syntax transformation that has nothing to do with HKTs, or the type system in general. Also, HKTs aren’t off the table; it’s just that their theory isn’t exactly trivial in the face of the rest of Rust’s type system. But we already have GATs.
It actually wouldn’t be hard to write a macro implementing do-notation that desugars to and_then calls on a particular type to get some kind of generic code (though of course monomorphised), but of course that would be circumventing the type system.
Anyhow, my point stands that how Rust currently does it is imitating all that Haskell goodness on a practical everyday coding level, but without having (yet) to solve the hard problem of how to do it without special-cased syntax sugar. With proper monads we e.g. wouldn’t need separate syntax for async and ?.
Note: Lemmy code blocks don’t play nice with some symbols, specifically < and & in the following code examples
This isn’t a language level issue really though, Haskell can be equally ergonomic.
The weird thing about ?. is that it’s actually overloaded; it can mean:

- call a function on A? that returns B?
- call a function on A? that returns B

You’d end up with B? in either case.
Say you have these functions
```haskell
toInt :: String -> Maybe Int

double :: Int -> Int

isValid :: Int -> Maybe Int
```
and you want to construct the following using these 3 functions
```haskell
fn :: Maybe String -> Maybe Int
```
```haskell
class Chainable f a b fb where
  (?.) :: f a -> (a -> fb) -> f b

instance Functor f => Chainable f a b b where
  (?.) = (<&>)

instance Monad m => Chainable m a b (m b) where
  (?.) = (>>=)
```
and then get roughly the same syntax as rust without introducing a new language feature
though this is more general than just Maybes (it works with any functor/monad), and maybe you wouldn’t want it to be. In that case you’d do this
```haskell
class Chainable a b fb where
  (?.) :: Maybe a -> (a -> fb) -> Maybe b

instance Chainable a b b where
  (?.) = (<&>)

instance Chainable a b (Maybe b) where
  (?.) = (>>=)
```
restricting it to only maybes could also theoretically help type inference.
I was thinking along the lines of “you can’t easily get at the wrapped type”. To get at b instead of Maybe b you need to either use do-notation or lambdas (which do-notation is supposed to eliminate because they’re awkward in a monadic context) whereas Rust will gladly hand you that b in the middle of an expression, and doesn’t force you to name the point.
Or to give a concrete example, if foo()? {…} is rather awkward in Haskell, you end up writing things like
```haskell
foo x y = bar >>= baz x y
  where
    baz x y True = x
    baz x y False = y
```
Though of course baz is completely generic and can be factored out. I think I called it “cap” in my Haskell days, for “consequent-alternative-predicate”.
Flattening Functors and Monads syntax-wise is neat but it’s not getting you all the way. But it’s the Haskell way: Instead of macros, use tons upon tons of trivial functions :)
Holy shit!! You did it. I would never expect a banking password to max special characters. I have been scratching my head with Bitwarden and this shitty app for an hour.
I love how the acceptance/rejection status is messed up.
If it’s only one special character, then that should be unchecked, not checked, and the combination of “letters, numbers and special characters” should be checkmarked.
When I was younger, I read R.A. Salvatore’s classic fantasy novel, The Crystal Shard. There is a scene in it where the young protagonist, Wulfgar, challenges a barbarian chieftain to a duel for control of the clan so that he can lead his people into a war that will save the world. The fight culminates with Wulfgar throwing away his weapon, grabbing the chief’s head with bare hands, and begging the chief to surrender so that he does not need to crush a skull like an egg and become a murderer.
Well this is me. Begging you. To stop lying. I don’t want to crush your skull, I really don’t.
It’s been getting “more and more use” since 2001. To start with, the ISPs said that they were not going to do any work to implement it until endpoints supported it. Then Vista came with support by default. Next they wanted the backbones to support it. All tier 1 networks are now dual stack. Then they said they were not going to do anything until websites supported it widely. Now all CDNs support it. Then they said, it’s OK, we’ll just do mass NAT on everyone, so they won’t do any work on it.
Exactly. I have been begging multiple ISPs for direct IPv6 allocations for 10+ years now. It’s always “we are internally testing - not available for distribution yet”. The most recent request from me was less than 3 months ago, when I needed an IPv4 /29 for a remote site. Figured I would see if I could also get a nice sized IPv6 allocation as well. Nope. Just gotta keep paying a premium for that dwindling IPv4 address space.
Hurricane Electric is to be commended for their public IPv6 tunnels, but without direct allocations from your immediate upstream, it’s just play.
A lot of ISPs do have some kind of IPv6. Many don’t give you a prefix with the length they should. Many don’t give you a static prefix. They’re doing everything they can to continue to fuck this up.
Mostly to their own detriment. Maintaining equipment to do carrier grade NAT makes their network slower, less reliable, and more expensive.
Last week I was peer pressured into trying out Helldivers 2 (yes, this is relevant, trust me), so I downloaded it, installed it, and fired it up with no issues. Set up my preferred control schema with no issues. Played the tutorial with no issues.
Then came time for joining my friends in multiplayer. Issues! No matter what I did, I couldn’t seem to join them. Nor could they join me.
I verified the installed files, I tried to connect via my phone to rule out ISP issues, and I tried all of the different versions of proton, but the result remained the same. I simply couldn’t join my friends.
I don’t remember what caused me to go down the right path of troubleshooting, but I’ve always disabled IPv6 on my Linux installs. So I re-enabled it. The problem remained. Then I realized that I had it disabled in the kernel via grub command line flags, so I changed that and gave my PC a reboot. Success!
So, despite networking being a large (maybe even the largest) part of my vocation for the past two decades, last week was the first time ever I actually NEEDED IPv6.
I think Win+V syncs to Microsoft servers or something. I remember when I was running Chris Titus Tech’s debloat script it removed that functionality.
I googled it, there is an option to sync it to your Microsoft account, but I can’t say whether that’s on by default when you turn on clipboard history because I skipped adding a Microsoft account. But if it is, you can turn it off in Settings -> System -> Clipboard.
I took a look through my PowerToys settings, but couldn’t find anything there that had to do with the Win+V clipboard history. Google hasn’t been any help either. What is it that I’m overlooking? How does PowerToys improve the clipboard history feature?
I’m not on my Windows PC at the moment, but it could be that its functionality might actually be native to Win 11? I don’t really use it myself; I just remember seeing it when originally getting PowerToys and thinking that was cool.
Application-specific buffers are the first thing I disable in Emacs. The OS one isn’t just integrated with every other normal piece of software, it’s also more powerful and easier to use.
I don’t understand why people think that it’s acceptable.
As developers, we’ve had it drummed into us from day one that variable names are important and shouldn’t be one or two letters.
Yet developers deliberately alias an easy-to-read table name such as “customer” to “c” because that’s the first letter of the table. I’m sure that it’s actually more work to do that, given that autocompletion means you don’t even need to type out “customer”.
Especially when you also have company and county tables. It forces people to look up what the c is aliased to before beginning to comprehend what you’re doing.
Ah, must’ve been a Fortran developer. I swear they have this ability to make the shortest yet least memorable variable names. E.g. was the variable called APFLWS or APFLWD? Impossible to remember without going back and forth to recheck the definition. Autocomplete won’t help you because both variables exist.
Sat on jury duty. We literally said not guilty because the officer was supposed to follow a process for line-ups and they didn’t even do the bare minimum. They were like, “we got our guy”.
I once had a friend who was robbed of all kinds of stuff including a PS3, and the thief was signed into his Netflix, changing account profiles, the very same day. I told him he could just get a tracking number by calling PlayStation, and that the police officer on the case could use it to track them. Thing is, the officer ghosted him for like 8 months despite having everything they needed to immediately find the exact location of the perpetrator actively using the stolen property.
They don’t care really. As has been my experience anyway.
I once had my car window smashed, a mix of gear taken…some was expensive, some was personal to me. I felt violated. Called the police, explained, gave S/Ns to what I could, told them exactly who did it. He didn’t give a shit. Actually made me feel like I was wasting his time. I think Seinfeld covered this…
“We’ll let you know if we find anything” “Do you ever find anything?” “No”
But oh, my reg is out of date and the plate scanner picked it up? Boom, they really kick it into gear. So that’s $130… I could just go take care of the tags immediately with a friendly warning, but now I don’t even want to. And in the end I end up pretty fucked.
If only they put that effort into other things I just might have gotten my linear power amps back. Props to anyone who knows that product.
'Cause COBOL might actually get you a job? Esoteric languages can’t even do that, so you have to hate yourself worse than COBOL if you’re going to learn brainfuck.
“Worse than COBOL.” This is a terrifying concept. 😱
I recently gave a science slam talk about this topic! It’s a mix of the first computer scientists being mathematicians, who love their abbreviations, and limited screen size, memory and file size. It’s a trend in computing that was well justified in the past, but has been making it harder for people to work together. And the need to use abbreviations has completely gone with the age of autocompletion and language servers.
At some point, they even collectively decided that not having to write a multiplication dot is more important than being able to use more than a single letter for your variables. Just what the fuck?
Bruh, how large should our notebook pages be? Also, how should we speak about the equation? What if the terms should be represented in a matrix? What if mathematical functions e^x, sin, ln etc. are present? Would you write “sine of e^(velocity of the particle B)”? Notations are necessary for readability.
Welcome to Greece! No, not our modern Greece, the old timey write philosophical questions into the dirt with sticks and argue with your best homies about it kind of Greece!
Want to compute something? Hope you got all your steps in linear order so you don’t have to remember too much in between other steps!
/s (but not really; I totally am on your side, original formulations of math problems are a pain…)
I don’t know what to tell you. They obliterate readability for me.
I also genuinely believe these shorthands hinder access to research for the 99.9% of humanity who are not experts in the given field. Obviously, you do need to understand the context to use a formula correctly, but that also becomes harder when everything is written with hieroglyphs.
In university, I had to assess this paper. It took me 3 weeks to decipher that alien language, and it doesn’t even say anything particularly riveting.
To address your points:

- I’m hoping that at least published math can be typed out with full names.
- I’m not opposed to local aliases. E.g. if the point is that some values in the matrix are negative and others not, then absolutely write “with air_resistance as ‘a’, the catapultation matrix is { a, -a, -a, … }”.
- I don’t actually want to introduce spaces into variable names, that’s just an example I randomly found online. I was rather thinking e.g. sine(euler^velocity_b).
- Bonus point: You can reasonably type it on a computer, because you don’t need Greek letters and subscripts anymore.
Btw, I am all for local aliases. I see them most of the time.
I.e., [equation], where a = area of the surface, v = velocity, …
But without short codes it would be a pain to write and remember. Some of the shorthands, like the del operator, really simplify the original expression and showcase the physical meaning better, but look alien to people who don’t know them. Still, we can’t stop using them, since that would make everything else difficult for people in that area.
You only have to define it once in a document, book, whatever. Also, it’s not like you’d ever need to do this for handwritten notes, only for a wider audience, or if you intend for something to be read by not just you.
No one is suggesting you don’t use symbols, just that you define them, and not assume the reader uses the same symbols as you. Which, so often, they don’t. (How many different ones have you come across just in high school and uni? I came across multiple.)
I’m no physicist, but surely there is a huge range of symbols for the same thing, especially the more niche you get.
I’m not a mathematician, but I agree with you because this is precisely how one would abbreviate repeating terms in a paper (e.g. The Museum of Modern Art (MoMA) and the Metropolitan Museum of Art (The Met) are both located in New York, New York (colloquially, New York City, or NYC). While MoMA has an art collection of about 200,000 pieces, The Met houses 1.5 million works of art.)
What OP is talking about is readability, so in a situation where you’re taking your own notes and have your own set of defined symbols, full words aren’t necessary.
I personally lost all interest in math because there are way too many opinionated or non-standard symbol definitions
> I personally lost all interest in math because there are way too many opinionated or non-standard symbol definitions
That seems like a strange reason to quit math since most symbols are pretty well agreed upon, and maths has little to do with the actual notation either way.
I should’ve said “anything math-heavy,” but even then, it seems like switching fields or applications of math requires understanding a new definition of the same symbols, and a lot of that could be avoided with words.
I mean, if you get into any real depth with math, you are going to reach a point where you can’t conveniently use words to describe the symbols being manipulated.
As an example from the math I am doing literally right now, I very much prefer using C⁺_R compared to “semicircular arc in the upper half of the complex plane with radius R”, or M⁺(f(z)), which means “maximum of the magnitude of the function f(z) over C⁺_R”, which if I were to write out in full would just become a clusterfuck.
Also, you still wouldn’t be able to get rid of symbols, because some symbols are placeholders and straight up don’t have any meaning in natural language. This occurs often in physics as well, not just pure maths. For example, the Laplace transform of any function is written as a function of “s”, but “s” doesn’t have a clear meaning (at least as far as I know).
Thing is, you usually define all your variables. At least we do in engineering (of physical variety, rather than software).
Mostly because we can’t expect everyone reading the calculation to know, and that not everyone uses the same symbols.
Not explaining each variable is bad practice, other than for very simple things. (I do expect everyone and their dog reading a process eng calc to know PV=nRT, at a minimum).
Just like (in my opinion) not defining industry specific abbreviations is also bad practice.
But nah, I think assumed knowledge of PV=nRT is fair in context, since if you don’t know what it is, you’ll only be reading the conclusion, not getting into the weeds of a calculation document.
I’m not even going to be explaining if I have a column that says volumetric flow rate, with V=m/ρ. If I give mass flow rate and density (with units, of course), and use these extremely common symbols, and someone doesn’t understand, then they have no real business getting to this level of detail anyway.
I do agree that in most cases not defining your variables is bad practice, but there is some nuance, depending on the intended audience and how common a formula is, and the format of whatever it is you’re writing.
So, in the end you just do assume everyone to know the “common sense” one-letter notation for everything. Well, not everything, but the essential ten thousands of entities for sure /s
This sounds like No true Scotsman fallacy to me
I find it a bit contradicting to the very point you made about defining variables. If anything, one might be some home-grown genius that has real business getting into details but only ever used Chinese characters as variables
Understand your frustration with how I’ve communicated my position, sorry about that:
My justification for the examples I’ve given is that there still needs to be other context; it depends on the complexity of the equation and the intended audience of that equation.
An example of me not explaining a very simple equation would be perhaps a table of various cases:
| — | mass flow (kg/hr) | density (kg/m³) | Volumetric flow (m³/hr), V = m/ρ |
| --- | --- | --- | --- |
| Case 1 | blah blah | blah blah | blah |
| Etc. | … | … | … |
Realising now that markdown tables don’t seem to work 😅, hopefully this is still clear.
It may be a touch better to put variable symbols in the other columns, but:

- You still have the final answer (the purpose of my report, I’m not writing a thesis here)
- It should be plainly obvious by the units, and the fact those are the previous two variables, to someone who has the ability to understand (and is the intended audience of that little equation)
As a recent example for this, in a data sheet I recently prepared, I literally just put a * in the references column and said “*calculated from other data sheet values” for the volumetric flow rate, because the intended audience would know how to do that, and the purpose was for me to communicate how that value was determined.
Me putting in the V = m/ρ in the hypothetical example I gave above is just a little mind jog for the reader.
Where more complicated equations are used, of course these are properly referenced, usually even with the standard or book it’s come from.
I’ll redefine my position to: Clearly define all variables, unless it’s abundantly obvious to your intended audience from context.
My intended audience of the conclusions or final values are the layman. My intended audience of everything else is someone with a very basic chemical engineering understanding.
Your last point is a strawman:
> I find it a bit contradicting to the very point you made about defining variables. If anything, one might be some home-grown genius that has real business getting into details but only ever used Chinese characters as variables
Because I’m writing in English, for an English speaking audience, and there is no such thing as a home-grown genius getting into my area because it’s a legal requirement that they have an honours degree. Even still, the two assumed knowledge equations I mentioned, which I would only not reference with sufficient context, would STILL be recognisable with totally random symbols.
| mass flow (kg/hr) | density (kg/m³) | Volumetric flow (m³/hr), 容 = 质/密 |
| --- | --- | --- |

Yup, a bit odd in an English context, but with the units information (always mandatory, of course) completely understandable.
First of all, thank you for a thoughtful response, I was too snarky, sorry about that.
TL;DR: guess I’m just upset that there is no objective way of measuring how much knowledge is required, and trying to read everything from the sources list would take forever.
Yeah, the last point is sort of a strawman, although I meant it not to highlight that explanations should be given in terms that the reader is used to, but rather that the reader may have quite arbitrary amount of prior knowledge.
I agree that there probably should be some shared context; what bugs me is that people tend to vary a lot in what amount of context is considered to be required. And more than once have I met papers that require deciphering even if you have some context and kind of come from the field they are written for. I used to think that this was out of malice, to make reproducing their work harder for others, but maybe it was just an assumption of a much larger shared context.
Markdown tables work in some clients, afaik, but I don’t remember which, or even if I actually saw it or just imagined it.
No worries friend, no hard feelings and appreciate the engagement!
Yeah, agree it is a bit wishy-washy in terms of gauging how much explanation to include ¯\_(ツ)_/¯
I suppose (in my opinion) the mindset should be: include as much explanation as possible, without it being cumbersome.
I personally err on the side of over-explanation and have had some senior engineers give me feedback that it’s too much. Still learning for myself how much is too much.
Totally agree though, that there are many cases where people leave things out as assumed, when it’s not really reasonable to do so.
A side-thought on specificity: one of my biggest pet peeves is when people list pressure with the units of kPa, when they really mean kPag. In industry, you are rarely talking in absolute pressure (other than for pressure differences) and people then get lazy/don’t know/assume it’s fine to do something like: set point 100 kPa (when they mean 100 kPag). It isn’t fine though, because at lower pressures atmosphere counts for a pretty large percentage of the absolute value.
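A quick worked example of that last point (assuming standard atmospheric pressure, 101.3 kPa):

$$
p_{\text{abs}} = p_{\text{gauge}} + p_{\text{atm}} = 100\ \text{kPag} + 101.3\ \text{kPa} \approx 201.3\ \text{kPa}
$$

So reading a “100 kPa” set point as absolute instead of gauge puts you off by roughly a factor of two.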
I mean, it was rather physics that was worse in this regard.
Mathematicians do define their variable quite rigorously. Everything is so abstract, at some point you do just need to write down “this thing is a number”. Problem with maths folks is rather that they get more creative with their other symbols. So, “this thing is a number” is actually written as “∃x, x ∈ ℝ”.
But yeah, in the school/university physics I experienced, it was assumed that you knew that U is voltage, ρ (rho) is density, ω (omega) is angular velocity etc…
At one point, I had to memorize six pages of formulas and it felt like every letter (Latin, Greek, uppercase, lowercase, some Fraktur for good measure) was a shorthand for something.
I should specify when I say physical engineering I just mean chemical, mechanical, electrical, etc. (not software), rather than physics in theory/academia.
I guess engineers are applied physics (in a particular area each), and we need to distribute our deliverables to people who aren’t necessarily experts in every discipline.
It just also makes sense to always define variables.
It’s so funny because I’ve never seen voltage defined as U, and not V haha, proving how if you’re going to have an equation, you’d better define everything, there’s so many reused letters!
U is definitely standard for a difference in electric potential in Europe. Thought to come from “Unterschied”, difference. V refers to electric potential, which as wikipedia says so wisely, should not be confused with a difference in electric potential. Which North American notation does. At least it’s not PEMDAS…
In another thread I admit I didn’t explain my position here well enough. I would only not explain this equation given sufficient context (e.g. I’ve shown all those variables in a table, and my intended audience is people familiar with basic chemistry, which I’d expect would be everyone reading the report for this particular example, since this is high school chemistry, and the topic of all reports I work on is chemical engineering.)
People can read the conclusions if they’re not familiar with chemistry, and for the detail, they’re not my intended audience anyway.
Generally I still hold the position that you should define variables as much as possible, unless it’s overly cumbersome, given your intended audience would clearly understand anyway.
In context this simple equation is obvious even if you change the symbols, as long as there is sufficient context to draw from.
Using full names like that might be fine for explaining a physical rule, or stating the final result of some calculation, but it certainly would be cumbersome and difficult for actually carrying out the calculations. In many cases we already fill pages with algebra showing how things can be related and rearranged to arrive at new results. That kind of work would be intractable with full word names for the variables, partially because you’d be constantly spilling off the end of the page trying to write the steps, but also because having all that stuff would actually obfuscate what you are trying to do, which is algebra. And during that process, the meanings and values of the pronumerals are not as important as how they interact with each other. So the names are just a distraction.
For setting up an equation, and for stating the final result, the meanings of the variables are very important; but during the process of manipulating the equations to get the result you want, the meanings of the letters are often ignored. You only need to know that it is something that can be multiplied, or inverted, or subtracted, or whatever. E.g. suppose I want to rearrange to get the velocity. I don’t care that I’m dividing both sides by the air density times the drag coefficient and the area… I’m just dividing by ρCA, which is an algebraic blob whose interpretation can be saved for some other time.
This is absolutely true, but it still seems to me that we’re throwing the baby out with the bath water when we just stick to extremely terse symbols for everything regardless of context.
Reading articles would be so much easier if they used even slightly longer names – thankfully more and more computer science articles do tend to use more human readable naming nowadays, at least.
Sure, longer names make manipulation a bit more annoying if you’re doing it by hand, but if you do need to manipulate something you can then abbreviate the terms (and I’m 60% sure I’ve seen some papers that had both a longer form and a shorter form for terms, so one for explaining shit and one for the fiddly formal stuff).
Of course using terse terms is totally fine when it’s clear from the context what eg. ∆x means.
Yep, that’s what it usually boils down to. However, I think a slight approach shift for basic materials could be useful, where introductory books / papers / … write out formulas. That makes it easier to understand the basic concepts before moving onto the more complex stuff. It should be easy to create such works, as they are usually created digitally, and autocomplete is available. Students can and will abbreviate those written outs words by themselves (after all, writing is annoying), but IMO reading comprehension is the key part that can be improved.
Also, when doing long formulas that you want to eliminate members of, writing stuff out can be a nightmare.
It’s been really holding me back in learning coding. I felt pretty comfortable at first learning JavaScript, but as I got further the code was increasingly hard to look back at and understand, to the point I had to spend a lot of time understanding my own code.
Does it truly matter after the code has been compiled if it has more full words or not?
It matters as soon as a requirement change comes in and you have to change something. Writing a dirty ass incomprehensible, but working piece of code is ok, as long as no one touches it again.
But as soon as code has to be reworked, worked on together by multiple people, or you just want to understand what you did 2 weeks earlier, code readability becomes important.
I like Uncle Bob’s Clean Code (with a grain of salt) for a general idea of what such an approach to making code readable could look like. However, it is controversial and, if overdone, can achieve the opposite. I like it as a starting point though.
Did you know that in the first version of php, each function name would be hashed to lookup the code to run it? And the hashing algorithm was: the first letter. So all the functions started with a different letter.
It’s not. PHP used to use the function length as hash buckets, so by having evenly distributed lengths the execution time was faster. No idea where GP came up with that.
GP specifically talked about the first version of PHP, sounds like it was just a dummy implementation as they were working on PHP, that then later got replaced with a proper implementation :)
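To illustrate the scheme GP describes, here is a toy sketch in Go (not actual PHP internals, and whether the real key was the first letter or the name length is disputed above):

```go
package main

import "fmt"

// A "hash table" whose key is just the function name's first letter.
// With this scheme, two builtins sharing an initial would collide,
// which is why every function would need a distinct first letter.
var builtins = map[byte]func(){
	'e': func() { fmt.Println("echo: hello") },
	's': func() { fmt.Println("strlen: 5") },
}

func call(name string) {
	if fn, ok := builtins[name[0]]; ok {
		fn()
	} else {
		fmt.Println("unknown function:", name)
	}
}

func main() {
	call("echo")   // looked up by 'e' alone
	call("strlen") // looked up by 's' alone
}
```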
And with a bit of namespacing and/or object orientation and usage of dots, it becomes perfectly readable.
There are also camel case and underscores in other languages…
BTW: How on earth should a newcomer know that the letter “n” in that word stands for number without having to google it? The newcomer could even assume that it’s a letter of the word string… And even if you know that it stands for number, it’s still hard for me to understand what it means in this context… I actually had to google it… But that’s probably some C++ convention I don’t know about, because I don’t program in C++…
C is a little older than namespacing and object orientation. C++ wasn’t even a glimmer in Bjarne’s eye when these conventions were laid down.
And yes, having to google it is part of the design. Originally C programmers would have had to read actual manuals about this stuff. Once you learn the names you don’t really forget so it works well enough even now for ubiquitous standard library functions.
And yet, C was an ergonomic revelation to programmers of the time. Now it’s the arcane grandpa that most youngsters don’t put up with.
> How on earth should a newcomer know that the letter “n” in that word stands for number without having to google it?
By looking at the difference between strcpy and strncpy. Preferably, though, you should simply learn C before writing C.
The gist of it is that strcpy takes a null-terminated string and copies it somewhere, while strncpy takes a null-terminated string and copies it somewhere but will not write more than n bytes. strncpy literally has exactly one more parameter than strcpy, that being n, hence the name. If n is smaller than the string length (as in: distance to first null byte) then you’re bound to have garbage in your destination, and to check for that you have to look at the copied bytes and see whether the last one is actually null. Yay C error handling.
In retrospect null-terminated strings were a mistake, but so were many other things, at some point you just have to accept that there’s hysterical raisins everywhere.
> If n is smaller than the string length (as in: distance to first null byte) then you’re bound to have garbage in your return destination
Wha? N is just maximum length of string to copy. Data after dst+n is unchanged.
> In retrospect null-terminated strings were a mistake, but so were many other things, at some point you just have to accept that there’s hysterical raisins everywhere.
Sure but that means the part before that is garbage because you have a null terminated string without terminator.
Or at least that’s how I see it. If your intention isn’t to start and end with a null-terminated string, you should be using memcpy. Let us not talk about situations where CHAR_BIT != 8; that’s not POSIX anyway.
Even better, just avoid doing string manipulation in C.
Correct me if I’m wrong, but it’s not enough to delete the files in the commit, unless you’re ok with Git tracking the large amount of data that was previously committed. Your git clones will be long, my friend
I don’t understand how we’re all using git and it’s not just some backend utility that we all use a sane wrapper for instead.
Every time you want to do anything with git it’s a weird series of arcane nonsense commands, and then someone cuts in saying “oh yeah, but that will destroy x, y and z, you have to use this other arcane nonsense command that also sounds nothing like what you’re trying to do”, and you sit there having no idea why either of them even kind of accomplishes what you want.
There are tons of wrappers for git, but they all kinda suck. They either don’t let you do something the cli does, so you have to resort to the arcane magicks every now and then anyways. Or they just obfuscate things to the point where you have no idea what it’s doing, making it impossible to know how to fix things if (when) it fucks things up.
It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.
You think deleting an empty directory should be easy? CVS laughs at your effort, puny developer.
> It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.
Yes it is a complex tool that can solve complex problems, but me as a typical developer, I am not doing anything complex with it, and the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at or with the commands or naming I would expect.
I mean NPM is also a complex tool, but the CLI surface area of NPM is “npm install”.
So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.
> the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at
What an interesting way to say “git has steep learning curve”. Which is true, git takes time to learn and even more to master. You can get there solely by reading the man pages and online docs though, which isn’t something a lot of other complex tools can say (looking at you kubernetes).
Also I don’t know if a package manager really compares in complexity to git, which is not just a version control tool, it’s also a thin interface for manipulating a directed acyclic graph.
> So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.
You mean: git add -A, git commit -m “xxx”, git push or git push -u origin --set-upstream, etc. etc. etc. I get that there’s probably a reason for its complexity, but it doesn’t change the fact that it doesn’t just have a steep learning curve, it’s flat out remarkably user unfriendly sometimes.
git add with no arguments outputs a message telling you to specify a path.
Yes, but a more sensible default would be -A since that is how most developers use it most of the time.
git commit with no arguments drops you into a text editor with instructions on how to write a commit message.
Git commit with no arguments drops you into vim, less a text editor and more a cruel joke of figuring out how to exit it.
Again, I recognize that git has a steep learning curve, but you chose just about the worst possible examples to try and prove that point lol.
Git has a steep learning curve not because it’s necessary but because it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
It is great software and obviously better than most other version control systems, but it still has asinine defaults and its CLI surface is overcomplicated. When I worked at a MAANG company and had to learn their proprietary version control system, my first thought was “this is dumb, why wouldn’t you just use git like everyone else”; then I went back to git and realized how much easier and more sensible their system was.
No it wouldn’t. You’d have git beginners committing IDE configs and secrets left and right if -A was the default behavior.
> vim, less a text editor and more a cruel joke of figuring out how to exit it.
Esc, :, q. Sure it’s a funny internet meme to say vim is impossible to quit out of, but any self-respecting software developer should know how, and if you don’t, you have google. If you think this is hard, no wonder you struggle with git.
> it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
Just because you don’t like the defaults doesn’t mean they don’t make sense. It just means you don’t understand the (very good) reasons those defaults were chosen.
> Git has a steep learning curve not because it’s necessary but because it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
Git’s authors were the first users. The team that started the linux kernel project created it and used it because no other version control tool in existence at that time suited their needs. The subtle implication that you, as a user of git, know better than the authors, who were the original users, is laughable.
> No it wouldn’t. You’d have git beginners committing IDE configs and secrets left and right if -A was the default behavior.
No, you wouldn’t, because no one is a git beginner; they’re a software developer beginner who needs to use git. In that scenario, you are almost always using repos that are created by someone else or by some framework with pre-created gitignores.
You know what else it could do? Say “hey, you’ve said add with no files selected, press enter to add all changed files”.
> Esc, :, q. Sure it’s a funny internet meme to say vim is impossible to quit out of, but any self-respecting software developer should know how, and if you don’t, you have google. If you think this is hard, no wonder you struggle with git.
Dumping people into an archaic cli program that doesn’t follow the universal conventions for exiting a cli program, all for the the goal of entering 150 characters of text that can be captured through the CLI with one prompt, is bad CLI design.
There is no reason to ever dump the user to an external editor unless they specifically request it, yet git does, knowing full well that that means VIM in many cases.
And no, a self respecting software developer wouldn’t tolerate standards breaking, user unfriendly software and would change their default away from VIM.
> Git’s authors were the first users. The team that started the Linux kernel project created it and used it because no other version control tool in existence at that time suited their needs. The subtle implication that you, as a user of git, know better than the authors, who were the original users, is laughable.
Lmao, the idea that we should hero worship every decision Linus Torvalds ever made is the only thing laughable here.
I think in this case, “depth” was an inferior solution to achieve fast cloning that they could quickly implement. Sparse checkout (“filter”) is the good solution that only came out recently-ish.
Lol if an employer can’t have an intelligent discussion about user friendly interface design I’m happy to not work for them.
Every interview I’ve ever been in there’s been some moment where I say ‘yeah I don’t remember that specific command, but conceptually you need to do this and that, if you want I can look up the command’ and they always say something along the lines of ‘oh no, yeah, that makes conceptual sense don’t worry about it, this isn’t a memory test’.
These things are not related. Git uses the system default editor, which is exactly what a cli program dropping you into an editor should use. If that’s Vim and you don’t like that, you need to configure your system or take it up with your distro maintainers.
No, it should prompt you to enter your one sentence description in the CLI itself, and kick you out to an editor only if you provide a flag saying you like writing paragraph long commit descriptions.
Git is complicated, but then again, it’s a tool with a lot of options. Could it be nicer and less abstract in its use? Sure!
However, if you compare what git does, and how it does it, to its competitors, then git is quite amazing. 5-10 years ago it was all SVN, the dark times. A simpler tool and an actual headache to use.
What are you smoking? Shallow clones don’t modify commit hashes.
The only thing that you lose is history, but that usually isn’t a big deal.
--filter=blob:none probably also won’t help too much here since the problem with node_modules is more about millions of individual files rather than large files (although both can be annoying).
git clone --depth=1 <url> creates a shallow clone. These clones truncate the commit history to reduce the clone size. This creates some unexpected behavior issues, limiting which Git commands are possible. These clones also put undue stress on later fetches, so they are strongly discouraged for developer use. They are helpful for some build environments where the repository will be deleted after a single build.
Maybe the hashes aren’t different, but the important part is that comparisons beyond the fetched depth don’t work: git can’t know if a shallowly cloned repo has a common ancestor with some given commit outside the range, e.g. a tag.
Blobless clones don’t have that limitation. Git will download a hash+path for each file, but it won’t download the contents, so it still takes much less space and time.
If you want to skip all file data without any limitations, you can do git clone --filter=tree:0 which doesn’t even download the metadata
Yes, if you ask about a tag on a commit that you don’t have git won’t know about it. You would need to download that history. You also can’t in general say “commit A doesn’t contain commit B” as you don’t know all of the parents.
You are completely right that --depth=1 will omit some data. That is sort of the point, but it does have some downsides. Filters also omit some data, but often the data will be fetched on demand, which can be useful. (But will also cause other issues, like blame taking ridiculous amounts of time.)
Neither option is wrong, they just have different tradeoffs.
See, this is the kind of shit that bothers me with Git, and we just sort of accept it because it’s THE STANDARD. And then we tack these shitty LFS solutions on the side because it doesn’t really work.
What was perforce’s solution to this? If you delete a file in a new revision, it still kept the old data around, right? Otherwise there’d be no way to rollback.
Yes but Perforce is a (broadly) centralised system, so you don’t end up with the whole history on your local computer. Yes, that then has some challenges (local branches etc, which Perforce mitigates with Streams) and local development (which is mitigated in other ways).
For how most teams work, I’d choose Perforce any day. Git is specialised towards very large, often part time, hyper-distributed development (AKA Linux development), but the reality is that most teams do work with a main branch in a central location.
It’s mind bending that there are actual humans on the planet, paid a shit tonne more than software developers, who not only believe the parody highlighted by @SwiftOnSecurity, but treat and share it as gospel, acting on it with nutjob metrics to “increase productivity” whilst salivating over the hyperbole around “AI” that is sweeping the globe, dreaming of a better world.
One without those pesky developers with their brains, thoughts and opinions.
But, what do I know, I’ve been in this profession for only 40 years…
You’re probably not the biggest asshole in the room. In my experience, the person making decisions (and the most money) is never the most qualified, most competent, most efficient, or hardest working individual. They are just the biggest asshole in the room. They’re willing to be loud and belligerently wrong, they’re willing to take credit for the accomplishments of others, they’re willing to shift blame onto someone else, they’re willing to demand everyone else work harder than they do, and they’re willing to demand far more than their fair share of the profit.
And they will be mollified by the rest because nobody is a bigger asshole. Most people just want to do their jobs, and don’t want to rock the boat. Competent people see opportunity to ride in the wake of the biggest asshole in the room.
If you ever watch Shark Tank, you’ll see they are masters of the craft.
The problem is that most of us have swallowed the ‘competence uber alles’ ideal that school fed us through exams and scoring, when the game really is mostly politics (as in interpersonal relationships). So we are understandably disappointed when the incompetent get promoted through brown nosing or luck, when we should be reevaluating the rules of the game.
That’s always fun in sales. The vendor that brazenly promises two-and-a-half mirages for half the price will win the bid, and the salespeople will move on to a different employer when the real budget for the project becomes clear.