My favourite take on DI is a set of articles from about 12 years ago, written by the guy who wrote the first DI framework for Unity, the one that the currently popular frameworks, such as Zenject, are based on.
The first two articles are pretty basic, explaining his reasoning and why it’s such a cool concept and way forward.
They're followed by more articles about why he now thinks it was a mistake, and why he no longer recommends or uses DI frameworks in Unity, in favor of manual dependency injection. And I kind of agree: his main reasoning is that it's really easy for unnecessary dependencies to sneak into your codebase, since it's really easy to just write another [Inject] without a second thought and be done with it.
However, with manual dependency injection through constructor parameters, you will take a step back when you're adding the 11th parameter to a constructor, and will take a moment to think whether there's really no better way. Of course, this shouldn't be a relevant issue for experienced programmers, but it's just not as obvious that you're doing something potentially wrong when you add another [Inject] as when you add another constructor parameter.
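To make the contrast concrete, here's a minimal Java sketch (the thread is about Unity and C#, but JSR-330-style @Inject annotations create the same dynamic; all class names are invented, and the Inject annotation is declared locally as a stand-in for a framework's marker):

```java
// Stand-in for a framework's injection marker ([Inject] in Zenject,
// @Inject in JSR-330); declared locally so this sketch compiles alone.
@interface Inject {}

interface Pathfinding {}
interface AudioService {}
interface SaveSystem {}

// Framework-style field injection: each new dependency is one quiet line.
// Nothing in the class's surface area signals it now needs eleven services.
class EnemySpawner {
    @Inject Pathfinding pathfinding;
    @Inject AudioService audio;
    @Inject SaveSystem saves; // ...and another one, without a second thought
}

// Manual constructor injection: every dependency shows up in the signature
// and at every call site, so parameter number 11 is hard to add unnoticed.
class EnemySpawnerManual {
    private final Pathfinding pathfinding;
    private final AudioService audio;

    EnemySpawnerManual(Pathfinding pathfinding, AudioService audio) {
        this.pathfinding = pathfinding;
        this.audio = audio;
    }
}
```

With field injection the container wires things up silently; with constructor injection, every new dependency widens the constructor and breaks every call site, which is exactly the friction the author came to see as a feature rather than a bug.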
Exactly. Dependency injection is good; if you need a framework to do it, you’re probably doing it wrong; if your framework is too magical, you’re probably not even doing it at all anymore.
At work we have a lot of old monolithic OOP PHP code. Dependency injection has been the new way to do things since before I started and it’s basically never used anywhere.
I assume most people just find it easier to create a new class instance where it’s needed.
I've never really seen a case where I think, "dependency injection would be amazing here." I assume there is one, otherwise it wouldn't exist.
When we implemented it, it significantly improved our ability to write unit tests. It also allowed us to make more modular code, since the default became every class having an interface. So I'm all for it.
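That's the seam it creates; a minimal Java sketch with invented names:

```java
import java.util.ArrayList;
import java.util.List;

// Signup depends on the Mailer interface, not on SMTP specifics.
interface Mailer {
    void send(String to, String body);
}

class SmtpMailer implements Mailer {
    public void send(String to, String body) {
        // real SMTP call would go here
    }
}

class Signup {
    private final Mailer mailer;

    Signup(Mailer mailer) {
        this.mailer = mailer;
    }

    void register(String email) {
        // ...validation, persistence...
        mailer.send(email, "Welcome!");
    }
}

// In unit tests, inject a recording stub instead of the real client.
class RecordingMailer implements Mailer {
    final List<String> sent = new ArrayList<>();
    public void send(String to, String body) {
        sent.add(to);
    }
}
```

A test can then do `new Signup(new RecordingMailer()).register("a@b.c")` and assert on `sent`, with no mail server anywhere in sight.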
Yeah. Injection has a place in test patterns. Thankfully, it’s usually possible to hide injection from strongly affecting anything else that matters, as long as the team hates injection deeply enough.
In my opinion dependency injection solves a problem that doesn’t need to exist, and does it by adding even more obfuscation and complexity.
The problem is that the original Gang of Four design patterns had very little to say about managing effects. In old Java code, things like network and file IO often happen deep inside the object graph, hidden behind multiple impenetrable abstractions, such that it's impossible to run the logic without triggering the effect.
The wrong solution is to add even more obfuscation and abstraction, so that you can inject replacement classes deep inside the object graph where the effects happen. It solves the immediate problem of implementing tests, but makes everything else worse and more confusing.
The right solution is to surface all your effects at the top level of the call graph. The logic only generates data and passes it back up to the top level of the program. The top-level code then decides whether to feed this data into an effectful operation. Now all your code is easier to reason about, and you can easily test the logic without triggering unwanted effects.
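A minimal sketch of that shape, sometimes called "functional core, imperative shell" (invented names, hypothetical overdue.txt input file, Java 17+ for records):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// The core only computes data. No IO, so no mocks are needed to test it:
// call it with a list, assert on the list it returns.
record Notification(String recipient, String message) {}

class Logic {
    static List<Notification> overdueReminders(List<String> overdueUsers) {
        return overdueUsers.stream()
                .map(u -> new Notification(u, "Your invoice is overdue"))
                .toList();
    }
}

// The top level is the only place effects happen, and the only place
// that decides whether computed data actually becomes an effect.
class Shell {
    public static void main(String[] args) throws Exception {
        List<String> users = Files.readAllLines(Path.of("overdue.txt")); // effect in
        for (Notification n : Logic.overdueReminders(users)) {
            System.out.println("notify: " + n.recipient());              // effect out
        }
    }
}
```

`Logic.overdueReminders` can be tested with a plain list and a plain assertion; no mocks, no injection, no framework.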
As a fellow PHP dev (working in Laminas specifically), DI actually is fucking awful. There's a distinction between a service factory pattern and this thing called DI, which is similar to a service factory pattern but uses reflection-based type sniffing to guess at which service you want where. I'd considered making a reference to it, but PHP developers are few and far between these days.
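Roughly, the distinction looks like this (a Java sketch with invented names, since a PHP one would reach almost nobody here): a service factory is an explicit recipe you register per service, while the reflective flavor sniffs constructor parameter types and recurses:

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

interface Db {}
class MySqlDb implements Db { public MySqlDb() {} }
class ReportService {
    final Db db;
    public ReportService(Db db) { this.db = db; }
}

class Container {
    private final Map<Class<?>, Function<Container, ?>> factories = new HashMap<>();

    // Service-factory style: you say explicitly how each service is built.
    <T> void factory(Class<T> type, Function<Container, T> f) {
        factories.put(type, f);
    }

    @SuppressWarnings("unchecked")
    <T> T get(Class<T> type) {
        Function<Container, ?> f = factories.get(type);
        if (f != null) return (T) f.apply(this);
        // "DI" style: sniff the constructor's parameter types via
        // reflection and recursively guess at what you probably wanted.
        try {
            Constructor<?> ctor = type.getConstructors()[0];
            Class<?>[] params = ctor.getParameterTypes();
            Object[] args = new Object[params.length];
            for (int i = 0; i < params.length; i++) args[i] = get(params[i]);
            return (T) ctor.newInstance(args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("could not guess " + type, e);
        }
    }
}
```

So after `c.factory(Db.class, cc -> new MySqlDb());`, a call to `c.get(ReportService.class)` silently reflects its way to a working object graph; whether that's convenient or horrifying is the entire debate.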
Seriously though, Spring configurations are written in XML, where you create variables, call functions, and have control flow. Effectively turning XML into a horrible twisted shadow of a programming language.
All in the name of “configurability” through dependency injection.
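For anyone who hasn't had the pleasure, a condensed illustration of what that looks like (not a complete context file; schema declarations are trimmed and the com.example classes are invented, but property placeholders, factory-method, and profile-switched <beans> blocks are all real Spring XML features):

```xml
<!-- Condensed illustration, not a complete application context. -->
<beans xmlns="http://www.springframework.org/schema/beans">

    <!-- "variables": values pulled in from a properties file -->
    <bean id="dataSource" class="com.example.PooledDataSource">
        <property name="url" value="${db.url}"/>
    </bean>

    <!-- "function calls": construct a bean via a static factory method -->
    <bean id="clock" class="java.time.Clock" factory-method="systemUTC"/>

    <!-- "control flow": whole blocks switched on the active profile -->
    <beans profile="dev">
        <bean id="mailer" class="com.example.ConsoleMailer"/>
    </beans>
</beans>
```

Variables, function calls, and branching, all in a "declarative" config file.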
I'm fond of saying that all great code earns its right to become good code by starting as trash…
But I still think we should all quietly and politely let Spring die a simple dignified death, as soon as possible.
Out of wildly morbid curiosity, do Maven and Ant still shit all over each other to make sure no one has any real idea what the build inputs and outputs are?
I shouldn’t ask things I don’t really want to know, though. My inbox is gonna be full of Java apologists.
It was a markup language until someone decided to parse and execute it as a programming language. This person should be watched for other deranged behavior.
I use XML as a markup language; what kind of deranged person thought to turn it into a programming language? My problems with the Lua API led me down the rabbit hole of making my own VM and implementation, not to looking at a markup language and going, "what if I used this for scripting?"
When they make XML do these things (or the way GitHub Actions does it with YAML), they're essentially creating a representation of the AST that a compiler would build internally from a mini-language. So there are a few possibilities:
They don’t know how compilers work and reach for a tool they do know
They know, but figure the problem at hand doesn’t need the complexity of a mini language and start the project the quick and dirty way, and it gets out of hand as they add features
They may or may not know, but they do get caught up in the hype of some other tool (likely what happened with XSLT)
We’re struggling to deal with climate change and these selfish developers can think of nothing except building more factories. This is a global issue, we need a global solution: eschew factories and services for defining everything globally.