In the modern world it’s completely subjective.
The lowest-level language is probably ASM/machine code, since a fair number of people still write or at least edit it regularly, and the highest-level would be LLMs. They are the shittiest way to program, yes, but technically you just enter instructions and the LLM outputs e.g. Python, which is then compiled to bytecode and run, similar to how e.g. Java works. And that’s the subjective part; many people (me included) don’t use LLMs as the only way to program, or only use them for convenience and some help, so the highest-level language is probably either some drag-and-drop UI thing (like Scratch), or Python/JS. And the lowest level is either C/C++ (because “no one uses ASM anyway”), or straight-up machine code.
Oh that’s interesting. I started poking around with a Game Boy emulator guide implemented in Python that aims to emulate a Z80. Got any good resource recommendations in case I decide to pick this back up and inevitably get stuck?
When I was young, we didn’t have hex codes, we only had 1s and 0s. One time we were all out of 1s, and I had to code a whole database system with only 0s!
If you want some modern-day fun with this, try the Zachtronics programming games: TIS-100, Shenzhen I/O, and Exapunks.
Or, my personal favorite, which I only discovered somewhat recently: try Turing Complete. You start by designing all your logic gates from just a negate gate IIRC. You eventually build up an ALU and everything else you need, and then create your own computer. Then you define your own assembly language and have to write programs in it that run on the computer you’ve designed, to complete different tasks. It’s a highly underrated game, although it takes a certain type of person to enjoy.
I would say it’s very polished. It does everything you’d expect and has some nice QoL features. There was work on a big update that would improve performance and other things, but the last information about that was from August of last year, as far as I can tell. That’s not a big deal though; the game works fine without it.
Another interesting low-level interpreter/emulated system to look into, for anyone else trying to get started with this type of thing, is the CHIP-8! It’s a pretty basic instruction set for an 8-bit virtual machine (there are 35 opcodes, each two bytes, and the instructions themselves are mostly simple), and there are tons of detailed guides on making one and writing ROMs for it.
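For a taste of how approachable it is, here is a rough C++ sketch of the fetch/decode step handling a few of the 35 opcodes (the struct layout and names are illustrative, not taken from any particular guide):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical minimal CHIP-8 core: just enough state to decode a few opcodes.
struct Chip8 {
    uint8_t  V[16]{};     // registers V0..VF
    uint16_t I = 0;       // index register
    uint16_t pc = 0x200;  // ROMs are conventionally loaded at 0x200
    uint8_t  mem[4096]{};
};

// Every instruction is two bytes, big-endian; the top nibble picks the family.
void step(Chip8& c) {
    uint16_t op = (c.mem[c.pc] << 8) | c.mem[c.pc + 1];
    c.pc += 2;
    switch (op & 0xF000) {
        case 0x1000: c.pc = op & 0x0FFF; break;                 // 1NNN: jump to NNN
        case 0x6000: c.V[(op >> 8) & 0xF] = op & 0xFF; break;   // 6XNN: VX = NN
        case 0x7000: c.V[(op >> 8) & 0xF] += op & 0xFF; break;  // 7XNN: VX += NN
        case 0xA000: c.I = op & 0x0FFF; break;                  // ANNN: I = NNN
        default: std::printf("unhandled opcode %04X\n", op);
    }
}
```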
The alt text is nice, too: “The weird sense of duty really good sysadmins have can border on the sociopathic, but it’s nice to know that it stands between the forces of darkness and your cat blog’s servers.”
Problem: Given two lists of length n, find which elements the two lists have in common. (We assume there are no duplicates within a single list.)
Naive solution: For each element in the first list, check if it appears in the second.
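Sketched in C++ (assuming integer lists; the names are just for illustration):

```cpp
#include <vector>

// O(n^2): for each element of the first list, scan the second for a match.
std::vector<int> common_naive(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> out;
    for (int x : a)
        for (int y : b)
            if (x == y) { out.push_back(x); break; }
    return out;
}
```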
Bogo solution: For each permutation of the first list and each permutation of the second list, check if the first item in each list is the same. If so, report it in the output (and make sure to only report it once).
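And the same idea, deliberately ruined, as a C++ sketch; std::next_permutation does the permutation legwork, and the set is only there to report each match once:

```cpp
#include <algorithm>
#include <set>
#include <vector>

// O(n!^2) on purpose: compare only the heads of every pair of permutations.
std::set<int> common_bogo(std::vector<int> a, std::vector<int> b) {
    std::set<int> out;
    std::sort(a.begin(), a.end());  // next_permutation enumerates from sorted order
    std::sort(b.begin(), b.end());
    do {
        do {
            if (a.front() == b.front()) out.insert(a.front());
        } while (std::next_permutation(b.begin(), b.end()));
        // next_permutation left b sorted again, ready for the next pass
    } while (std::next_permutation(a.begin(), a.end()));
    return out;
}
```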
lol, you’d really have to go out of your way in this scenario. First implement a way to get every single permutation of a list, then go ahead with the asinine solution. 😆 But yes, nice one! Your imagination is impressive.
Unless the problem space includes all possible functions f, the function f must itself have a complexity of at least n to use every number from both lists; otherwise we can ignore some elements of either list, thereby lowering the complexity below O(n!²).
And if the problem space does include all possible functions f, I feel like it will still be faster, complexity-wise, to find which elements the function depends on than to assume it depends on every element. Therefore either the problem cannot be solved in O(n!²), or it can be solved quicker.
By “certain distance function”, I mean a specific function that forces the problem to be O(n!²).
But fear not! I have an idea for such a function.
The idea is the Hamming distance of a hash (like sha256), where the hash is computed iteratively: h[n] = hash(concat(h[n - 1], l[n])).
This ensures:
- We can save partial results of the hashing, so we don’t need to iterate through the entire list for each permutation. Otherwise we would get another factor n in the time complexity.
- The cost of computing the Hamming distance is O(1).
- Order matters. We can’t cheat and come up with a way to exclude some permutations.
No idea of the practical use of such an algorithm. Probably completely useless.
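A minimal sketch of that recurrence and the O(1) distance check, with a cheap 64-bit mixer standing in for sha256 (a real 256-bit hash changes nothing about the asymptotics):

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// Stand-in for hash(concat(prev, x)); sha256 would go here in the real version.
uint64_t hash_step(uint64_t prev, uint64_t x) {
    uint64_t h = prev ^ (x + 0x9e3779b97f4a7c15ULL);
    h ^= h >> 33; h *= 0xff51afd7ed558ccdULL; h ^= h >> 33;
    return h;
}

// h[k] depends on l[0..k], so permutations sharing a prefix share partial results.
uint64_t list_hash(const std::vector<uint64_t>& l) {
    uint64_t h = 0;
    for (uint64_t x : l) h = hash_step(h, x);
    return h;
}

// Hamming distance between two fixed-width hashes: O(1), independent of n.
int hamming(uint64_t a, uint64_t b) {
    return (int)std::bitset<64>(a ^ b).count();
}
```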
Honestly, I was very suspicious that you could get away with only calling the hash function once per permutation, but I couldn’t think how to prove it one way or the other.
So I implemented it, first in Python for prototyping, then in C++ for longer runs… well, only half of it, i.e. iterating over permutations and computing the hash, but not doing anything with it. Unfortunately my implementation is O(n²) anyway; unsure if there is a way to optimize it, whatever. code
As of writing, I have results for lists of n ∈ 1 … 13 (13 took 18 minutes, 12 took about 1 minute; can’t be bothered to run it for longer), and the number of hashes does not follow n! as your reasoning suggests, but closer to n! ⋅ n.
[Desmos graph showing three curves, labeled #hashes, n!, and n! ⋅ n]
Anyway, with your proposed function it doesn’t seem to be possible to achieve O(n!²) complexity.
Also, don’t be so negative about your own creation. You could write an entire paper about this problem IMHO, and have a problem with your name on it. Though I would rather not have to title a paper “complexity of the magic lobster party problem”, so yeah.
Good effort on actually implementing it. I was pretty confident my solution was correct, but I’m not as confident anymore. I’ll think about it a bit more.
So in your code you do the following for each permutation:
for (int i = 0; i < n; i++) {
You’re iterating through the entire list for each permutation, which yields an O(n x n!) time complexity. My idea was an attempt to avoid that extra factor n.
I’m not sure how std::next_permutation orders them, but the way I want them is:
1 2 3 4 5
1 2 3 5 4
1 2 4 3 5
1 2 4 5 3
1 2 5 3 4
1 2 5 4 3
1 3 2 4 5
etc.
Note that the last 2 numbers change every iteration, the third-last number changes every 2 iterations, and the fourth-last number changes every 2 x 3 iterations. The first number in this example changes every 2 x 3 x 4 iterations.
This gives us an idea of how often each hash needs to be updated. We don’t need to recalculate the hash for 1 2 3 between the first and second iteration, for example.
The first hash will be updated 5 times. Second hash 5 x 4 times. Third 5 x 4 x 3 times. Fourth 5 x 4 x 3 x 2 times. Fifth 5 x 4 x 3 x 2 x 1 times.
So the time complexity should be the number of times we need to calculate the hash function, which is O(n + n(n - 1) + n(n - 1)(n - 2) + … + n!) = O(n!).
EDIT: On second thought, I’m not sure this is a legal simplification. It might be the case that it’s actually O(n x n!), as there is a growing number of terms. But in that case, shouldn’t all permutation algorithms be O(n x n!)?
The sum can be bounded by O(2.71828 x n!), which makes it O(n!), so it is a legal simplification! (Although my reasoning was wrong, I arrived at the correct conclusion.)
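One way to see the bound: the k-th term n(n - 1)⋯(n - k + 1) equals n!/(n - k)!, so

$$\sum_{k=1}^{n} \frac{n!}{(n-k)!} \;=\; n! \sum_{j=0}^{n-1} \frac{1}{j!} \;<\; e \cdot n!,$$

which is O(n!).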
END EDIT.
We do the same for the second list (for each permutation), which makes it O(n!²).
Finally we compute the Hamming distance, but that’s between constant-length hashes, so it’s going to be constant time, O(1), in this context.
Maybe I can try my own implementation once I have access to a proper computer.
You forgot about updating the hashes of the items after the items which were modified, so while it could be slightly faster than O((n!×n)²), not by much, as my data shows.
In other words, every time you update the first hash you also need to update all the hashes after it, et cetera.
So the complexity is O(n×n + n×(n-1)×(n-1) + … + n!×1), though I don’t know how to simplify that.
I’m taking into account that when I update a hash, all the hashes to the right of it should also be updated.
Number of hashes is about 2.71828 x n! as predicted. The time seems to be proportional to n! as well (n = 12 is about 12 times slower than n = 11, which in turn is about 11 times slower than n = 10).
Interestingly this program turned out to be a fun and inefficient way of calculating the digits of e.
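For anyone who wants to reproduce the count, a self-contained C++ sketch of the experiment might look like this (a toy 64-bit mixer stands in for sha256, and the pivot index comes from a hand-rolled next_permutation so we know exactly which prefix hashes went stale):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Toy stand-in for sha256(concat(prev, x)).
uint64_t hash_step(uint64_t prev, uint64_t x) {
    uint64_t h = prev ^ (x + 0x9e3779b97f4a7c15ULL);
    h ^= h >> 33; h *= 0xff51afd7ed558ccdULL; h ^= h >> 33;
    return h;
}

// Like std::next_permutation, but returns the leftmost index that changed
// (everything from there on may have moved), or -1 once we wrap around.
int next_permutation_pivot(std::vector<uint64_t>& a) {
    int n = (int)a.size(), i = n - 2;
    while (i >= 0 && a[i] >= a[i + 1]) --i;  // find the pivot
    if (i < 0) return -1;
    int j = n - 1;
    while (a[j] <= a[i]) --j;                // rightmost element above the pivot
    std::swap(a[i], a[j]);
    std::reverse(a.begin() + i + 1, a.end());
    return i;
}

int main() {
    const int n = 10;
    std::vector<uint64_t> a(n), h(n);
    for (int k = 0; k < n; ++k) a[k] = k + 1;

    uint64_t hashes = 0;
    int p = 0;  // first permutation: compute every prefix hash
    do {
        for (int k = p; k < n; ++k) {  // only the stale prefix hashes are redone
            h[k] = hash_step(k ? h[k - 1] : 0, a[k]);
            ++hashes;
        }
        // h[n - 1] is now the hash of the whole current permutation.
    } while ((p = next_permutation_pivot(a)) >= 0);

    // hashes / n! should approach e ≈ 2.71828 as n grows.
    std::printf("n = %d, hashes = %llu\n", n, (unsigned long long)hashes);
}
```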
Since I decided to pack the hashes and previous number values into a single array and then forgot to actually format the values properly, the hash counts generated by my code were nonsense. Not sure why I did that, honestly.
Also, my data analysis was trash, since even with the correct data, which as you noted grows linearly with n!, my reasoning suggested it was growing faster than it is.
Here is a plot of the incorrect ratios compared to the correct ones, which is the proper analysis and also clearly shows something is wrong.
[Desmos graph showing two data sets: one growing linearly, labeled incorrect, and one converging to e, labeled #hashes]
Anyway, and this is totally unrelated to me losing an internet argument and not coping well with that, I optimized my solution a lot, and it turns out it’s actually faster to only perform the check you are doing once or twice and narrow it down from there. The checks I’m doing are for the last two elements and the midpoint (though I tried moving that around with seemingly no effect?), with the end check going to a branch without a loop. I’m not exactly sure why, despite the hour or two I spent profiling, though my guess is that it has something to do with caching?
Also FYI, I compared performance with -O3 and after modifying your implementation to use sdbm and to actually use the previous hash instead of the previous value (plus misc changes, see patch).
Surely you could implement this via a sorting algorithm? If you can prove the distance function is a metric and both lists contain elements from the same space under that metric, isn’t the answer to sort both?
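For the plain intersection problem at the top of the thread (not the contrived hash variant, where order is made to matter), that is indeed the standard trick; a C++ sketch:

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// O(n log n): sort both lists, then std::set_intersection walks them in lockstep.
std::vector<int> common_sorted(std::vector<int> a, std::vector<int> b) {
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    std::vector<int> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(out));
    return out;
}
```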
If I’m not mistaken, an example of a problem where O(n!²) is the optimal complexity is:
There are n traveling salespeople and n towns. Find a path for each salesperson, with each salesperson starting in a unique town, such that the sum d₁ + 2 d₂ + … + n dₙ is minimized, where dᵢ is the distance traveled by salesperson i, for each i from 1 to n.
Edit from before posting: I realized you can implement a solution in 2(n!) :(
Make your MIT-licensed library big enough that the corpos use it, then switch it to AGPL just before you add a really important and tricky feature they’ve been waiting for.
I suppose in the days of ‘Cloud Hosting’ a lot of people (hopefully) don’t just manually upload new files onto a server anymore.
Even if you still use normal servers that behave like this, a better practice would be to have a build server that creates builds: whenever you check code into the main branch, it creates a deploy artifact for the server, and you deploy it from there - instead of compiling locally, opening FileZilla and doing an upload.
If you’re using ‘Cloud Hosting’ - for example AWS - and you use VMs or bare metal, you’d maybe create Elastic Beanstalk images and upload a new Application or Machine Image as a new version, and deploy that in a more managed way. Or if you’re using Docker, you just upload a new Docker image to a Docker registry and deploy from there.
For some of my sites, I still build on my PC and rsync the build directory across. I’ve been meaning to set up GitLab or something similar and configure automated deployments.
This is what I do, because my sites aren’t complicated enough to warrant a build system. Personally I think most websites out there are over-engineered. Example: a Discord friend made a React site that displays stats from a gaming server. It looks nice, but you literally can’t hyperlink to any of the data; it can only be loaded dynamically, and it only looks coherent on a phone in portrait mode. A lot of people follow trends (some of them good trends) without really thinking about why.
I’m starting to like the htmx model a lot: a server-rendered app that uses HTML attributes to configure the dynamic bits (e.g. which URL to hit and which DOM element to insert the response into). You don’t have to write much JS (or any, in some cases).
> you literally can’t hyperlink to any of the data
I thought most React-powered frameworks ship with a URL router out of the box these days? The developer does need to have a rough idea of what they’re doing, though.
Yea, I wasn’t saying it’s always bad in every scenario - but we used to have this kind of deployment at a professional company. It’s pretty bad if this is still how you’re doing it in an enterprise scenario.
But for a personal project, it’s alright-ish. Still, there are easier setups, for example configuring an automated deployment from GitHub/GitLab. You can check out other people’s deployment configs, since all that stuff is part of the repos, in the .github folder. So probably all you have to do is find a project that’s similar to yours, like “static file upload to an SFTP server”, and copy-paste the script into your own repo.
They have bundled malware in the main downloads on their own site multiple times over the years, and even denied it, trying to gaslight people into believing the AV hits were false positives because AV companies are paid off by other corporations. And the admin will even try to delete the threads about this stuff, but web archive to the rescue…
You know what? I didn’t believe you, since I’ve been using it for a long time on Linux and never had any issues with it. Today, when I helped a friend (on Windows) with some SFTP transfer and recommended FileZilla, was the first time I realised the official downloads page serves adware. The executable even gets flagged by Microsoft Defender and VirusTotal. That’s actually REALLY bad. Isn’t FileZilla operated by Mozilla? Should I stop using it, even though the Linux versions don’t have the sketchy stuff? It definitely leaves a really bad taste.
Yeah, it’s bad. Surprised they’re still serving that crap in their own bundle, but I guess some things don’t change.
FileZilla has no relation to Mozilla. But yeah, I moved away from it years ago. The general recommendation I’ve seen is “anything but FileZilla”. Personally I use WinSCP on Windows, and I’ll have to figure out what to use when I switch my daily driver to Linux.
Here’s the core issue: the developer didn’t know his rights and made a mistake. I’m not criticizing; people make a career out of dealing with crap like this. But if you want to make a business out of something, it’s worth doing some research or talking to a lawyer. I believe the MIT license has its place but, from what the OP said, this isn’t it.
I did not want to make a business out of this library. I don’t want money for it.
All I would’ve wanted is for the people at Apple to give me a heads-up beforehand, so I would’ve been prepared for it and not caught by surprise. And that they do a version upgrade when I release a new bugfix release.
This is not a license issue. I was well aware of the consequences when I chose the MIT license. This is not about money.
You specifically said you chose the MIT license because you wanted to use the library in commercial projects. That’s business, no matter how small. As the owner of the property, you could have used any and all licenses available to you. Also, if you had wanted to require users of your code to attribute or notify you, you could have. If you want to be disappointed in their behavior, that’s perfectly fine too. Corporations usually disappoint if you have any altruistic expectations of them.
In this regard you are right. I could’ve chosen the AGPL and still used the library in my commercial project. I wasn’t aware of that at the time, and that was a mistake.
That said, I don’t expect all users to notify me. But if a company like Apple, with millions of users, exposes me to even a fraction of those users - then yes, I expect a mail beforehand. I did not sign up for this.
Agreed. Free licenses should NEVER be applied to Apple-specific tools. They don’t want to help the FOSS community, so we shouldn’t help them back. Make them pay for it, or make them make their own version.
You can still use your library in your own commercial projects. Just use a dual license that requires payment for commercial use, or something similar. You don’t have to pay yourself.
I think that’s why GitHub suggests MIT as the default: unaware people will just pick it. Most open-source people just code things they want without thinking much about other aspects. We really need some sort of enforcement to stop companies banking on voluntary work done for the community.
It was probably a single dev who made the decision and then moved on to something else. They (probably?) don’t have the ability to just raise a recurring PO etc. to easily pay you, and don’t care enough to work through the paperwork.
If you had a paid licensing model they may have used it, or just found another lib / written their own.