Fr I was about to say the same thing. Aside from better hardware and more layers, the technology really hasn’t changed much from where it began.
We’ve learned a little about what emergent behavior and trends look like in machine learning algorithms when graphed, though: output becomes more and more convergent, as if the model forms its own little confirmation bias and produces more and more samey results.
Rebranding a Markov Chain stapled onto a particularly large graph
Could you elaborate how this applies to various areas of AI in your opinion?
Several models are non-Markovian. There are also plenty of models and algorithms for which describing them as, or even comparing them to, Markov chains would simply be incorrect and unsuitable.
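For anyone unfamiliar with just how simple the thing being rebranded is: here’s a toy word-level Markov chain in Python (names like `build_chain` and `generate` are mine, purely illustrative). The key property is that the next word depends only on the current word, which is exactly what makes the comparison to modern models so loose.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: each step looks only at the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the", 5))
```

No long-range memory, no learned representations: that missing context is precisely where the “it’s just a Markov chain” framing breaks down for non-Markovian models.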
I’m not sure what people think AI was ever going to be… every time something new comes out it’s always dismissed because “it’s basically just a X that does Y”. I think that will continue to be the case until there is some literal connection to actual brains, in which case the concept of what a brain is will probably be questioned as well.
I’m not sure what people think AI was ever going to be…
The heavy investment in AI is coming under the assumption that these advanced processes will replace huge portions of the human workforce.
So we don’t need lawyers, because we just put prompts into a Law AI and it gives us a verdict. We don’t need doctors, because we just put symptoms into a Medical AI and it gives us a diagnosis and treatment plan. We don’t need salespeople, because we just put the product into a Marketing AI and it spits out a bunch of convincing ad copy.
the concept of what a brain is will probably be questioned as well.
We already connect our brains to our computers. We just use screens and keyboards as our interface.
I suppose you could argue that a guy with a calculator or a camera or a chat app is mentally different than one without it. But I think the goal with AI is supplementing human minds, not complementing them.
The USB standards are just… Comically overcomplicated. And almost everything about it is optional. They need a full revamp, making it simpler and mandatory on all future ports, devices and cables.
I had a FireWire hard drive! I remember I bought specifically the enclosure that supported both standards since my motherboard had a FireWire port and on paper it was faster than usb! Too bad the HDD was as slow as molasses
Yeah, the ZIP drive was just starting to take off when the Internet killed needing a sneaker net (at least of that size). Add in CD-ROM drives which you needed anyway. And good night.
Yep, that’s because the actual data transfer was handled by the more capable device, instead of only the guest. I think the standard also required a minimum throughput, whereas USB only had a maximum.
Firewire was good for high bandwidth devices like external hard drives and video cameras because it didn’t require the CPU to do any heavy lifting. These days USB is mature enough and CPUs are so fast that we (mostly) don’t notice any performance impact but in the Core 2 Duo days you could easily max out one of your two cores with a large file transfer over USB.
Almost everything about it needs to be optional because sometimes USB is used to charge some cheap battery powered thing and sometimes it’s used to make a backup of a harddrive and sometimes it’s charging my laptop with enough power for it to be rendering video but still have a net charge increase to the battery while also providing Ethernet, video output, and keyboard/mouse input over the same one port.
EDIT: to make it clearer why the variability of USB standards is what it is, compare a modern laptop to one from 10 years ago.
The older laptop has:
for video, an HDMI port (or the less common mini HDMI port), and perhaps a mini DP port
an Ethernet port
a charging plug
possibly some FireWire ports (may or may not be the same as the mini DP port)
USB A ports for keyboard/mouse and other random devices
The newer laptop has:
USBC ports that can do all of the above
The peripherals, however, don’t support all of the features. They only support the features they actually use. As long as the laptop supports all of the optional features, you don’t need to worry about it.
This is especially helpful for less technical users who may not want to know the difference between HDMI and DisplayPort. With a fully USBC-based laptop and USBC peripherals you can just plug it in and it will work.
Of course this is all dependent on the laptop implementing all of the extra features, which is still only really true of more expensive laptops.
People do not want to be limited to 1m long cords or only have thick and stiff Thunderbolt3 cords with 20 different conductors for a wired mouse.
Minimum specs like you are proposing just make the standard less useful and would lead to more competing specs that aren’t compatible at all (a la lightning cables).
To be a truly “universal” spec, flexibility is king.
Maybe make it opt-out instead? Like, to call it USB4 you have to use this connector and support all of these features; otherwise you are “USB4 w/o x, y, z, PD, video, etc.” I also think PD levels should be labeled on power sources and sinks.
Cain kills Abel (8). Cain gets cursed. (11) Cain is afraid he’ll be killed by… ??? (14). God is all, nah fam karma mark lmao. (15) Cain goes and lives in the land of Nod. (16) Cain, one of three living humans at this point, makes love to his wife, who randomly now exists, and births a son, Enoch. Cain also builds a city… for the now five people who live on Earth. (17).
Apparently the biblical explanation is that it’s a sister of Cain. Maybe the daughters’ births didn’t warrant an extra line in the Bible? It probably doesn’t keep a record of when Adam and Eve acquired new property, as that’s mostly what women were considered.
I loved their explanation regarding building the Ark authentically when Noah lived to be over 900 years old. It’s simple really. He built it when he was like 300. You see it makes perfect sense. Next question.
Not sure I would trust anything from the creation museum to be actually biblical.
I know there’s an ancient myth about Adam having a first wife before Eve — there’s probably also other myths that fill in the blanks. There’s also nothing stopping God from making more people during this period like he made Adam and Eve. They were probably just the “first batch” so to speak.
Adding a bit to that, it’s very likely that the old Judaic religion was polytheistic like every other in the nearby region (Assyrians, Babylonians, Egyptians, Hittites), but started to consolidate around a single deity (not clear when, since the tradition was oral). That meant some stories were left out for whatever reason, and others changed, as they did several times over the centuries before being written down. Every other god of their pantheon became Yahweh, which explains why he has such drastically different personalities in the Bible.
Also, at some point any creation story is going to have to stop specifying literally every single thing that happened and start to hit broad strokes. Things like “we just didn’t explicitly mention every single kid she had” is probably the easiest explanation.
They were probably just the “first batch” so to speak.
the standard response to this is that if there were other independently created people in Eden then they wouldn’t have been expelled for Adam and Eve’s mistake. and after the fall no other people could be created because a) they would be sinless which messes everything up and b) “God created the world in 6 days and rested on the 7th”.
weirdly that might effectively be true; many other animals don’t really suffer from inbreeding, and there’s a hypothesis that humans are so vulnerable to it because a shiteload of us died all at once a long time ago.
nice little dig at evolution calling mutations “mistakes”. as in, they happen but they can only be negative.
since God’s Word is the only standard for defining proper marriage
oh - and where’s that? the bit where multiple wives are ok (Solomon), or where multiple wives is commanded (Levirate marriage), or where slave girls are ok (“concubines” being the usual euphemism), or where polygamy is disallowed but only for church leaders (this seems like the worst one tbh, the very necessity of this rule means there were sufficient polygamous relationships in the early church that it even warrants a mention…)
I’ve heard people say that Adam being the first human is metaphorical in the text, and means “first YHWH worshipper”, not sure how widespread or accurate this view is.
The general answer to this question is either that Cain married his sister (the fundamentalist view) or that Adam and Cain are metaphors and didn’t exist at all (the mainline view).
What’s interesting is in Genesis 1 God created all the stuff, but in Genesis 2 it says none of the plants or animals or man had materialized yet, then God makes them and asks Adam to name them.
I definitely read Genesis as a story of tribal origins
For confused folks: no, this is not how Canadians package their peanut butter, although yes, the milk bags are real. IIRC this is actually a thing in the Caribbean for locally packaged peanut butter, because it’s cheaper than jars are in the US and Canada.
Cats are conflicting for me. They can be cute, but I hate when they try to walk on me. Because I know if I try to move them they’ll probably scratch me. And that’s no bueno to me, just makes me anxious as fuck when they walk my way while i’m sitting on a couch trying to relax.
Dogs are too energetic and overstimulating for me to be around. I’m fairly allergic to them as well, so I have a decent excuse for the people who insist their dog will be perfectly well-behaved and calm (they weren’t).
Oh my God. A cat scratch. I wouldn’t wish that pain and agony on my worst enemy. I bet when it happened you had to sell that couch because of all the memories and PTSD. You don’t need a cat because you’re already a pussy anyway.
my cats walk over me like i’m a carpet. sometimes i wake up in the middle of the night with a paw in my mouth or a cat’s ass on my face. but they never scratch me on purpose.
The problem is I tend to shift around a lot, and cats hate that. So i try to “nip it in the bud” to make sure to avoid the situation altogether, but sometimes they sneak up on me while watching a movie or something.
That’s why you always keep the claw trimmers around, and if they’re too long you push on their toe beans a bit to expose the claw and give it a little snip. You don’t get hurt, and the cat now has an objective to make them sharp again over the next few weeks
That’s good to know! Unfortunately all the cats I see are other people’s, and so it would be awkward to be like “hey before we sit down, can we trim your cat’s claws a bit?”
Honestly, it’s still ridiculous to me how slow Python, Java, JS, Ruby etc. continue to feel, even after decades of hardware optimizations. You’d think their slowness would stop being relevant at some point, because processors and whatnot have become orders of magnitude faster, but you can still feel it quite well when something was implemented in one of those.
Many of these have C-bindings for their libraries, which means that slowness is caused by bad code (such as making a for loop with a C-call for each iteration instead of once for the whole loop).
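A toy stdlib-only illustration of that difference (numpy’s case is analogous: one call over a whole array vs. a Python-level loop making a tiny call per element):

```python
# Upper-case ASCII text two ways; same result, very different cost profile.
data = b"the quick brown fox jumps over the lazy dog " * 10_000

# Per-element work at the Python level: the interpreter loop runs once
# per byte, with only a tiny bit of C work inside each iteration.
slow = bytes(b - 32 if 97 <= b <= 122 else b for b in data)

# One C call that does the whole loop internally.
fast = data.upper()

assert slow == fast
```

Both produce identical output, but the second version spends essentially all its time inside compiled code, which is the whole trick behind fast Python libraries.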
I am no coder, but it is my experience that bad code can be slow regardless of language used.
Bad code can certainly be part of it. The average skill level of those coding C/C++/Rust tends to be higher. And modern programs typically use hundreds of libraries, so even if your own code is immaculate, not all of your dependencies will be.
But there’s other reasons, too:
Python, Java etc. execute their compiler/interpreter while the program is running.
CLIs are orders of magnitude slower, because these languages require a runtime to be launched before executing the CLI logic.
GUIs and simulations stutter around, because these languages use garbage collection for memory management.
And then just death by a thousand paper cuts. For example, when iterating over text, you can’t tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory.
And when working with multiple threads in Java, it is considered best practice to always clone the memory of basically anything you touch. Like, that’s good code, and its performance will be mediocre. Also, you’d better not think about using multiple threads in Python or JS; for those two, even parallelism was an afterthought.
Well, and then all of the above feeds back into all the libraries not being performant. There’s no chance to use the languages for performance-critical stuff, so no one bothers optimizing the libraries.
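Python’s stdlib actually does offer the view-vs-copy distinction for binary data, just not for `str`; a small sketch with `memoryview`, which is a zero-copy window into an existing buffer:

```python
data = b"the quick brown fox"

# Slicing bytes (like slicing str) copies the selected range into a new object.
copied = data[4:9]

# A memoryview slice is only a window into the same underlying buffer:
# no bytes are copied, no new memory is allocated for the content.
view = memoryview(data)[4:9]

assert bytes(view) == copied == b"quick"
```

For text proper there is no such view type in these languages, which is exactly the paper cut being described.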
For example, when iterating over text, you can't tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory.
As someone used to embedded programming, this sounds horrific.
Yep. I used to code a lot in JVM languages, then started learning Rust. My initial reaction was “Why the hell does Rust have two string types?”.
Then I learned that it’s for representing actual memory vs. view and what that meant. Since then I’m thinking “Why the hell do JVM languages not have two string types?”.
I’m not a Java programmer, but I think the equivalent to str would be char[]. However, the ergonomics Rust has for str aren’t there for char[], so Java devs probably use String everywhere.
Nope, crucial difference between Java’s char[] and Rust’s &str is that the latter is always a pointer to an existing section of memory. When you create a char[], it allocates a new section of memory (and then you get a pointer to that).
One thing they might be able to do is optimize it in the JVM, akin to Rust’s https://doc.rust-lang.org/stable/std/borrow/enum.Cow.html.
Basically, you could share the same section of memory between multiple String instances and only if someone writes to their instance of that String, then you copy it into new memory and do the modification there.
Java doesn’t have mutability semantics, which Rust uses for this, but I guess, with object encapsulation, they could manually implement it whenever a potentially modifying method is called…?
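A toy Python sketch of that copy-on-write idea (purely illustrative, not how the JVM or Rust’s Cow is actually implemented; `CowBuffer` and its methods are invented names, and it doesn’t track sharer counts):

```python
class CowBuffer:
    """Toy copy-on-write buffer: instances share memory until one writes."""

    def __init__(self, data, _shared=None):
        self._data = bytearray(data) if _shared is None else _shared
        self._owned = _shared is None  # False once the buffer is shared

    def share(self):
        # Cheap "copy": both instances now point at the same bytearray.
        clone = CowBuffer(b"", _shared=self._data)
        self._owned = False
        return clone

    def write(self, index, value):
        if not self._owned:
            # First write to a shared buffer: copy it, then modify the copy.
            self._data = bytearray(self._data)
            self._owned = True
        self._data[index] = value

    def read(self):
        return bytes(self._data)

a = CowBuffer(b"abc")
b = a.share()          # cheap: no bytes copied yet
b.write(0, ord("x"))   # first write triggers the actual copy
assert a.read() == b"abc" and b.read() == b"xbc"
```

The interesting part is that the copy cost is only paid by writers, which is why sharing read-mostly data this way can be nearly free.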
But yeah I’d like it if the features given by Lombok were standard in the language though it’s not a big deal these days since adding Lombok support is very trivial.
You shouldn’t use Lombok, as it uses non-public internal Java APIs, which is why it breaks every release. At one point we had a bug with Lombok that only resolved if you restarted the application. Switching off of Lombok resolved the issue.
Just switch to kotlin. You can even just use Kotlin as a library if you really want (just for POJOs), but at this point Kotlin is just better than Java in almost every way.
Energy use? That’s a pointless metric. If that is the goal, then the whole idea of a desktop should be scrapped. Waste of memory and hard drive space. Just imagine the amount of energy wasted on booting a GUI.
If you want to talk about climate change, then electronics is the wrong place to point the finger. For a start, look at cement manufacturing. It requires huge amounts of energy to produce even though we have eco-friendly variants ready to go. And cement production amounts to 8% of all greenhouse gases released annually.
Hell, just ban private jets and you’ve offset all of the bad things datacenters ever made. Elon had a 10 minute flight to avoid traffic which consumed around 300l of fuel. The royal family makes so many flights a year that you could go into the wild and eat bark for the rest of your life and you wouldn’t be able to offset their footprint in a thousand lifetimes.
Bill Gates himself talks a lot about reducing carbon footprint we make and yet he refuses to sell his collection of airplanes. He has A COLLECTION of them.
Using a higher-level language that requires more operations than assembly is not a thing to worry about when talking about climate change. Especially without taking into account how much pollution those systems have managed to reduce by smartly controlling irrigation and other processes.
Idk numpy go brrrrrrrrrr. I think it’s more just the right tool for the right job. Most languages have areas they excel at, and areas where they’re weaker, siloing yourself into one and thinking it’s faster for every implementation seems short sighted.
At its heart, numpy is C tho. That’s exactly what I’m talking about. Python is amazing glue code. It makes this fast code more useful by wrapping it in simple(r) scripts and classes.
That’s because it’s not relevant. Speed can be compensated for either by caching or by outsourcing your load, if there’s such a huge need to process large amounts of data quickly. In day-to-day work I can’t say I have ever run into issues because code was executing slowly. In normal operation Python is more than capable of keeping up.
On the other side of the coin you have memory management, buffer and stack overflows, and general issues almost exclusive to C, which is something you don’t have to worry about as much with higher-level languages. Development with Python is simply faster and safer. We as developers have different tools, and we should use them for their appropriate purpose. You can drive nails with a rock as well, but you generally don’t see carpenters doing this all day.
You can sometimes deal with performance issues by caching, if you want to trade one hard problem for another (cache invalidation). There’s plenty of cases where that’s not a solution though. I recently had a 1ns time budget on a change. That kind of optimization is fun/impossible to do in Python and straightforward to accomplish in Rust or C/C++ once you’ve set up your measurements.
You can find plenty of people complaining online about the startup time of the windows and gnome (snap) calculators. The problem in those cases isn’t solved by compiled languages, but it illustrates that it’s important to consider performance even for things like calculator apps.
Which is exactly what I said. Most of the times you can work around it. Sure cache invalidation can be hard, but doesn’t have to be. If you need performance use more performant language. Right tool for the job.
Especially since languages such as Python and JavaScript are really good at event programming, where you have an event that runs a function. Most of the CPU time is idling anyway.
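A minimal asyncio sketch of that style (the handler names and delays are made up): while one handler waits, the event loop runs the others, so the waits overlap instead of stacking up.

```python
import asyncio

async def handle_event(name, delay):
    # While this handler waits, the event loop is free to run the others.
    await asyncio.sleep(delay)
    return name

async def main():
    # Three handlers "waiting on I/O" concurrently: the waits overlap,
    # so this takes roughly 0.03s total rather than 0.06s.
    return await asyncio.gather(
        handle_event("a", 0.03),
        handle_event("b", 0.02),
        handle_event("c", 0.01),
    )

print(asyncio.run(main()))  # ['a', 'b', 'c'] — gather keeps argument order
```

The CPU is idle almost the whole time here, which is exactly the workload shape where interpreter speed barely matters.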
They do have optimizations, but they are interpreted at runtime, so they can only be so fast
Frankly you won’t notice much unless the program is doing something computation heavy which shouldn’t be done in languages such as JavaScript and Python
They aren’t as fast as a native language but they aren’t all that slow if you aren’t trying to use them for performance sensitive applications. Modern machines run all those very quickly as CPUs are crazy fast.
Also it seems weird to put Java/OpenJDK in the list as it is in its own category from my experience
Java is certainly the fastest of the bunch, but I still find it rather noticeable how long the startup of applications takes and how it always feels a bit laggy when used for graphical stuff.
Certainly possible to ignore that on a rational level, but that’s why I’m talking about how it feels.
I’m guessing, this has to do with just the basic UX principle of giving the user feedback. If I click a button, I want feedback that my click was accepted and when the triggered action completed. The sooner those happen, the more confident I feel about my input and the better everything feels.
Yep, I also don’t fully agree on that one. I’m typing this on a degoogled Android phone with quite a bit stronger hardware than the iPhone SE that my workplace provides, e.g. octacore rather than hexacore, 8GB vs. 3GB RAM.
And yet, you guessed it, my Android phone feels quite a bit laggier. Scrolling on the screen has a noticeable delay. Typing on the touchscreen doesn’t feel great on the iPhone either, because the screen is tiny, but at least it doesn’t feel like I’m typing via SSH.
I have experienced the delayed scrolling, mostly on cheaper phones.
But that’s mostly because i’m used to phones having 120+hz screens now; going back to a 60hz screen does feel a bit sluggish, which is especially noticeable on a phone where you’re physically touching the thing. I think it might also have something to do with cheaper touch matrices, which may have a lower polling rate as well.
Why? I certainly expect that to be a factor, but I’ve gone through several generations of Android devices and I have never seen it without the GC-typical micro-stutters.
It is always a question of choosing the right tool for the right task. My core code is in C (but probably better structured than most C++ programs), and it needs to be this way. But I also do a lot of stuff in Perl. When I have to generate source code or smart-edit a file, it is faster and easier to do in Perl, especially if the execution time is so short that one would not notice a difference anyway.
Or the code that generates files for production: yes, a single run may take a minute (in the background), but it produces the files necessary for producing over 100k worth of goods. And the run is still faster than the surrounding processes, like getting the request from production, calculating the necessary parameters, and then wrapping all the necessary files with the results of the run into a reply to the production department.
True, plus the bloated websites I see are using hundreds of thousands of lines of JavaScript. Why would you possibly need that much code? My full fledged web games use under 10,000.
I both love and hate this so much. The performance and recording is incredible but any super tech nerdy parody just causes me immense internal cringe. I couldn’t make it more than a third of the way through that and I love working with K8S.
Clearly they are Jewish and their ancestors were part of The Madagascar Plan, Nazi Germany’s plan to forcibly resettle Jewish people in Madagascar.
They have largely been isolated and weren’t able to keep apprised of what happened after their family was resettled. Germany had been really pushing to establish a military base in Antarctica for decades, so the penguins had the understanding that Germany must be well established there by the time they got there.
So that is why Penguins from Madagascar would believe that people in Antarctica would be speaking German.
They are originally from Antarctica and only end up in the zoo in the USA. However, that only occurs in Penguins of Madagascar, which came out after the original Madagascar, so the writers of the Madagascar movie could have imagined their origin story differently than how it happened in the Penguins of Madagascar movie
I think I would do okay, but that’s because I had a watch fascination when I was in school. Big thing that revolutionized navigation was stable clocks that could work on a boat. Depends on how far back you go really.
All consumer and enterprise equipment made in the last 10+ years natively supports IPv6.
I object to this statement. You can buy name brand routers today that don’t implement it properly. Sure, they route packets, but they have broken stateless auto configuration or don’t respect DHCPv6 options correctly, and the situation is made worse because you don’t know how your ISP implements IPv6 until you try it.
God help you if you need a firewall where you can open ports on v6. Three years ago I bought one that doesn’t even properly firewall IPv6.
I tested a top-of-the-line Netgear router to find that it doesn’t support opening ports and once again doesn’t correctly support forwarded IP DHCPv6, which even if that works correctly, your Android clients can’t use it 🫠 Decades later there’s no consensus on how it should function on every device. This is a severe problem when you are a standard.
The state of IPv6 on consumer hardware is absolute garbage. You have to guess how your ISP implements it if at all, and even then you’re at the mercy of your limited implementation. If you’re lucky it just works with your ISP router. If you’re not, it’s a PITA.
The problem is mainly that IPv4 port forwarding is network address translation, but on IPv6 it’s instead IP forwarding with a firewall rule.
The latter is conceptually simpler, but it’s a different mechanism and one that most home routers don’t bother to implement. This is quite ironic because IPv6 was intended to restore end to end connectivity principles.
Don’t get me wrong; I’m quite happy with the standard. There are very few good implementations of that standard, and given the momentum of its predecessor, implementers just don’t care.
I absolutely hate how dependent we’ve gotten on IPv4. To the point that Amazon is charging almost $4 a month per IP. It used to be free. These assholes are buying IPv4 addresses so fast that they are literally driving up the price.
Is there a resource that you can recommend for learning IPv6 based on my knowledge of IPv4? A lot of resources I’ve seen are way over-engineered for my feeble brain.
Like I know what IP addresses are and what port numbers are. I don’t understand the difference between how IPv6 addresses are assigned (both locally and generally speaking) and what makes it different from IPv4.
It absolutely can be DHCP. There’s two main ways to do it: stateless auto configuration, and DHCP. Super briefly, you can assign IP addresses the same way you used to if you want, or you can let devices pick their own.
I’m afraid I can’t recommend a great resource, but I really like the Wikipedia article because it’s very precise in its terminology. I appreciate that with learning a new subject. I’m not even that precise here. For example, I use the term IP forwarding more liberally than what it actually means.
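To make the “devices pick their own” path concrete: here’s a sketch of the classic EUI-64 flavor of SLAAC, where a device derives an address from the router’s advertised /64 prefix and its own MAC (note: modern systems usually use randomized or stable-privacy interface IDs instead; `slaac_address` is my own name, and the prefix is the documentation prefix, not a real network).

```python
import ipaddress

def slaac_address(prefix, mac):
    """Derive an IPv6 address from a /64 prefix and a MAC via EUI-64."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")       # 64-bit interface ID
    net = ipaddress.IPv6Network(prefix)
    return net[iid]                         # prefix + interface ID

print(slaac_address("2001:db8:1:2::/64", "52:54:00:12:34:56"))
# → 2001:db8:1:2:5054:ff:fe12:3456
```

The router only has to announce the prefix; each device computes the rest itself, which is why SLAAC needs no address-assignment state on the router at all (and also why it doesn’t give you a tidy device list the way DHCP leases do).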
This is why I use PFSense and Hurricane Electric as a v6 tunnelbroker. I have working, functional IPv6 with SLAAC, DHCPv6, and full Router Advertisements on my LAN running side by side, so that no matter which of these a device implements, however poorly, it gets an IPv6 address, it works, and it is protected by the firewall.
I really like stateless, but it bugs me that the router has to snoop on traffic if you want a list of devices. The good ones will actually do this, but most are blind to how your network is being used with IPv6.
And it really bothers me that Android just refuses to support DHCPv6 in any capacity. Seems like a weird hill to die on. There are too many legitimate use cases.
I run both because of this; and because SLAAC enables features in Desktop OSes that offer some level of additional privacy.
For example, Windows can do “Temporary IPv6 Addressing” that it will hand out to various applications and browsers. That IPv6 address rotates on a periodic basis (once every 24 hours by default) and can be configured to behave differently depending on your needs via registry keys.
This could, for example, allow you to quickly spin up a small application server for something like a gaming session and use/bind that IPv6 address for it. Once the application stops using it and the time period has elapsed, Windows drops the IP address and statelessly configures itself a new one.
I also like the privacy extensions, but how often does your prefix even change? Most places I’ve seen you get a /64 announced and it basically never changes – so somewhat elementary to “break through” that regardless.
A /64 is more than enough, though, to prevent most casual attempts at entry, and it does force more work/enumeration to be done to break into a network and do damage. I’m not saying the privacy extensions are the greatest, but they do work to slightly increase the difficulty of tracking and exploitation.
With a /48 or even a /56, I can subdivide things and hand out several /64s to each device too, which would shake things up if tracking expects a /64 explicitly.
I actually use /55s to cordon off blocks inside the /48 that aren’t used too. So dialing a random prefix won’t help. You’d be surprised how often I get intrusive portsweeps trying to enumerate my /64s this way…and it doesn’t work because I’m not subnetting on any standard behavior.
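The prefix arithmetic here is easy to play with using Python’s stdlib `ipaddress` module (documentation prefix, purely illustrative):

```python
import ipaddress

block = ipaddress.IPv6Network("2001:db8:abcd::/48")

# A /48 contains 2**(56-48) = 256 /56s, and each /56 contains 256 /64s.
site_prefixes = list(block.subnets(new_prefix=56))
lans = list(site_prefixes[0].subnets(new_prefix=64))

print(len(site_prefixes), len(lans))  # 256 256
print(lans[1])                        # 2001:db8:abcd:1::/64
```

Nonstandard boundaries like a /55 work the same way (`block.subnets(new_prefix=55)` yields 128 of them), which is what makes scanning on assumed /64 boundaries miss.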
You shouldn’t be forwarding anything - lan devices are directly accessible from the internet with ipv6. The router’s job now is to firewall inbound ipv6 packets. You should be able to simply open the inbound port for that device in particular.
Right, that’s how it should work. Unfortunately that’s not how it actually works most of the time in consumer.
Many devices don’t provide an option in the UI to open an inbound port on IPv6. For example, the latest and most expensive Linksys gaming router blocks all inbound connections and there are no options for different behavior. It doesn’t support opening any ports for v6.
The most recent TP link device I tested for my dad doesn’t even have a firewall. If you know the global IP, you can connect to any port you want.
And that’s why I abandoned cheap consumer routers many years ago… the closest devices to implementing IPv6 port management/firewalling even half decently were/are the ASUS ones. I got fed up and went pfsense and/or unifi one day and never looked back.
UDM handles ipv6 real good, and pfsense can even get /64 subs from an ATT router for all its lan interfaces.
Comcast has finally gotten around to giving hosts inside the firewall publicly routable IPv6 addresses, but port forwarding can only happen on IPv4. (And by the way, port forwarding can only be done through Xfinity’s website or mobile app, which then connect to and configure the router through the ISP interface; if you go to the port forward configuration in the router’s webui, all you’ll see is a message that it’s now “easier than ever” to configure port forwards.) Want to open a hole in the IPv6 firewall? Well that’s just too fucken bad.
Funny, I have an ancient DOCSIS modem from a company that went bankrupt ages ago which supports all these features flawlessly. Only thing it’s missing is DNS options, it’s hardcoded to use the ISPs DNS. Oh well.
Gonna be honest, a lot of times I feel like I don’t belong here, I’m still figuring things out. I’m not a “techy” type person (that seems to be some kind of prerequisite) and I barely know how to explain the fediverse to the layman, but I left reddit when they fucked over Joey (my preferred reddit app) and read enough to give reddit the middle finger and never look back. It’s been nice, really. I spend more time outside of the internet now. But I believe in the fediverse, I think it’s the right thing to do. I still check up on lemmy daily, but I get much more value and human connection and only spend the time that is appropriate on lemmy instead of endlessly scrolling. Most days I end up in some Wikipedia rabbit hole. Just like the good ol’ days. Learning new things, meeting new people. That’s what I love about the internet.
It’s easy to feel that way even if you are a techy. It seems like being minimally neurodivergent is the abnormal here.
That being said, I’ve been introduced to many different ways of thinking that I wouldn’t have gained otherwise. Think of it like you’re different, but that’s ok because everyone here is different - and that makes them (and you) all the more beautiful for it (especially in the context of idea exchange). In fact, being the different one here will give you the perspective that many of the people who use Lemmy experience simply by existing which, in and of itself, is a valuable thing.
It’s what we all loved about the internet I think, before the web became… “that” (looking at the pile of shit the web has become).
But actually it’s not the web, not really. It’s the big tech platforms that most people seem to think is the internet now. It’s sad to watch how people log on to “Facebook” and not the general web anymore. And then Google in front of everything, like a big cancer growth.
Lemmy is not the new internet either I believe. But it’s here to show people that something else can exist. As soon as we let advertising in here though, it’s over.
I think the answer is not to gatekeep against advertising actively, but to have a platform that is resilient to that kind of thing. Like, if there were advertising on an instance people would fucking BOUNCE I think. And if it got somehow baked into the platform itself there would be a new fork with the advertising excised before the sun went down.
The beautiful thing about decentralisation is that if an instance tries to add ads, then you can go to a different instance and see the same content.
If an instance creates ad posts, your instance admin can block the whole instance.
Interestingly, the big instances seem to easily get enough donations to cover costs. I think that’s the great thing about this model: people are willing to donate when they know it’s not some big corporation making profit for shareholders.
adding to the old internet thing, using mojeek reminds me of the old search results! searched for something mildly obscure, actually got good results and also a porn site lmfao.
Honestly this is great, non-techy people making the transition is a good sign and something the system needs to gain mainstream appeal.
Also, people who aren’t techy are less likely to accept hacky workaround BS and complain until it’s fixed on a system-wide level, and that’s needed to mature the platform to something anyone can use. It’s getting there but it’s still got a lot of rough edges.
A way I have found to explain federated social media to people that seems to work is this: imagine reddit, but instead of one company with one administration owning the whole site, it’s a bunch of different reddits that are independently run and choose which other reddits they wish to associate with. When you log into one instance, you can automatically see and interact with all the other ones it chooses to associate with. You can have accounts on as many instances as you would like, even on instances that do not associate with each other.
I just say: “It’s like email. There are different email servers, but they can all talk to one another. If there are things you really like, you can subscribe to them, and if there are things you don’t like, you can block them.”
I enjoyed reading this. I came over from reddit when they started banning people for protesting. That showed me that reddit was not what I thought it was.
I’m a techy person. I run servers for friends and customers, partly with fediverse services on them, Lemmy being one of them. I donate both time and money to lemmy and other services I enjoy and use.
The fediverse is a great thing imo. I hope it succeeds.
Oh yes, good point. That’s a big part of my problem when it came to my reddit experience in the end. I mean shit, I was a redditor since 2009. It was hard to leave but also not.
Hey, I also was a Joey user. I am pretty tech savvy (I’m a software dev and a former sys admin). I’m not a Linux daily user though, so I still understand that out of place feeling. Like I have used Linux for things, but after working on my computer all day for work, I don’t exactly want to deal with roadblocks or tinkering on my computer in the evening.
I have also noticed that I spend less time scrolling on here than I did on Reddit, which is a good thing for me. It’s a place where I can satisfy that itch without getting lost in scrolling of posts or comment sections for hours.
Congrats! It’s scary, but this is when you finally have the freedom to live your unique life. Stay in touch with your close friends, everyone else will fade away. And remember, wear sunscreen.