My spouse was admitted to hospital for two weeks and had several geriatric roommates. The nurse asked one of them if she knew where she was, and she yelled “IN SATAN’S ASSHOLE”.
I’m actually involved in the filming of a local horror movie that’s going to be truly awful.
It’s about mermaid monsters. I help coordinate some of the water work and act as the production photographer and backup safety diver for the underwater scenes.
It’s going to be really, really bad. But it’s still fun!
Nurse here: we ask ‘orientation’ questions as part of our assessment
I had a younger patient going through the straits with hallucinations (newly diagnosed schizophrenia)… and I had been asking the same questions (as we do) a lot
So I asked them once again, “Do you know why you’re in the hospital?”. Their response: “Deez nuts!”
I always appreciate a good “Deez nuts” joke, but that one has been my favorite so far. The look of satisfaction on their face and the shitty smirk; they’re completely tied down with a guard because they would occasionally be violent… but hot damn, that was a zinger.
I counted their response as oriented— they know what they did lol
Just so we’re clear, it’s not obvious nor is the general public misunderstanding anything. There are not a lot of situations like that with basically any other thing that has been monetized. I am a filmmaker. Even if I directed, produced, and starred in the film, I cannot necessarily send you a copy for free even if I want to (legally). There are other parties involved that restrict what I can and can’t do with the product, typically film festivals until the festival circuit is done and then distributors.
This is very common and most people just kind of assume it to be the case with academic journals.
There are other parties involved that restrict what I can and can’t do
I’m going to guess it’s got something to do with the high cost of creating the actual film reel that gives creditors the power to dictate access to the film as per a contract.
It is different, but tbf academics are also reliant on external funding sources to conduct research. It’s not absurd to think that the grant writers or university administration might have some stipulations about the free distribution of research they paid for.
Have we forgotten what happened to Aaron Swartz? With the state of the world today, I naturally expect everything to be monetized, regardless of whether it makes any rational or ethical sense.
To be fair though, the people who fund the research are not the people who lose out if the publisher isn’t paid their £30. They are very often governmental or inter-governmental research agencies and programmes. Realistically it is rare for anyone except the publisher to care about free distribution. The publishers are however pretty vicious (e.g. Swartz’s case).
No idea why you chose to phrase this in a condescending way. I have no doubt that they would have been able to come up with any number of differences after having it pointed out that it wasn’t the case for scientific papers.
Honestly probably mediocre at both, but I like to think I’m better with a camera seeing as how I own several and get paid to use them, whereas I don’t even own a firearm lol
Can anyone point to the law on this? I am in science and still was under this impression. Why is film different? I do share papers but I always thought I was doing so in the shadows. When I want to republish an image I’ve created that I’ve used in another paper, I need to ask the publisher for permission to do so (this is pretty explicit) and then cite that source in the new publication. I’ve assumed the publisher now owns my words as well and that I can’t just share them with anyone. If that’s not true, what sets it apart from your film? Can I share it as much as I’d like? Can I just put all my PDFs on my institutional public-facing website? Does funding source matter at all?
Usually, for academic journals, you can retain most of your copyrights and grant a license to the journal. You have to pay attention to the options they give you when going through the publishing process, though. Because it does depend.
Some funding sources require that you retain certain copyrights in order to comply with things like public access mandates.
There are free services that let you send and receive on your own domain. I use zoho. I can send emails with SMTP, but unfortunately, you cannot read them other than by using their web interface in the free tier.
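For anyone wanting to script sends through a provider like this, Python’s standard `smtplib` is enough. A minimal sketch; the host, port, address, and password below are placeholders (Zoho’s actual settings may differ, so check your provider’s documentation):

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender, recipient, subject, body):
    """Build a plain-text message; sending is a separate step."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_smtp(msg, host="smtp.zoho.com", port=465,
                  user="me@example.com", password="app-password"):
    """Send over implicit TLS (port 465); values here are placeholders."""
    with smtplib.SMTP_SSL(host, port) as server:
        server.login(user, password)
        server.send_message(msg)
```

Most providers that offer custom-domain mail expose SMTP this way; the web-interface-only limitation mentioned above usually applies to reading (IMAP/POP), not sending.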
Every time I think about hosting my own mail server, I think back to the many, many, many times I’ve had to troubleshoot corporate email systems over the years. From small ones that ran on duct tape and prayers to big ones that were robust, high dollar systems.
98% of the time, the reason the messages aren’t coming or going is something either really obscure or really stupid. Email itself isn’t that complicated and it’s a legacy communications medium at this point. But it’s had so much stuff piled on top of it for spam and fraud prevention, out of necessity, and that’s where the major headaches come from. Honestly, it’s one service that, to me, is worth paying someone else to deal with.
I’ve been running mailcow for almost 2 years with no issues. I’m not doing anything major with it, mainly using it to send myself alerts on the environment, but it does work for external purposes if I want it to as well. Updating is easy and seamless. I did get greylisted almost immediately though, so I use SMTP2Go and it works great as a free relay for the amount of mail I generate.
FYI: Lots of managed switches and the more expensive wifi access points should be able to show the link status in their web interfaces. It should be pretty easy to figure out if they're running at 100M. (Sometimes some LEDs also light up in a different color.)
Have to say that I love how this idea congealed into “popular fact” as soon as people’s paychecks started relying on massive investor buy-in to LLMs.
I have a hard time believing that anyone truly convinced that humans operate as stochastic parrots or statistical analysis engines has any significant experience interacting with other human beings.
Less dismissively, are there any studies that actually support this concept?
Speaking as someone whose professional life depends on an understanding of human thoughts, feelings and sensations, I can’t help but have an opinion on this.
To offer an illustrative example
When I’m writing feedback for my students, which is a repetitive task with individual elements, it’s original and different every time.
And yet, anyone reading it would soon learn to recognise my style, the same as they could learn to recognise someone else’s, or how many people have already learned to spot text written by AI.
I think it’s fair to say that this is because we do have a similar system for creating text especially in response to a given prompt, just like these things called AI. This is why people who read a lot develop their writing skills and style.
But, and this is really significant, that’s not all I have. There’s so much more than that going on in a person.
So you’re both right in a way I’d say. This is how humans develop their individual style of expression, through data collection and stochastic methods, happening outside of awareness. As you suggest, just because humans can do this doesn’t mean the two structures are the same.
Idk. There’s something going on in how humans learn which is probably fundamentally different from current ML models.
Sure, humans learn from observing their environments, but they generally don’t need millions of examples to figure something out. They’ve got some kind of heuristics or other ways of learning things that lets them understand many things after seeing them just a few times or even once.
Most of the progress in ML models in recent years has been the discovery that you can get massive improvements with current models by just feeding them more and more data. Essentially brute force. But there’s a limit to that, either because there might be a theoretical point where the gains stop, or the more practical issue of only having so much data and compute resources.
There’s almost certainly going to need to be some kind of breakthrough before we’re able to get meaningfully further than we are now, let alone matching up to human cognition.
At least, that’s how I understand it from the classes I took in grad school. I’m not an expert by any means.
I would say that what humans do to learn has some elements of some machine learning approaches (Naive Bayes classifier comes to mind) on an unconscious level, but humans have a wild mix of different approaches to learning and even a single human employs many ways of capturing knowledge, and also, the imperfect and messy ways that humans capture and store knowledge is a critical feature of humanness.
The difference in people is that our brains are continuously learning and LLMs are a static-state model after being trained. To take your example about brute-forcing more data, we’ve been doing that since the second we were born. Every moment of every second we’ve had sound, light, taste, noises, feelings, etc, bombarding us nonstop. And our brains have astonishing storage capacity. AND our neurons function as both memory and processor (a holy grail in computing).
Sure, we have a ton of advantages on the hardware/wetware side of things. Okay, and technically the data side also, but the idea of us learning from fewer examples isn’t exactly right. Even a 5 year old child has “trained” far longer than probably all the major LLMs being used right now combined.
The big difference between people and LLMs is that an LLM is static. It goes through a learning (training) phase as a singular event. Then going forward it’s locked into that state with no additional learning.
A person is constantly learning. Every moment of every second we have a ton of input feeding into our brains as well as a feedback loop within the mind itself. This creates an incredibly unique system that has never yet been replicated by computers. It makes our brains a dynamic engine as opposed to the static and locked state of an LLM.
Could you point me towards one that isn’t? Or is this something still in the theoretical?
I’m really trying not to be rude, but there’s a massive amount of BS being spread around based off what is potentially theoretically possible with these things. AI is in a massive bubble right now, with life changing amounts of money on the line. A lot of people have very vested interest in everyone believing that the theoretical possibilities are just a few months/years away from reality.
I’ve read enough Popular Science magazine, and heard enough “This is the year of the Linux desktop” to take claims of where technological advances are absolutely going to go with a grain of salt.
Remember that Microsoft chatbot that 4chan turned into a Nazi over the course of a week? That was a self-updating language model using 2010s technology (versus the small-country-sized energy drain of ChatGPT-4).
But they are. There’s no feedback loop and continuous training happening. Once an instance or conversation is done, all that context is gone; it’s never integrated directly into the model as it happens. That, by contrast, is more or less the way our brains work: every stimulus, every thought, every sensation, every idea is added to our brain’s model as it happens.
This is actually why I find a lot of arguments about AI’s limitations as stochastic parrots very shortsighted. Language, picture, or video models are indeed good at memorizing some reasonable features from their respective domains and building a simplistic (but often inaccurate) world model where some features of the world are generalized. They don’t reason per se, but have really good ways to look up what typical reasoning would look like.
To get actual reasoning, you need to do what all AI labs are currently working on and add a neuro-symbolic spin to model outputs. In these approaches, a model generates ideas for what to do next, and the solution space is searched with more traditional methods. This introduces a dynamic element that’s more akin to human problem-solving, where the system can adapt and learn within the context of a specific task, even if it doesn’t permanently update the knowledge base of the idea-generating model.
A notable example is AlphaGeometry, a system based on an LLM plus structured search that solves complex geometry problems without human demonstrations, despite limited training data. Similar approaches are also used for coding, and for a recent strong improvement in reasoning on examples from the ARC challenge.
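The generate-ideas-then-verify split described above can be illustrated with a toy example. This is a deliberately crude stand-in: real neuro-symbolic systems like AlphaGeometry use a trained model to rank proposals and a far more sophisticated symbolic engine to verify them, whereas here the “proposer” just enumerates arithmetic expressions and the “verifier” checks them exactly:

```python
from itertools import product

def propose(numbers, ops=("+", "-", "*")):
    """Stand-in for a learned proposer: enumerate candidate expressions.

    In a real neuro-symbolic system a model would generate and rank
    these ideas; here we just produce them exhaustively.
    """
    for op_combo in product(ops, repeat=len(numbers) - 1):
        expr = str(numbers[0])
        for op, n in zip(op_combo, numbers[1:]):
            expr += f" {op} {n}"
        yield expr

def solve(numbers, target):
    """Symbolic verifier: check each proposal exactly against the target."""
    for expr in propose(numbers):
        # eval is safe here only because we constructed expr ourselves;
        # standard Python operator precedence applies.
        if eval(expr) == target:
            return expr
    return None
```

The point of the pattern is that the verifier is exact, so the proposer is free to be sloppy and creative; only proposals that survive the check count as answers.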
It’s not specifically related, but biological neurons and artificial neurons are quite different in how they function. Neural nets are a crude approximation of the biological version. Doesn’t mean they can’t solve similar problems or achieve similar levels of cognition, just that about the only similarity they have is “network of input/output things”.
But claims of what future performance will be as given by people with careers, companies, and life changing amounts of money on the line are also no guarantee either.
The world would be a very different place if technology had advanced as predicted not even ten years ago.
Not a guarantee, no. A very, very strong predictor though. You have to have some kind of evidence beyond just vibes to start making claims that this time is totally different from all the others before anyone should take you seriously.
Ehhh… It depends on what you mean by human cognition. Usually when tech people are talking about cognition, they’re just talking about a specific cognitive process in neurology.
Tech enthusiasts tend to present human cognition in a reductive manner that for the most part only focuses on the central nervous system. When in reality human cognition includes any way we interact with the physical world or metaphysical concepts.
There’s something called the mind body problem that’s been mostly a philosophical concept for a long time, but is currently influencing work in medicine and in tech to a lesser degree.
Basically, it questions if it’s appropriate to delineate the mind from the body when it comes to consciousness. There’s a lot of evidence to suggest that mental phenomena are a subset of physical phenomena. Meaning that cognition is reliant on actual physical interactions with our surroundings to develop.
There’s a lot we understand about the brain, but there is so much more we don’t understand about the brain and “awareness” in general. It may not be magic, but it certainly isn’t 100% understood.
Not necessarily, there are a number of modern philosophers and physicists who posit that “experience” is incalculable, and further that it’s directly tied to the collapse of the wave function in quantum mechanics (Penrose–Hameroff Orch-OR). I’m not saying they’re right, but Penrose won a Nobel Prize in physics and he says it can’t be explained by math.
I agree experience is incalculable but not because it is some special immaterial substance but because experience just is objective reality from a particular context frame. I can do all the calculations I want on a piece of paper describing the properties of fire, but the paper it’s written on won’t suddenly burst into flames. A description of an object will never converge into a real object, and by no means will descriptions of reality ever become reality itself. The notion that experience is incalculable is just uninteresting. Of course, we can say the same about the wave function. We use it as a tool to predict where we will see real particles. You also cannot compute the real particles from the wave function either because it’s not a real entity but a description of relationships between observations (i.e. experiences) of real things.
Yes, that’s physics. We abstract things down to their constituent parts, to figure out what they are made up of, and how they work. Human brains aren’t straightforward computers, so they must rely on statistics if there is nothing non-physical (a “soul” or something).
(working with the assumption we mean stuff like ChatGPT) mKay… Tho math and logic are A LOT more than just statistics. At no point did we prove that statistics alone is enough to reach the point of cognition. I’d argue no statistical model can ever reach cognition, simply because it averages too much. The input we train it on is also fundamentally flawed. Feeding it only text skips the entire thinking and processing step of creating an answer. It literally just takes text and predicts, based on previous answers, what’s the most likely text. It’s literally incapable of generating or reasoning in any other way than was already spelled out somewhere in the dataset. At BEST, it’s a chat simulator (or dare I say… a language model?), it’s nowhere near an intelligence emulator in any capacity.
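The “predicts the most likely next text” behaviour being argued about can be shown with a toy bigram model. Real LLMs are vastly larger and condition on whole contexts rather than one previous word, but this sketch captures the statistical core both sides of the thread are describing:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None
```

Note it can only ever emit words it has seen following that word in training, which is the “can’t reason beyond its dataset” point in miniature.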
Idk it’s not the worst name ever. Definitely sounds like a “kooky millennial parents wanted an interesting name” name. But there’s worse. Much worse. He should’ve told her where it came from though, kinda a dumb thing to not involve your wife in. You know. The name of her child.
Can’t you say the same about virtually any form of entertainment? The electricity that runs the server you used to post this doesn’t come from nowhere.
I don’t think this is a fair comparison. Fireworks launch a lot of nanoparticles, metals, and other harmful chemicals in the sky and directly worsen air quality while many Lemmy servers (lemmy.world included) use renewable energy.
It’s a totally valid point. Both waste your time and money to distract you for a brief moment. You can use all the renewables you want, but in the end the consumer is the product, and the product needs you to keep consuming it to justify its existence. We need pyrotechnics to exorcise ghosts as much as we need another season of that Marvel-Disney-Fox show to survive. Or the Steam summer sale to make us think we are saving money by buying more games. Unsub, unfollow, smash the bell button, and block shit more often.
You would be surprised how often parts need to be replaced in a data center. There is a lot more than servers in there to make all this happen. Then you need the device to read it on, and the infrastructure to get the bits to you… a lot of plastic in all that which won’t break down for a very long time too.
I think a fairer comparison in entertainment would be sport. On paper it doesn’t produce anything to better humanity, there’s a ridiculous amount of fuel used by teams and fans to travel especially when it comes to something like a World Cup because it’s on a global scale.
In reality, the world absolutely needs it because that’s what people use to entertain themselves. People can’t be mindless drones fucking about; they won’t just read books all the time or go camping every day. There’s something primal that just comes out when it comes to sport and most people can’t live without it.
I had the discussion the other day of how civilization would be different if humans followed the ‘have loads of babies at once and see which ones survive’ style of reproduction.
“Oh hi Sarah! How’re the kids?”
“Oh, little Jeremy wasn’t eating as much as the others so I threw him outside onto the road.”
I don’t know about you all, but I have been posting as an adult human male for a number of years now despite being a 4 year old Alaskan Malamute. No one seems to notice or care.
You absolutely can go have lunch elsewhere. I’ve been in a similar situation. If asked, you can simply tell them, “I enjoy having lunch by myself, it helps me recharge.” Also, most of the time, boundaries are set through action, not only words. Just do what you prefer without concern for what others will think or feel, while being polite with your words. Most people will pick up on your actions and eventually leave you be. I’ve had serious boundary issues in my family and I’ve had to learn quite a bit about forming proper sustainable boundaries.
Seems like a distinction without a difference; I sort of assumed that’s all the OP meant. We don’t know anything before the beginning, after all. Like you said.
No, in our current best-supported model of the universe (Lambda-CDM) the concept of “before” the Big Bang is meaningless. It is the apex of the spacetime “bell” from which everything emerged.
But something must have triggered the big bang. The model might not support this, but this only means the model is insufficient to describe what goes beyond our known universe.
That’s a philosophical question, not a scientific one, since it’s by definition beyond the ability of science to answer. It suffers from the infinite regress problem which many people invoke God to solve (the uncaused cause) but that’s not very satisfying, is it?
That’s a separate claim you’d have to prove. We have no evidence of something triggering it, we don’t even know that it would need to be triggered. All of our observations occur inside this universe, therefore we have no idea at all if cause-and-effect even applies to the universe as a whole. The short answer is: we don’t know and have no reason to posit the need for something else.
What does it mean for something to be “beyond” everywhere or before time?
That’s nonsense. You think some massive amount of matter just materialized from nothing into a singular point? How do you think all the stuff managed to get there in the first place?
Based on the comment you’re replying to, I assume they would say “no, nothing materialized from nothing because there wasn’t a ‘before’ in which nothing could have existed”
I’m not a physicist, I don’t know one way or another. But it’s possible that there’s a leading explanation for the formation of the universe based on a mathematical model that predicts exactly one big bang.
Physicists don’t even know why it started expanding to begin with. We also don’t know if there’s anything outside of our own universe. We also don’t know if our universe is curved and folded in on itself, which would make several mathematical calculations for the size of the universe and what was going on with expansion a bit easier to try and work out (I’m also not a physicist. These are just things I’ve read about), or if it’s flat. Their best measurement right now is that it’s flat. But they still aren’t sure, because they don’t know how big space actually is right now. If it’s big enough, it could still be curved in on itself, but we just can’t measure the flatness of two points far enough apart from each other to notice the curve. An example I was given was that it would be like trying to show the earth was round by measuring an area of a sandbox.
It wasn’t matter that did the banging, it was space-time itself. Have you heard how we know that the universe is expanding? Well we can extrapolate backwards and find the point in time where space-time was just a point: “the big bang”. Not only was there no space-time for matter to exist in before the big bang, there was no concept of “before” because that word only makes sense in the context of spacetime. So yeah, the person you’re replying to is right, “before the big bang” is a nonsense phrase.
They keep finding inconsistencies to that. Groupings and radiation and gap distances that don’t line up with the expansion expectations.
Then the other more applicable point is that what makes you think “the big bang” was the first big bang? You think mass and entropy and radioactive decay and all this shit in the nothingness of space all started with “the big bang” but it only happens once and then in a ridiculously long time from now when everything reaches absolute 0 and there’s no energy left anywhere, that it’s just done? A one trick pony?
Well what if it all eventually manages to head back to its origin point after that and it makes another big bang that kicks off again?
Then the other more applicable point is that what makes you think “the big bang” was the first big bang?
Well, again you’re using terms of time to describe the birth of time, so no that’s not what I think because that statement doesn’t make sense. But I’m being pedantic, I’m sure you meant “what if ours wasn’t the only big bang?” And to that I can confidently say “maybe?”. It’s an interesting question but it’s just not a scientific question. According to big bang theory, our universe, space-time and all the matter and energy in it, began with the big bang and we still exist inside it. Other big bangs, if they exist in some higher medium, are simply outside our scope. We just can’t design tests to answer those questions. Best we can tell scientifically is where our universe started.
You think mass and entropy and radioactive decay and all this shit in the nothingness of space all started with “the big bang” but it only happens once and then in a ridiculously long time from now when everything reaches absolute 0 and there’s no energy left anywhere, that it’s just done? A one trick pony?
Again maybe? You’re kinda putting words in my mouth. Idk if our universe is the only one, it’s impossible to know. My original point was that time as defined by general relativity could not exist before the big bang because it was itself a product of the big bang.
What I meant by “what if it wasn’t the first big bang?” was: what if it wasn’t the first of our own universe? I mean, what if space will at some point stop expanding and start contracting. Pull everything back close together again. Then there’s another expansion just like what we’re currently in now. The best scientists, physicists, and mathematicians haven’t been able to work out a lot of major things about our universe or how it works, or even if it’s flat or folded in on itself yet. The data and tests/measurements don’t exist yet. So until that can get worked out into a theory, it’s silly to say time began at the expansion.
Last I heard scientists were leaning more toward the ever-accelerating expansion “heat death” theory than the expansion-to-contraction “big crunch” theory, but it’s not set in stone yet. But even if “big crunch” came out on top, assuming that the life of the universe is cyclical is pure conjecture. It could be right, but it’s unprovable, so we’ll never know.
As for the existence of space-time before the big bang, I don’t know what to tell you, I’m just quoting theory. By definition, the big bang is when space-time came to be. If the big bang was the result of an ancestor universe’s big crunch, we can’t assume that the same space-time carried over, let alone that the ancestor universe even had something analogous to space-time. Barring some insanely massive breakthrough, it’s simply unknowable.
But the speed of light changes. It can be slowed down. It just doesn’t change while moving through outer space (a vacuum). The maximum speed is the constant.
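For concreteness, the slowdown in a medium is usually expressed through the refractive index n, with the light’s phase velocity given by v = c / n. A quick sketch; the indices used are typical approximate values:

```python
C_VACUUM = 299_792_458  # speed of light in vacuum, m/s (exact by definition)

def speed_in_medium(refractive_index):
    """Phase velocity of light in a medium: v = c / n."""
    return C_VACUUM / refractive_index

# Approximate refractive indices: water ~1.33, common glass ~1.5
speed_in_water = speed_in_medium(1.33)   # roughly 2.25e8 m/s
speed_in_glass = speed_in_medium(1.5)    # roughly 2.0e8 m/s
```

The vacuum value c itself never changes; what varies is how fast light propagates through a given material, which is the distinction being made above.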
It’s only something we can speculate about. It represents a limit to our ability to gather any evidence that might validate those speculations. We can’t say what happened before it, because time itself was one of the things that popped out of the big bang. What would “before” even mean if time didn’t exist?
Even if time and matter did exist in some sense, we can’t get any evidence for it. We can’t make any kind of useful theory about it. At best, we can make wild guesses.
We could also just say “we don’t know what it was like”. Russell’s Teapot suggests we should instead say there was nothing, because we can’t prove there was anything.
There’s no evidence to point to the big bang as being the very beginning, though. There may well have been a billion big bangs before this one. Each one taking so long to reset and start anew that to us, it might as well be seen as about infinity. Humanity outright doesn’t have the knowledge of what happens on extremely large or extremely small scales. We don’t really have a clue for what actually made space start to expand in the first place, so we don’t know if it’s ever happened before, or even if it happened anywhere else at any other time but outside of our observable universe.
The works of Roger Penrose have shown that it’s conceivable or potentially even provable that at the very largest scales of time and space, there is no meaningful difference between the accelerating “cold” end of our universe and the colossal expansion that began the universe as we know it, and in fact those two states are perpetually cycling, birthing new universes from the explosion of old ones. This is based on the idea that when there is no more physical mass in the universe, you can look at the universe from a reference frame that only looks at the geometry of the energy expanding through space, and it’s identical to the beginning states.
I would recommend the PBS Spacetime YouTube channel for far better explanations of conformal cyclic cosmology than my feeble mind can try to relate.
How do you think all the stuff managed to get there in the first place?
You’re still thinking like a meat-monkey. There are stranger states out there than one can imagine, and that’s not hyperbole. There was no causality before expansion, because there were no meaningful interactions, and no spacetime in which interactions could occur.
You’re always going to have a hard time imagining this, because again, you are a human. We all are, none of us can imagine states of the universe without time and space.