If anyone wants to grasp the basics: here is some fun reading (leading on to some beautiful math). Changing the idea of parallelism (the parallel postulate) leads to hyperbolic geometry and other fun stuff. :)
Please correct my layman understanding if I’m wrong here. But isn’t everything traveling in a straight line until an external force is applied? For example, the Earth orbiting the Sun is traveling in a straight line through curved spacetime. Also, if you jump, from the moment you leave the ground until you touch it again coming back down, you were traveling in a straight line.
What they are getting at is that gravity is not a force so much as your mass trying to travel in a straight line through curved spacetime. The weight you feel is because the surface of the earth is in your way.
Get into low earth orbit and that straight path has you going in apparent circles around the planet. You are very much within the earth’s gravity but you don’t feel “weight” because the surface of the earth is no longer blocking your path. You still have mass and inertia and all that, of course.
Also if you jump, the moment you leave the ground until you touch it again coming back down you were traveling in a straight line.
Relative to the body of the Earth, including its rotation, it would be an arc; including its axial tilt, the path becomes 3D; and if we also include the travel around the Sun, that elongates it along the orbit. So, uh.
Not true, as when space bends, it bends the rulers and compasses too. We experience no spatial distortion.
A person traveling near the speed of light doesn’t feel like time is slower for them (but it is and we can measure it)
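The measurable slowdown the parent comment mentions is quantified by the Lorentz factor. A quick sketch (function name is mine; velocity is given as a fraction of the speed of light):

```python
import math

def time_dilation_factor(v_fraction_of_c):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2).

    One second on the traveler's clock corresponds to gamma seconds
    for a stationary observer.
    """
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# At 99% of light speed, gamma is about 7.09: the traveler ages
# roughly one second for every seven of the observer's.
print(time_dilation_factor(0.99))
```

The traveler’s own clock looks perfectly normal to them, exactly as the comment says; the factor only shows up when the two clocks are compared.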
The principle is equivalent.
That said, it’s not a straight line in any standard topology I am aware of.
Sure, you could CREATE a topological framework where this would be considered a straight line, but there is no real-world model that comes even close without enormous amounts of mass concentrated in relatively static regions, and EVEN THEN it would only be straight for a predetermined instant before the masses deforming spacetime began interacting with each other.
That’s the problem with spacetime deformations: almost no layman takes into account the ridiculous amounts of static mass needed to make those strange topologies.
Completely agree, this is garbage, and I’ve bitched about it in the past. Annoyingly, both the Gmail and Outlook widgets are far better, but I don’t want either of those on my phone.
You’re just salty because you missed that one event at that one place with those specific people. Apple put it in your calendar, how is that not helpful!? /s
EDIT: I was making a dumb joke about how it didn’t show you any details, even if it would take up the same space…
Aw man I have to deal with jerks letting their dogs roam around the town almost getting flattened by cars and jumping on people and now this is a thing too???
Ikr? The Chocolate Rain one literally changed my perspective on life. And I wish I could go back in time and listen to Wow Wow again for the first time. Don’t was also a banger despite being an outtake.
Instagram’s login pop-ups appear once you’ve seen around 12 posts from a user, which is really annoying. And if you’re on mobile, open Instagram in the browser, and then log in, Instagram still asks you to log in. How weird!
Pro tip: you can turn the link into ddinstagram to embed on services like Discord and other ones with embeds. This way you don’t have to visit the site
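The swap the tip describes is just a hostname substitution; a trivial sketch (function name is mine):

```python
def fix_embed(url):
    """Swap instagram.com for ddinstagram.com so embed-friendly
    services (Discord etc.) can show a preview instead of a login wall."""
    # Replace only the first occurrence, so path segments are untouched.
    return url.replace("instagram.com", "ddinstagram.com", 1)

print(fix_embed("https://www.instagram.com/p/abc123/"))
# https://www.ddinstagram.com/p/abc123/
```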
When ya upload a file to a Claude project, it just keeps it handy, so it can reference it whenever. I like to walk through a kind of study session chat once I upload something, with Claude making note rundowns in its code window. If it’s a book or paper I’m going to need to go back to a lot, I have Claude write custom instructions for itself using those notes. That way it only has to refer to the source text for specific stuff. It works surprisingly well most of the time.
If we’re speaking of transformer models like ChatGPT, BERT or whatever: They don’t have memory at all.
The closest thing that resembles memory is the maximum accepted length of the input sequence (the context window) combined with the attention mechanism. (If left unmodified, though, this leads to a quadratic increase in computation time as that sequence grows.) And since the attention weights are a learned property, in practice it is likely that earlier tokens of the input sequence get basically ignored the further they lie “in the past”, as they usually do not contribute much to the current context.
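To make the quadratic cost concrete, here is a toy NumPy sketch of single-head, unbatched scaled dot-product attention (sizes and names are mine); the score matrix is seq_len × seq_len, which is where the quadratic blow-up comes from:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention for one head, no batching.

    The scores matrix is (n, n) for n input tokens: every token
    attends to every other token, hence O(n^2) time and memory.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # (n, d)

n, d = 6, 4                                          # toy sizes
x = np.random.default_rng(0).normal(size=(n, d))
out = attention(x, x, x)                             # self-attention: q = k = v
print(out.shape)                                     # (6, 4)
```

Doubling the sequence length quadruples the size of that hidden (n, n) score matrix, which is exactly the scaling problem mentioned above.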
“In the past”: Transformers technically “see” the whole input sequence at once. But they are equipped with positional encoding which incorporates spatial and/or temporal ordering into the input sequence (e.g., position of words in a sentence). That way they can model sequential relationships as those found in natural language (sentences), videos, movement trajectories and other kinds of contextually coherent sequences.
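The classic positional encoding referred to here is the sinusoidal one; a minimal sketch (function name is mine):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding (d_model assumed even):

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))

    Added to token embeddings so the model can tell positions apart
    even though it sees the whole sequence at once.
    """
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]        # (1, d_model / 2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dims: sine
    pe[:, 1::2] = np.cos(angles)                 # odd dims: cosine
    return pe

pe = sinusoidal_positional_encoding(8, 16)
print(pe.shape)                                  # (8, 16)
```

Each position gets a distinct pattern of phases, so ordering information survives even though attention itself is permutation-invariant.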
I’d still call that memory. It’s not the present; arguably, for a (post-training) LLM, the present consists entirely of choosing probabilities for the next token, and there is no notion of future. That’s really just a choice of interpretation, though.
During training they definitely can learn and remember things (or at least “learn” and “remember”). Sometimes despite our best efforts, because we don’t really want them to know a real, non-celebrity person’s information. Training ends before the consumer uses the thing though, and it’s kind of like we’re running a coma patient after that.
Evolution is a result of surviving environmental stressors and, well, surviving in general.
Case in point: this person’s evolutionary lineage involved surviving overcrowded environments by causing everyone else to have aneurysms with just their words.