askscience

TauZero , in Does physics ever get vague?

No, physics is never vague. Some problems are currently computationally intractable beyond a specific level of accuracy (like predicting the weather more than 2 weeks out), and some questions we do not know answers yet but expect to find answers to in the future (like why did the big bang happen). But there is never an element of the mysterious or spiritual or “man can never know”.

Popular science physics often gets mysterious, but that is a failure of popularization. Like the observer effect in quantum physics, which is often depicted as a literal eyeball watching the photons go through a double slit (?!). This may cause a naive viewer to mistakenly start thinking there is something special or unique about the human eyeball and the human consciousness that physics cannot explain. It gets even worse - one of the most popular double slit videos on youtube for example is “Dr. Quantum explains the double slit experiment” (not even gonna link it) which is not only laughably misleading, but upon closer examination was produced by a literal UFO cult, and they surreptitiously used the video to funnel more members.

Or the “delayed choice quantum eraser experiment” which confounded me for years (“What’s that? I personally can make a choice now that retroactively changes events that have already happened in the past? I have magical time powers?”), until I grew tired of being bamboozled by all its popular science depictions and dug up the actual original research paper on which it is based. Surprise! Having read the paper I have now understood exactly how the experiment works and why the result is sensible and not mysterious at all and that I don’t have magical powers. Sabine Hossenfelder video on youtube debunking the delayed-choice quantum eraser was the first and so far one of only two videos I have seen in the years since that have also used the actual paper. This has immediately made me respect her, regardless of all the controversy she has accumulated before or since.

Donjuanme ,

I believe Heisenberg says there’s vagueness in the amount of things we can know at once. But I agree there’s nothing we shouldn’t be able to know, only things we can’t know simultaneously, which imo is “vagueness”. However, the uncertainty principle is something I hope falls some day with better measurement devices than we had a hundred years ago.

Also, everyone should listen to Sabine; she’s among the least biased science educators imo. People need to be really careful what they learn from YouTube creators - in fact, that was the subject of a recent Sabine video!

FooBarrington ,

The uncertainty principle fundamentally can’t fall. It’s not a limitation of our measurement devices, it’s a fundamental limitation of physics that, as far as we know, can’t be broken.

Also, Sabine Hossenfelder has horrible takes regarding trans people, so I’d take anything from her beyond her immediate field with a giant grain of salt.

TauZero ,

On the subject of Heisenberg Uncertainty - even there I blame popular science for having misled me! “You can’t know precise position and momentum at once” - sounds great! So mysterious! If you dig a little deeper, you might even get an explanation along the lines of: to measure the position of something, you have to bombard it with particles (photons, electrons), and when it’s hit, its velocity will change in a way you do not know. The smaller that something is, and the more you bombard it to get a more precise position, the more uncertainty you will get.

All misleading! It was not until having taken an actual physics class where I learned how to calculate HU that I realized that not only is HU the result of simple mathematics, but that it also incidentally solves the thousands-years-old Zeno Paradox almost as a side lemma - a really cool fact that I was taught nowhere before!

Basically the wavefunction is the only thing that exists. The function for a single particle is a value assigned to every point in space, the values can be complex numbers, and the Schroedinger equation defines how the values change over time, depending on their nearby values in the now. That function is the particle’s position (or rather its square absolute magnitude) - if it is non-zero at more than one point we say that the particle is present in two places at once. What is the particle’s velocity? In computer games, each object has a value for a position and a value for a velocity. In quantum mechanics, there is no second value for velocity. The wavefunction is all that exists. To get a number that you can interpret as the velocity, you need to take the Fourier transform of the position function. And you don’t get one number out, you get a spectrum.

In one dimension, what is the Fourier transform of the delta function (a particle with exactly one position)? It is a constant function that is non-zero everywhere! (More precisely, it is a corkscrew in the complex values, where the angle rotates around but the magnitude remains the same.) A particle with one position has every possible momentum at once! What is the Fourier transform of a complex-valued corkscrew? A delta function! Only a particle that is present everywhere at once can be said to have a precise momentum! The chirality of the particle’s corkscrew position function determines whether it is moving to the left or to the right. Zeno could not have known! Even if you look at an infinitesimal instant of time, the arrow’s speed and direction are already well-defined, encoded in that arrow’s instantaneous position function!
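In symbols (standard Fourier conventions; this summary is mine, not from the thread): a definite position transforms into a flat-magnitude corkscrew over all momenta, and a corkscrew over all positions transforms into a definite momentum:

$$\mathcal{F}[\delta(x - x_0)](k) = \frac{1}{\sqrt{2\pi}}\, e^{-i k x_0}, \qquad \mathcal{F}\!\left[\frac{1}{\sqrt{2\pi}}\, e^{i k_0 x}\right](k) = \delta(k - k_0)$$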

If you try to imagine a function that minimizes uncertainty in both position and momentum at once, you end up with a wavepacket - a normal(?)-distribution-shaped peak that is equally minimally wide in both position and momentum space. If it were any narrower in one, it would be much wider in the other. That width squared is precisely the minimum possible value of Heisenberg Uncertainty in that famous Δx*Δp >= ħ/2 equation. It was never about bombardment at all! It was just a mathematical consequence of using Fourier transforms.
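If you want to watch the Δx·Δk >= 1/2 bound fall out of the mathematics, here is a minimal numerical sketch (my own code, with ħ = 1 so momentum is just the wavenumber k; the Gaussian wavepacket saturates the bound):

```python
import numpy as np

# Position grid and a Gaussian wavepacket psi(x) ~ exp(-x^2 / (4 sigma^2))
n, L, sigma = 4096, 200.0, 2.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)             # normalize

# The momentum-space wavefunction is the Fourier transform of psi(x)
k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
phi = np.fft.fftshift(np.fft.fft(psi))
phi /= np.sqrt(np.sum(np.abs(phi)**2) * (k[1] - k[0]))  # normalize

def rms_width(grid, density, step):
    mean = np.sum(grid * density) * step
    return np.sqrt(np.sum((grid - mean)**2 * density) * step)

dx_w = rms_width(x, np.abs(psi)**2, dx)                 # Delta x = sigma
dk_w = rms_width(k, np.abs(phi)**2, k[1] - k[0])        # Delta k = 1/(2 sigma)
print(dx_w * dk_w)                                      # ~0.5, i.e. hbar/2 with hbar = 1
```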

FlowVoid ,

Even once you understand that the uncertainty principle is not the same as the observer effect, I think it’s still mysterious for the same reason that “the wavefunction is the only thing that exists” is mysterious.

If anything, it’s more mysterious once you understand the difference. People are more willing to accept “Your height cannot be measured with infinite precision” than “Your height fundamentally has no definite value”, but the latter is closer to the truth than the former.

FlowVoid , (edited )

I wouldn’t say that Sabine is among the “least biased”. She strongly advocates for superdeterminism, and her videos on the subject presume it is true even though it is still unproven and currently accepted only by a minority of physicists.

Contramuffin ,

I am generally somewhat skeptical of your comment. Sure, I hadn’t heard about Sabine’s video on the quantum eraser, but I don’t necessarily think that it proves the idea that physics is never vague or unknown.

Perhaps it is different in physics than in my own field, but if you read enough primary papers and enough lit reviews, at least in my field, you’ll see some common themes come up. Things such as “further research is required to determine this mechanism,” “the factors that are involved are unknown,” “it is unclear why this occurs.” Actually, your suggestion that nothing is vague runs entirely counter to my entire field of science. When we introduce ourselves in our field, we start off with a sentence about what we do not know. And perhaps it is my bias, having worked in my field, but I cannot see how any scientist could possibly say that nothing is vague.

My interpretation is that “science is not vague” is itself a symptom of popular science. “Science is mystical” is simply a symptom of a slightly different disease - the disease of poor popular science communication. But I think that’s distinctly different from the question, which asks whether anything is vague. I’d love to hear your thoughts on the matter.

TauZero ,

Oh yeah for sure, I don’t mean at all to say that all questions have been answered, and even the answers that we do have get more and more vague as you move down the science purity ladder. If all questions were solved, we would be out of a job for one! But I choose to interpret OP’s question as “is there anything unknowable?”. That’s the question relevant to our world right now, and I often disagree with the world view implied by popular science - that the world is full of wonder but also mystery. The mystery is not fundamental, but rather an expression of our individual and collective ignorance. There are even plenty of questions, like the delayed-choice quantum eraser, that have already been solved, and yet they keep popping up as examples of the unknowable, one even sniped me in this very thread (hi there!). Then people say “you do not even know whether eating eggs is good for you” and therefore by implication we shouldn’t listen to scientists at all. In that sense, the proliferation of the idea of mystery has become dangerous. The answer to unanswered questions is not gnosticism, it is as you said “further research” 😄!

Contramuffin ,

Thank you for the thoughtful response. I see that we interpreted the question differently, based on what we thought was the issue of science communication. Which I think is really interesting!

I see what you mean - the people who impose their fantasies onto the science, who seemingly think there is some sort of “science god” who determines what fact is true on which days. Certainly, they are a problem. My experiences with non-scientific folk have actually usually been something of the opposite. They think that science is overly rigid and unchanging. They believe that science is merely a collection of facts to be memorized and models to be applied. Perhaps this is just the flip side of the same problem (maybe these people interpret changes in our knowledge to be evidence that there is no such thing as true facts?)

The difference in interpretation might stem from a difference in our fields. I assume you study physics. And I must assume that scientific rigor in physics depends on being certain about your discoveries. In my field (disease and pathogenesis), the biggest challenge is, surprisingly enough, convincing people that diseases are important things that need to be studied. Or perhaps that’s not a big surprise, given the public’s response to COVID-19. Even grant readers have to be convinced that there is merit in studying your disease of interest.

When I speak about my research to non-scientific people, a lot of the time the response is simply, “why not just use antibiotics? Why do we care about how diseases happen when we can just treat them?” A lot of my field, even in undergraduate programs, is dedicated to breaking down the notion that “we know enough, so don’t bother looking deeper.” I think there’s a very strong mental undercurrent in my field that we know next to nothing, and that we need to expand our knowledge very quickly before conventional medical science, especially our overreliance on antibiotics, gives out and fails. For instance, did you know that one of the most fundamental infection-detecting systems in our bodies (pattern recognition receptors) was discovered in mammals just over 20 years ago? The idea of pattern recognition receptors is literally only college-age.

So I actually find it interesting that you interpret the question so differently. It’s a testament to how anti-science rhetoric manifests in different ways in different fields.

TauZero ,

Thank you for your perspective! I found it really informative!

FlowVoid , (edited )

There are even plenty of questions, like the delayed-choice quantum eraser, that have already been solved

No, it has not been solved. At least not solved to the satisfaction of many physicists.

In one respect, there is nothing to solve. Everyone agrees on what you would observe in this experiment. The observations agree with what quantum equations predict. So you could stop there, and there would be no problem.

The problem arises when physicists want to assign meaning to quantum equations, to develop a human intuition. But so far every attempt to do so is flawed.

For example, the quantum eraser experiment produces results that are counterintuitive to one interpretation of quantum mechanics. Sabine’s “solution” is to use a different interpretation instead. But her interpretation introduces so many counterintuitive results for other experiments that most physicists still prefer the interpretation that can’t explain the quantum eraser. Which is why they still think about it.

In the end, choosing a particular interpretation amounts to choosing not if, but how QM will violate ordinary intuition. Sabine doesn’t actually solve this fundamental problem in her video. And since QM predictions are the same regardless of the interpretation, there is no correct choice.

TauZero ,

Have we watched the same Sabine video? The delayed choice quantum eraser has nothing to do with interpretations of quantum mechanics, at least insofar as every interpretation (Copenhagen, de Broglie-Bohm, Many-Worlds) predicts the same outcome, which is also the one observed. The “solution” to the DCQEE is a matter of simple accounting. And every single popular science DCQEE video GETS IT WRONG. The omission is so reckless it borders on malicious IMO.

For example, in that PBS video linked in this very thread, what does the host say at 7:07?

https://mander.xyz/pictrs/image/3c3f75a3-816e-4a7b-91e5-53a57ee5dc69.jpeg

If we only look at photons whose twins end up at detectors C or D, we do see an interference pattern. It looks like the simple act of scrambling the which-way information retroactively [makes the interference pattern appear].

This is NOT WHAT THE PAPER SAYS OR SHOWS! On page 4 it is clear that figure R01 is the joint detection rate between the screen and detector C only! (Screen = D0, C = D1, D = D2, A = D3, B omitted.) If you look at photons whose twins end up at detectors C inclusive-OR D, you DO NOT SEE AN INTERFERENCE PATTERN. (The paper doesn’t show that figure; you have to add figures R01 and R02 together yourself, and the peaks of one fill the troughs of the other because they are offset by phase π.) You get only 2 big peaks in total, just like in the standard which-way double slit experiment. The 2 peaks do NOT change retroactively no matter what choices you make! You NEED the information of whether detector C or D was activated to determine which group (R01 or R02) to assign each detection event to! Only after you do the accounting can you see the two overlapping interference patterns within the data you already have, which itself does not change. If you consumed your twin photon at detector A or B to record which-way information, you cannot do the accounting! You only get one peak or the other (figure R03).

It’s a very tiny difference between “at C, or at D” (each group considered separately) and “at C inclusive-OR D” (the two groups merged), but in this case it makes ALL the difference. For years I was mystified by the DCQEE and how it supposedly demonstrated retrocausality, and it turns out every single video simply lied to me.
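For anyone who wants to see the accounting with their own eyes, here is a toy numpy sketch (the envelope, fringe spacing, and rates are all made up; the only faithful features are the phase-π offset between the two tagged patterns and their exact cancellation when merged):

```python
import numpy as np

x = np.linspace(-4, 4, 801)                           # position along the screen D0
envelope = np.exp(-(x - 1)**2) + np.exp(-(x + 1)**2)  # which-way two-hump envelope

# Joint detection rates, tagged by which eraser detector the twin photon hit.
# The two hidden fringe patterns are offset by a phase of pi:
R01 = 0.5 * envelope * (1 + np.cos(6 * x))            # screen hits whose twin hit C
R02 = 0.5 * envelope * (1 - np.cos(6 * x))            # screen hits whose twin hit D

# "C inclusive-OR D": merge the two groups and the fringes cancel exactly,
# leaving only the two big peaks. Nothing on the screen changes retroactively.
total = R01 + R02
assert np.allclose(total, envelope)

# Only by sorting each screen hit into group R01 or R02 after the fact, using
# the record of which eraser detector fired, do the fringes become visible.
```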

FlowVoid , (edited )

Right, but in order to get the observed effect at D1 or D2 there must be interaction/interference between a wave from mirror A and a wave from mirror B (because otherwise why would D1 and D2 behave differently from D3 and D4?).

And that’s a problem for some interpretations of QM. Because when one of the entangled photons strikes the screen, its waveform is considered to have “collapsed”. Which means the waveform of the other entangled photon, still in flight, must also instantly “collapse”. Which means the photon still in flight can be reflected from mirror A or mirror B, but not both. Which means no interaction is possible at D1 or D2.

TauZero ,

It’s not a problem for Copenhagen if that’s the interpretation you are referring to. Yes, the first photon “collapses” when it strikes the screen, but it still went through both slits. Even in Copenhagen both slit paths are taken at once; the photon doesn’t collapse when it goes through the slit, it collapses later. When the first photon hits the screen and collapses, that doesn’t mean its twin photon collapses too. Where would it even collapse to, one path or the other? Why? The first photon didn’t take only one path! The twin photon is still in flight and still in superposition, taking both paths, and reflecting off both mirrors.

FlowVoid , (edited )

When the first photon hits the screen and collapses, that doesn’t mean its twin photon collapses too.

Yes, it does. By definition, entangled particles are described by a single wave function. If the wave function collapses, it has to collapse for both of them.

So for example, an entangled pair of electrons can have a superposition of up and down spin before either one is measured. But if you detect the spin of one electron as up, then you immediately know that the spin of the second electron must be down. And if the second electron must be down then it is no longer in superposition, i.e. its wave function has also collapsed.

TauZero ,

Ok, I thought about it some more, and I want to make a correction to my description! The twin photon does collapse, but it doesn’t collapse to a single point or a single path. It collapses to a different superposition, a subset of its original wavefunction.

I understand it is an option even under Copenhagen. So in your two-electron example, where you have 1/sqrt(2)(|z+z-> + |z-z+>), when you measure your first electron, what if instead of measuring it along z+ you measure it along z+45°? It collapses into up or down along that axis (let’s say up), and the entangled second electron collapses too, but it doesn’t collapse into z-135°! The second electron collapses into a superposition of (I think) sin(22.5°) |z+> + cos(22.5°) |z->. I.e. when you measure the second electron along z+, you get roughly 15% up and 85% down. The second electron is correlated with the first, but it is no longer the exact opposite of the first, because the axis you measured the first at was not z+ but inclined to it. There exists no axis along which you could measure the second electron and get 100% up, because it is not a pure state, it is still in superposition.
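Here’s a quick numerical sanity check of those amplitudes (my own toy code, standard spin-1/2 conventions; it prints roughly [0.15, 0.85] for the second electron measured along z):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)                    # |z+>
dn = np.array([0, 1], dtype=complex)                    # |z->
psi = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)  # the entangled pair

theta = np.pi / 4                                       # measure electron 1 at z+45 deg
n_up = np.cos(theta / 2) * up + np.sin(theta / 2) * dn  # "up" along that axis

# Project electron 1 onto |n+> (it came out "up") and normalize
proj = np.kron(np.outer(n_up, n_up.conj()), np.eye(2))
post = proj @ psi
post /= np.linalg.norm(post)

# Electron 2's conditional amplitudes in the z basis
amp2 = n_up.conj() @ post.reshape(2, 2)                 # contract away electron 1
amp2 /= np.linalg.norm(amp2)
print(np.abs(amp2)**2)   # ~[0.146, 0.854]: a sub-superposition, not 100% either way
```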

So back to the quantum eraser experiment, when the first photon hits the screen D0 and collapses, say at position 1.5, the twin photon collapses to a sub-superposition of its state, something like 80% path A and 20% path B. It still takes both paths, but in such a manner that if you choose to measure which-path information at detector D3 it will be more strongly correlated with path A, and if you choose to measure the self-interference signal from the mirror at D1 or D2, it will still self-interfere and will be more strongly correlated with detector D1. What do you think?

FlowVoid ,

In the electron example, if the two electrons are entangled then the wave function must be shared. The new superposition for the second electron would therefore be shared with the first electron. So if you measured the second electron along z+ and got up, then if you measured the first electron again, this time along z+, it would give down.

Likewise if the twin photon is still in superposition, then the first photon is also in superposition. Which is hard to accept in the Copenhagen interpretation, given that the first photon has been absorbed. If absorption doesn’t completely collapse a wave function, then what does?

TauZero ,

So if you measured the second electron along z+ and got up, then if you measured the first electron again, this time along z+, it would give down.

Right! So what happens when you have two z+z- entangled electrons, and you measure one along z+45° and then the other along z+0°? What would happen if you measure the second electron along z+45° as well?

FlowVoid , (edited )

Entangled electrons are entangled in all directions. If you measure one along any direction, you can completely predict the measurement of its pair in the same direction.

In other words, measuring one along X and its pair at Y is equivalent to measuring one along X and then measuring the same one again at Y (accounting for the sign shift in the pair, of course).

TauZero ,

Hmm interesting. I may have been mistaken about the electrons only being entangled in a single direction. I thought that if you prepared a pair of electrons in state 1/sqrt(2) (|z+z-> + |z-z+>) and then measured it in y there would be no correlation, but based on: …stackexchange.com/…/intuition-for-results-of-a-m…
…stackexchange.com/…/what-is-the-quantum-state-of…
if I had done the 90° rotation properly, the math works out such that the electrons would still be entangled in the new y+ basis! There is no way to entangle them in z alone - if they are entangled in z they are also entangled in x and y. My math skills were 20 years rusty, sorry!

I still think my original proposition - that in the DCQEE under Copenhagen, an observation that collapses one photon collapses the other photon to a sub-superposition - can be salvaged. In the second stackexchange link we are reminded that for a single electron, the superposition state 1/sqrt(2) (|y+> + |y->) is the same as the |z+> state! They describe the same wavefunction psi, expressed in different bases: (y+,y-) vs. (z+,z-). When we take a single electron in superposition 1/sqrt(2) (|z+> + |z->) and measure it in z, and it collapses to, say, z+, we know that it is a pure state in the z basis, but expressed in the y basis it is now the superposition 1/sqrt(2) (|y+> + |y->)! Indeed, if we measure it now in y, we will get 50% y+ and 50% y-.
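A quick check of that basis substitution (my own code, using the standard convention |y±> = (|z+> ± i|z->)/sqrt(2)):

```python
import numpy as np

z_up = np.array([1, 0], dtype=complex)
z_dn = np.array([0, 1], dtype=complex)
y_up = (z_up + 1j * z_dn) / np.sqrt(2)
y_dn = (z_up - 1j * z_dn) / np.sqrt(2)

# An equal superposition of the y states IS the pure |z+> state:
assert np.allclose((y_up + y_dn) / np.sqrt(2), z_up)

# And the pure |z+> state, measured in y, gives 50/50:
print(np.abs(y_up.conj() @ z_up)**2, np.abs(y_dn.conj() @ z_up)**2)  # 0.5 0.5
```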

So in DCQEE when you collapse the first photon into a single position on the screen, the twin photon does collapse, but its basis is not expressed in terms of single positions! It’s some weird agglomeration of them. If you were to take that “pure” state and express it in terms of position basis, you would get a superposition of, say, 80% path A and 20% path B.

FlowVoid ,

Well, if the second photon is in a new, weird superposition then the first photon must also be in the same new, weird superposition. Again, I don’t think that’s compatible with Copenhagen, given that the first photon no longer exists.

Note by the way that 50% y+ and 50% y- is how all photons start. So if that’s also the final state then there is no reason for it to prefer any detector over the others.

TauZero ,

50% y+ and 50% y- is how all [electrons] start

Yeah, but when you start with a 50% z+ / 50% z- electron, and you measure it and get say z+, it is now 100% z+, right? If you measure it again, you will always get z+. And then you give a bunch of them to your buddy with an identical lab and an identical Stern-Gerlach apparatus, and they say “hey, I measured your electrons that you said were 100% z+, and I’m getting 50% z+ 50% z-”. And you say “dude! your lab is in China! your z+ is my y+! you have to do coordinate rotation and basis substitution! if you look at my pure electron in your sideways basis, it’s in superposition for you”.

When the first photon hits the screen, the basis is the screen basis. Each position on the screen - 1.4, 1.5, 1.6, etc. - is an eigenvector, and the first photon collapses to one of those eigenvectors. The second photon collapses too, but you are wrongly equating positions on the screen and positions on paths A/B as if they are in the same basis. They are not! You were just misled into thinking they are the same basis because they are both named “position”, but they are as different as the z+ axis in America is from the z+ axis in China.

The second photon collapses into the screen basis eigenvector 1.5, but that 1.5 does not correspond to any single location on path A or path B. If you do the basis substitution from the screen basis into the path basis, you get something like 80% path A and 20% path B (and something weird with the phases too, I bet). Does that sound accurate?

JiggityWeenis OP ,

Isn’t the idea that the world is never vague kind of an assumption? What if some things are, and some aren’t?

Contramuffin , in Does physics ever get vague?

Disclaimer: I’m not a physicist, but I am a scientist. Science as a whole is usually taught in school as though we already know everything there is to know. That’s not really accurate.

Science is really sort of a black box system. We know that if you do this particular thing at this particular time, then we get this particular response. Why does that response happen? Nobody really knows. There’s a lot of “vague” or unknown things in all of science, physics included. And to be clear, that’s not invalidating science. Most of the time, just knowing that we’ll get a consistent response is enough for us to build cool technologies.

One of the strangest things I’ve heard about in physics is the quantum eraser experiment, and as far as I’m aware, to this day nobody really knows why it happens. PBS Spacetime did a cool video on it: youtu.be/8ORLN_KwAgs?si=XqjFEjDfmnZX31Mn

ilinamorato , in Why and how does atmosphere exist? Shouldn't all oxygen (and everthing else) fall down due to gravity?

I love that the answer is basically “Yes, it does, but the other air molecules get in the way.”

Catoblepas , in What is the current state of research on regional anthropological phenotypes?

Studies of gene flow are probably more what you’re looking for, rather than phenotype. I’m 10 minutes from heading out the door so I don’t have time to go hunting for links, but if you search for information about haplogroups, molecular clocks, and mitochondrial DNA you’ll be able to find a lot of information about the history of gene flow in humans.

mechoman444 , in Why and how does atmosphere exist? Shouldn't all oxygen (and everthing else) fall down due to gravity?

That is absolutely correct. The atmosphere is held in place, so to speak, by gravity. It is also spinning along with the earth. Some of the atmosphere is lost to outer space at a regular rate and again replenished by natural processes that are beyond the scope of your question.

What’s more interesting is that the atmosphere’s pressure falls off in a decreasing gradient the closer you get to outer space, which dispels the idea that a container is necessary to hold in our atmosphere. (I watch a lot of flat earth videos for funzies.)

Uhhh, the earth isn’t flat, just in case.

linucs OP ,

I genuinely laughed at the last sentence hahaha you silly silly person, of course it’s flat

nxfsi , in Why and how does atmosphere exist? Shouldn't all oxygen (and everthing else) fall down due to gravity?

Ever ridden on a seesaw with a small child?

Lexam ,

No.

TheJack , (edited ) in Why and how does atmosphere exist? Shouldn't all oxygen (and everthing else) fall down due to gravity?

According to this physics.stackexchange.com answer:

“I suppose the surprising thing is why the atmosphere doesn’t all fall immediately to the Earth’s surface to form a thin dense layer of air molecules.

The reason this doesn’t happen is that air molecules are all whizzing around at surprisingly high speeds - typically hundreds of metres per second depending on the temperature.

The air molecules bash into each other and knock each other around, and the air molecules near the ground bash into the air molecules above them and stop them falling down.”

Detailed explanation from another answer:

“The key ingredient is temperature.

If it were zero then all the air would indeed just fall down to the ground (actually, this is a simplification I’ll address later).

As you increase the temperature the atoms of the ground will start to wiggle more and they’ll start to kick the air molecules giving them non-zero average height.

So the atmosphere would move a little off the ground. The bigger the temperature is the higher the atmosphere will reach.

Note: there are number of assumptions above that simplify the picture. They are not that important but I want to provide a complete picture:

1. Even at zero temperature the molecules would wiggle a little because of quantum mechanics.

2. The atmosphere would freeze at some point (around 50 K), so below that temperature it would just lie on the ground.

3. I assumed that the ground and the atmosphere have the same temperature because they are in thermal equilibrium; in reality their temperatures can differ a little because of additional slow heat-transfer processes.”
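The “bigger temperature, higher atmosphere” statement can be made quantitative with the barometric formula; plugging in rough numbers for N₂ at 290 K (my own arithmetic, not from the quoted answers):

$$n(h) = n_0\, e^{-mgh/k_B T}, \qquad H = \frac{k_B T}{mg} = \frac{(1.38\times10^{-23}\ \mathrm{J/K})(290\ \mathrm{K})}{(4.65\times10^{-26}\ \mathrm{kg})(9.81\ \mathrm{m/s^2})} \approx 8.8\ \mathrm{km}$$

Double the temperature and the scale height H doubles; send T toward zero and the whole atmosphere settles onto the ground, exactly as the quoted answer describes.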

TauZero ,

This is the way! It helps me to imagine what it would look like if the atmosphere consisted of a single nitrogen molecule. You place it on the ground, but the ground has temperature (is warm), so your one molecule gets launched up into the vacuum on a parabolic trajectory at 500 m/s on average. If it launched at 45° it would reach 6 km up and fall down; at 90°, 12 km up - and that’s on average. Some would get launched faster and higher (following the long tail of the Boltzmann distribution), and hydrogen and helium faster still because they are lighter. A few hydrogen molecules would be launched at speeds above 11 km/s, which is above Earth’s escape velocity, so they would escape and never fall down.

When you have many air molecules, they hit each other on the way up (and down), but because their collisions must be perfectly elastic, mathematically it works out that the overall velocities are preserved. So when your one nitrogen molecule gets launched up but on its way hits another identical molecule, you can think of them equivalently as passing through each other without colliding at all. (Yes, mathematically they can also scatter in other random directions, but the important part is that your original molecule is equally likely to be boosted further upwards as to be impeded.)

The end result is that the majority of the atmosphere stays below 12 km, density goes down as you go up (though it never quite reaches zero), and hydrogen and helium continuously escape to space to the point that hardly any are left.
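A back-of-envelope check of those numbers (my own constants, assuming T = 290 K; the escape fraction is the Maxwell-Boltzmann tail above escape velocity):

```python
import math

k_B, g, amu, T = 1.380649e-23, 9.81, 1.6605e-27, 290.0
v_esc = 11_186.0                                      # Earth's escape velocity, m/s

for name, mass_amu in [("N2", 28), ("He", 4), ("H2", 2)]:
    m = mass_amu * amu
    v_mean = math.sqrt(8 * k_B * T / (math.pi * m))   # mean thermal speed
    apex = v_mean**2 / (2 * g)                        # height if launched straight up
    a = v_esc / math.sqrt(2 * k_B * T / m)            # v_esc / most-probable speed
    frac = math.erfc(a) + 2 * a * math.exp(-a * a) / math.sqrt(math.pi)
    print(f"{name}: v_mean ~ {v_mean:.0f} m/s, apex ~ {apex / 1000:.1f} km, "
          f"P(v > v_esc) ~ {frac:.1e}")
```

Nitrogen comes out around 470 m/s with an apex near 11 km, matching the rough figures above; the escaping fraction is essentially zero for N2 but many orders of magnitude larger for H2, which is why the light gases leak away over geological time.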

Heggico , in Why and how does atmosphere exist? Shouldn't all oxygen (and everthing else) fall down due to gravity?

It does, but as more air drops down, the pressure increases. This pressure then starts to push back against the air above it. Which is why we have atmospheric pressure at the surface, but that goes down to pretty much 0 in space.

Even in low earth orbit there are still some particles, which causes satellites and such to slow down, requiring them to fire some thrusters every once in a while.

linucs OP ,

Cool, thanks!

Follow up question: are there different densities in space?

count_of_monte_carlo , in how does lucky imaging in astrophotography work?

This isn’t exactly my area of expertise, but I have some information that might be helpful. Here’s the description of the frame selection from a paper on a lucky imaging system:

The frame selection algorithm, implemented (currently) as a post-processing step, is summarised below:

  1. A Point Spread Function (PSF) guide star is selected as a reference to the turbulence induced blurring of each frame.
  2. The guide star image in each frame is sinc-resampled by a factor of 4 to give a sub-pixel estimate of the position of the brightest speckle.
  3. A quality factor (currently the fraction of light concentrated in the brightest pixel of the PSF) is calculated for each frame.
  4. A fraction of the frames are then selected according to their quality factors. The fraction is chosen to optimise the trade-off between the resolution and the target signal-to-noise ratio required.
  5. The selected frames are shifted-and-added to align their brightest speckle positions.

If you want all the gory details, the best place to look is probably the thesis the same author wrote on this work. That’s available here (PDF warning).
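As a rough sketch of how those five steps might look in code (my own simplification, not the author’s pipeline: integer-pixel alignment on the brightest speckle instead of the 4× sinc resampling, and `frames`/`star` are hypothetical inputs - a (n_frames, H, W) image stack and a pair of slices locating the guide star):

```python
import numpy as np

def lucky_stack(frames, star, keep_fraction=0.1):
    """Select the sharpest frames and shift-and-add them on the guide star."""
    cutouts = frames[:, star[0], star[1]]            # guide-star region, per frame
    # Step 3: quality = fraction of the star's light in its brightest pixel
    quality = cutouts.max(axis=(1, 2)) / cutouts.sum(axis=(1, 2))
    # Step 4: keep only the best fraction of frames
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(quality)[-n_keep:]
    # Steps 2 & 5: align each kept frame on its brightest speckle, then add
    ref = np.unravel_index(cutouts[best[-1]].argmax(), cutouts[best[-1]].shape)
    stacked = np.zeros(frames.shape[1:], dtype=float)
    for i in best:
        peak = np.unravel_index(cutouts[i].argmax(), cutouts[i].shape)
        stacked += np.roll(frames[i], (ref[0] - peak[0], ref[1] - peak[1]),
                           axis=(0, 1))
    return stacked / n_keep
```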

PeriodicallyPedantic OP ,

Thanks, I’ll take a look at that! I think I actually already skimmed it, because those 5 points are familiar.

I wasn’t sure what was meant by the PSF guide star. Is that just the function that selects the speckle in each frame, used to shift/align the frames?

Also, I wasn’t sure what “sinc-resampled” means. Shifted-and-upscaled?

Reading this had given me an idea on how I might implement it myself, but I wasn’t familiar enough with the terminology to know if my algorithm was the same as the one described.

I’ll try reading further into the paper to see if it clears anything up

count_of_monte_carlo ,

I believe the idea is that a single bright star in the frame (the guide star) is used for selecting the frames. The point spread function (PSF) is just going to be some function that describes the blurred shape you would observe with the detector for an input point source. You then select frames in which the guide star is well centered, compared to its overall distribution.

I think your guess on “sinc-resampled” is correct. They increased the “resolution” by a factor of 4, so that when they realign the chosen frames to center the guide star, they can do so with sub-pixel precision.

You may want to check out chapter 3 in the thesis, particularly section 3.5.3. They give a lot more detail on the process than you’ll be able to find in the paper. A well-written PhD thesis can be 1000x more valuable than the journal article it ultimately produces, because it contains all the specific details that get glossed over in the final paper.
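To make “sinc-resampled” concrete: it amounts to zero-padding the signal’s Fourier spectrum, which is equivalent to interpolating with a sinc kernel. A 1D sketch of the sub-pixel peak estimate (my own example; the real pipeline does this on the 2D guide-star image):

```python
import numpy as np

def sinc_upsample(signal, factor=4):
    """Upsample by zero-padding the FFT spectrum (sinc interpolation)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    return np.fft.irfft(padded, n=n * factor) * factor

# A blurred star profile whose true peak sits between pixels, at 15.7
profile = np.exp(-0.5 * ((np.arange(32) - 15.7) / 2.0)**2)
print(profile.argmax())                      # 16     (whole-pixel estimate)
print(sinc_upsample(profile).argmax() / 4)   # ~15.75 (quarter-pixel estimate)
```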

Mbourgon , in how does lucky imaging in astrophotography work?

Looking at this: skyandtelescope.org/…/lucky-imaging/

Reading between the lines, my bet is that it is looking for photos with less atmospheric blurring. Since it sets reference points, it can measure the delta from a good shot, add up the values to determine how close to ideal a particular photo is, then choose the overall “luckiest” photos and stack them.

PeriodicallyPedantic OP ,

I read that article, and it’s very good! But it didn’t explain how to detect atmospheric blurring, since it’s not actually blurring, it’s distortion. To quote that article:

even if the sharpest image is very clear, it may still be distorted in varying degrees around the frame

So you can’t just score the frames by sharpness.

Assuming all images are compared to a reference shot as you suggested, how is the reference shot selected?

I’ve actually got my own ideas about how it could be done, but this is coming from a background in computer science, not from astronomy, so I don’t trust my solution.

Mbourgon ,

Yeah, I’m guessing your ideas and mine are going to be similar then; wish I could add more!

PhlubbaDubba , in Why were the dinosaurs huge?

Hollow bones, and in some cases spaces within their bodies that were just filled with air. The end result being that dinosaurs were a lot lighter than their frames would suggest, which is what allowed them to get so big in volume.

holycrap , in Why were the dinosaurs huge?

The vast majority were not! Larger animals are more likely to be fossilized, so our fossil record is biased toward larger animals.

Bipta ,

But the largest herbivores and carnivores were far larger than anything we have today, or even anything we had before humans killed off the megafauna.

Would animals have become huge again in a few tens of millions of years more?

octoperson ,

🐋

Shalakushka ,

The largest animal ever known is currently on Earth, though endangered: https://en.wikipedia.org/wiki/Blue_whale

magikmw ,

Only because we perfected killing them a few hundred years ago. If we’d had more time, they’d be dead too!

billygoat ,

🚨🚨🚨 Sorry Alan.

magnetosphere , in Why were the dinosaurs huge?

At first I interpreted “huge” as “immensely popular”. I thought you were surrounded by idiots who aren’t impressed by dinosaurs lol

linucs OP ,

Hahaha those kids are idiots, dinosaurs for life!

HonoraryMancunian , in Why were the dinosaurs huge?

Fun fact! Michael Crichton, who wrote about dinosaurs, is 6’9"!

This is probably a coincidence

dangblingus ,

was 6’9" :(

HonoraryMancunian ,

Whoops!

…although he probably still is, unless he was cremated

Gigan , in Why were the dinosaurs huge?

Being big is advantageous as long as the animal is able to find enough food to sustain itself. Food was plentiful at the time, so dinosaurs grew quite large.

In modern times, most megafauna is gone because humans hunted them to extinction.

magnetosphere ,

Big game hunters driving the giant lemur to extinction bothers me most, I think. I’d love to see a lemur the size of a gorilla.

Bipta ,

Even before humans drove them to extinction, they were nowhere near dinosaur-sized, though.
