That implies the Engineers gave humanity modern wheat despite all evidence to the contrary. Or took our modern wheat back to their abandoned and dead planet despite all movie evidence to the contrary.
It’s just a stupid plot point in a movie full of them, but it did have xenomorphs murdering the dumbest scientists in existence so it’s not all bad.
Of course, this is also the series that had the Engineers "life seed" Earth with literally just a dude breaking down into DNA. Because why shouldn't an incredibly advanced alien species splooge into a river 4 billion years ago, then just let it cook until it's time to abduct and train Jesus?
I've mostly found that smart alerts just overreact to everything and cause alarm fatigue. But one of the better features EPIC implemented was actually letting clinicians (nurses and doctors) rate the alerts and comment on why the alert was or wasn't helpful, so we can help train the algorithm, even for facility-specific policies.
So for instance, one thing I rated that actually turned out really well: we were getting suicide watch alerts on pretty much all our patients and being told we needed to get a suicide sitter order because their C-SSRS scores were high (the Columbia Suicide Severity Rating Scale, a suicide risk screening "quiz"). I work in inpatient psychiatry. Not only are half my patients suicidal, but a) I already know, and b) our environment is specifically designed to manage what would be moderate-to-high suicide risk on other units by making most of the implements restricted or completely unavailable. So I rated that alert poorly every time I saw it (which was every time I opened each patient's chart for the first time that shift, then every 4 hours after; it was infuriating) and specified that that particular warning needed to not show for our specific unit. After the next update I never saw it again!
So AI and other "smart" clinical tools can work, but they need frequent and high-quality input from the people actually using them (and the quality part matters: most of my coworkers didn't even know the feature existed, let alone that they would need to write a coherent comment for their rating to be actionable).
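To make the idea concrete, here's a minimal sketch of that feedback loop: ratings accumulate per alert and per unit, and an alert that is consistently rated poorly with an explanatory comment gets flagged for suppression on that unit. This is purely hypothetical (the class, threshold, and alert names are made up, not Epic's actual implementation), but it shows why comment-less ratings are less actionable.

```python
# Hypothetical sketch of facility-specific alert suppression driven by
# clinician feedback. Names and thresholds are illustrative only.
from collections import defaultdict


class AlertFeedback:
    def __init__(self, threshold=0.8, min_ratings=5):
        # (alert_id, unit) -> list of (score, comment) pairs
        self.ratings = defaultdict(list)
        self.threshold = threshold      # fraction of poor, commented ratings
        self.min_ratings = min_ratings  # don't act on a handful of ratings

    def rate(self, alert_id, unit, score, comment=""):
        """score: 1 (not helpful) .. 5 (helpful)."""
        self.ratings[(alert_id, unit)].append((score, comment))

    def should_suppress(self, alert_id, unit):
        entries = self.ratings[(alert_id, unit)]
        if len(entries) < self.min_ratings:
            return False
        # Only poor ratings that carry a comment count toward suppression --
        # a bare thumbs-down gives the reviewers nothing to act on.
        poor = [c for s, c in entries if s <= 2 and c.strip()]
        return len(poor) / len(entries) >= self.threshold


fb = AlertFeedback()
for _ in range(6):
    fb.rate("CSSRS_HIGH", "inpatient_psych", 1,
            "Unit is designed for high suicide risk; alert is redundant here")
print(fb.should_suppress("CSSRS_HIGH", "inpatient_psych"))  # True
print(fb.should_suppress("CSSRS_HIGH", "med_surg"))         # False: no ratings
```

The key design point is requiring both volume and comments before suppressing anything, which matches the anecdote: the alert only disappeared after repeated, specific feedback.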
I'm part of a coalition trying to prevent a private equity firm from buying out a local nonprofit hospital, and using AI to "improve efficiency" is one of the plans we've had to study (the analysis was done by people much more competent than I am).
The main thing they plan to use AI for is filling out paperwork - nurses will record their introductory interviews with patients and the AI (basically, speech recognition + knowing what fields to fill out for certain information) will automatically fill out that patient’s chart.
I’m sure they’re planning on using AI for other purposes as well, but this is the most prevalent use - speech recognition and filling out charts automatically.
What I need is AI to fix my doctor visits. Seems like those fucks expect you to be timely but then make you wait in their waiting room for 15 minutes and then an additional 30 inside the patient room. Oh sure, our time is unimportant, it’s all about you, doc.