I’ve mostly found that smart alerts just overreact to everything and cause alarm fatigue, but one of the better features EPIC implemented was actually letting clinicians (nurses, doctors) rate alerts and comment on why an alert was or wasn’t helpful, so we can help train the algorithm, even for facility-specific policies.
For instance, one thing I rated that actually turned out really well: we were getting suicide watch alerts on pretty much all our patients and were told we needed to get a suicide sitter order because their C-SSRS scores were high (it’s a suicide risk screening questionnaire). I work in inpatient psychiatry. Not only are half my patients suicidal, but a) I already know, and b) our environment is specifically designed to manage what would be moderate-to-high suicide risk on other units, by making most of the dangerous implements restricted or completely unavailable. So I rated that alert poorly every time I saw it (which was every time I opened each patient’s chart for the first time that shift, then every 4 hours after; it was infuriating) and specified that that particular warning needed to not show for our specific unit. After the next update I never saw it again!
So AI and other “smart” clinical tools can work, but they need frequent, high-quality input from the people actually using them (and the quality part matters: most of my coworkers didn’t even know the feature existed, let alone that they’d need to write a coherent reason for their rating to be actionable).
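To make that concrete, here’s a toy sketch of how per-unit ratings and comments could flag an alert for suppression. This has nothing to do with EPIC’s actual internals; every class, field, and threshold below is invented for illustration:

```python
# Hypothetical sketch (not EPIC's real internals): flag alerts that
# clinicians on a given unit consistently rate as unhelpful.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AlertRating:
    alert_id: str   # e.g. "cssrs_high_risk" (invented identifier)
    unit: str       # e.g. "inpatient_psych"
    helpful: bool   # the clinician's thumbs up/down
    comment: str    # free-text reason; this is what makes a rating actionable

def suppression_candidates(ratings, min_votes=20, max_helpful_ratio=0.1):
    """Return (alert, unit) pairs rated unhelpful by a clear majority."""
    tallies = defaultdict(lambda: [0, 0])  # (alert, unit) -> [helpful, total]
    for r in ratings:
        key = (r.alert_id, r.unit)
        tallies[key][1] += 1
        if r.helpful:
            tallies[key][0] += 1
    return [key for key, (helpful, total) in tallies.items()
            if total >= min_votes and helpful / total <= max_helpful_ratio]

ratings = [AlertRating("cssrs_high_risk", "inpatient_psych", False,
                       "unit already manages high suicide risk by design")] * 25
print(suppression_candidates(ratings))
# [('cssrs_high_risk', 'inpatient_psych')]
```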
I’m part of a coalition trying to prevent a private equity firm from buying out a local nonprofit hospital, and using AI to “improve efficiency” is one of their plans that we’ve had to study (a study done by people much more competent than I am).
The main thing they plan to use AI for is filling out paperwork: nurses will record their introductory interviews with patients, and the AI (basically speech recognition plus knowing which fields to fill out for certain pieces of information) will automatically fill out that patient’s chart.
I’m sure they’re planning to use AI for other purposes as well, but this is the most prominent one: speech recognition and filling out charts automatically.
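Stripped of the AI branding, the core of that pipeline is just mapping transcript snippets onto chart fields. A minimal sketch, assuming the audio has already been transcribed; the field names and patterns are made up for illustration and don’t come from any real EHR:

```python
# Toy sketch of the "speech recognition + field mapping" idea.
# Field names and regex patterns are invented, not from any real system.
import re

FIELD_PATTERNS = {
    "chief_complaint": re.compile(
        r"(?:here for|complaining of|presents with)\s+(.+?)(?:\.|$)", re.I),
    "allergies": re.compile(r"allergic to\s+(.+?)(?:\.|$)", re.I),
    "smoking_status": re.compile(
        r"\b(never smoked|former smoker|current smoker)\b", re.I),
}

def fill_chart_fields(transcript: str) -> dict:
    """Map transcript snippets onto chart fields; unmatched fields stay None."""
    chart = {}
    for field, pattern in FIELD_PATTERNS.items():
        m = pattern.search(transcript)
        chart[field] = m.group(1).strip() if m else None
    return chart

print(fill_chart_fields(
    "Patient presents with chest pain. Allergic to penicillin. Former smoker."
))
# {'chief_complaint': 'chest pain', 'allergies': 'penicillin',
#  'smoking_status': 'Former smoker'}
```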
What I need is AI to fix my doctor visits. Seems like those fucks expect you to be timely but then make you wait in their waiting room for 15 minutes and then an additional 30 inside the patient room. Oh sure, our time is unimportant, it’s all about you, doc.
What baffles me is why you would use an LLM when what you need is a digital inventory manager. Not bashing your argument’s merits; on the contrary, I think it shows very well how people will shove AI-marketed shit onto already-solved problems and make everyone’s lives worse because it’s ✨modern✨.
My question is: is it being used for inventory management? Or is it being fed the entire patient file to make sure the pharmacist doesn’t make a mistake as well, double-checking for conflicts in prescription interactions and stuff like that?
Should it be relied on as the only check? No. Is it nice to have another set of eyes on every task? Probably. Could this be solved by hiring more pharmacy techs and by an education system that isn’t driven by investors’ profit margins and actually builds the workforce’s technical skills? Yes.
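And the interaction check is exactly the kind of already-solved problem the comment above describes: a deterministic lookup against a curated table, no LLM required. A minimal sketch; the pairs listed are real, well-known interactions, but the table format and function names are invented:

```python
# Deterministic, no-LLM interaction checking: a plain lookup against a
# known interaction table. The listed pairs are well-known interactions;
# the data structure itself is invented for illustration.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def check_prescriptions(meds: list[str]) -> list[str]:
    """Return a warning for every known-interacting pair in the med list."""
    meds = [m.lower() for m in meds]
    warnings = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            reason = INTERACTIONS.get(frozenset({a, b}))
            if reason:
                warnings.append(f"{a} + {b}: {reason}")
    return warnings

print(check_prescriptions(["Warfarin", "Aspirin", "Metformin"]))
# ['warfarin + aspirin: increased bleeding risk']
```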
Idk. Just sounds like shitty companies being shitty companies all the way down.
People have no idea how sophisticated modern IT systems already are, and if you glue fancy words onto solved problems, people will cheer you for being super innovative.
Ugh, blockchain. During the pandemic I had absolutely no work to do, so my boss asked me to make a presentation for him to present on the merits of blockchain. When my response was that it’s overhyped bullshit, he was not thrilled.
I made the requested presentation, but it made me feel dirty, so I wrote alt text for every slide’s graphics that included the counterpoint to each bullshit benefit being presented.