How a Children’s Hospital is using AI to Build Better Processes

Boston Children’s Hospital is ranked as one of the best children’s hospitals in the United States. As such, they’re connected to major research institutions and take advantage of the latest technologies.

John Brownstein, an epidemiologist and Chief Innovation Officer at Boston Children’s Hospital, combines technology and health to create innovative solutions. A recent example of his work is “Outbreaks Near Me”, a pathogen-tracking website that, during the pandemic, operated under the name “Covid Near Me”. Now, Brownstein is pivoting to something new: AI.

With AI quickly becoming a prominent tool in the healthcare industry, Brownstein stresses that it is not meant to replace humans in the field.

“This is not meant as a replacement for the human. This is an augmentation. So there’s always a human in the loop.” In fact, far from taking jobs away, AI is creating new ones.

With new AI tools like ChatGPT come brand-new job titles, such as “Prompt Engineer”. A prompt engineer is essentially someone hired to “play around” in AI tools such as chatbots and experiment with what their generative models can do. It may sound like a fun job for people who enjoy tinkering in AI environments, but don’t let that fool you: some of these positions pay six-figure salaries, and Brownstein and Boston Children’s Hospital have posted an ad of their own looking for one.

Boston Children’s is looking to hire its own Prompt Engineer to train AI language models to improve hospital operations, as well as working conditions for hospital staff.

The Prompt Engineer would join a team charged with preventing “provider burnout”. As part of “an internal team that builds tech,” their job is to “locate places in the world of work” where technology could play a role but currently doesn’t. In short, they look for “pain points” and develop technology to “ease the pain”.

One of the “pain points” at Boston Children’s Hospital is getting patients from point A to point B. Language barriers, stress, or illness can all disrupt communication in this process.

According to Brownstein, “already out of the gate, we can query ChatGPT with questions about how to navigate our hospital. It’s actually shocking, what these are producing without any amount of training from us.” He continues that ChatGPT can tell you how to get around “not just our hospital, but any hospital.”

Given that, it seems useful to place kiosks throughout the hospital that patients can interact with to get answers to questions like “Where can I pray?” That would make things easier, faster, and more convenient for patients and healthcare workers alike, cutting down how often staff are stopped in their tracks by the same questions.

Brownstein also says that he has ideas for ways that providers can handle patient data with AI.

Mildred Cho, Professor of Pediatrics at Stanford’s Center for Biomedical Ethics, has concerns about how AI will handle patient data. For one, Cho reviewed the Prompt Engineer job ad for the children’s hospital and said, “What strikes me about it is that the qualifications are focused on computer science and coding expertise and only ‘knowledge of healthcare research methodologies’ while the tasks include evaluating the performance of AI prompts.”

“To truly understand whether the outputs of large language models are valid to the high standards necessary for health care, an evaluator would need to have a much more nuanced and sophisticated knowledge base of medicine and also working knowledge of health care delivery systems and the limitations of their data,” Cho said.

Cho continued that nightmare scenarios could also result, such as prompt engineers retraining language models on faulty assumptions. What if a model is trained on data that encodes racial bias? Data collected by people is inherently flawed, and entire processes could be built on a foundation of errors.

Responding to the concerns expressed by Cho, Brownstein assured that the children’s hospital’s Prompt Engineer would not be “working in a bubble.” He also stated that his team takes into account what it means to have “imperfect data.” Their process isn’t to just “put a bunch of data in and, like, hope for the best.”

Although Brownstein’s team doesn’t just “wing it,” pouring a pile of data into a large language model like ChatGPT and hoping for usable results is essentially how much of this works right now, and the results can be anything but reliable.

Another way Brownstein envisions AI transforming processes at the children’s hospital is through discharge instructions.

Today, discharge instructions typically come as stapled-together sheets of paper explaining the concussion you may have sustained, how to use a cold compress, and how much of which medication to take.

With the help of a language model trained on your specific patient information, AI could transform this process by telling you the most convenient place to pick up your medication based on where you live, or by flagging that you’re allergic to a prescribed drug and recommending alternatives. According to Brownstein, that is just the beginning.

“You’re doing rehab, and you need to take a walk. It’s telling you to do this walk around this particular area around your house. Or it could be contextually valuable, and it can modify based on your age and various attributes about you. And it can give that output in the voice that is the most compelling to make sure that you adhere to those instructions.”

Many professionals in the industry fear that ideas like those of Brownstein and others are a data-privacy nightmare. With language models operated by large corporations like Google or Microsoft, patient privacy is a legitimate concern.

In their case, Brownstein says Boston Children’s Hospital is “building internal LLMs”, which means it won’t have to rely on big corporations like Google, Microsoft, or OpenAI.

“We actually have an environment we’re building, so that we don’t have to push patient data anywhere outside the walls of this hospital,” according to Brownstein.

Story via Mashable

U.S. Federal Agencies Affected by ‘One of Largest Theft and Extortion Attacks’

“CosmicEnergy” Malware Being Used to Target Industrial Facilities
