Photo: Booz Allen Hamilton
The use of artificial intelligence is on the rise across a growing variety of use cases in healthcare today.
What many in health IT want to avoid is what’s known as an “AI winter,” a period in which stakeholders lose confidence that AI can deliver on its promises and pull back from the technology.
Dr. Kevin Vigilante, chief medical officer at consulting firm Booz Allen Hamilton, wants to avoid another AI winter and has thoughts on how to do so.
Healthcare IT News sat down with him for an interview to discuss promising use cases of AI in healthcare, including better clinical decision support, population health interventions, patient self-care and research; how an organization needs to prepare itself before substantial investments in AI are made; and how to mitigate risks threatening AI’s momentum.
Q. You’ve talked about an “AI winter” in healthcare. What does this mean, when has it happened in the past and is it happening again?
A. “AI winter” refers to a period of disillusionment with AI, marked by reduced investment and progress, that follows a period of high enthusiasm and interest in AI technology.
There have been two AI winters: the first in the late 1970s and early 1980s, and the second between the mid-1980s and early 1990s. In the interval between them, expert systems and practical artificial neural networks rose to prominence. However, it became clear that these expert systems had limitations that prevented them from living up to expectations. The resulting disillusionment brought on the second AI winter, a period of decreased AI research funding and a decline in general interest in AI.
According to the Gartner Hype Cycle, we are now at risk of another AI winter in healthcare: several AI solutions, including natural language processing, deep learning and machine learning, have fallen short of their initial hype, eroding users’ trust in AI.
Recent examples that highlight the growing concern over inappropriate and disappointing AI solutions include racial bias in algorithms supporting healthcare decision-making, unexpectedly poor performance in cancer diagnostic support, and inferior performance when AI solutions are deployed in real-world environments.
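To make the bias concern concrete, here is a minimal sketch of the kind of subgroup audit that can surface this problem: comparing a model’s false-negative rate across demographic groups. The data, group labels and metric choice are hypothetical illustrations, not drawn from any study Vigilante references.

```python
# Minimal subgroup bias audit: compare a model's false-negative rate
# (missed "needs intervention" cases) across demographic groups.
# All data and group labels below are hypothetical illustrations.

from collections import defaultdict

def false_negative_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples,
    where 1 means 'needs intervention' and 0 means 'does not'."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0})
    for group, y_true, y_pred in records:
        if y_true == 1:
            counts[group]["pos"] += 1
            if y_pred == 0:
                counts[group]["fn"] += 1
    return {g: c["fn"] / c["pos"] for g, c in counts.items() if c["pos"]}

# Hypothetical predictions from a care-management triage model.
audit_sample = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

print(false_negative_rate_by_group(audit_sample))
# A large gap in miss rates between groups is a signal to investigate
# before the model is allowed to influence care decisions.
```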
Q. How can use cases like better clinical decision support, population health interventions, patient self-care and research help AI in healthcare?
A. The emergence of AI in healthcare offers unprecedented opportunities to improve patient outcomes, reduce costs, streamline decision-making for medical professionals and impact population health. However, to realize the full promise of AI, the healthcare industry needs to refocus on building a foundation of trust.
There are currently several promising use cases of AI – such as better clinical decision support, population health interventions, patient self-care and research – that demonstrate how healthcare professionals can use data pattern recognition to identify needs and solutions faster and more accurately and to make informed medical or business decisions.
Q. How does a healthcare provider organization need to prepare itself before substantial investments in AI are made?
A. There are several prerequisites that should be in place before healthcare organizations make substantial investments in AI. These include a clear vision of what problems AI will help solve, in-house talent with both technical AI expertise and health domain understanding, and a review process to assess the potential risks and ethical implications of each AI solution.
Once those prerequisites are met, there are three key considerations decision-makers in health organizations should weigh as they prepare themselves to make substantial AI investments.
First, entry costs for implementing process-automation AI solutions are often low and can yield significant ROI quickly. Second, success with process automation may not translate to AI solutions that rely on pattern recognition or “reasoning within context” (for example, self-driving cars), so healthcare organizations must take a different implementation approach for each class of solution.

Third, healthcare providers should be prepared to play the long game when investing in AI solutions. Successful AI deployment requires sustained investment and focus from leadership, so the organization must free up staff and resources to ensure lasting success.
Q. How can healthcare organizations mitigate risks threatening AI’s momentum?
A. In a recent article I co-authored, we identified 10 significant risk types for AI development and deployment. When realized, these risks can quickly hamper performance and undermine trust in AI solutions.
These risks include, among others, the lack of integration of stakeholder perspectives and considerations, lack of clearly defined organizational values and ethics, data bias, lack of data management, and lack of data encryption and privacy protections.
As in evidence-based medicine, there are evidence-based AI development and deployment practices that should guide decision-making. For instance, practices such as relying on user-centered design, systematic organizational prioritization and review processes, and AI algorithm performance surveillance have proven to improve AI performance across many use cases.
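As one illustration, here is a minimal sketch of what AI algorithm performance surveillance might look like once a model is deployed: tracking rolling accuracy against a validation-time baseline and alerting when it degrades. The class name, window size and tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal post-deployment performance surveillance sketch:
# track rolling accuracy over the most recent labeled outcomes
# and alert when it drops below the validation-time baseline.
# Window size and tolerance are illustrative, not prescriptive.

from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy   # accuracy measured before deployment
        self.window = deque(maxlen=window)  # most recent labeled outcomes
        self.tolerance = tolerance          # acceptable drop before alerting

    def record(self, y_true, y_pred):
        """Call whenever a prediction's true outcome becomes known."""
        self.window.append(1 if y_true == y_pred else 0)

    def check(self):
        if len(self.window) < self.window.maxlen:
            return None  # not enough labeled outcomes yet
        accuracy = sum(self.window) / len(self.window)
        if accuracy < self.baseline - self.tolerance:
            return f"ALERT: live accuracy {accuracy:.3f} is below baseline {self.baseline:.3f}"
        return None

monitor = PerformanceMonitor(baseline_accuracy=0.91, window=3, tolerance=0.05)
for y_true, y_pred in [(1, 1), (0, 1), (1, 0)]:
    monitor.record(y_true, y_pred)
print(monitor.check())  # ALERT: live accuracy 0.333 is below baseline 0.910
```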
These practices mitigate important AI risks. To counter the growing mistrust of AI solutions, the healthcare industry should implement self-governance processes that may include certification/accreditation programs for AI developers and implementers.
Standards for these accreditation or certification programs should be built on evidence-based AI practices. Such programs could promote those practices and verify adherence to them in a way that balances effective AI risk mitigation with the need to continuously foster innovation.
Now is the time for healthcare organizations to proactively shape the future of AI to maintain users’ trust in AI and realize its transformative potential.
Twitter: @SiwickiHealthIT
Healthcare IT News is a HIMSS Media publication.