Vocal characteristics that can’t actually be heard, discernible only by computer, might help identify individuals with confirmed or suspected heart disease who are at increased risk for a cardiovascular (CV) event over the next several years, a prospective study suggests.
The research is only the latest to suggest a potential role for “voice biomarkers” — acoustic features discernible with machine-learning algorithms — for CV risk assessment, with implications for screening, noninvasive risk stratification, and telemedicine, investigators say.
Voice recordings from the study’s 108 patients were processed, and each patient was assigned a score reflecting how strongly the inaudible biomarker was expressed. Patients with scores in the top third, compared with those in the lower two-thirds, showed 2.6 times the risk of developing acute coronary syndrome (ACS) or presenting to the hospital with chest pain over about 24 months. They showed triple the risk for a positive stress test or coronary artery disease (CAD) at angiography.
“I would say that this voice-analysis technology is not a standalone diagnostic tool,” Jaskanwal Deep Singh Sara, MBChB, Mayo Clinic, Rochester, Minnesota, told theheart.org | Medscape Cardiology. “Once we filter out people who are unlikely to have disease, maybe we could use this as a screening tool to identify those with a higher pretest probability, and then start working them up with conventional methods. Or you could use it to follow people longitudinally,” Sara proposed. “But in whatever manner we use it, it’s going to be an adjunct to existing strategies.” Sara is slated to present the findings April 2 during the American College of Cardiology (ACC) 2022 Scientific Session, conducted virtually and in person in Washington, DC, and is lead author on the study, published March 24 in Mayo Clinic Proceedings.
Earlier research demonstrated significant associations between the same or similar voice biomarkers, or their constituent voice-signal features, and baseline CAD, pulmonary hypertension, and, in patients with heart failure, mortality and risk for hospitalization.
But the current study is the first to use the voice-analysis techniques to prospectively forecast CAD events, Sara observed. The voice biomarker it tested was derived — using proprietary artificial intelligence (AI) methods (Vocalis Health) — from voice signals from more than 10,000 patients with chronic diseases, he noted. The resulting algorithms were developed to analyze 80 voice-signal features, such as frequency, amplitude, pitch, and cadence.
Patients in the current study, who had been referred for coronary angiography for various indications — including angina-like chest pain, a positive stress test, hospitalization with ACS, or preoperative evaluation — each provided three 30-second voice recordings that were processed for the distinct voice signal features.
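The study’s 80 features are proprietary and unpublished, so nothing here reproduces the actual algorithm. Purely as illustration of the kinds of raw signal measures the article names (amplitude, pitch/frequency), a minimal sketch using NumPy on a synthetic stand-in for a 30-second recording:

```python
import numpy as np

# Synthetic stand-in for a 30-second voice recording; the sample rate
# and the 140 Hz tone are assumptions for illustration only.
sr = 16_000                                   # samples per second
t = np.arange(30 * sr) / sr                   # 30 seconds of time points
signal = 0.5 * np.sin(2 * np.pi * 140 * t)    # a pure 140 Hz "voice" tone

# Amplitude: root-mean-square level of the recording.
rms = np.sqrt(np.mean(signal ** 2))

# Pitch estimate: dominant frequency in the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
pitch_hz = freqs[np.argmax(spectrum)]

print(f"RMS amplitude: {rms:.3f}, estimated pitch: {pitch_hz:.1f} Hz")
```

Real systems extract many such features per recording and feed them to a trained model; the point here is only that frequency, amplitude, pitch, and cadence are quantities computable directly from the audio waveform.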
Their assigned scores based on prevalence of the voice features of interest ranged from –1 to 1. In multivariable analysis, the hazard ratio (HR) for the primary endpoint, defined as incident ACS or admission or emergency department presentation with chest pain over a median of 24 months, was 2.61 (95% CI, 1.42 – 4.80; P = .002) for the one-third of patients with the highest scores, compared with the lowest-scoring two-thirds.
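The comparison underlying that hazard ratio is a simple tertile split: the top-scoring third of the cohort versus the lower two-thirds. A minimal sketch of that stratification, using randomly generated scores in place of the study’s actual (unpublished) biomarker values:

```python
import numpy as np

# Hypothetical biomarker scores for a 108-patient cohort; the study's
# scores ranged from -1 to 1 but were assigned by a proprietary model.
rng = np.random.default_rng(0)
scores = rng.uniform(-1, 1, size=108)

# Split at the upper tertile, as in the study's analysis:
# top-scoring third vs the lower two-thirds.
cutoff = np.quantile(scores, 2 / 3)
high_risk = scores > cutoff

print(f"Tertile cutoff: {cutoff:.3f}")
print(f"High-score group: {high_risk.sum()} of {len(scores)} patients")
```

In the study itself, the hazard ratio for the high-score group came from a multivariable Cox model, not from the raw group split shown here.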
The corresponding HR for the main secondary endpoint — positive stress test result or CAD identified on angiography at follow-up — was 3.13 (95% CI, 1.13 – 8.68; P = .03).
AI-based techniques for unmasking disease or stratifying risk tend to be opaque about what they are “seeing” that is so informative. In the current study, it’s entirely unknown what a patient’s voice could have in common with CAD or future ACS risk. “All we have at the moment are hypotheses based on our understanding of the pathophysiology,” Sara noted.
“One is that this biomarker is actually looking at more of a systemic process, as opposed to coronary disease per se. And it might be that it’s tapping into changes that are occurring, for example, in the autonomic nervous system.” That is, he proposed, there could be “dynamic interplay” among different vagally mediated physiologic functions.
“That makes sense,” he said, because the autonomically important vagus nerve also directly innervates the larynx and vocal cords. And CAD-related events, such as angina and myocardial infarction, can have autonomic components; for example, they can elicit nausea, diaphoresis, or changes in blood pressure.
Alternatively, the link between voice and CV risk might relate to CAD-associated systemic inflammation, Sara proposed. Perhaps the process of inflammation associated with atherosclerosis “may affect the organs of phonation as well, and what we’re doing is picking up on that parallel pathology.”
But, he cautioned, “we need more evidence before we start putting these sorts of ideas out more concretely. We don’t want to overstate our claims.”
And the AI-derived algorithms “are not going to be something that everyone can download” to use on themselves for at-home diagnosis or risk assessment, Sara said. “It would be used in a targeted fashion. And it would be used under the direction of a clinician who knows its limitations and knows it’s there to use as an adjunct to existing clinical methods.”
The study was partly supported by Vocalis Health, for which one author works as a consultant. The other authors, including Sara, disclosed no conflicts of interest.
American College of Cardiology (ACC) 2022 Scientific Session. Abstract 1319-121 / 121. To be presented April 2, 2022.
Mayo Clin Proc. Published online March 24, 2022. Abstract
Follow Steve Stiles on Twitter: @SteveStiles2.