
A Contactless Artificial Intelligence System for Smart Devices Can Identify a Sign of Cardiac Arrest

Wallace Stephens
Researchers at the University of Washington created a tool, which could potentially be developed into an application for smart speakers and smartphones, that uses algorithms and machine learning to identify instances of agonal breathing, a sign of cardiac arrest, with an accuracy of 97% at distances of up to 6 meters away.
A contactless support vector machine (SVM), an artificial intelligence system that uses algorithms and machine learning, could be used by smart speakers and similar devices to detect agonal breathing, a symptom of potential cardiac arrest. The machine performs with 97% accuracy from a distance of up to 6 meters away, according to a study in Nature Partner Journals Digital Medicine.

"A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of," said sudy co-author Shyam Gollakota, PhD, associate professor at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, in a statement. "We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there's no response, the device can automatically call 911."

Researchers obtained agonal breathing recordings from 911 emergency calls from 2009 to 2017 that were provided by Public Health-Seattle and King County, Division of Emergency Medical Services. The study’s dataset included 162 calls that had clear recordings, were verified examples of cardiac arrest, and were identified as instances of cardiac arrest-associated agonal breathing. Researchers trained a classifier, a type of algorithm, on calls that were rated with high confidence by 911 operators and an emergency medical services assurance reviewer.

For each occurrence of agonal breathing, researchers extracted 2.5 seconds of audio from the start of each breath, yielding a total of 236 example clips. Because researchers considered the dataset relatively small, they augmented the number of agonal breath instances with label-preserving transformations: the recordings were played back over distances of 1, 3, and 6 meters, in the presence of indoor and outdoor interfering sounds at different volumes, and with a noise cancellation filter applied. This produced a total of 7316 positive samples. The recordings were captured on smart speakers and smartphones, including an Amazon Alexa, an Apple iPhone 5s, and a Samsung Galaxy S4.
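The study's label-preserving transformations were physical: clips were replayed over the air at varying distances amid interfering sounds. As a rough software analogue, a clip can be mixed with background noise at a chosen signal-to-noise ratio without changing its label. The sketch below assumes NumPy arrays of raw samples and a 16 kHz sample rate; the clip and noise here are random placeholders, not study data.

```python
import numpy as np

def augment_clip(clip, noise, snr_db):
    """Mix a 2.5 s breath clip with interfering noise at a target
    signal-to-noise ratio (in dB), a label-preserving transform that
    stands in for replaying audio over distance amid background sound."""
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so the mixture reaches the requested SNR.
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise

rng = np.random.default_rng(0)
sr = 16_000                                 # assumed sample rate
clip = rng.standard_normal(int(2.5 * sr))   # placeholder for a real clip
noise = rng.standard_normal(int(2.5 * sr))  # placeholder interference
augmented = [augment_clip(clip, noise, snr) for snr in (0, 5, 10)]
```

Each pass over the same clip at a different noise level yields a new positive sample, which is how a few hundred clips can be multiplied into thousands.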

Negative data sets were used to increase the accuracy of the SVM. The sets consisted of 83 hours of audio data from polysomnographic sleep studies of 12 patients. The audio streams included instances of hypopnea, central apnea, obstructive apnea, snoring, and breathing. Researchers also included interfering sounds that could be present in a bedroom while a person is asleep, such as podcasts, sleep soundscapes, and white noise. They used 1 hour of audio data from the sleep study with interference to train the SVM. The audio signals were played at different distances and yielded a total of 7305 samples. The remaining 82 hours of sleep data, containing 117,985 audio segments, were then used to validate the performance of the SVM.

"We don't want to alert either emergency services or loved ones unnecessarily, so it's important that we reduce our false positive rate," said the study’s lead author Justin Chan, a doctoral student at the University of Washington, in a statement.

Researchers ran their classifier over the full audio stream collected from the sleep lab to evaluate the false positive rate. They found the SVM incorrectly categorized a breathing sound as agonal breathing 0.14% of the time.

Researchers then recruited 35 individuals to record themselves sleeping using smartphones to test the SVM’s performance on real-world data, collecting 167 hours of audio in total. After retraining the classifier with an additional 5 minutes of data from each individual, they achieved an area under the curve of 0.9993 ± 0.0003 and an operating point with an overall sensitivity of 97.17% and specificity of 99.38%. The false positive rate was 0.22% without an agonal-breathing frequency filter. With the frequency filter applied, the false positive rate fell to 0.00127% when requiring a second agonal breath within 10 to 20 seconds of the first, and to 0% when additionally requiring a third agonal breath within another 10 to 20 seconds.
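The frequency filter described above suppresses isolated detections by requiring successive agonal breaths at a plausible cadence. A minimal sketch of that logic, assuming detection timestamps in seconds and the 10-to-20-second gaps the article describes (the exact thresholds and alarm policy are assumptions):

```python
def passes_frequency_filter(detections, min_gap=10.0, max_gap=20.0, needed=3):
    """Return True only if `needed` consecutive agonal-breath detections
    occur, each 10-20 s after the previous one; isolated or off-cadence
    detections do not trigger an alarm."""
    run = 1  # current streak of cadence-consistent detections
    for prev, cur in zip(detections, detections[1:]):
        if min_gap <= cur - prev <= max_gap:
            run += 1
            if run >= needed:
                return True
        else:
            run = 1  # cadence broken; start a new streak
    return False

print(passes_frequency_filter([0.0, 12.0, 27.0]))  # three breaths, two valid gaps
print(passes_frequency_filter([0.0, 12.0]))        # only one valid gap
```

Requiring a second and then a third detection is what drives the reported false positive rate from 0.22% down to 0.00127% and then to 0%.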

"Right now, this is a good proof of concept using the 911 calls in the Seattle metropolitan area," Gollakotta said. "But we need to get access to more 911 calls related to cardiac arrest so that we can improve the accuracy of the algorithm further and ensure that it generalizes across a larger population."

Reference

Chan J, Rea T, Gollakota S, Sunshine JE. Contactless cardiac arrest detection using smart devices. NPJ Digit Med. 2019;2:52. doi:10.1038/s41746-019-0128-7.

 