More than 112 million Americans use a voice assistant at least monthly, up from 35 million in 2017, and voice-enabling technologies are increasingly making inroads into health care.
Many hospitals and health systems are already testing or deploying voice-enabled technologies in a variety of ways: to allow patients to call a nurse from the hospital bed; help doctors cut down on administrative tasks; monitor their interaction with a patient; improve efficiencies in the ER; facilitate clinical trials; help chronically ill patients manage their disease; and help the elderly with daily reminders and appointments.
Tech giants' voice assistants – Amazon.com Inc.'s Alexa, Apple Inc.'s Siri, Google's Home and Assistant, and Microsoft's Cortana – are all working their way into the health-care arena, and many start-ups have joined hospitals to explore uses of voice at their facilities. A report by Research and Markets forecasts the global voice-recognition market will reach $126.5bn by 2023.
To date, Amazon's Alexa tops the list of voice assistants used in hospital projects and existing hospital programs. According to clinicaltrials.gov, there are currently 58 studies listed that test Alexa in areas ranging from behavioral health and smoking cessation to cardiovascular disease and psoriasis.
"Clinical trials are a great place to try new technology," Sara Holoubek, chief executive of New York-based consulting firm Luminary Labs, told Medtech Insight. "It often precedes mass adoption."
Holoubek's firm helped organize the Alexa Diabetes Challenge in 2017 on behalf of Merck & Co. Inc. and Amazon. The challenge focused on finding ways for the Amazon Echo smart speaker and its Alexa voice assistant to help people with type 2 diabetes live healthier lives.
Holoubek said voice assistants have come a long way. It wasn't until this year that Amazon announced it had created a way for select health-care providers and organizations to transmit information via Alexa-enabled devices while remaining compliant with the Health Insurance Portability and Accountability Act (HIPAA). This is seen as a big step forward in addressing privacy concerns.
Rachel Jiang, head of Alexa Health and Wellness, said in a 4 April blog post that the company had launched six health-care skills. The skills allow patients or consumers to ask the virtual assistant for help or provide information. Among Amazon's partners in the Alexa health-care skills program are the health insurer Cigna, the pharmacy benefits manager Express Scripts, Livongo Health Inc., which developed a technology platform that helps people manage chronic disease, and Boston Children's Hospital.
Matthew Montelongo, head of business development at Boston, MA-based Orbita, which provides a technology platform for health-care organizations to develop voice applications, said that while the use of voice was relatively new in health care, he felt its application was especially promising in chronic-care management and elder care.
"The biggest growth area of care for the patient is in the home, so outside of the four walls of the hospital," Montelongo told Medtech Insight on 27 August at the Connected Health Summit conference in San Diego.
John Bosco, senior VP and chief information officer at Northwell Health, said that voice also has significant potential to ease the documentation burden on physicians, so they can spend more time with patients and less time entering data from their last interaction with patients.
New Hyde Park, NY-based health-care system Northwell announced on 1 October it had teamed up with Allscripts to develop and implement a next-generation electronic medical record (EMR) system – cloud-based, voice-enabled and built on artificial intelligence – that will be designed and tested with input from Northwell clinicians, information technology experts and administrators. The ultimate goal is to deploy it system-wide and ease the documentation burden on physicians.
"I don’t know if that would be done through an Alexa device, but I think that physicians are very frustrated with how much typing and clicking they have to do into the EMR, and they would love to see more of that be voice-enabled," Bosco told Medtech Insight. Northwell, New York state's largest health-care system, is already using Allscripts' EMR platform across its hospitals and outpatient practices.
Bosco said there weren't many start-ups trying to develop a next-generation EMR, which he described as a "very long and expensive proposition".
Among the start-ups that have developed HIPAA-compliant software for use with EMRs is Denver-based Sopris Health. Sopris developed an intelligent clinical operations platform with AI scribe technology, which creates patient notes in real time during a patient-doctor visit. The company announced on 19 March it had launched a new chat-interface documentation tool that delivers a clinic note in 45 seconds or less.
Sopris' co-founder John Froelich, a practicing orthopedic surgeon who also serves as chief medical officer, said in a statement: "The Sopris Assistant is a response to the reality that many physicians don't want documentation to be part of their exam room encounter." The system is "trained to know what questions to ask by specialty, by visit type, by note type in the simplest exchange possible".
Suki, whose AI assistant combines physicians' voice commands with the context in which they operate, announced on 26 March it had teamed up with the not-for-profit Sutter Health network to create clinical notes that can be pushed into the EMR system. The system will initially be introduced in primary care, dermatology and orthopedics. The company said results from one-year pilots across multiple specialties showed up to a 70% reduction in the amount of time physicians spent on medical notes. For every hour of patient interaction, doctors spend nearly two more hours on paperwork, Suki said in a statement.
In the coming weeks, Northwell will also begin a pilot study to put Alexa into private patient rooms, allowing patients to tap into their medical records, Bosco said. The study will comprise eight patients in two hospitals. Patients will be able to ask common questions such as, "Who is my physician?" or "When might I be discharged?" or educational questions about their disease.
"We're hoping that patients will feel that it's helpful to them to get answers more quickly to common questions, and, at the same time, it's a learning experience for us and for the hospitals to understand whether this is valuable," Bosco said.
In addition, it will allow IT staff to learn about the challenges and opportunities of integrating voice technology into key systems such as EMRs. If the study is successful, patients in the trial may be able to keep using the voice assistant, and more patients may be able to use it as well, he said.
Bosco said while "generally people feel it's valuable and that patients do like it," challenges remain, in particular, in the areas of privacy and security.
"I think there are questions around privacy and security that have to be taken into consideration – that's why we are starting with private rooms, because we don't want to take the chance that people on the other side of the room could overhear the conversation," he said.
Other technical challenges include integrating the voice assistant into the EMR, which requires ensuring that only authorized people can use Alexa and that the system will turn off quickly, so that an unauthorized user can't come up from behind and use it.
"We'll be using a badging system [ID] to validate that they [care team members] are authorized to use the device," he said. "The actual device itself and the service that comes with the Alexa device also needs to be HIPAA-compliant and patients need to feel comfortable that their information is secure and can't be accessed by Amazon and used in any way."
While it's early days, Bosco said he'd eventually like to use Alexa to do other tasks such as allow patients to order their next meal. In previous studies at Northwell, patients have used Alexa to turn off the lights and turn the TV off and on, and physicians have used Alexa to retrieve patient information using the EMR system.
In Los Angeles, meanwhile, the non-profit 886-bed hospital Cedars-Sinai announced in February it would start a pilot program in more than 100 patient rooms using an Alexa-powered platform developed by health-care start-up Aiva, allowing patients to do simple tasks such as turn their TV off and on and change channels via voice commands. A patient may also use the device to call a nurse by saying, "Alexa, tell my nurse I need to get up to use the restroom."
Aiva was part of the Cedars-Sinai Technology Accelerator Program. Cedars-Sinai awarded Aiva $120,000 in seed money, along with guidance and office space, to further develop the platform. Aiva is now also backed by Amazon and Google.
Voice As A Diagnostic Tool
Voice patterns such as tone, rhythm, volume and pitch are seen as a rich data source that companies are analyzing to diagnose a variety of conditions.
Boston, MA-based Sonde Health is developing a voice-based technology platform that analyzes vocal biomarkers, combined with machine learning, to try to improve a person's health. The start-up recently raised $16m in a Series A financing to advance its technology across multiple health conditions and device types. David Liu, named Sonde's new CEO last month, will move these efforts forward.
Jim Harper, Sonde's co-founder and chief operating officer, told Medtech Insight the company had been deliberate in establishing its technology foundation and was now looking for partners to integrate its voice technology into their technology platform for offering products and services.
"We see the largest opportunity in the long-term being enabling access, to harness the hundreds of billions of voice transactions that are being processed by voice assistants and mobile devices around the world and enable those to provide high-frequency health awareness for individuals that would opt in to that kind of health measurement service," Harper said.
In a trial designed to develop voice measures of depression, 4,000 people were asked to download a smartphone app and generate short voice samples, such as holding an "A" sound for six seconds or repeating syllables. Individuals were also asked to fill out the PHQ-9, a commonly used questionnaire to screen for depression. A score of 10 or higher on the PHQ-9 suggests that a person is at moderate-to-severe risk of depression. The results showed that Sonde's test is "on par with what one would expect for very well performing objective biomarkers", Harper said.
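The PHQ-9 threshold the trial used as its reference standard is straightforward to illustrate. The sketch below is not Sonde's software; the function names are hypothetical, but the scoring (nine items rated 0-3, summed) and the severity cut-offs are the standard published PHQ-9 bands:

```python
def phq9_score(answers):
    """Sum the nine PHQ-9 item responses, each scored 0-3."""
    if len(answers) != 9 or any(a not in range(4) for a in answers):
        raise ValueError("PHQ-9 expects nine responses scored 0-3")
    return sum(answers)

def phq9_severity(score):
    """Map a total score to the standard PHQ-9 severity bands."""
    for cutoff, label in [(20, "severe"), (15, "moderately severe"),
                          (10, "moderate"), (5, "mild")]:
        if score >= cutoff:
            return label
    return "minimal"

# A total of 10 or higher is the moderate-to-severe flag cited above.
```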
The company plans to conduct further validation studies using a larger biobank containing voice samples and metadata from more than 20,000 people.
Changes in voice tone or cadence could also be predictive of high blood pressure, stroke or a heart attack.
Danville, PA-based Geisinger Health System announced on 31 May it would use voice analysis, developed by Israel-based start-up Healthymize, for a research study to detect flare-ups in patients with chronic obstructive pulmonary disease.
"If we detect changes in a patient's health condition earlier, we can reduce hospitalization and administer care in a timely manner at home," said Paul Simonelli, principal investigator of the study and chair of the Department of Pulmonary and Critical Care Medicine at Geisinger.
At the Mayo Clinic, researchers also leveraged voice to determine the presence of coronary artery disease, according to an article in the July 2018 issue of Mayo Clinic Proceedings. Researchers analyzed the voices of 138 patients (37 controls and 101 patients who underwent coronary angiography) to assess a link between voice characteristics and CAD.
The authors wrote, "This study suggests a potential relationship between voice characteristics and CAD, with clinical implications for telemedicine – when clinical health care is provided at a distance."
Joseph Schwab, chief of orthopedic spine surgery at Massachusetts General Hospital, told Medtech Insight during an interview at this year's American Academy of Orthopaedic Surgeons annual meeting on 13 March in Las Vegas that he envisions having an AI-driven listening device in the room during a patient consultation. The device would act like a "clinical assistant" by providing real-time information based on the patient's medical records and clinical support in the decision-making process.
"And so for instance, if you're talking to the patient about, say lumbar laminectomy for lumbar stenosis, and you start talking about the risks, benefits and alternatives to surgery," Schwab said. The AI would come in from the background and give "hard data, very personalized data, in real time as part of the conversation" and offer clinical support.
Schwab said researchers at the Skeletal Oncology Research Group (SORG) at Massachusetts General Hospital have developed algorithms to be integrated with an EMR, which, he said, is not an easy task.
Berkeley, CA-based start-up Robin Healthcare is also working on a clinical assistant that aims to help doctors cut down on paperwork by drafting notes from clinicians' spoken conversations during the patient consultation, with the patient's consent.
"The device is listening to the entire conversation and uses AI and machine learning to extract meaningful conversation and then it's populating that into the EMR," said Orbita's Montelongo, who is familiar with the company. When the patient visit is over, the physician simply walks to their desk, where the information is ready to review; the physician can even send out prescriptions.
The company raised $11.5m in a Series A financing led by Norwest Venture Partners, bringing its total to $15m, which will enable it to further develop the technology.
Home And Elder Care
According to Addison, TX-based consulting firm Parks Associates, the adoption of smart speakers/displays with voice assistants among seniors has more than quadrupled in recent years, from 4% in 2017 to 17% in 2019, and 13% of seniors say voice is a "must-have" for their independent-living system.
Other experts agree that voice can be particularly life-changing for seniors. Because many older people struggle with mobility, hand dexterity, vision and cognitive function, engaging via voice is easier than using apps.
Ana Sahagun, PSH project manager at Sherman Oaks, CA-based Libertana Home Health, said Libertana started a pilot program in the summer of 2017 at two independent living facilities to assess residents' use of the Libertana app through the Orbita-powered Amazon Echo Dot device.
Typically, a resident wakes up in the morning and asks Alexa to open the Libertana app. When the app opens, the resident will hear a greeting such as, "Good morning, it's Monday, October 13, and the activities for today are bingo at twelve or coffee social at 10 a.m.," Sahagun said. Alexa will also give residents reminders to take their medications and for doctor appointments. Residents can also ask Alexa to "call the caregiver for assistance."
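A skill like the one Sahagun describes follows the standard Alexa pattern: the skill's back end receives a JSON request and returns a JSON response containing the speech to play. The sketch below is a minimal Lambda-style handler showing that response shape; the handler name and greeting text are illustrative assumptions, not Libertana's actual code:

```python
def handle_launch_request(event):
    """Build an Alexa-format response greeting the resident.
    Greeting text is illustrative, not Libertana's."""
    speech = ("Good morning. Today's activities are bingo at twelve "
              "or a coffee social at 10 a.m.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            # Leave the session open so the resident can follow up,
            # e.g. asking for a medication reminder or a caregiver.
            "shouldEndSession": False,
        },
    }
```

In production such a handler would pull the day's activities and each resident's reminders from a schedule store rather than hard-coding them.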
Sahagun said Libertana does not yet have enough data, but noted that the program has been well received thus far, has helped residents combat isolation and loneliness, and has enticed them to engage in more social activities.
"We know that it has increased the involvement [of residents] in activities and socialization and residents are happier using the Alexa," Sahagun said. "They're getting out of their room being around other residents … and it has helped with medication compliance."
The biggest challenge continues to be technical: getting clients set up with an Amazon account, personal email, cell phones and internet access. Many of the residents are making the transition from homelessness to subsidized housing and need to learn how to use new technologies, she said.
Sahagun believes the program will bring cost savings: when residents take their medications more consistently, it can prevent admissions to a hospital or skilled nursing facility.
Stuart Patterson, CEO and co-founder of Boston-based health IT start-up LifePod Solutions, believes that the company's "proactive" voice-assistant caregiver sets it apart from other voice assistants on the market because it doesn't require a "wake" prompt to activate. (Also see "LifePod Debuts 'Proactive' Voice Assistant Designed To Ease Elder Care" - Medtech Insight, 24 Sep, 2019.) Instead, LifePod uses a smart speaker created by iHome and a web-based portal that a caregiver accesses to set up a scheduled routine.
Other companies that are doing work in this space include: RemindMeCare, a person-centered care, activities and companionship software; Memory Lane, which allows users to recollect their lives and improve their mood and share stories with family and friends; and ElliQ, a proactive AI-driven social robot aimed to encourage an active lifestyle.
Integration With Care Management Platforms
Livongo announced earlier this year it would leverage Amazon's Alexa for members to ask about their last blood sugar reading, get blood sugar measurement trends, and receive insights and health nudges that are personalized to them. (Also see "Connected Health Summit: VCs Discuss Digital Health Investment" - Medtech Insight, 6 Sep, 2019.)
Montelongo said that, given the high cost of chronic disease management, he believed voice assistants could play a key role in helping patients comply with their care plan outside the physician's office.
"They [patients] need to be engaged with their care team and a voice-enabled virtual assistant that provides 24/7, 365-day access and availability for coaching and help them with behavior-modification activities," he said. For doctors, there's great value in gaining real-time access to data to help manage care and intervene when necessary.
Northwell is also using Alexa to help patients identify wait times at local emergency centers and urgent care centers. Users can ask Alexa for the shortest wait time near their zip code or check how long the wait is at a specific location.
"We find it's somewhat popular but hasn't taken off as much as we think it could down the road," Bosco said. "People are a little more apt to go to our website and look for wait times [especially for the urgent care centers]."
Bosco believes there is "probably a lot of potential and opportunity that's still to be figured out".
Some people believe that voice assistants may one day have a place in the operating room, but today there are still many barriers to adoption. A critical issue is the surgical mask, which can easily muffle sound; accuracy also comes into play, which creates liability issues.
"In environments like the OR, these are not places where you want to make mistakes," Montelongo said.
"I think everybody is at a fairly early stage of understanding how valuable this would be and what the technical challenges are and what the privacy and security challenges are," Bosco said.