By EarSwitch

AI may transform healthcare – but it needs responsible data

AI’s potential is a topic discussed widely in healthcare, and for good reason. AI – particularly machine learning (ML) – could empower healthcare providers to deliver effective and proactive care, whether it’s by automating back-office functions or even identifying blood biomarkers for dementia. However, if AI is to transform how we provide and access healthcare, then it needs data we can all trust. 


As Google puts it, ML systems learn and improve from experience. With this in mind, any ML models that shape the healthcare industry need data that offers a breadth of credible insights. Here’s why, and what we believe responsible, trustworthy health data looks like. 


How ML could transform healthcare 

 

AI could open up many exciting applications in healthcare (and is already beginning to). Studies have suggested that ML could support the diagnosis of lung cancer by analysing imaging. Others conclude that ML could detect events in eye movement data more successfully than some current algorithms.

 

By finding patterns in large data sets, machine learning could equip professionals with insights quickly and precisely. The key here is that AI should empower people, and not disenfranchise those who aren’t digitally confident.  


AI could alleviate pressures on an already stretched workforce. This is one of the ways that we expect EarMetrics®, our in-ear biometric sensor technology, will benefit healthcare. By recording health metrics as they change over time – as you move, for instance – EarMetrics® will create longitudinal data sets. When fed into ML models, these could help practitioners identify when someone’s vitals are leaving ‘normal’ ranges and provide timely alerts. Crucially, we believe that EarMetrics® will record the quality of data needed to inform effective decision making.
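To make the idea of timely alerts concrete, here is a deliberately simplified sketch of how longitudinal readings might be screened for values leaving a personal 'normal' range. It uses a rolling baseline rather than a trained ML model, and all names, sample values, and thresholds are illustrative, not part of EarMetrics®.

```python
from statistics import mean, stdev

def drift_alerts(readings, window=5, z_threshold=2.0):
    """Flag readings that fall outside a personal 'normal' range.

    The range is estimated from a rolling window of earlier readings;
    a reading more than `z_threshold` standard deviations from the
    rolling mean is flagged for review.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > z_threshold * sigma:
            alerts.append(i)
    return alerts

# Resting heart rate (bpm) sampled over time, with one abrupt jump.
hr = [62, 61, 63, 62, 64, 63, 62, 95, 63, 62]
print(drift_alerts(hr))  # → [7], the index of the anomalous reading
```

A real system would of course use clinically validated models and per-metric reference ranges; the point here is only that longitudinal data makes this kind of screening possible at all.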


Why we need responsible health data 

 

When it comes to defining what responsible, trustworthy data looks like, there are multiple factors to consider. Perhaps the most important is accuracy. If non-medical-grade data were fed into a predictive model, for example, it might falsely reassure people about their health outcomes. Health data also needs to be reliably linked to the individual. Imagine that a wearable, such as a fitness tracking watch, was shared between people.


How could we be sure who the data belongs to? 


Equity is also key. We’ve already seen how bias in healthcare and medtech can cause real harm, such as the reported limitations of some common pulse oximeters for people with brown and black skin. This is something that the UK government has recently acknowledged, while also committing to “work with partners to improve transparency of data used in the development of medical devices using artificial intelligence (AI), as well as AI products which influence clinical decisions”. To ensure a more inclusive future of healthcare, data that minimises the risk of further bias is something we all need to champion. 

 

Recording – and processing – health data responsibly  

 

Responsibility isn’t just about the data itself, but also how it’s used. For instance, it’s crucial that data remains user-controlled. People should be able to make their own decisions around their data, including who it’s shared with, how it’s used, and when. 


For ML-enabled applications to have a positive impact on healthcare, they also need to be fed with reliable and accurate data. The saying “garbage in, garbage out” sums this up well; if the data isn’t credible or in a usable format, then it’s unlikely to create helpful outcomes. In healthcare, we might achieve this by making sure that health data is synchronised, providing outputs from several sensors to allow data to be cross-referenced for accuracy.  
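The cross-referencing idea above can be sketched in a few lines. This is a hypothetical illustration, not EarMetrics® code: two synchronised sensor streams are compared, agreeing readings are kept, and disagreements are flagged rather than silently passed downstream. The sensor names and tolerance are assumptions made for the example.

```python
def cross_check(primary, secondary, tolerance=5):
    """Cross-reference two synchronised sensor streams.

    Readings are trusted when both sensors agree within `tolerance`;
    disagreements are flagged for review instead of being kept.
    """
    trusted, flagged = [], []
    for t, (a, b) in enumerate(zip(primary, secondary)):
        if abs(a - b) <= tolerance:
            trusted.append((t, (a + b) / 2))  # average the agreeing pair
        else:
            flagged.append(t)                 # sensors disagree: needs review

    return trusted, flagged

in_ear = [62, 63, 64, 120, 65]  # e.g. in-ear heart-rate readings (bpm)
wrist  = [61, 64, 63,  66, 64]  # a second, synchronised sensor
trusted, flagged = cross_check(in_ear, wrist)
print(flagged)  # → [3], the time step where the streams disagree
```

The design choice worth noting is that a disagreement doesn’t tell you which sensor is wrong, only that the reading can’t yet be trusted; that honesty is part of what “garbage in, garbage out” demands.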


Assessing the quality or trustworthiness of data requires us to consider many different factors. However, we believe that for AI to transform healthcare, the data it’s fed with must be medical grade, equitable, proprietary, contextual to physical activity, and consented to. Only then can it inform decision making that creates positive outcomes for patients, the wider population, and providers. 


We intend to support this with EarMetrics®. By integrating EarMetrics® into in-ear medical devices, wearables, or hearing aids, we aim to empower patients and professionals to record multi-biometric data that is racially inclusive and interoperable. To learn more about EarMetrics® and its potential use cases, get in touch. 
