Waiting for your name to be called. The squeeze on your arm from a blood pressure test. Visiting a doctor feels nothing like logging into Facebook, but for how long?
Your health is becoming another data point to be collected and analysed on a massive scale, as clinical records are digitised, apps track your heart rate, and new tools promise they can tell someone’s mental state from a tweet.
This growing mass of health data has led some experts to call for a new “social contract” — a fresh negotiation of trust between patients and the groups that want their most intimate information.
While digital healthcare could have significant benefits, University of Maryland artificial intelligence law expert Frank Pasquale warned citizens may need protection from the unintended consequences of big data analytics in healthcare.
What do we do, for example, when “big data” predicts that certain people are likely to die earlier than others? Should they know? Should their employer know?
“Once that information is out there, do people have a right to understand they’ve been classified in this way? Who gets it? How is it used?” Dr Pasquale asked.
Do you want to know?
“Big data” can work like a Rube Goldberg machine: turn the right knobs and pull the right levers, and you can produce almost any answer.
In some cases, this could transform the most innocuous moment into a predictor of health: researchers have investigated whether computer mouse movements combined with search terms could help indicate the development of Parkinson’s disease.
Online ad targeting, which tries to tap into our fears and desires by responding to things we’ve “liked” and web pages we’ve visited, also shows how this might work.
“Anybody who’s seen an ad for depression medication or other things on their Instagram or Google search results knows intuitively that … they’ve been classified as someone who might benefit from these things,” Dr Pasquale said.
Exposure to an ad is arguably “a relatively non-consequential inference”. But if that same data were fed into an employment score, Dr Pasquale believes the law should intervene.
Lisa Eckstein, a medical law lecturer at the University of Tasmania, agreed there ought to be greater protections for what can be done with insights gleaned from medical databases.
If population-level data collection indicates a group has a cancer predisposition, early intervention could be an invaluable result.
But we also need to ensure such a finding does not leave the lab and compromise those people’s quality of life, she said — such as their ability to get insurance.
If that were to occur, the relationship of trust between patient and healthcare workers could be damaged.
“I think that’s going to come down to how much people feel like the uses that that data is being put towards are for their benefit, and for societal benefits that they are connected with and agree with,” Dr Eckstein said.
“Even when research has been done in the best possible way, and the information has been accessed through all the appropriate channels, … there still needs to be strong oversight.”
Other uses for My Health Record
In Australia, the shift to an opt-out My Health Record scheme has also drawn attention to the Government’s plan to use the database for “secondary purposes”.
Unless users of the national health record platform actively log in and choose otherwise, their information may be examined in anonymised form for research that cannot be “solely” commercial.
While insurance companies are currently excluded, some are campaigning to reverse that decision, and pharmaceutical companies are allowed to apply for access.
This points to a possible disconnect between public understanding and government intention: when Australians visit the doctor, being part of a scrutinised dataset is unlikely to be front of mind.
Dr Pasquale said for patients, the most important issue with a digital health record was not simply the ability to opt in or out.
“The medical establishment is certainly right to emphasise the real and potential benefits of digitisation and integration of health records,” he said.
Three levels of protection
Dr Pasquale suggested there should be at least three levels of protection for digitised health data:
- Allowing people to consent to having it collected at all
- Maintaining high security at the hospital or doctor’s office to protect against data breaches or loss of records
- If there is a data breach or information is lost, ensuring a strong framework of laws is in place to prevent employers, landlords or finance firms, for example, from using it
As genetic and genomic information becomes more prevalent, Dr Eckstein suggested these risks would become increasingly complicated.
In Australia, which has no general right to privacy, life insurers can ask if an applicant has had, or is considering having, genetic testing and can use the results to approve or deny coverage.
“Up until now, health information has been about an individual,” she said.
“With genetic and genomic information … that can have really serious implications for that individual’s family members, whether living, deceased or yet to be born.”
A recent case in the United States illustrates this reach: genetic information from third and fourth cousins reportedly helped identify the notorious “Golden State Killer”.
In Dr Eckstein’s view, this intergenerational power means we need tighter restrictions on the reidentification of individuals from genomic data, as well as controls on how such information can be used outside of delivering healthcare.
“Consent, security and post hoc protection for people whose data has gotten out there,” Dr Pasquale added.
“Before we charge ahead with faster digitisation and integration of records, we’ve got [to be] really careful that extra protections and foundations are in place.”