Lawsuit against Google for use of UK patient data • The Register
A UK law firm is taking legal action on behalf of patients whose confidential medical records were allegedly obtained by Google and DeepMind Technologies in breach of data protection laws.
Mishcon de Reya said today it is bringing a representative action on behalf of Mr Andrew Prismall and the approximately 1.6 million people whose data was used as part of a programme to test medical software developed by the corporations.
The firm told The Register that a claim has already been filed with the High Court.
DeepMind, acquired by Google in 2014, worked with the search giant and the Royal Free London NHS Foundation Trust under an agreement reached in 2015.
The law firm said the tech companies obtained about 1.6 million individuals' confidential medical records without their knowledge or consent.
The Register has contacted Google, DeepMind, and the Royal Free Hospital for comment.
“Considering the very positive experience I have always had with the NHS during my various treatments, I was very worried to find that a tech giant had ended up with my confidential medical records,” lead claimant Prismall said in a statement.
“As a patient receiving medical treatment, the last thing you expect is for your private medical records to end up in the hands of one of the biggest tech companies in the world.
“I hope this case will help achieve a fair outcome and closure for all patients whose confidential records were obtained in this case without their knowledge or consent.”
The case is led by Mishcon partner Ben Lasserson, who said: “This important claim should help answer fundamental questions about the handling of sensitive personal data and special category data.
“This comes at a time of heightened public interest and understandable concern about who has access to people’s personal data and medical records and how that access is managed.”
The law firm argued that the action would be an important step in trying to address “very real” public concerns about large-scale access to private health data and its use by tech companies. The case also raises questions about the precise status and responsibilities of these tech companies in the data protection context, both in this specific case and potentially more generally.
In 2017, Google’s use of hospital patient medical records to test a software algorithm was deemed legally “inappropriate” by Dame Fiona Caldicott, then National Data Guardian at the Department of Health.
In April 2016, it was revealed that the web giant had signed an agreement with the Royal Free Hospital in London to create an app called Streams, which can analyze patient details and identify those with acute kidney injury. The application uses a fixed algorithm developed with the help of doctors, so it is not, strictly speaking, AI.
The software – developed by Google’s AI subsidiary DeepMind – was first tested with simulated data, then tested again using 1.6 million sets of real NHS medical records provided by the London hospital. However, not all patients were aware that their data was being passed to Google to test the Streams software. Streams has since been deployed internally and now handles real people’s details, but during development it used live medical records as well as mock entries.
Dame Caldicott told the hospital’s medical director, Professor Stephen Powis, that the trust had overstepped the mark, and that patients had not consented to their information being used in this way prior to deployment.
A subsequent investigation by the Information Commissioner's Office revealed several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used in the test.
Under a data-sharing agreement uncovered by New Scientist, Google and its artificial intelligence wing DeepMind gained access to current and historical patient data at three London hospitals run by the Royal Free NHS Trust. ®