What has the race for a Covid-19 vaccine revealed, and how can AI and blockchain technologies improve these systems?
For a full analysis of technology in drug discovery, download our report "AI, Privacy & Genomics: The Next Era of Drug Design".
Big Pharma has been held up as the devil of the medical industry for decades now. Charging high premiums for life-saving medicines makes it easy to discount them as evil.
However, this discussion is more nuanced in medical communities, where the years and investment that go into drug R&D are better known. 10-12 years and upwards of $2 billion are not small potatoes. But that still doesn’t justify charging prohibitive rates for life-saving treatments.
2020 will be remembered for the Coronavirus, which forced most, if not all, of the world into lockdowns, quarantines, and economic hardship. But this year also saw the first AI-identified drugs enter human trials.
While several biotech startups used AI to identify likely vaccine candidates faster than their competitors, we still rely heavily on traditional drug design methods. However, the speed at which this technology can cross-reference information drove down the front-end time to clinical trials. If innovation continues, much of the early testing could eventually be performed in-silico - that is, by computers - which could drive down not only delivery time, but also cost.
This is a bet on the future, but one that is already beginning to show signs of paying off. And, to be quite honest, the pharmaceutical industry cannot keep going as it is. However, there are several conditions for making sure that any innovations in this sector remain useful, and do not need to be rebuilt once the problems grow too big.
The considerations are two-fold: first, how to make sure there is enough available data, both for training models and for providing a diverse base for research; and second, of course, privacy.
Leon Doorn, Head of Regulatory Compliance at Aidence, an AI radiology company, summed this up with:
‘AI solutions have the potential to improve healthcare by supporting physicians in making faster, better informed and more accurate decisions. This is only possible if AI companies have sufficient access to large amounts of data’.
Now is the time to make sure that this technology has privacy baked into its core. There’s no use developing it only to have to rebuild it for better privacy 5-10 years from now, after a leak or some other scandal.
To address this, several technologies in AI have emerged that allow a model to be trained without ever seeing the data. Namely, they are: Homomorphic Encryption, Secure Multi-Party Computation (SMPC), and Federated Learning.
However, no single technology offers a perfect solution. Homomorphic Encryption is underdeveloped, and while extremely exciting for its ability to perform computations on entirely encrypted data, it’s just not fast enough to be usable.
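To see what "computing on encrypted data" means in miniature, consider that textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. The parameters below are classroom-tiny and unpadded RSA is insecure in practice, so this is purely a teaching toy, not a usable scheme.

```python
# Toy illustration of homomorphic encryption: textbook RSA satisfies
# Enc(a) * Enc(b) mod n = Enc(a * b), so a server can multiply values
# it cannot read. Tiny demo parameters only - never use unpadded RSA.

p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n   # computed on ciphertexts only
assert dec(product_cipher) == a * b      # the "server" never saw 7 or 6
```

Fully homomorphic schemes extend this idea to arbitrary computation, which is precisely where the current performance cost comes from.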
SMPC, while significantly older - dating back to the 1970s - has reached a usable level. However, as the name implies, it relies on multiple parties to obscure the data. This comes not only with higher communication costs, but also with stronger assumptions about how many of those parties might be bad actors. It is still a useful technology, but one that needs further development.
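The core building block of many SMPC protocols, additive secret sharing, can be sketched in a few lines. The hospital scenario and the numbers below are illustrative assumptions, not drawn from any real deployment.

```python
import random

# Minimal sketch of additive secret sharing: a value is split into
# random shares that sum to it modulo a prime, so no single party
# learns anything from its share alone.

PRIME = 2**61 - 1  # a large prime field

def share(secret, n_parties=3):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two hypothetical hospitals secret-share a patient count; each party
# adds its two shares locally, so the total is computed without either
# original input ever being revealed to anyone.
a_shares = share(120)
b_shares = share(95)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 215
```

The communication cost the article mentions is visible even here: every value must be shipped to every party as shares before any arithmetic can happen.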
Federated Learning, meanwhile, is perhaps one of the most promising technologies for healthcare, and has already been deployed by tech giants such as Apple and Google. And while it is great for preserving privacy and meeting GDPR requirements - data never leaves its original source - it too has its drawbacks.
Namely, Federated Learning alone is not enough to ensure that the algorithm will not memorise individual data points. This can be offset with other privacy-preserving technologies, like Differential Privacy. And, all in all, with the vertical integration and personalisation that Federated Learning can provide, it remains a deeply interesting area of research.
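The combination described above can be sketched as follows: each client trains on its own data and shares only a noised model update, which the server averages. Everything here - the least-squares model, the noise scale, the client data - is an illustrative assumption, not a real healthcare pipeline.

```python
import numpy as np

# Hedged sketch of federated averaging: clients compute updates on
# local data; only the updates (noised, in the spirit of differential
# privacy) are sent to the server, never the raw records themselves.

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    # One gradient-descent step on this client's local data only.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, clients, noise_scale=0.01):
    updates = []
    for X, y in clients:
        w = local_update(weights, X, y)
        # Noise the update before sharing, so individual records are
        # harder to recover from any single client's contribution.
        updates.append(w + rng.normal(0, noise_scale, w.shape))
    return np.mean(updates, axis=0)  # server averages, never sees data

# Four simulated clients, each holding 20 private records.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(50):
    w = federated_round(w, clients)
```

The privacy-utility trade-off lives in `noise_scale`: more noise means stronger protection for any one record, but a noisier global model.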
This has already proven deeply useful for medicine, with studies showing that Federated Learning can account for bias better than training a model on pooled data.
However, as Precision Medicine continues to grow as a field, Federated Learning presents the interesting opportunity of designing several population-specific drugs or vaccines simultaneously, as the models can be vertically integrated. While this is several years, if not decades, out, the potential is very exciting - especially given that this technology was originally designed for its privacy preservation, not its personalisation.
Privacy-preserving technologies are only half the battle, though. There is still the issue of how to connect different companies and research institutes to the data, all while obtaining patient consent.
For this, Estonia’s e-Health initiative provides a great blueprint. Estonia already has a blockchain-based access portal called the X-Road. This access portal enables all documents to be stored in their original location, so they are never replicated, and creates an immutable trace of who accessed it.
Florian Marcus, Digitisation Expert at e-Estonia, explained the services it enables:
‘The patient [can] see in the logbook that this particular doctor looked at this particular patient dataset at this time for this and that reason. This can be challenged in court if it feels unqualified and when the system was introduced, some doctors lost their licence over it’.
This restores patient control over how their data is used. While many other countries have moved towards this in law, few have actually achieved this level of transparency about who has accessed which records.
For the sake of research purposes and improved consent, transparency is necessary. And, if such a system is set up on a broader scale, it could help to improve many research efforts for better treatments.
As mentioned at the beginning of this article, Covid-19 has drawn greater attention to how vaccines and drugs are engineered. While AI is still in the early days of its use in drug design, it is helping to cut down on front-end time. Moderna gained notoriety for using AI to produce one of the very earliest Covid-19 vaccine designs.
But, at the moment, many companies using AI to screen drug designs have shied away from using human genomes, due to both the difficulty of accessing this information and the sensitive nature of the data.
However, human genomes can improve early testing to make sure that drug targets have:
Simultaneously, the EU has set the goal of collecting 1 million human genomes by 2022. Without a network to connect this information to researchers, though, it is just an interesting and expensive goal.
A genome sharing network has already been trialed in the US, under the acronym of GRIN. GRIN brought together three major groups:
Their research focuses on prenatal genetics, pharmacogenomics, cancer genomics, and more. This initiative was largely seen as a success, but could be improved with better technology to streamline access and consent.
Blockchain, as shown by e-Estonia, can standardise much of this, but it also brings two major improvements. First, is that data is not replicated. Researchers are merely granted access to existing records. Second, as Marcus outlined, patients can see who has accessed those records and block them from view if they choose. This means that consent is not given only once, but is a continual process, wherein it can be retracted.
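The "immutable trace" idea behind such access logs can be illustrated with a simple hash chain: each log entry includes the hash of the previous one, so rewriting any past entry breaks every hash that follows. This is a teaching toy in the spirit of X-Road-style logging, not Estonia's actual implementation; all names in it are made up.

```python
import hashlib
import json

# Sketch of a tamper-evident access log. Each entry commits to the
# previous entry's hash, so history cannot be silently rewritten.

def append_entry(log, accessor, record_id, reason):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"accessor": accessor, "record": record_id,
             "reason": reason, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "dr_tamm", "patient_42", "follow-up consultation")
append_entry(log, "dr_kask", "patient_42", "second opinion")
assert verify(log)

log[0]["accessor"] = "someone_else"   # tampering with history...
assert not verify(log)                # ...is immediately detectable
```

A patient-facing logbook built on such a chain gives exactly the property Marcus describes: every access is recorded, attributable, and challengeable after the fact.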
This is a major limitation in current genetic data research projects. 23andMe is possibly the most famous of these, and has entered the news for forming research partnerships with companies to which clients later decide they don’t want to lend their data. However, by the time they withdraw consent, their data cannot be pulled from ongoing research, and notably, 23andMe’s consent documents contain no clause to inform participants about which studies their data is used in.
It’s time that health research was held to higher data protection and consent standards. The General Data Protection Regulation (GDPR) was far and away the strictest data protection regulation when it was implemented, and may still be to this day. However, it contains a clause granting much greater allowances for research.
Even so, the GDPR has a history of forcing privacy-first technology development (Federated Learning was announced in its wake). And if it were applied to research, especially pharmaceutical research, in the same manner, it could also force better information-sharing networks that centre the consent of data providers while still enabling data access.
Of course, harsher regulations can hamper innovation. However, as Federated Learning and several other privacy-preserving technologies show, they can also force companies to prioritise data protection and drive innovation. Data protection should be at the forefront for anyone working with health data, and the broad research exemption remains one of the GDPR’s greatest failings.
Even with these improvements in place, AI and better collaboration networks will likely not be enough on their own to save our medical industries, or to restore trust in sectors that have seen so many scandals. But solutions must be found, and they must be multi-pronged.
Medicine is not going anywhere soon, so we must work with what we have to improve these systems and make treatments accessible to everyone who wants and needs them. We just can’t forget to respect the needs of the people who contribute to those solutions.
Maggie is a writer, researcher, and editor. Trained in literature, critical theory, and gender studies, they are now exploring the ways that technology is changing the landscape of human interaction.