Genetic Engineering and Biotechnology News has just published the first survey conducted by its Science Advisory Board, a project called SciPulse Perspectives which aims “to take the ‘pulse’ of scientific minds all over the world”. The survey asked 500 US members “to share their views on the direct-to-consumer (DTC) testing industry, specifically in the context of the FDA’s warning letter to 23andMe back in November of 2013.”
You can read the poll results for yourself, and I won’t comment on them here. What concerns me is that the way the poll has been framed suggests a fundamental misunderstanding of the FDA’s action. The poll infographic states that the FDA’s position is that “there is a potential for harm when genetic testing results are presented without the counsel of a physician” and that the FDA decided “to stop 23andme from providing genetic interpretations directly to individuals without involving an intermediary physician for counsel.” This is a serious misrepresentation of the FDA’s action, which, as I stated in a previous post, is concerned above all with the failure of 23andMe to provide the Agency with data about the safety and effectiveness of their test, the traditional standard to which the FDA holds medical devices. Anyone who reads the FDA’s warning letter to 23andMe carefully cannot be in any doubt as to the Agency’s concerns.
To recap: back in 2012, 23andMe submitted portions of the clinical service they provide for FDA approval. The FDA’s 2013 letter revealed that the company had subsequently been in discussion with the Agency about
1) which regulatory pathway they have to take (whether they could get approval via the less onerous 510(k) route or instead have to undertake the more stringent process for Pre-Market Approval (PMA)), and
2) the types of clinical evidence which the FDA requires in order to approve 23andme’s test.
Let me reproduce the relevant parts of the letter in full:
“we provided ample detailed feedback to 23andMe regarding the types of data it needs to submit for the intended uses of the PGS. As part of our interactions with you, including more than 14 face-to-face and teleconference meetings, hundreds of email exchanges, and dozens of written communications, we provided you with specific feedback on study protocols and clinical and analytical validation requirements, discussed potential classifications and regulatory pathways (including reasonable submission timelines), provided statistical advice, and discussed potential risk mitigation strategies. As discussed above, FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.
However, even after these many interactions with 23andMe, we still do not have any assurance that the firm has analytically or clinically validated the PGS for its intended uses, which have expanded from the uses that the firm identified in its submissions. In your letter dated January 9, 2013, you stated that the firm is “completing the additional analytical and clinical validations for the tests that have been submitted” and is “planning extensive labeling studies that will take several months to complete.” Thus, months after you submitted your 510(k)s and more than 5 years after you began marketing, you still had not completed some of the studies and had not even started other studies necessary to support a marketing submission for the PGS. It is now eleven months later, and you have yet to provide FDA with any new information about these tests. You have not worked with us toward de novo classification, did not provide the additional information we requested necessary to complete review of your 510(k)s, and FDA has not received any communication from 23andMe since May. Instead, we have become aware that you have initiated new marketing campaigns, including television commercials that, together with an increasing list of indications, show that you plan to expand the PGS’s uses and consumer base without obtaining marketing authorization from FDA.”
So, to be clear: the FDA took action not because it wants to stop companies like 23andMe from selling tests direct-to-consumer, but because 23andMe had failed to comply with the FDA’s requirement to provide data on the analytic and clinical validity of their test. So if you are going to run a poll on the FDA’s actions, then the question you should ask is this: do you believe that DTC genetics companies should have to comply with the FDCA and provide evidence to the FDA that gives reasonable assurance of the safety and effectiveness of their tests?
Five days after 23andMe received the FDA’s warning letter, a disgruntled consumer filed a class action lawsuit in the US District Court for the Southern District of California (Forbes, December 2, 2013). Since my knowledge of class actions is derived solely from John Grisham novels, I invited Tania Bubela to provide a guest post explaining the import of this new development:
Lisa Casey, on behalf of herself and others, claims: (1) 23andMe “falsely and misleadingly” advertised their saliva kit and Personal Genome Service (PGS) as providing health information on conditions, traits, drug responses, and carrier status without an analytical basis or clinical validation; (2) 23andMe provided information from the tests and questionnaires completed by consumers to researchers “even though the test results are meaningless”; and (3) despite a lack of FDA approval, 23andMe continued to increase the list of indications for PGS and “initiated new marketing campaigns… in violation of the Federal Food, Drug and Cosmetic Act”.
So what is a class action lawsuit, how is it started and what happens next? A class action lawsuit is a procedure in civil (not criminal) law that enables a group of individuals to join forces as plaintiffs against a defendant (or defendants) – in this case, 23andMe. Many jurisdictions around the world enable class actions to alleviate the burden on the judicial system of multiple similar lawsuits against the same defendant(s).
While class actions save the system significant time and money, they are also beneficial for the plaintiffs. Often, plaintiffs are claiming small amounts of damages, making a lawsuit uneconomical (no lawyer, unless ideologically motivated, would take on a lawsuit for damages of $99, the cost of 23andMe’s test). Lawsuits are expensive propositions and a class action allows the cost to be spread amongst the plaintiffs. By joining forces, a class action may become an action for many millions of dollars, attracting competent legal counsel. Legal counsel often take such cases on contingency, meaning they are not paid upfront, but take a percentage of the damages if they win.
However, there are also downsides to any legal action, besides time, money and energy. For example, there are many hidden costs to contingency arrangements, which are not widely advertised by law firms. Plaintiffs remain responsible for paying disbursements (e.g., phone calls, photocopies, and other general office charges), which can add up to significant sums of money. Of even greater concern, if the case goes to court and the plaintiffs lose, they may be liable for the substantial legal costs of the defendant(s), since the general rule is “loser pays”. Class actions allow these costs to be spread amongst the individuals who have signed on as plaintiffs, often for a small fee.
So far, Lisa Casey has simply filed a class action claim. Before it can proceed as a class action, however, it must be certified by the Court. The filing spends some time explaining how the claim meets both Federal and State law in California on class actions. A threshold issue is whether the subject matter of the class action is allowable. Class actions often arise in the context of consumer protection, and the plaintiff has claimed false and misleading advertising of products and services – appropriate subject matter for a class action. Next the Court will consider the criteria for a class action: numerosity; commonality; adequacy of the class representative; and adequacy of legal counsel. Each of these is addressed in the filing. Numerosity simply means that there are enough plaintiffs to make a class action an efficient means of handling the claims. Here the plaintiffs are all customers of 23andMe in the United States, who number at least in the thousands (the filing even suggests the possibility that the plaintiffs number in the millions, a very optimistic assessment of the company’s customer base at this time).
The next requirement is common issues of fact and law, and this may explain why the facts in the filing are limited to exposure to advertising, the receipt of the saliva kit, the purchase of the test, the return of results, and the fact that the terms and conditions for 23andMe’s services apply to all customers. If the plaintiff claimed other kinds of damages, such as psychological harm arising from the receipt of disturbing test results – such as increased susceptibility to breast cancer or Alzheimer’s – the facts would be idiosyncratic and not necessarily common enough to be representative of all potential plaintiffs in the class. In other words, this filing has played it safe, increasing the likelihood of certification, but reducing the scope for damages. The facts also suggest that Lisa Casey’s experiences with 23andMe are representative of the majority of customers, meaning that she may meet the criterion as an adequate representative plaintiff. Finally, given the sophistication and rapidity of the filing, it appears that competent legal counsel has been retained, the final criterion.
So what next? Lawsuits of this magnitude often take years, if not decades, to pass through the many main and ancillary hearings and likely appeals. The best-case scenario is a rapid settlement of the suit, and most civil actions do settle out of court after the process of discovery, which enables examination of the evidence and the strength of the plaintiffs’ case. However, if there is no settlement, the claim requests a jury trial in a State – California – with a long history of generous damages awards by juries to plaintiffs. Such awards will include the direct damages claimed by the plaintiffs, but may also include punitive damages, which the jury may award to “punish” a badly-behaved defendant. Punitive damages in California are awarded according to statute and are not compensatory in nature. While they go to the plaintiff (with a large percentage going to the plaintiffs’ legal counsel), they are designed to deter others from engaging in the same conduct as the defendant. If it loses, 23andMe therefore faces compensatory damages, possible punitive damages, and, of course, payment of the plaintiffs’ legal fees.
In sum, whether or not the class action is certified, and whether it wins or loses, 23andMe is in deep trouble. One result of legal action is a loss of consumer and investor confidence, and when faced with a significant financial penalty from a class action, even robust companies have failed. Whether 23andMe can survive remains in serious doubt. What is not in doubt is the emotional energy, cost and time 23andMe will have to expend to defend its actions and business model, both in the court of public opinion and in the US District Court for the Southern District of California.
Tania Bubela, Associate Professor, Department of Public Health Sciences
School of Public Health, University of Alberta
The news that the FDA has written a stiffly worded warning letter to 23andMe signals a new milestone on the long and winding road to a coherent regulatory framework for consumer genomics. I was just finishing a gruelling slog to submit a paper when I got an email alerting me to this bombshell, and I have spent the time since en route to Montreal, so I am not up to date with reaction on the internet (although I have read Myraqa’s excellent post).
It is now over a year since 23andMe made a de novo 510(k) submission to the FDA, and things have been very quiet since then. The FDA’s letter makes plain that behind the scenes things have not been going well between the company and the agency. I have read quite a few FDA warning letters in the course of my research over the last few years, but I have never seen one this stern before. The FDA are indicating extreme displeasure with 23andMe’s failure to meet their regulatory requirements. So where did it all go wrong, and why has a letter been sent now? I have spoken to neither 23andMe nor the FDA, but here is my take on what has happened.
I think to understand this we need to go back to the FDA’s advisory committee meeting on consumer genomics, held in March 2011. I was invited to that meeting to give an overview on the regulatory trends across the globe, so had a ringside seat at this particular fight. Much of the subsequent internet discussion of the meeting focused on the panel’s view that most genetic tests should be offered through a physician. But in my opinion that was not the real meat of the meeting. The substance of the discussion was about science, not ethics. From that perspective the highlights of the meeting included a testy exchange between the panel and deCODE’s Jeff Gulcher on statistical issues such as the importance of pre-test probability, and the FDA’s presentation on the statistical challenges of validating polygenic genetic risk assessment (which was given by the scarily brainy Marina Kondratovich).
The critical question the FDA asked the panel was whether this class of tests should be held to the FDA’s statutory standard, i.e. should they be required to provide “clinically significant results”? Unsurprisingly, the panel was not willing to operate a policy of genetic exceptionalism for consumer genomics companies and affirmed that this standard should be applied. But genetic exceptionalism was precisely what 23andMe were asking for at the meeting: they suggested that the FDA needed to redefine clinical validity to deal with their class of tests. The FDA’s new warning letter suggests that much of the tension between company and agency remains at this sticking point.
23andMe have had a fair measure of success in challenging the status quo. I believe that they, along with their erstwhile competitors deCODE and Navigenics, have shifted the terrain on which we debate the merits of genetic risk prediction, largely by reframing the issue as one of consumer rights. But that ideological victory is of little import when it comes to the question of what constitutes adequate validation for their tests. That was an issue on which the three companies could not agree even amongst themselves when they undertook their industry standard-setting initiative. It should be no surprise, then, that this remains 23andMe’s Achilles heel.
But why did the FDA’s letter come now? Only the agency can answer that question, but I think the tipping point was the launch of 23andMe’s national consumer advertising campaign. We should remember that the last spate of FDA action in this sector was prompted when Pathway Genomics began to sell its tests through Walgreens, a high-street pharmacy chain. Advertising your tests on television when you have applied for, but are struggling to gain, FDA approval is not a smart tactic. It is a move, I would suggest, born of desperation rather than stupidity. The company must have known they were likely to raise regulators’ hackles, but they are desperate for a milestone which would convince outsiders that they are achieving some measure of success (even if they are still a long way from profit).
Perhaps they thought that, since the FDA have still not received the green light from the Obama administration to issue their draft guidance on the regulation of laboratory-developed tests, they had some political wiggle room. Clearly that was a mistake. Historically the FDA has often developed new policy through a bottom-up process of individual actions, and that is how the LDT issue is currently being played out – witness the agency’s recent action against Atossa Genetics (and see the Myraqa blog for a great post on that story). The FDA are to be applauded for sticking to their scientific guns. Let us hope that the draft LDT guidance follows soon.
The following post has just appeared in BioNews:
Last week saw the launch of the UK arm of the Personal Genome Project (PGP). This is the second major sequencing initiative launched this year in the UK (the first being Genomics England). Interestingly, both projects seek to sequence 100,000 genomes, and of course both are fuelled by a belief that genomics is set to become a routine part of healthcare. Yet the projects are as notable for their differences as their similarities.
Genomics England is bankrolled by the government, with £100 million of NHS funds earmarked for the project. The source of funding for PGP is rather more disparate, with the PHG Foundation reporting that it is to be funded for the next year ‘by the Chinese Beijing Genomics Institute (BGI) and commercial sequencing and interpretation service providers Illumina, Life Technologies, and Personalis’. Whether it can secure long-term funding is not clear, with Science Insider reporting that the more established US and Canadian arms are struggling for funding. Another difference is in their approach to confidentiality: PGP operates on the principle that research participants share their data publicly.
Controversy over protecting the privacy of genetic research participants intensified after the 2008 publication by Homer et al, which demonstrated a technique for re-identifying genotyped individuals, even within pooled mixtures of DNA. The publication immediately led research funders, including the US National Institutes of Health and the Wellcome Trust, to place new restrictions on access to data from genome-wide association studies (GWAS).
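For readers curious about how such re-identification is even possible, the statistical intuition behind the Homer et al approach can be illustrated with a toy simulation. This is a deliberately simplified sketch with simulated data, not the published method (which used tens of thousands of SNPs and a formal hypothesis test): for each genetic marker, we ask whether an individual's genotype sits closer to the allele frequencies of the DNA pool than to those of a reference population, and sum that comparison over many markers.

```python
import random

# Toy illustration of the Homer et al. (2008) idea: an individual who is a
# member of a DNA pool nudges the pool's allele frequencies slightly toward
# their own genotype at every SNP. Summed over thousands of SNPs, that tiny
# per-marker signal becomes detectable. All data here are simulated.

random.seed(0)
N_SNPS = 5000
POOL_SIZE = 100

# Reference population allele frequencies for each SNP
ref_freqs = [random.uniform(0.1, 0.9) for _ in range(N_SNPS)]

def genotype(freq):
    """Sample a diploid genotype (0, 1 or 2 copies of the allele)."""
    return sum(random.random() < freq for _ in range(2))

# Simulate a pool of individuals; the "target" is a member of the pool,
# the "outsider" is not.
pool = [[genotype(f) for f in ref_freqs] for _ in range(POOL_SIZE)]
target = pool[0]
outsider = [genotype(f) for f in ref_freqs]

# Allele frequencies observed in the pooled mixture
pool_freqs = [sum(ind[j] for ind in pool) / (2 * POOL_SIZE)
              for j in range(N_SNPS)]

def homer_statistic(person):
    """Sum over SNPs of |y - ref| - |y - pool|: tends to be larger when
    the person's genotypes sit closer to the pool than to the reference."""
    total = 0.0
    for j in range(N_SNPS):
        y = person[j] / 2  # person's allele frequency: 0, 0.5 or 1
        total += abs(y - ref_freqs[j]) - abs(y - pool_freqs[j])
    return total

# The pool member scores higher than the non-member
print(homer_statistic(target) > homer_statistic(outsider))
```

The unsettling point the simulation makes concrete is that no single marker gives the game away; it is the aggregation across thousands of markers that defeats the apparent anonymity of pooled data.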
The PGP’s solution to the problem of privacy is not to increase safeguards but to do away with them altogether, by enrolling research participants in a project which requires consent to full public disclosure of their data. This is a radical move. Genetic epidemiology, like most other biomedical research, has hitherto operated according to a norm that seeks to ensure that research subjects are protected by the cloak of anonymity. As far as I am aware, this remains the case for most genomic research. The ethics and governance framework of the UK Biobank, our flagship initiative in this research field, states that the organisation ‘is committed to protecting the confidentiality of data and samples’. The UK’s other flagship initiative is Genomics England, an organisation whose governance framework is still in development but which has already promised to ‘strictly manage secure storage of personal data in accordance with existing NHS rules designed to securely protect patient information’.
I am unconvinced by the argument that the PGP’s public disclosure policy is the best response to the difficulties of safeguarding genomic confidentiality. In a recent review of this issue, Greenbaum et al provided an alternative perspective: ‘Another approach could be to learn from the legal and banking sectors wherein privacy and confidentiality are protected while the practitioners nevertheless manipulate and analyze large databases of highly confidential personal and financial data. Furthermore, private information is exchanged between many organizations ranging from large companies to small law firms. In those cases, incentives to keep clients, as well as governmental regulations with stiff penalties and civil and criminal repercussions, help to prevent breaches of customer privacy’.
This carrot-and-stick approach is not a 100 percent guarantee of good behaviour: when the incentives are strong enough, as is the case with insider trading, then individuals may break the law. But it does provide a significant measure of protection for individuals and corporations from the malicious abuse or careless disclosure of their confidential data, and, as importantly, it provides legislative support to the principle that such protection is a reasonable expectation.
In the era of big data, the governance of personal information requires the robust defence of such principles.
The following post appeared this week in Bionews and was co-authored with my colleague Michael Hopkins, senior lecturer in science and technology policy studies, University of Sussex.
Last week saw the conclusion of the long-running gene patent lawsuit known as AMP v. Myriad Genetics, which pitted the US biotech company Myriad Genetics against the Association for Molecular Pathology and others, including a group of cancer patients and campaign organisations supported by the American Civil Liberties Union (1).
At stake was the patentability of isolated DNA sequences, such as those claimed in the patents issued by the US Patent and Trademark Office (USPTO) to Myriad and the University of Utah on the BRCA1 and BRCA2 genes. Those genes play an important role in determining the risk of breast cancer for a subset of women with a strong family history of the disease.
However, since the patents were granted in the late 1990s, Myriad has excluded other laboratories from offering testing services to patients, monopolising the US market from which most of its US$400 million in BRCA testing revenue is derived (2). The litigation, already attracting international media attention, was thrust even further into the spotlight when Hollywood star Angelina Jolie announced that she had undergone breast surgery after learning that she was at heightened risk of the disease as a result of her BRCA status. At the end of her media statement, the actress and activist expressed concern about the cost of BRCA testing in the USA and the fear that some women may have been unable to access the test as a consequence.
Critics of gene patents have long voiced this concern, as well as expressing fears that diagnostic monopolies limit different approaches to testing, which could detract from the quality of tests available, and that innovation will be hindered as the proliferation of patents creates thickets of intellectual property (3). While many companies hold patents on genes, and often develop products based on these without attracting much attention, the Myriad BRCA1/2 patents have been the lightning rod for concerns over gene patents for over a decade.
Over the years, the patents that Myriad’s monopoly relies upon have been widely challenged in patent offices and courts in the USA, Australia and Europe, and in some countries they have been weakened and liberally infringed by hospital laboratories. However, it is only the vigour of the opposition to Myriad’s tight grip on the US BRCA testing market that overcame prior court verdicts and forced the hearing of the case before the US Supreme Court (which had itself remanded the case to the lower Federal Circuit court once before).
In agreeing to hear the case, the Supreme Court chose to consider arguments on only a very specific point – the patentability of isolated gene sequences – which it ruled on earlier this month. Patent claims on gene sequences isolated and purified from their natural context have for decades been accepted by the USPTO as a novel composition of matter, provided that the sequence has a specific and substantial utility. This might mean that the sequence is useful for diagnostic or therapeutic purposes, for example. The ruling was therefore significant and much anticipated, as it was set to overturn long-standing accepted practice. And overturn established practice the Supreme Court judges duly did, in decisive fashion: 9-0 in favour of making merely isolated gene sequences patent-ineligible.
However, the opinion is equally clear that where a claimed nucleotide sequence (DNA or RNA) differs from the natural sequence, it may be considered patentable subject matter, subject of course to the other conditions of patentability: novelty of the subject matter, utility of the invention, sufficiency of the inventive step, and full disclosure of the invention. This leaves considerable latitude for the patenting of engineered genes, such as might be used to produce modified proteins and enzymes with improved characteristics for therapeutic use or bioprocessing, as well as synthetic DNA and RNA probes and primers useful for diagnostic testing.
How does this decision change the biotech industry or impact healthcare? The answer is probably that this case will be an anti-climax with little visible change, other than to Myriad’s share price. Indeed, in the days since the ruling, much attention has focused on this financial gauge of how much the market will change and whether Myriad’s monopoly will at last be broken in the USA (the patent position in other jurisdictions, of course, remains unchanged). Certainly, practitioners following the case expect that competing tests will now be launched, which explains the stock price decline that followed the decision. Yet Myriad has hundreds more patent claims still intact, so competitors will still have to proceed with some caution. In fact, the invalidated patents would have expired soon anyway, given the limits of patent term, so this is not a dramatic change in the patent landscape. Myriad also has an unrivalled database of BRCA1/2 sequence variants and patient histories, providing it with a significant advantage over other test providers when it comes to interpreting test results (4).
More dramatic is the sudden invalidation of hundreds, if not thousands, of patent claims similar in wording to those in the BRCA patents. These claims cover inventions in agricultural, industrial and health-related biotech sectors. Yet many of these patents will also expire in the near future. Indeed the type of broad patent claims on naturally occurring genes at stake in the Myriad case has been on the decline for over a decade, as the massive gene sequencing efforts by public and private organisations over the years have made it more difficult to meet patentability requirements (5). However, as noted above, it is still possible to patent non-naturally occurring sequences, and reading between the lines of the ruling, we can surmise it is still possible to obtain patent claims on a novel use of a naturally occurring molecule, be it a gene or a protein.
Finally, outside the USA, the ruling will have little immediate impact other than providing a (perhaps politically unwelcome) reminder of the relatively strong legal protections for the patentability of genes and other nucleotide sequences enshrined in national laws across Europe following the adoption of European Directive 98/44/EC. For further consideration of the likely continued impact of diagnostic innovation based on biomarker patents, see my previous post: Not the Myriad story.
SOURCES & REFERENCES
1) Myriad Supreme Court case history. SCOTUSblog
2) Baldwin, A. L. and R. Cook-Deegan (2013) Constructing narratives of heroism and villainy: case study of Myriad’s BRACAnalysis compared to Genentech’s Herceptin. Genome Medicine | 31 January 2013
3) Hopkins, M.M. and S. Hogarth (2012) Biomarker patents for diagnostics: problem or solution? Nature Biotechnology 30, 498–500 | 07 July 2012
4) Conley, J. (2013) ‘Myriad, Finally: Supreme Court Surprises by not Surprising’. Genomics Law Report | 18 July 2013
5) Graff G. et al. (2013) Not quite a myriad of gene patents. Nature Biotechnology 31, 404–410 | 08 May 2013
An invitation this week to take part in a discussion of the Myriad BRCA patents case on BBC television this Sunday has confirmed that the UK media are as fascinated by this story as their counterparts in the USA. Everyone loves a courtroom drama, and this case has drama in spades, at least for those with a passionate interest in the subject of gene patents. However, I venture to suggest that, whatever the outcome of the Myriad / ACLU suit, it will not be the year’s most important decision concerning diagnostic monopolies based on DNA patents. For that we have to turn our gaze across the Atlantic from the USA to the UK; from germline DNA to the somatic mutations present in cancer tumours; and from the US Supreme Court to the National Institute for Clinical Excellence (NICE).
NICE is not as well known as the US Supreme Court, but for the manufacturers of healthcare products its decisions are just as significant. Within its own domain – Health Technology Assessment – it has a global reputation, and the decisions it makes about whether or not to recommend coverage of new drugs and devices have influence far beyond the UK NHS. In recent years NICE has begun to pay far greater attention to the evaluation of diagnostic devices, and in 2011 it established the Diagnostics Advisory Committee as a focal point for its work in this area. Since then that committee has been involved in what I believe is going to be a landmark evaluation, with profound implications for the molecular diagnostics sector and for public healthcare systems.
The evaluation concerns the relative merits of a number of prognostic tests for breast cancer patients facing decisions about adjuvant treatment. There has been a proliferation of such tests in recent years and their exact intended uses vary – they may predict breast cancer recurrence, risk of metastasis and/or likely response to chemotherapy. Just as BRCA testing became an exemplar for genetic risk prediction, so these tests have become the poster child for molecular tumour profiling based on somatic DNA. Furthermore, the business model adopted by many of the firms entering this space is the same as that used by Myriad Genetics. The traditional in vitro diagnostics (IVD) sector has been a high-volume, low-margin business in which companies hold intellectual property (IP) in testing platforms and have not competed over biomarkers. But like Myriad Genetics, the leading companies in the breast cancer prognosis space (Agendia and Genomic Health) have emerged with a new business model based on exploiting IP in biomarkers and selling their tests not as kits, but as proprietary laboratory-developed tests (LDTs) delivered by the company’s own reference laboratory.
This business model may offer a number of commercial advantages: in the USA firms may sidestep the need for FDA approval (although Agendia’s MammaPrint test is FDA approved); the time to market may also be shorter, because technical validation of a test performed in one laboratory may be easier than development of a kit which needs to perform reliably in multiple laboratories; and finally, by creating a proprietary diagnostic monopoly, it may be possible to gain higher reimbursement rates. This last advantage is perhaps the most significant for hard-pressed healthcare systems in an era of fiscal austerity, and Genomic Health’s Oncotype Dx breast cancer test exemplifies the trend – their 10-K annual report filed in March 2013 states that the list price of the test is $4,290.
While other poster children for personalized medicine have yet to garner widespread clinical acceptance, breast cancer prognostic tests are in growing use. Genomic Health has had significant success – according to their 2012 report they have achieved insurance coverage for 90% of women with node-negative invasive breast cancer in their domestic US market. Increasingly the company is looking for growth overseas – the 2012 report states that they are now providing testing to patients in over 70 countries. However, the report also states that they anticipate that “it will take several years to establish broad coverage and reimbursement” outside the USA. NICE approval would be a significant milestone in their global ambitions.
Three points arising from NICE’s draft decision are worthy of note:
1) Genomic Health has sought to differentiate itself from its main rival Agendia by predicting likely benefit from chemotherapy as well as likelihood of recurrence. NICE has rejected the data on chemotherapy benefit as “not robust enough”. This decision seems to confirm the profound challenges associated with demonstrating the utility of using molecular technologies to stratify patients based on their likely response to treatment.
2) The NICE draft decision to recommend Oncotype is based on a narrowing of the population for which the test would be used, suggesting that it will only be cost-effective “… in people at intermediate risk of distant recurrence where the decision to prescribe chemotherapy remains unclear … “ In other words NICE deem that it will not be cost-effective to use the test on patients who are classed as at either low or high risk of recurrence using existing protocols.
3) The cost-effectiveness even in this narrower indication is predicated on a discounted price offered to NICE by Genomic Health late last year. The price quoted in the NICE evaluation is £2,580, whereas the sterling equivalent of the current US list price of $4,290 is £2,842 (based on a current exchange rate of 1 USD = 0.662522 GBP), so even the initial UK price is lower than the company’s current list price; on top of that, Genomic Health have now offered a further discount. How large that discount might be is not known; such information is always treated as commercially confidential by NICE (what level of discounting the company commonly offers to US insurers is also unknown).
Whatever the rate of the discount, it seems safe to conclude that the price will still be considerably greater than the cost of running one of the other tests under consideration – the in-house NHS option of a combination of four markers called IHC4. The NICE evaluation costed IHC4 at £150 per test. One HTA expert I spoke to looked at the draft NICE decision and, based on some rough calculations, suggested the Genomic Health discount might be as little as £100, giving an NHS price for Oncotype Dx of £2,480. But let us be more generous by a factor of ten and imagine a discount of £1,000 (based on an assumption that winning NICE approval is strategically important to Genomic Health as a stepping stone to market penetration in Europe) – we are still left with a test which costs more than ten times as much as the in-house NHS equivalent.
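The back-of-envelope arithmetic above can be checked in a few lines of Python. The inputs are the figures quoted in this post (the $4,290 list price, the 0.662522 exchange rate, the £2,580 NICE price and the £150 IHC4 cost); the £1,000 discount is the hypothetical “generous” scenario, not a figure disclosed by either Genomic Health or NICE.

```python
# Back-of-envelope check of the Oncotype Dx pricing figures quoted above.
US_LIST_PRICE_USD = 4290        # list price per Genomic Health's 10-K
USD_TO_GBP = 0.662522           # exchange rate quoted in the post
NICE_PRICE_GBP = 2580           # price used in the NICE evaluation
IHC4_COST_GBP = 150             # NICE costing of the in-house NHS test

# Sterling equivalent of the current US list price
gbp_equivalent = US_LIST_PRICE_USD * USD_TO_GBP
print(f"US list price in sterling: GBP {gbp_equivalent:,.0f}")

# Two discount scenarios: the modest GBP 100 suggested by one HTA expert,
# and the hypothetical "generous" GBP 1,000
for discount in (100, 1000):
    nhs_price = NICE_PRICE_GBP - discount
    ratio = nhs_price / IHC4_COST_GBP
    print(f"Discount GBP {discount}: NHS price GBP {nhs_price:,} "
          f"({ratio:.1f}x the cost of IHC4)")
```

Even under the generous scenario, the discounted price (£1,580) remains over ten times the £150 cost of IHC4.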
The key advantage which Oncotype Dx enjoys over IHC4 is evidence. The UK test has been in development for a relatively short time, and although NICE deem it promising, it lacks the cumulative weight of evidence from multiple studies which supports the use of Oncotype Dx. What this suggests is that in the era of proprietary combinations of biomarkers, significant first-mover advantage is gained by being early to build a clinical evidence base. Furthermore, such advantage can gain its own momentum: the two leading tests – MammaPrint and Oncotype Dx – are now benefiting from large publicly-funded trials to test their utility (MINDACT and TAILORx respectively). What seem to have been missing up until now are comparative head-to-head studies of rival technologies. This is set to change with the launch in the UK of the OPTIMA trial, a study which may give IHC4 the chance to prove its worth against its commercial counterparts.
So what does all this boil down to? Let us return to the Myriad Genetics BRCA story. When Myriad threatened the UK government with litigation if it did not respect its patents, there was a groundswell of opposition and BRCA testing remained an NHS service. With similar outcomes in Canada and much of the rest of Europe, it seemed that the blockbuster diagnostic model based on biomarker patents might not be transferable beyond the unique healthcare system of the USA. If NICE hold to their draft decision on Oncotype Dx, then that assumption will have been blown apart. UK patients will have access to a technology commonly viewed as at the cutting edge of personalised medicine, but the broader ramifications of such transnational diagnostic outsourcing for a cash-strapped NHS increasingly under threat of partial privatisation remain to be seen. Whether other diagnostics companies can benefit from this decision is also as yet unknown, although industry must be watching this case with great interest.
The question I want to end on is this: why is there so much academic and media attention focused on the Myriad BRCA lawsuit and yet practically zero interest in the NICE decision on Oncotype Dx?
[i] Garrison, L. and Finley Austin, M. (2006) ‘Linking pharmacogenetics-based diagnostics and drugs for personalized medicine’, Health Affairs, 25, 1281–1290.
Yesterday the UK government announced a £100M initiative to bring whole-genome sequencing into the National Health Service. Initial efforts will be focused on a research programme to sequence the genomes of 100,000 patients in two disease areas: cancer and rare diseases, and on the sequencing of a subset of participants in the UK Biobank. This initiative will also put in place a new bioinformatics framework to facilitate the linking of genomic and clinical data. All these are stepping stones towards sequencing the genomes of every NHS patient and storing that data in a national database (see The Observer).
It would seem that the UK public are being offered a grand bargain by the government: give us your DNA and we will revolutionise medicine. However, it is not clear that we need all this data (the UK Biobank already has 500,000 participants); it is not clear that genomics is going to revolutionise medicine (see previous post: The myth of the genomic revolution); and it is not clear that the government can deliver the massive IT infrastructure which would be required to make this work (the current government last year scrapped a national IT system for the NHS as unworkable – see The Guardian). Finally, public acceptance of this radical plan seems doubtful (see public comments on The Observer report for a taste of the response we might expect from many people).
So why is this all happening? Why, at a time of healthcare cuts, is £100M being taken out of the NHS budget for an initiative whose necessity, utility, technical feasibility and public acceptability are at best doubtful? My own view is that many leading proponents of the promised genomic revolution have grown impatient with the slow rate of clinical adoption and are now trying to bring genomics into the NHS through the back door, by collapsing the traditional distinction between research and clinical practice. Only when the entire population has become genomic research subjects, it is implied, will we have sufficient data to reveal the latent utility of clinical sequencing (and once we have that genomic data as research data, then we may as well use it in clinical decision-making). This is a solution which raises as many questions as it might hope to answer, pushing the question of clinical utility downstream whilst bringing to the fore equally intractable issues of public trust in the management of data privacy and the handling of unexpected findings of unknown clinical significance.
There is much more to be said on this topic but for now, I will simply point you in the direction of a fairly trenchant op-ed from two prominent US geneticists on the subject of the utility of whole genome sequencing: The value of your genome.
Way back in January I launched this blog with the news that I had just discovered that 23andme had filed for a patent on polymorphisms relating to Parkinson’s Disease (see Patently Unclear). In that post I asked various questions about where this patent might fit in their efforts to find a way to make money out of consumer genomics, and whether 23andme could reconcile that commercial drive with their intent to “democratise” genomics. I suggested that the company’s lack of public comment on their patent application was at odds with their democratic ambitions (and also at odds with public comments by co-founder Linda Avey on the evils of gene patenting). Well, now 23andme have finally gone public – they announced on Monday 28 May that they expected the patent to be issued the following day. The announcement (posted in the name of company co-founder Anne Wojcicki) is on their corporate blog The Spittoon. It has already attracted one hostile comment from a customer and it will be interesting to see how the debate now develops. In the meantime, I have posted my own comment (which is still awaiting moderation), but you can read it here:
As you acknowledge in your post, gene patenting is not without controversy. Certainly my experience suggests that this is an issue which attracts much attention – I blogged on your patent application way back in January and that post remains by far and away the most visited page on the Gene Values site.
The first comment on your post (from Arturo) clearly illustrates that your customer base may not be happy about your decision to file for this patent. In response you state that the patent was filed in order to facilitate work on a treatment for Parkinson’s and that 23andme “will not prevent individuals from getting access to information or prevent researchers from researching the target.” However, your patent application was wider than therapeutic applications covering risk prediction, diagnosis and prognosis. I have a number of questions:
1) If your intent was only to support therapeutic R&D then why does the patent cover diagnostic applications?
2) Will you try to prevent other companies selling Parkinson’s Disease tests for these polymorphisms, or will you seek licence fees from other companies selling such tests?
3) Given your company’s avowed mission to “democratise genomics”, what were the participants in the Parkinson’s Disease study told about the intended commercial exploitation of discoveries arising from the study, and did you ask them what their preferences were?
4) Given the controversy surrounding gene patenting why have you not invited discussion and debate on this issue?
If, as you frequently avow, 23andme wants to “democratise genomics”, then this is the kind of issue on which you should be seeking feedback from your customers and the broader polity. It’s not a very sophisticated definition, but my understanding of what a democracy should be is a system where everyone has their say, and where what they say counts for something. Trying to reconcile that with a corporate system of decision-making may prove as challenging as trying to develop a sustainable business model for consumer genomics, but if you really want to democratise genomics (rather than just commodifying it), then that is the task you face.
My colleague Brian Salter and I have a comment piece on the BioNews website today, in which we discuss the move by the UK government to establish a new advisory committee for emerging biomedical science. We pose a number of questions about the remit and scope of the new committee and conclude by suggesting that emerging science is not the only source of governance challenges in the sphere of biomedical innovation – established technologies also create policy problems, a point exemplified by the recent PIP breast implant scandal.
The previous post on this blog took issue with the view (expressed by a UK government advisory committee) that we are witnessing a “genomic revolution”. Paul Martin and I suggested instead that what we are witnessing is in fact a gradual process of incremental and additive change entirely consistent with the general trend in diagnostics innovation in the twentieth century.
In this post I want to elaborate on that view by way of commenting on a couple of things – a new report on personalised medicine from United Healthcare and an interview with Matt Posard, Illumina’s senior VP for Translational and Consumer Genomics.
The viral marketing of personalised medicine?
The UnitedHealthcare report is a very useful piece of research because it puts hard figures on the hype surrounding personalised medicine. The most interesting part of the document is the data on trends in testing – the volumes of tests used and the costs associated with that volume, all broken down by test type. The report uses three categories: infectious diseases, cancer, and inherited conditions/other. For me, the major take-home from these figures is confirmation that, in terms of test volume (defined in the report as the number of test procedures per 1,000 UnitedHealthcare members), the big growth area for the molecular diagnostics industry has not been the applications which have been attracting the greatest headlines (and the largest amount of ELSI research), i.e. companion diagnostics, susceptibility testing and rare disease genetics. Infectious disease testing far outstrips these applications.
In other words, the molecular diagnostics industry enjoys its status as the fastest growing sector of the diagnostics industry because it has found a quicker, cheaper and more accurate way of diagnosing infectious disease than traditional pathology techniques – thus far the viral (or microbial) genome has outpaced the human genome in the space of clinical practice. Since the report cites 1% growth rates for cancer testing but 9% growth rates for infectious disease testing and for inherited conditions/other testing, this is a gap which is likely to grow rather than diminish. The smart money (and when I say money, I mean public and private investment) should be on that continuing to be the case for some time to come, not least because of the huge growth potential for DNA-based infectious disease testing in some of the rising powers such as Brazil and India.
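The compounding effect behind that claim is easy to illustrate. In the sketch below, only the growth rates (1% for cancer testing, 9% for infectious disease testing) come from the report; the starting volumes are made-up figures purely for illustration, since the report’s absolute numbers are not reproduced here.

```python
# Illustration: a 1% vs 9% annual growth differential widens the volume gap
# over time. Starting volumes below are hypothetical; only the growth rates
# are taken from the UnitedHealthcare report.

def project(volume: float, annual_growth: float, years: int) -> float:
    """Compound a test volume forward at a fixed annual growth rate."""
    return volume * (1 + annual_growth) ** years

# hypothetical tests per 1,000 members at year 0
cancer_start, infectious_start = 10.0, 30.0

for years in (0, 5, 10):
    cancer = project(cancer_start, 0.01, years)
    infectious = project(infectious_start, 0.09, years)
    print(f"Year {years:2d}: infectious/cancer volume ratio = "
          f"{infectious / cancer:.1f}")
```

Whatever the starting volumes, the ratio between the two categories grows every year the differential persists, which is why the gap is likely to widen rather than close.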
What, you might ask, has this got to do with personalised medicine? Are patients suffering from infectious diseases now being stratified according to the genetic profile of their infection? The answer for the most part is no. There are exceptions, such as Hepatitis C where treatment selection is guided in part by viral genotype (but for the limitations of this approach see section 4.7 of a recent UK guideline) and HPV testing, which discriminates between low and high-risk strains of HPV (but, as noted in a previous post, in cervical cancer screening cytology remains the primary diagnostic modality not DNA testing). So for the most part the only strain is the mental effort required to try and fit DNA-based infectious disease testing into our common understanding of what personalised medicine might mean. We need to be very careful not to bundle infectious disease testing together with other categories of testing to put a global figure on the growth in personalised medicine. This is a move which would grossly exaggerate how significant the field is now and how quickly it is likely to grow. Personalised medicine, understood as the use of genomic (or proteomic/metabolomic) data to stratify the care of patients, remains a niche market and is likely to remain so for some time to come.
So has DNA-based infectious disease testing got anything to do with personalised medicine? This type of application has provided molecular diagnostics companies with a large and (relatively) risk-free market allowing them to develop platform technologies which generate revenue streams which in turn can help to support investment in higher-risk applications like pharmacogenetics. Infectious disease testing is also an application which is likely to drive investment in point-of-care molecular diagnostics, which, again might pave the way for innovation in areas like companion diagnostics where turnaround time is cited as one issue deterring clinical uptake. So there are important interconnections, but those do not mean that DNA-based infectious disease testing is personalised medicine and we should not collapse the two when putting together statistics about the growth of personalised medicine.
Illumina – redefining the consumer in consumer genomics
The other type of hype which I want to address in this (lengthy) post consists of two linked ideas: the proposition that consumers are going to drive growth in personalised medicine by demanding access to their personal genomes; and the idea that in the future we will all have our genomes sequenced. I have argued elsewhere that the direct-to-consumer genetics business model is unproven and that many companies have moved away from it because it is unlikely to prove profitable, and my last post (with Paul Martin) questioned the vision of a future where we all have our genome sequences lodged in the healthcare system, ready for accessing every time a doctor needs to treat us. These issues are illuminated (no pun intended) by the GenomeWeb publication Clinical Sequencing News, which this week carries an interview with Matt Posard of Illumina, the industry leader in genome sequencing technologies.
Like Affymetrix (the microarray company it has begun to overshadow) Illumina has moved into clinical applications. Some years ago Affy set up a CLIA-certified lab but they subsequently sold it to Navigenics; it remains to be seen whether Illumina manage the transition from research tools manufacturer to clinical service provider with greater success.
The Posard interview is interesting because it reveals the caution and conservatism with which Illumina is approaching this space. In key respects this is a case of business as usual, not a revolution in healthcare. For instance for Illumina, the consumer is not a member of the public but a doctor or pathologist:
“The mission really is to enable genomics-based healthcare. The way we intend to do that is not just look at what we sell as an instrument and a set of consumables, but [also] the report that a physician is going to look at. So our primary customer, if you will, is going to be clinical geneticists as well as pathologists because it’s those groups that will ultimately sign off on the report.”
Posard does envisage some kind of consumer market for the general public, but its limits are revealing: “If it’s someone that’s struggling with an undiagnosed disease, it’s always going to go through a physician, without question.” That is a statement which sits uneasily with the heady rhetoric which demands unmediated access to the genome as a fundamental right. It moves the terrain of discussion onto the ground where I would suggest most stakeholders, including most of the industry, sit: some things can be sold DTC, other things should be ordered through a physician. Deciding where to draw the line provokes disagreement, but the need for a line has broad support, not least because the unfettered market would be one in which industry bore significant risks, as Posard makes clear:
“The risk or the responsibility that’s on Illumina and the other providers is [that], when somebody is exploring their own genome, they will find markers that have risk predisposition for different diseases. Making sure those results are provided responsibly and, in some cases, with professional counseling or support to help that individual through that information is the responsibility of the provider of those products.”
My colleague Michael Hopkins wrote a few years ago about the issue of commercial risk management in the genomics industry and his paper remains highly salient. Michael was also lead author on a highly influential paper called “The Myth of the Biotech Revolution” which countered biotech hype with a sober assessment of the scale and pace of change in the biopharmaceutical industry. The Posard interview is similarly revealing for his caution about how quickly genome sequencing will become a routine part of patient care. He begins with the classic figure of five years down the line, a timespan frequently invoked by those who promote expectations around genomic technologies. But Posard ascribes that optimistic vision to other commentators; he is rather more pessimistic in the timeline he envisages: “I think for my children and their generation, particularly, for them and for their kids, clinical sequencing and whole-genome [sequencing] are going to become standard of care.”
Given that Illumina are fighting off a takeover bid by Roche, the politics of expectations management are probably even more complicated than usual in this case. Nevertheless, it is a strikingly pessimistic assessment of the pace of change. Posard makes a series of comments, about the need to demonstrate clinical utility, and to have technologies which work within the clinical laboratory setting, which demonstrate a keen awareness that the rapid pace of technological progress in sequencing technology is moving on a timeline quite different to those which govern the clinical adoption of new diagnostics.
This acceptance that gene sequencing must come to an accommodation with the challenges of the existing diagnostics innovation system is reflected in Posard’s view of the FDA. He suggests that NGS products will transform the regulatory paradigm, but he also explains that regulatory approval by the FDA is central to Illumina’s move into clinical applications: “Our diagnostic business unit is in routine discussions with the FDA to ensure that we get the proper labels for the various products that we’ve been talking about,” Posard says, and he suggests that, in large part, products like the MiSeq platform can be accommodated within the current regulatory system.
What we see then, is the same tension which Paul Martin and I highlighted in the report from the Human Genomics Strategy Group: a transformative vision of a future in which we will all have our genomes sequenced is at odds with a more pragmatic acceptance of the need to demonstrate clinical utility and the need to work within existing paradigms, whether it be FDA approval or physician-led (rather than consumer-driven) healthcare. How are these to be reconciled? For many proponents of the genomic revolution, the mechanism which now appears to be favoured is a collapse of the distinction between research and clinical practice. Only when we have turned the entire population into genomic research subjects, it is argued, will we have sufficient data to reveal the latent utility of clinical sequencing. This is a solution which raises as many questions as it might hope to answer, pushing the question of clinical utility downstream whilst bringing to the fore equally intractable issues of public trust in the management of data privacy and the handling of unexpected findings of unknown clinical significance. But that is a discussion for another post.
Hopkins, M.M. and Nightingale, P. (2006) ‘Strategic risk management using complementary assets: Organizational capabilities and the commercialization of human genetic testing in the UK’, Research Policy, 35(3), 355–374.
Hopkins, M.M. et al. (2007) ‘The myth of the biotech revolution: An assessment of technological, clinical and organisational change’, Research Policy, 36(4), 566–589.