What could go wrong, right?
Next-generation sequencing (NGS) is at a pivotal point, with the cost of sequencing falling faster than even Moore's law would predict. Now, instead of gene-by-gene approaches, large sets of genes can be addressed in a single test. The ironically named 1,000 Genomes Project (actually testing 1,092 genomes; I guess they wanted to be overachievers?) completed in 2012, showing the immense power and potential of NGS technology.
Already, many molecular diagnostic laboratories have adopted NGS as the preferred technology for diagnosing an increasing number of diseases, and a great paper recently published in the European Journal of Human Genetics (with almost more authors than there are genes in Mycoplasma genitalium!) discussed the problems of implementing NGS in a laboratory.
The researchers concluded in their paper that "alternative diagnoses may infer a certain level of 'greediness' to come to a positive diagnosis in interpreting sequencing results."
The table from the paper clearly shows that patients 4 and 9 were exceptions to an otherwise general consistency between NGS laboratories: for patient 9, six CGCs reported a causal mutation in MYBPC3 and two did not. More critically, however, no two centers provided identical reports for all patients. Furthermore, no two centers had a similar responsibility chain, with three to five people involved from counselling through to sequencing.
Similarly, laboratories did have legal consent for NGS applications that target a limited set of genes, but not for exome- and genome-based diagnosis. A mixture of prognostic, predictive and diagnostic biomarkers was also included in the reports (confusing, I'm sure!). These data show that obtaining a high-quality diagnosis can no longer be the sole responsibility of clinical geneticists, but requires intensive interaction between laboratory specialists, clinical geneticists, other medical specialists and bioinformaticians/data analysts.
Maybe Sanger sequencing is still the gold standard?
In the paper, more than two laboratories still applied complementary Sanger-based sequencing in a routine diagnostic setting to assure sufficient coverage of all relevant regions (adding extra cost on top of the NGS run!). Indeed, after the NGS-based results were collected, CGC 8 'closed the gaps' and confirmed the presence of the MYBPC3 mutation. The authors concluded that in most cases variants will have been identified via Sanger sequencing, which is considered (at least for now) the gold-standard comparative technique.
Turning to the 'go-to' paper comparing NGS and Sanger sequencing (it has 42 citations, so it must be good!), the aptly named "Validation of Next Generation Sequencing Technologies in Comparison to Current Diagnostic Gold Standards for BRAF, EGFR and KRAS Mutational Analysis" spoke favourably of NGS, with two key findings from the comparison:
- NGS is reliable in detecting known standard-of-care mutations with good sensitivity and specificity within our small sample panel
- NGS called other mutations in EGFR, KRAS and BRAF that represent standard-of-care but were undetected by Sanger/q-PCR methods
So there is clearly some confusion in the industry, with much still to be done to validate and implement NGS in the laboratory.
But what does the College of American Pathologists (CAP) say about all this?
Perhaps CAP is best seen as the 'guiding light' in all things molecular. In another recent publication, the appropriately named 'CAP NGS Work Group' included a total of 18 laboratory accreditation checklist requirements for the analytic wet-bench process and the bioinformatics analysis process within the molecular pathology checklist (MOL). The NGS checklist items include new standards for:
- Quality assurance,
- Confirmatory testing,
- Exception logs,
- Monitoring of upgrades,
- Variant interpretation and reporting,
- …Okay, and about 20 other things, you get the picture!
To summarize it simply for you: ultimately, the essential performance characteristics that need to be determined during any NGS validation are:
- The analytic sensitivity and specificity,
- Accuracy (the degree of closeness of measurements to the actual [true] value),
- Precision (reproducibility and reliability), and
- Limit of detection (if applicable)
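To make those first three characteristics concrete, here is a minimal sketch (the site names and variant sets are entirely hypothetical, purely for illustration) of how a laboratory might compute sensitivity, specificity and accuracy by comparing NGS variant calls against a known reference truth set:

```python
# Minimal sketch: computing validation metrics by comparing NGS variant
# calls against a known reference ("truth") set. Site names are hypothetical;
# a real validation would key on genomic coordinates and alleles.

def validation_metrics(truth, called, all_sites):
    """Return (sensitivity, specificity, accuracy) over the assayed sites."""
    tp = len(truth & called)              # true variants correctly called
    fn = len(truth - called)              # true variants missed
    fp = len(called - truth)              # calls absent from the truth set
    tn = len(all_sites - truth - called)  # sites correctly left uncalled

    sensitivity = tp / (tp + fn)           # fraction of true variants detected
    specificity = tn / (tn + fp)           # fraction of negatives correctly uncalled
    accuracy = (tp + tn) / len(all_sites)  # overall closeness to the true values
    return sensitivity, specificity, accuracy

# Toy run: 10 assayed sites, 4 true variants, one missed call, one false call
all_sites = {f"site{i}" for i in range(10)}
truth = {"site0", "site1", "site2", "site3"}
called = {"site0", "site1", "site2", "site9"}

sens, spec, acc = validation_metrics(truth, called, all_sites)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # → 0.75 0.83 0.8
```

Precision would then be assessed by repeating the same runs (within-run and between-run) and checking the metrics are reproducible, while limit of detection asks at what variant allele frequency these numbers start to degrade.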
However, the work group noted that NGS validations reported in the literature have varied considerably in sample size (e.g., from ~20 to 80+ samples; what would Shapiro-Wilk think of this?!), reflecting that individual laboratories are on a validation ''learning curve.''
That being said, because it is not possible to validate every variant that could theoretically occur, CAP did recommend using a combination of a ''methods-based'' and an ''analyte-specific'' validation approach.
How do you do this analyte-specific validation?
Reference materials are available, including both plasmid-based approaches (SeraCare) and genomic DNA derived from cell lines (Horizon). As before, with some bacteria containing a mere 450 genes (Mycoplasma genitalium, for example), for me it's a no-brainer: you'd rather have human DNA than plasmids.
Looking to set up NGS in your laboratory but not decided on a platform yet? Check out my other post comparing the Illumina MiSeq, Ion Torrent PGM, 454 GS Junior and PacBio RS.