Prognosis – predicting the future – has long been central to medicine. Once a diagnosis is reached, it is usually possible to outline how things are going to unfold. In the era before effective treatment, diagnosis and prognosis were all doctors could really do. In his memoirs, Kenneth Lane, a GP in the 1930s, described his intimate familiarity with the course of various infectious diseases: with pneumonia, for example, there were telltale signs at 11 days that would enable him to predict whether a patient was going to survive or die.
In recent decades, medicine has been attempting to occupy a whole new territory, that of risk. The theory is seductive: if we can “diagnose” someone as being at high risk of developing a condition, it might be possible to prevent it happening.
The concept relies on three things: the accuracy of the “diagnosis”; the level of any risk identified; and the acceptability and efficacy of whatever strategies might be available to reduce it. Medicine’s adventures in risk assessment have thus far relied on clinical features: family history; demographics such as age and sex; lifestyle factors such as smoking or exercise; and biological markers like blood pressure, weight, glucose and cholesterol. Compare a patient’s profile with a large population data set and you get an estimate of their risk of developing certain conditions over time.
Faced with a physician saying they are at “high risk” of heart disease or diabetes, some people are inspired to change their lifestyle. Many others will start swallowing pills that promise protection. Underpinning it all is the age-old notion of prognosis. Being told one is “at risk” is heard as tantamount to a guarantee of trouble ahead.
The reality is that we are bad at this. Most of the millions of people currently being medically risk-managed will derive no benefit at all: either they were never going to develop the condition in question, or they developed it anyway despite preventative efforts. And a significant minority will suffer harm from side effects along the way.
The holy grail would be some method of dramatically improved effectiveness. Enter genomics. It is now both quick and relatively cheap to sequence an entire genome. With detailed knowledge of an individual’s genetic make-up, might a precise, personalised quantification of risk be possible?
The idea is not without foundation. Mutations in a couple of genes for DNA repair mechanisms can confer anywhere from a 50 to a 90 per cent lifetime risk of certain types of malignancy. But these account for only a tiny percentage of cancer cases. Most cancers, as with other major conditions like heart disease or dementia, are decidedly multifactorial. Numerous genes contribute to susceptibility and all interact in fiendishly complex ways with environmental and lifestyle factors. The risk estimates achieved by commercial genomics tests are no better (and may even be worse) than our current clinical judgements.
Our gimmick-bedazzled Health Secretary, Matt Hancock, underwent genomic testing last year, announcing his “shock” at learning he was at a “high risk” of prostate cancer. In fact, at 15 per cent his lifetime risk is not much higher than you’d predict simply from his being a man. He declared that genomics may have saved his life, as he would never now miss a prostate cancer screening appointment.
Leave aside the embarrassing fact that Hancock is unaware that prostate cancer cannot be screened for. More serious is his subsequent determination that the NHS should start offering genomic testing to all newborns. The idea of generating pseudo-predictive health information on people years before they can actually consent is an ethical minefield.
Even more dispiriting is the prospect of countless young people in the same predicament as Hancock: rendered shocked and anxious by their metrics; confused as to what, if anything, they mean; and feeling they should be able to do something about them, only they can’t.
This article appears in the 15 Jan 2020 issue of the New Statesman, Why the left keeps losing