Blog

Why should we care about root causes?

So, there’s been an accident. Let’s patch everyone up and fix the bollard. Why do we care about how the accident happened? One of the reasons I enjoy training people is the questions they ask. Every time I run training, I get at least one question that really makes me think. And often, the question is surprisingly simple – on the surface at least. One of the areas I regularly train organisations on is root cause analysis methods and how issue management should link back to risk management. I presented on this topic at SCOPE Europe last year. So how intriguing it was at a recent training to get a question which I had not really considered in any depth before: why do we need root causes of an issue?

The stock answer is that knowing the root causes helps you to focus on those to try to reduce the likelihood of such issues recurring in the future. It means you focus on the issue at its fundamentals rather than just treating the symptoms. It is here that the realisation hit me – we are actually determining root causes primarily so we can reduce the risk of future issues. If we were not concerned about the risk of the issue recurring, then there would be little point in spending time trying to get to root causes. And if it is about reducing the risk, then it is not just about the likelihood of the issue recurring. It could also be about the impact and possibly the detectability. We evaluate risks based on these three, after all: likelihood, impact and detectability. For a traffic accident, if the root cause was that a child’s ball had rolled into the road and a car had swerved to avoid the child, hitting the bollard instead, then we could:

      • Erect a fence next to the play area to stop balls going into the road (and children following them) – reducing likelihood
      • Reduce the speed limit near the play area to reduce the likelihood of serious injury – reducing impact
      • Erect motion sensors in the play area and link them to a flashing warning sign for road users – to improve detectability

Thinking of a clinical trial example: If the issue is that very few Adverse Events (AEs) are being reported from a particular site, and the root cause is determined to be a lack of site understanding of AE reporting requirements, then to reduce the risk we could:

      • Work with the site to make sure they understand the reporting requirements (to reduce the likelihood)
      • Review source data and raise queries where AEs should have been reported but were not (to reduce the impact)
      • Monitor the Key Risk Indicator for AEs per participant visit at a greater frequency for that site to see if it picks up (to improve detectability)

You may do one or more of these. In risk terms, you are trying to reduce the risk by modifying one or more of likelihood, impact and detectability. And, of course, you might decide to take these actions across all sites and even in other studies.
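To make the arithmetic behind this visible, here is a minimal sketch in Python. It assumes a simple 1–5 score for each dimension (with detectability scored so that 5 means "hard to detect") and a combined score formed by multiplication. The scales, numbers and formula are illustrative assumptions only – GCP does not prescribe a scoring scheme.

    # Illustrative sketch: scoring a risk on likelihood, impact and detectability.
    # The 1-5 scales and the multiplied score are assumptions for illustration only.

    def risk_score(likelihood: int, impact: int, detectability: int) -> int:
        """Each dimension scored 1 (low) to 5 (high).
        Detectability is scored so that 5 = hard to detect.
        A higher combined score means a higher-priority risk."""
        return likelihood * impact * detectability

    # The AE under-reporting risk at the site, before any further action
    before = risk_score(likelihood=4, impact=4, detectability=4)  # 64

    # After the three actions above:
    #  - working with the site on reporting requirements -> likelihood 4 -> 2
    #  - source data review and queries raised           -> impact 4 -> 2
    #  - more frequent review of the AEs-per-visit KRI   -> detectability 4 -> 2
    after = risk_score(likelihood=2, impact=2, detectability=2)   # 8

    print(f"Risk score before actions: {before}, after: {after}")

Whichever scoring scheme your organisation uses, the point is the same: each action works on a different dimension of the risk, and together they bring the overall evaluation down.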

And it brings me back to that thorny problem of corrective actions and preventive actions. Corrective actions work on reducing the risk of the issue recurring – whether by reducing the likelihood, reducing the impact and/or improving detectability. If that is so, what on earth are preventive actions? Well, they should be about reducing the risk of issues ever happening – by building quality in from the start. Before a clinical trial starts, GCP requires that a risk assessment is carried out. And as part of the risk assessment, risks are evaluated and prioritised. The additional risk controls that are implemented before the start of the trial are true preventive actions.

It is unfortunate that GCP confuses the language by referring to corrective actions and preventive actions in relation to issue management rather than showing how they relate to risk. And from the draft of E6 R3, it appears that will not be fixed. ISO 9001 fixed this with the 2015 version. Let’s hope that one day, we in clinical trials, can catch up with thinking in other industries and not continue to confuse people as we do now.

As so often, we should ask the “why” question to get to a deeper truth – as encouraged by Taiichi Ohno. And I was very grateful to be reminded of this as part of a training program I was providing.

I have modified my training on both issue and risk management to show better how the two are intricately linked. Is your organization siloing issues and risks? If so, I think there is a better way.

No children, animals or balls were harmed in the writing of this blog post.

 

Text: © 2024 Dorricott MPI Ltd. All rights reserved.

Image: © 2024 Keith Dorricott

Contingencies: Time to Take out this Tool from the RBQM Toolbox

When evaluating risks in clinical trials, people normally evaluate the likelihood, impact and detectability. This closely follows the guidance in ICH E6 R2. For example, perhaps there is an assessment made by the investigators of the morphology of the eye, and a relatively new rating scale is used to assess this. A risk might be “inconsistency in applying the rating scale by investigators due to the unusual rating scale might lead to an inability to assess a key secondary endpoint.” The study team might decide that this is likely to happen, that the impact would be high if it did, and that detecting it during the study would be difficult. This risk would score high for all three dimensions and end up as one of the high risks for the study.

The next step in risk management is to look at the high risks and consider how they can be further controlled (or “mitigated”). I teach teams to look at the component risk scores for likelihood, impact and detectability and consider how/whether each of them can be influenced by additional controls.

To reduce likelihood, for example:

    • Protocol changes (e.g. use a more common scale or have a maximum number of participants per site)
    • Improved training including an assessment (but not “retraining”!)
    • Increasing awareness (e.g. with reminders and checklists)

And to improve detectability (and reduce its score), for example:

    • Implement additional manual checks (e.g. site or remote monitoring)
    • Close monitoring of existing or new Key Risk Indicators (KRIs) – see the sketch after this list
    • Computational checks (e.g. edit checks in EDC)
    • Use of central readers
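Since KRIs come up so often as a detection control, here is a minimal sketch in Python of what monitoring one could look like. The site figures, the fixed threshold and the flagging rule are all assumptions for illustration – real implementations would typically use statistical limits and trending rather than a single cut-off.

    # Illustrative sketch: flag sites whose AEs-per-participant-visit KRI looks unusually low.
    # All figures and the threshold are invented for illustration.

    site_data = {
        # site_id: (AEs reported, participant visits completed)
        "Site 101": (24, 120),
        "Site 102": (18, 100),
        "Site 103": (2, 110),   # suspiciously low reporting
    }

    THRESHOLD = 0.10  # AEs per visit below this triggers a review (assumed value)

    for site, (aes, visits) in site_data.items():
        kri = aes / visits if visits else 0.0
        flag = "REVIEW" if kri < THRESHOLD else "ok"
        print(f"{site}: {kri:.2f} AEs per visit [{flag}]")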

But what of the impact dimension? Are there any additional controls that might be able to reduce the impact? Here we need to think more about impact. As issues emerge, they rarely start with their maximum impact. For example, if there is a fire in an office building, it takes time before the building is burnt to the ground. There are actions that can be taken after the emergence of an issue to reduce the overall impact. For a fire in an office building, examples of such actions are: having fire extinguishers available and people trained to use them, having clearly signed fire exits and people who have practiced exiting the building through regular fire drills, and having fire alarms that are regularly tested. These are actions that are put in place before the issue emerges so that, when it does, they are ready to implement and can reduce the overall impact. They are contingencies.

As I work through possible additional controls with teams, they typically look at the impact and decide there is no way they can affect it. For some risks that might be true but often there are contingencies that might be appropriate.

To reduce the impact, the following are example contingency actions:

    • Upfront planning for how to manage missing datapoints statistically
    • Planning for the option of a home healthcare visit if an on-site visit is missed
    • Preparing to be able to ship investigational product direct to patients if a pandemic strikes
    • Back-up sites

In our risk example “inconsistency in applying the rating scale by investigators due to unusual rating scale might lead to an inability to assess a key secondary endpoint,” the impact is the inability to assess a key secondary endpoint. But if we detect this emerging issue early enough, are there any actions we could take (and plan for upfront) that could help stop that maximum impact from being realised? Maybe it is possible to take a picture that could be assessed at a later point if the issue emerges? Or there could be remedial training prepared in case it appears that an investigator’s assessments are too variable?

Of course, not all risks need additional controls. But contingencies are worth considering. In my experience, contingencies are a tool in the risk management toolbox that is not taken out often enough. Perhaps by helping teams understand how contingencies fit into the framework of RBQM, we can encourage better use of this tool.

 

Text: © 2023 Dorricott MPI Ltd. All rights reserved.

Image: © 2023 Keith Dorricott

Is the risk of modifying RBQM in GCP worth it?

At SCOPE Europe in Barcelona earlier this month, I took the opportunity to talk with people about the proposed changes to section 5.0 of ICH E6 on Quality Management. People mostly seemed as confused as I was by some of the proposed changes. It’s great we get an opportunity to review and comment on the proposal prior to it being made final. But it is guesswork trying to determine why some of the changes have been proposed.

ICH E6 R2 was adopted in 2016 and section 5.0 was one of the major changes to GCP in twenty years. Since then, organizations have been working on their adoption with much success. Predefined Quality Tolerance Limits (QTLs) are one area that has received much discussion in industry and about which much has been written. And I have listened to and personally led many discussions on the challenges of implementation (including through the long-running Cyntegrity mindsON RBQM series of workshops which is nearing episode twenty this year!) So much time and effort has gone into implementing section 5.0 and much of it remains intact in the proposed revision to E6 R3. And there are some sensible changes being proposed.

But there are also some proposed changes that appear minor but might have quite an impact. I wonder if the risk of making the change is actually worth the potential benefit that is hoped for. An example of such a proposed change is the removal of the words “against existing risk controls” from section 5.0.3 – “The sponsor should evaluate the identified risks, against existing risk controls […]” We don’t know why these four words are proposed to be dropped in the revised guidance. But I believe dropping them could cause confusion. After all, if you don’t consider existing risk controls when evaluating a risk, then that risk will likely be evaluated as being very high. For example, there may be an identified risk such as “If there are too many inevaluable lab samples then it may not be possible to draw a statistically valid conclusion on the primary endpoint.” Collecting and analysing lab samples is a normal activity in clinical trials and there are lots of existing risk controls such as provision of dedicated lab kits, clear instructions, training, qualified personnel, specialised couriers, central labs, etc. If that risk is evaluated assuming none of the existing risk controls are in place, then I am sure it will come out as a high risk that should be controlled further. But maybe the existing risk controls are enough to bring the risk to an acceptable level without further risk controls. And there may be other risks that are more important to spend time and resource controlling.
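To put illustrative numbers on that (using the kind of 1–5 scales and multiplied score many risk tools use – an assumption on my part, not something the guideline specifies): evaluated against the existing controls, the likelihood of too many inevaluable samples might reasonably score 2, giving a combined score of, say, 2 × 4 × 3 = 24. Evaluate the same risk as if none of those controls existed and the likelihood jumps to 5, the score to 5 × 4 × 3 = 60, and a risk that was already adequately controlled suddenly looks like a top priority demanding further controls.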

We don’t know why the removal of these four words has been proposed and there may be very sound reasons for their removal. As someone experienced in helping organizations implement RBQM, and an educator and trainer, however, it is not clear to me. And I worry that a seemingly simple change like this may actually cause more industry confusion. It may divert time and resource away from the work of proper risk management and into process, system, and SOP updates. It may delay still further some of the laggards in implementing Risk-Based Quality Management (RBQM). Delaying implementation is bad for everyone, but particularly patients. They can end up on trials where risks are higher than they need to be, and they may not get access to new drugs as quickly because trials fail operationally (as their risks have not been properly controlled).

So my question is, is the risk of modifying RBQM in GCP worth it?

The deadline for comments on the draft of ICH E6 R3 has now passed. The guidance is currently planned for adoption in October 2024. I’ll be presenting my thoughts on the proposed changes at SCOPE in Florida in February.

Text: © 2023 Dorricott Metrics & Process Improvement Ltd. All rights reserved.

Picture: Neil Watkins

Are you asking the right questions?

I wrote recently about the importance of tuning up your KPIs every now and then (#KPITuneUp). When organizations ask me to review their Key Performance Indicators (KPIs), I ask them to provide the question the KPIs are trying to answer as well as the KPI titles. After all, if they are measuring something, there must be a purpose, mustn’t there? Perhaps surprisingly, people are generally taken aback that I would want to know this. But if you don’t know why KPIs are being collected and reported, how do you know whether they are doing what you want them to? This is one of the many things I’ve learned from years of working with one of the most knowledgeable people around on clinical trial metrics/KPIs – Linda Sullivan. Linda was a co-founder of the Metrics Champion Consortium (now merged with the Avoca Quality Consortium) and developed the Metric Development Framework which works really well for developing metric definitions. And for determining a set of metrics or KPIs that measure things that really matter rather than simply measuring the things that are easy to measure.

I have used this approach often and it can bring real clarity to the determination of which KPIs to use. One sponsor client I worked with provided me with lists of proposed KPIs from their preferred CROs. As is so often the case, they were lists of KPI titles without the questions they were aimed at answering. And the lists were largely very different between the CROs even though the same services were being provided. So, I worked with the client to determine the questions that they wanted their KPIs to answer. Then we had discussions with the CROs on which KPIs they had that could help answer those questions. This is a much better place to start the discussion because it automatically focuses you on the purpose of the KPIs rather than whether one KPI is better than another. And sometimes it highlights that you have questions which are actually rather difficult to answer with KPIs – perhaps because the data is not currently collected. Then you can start with a focus on the KPIs where the data is accessible and look to add others if/when more data becomes available.

As an example:

    • Key question: Are investigators being paid on time?
    • Originally proposed KPI: Number of overdue payments at month end
    • Does the proposed KPI help answer the key question? No. Because it counts only overdue payments but doesn’t tell us how many were paid on time.
    • New proposed KPI: Proportion of payments made in the month that were made on time
    • Does this new proposed KPI help answer the key question? Yes. A number near 100% is clearly good whereas a low value is problematic.

In this example, we’ve rejected the originally proposed KPI and come up with a new definition. There is more detail to go into, of course, such as what “on time” really means and how an inaccurate invoice is handled for the KPI calculation. And what should the target be? But the approach focuses us on what the KPI is trying to answer. It’s the key questions you have to agree on first!
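As a minimal sketch of how such a KPI might be calculated in practice (the record format, the 30-day "on time" rule and the choice to exclude inaccurate invoices are all assumptions for illustration – each organisation would define these for itself):

    # Illustrative sketch: proportion of payments made in the reporting period that were on time.
    # "On time" is assumed to mean paid within 30 days of an accurate invoice being received.

    from datetime import date

    payments = [
        # (invoice_received, paid_on, invoice_was_accurate)
        (date(2022, 11, 1),  date(2022, 11, 20), True),   # 19 days -> on time
        (date(2022, 10, 1),  date(2022, 11, 25), True),   # 55 days -> late
        (date(2022, 11, 10), date(2022, 11, 28), False),  # inaccurate invoice - excluded here
    ]

    ON_TIME_DAYS = 30

    eligible = [(received, paid) for (received, paid, accurate) in payments if accurate]
    on_time = sum(1 for (received, paid) in eligible if (paid - received).days <= ON_TIME_DAYS)

    kpi = on_time / len(eligible) if eligible else None
    print(f"On-time payment rate: {kpi:.0%}" if kpi is not None else "No eligible payments")

Note that choices like excluding inaccurate invoices (rather than, say, counting them as late) change what question the KPI actually answers – which is exactly why the key question has to be agreed first.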

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. If you have the key questions they’re trying to answer, that’ll be a help! #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image: rawpixel CC0 (Public Domain)

#KPITuneUp

Is it time your Vendor/CRO KPIs had a tune up?

As the late, great Michael Hammer once said in The Seven Deadly Sins of Measurement, “…there is a widespread consensus that [companies] measure too much or too little, or the wrong things, and that in any event they don’t use their metrics effectively.” Hammer wrote this in 2007 and I suspect many would think it still rings true today. What are Hammer’s deadly sins?

  1. Vanity – measuring something to make you look good. In a culture of fear, you want to make sure your KPIs are not going to cause a problem. So best to make sure they can’t! If you use KPIs to reward/punish then you’re likely to have some of these. The KPIs that are always green, such as percent of key team member handovers with handover meetings. Maybe the annualized percent of key staff turnover might not be so green.
  2. Provincialism – sub-optimising by focusing on what matters to you but not the overall goal. The classic example in clinical trials (which was in the draft of E8 R1 but was removed in the final version) is the race to First Participant In. Race to get the first one but then have a protocol amendment because the protocol was poorly designed in the rush. We should not encourage people to rush to fail.
  3. Narcissism – not measuring from the customer’s perspective. This is why it is important to consider the purpose of the KPI, what is the question you are trying to answer? If you want investigators to be paid on time, then measure the proportion of payments that are made accurately and on time. Don’t measure the average time from payment approved to payment made as a KPI.
  4. Laziness – not giving it enough thought or effort. Selecting the right metrics, defining them well, verifying them, and empowering those using them to get the most value from them all need critical thinking. And critical thinking needs time. It also needs people who know what they are doing. A KPI that is a simple count at month end of overdue actions is an example of this. What is it for? How important are the overdue actions? Maybe they are a tiny fraction of all actions or maybe they are most of them. Better to measure the proportion of actions being closed on time. This focuses on whether the process is performing as expected.
  5. Pettiness – measuring only a small part of what matters. OK, so there was an average of only 2 findings per site audit in the last quarter. But how many site audits were there? How many of the findings were critical or major? Maybe one of the sites audited had 5 major findings and is the largest recruiting site for the study.
  6. Inanity – measuring things that have a negative impact on behaviour. I have come across examples of trying to drive CRAs to submit Monitoring Visit Reports within 5 days of a monitoring visit, which led to CRAs submitting blank reports so that they met the timeline. It gets even worse if KPIs are used for reward or punishment – people will go out of their way to make sure they meet the KPI by any means possible. Rather than focus effort on improving the process and being innovative, they will put their effort into making sure the target is met at all costs.
  7. Frivolity – not being serious about measurement. I have seen many organizations do this. They want KPIs because numbers give an illusion of control. Any KPIs will do, as long as they look vaguely reasonable. And people guess at targets. But no time is spent on why KPIs are needed and how they are to be used. Let alone on training people in the skills needed. Without this, KPIs are a waste of resource and effort.

I think Hammer’s list is a pretty good one and covers many of the problems I’ve seen with KPIs over the years.

How well do your KPIs work between you and your CRO/vendor? Does it take all that effort to gather them ready for the governance meeting, only for them to get a cursory review before moving on to the next topic? Do you really use your KPIs to help achieve the overall goals of a relationship? Have you got the right ones? Do you and your staff know what they mean and how to use them?

Perhaps it’s time to tune up your KPIs and make sure they’re fit for 2023. Contact me and I’d be happy to discuss the approach you have now and whether it meets leading practice in the industry. I can even give your current KPI list a review and provide feedback. #KPITuneUp

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Robert Couse-Baker, PxHere (CC BY 2.0)

Enough is enough! Can’t we just accept the risk?

I attended SCOPE Europe 2022 in Barcelona recently. And there were some fascinating presentations and discussions in the RBQM track. One that really got me thinking was Anna Grudnicka’s on risk acceptance. When risks are identified and evaluated as part of RBQM, the focus of the team should move to how to reduce the overall risk to trial participants and the ability to draw accurate conclusions from the trial. Typically, the team takes each risk, starting with those that score the highest, and decides how to reduce the scores. To reduce the risk scores (“control the risk”), they can try to make the risk less likely to occur, to reduce the impact if it does occur (a contingency) or to improve the detection of the risk (with a KRI, for example). It is unusual for there to be no existing controls for a risk. Clinical trials are not new, after all, and we already have SOPs, training, systems, monitoring, data review, etc. There are many ways we try to control existing risks. In her presentation, Anna was making the point that sometimes it may be the right thing to actually accept a risk without adding further controls. She described how at AstraZeneca they can estimate the programming cost for an additional Key Risk Indicator (a detection method) and use this to help make the decision on whether to implement this additional risk control or not.

Indeed, the decision on whether to add further controls is always a balance. What is the potential cost of those controls? And what is the potential benefit? Thinking of a non-clinical trial example, there are many level crossings in the UK. This is where a train line crosses a road at the same level. Some of these level crossings have no gates – only flashing lights. A better control would be to have gates that stop vehicles going onto the track as a train approaches. But even better would be a bridge. But, of course, these all have different costs and it isn’t practical to have a bridge to replace every level crossing. So most level crossings have barriers. But for less well-used crossings, where the likelihood of collision is lower, the flashing light version is considered to be enough and the risk is accepted. The balance of cost and benefit means the additional cost of barriers is not considered worth it for the potential gain.

So, when deciding whether to add further controls, you should consider the cost of those controls and the potential benefits. Neither side of the equation may be that easy to determine – but I suspect the cost is the easier of the two. We could estimate the cost of additional training or programming and monitoring of a new KRI. But how do we determine the benefit of the additional control? In the absence of data, this is always going to be a judgement.
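As a purely hypothetical illustration of that judgement (the figures, and the idea of expressing the benefit as an expected reduction in the cost of the issue, are my own assumptions – not the method described in the presentation):

    # Illustrative sketch: weighing the cost of an additional control against a rough
    # estimate of its benefit. All figures are invented for illustration.

    control_cost = 8_000      # e.g. programming and ongoing review of a new KRI
    issue_cost = 150_000      # estimated cost if the issue occurs and runs its course
    p_issue_without = 0.10    # estimated probability of the issue without the new control
    p_issue_with = 0.04       # estimated probability with the new control in place

    expected_benefit = (p_issue_without - p_issue_with) * issue_cost  # 9,000

    if expected_benefit > control_cost:
        print(f"Control looks worthwhile: ~{expected_benefit:,.0f} benefit vs {control_cost:,} cost")
    else:
        print(f"Consider accepting the risk: ~{expected_benefit:,.0f} benefit vs {control_cost:,} cost")

In practice neither number is known with much precision, so this is a framing for the conversation rather than a calculation to be taken literally – but it makes the trade-off explicit.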

The important thing to remember is that not all risks on your risk register need to have additional controls. Make sure the controls you add are as cost-effective as possible and meet the goal of reducing the overall risk to trial participants and the ability to draw accurate conclusions from the trial.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – © Walter Baxter CC2.0

Knock, Knock! Who’s There?

Recently, in my street, a gas company was relining all the gas pipes under the road, and to each house. There was a safety problem, and it needed them to turn off the gas supply to each house one by one as they worked on each of the delivery pipes. Unfortunately, their method of communicating when your gas needed to be turned off was knocking on your door and talking to you. Perhaps during Covid lockdowns, this worked OK. But not now – they kept missing me, either because I was out or because I was on a work video conference. This led to frustrations on both sides and when they eventually got hold of me, they complained about how difficult I was to contact. I asked whether anyone had considered using a phone or perhaps dropping a message through the letterbox. “That’s a good suggestion!” the workman said. I pondered this for a while. This is a national company and they do this all the time. This “problem” must have come up before, surely? Why didn’t they have a standard process for contacting householders? They know all the addresses after all.

A really challenging area in process improvement is how to make changes stick. Processes invariably rely somewhere on people. And people get used to doing things the way they have always done them. So, to change that takes effort – change management. But after the improvement project is finished, what if people go back to doing things the way they always did? When thinking about the “Control” part of process improvement, you have to think carefully about how you can ensure the process changes stay in place. And how those responsible can get an early signal if they do not. If you don’t do this, the improvement may gradually be lost.

As I was leaving the house later that day, I bumped into the same workman. He told me to give them a call as soon as I was back so I could get the gas turned on again. He gave me a card with the phone number to call – a card designed to be dropped through someone’s letterbox if they were not at home, with a space to enter the start and end dates of the work and a phone number to contact. He had not thought to use it for its actual purpose! Presumably, at some point, someone had made a process improvement and introduced these cards, but the change had not stuck.

You might be wondering, in this example, how you could make sure the changes are permanent. Well, you could ask residents about the service and monitor the responses for the original issue repeating. You could audit the process. And you could monitor how many cards are reordered to give a signal as to whether they are being used at the expected rate.

Any changes you introduce to a process need to be effective. But if they work, you also want to make them permanent. Thinking about how you can make those changes stick is an important part of any process improvement project.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Pavel Danilyuk, pexels.com

Don’t let metrics distract you from the end goal!

We all know the fable of the tortoise and the hare. The tortoise won the race by taking things at a steady pace and planning for the end rather than rushing and taking its eye off the end goal. Metrics and how they are used can drive the behaviours we want but also behaviours that mean people take their eye off the end goal. As is often said, what gets measured gets managed – and we all know metrics can influence behaviour. When metrics are well-designed and are focused on answering important questions, and there are targets making it clear to a team what is important, they can really help focus efforts. If the target for the rejection rate of documents submitted to the TMF is no greater than 5% but the rate is tracking well above it, then effort can be focused on trying to understand why. Maybe there are particular errors such as missing signatures, or there is a particular document type that is regularly rejected. If a team can get to the root causes then they can implement solutions to improve the process and see the metric improve. That is good news – metrics can be used as a great tool to empower teams. Empowering them to understand how the process is performing and where to focus their effort for improvement. With an improved, more efficient process with fewer errors, the end goal of a contemporaneous, high quality, complete TMF is more likely to be achieved.

But what if metrics and their associated targets are used for reward or punishment? We see this happen with metrics when used for personal performance goals. People will focus on those metrics to make sure they meet the targets at almost any cost! If individuals are told they must meet a target of less than 5% for documents rejected when submitted to the TMF, they will meet it. But they may bend the process and add inefficiency in doing so. For example, they may decide only to submit the documents they know are going to be accepted and leave the others to be sorted out when they have more time. Or they may avoid submitting documents at all. Or perhaps they might ask a friend to review the documents first. Whatever the approach, it is likely to impact the smooth flow of documents into the TMF by causing bottlenecks. And these workarounds are being done ‘outside’ the documented process – sometimes termed the ‘hidden factory’. Now the measurement is measuring a process of which we no longer know all the details – it is different to the SOP. The process has not been improved, but rather made worse. And the more complex process is liable to lead to a TMF that is no longer contemporaneous and may be incomplete. But the metric has met its target. The rush to focus on the metric to the exclusion of the end goal has made things worse.

And so, whilst it is good news that in the adopted ICH E8 R1, there is a section (3.3.1) encouraging “the establishment of a culture that supports open dialogue” and critical thinking, it is a shame that the following section in the draft did not make it into the final version:

“Choose quality measures and performance indicators that are aligned with a proactive approach to design. For example, an overemphasis on minimising the time to first patient enrolled may result in devoting too little time to identifying and preventing errors that matter through careful design.”

There is no mention of performance indicators in the final version, nor of the rather good example of a metric that is likely to drive the wrong behaviour – time to first patient enrolled. What is the value in racing to get the first patient enrolled if the next patient isn’t enrolled for months? Or if a protocol amendment ends up being needed, leading to an overall delay in completing the trial? More haste, less speed.

It can be true that what gets measured gets managed – but it will only be managed well when a team is truly empowered to own the metrics, the targets, and the understanding and improvement of the process. We have to move away from command and control to supporting and trusting teams to own their processes and associated metrics, and to make improvements where needed. We have to be brave enough to allow proper planning and risk assessment and control to take place before rushing to get to first patient. Let’s use metrics thoughtfully to help us on the journey and make sure we keep our focus on the end goal.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – openclipart.org

And Now For Some Good News

It feels as though we need a good news story at the moment. And I was reading recently about the incredible success of the Human papillomavirus (HPV) vaccine. It really is yet another amazing example of the power of science. HPV is a large group of viruses that are common in humans but normally do not cause any problems. A small number of them, though, can lead to cancers and are deemed “high risk”. Harald zur Hausen isolated HPV strains in cervical cancer tumours back in the 1980s and theorised that the cancer was caused by HPV. This was subsequently proved right: in fact, we now think 99.7% of cervical cancers are caused by persistent HPV infection. This understanding, along with vaccine technology, led to the development of these amazing vaccines, which are, incredibly, as much as 99% effective against the high risk virus strains. And the results speak for themselves, as you can see in the graphic above. This shows the percentage of women at age 20 diagnosed with cervical cancer by birth year, and that the numbers have dropped dramatically as the vaccination rates have increased. Zur Hausen won the Nobel Prize for medicine for his fundamental work that has impacted human health to such a degree.

What particularly intrigued me about this story is that here in the UK, there has been public concern that the frequency of testing for cervical cancer (via the “smear test”) is being reduced – in Wales specifically. The concern is that this is about reducing the cost of the screening programme. The reduction in frequency from 3 to 5 years is scientifically supported, however, because the test has changed. In the past, the test involved taking a smear and then looking for cancerous cells through a microscope. This test had various problems. First, the smear may not have come from a cancerous part of the cervix. Second, as it involves a human looking through a microscope, they might miss seeing a cancerous cell in the early stages.

The new test, though, looks for the high risk HPV strains. If there is HPV present, it will be throughout the cervix and so will be detected regardless of where the sample is taken from. And it doesn’t involve a human looking through a microscope. But there is an added, huge, benefit. Detecting the high risk HPV strain doesn’t mean there is cancer – it is a risk factor. And so further screening can take place if this test is positive. This means that cancer can be detected at an earlier stage. Because the new test is so much better, and gives earlier detection, there is more time to act. Cervical cancer normally develops slowly.

In Risk-Based Quality Management (RBQM) in clinical trials, we identify risks, evaluate them, and then try to reduce the highest risks to the success of the trial (in terms of patient safety and the integrity of the trial results). One way to reduce a risk is to put a measurement in place. People I work with often struggle to understand how to assess the effectiveness of a risk measurement, but I think this cervical cancer testing gives an excellent example. The existing test (with the microscope) can detect the early stages of cancer. But the newer test can actually detect the risk of a cancer – it works earlier in the development cycle of the cancer and so detects with more time to act. And because of that, the test frequency is being reduced. The best measurements for risk provide plenty of time to take action in order to reduce the impact – in this case, cervical cancer.

This example also demonstrates another point: understanding the process (the cause and effect) means that you can control the process better – in this case by both eliminating the cause (via the HPV vaccine) and improving the measurement of the risk of cancer (via the test for high risk HPV strains). Process improvement always starts with process understanding.

Vaccines have been in our minds rather more than usual over the last couple of years. It is sobering to think of the number of lives they have saved since their discovery in 1796 by Edward Jenner.

 

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Image – Vaccine Knowledge Project https://vk.ovg.ox.ac.uk/vk/hpv-vaccine

Why Do Metrics Always Lie?

We’ve all come across the phrase “Lies, Damned Lies, & Statistics” which was popularised by Mark Twain in the nineteenth century. And we’re used to politicians using metrics and statistics to prove any point they want to. See my previous example of COVID test numbers or “number theatre” as Professor Sir David Spiegelhalter calls it. His critique, given to the UK Parliament, of the metrics used in the UK government’s COVID briefings is sobering reading. We’re right to be sceptical of metrics we see. But we should avoid moving from scepticism to cynicism. Unfortunately, because we see so many examples of the misuse of metrics, we can end up mistrusting all of them and not believing anything.

Metrics can tell us real truths about the world. Over 150 years ago, Florence Nightingale used metrics to demonstrate that more British soldiers were dying in the Crimean War from disease than from fighting. Her use of data eventually saved thousands of lives. Similarly, Richard Doll and Austin Bradford Hill demonstrated in 1954 the link between smoking and lung cancer. After all, science relies on the use of data and metrics to prove or disprove theories and to progress.

So we should be sceptical when we see metrics being used – we should especially ask who is presenting them and how impartial they might be. We should use our critical thinking skills and not simply accept them at face value. What question is the metric trying to answer? Spiegelhalter and others argue for five principles for trustworthy evidence communication:

    • Inform, not persuade
    • Offer balance but not false balance
    • Disclose uncertainties
    • State evidence quality
    • Pre-empt misinformation

If everyone using metrics followed these principles, then maybe we would no longer be talking about how metrics lie – but rather about the truths they can reveal.

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Image by D Miller from Pixabay