Why should we care about root causes?

So, there’s been an accident. Let’s patch everyone up and fix the bollard. Why do we care about how the accident happened? One of the reasons I enjoy training people is the questions they ask. Every time I run training, I get at least one question that really makes me think. And often, the question is surprisingly simple – on the surface at least. One of the areas I regularly train organisations on is root cause analysis methods and how issue management should link back to risk management. I presented on this topic at SCOPE Europe last year. So how intriguing it was at a recent training to get a question which I had not really considered in any depth before: why do we need root causes of an issue?

The stock answer is that knowing the root causes helps you to focus on those to try to reduce the likelihood of such issues recurring in the future. It means you address the issue at its fundamentals rather than just treating the symptoms. It is here that the realisation hit me – we are actually determining root causes primarily so we can reduce the risk of future issues. If we were not concerned about the risk of the issue recurring, then there would be little point in spending time trying to get to root causes. And if it is about reducing the risk, then it is not just about the likelihood of the issue recurring. It could also be about the impact and possibly the detectability. We evaluate risks based on these three, after all: likelihood, impact and detectability. For the traffic accident, if the root cause was that a child’s ball had rolled into the road and a car had swerved to avoid the child, hitting the bollard instead, we could:

      • Erect a fence next to the play area to stop balls going into the road (and children following them) – reducing likelihood
      • Reduce the speed limit near the play area to reduce the likelihood of serious injury – reducing impact
      • Erect motion sensors in the play area and link them to a flashing warning sign for road users – to improve detectability

Thinking of a clinical trial example: if the issue is that very few Adverse Events (AEs) are being reported from a particular site, and the root cause is determined to be a lack of site understanding of AE reporting requirements, then to reduce the risk we could:

      • Work with the site to make sure they understand the reporting requirements (to reduce the likelihood)
      • Review source data and raise queries where AEs should have been reported but were not (to reduce the impact)
      • Monitor the Key Risk Indicator for AEs per participant visit at a greater frequency for that site to see if it picks up (to improve detectability)

You may do one or more of these. In risk terms, you are trying to reduce the risk by modifying one or more of likelihood, impact and detectability. And, of course, you might decide to take these actions across all sites and even in other studies.
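
To make this concrete, here is a minimal sketch of how the same risk might be scored before and after each type of action. It assumes an FMEA-style scoring model (rating likelihood, impact and detectability from 1 to 5 and multiplying them), which is only one of many ways to evaluate risk; the scores are invented purely for illustration.

```python
# A minimal sketch assuming FMEA-style risk scoring (not prescribed by the post).
# Each factor is rated 1 (best) to 5 (worst); the overall score is their product,
# so acting on any one factor reduces the overall risk.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int     # how likely the issue is to (re)occur, 1 (rare) to 5 (frequent)
    impact: int         # how severe the consequences would be, 1 (minor) to 5 (critical)
    detectability: int  # how hard it is to spot, 1 (easily detected) to 5 (hard to detect)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact * self.detectability

# Hypothetical scores for the AE under-reporting example above.
baseline       = Risk("AE under-reporting at site", likelihood=4, impact=4, detectability=4)
after_training = Risk(baseline.name, likelihood=2, impact=4, detectability=4)  # site training
after_review   = Risk(baseline.name, likelihood=4, impact=2, detectability=4)  # source review and queries
after_kri      = Risk(baseline.name, likelihood=4, impact=4, detectability=2)  # more frequent KRI review

for label, risk in [("baseline", baseline), ("site training", after_training),
                    ("source review", after_review), ("KRI frequency", after_kri)]:
    print(f"{label:>14}: risk score {risk.score}")
```

Whatever scoring model you use, the point is the same: any of the three factors is a legitimate lever for reducing the overall risk.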

And it brings me back to that thorny problem of corrective actions and preventive actions. Corrective actions work on reducing the risk of the issue recurring – whether it is reducing the likelihood, impact and/or improving detectability. If that is so, what on earth are preventive actions? Well, they should be about reducing the risk of issues ever happening – by building quality in from the start. Before a clinical trial starts, GCP requires that a risk assessment is carried out. And as part of the risk assessment, risks are evaluated and prioritised. The additional risk controls that are implemented before the start of the trial are true preventive actions.

It is unfortunate that GCP confuses the language by referring to corrective actions and preventive actions in relation to issue management rather than showing how they relate to risk. And from the draft of E6 R3, it appears that will not be fixed. ISO 9001 fixed this with its 2015 revision. Let’s hope that one day we in clinical trials can catch up with the thinking in other industries and stop confusing people as we do now.

As so often, we should ask the “why” question to get to a deeper truth – as encouraged by Taiichi Ohno. And I was very grateful to be reminded of this as part of a training program I was providing.

I have modified my training on both issue and risk management to show better how the two are intricately linked. Is your organization siloing issues and risks? If so, I think there is a better way.

No children, animals or balls were harmed in the writing of this blog post.

 

Text: © 2024 Dorricott MPI Ltd. All rights reserved.

Image: © 2024 Keith Dorricott

You’re Solving the Wrong Problem!

The basic idea behind continuous process improvement is not difficult. It’s the idea of a cycle – defining the problem, investigating, determining actions to improve, implementing those actions, and then looking again to see if there has been improvement. It’s the Plan-Do-Check-Act cycle of Shewhart and Deming. Or the DMAIC cycle of Six Sigma. It’s a proven approach to continually improving. But it takes time and effort. It takes determination. And it can easily be derailed by those who say “Just get on with it!” Much better to be rushing into implementation to show how you are someone of action rather than someone who suffers from “paralysis by analysis.” But a greater danger is to move into actions without taking time to analyse properly – or even to define the problem. It looks great because you’re taking action. But what if your actions make things worse?

Let’s take the example of HS2 in the UK. This is the UK’s second high-speed railway line. The cost is enormous and keeps going up. Building is underway and billions have been spent already. The debate continues as to whether it is worth all the money. During one of the many consultations, in 2011, I wrote to give my perspective. I had read the proposal and was shocked to see there was no problem defined. Here was an expensive solution without a clear definition of the problem it was designed to resolve. It talked about trains being overcrowded currently. If that was the problem, then was this the best solution? I suggested they take that problem and drill down some more – when are the trains crowded? Where? Why? And so on. Then see if they could come up with solutions. Preferably ones that don’t cost tens of billions. If they are overcrowded during commuting times, I suggested that perhaps people could be given a tax incentive to work from home. Which would have the added advantage of being better for the environment.

Of course, since then, we’ve had the pandemic. And many have been working from home. Trains have not been overcrowded. And many have found they rather like working from home. So while the case for HS2 was flimsy 10 years ago, it’s become transparently thin since then. And because they didn’t spend time defining the problem or analysing it, there is no obvious route to go back and re-evaluate the decision. Given the change of circumstances, is it still the right thing to do? We can’t answer because we don’t know the problem it is trying to solve.

I do find it odd that so many organisations (governments included) rush into implementing changes without taking time to define the problem and analyse it. I suspect motives such as vanity – “let’s implement this new, shiny thing because it’ll make me look good” – and wanting to be seen as someone of action. It is interesting that Taiichi Ohno, creator of the Toyota Production System that underpins Lean, used to get graduates to spend time just watching production. Afterwards, he would ask them what they saw and, if he didn’t think they had observed enough, he would get them to watch some more. Better to pause, observe, reflect and analyse than to go straight into actions that might actually make things worse.

For process improvement, make sure you understand the problem you’re trying to solve. Solving the wrong problem can be costly and wasteful!

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Picture: pxhere.com

Pareto: Focus Your Efforts

For some of my work with the Metrics Champion Consortium, I was looking at MHRA inspection finding categories. MHRA publish reports on their findings – the most recent is for the year 2017-2018. For major findings, 86% are within just 21% of the categories. If this is representative of the industry, then focusing our improvement efforts on the processes associated with those 21% of categories could have a disproportionate impact on findings in the future. This fits the pattern of the Pareto Principle.

The Pareto Principle was proposed by Joseph Juran, a 20th-century pioneer of quality improvement. He based it on an observation by the Italian economist Vilfredo Pareto, who noted that 80% of Italy’s land was owned by 20% of the people. The principle is that in any given situation, roughly 80% of the effect is due to 20% of the causes. It seems to work well in many fields, for example:

    • 20% of the most reported software bugs cause 80% of software crashes
    • It is often claimed in business that 80% of the sales comes from 20% of the clients
    • 20% of people account for 80% of all healthcare spending
    • Even in COVID-19, 80% of deaths have occurred among 20% of the population (65 and older)

The principle is sometimes called the 80:20 rule or the law of the vital few because it implies that if you can focus on the 20% and put effort into improving that, you can impact 80% of the results – having a disproportionate effect on the whole. It is regularly discussed in business and I once worked with a company which had the 80:20 rule as one of its guiding principles.

Davis Balestracci’s Data Sanity has a really interesting observation on the power of the Pareto Principle in process improvement. One mode of process improvement is taking the exceptional and trying to understand why it happened and to learn from it. So, if site contracts in one country take much longer than in others to finalise, you can focus on that country to understand why and to improve. Or, of course, you could take the country with the shortest cycle time and try to understand why, so you can spread “best practice”. This is the world of root cause analysis (RCA) & CAPA, and it can be effective in improvement. But what if the approach is over-used? For example, perhaps issues relating to the process of Informed Consent are regularly detected in site audits for clinical trials. If there are many such issues, then perhaps it would be better to look at them all together rather than take each one individually as its own self-contained issue. In other words, maybe there is a systemic cause that is not related to the individual sites or studies. If you took all the issues (findings) together, you could use the Pareto Principle. It’s likely that 80% of the effects seen are due to a small number of causes. Why not work to find out what they are and implement changes to the whole system that affect those? Then continue to measure over time to see if it’s worked. Isn’t that likely to get better results than lots of independent RCA & CAPA efforts, each of which only has a small part of the picture?

That does bring up the challenge of how you determine when one issue is similar to (or the same as) another. If you categorised all the issues in a consistent way, you’d likely see that around 80% of the observed issues come from 20% of the categories – the Pareto Principle in action. Just as we see from the MHRA data. It would be a good idea to focus process improvement on those 20% of categories.
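
As a rough illustration of what that categorised analysis might look like, here is a minimal Pareto breakdown in Python. The categories and counts are invented; the point is simply that sorting categories by frequency and tracking the cumulative percentage quickly shows the “vital few” worth targeting.

```python
# A minimal sketch of a Pareto breakdown of categorised findings.
# The categories and counts below are invented for illustration only.
from collections import Counter

findings = Counter({
    "Informed consent": 42,
    "Source data": 31,
    "IMP accountability": 18,
    "Delegation log": 6,
    "Training records": 5,
    "Other": 4,
    "Facilities": 3,
})

total = sum(findings.values())
cumulative = 0
print(f"{'Category':<22}{'Count':>6}{'Cum %':>8}")
for category, count in findings.most_common():
    cumulative += count
    print(f"{category:<22}{count:>6}{cumulative / total:>8.0%}")
# The top few categories account for the bulk of findings: the "vital few"
# to target with systemic process improvement.
```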

Next time you look to improve a process, make sure you use the Pareto Principle to help focus your efforts so you can have maximum effect.

Tip: Pronounce Pareto as “pah-ray-toh”

 

Text: © 2021 Dorricott MPI Ltd. All rights reserved.

Bringing Processes into Focus

I have recently been leading a process integration arising from a merger. The teams provided their many long SOPs and I tried to make sense of them – but with only minimal success. So, at the first meeting (web-based, of course), I said we should map the process at a high level (one page) for just one of the organisations. People weren’t convinced there would be a benefit but were willing to humour me. In a two-hour meeting, we mapped the process and were also able to:

  • Mark where the existing SOPs fit in the high-level process – giving a perspective no-one had seen before
  • Highlight differences in processes between the two organisations – in actual process steps, equipment or materials
  • Discuss strengths, weaknesses and opportunities in the processes
  • Agree an action plan for the next steps to move towards harmonisation

Mapping was done using MS PowerPoint. They loved this simple approach that made sure the focus of the integration effort was on the process – after all, to quote W. Edwards Deming, “If you can’t describe what you are doing as a process, you don’t know what you’re doing.” At a subsequent meeting, reviewing another process, one of the participants had actually mapped their process beforehand – and we used that as the starting point.

Process maps are such a powerful tool in helping people focus on what matters – without getting into unnecessary detail. They help people to come to a common perspective and to highlight differences to discuss. We also use them this way at the Metrics Champion Consortium where one of the really important outcomes from mapping is the recognition of different terminology used by different organisations. We can then focus on harmonising the terminology and developing a glossary of terms that we all agree on. This reduces confusion in subsequent discussions.

Process maps are really a great tool. They are useful when complete, but so much more benefit comes from a team of people with different perspectives actually developing them. They help to bring processes into focus. And can even help with root cause analysis. If you don’t use them, perhaps you should!

For those that use process maps, what do you find as the benefits? And the challenges?

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – PublicDomainPictures from Pixabay

What No-one Tells You About Root Cause Analysis

When something significant goes wrong, we all know that getting to the root cause is an important start to understanding and helping to prevent the same issue recurring. I’ve talked many times in this blog about methods of root cause analysis and, of course, I recommend DIGR-ACT®. But there are other methods too. The assumption with all these methods is that you can actually get to the root cause(s).

I was running process and risk training for the Institute of Clinical Research recently. The training includes root cause analysis. And one of the trainees gave an example of a Principal Investigator (PI) who had randomized a patient, received the randomization number and proceeded to pick the wrong medication lot for the patient: she should have selected the lot that matched the randomization number. This was discovered later in the trial when Investigational Product accountability was carried out by the CRA visiting the site. By this time, of course, the patient had potentially been put at risk and the results could not be included in the analysis. So why had this happened? It definitely seemed to be human error. But why had that error occurred?

The PI was experienced in clinical trials. She knew what to do. This error had not occurred before. There was no indication that she was particularly rushed or under pressure on that day. The number was clear and in large type. How was it possible to mis-read the number? The PI simply said she made a mistake. And mistakes happen. That’s true, of course, but would we accept that of an aeroplane pilot? We’d still want to understand how it happened. Human error is not a root cause. But if human error isn’t the root cause, what is?

Sometimes, we just don’t know. Root cause analysis relies on cause and effect. If we don’t understand the cause and effect relationships, we will not be able to get to true root causes. But that doesn’t mean we just hold up our hands and hope it doesn’t happen again. That would never pass in the airline industry. So what should we do in this case?

It’s worth trying to see, first, how widespread a problem this is. Has it happened before at other sites? On other studies? What are the differences between sites / studies where this has and has not happened? This may still not be enough to lead you to root cause(s). If not, then maybe we can modify the process to make the issue less likely to recur. Could we add a QC step, such as having the PI write the medication number down next to the randomization number? This should highlight a difference if there is one. Or perhaps they could enter the number into a system that checks it automatically. Or maybe someone else at the site has to check at the point of dispensing.
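
As a sketch of the “enter the number into a system so that it can check” option: the function and data below are hypothetical, not taken from any real IXRS or dispensing system, but they show how a simple automated comparison could catch a mismatch at the point of dispensing.

```python
# A hypothetical check of the selected kit against the randomization assignment.
# The data structure and function names are illustrative only.

ASSIGNED_KITS = {
    # randomization number -> medication kit expected for that participant
    "R-1024": "KIT-5531",
    "R-1025": "KIT-5532",
}

def confirm_dispensing(randomization_no: str, scanned_kit: str) -> bool:
    """Return True only if the scanned kit matches the assigned kit."""
    expected = ASSIGNED_KITS.get(randomization_no)
    if expected is None:
        print(f"{randomization_no}: no assignment on record - stop and escalate")
        return False
    if scanned_kit != expected:
        print(f"{randomization_no}: expected {expected}, got {scanned_kit} - do not dispense")
        return False
    return True

print(confirm_dispensing("R-1024", "KIT-5532"))  # mismatch -> blocked (False)
print(confirm_dispensing("R-1024", "KIT-5531"))  # match -> True
```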

A secret in root cause analysis that is rarely mentioned is that sometimes you can’t get to the root cause(s). There are occasions when you simply don’t have enough information to be able to get there. In these cases, whatever method you use, you cannot establish the root cause(s). Of course, if you do, it will help in determining effective actions to help stop recurrence. But without establishing root cause(s), there are still actions you can take to try to reduce the likelihood of recurrence.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

What’s Swiss Cheese got to do with Root Cause Analysis?

“There can be only one true root cause!” Let’s examine this oft-made statement with an example of a root cause analysis. Many patients in a study have been found at database lock to have been mis-stratified – causing difficulties with analysis and potentially invalidating the whole study. We discover that at randomization, the health professional is asked “Is the BMI ≤ 25? Yes/No”. In talking with CRAs and sites, we realise that at a busy site, where English is not your first language, this is rather easy to answer incorrectly. If we wanted to make it easier for the health professional to get it right, why not simply ask for the patient’s height and weight? Once those are entered, the IXRS could calculate the BMI and determine whether it is less than or equal to 25. This would be much less likely to lead to error. So, we determine that the root cause is that “the IXRS was set up without considering how to reduce the likelihood of user error.” We missed an opportunity to prevent the error occurring. That’s definitely actionable. Unfortunately, of course, it’s too late for this study, but we can learn from the error for existing and future studies. We can look at other studies to see how they stratify patients and whether a similar error is likely to occur. We can update the standards for IXRS for future studies. Great!
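
As a sketch of what that IXRS change might look like, here is some illustrative Python. The function names and threshold handling are assumptions rather than any real IXRS logic, but the principle is that the system derives the stratum from raw height and weight instead of asking the user to do the arithmetic.

```python
# A minimal sketch: derive the stratification arm from height and weight
# rather than asking a yes/no BMI question. Names are illustrative only.

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / (height_m ** 2)

def stratum(weight_kg: float, height_cm: float) -> str:
    """Return the stratification arm from the raw measurements."""
    return "BMI <= 25" if bmi(weight_kg, height_cm) <= 25 else "BMI > 25"

print(stratum(70, 175))  # BMI ~22.9 -> "BMI <= 25"
print(stratum(85, 170))  # BMI ~29.4 -> "BMI > 25"
```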

But is there more to it? Were there other actions that might have helped prevent the issue? Why was this not detected earlier? Were there opportunities to save this study? As we investigate further, we find:

  1. During user acceptance testing, this same error occurred but was put down to user error.
  2. There were several occasions during the study where a CRA had noticed that the IXRS question was answered incorrectly. They modified the setting in EDC but were unable to change the stratification as this is set at randomization. No-one had realized that this was a systemic issue (i.e. had been detected at several sites due to a special cause).

Our one root cause definitely takes us forward. But there is more to learn from this issue. Perhaps there are some other root causes too. Such as “the results of user acceptance testing were not evaluated for the potential of user error”. And “issues detected by CRAs were not recognised as systemic because there is no standard way of pulling out common issues found at sites.” These could both lead to additional actions that might help to reduce the likelihood of the issue recurring. And notice that actions on these root causes might also help reduce the likelihood of other issues occurring too.

In my experience, root cause analysis rarely leads to one root cause. In a recent training course I was running for the Institute of Clinical Research, one of the delegates reminded me of the “Swiss Cheese” model of root causes. There are typically many hazards, such as a user entering data into an IXRS. These hazards don’t normally end up as issues because we put preventive measures in place (such as standards, user acceptance testing, training). You can think of each of these preventive measures as a slice of Swiss cheese – it prevents many hazards becoming issues but won’t stop everything. Sometimes, a hazard can get through a hole in the cheese. We also put detection methods in place (such as source data verification, edit checks, listing review). You can think of each of these as an additional slice of cheese, which stops issues growing more significant but, again, won’t catch everything. It’s when the holes in each of the layers of prevention and detection line up that a hazard can become a significant issue that might even lead to the failure of a study. So, in our example, the IXRS was set up poorly (a prevention layer failed), the user acceptance testing wasn’t reviewed considering user error (another prevention layer failed), and CRA issues were not reviewed systematically (a detection layer failed). All these failures led to the study potentially being lost.
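
To put rough numbers on the Swiss cheese picture: if we assume each layer independently misses some fraction of hazards (the rates below are invented, and real layers are rarely fully independent), the chance of a hazard slipping through every layer is the product of those fractions, and removing a layer makes a noticeable difference.

```python
# A rough, illustrative model of layered defences. Miss rates are invented and
# the layers are assumed to be independent, which real layers rarely are.
from math import prod

layers = {
    "IXRS design review":      0.10,  # chance this layer misses the hazard
    "User acceptance testing": 0.20,
    "Site training":           0.30,
    "CRA issue review":        0.25,
}

p_through_all = prod(layers.values())
print(f"Chance a hazard slips through every layer: {p_through_all:.2%}")  # 0.15%

# Remove one layer (e.g. no systematic review of CRA-reported issues):
p_without_review = prod(rate for name, rate in layers.items() if name != "CRA issue review")
print(f"Without CRA issue review: {p_without_review:.2%}")  # 0.60%
```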

So if, in your root cause analysis, you have only one root cause, maybe it’s time to take another look. Are there maybe other learnings you can gain from the issue? Are there other prevention or detection layers that failed?

Do you need help in root cause analysis? Take a look at DIGR-ACT training. Or give me a call.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Oh No – Not Another Audit!

It has always intrigued me, this fear of the auditor. Note that I am separating out auditor from (regulatory) inspector here. Our industry has had an over-reliance on auditing for quality rather than on building our processes to ensure quality right the first time. The Quality Management section of ICH E6 (R2) is a much-needed change in approach. And this has been reinforced by the draft ICH E8 (R1): “Quality should rely on good design and its execution rather than overreliance on retrospective document checking, monitoring, auditing or inspection”. The fear of the auditor has led to some very odd approaches.

Trial Master File (TMF) is a case in point. I seem to have become involved with TMF issues and improving TMF processes a number of times in CROs, and more recently I have helped facilitate the Metrics Champion Consortium TMF Metrics Work Group. The idea of an inspection-ready TMF at all times comes around fairly often. But to me, that misses the point. An inspection-ready (or audit-ready) TMF is a by-product of the TMF processes working well – not an aim in itself. We should be asking – what is the TMF for? The TMF is there to help in the running of the trial (as well as to document it, so as to be able to demonstrate that processes, GCP etc. were followed). It should not be an archive gathering dust until an audit or inspection is announced, at which point a mad panic ensues to make sure the TMF is inspection ready. It should be being used all the time – a fundamental source of information for the study team. Used this way, gaps, misfiles etc. will be noticed and corrected on an ongoing basis. If the TMF is being used correctly, there shouldn’t be significant audit findings. Of course, process and monitoring (via metrics) need to be set up around this to make sure it works. This is process thinking.

And then there are those processes that I expect we have all come across. No-one quite understands why there are so many convoluted steps. Then you discover that at some point in the past there was an audit and to close the audit finding (or CAPA), additional steps were added. No-one knows the point of the additional steps any more but they are sure they must be needed. One example I have seen was of a large quantity of documents being photo-copied prior to sending to another department. This was done because documents had got lost on one occasion and an audit had discovered this. So now someone spent 20% of their day photocopying documents in case they got lost in transit. Not a good use of time and not good for the environment. Better to redesign the process and then consider the risk. How often do documents get lost en route? Why? What is the consequence? Are some more critical than others? Etc. Adding the additional step to the process due to an audit finding was the easiest thing to do (like adding a QC step). But it was the least efficient response.

I wonder if part of the issue is that some auditors appear to push their own solution too hard. The process owner is the person who understands the process best. It is their responsibility to demonstrate they understand the audit findings, to challenge where necessary, and to argue for the actions they think will address the real issues. They should focus on the ‘why’ of the process.

Audit findings can be used to guide you in improving the process to take out risk and make it more efficient. Root cause analysis, of course, can help you with the why for particular parts of the process. And again, understanding the why helps you to determine much better actions to help prevent recurrence of issues.

Audits take time, and we would rather be focusing on the real work. But they also provide a valuable perspective from outside our organisation. We should welcome audits and use the input provided by people who are neutral to our processes to help us think, understand the why and make improvements in quality and efficiency. Let’s welcome the auditor!

 

Image: Pixabay

Text: © 2019 Dorricott MPI Ltd. All rights reserved.

Hurry Up and Think Critically!

At recent conferences I’ve attended and presented at, the topic of critical thinking has come up. At the MCC Summit, there was consternation that apparently some senior leaders think the progress in Artificial Intelligence will negate the need for critical thinking. No-one at the conference agreed with those senior leaders. And at the Institute for Clinical Research “Risky Business Forum”, everyone agreed on the importance of fostering critical thinking skills. We need people to take a step back and think about future issues (risks) rather than just the pressing issues of the day. Most people (except those senior leaders) would agree we need more people to be developing and using critical thinking skills in their day-to-day work. We need to teach people to think critically and not “spoon-feed” them the answers with checklists. But there’s much more to this than tools and techniques. How great to see, then, in the draft revision of ICH E8: “Create a culture that values and rewards critical thinking and open dialogue about quality and that goes beyond sole reliance on tools and checklists.” And that culture needs to include making sure people have time to think critically.

Think of those Clinical Research Associates on their monitoring visits to sites. At a CRO, it’s fairly common to expect them to be 95% utilized. This leaves only 5% of their contracted time for all the other “stuff” – the training, the 1:1s, the departmental meetings, the reading of SOPs, etc. Do people in this situation have time to think? Are they able and willing to take the time to follow up on leads and hunches? As I’ve mentioned previously, root cause analysis needs critical thinking. And it needs time. If you are pressured to come up with the results now, you will focus on containing the issue so you can rush on to the next one. You’ll make sure the site staff review their lab reports and mark clinical significance – but you won’t have time to understand why they didn’t do that in the first place. You will not learn the root cause(s) and will not be able to stop the issue from recurring. The opportunity to learn is lost. This is relevant in other areas too, such as risk identification, evaluation and control. With limited time for risk assessment on a study, would you be tempted to start with a list from another study, have a quick look over it and move quickly on to the next task? You would know it wasn’t a good job, but hopefully it was good enough.

Even worse, some organizations, in effect, punish those thinking critically. If you can see a way of improving the process, of reducing the likelihood of a particular issue recurring, what should you do? Some organizations make it a labyrinthine process to make the change. You might have to go off to QA and complete a form requesting a change to an SOP. And hope it gets to the right person – who has time to think about it and consider the change. And how should you know about the form? You should have read the SOP on SOP updates in your 5% of non-utilized time!

Organizations continue to put pressure on employees to work harder and harder. It is unreasonable to expect employees to perform tasks needing critical thinking well without allowing them the time to do so.

Do you and others around you have time to think critically?

 

Text: © 2019 DMPI Ltd. All rights reserved. (With thanks to Steve Young for the post title)

Picture: Rodin – The Thinker (Andrew Horne)

Why Can’t You Answer My Simple Question?

Often during a Root Cause Analysis session, it’s easy to get lost in the detail. The issues are typically complex and there are many aspects that need to be considered. Something that really doesn’t help is when people seem to be unable to answer a simple question. For example, you might ask “At what point would you consider escalating such an issue?” and you get a response such as “I emphasised the importance of the missing data in the report and follow-up letter.” The person seems to be making a statement about something different and has side-stepped your question. Why might that be?

Of course, it might be simply that they didn’t understand the question. Maybe English isn’t their first language, or the phone line is poor. Or they were distracted by an urgent email coming in. If you think this is the reason, it’s worth asking again – perhaps re-wording and making sure you’re clear.

Or maybe they don’t know the answer but feel they need to answer anyway. A common questioning technique is to ask an open question and then be silent to try to draw out a response. People tend not to like silence, and so they fill the gap. An unintended consequence of this might be that they fill the gap with something that doesn’t relate to the question you asked. They may feel embarrassed that they don’t know the answer and feel they should try to answer with something. You will need to listen carefully to the response and, if it appears they simply don’t know the answer, you could ask them whether anyone else might. Perhaps the person who knows is not at the meeting.

Another possibility is that they are fearful. They might fear the reaction of others. Perhaps procedures weren’t followed and they know they should have been. But admitting it might bring them, or their colleagues, trouble. This is probably more difficult to ascertain. To understand whether this is going on, you’ll need to build a rapport with those involved in the root cause analysis. Can you help them by asking them to think of Gilbert’s Behavioral Engineering factors that support good performance? Was the right information available at the right time to carry out the task? What about appropriate, well-functioning tools and resource? And were those involved properly trained? See if you can get them thinking about how to stop the issue recurring – as they come up with ideas, that might lead to a root cause of the actual issue. For example, if they think the escalation plan could be clearer, is a root cause that the escalation plan was unclear?

“No-one goes to work to do a bad job!” [W. Edwards Deming] They want to help improve things for next time. If they don’t seem to be answering your question – what do you think the root cause of that might be? And how can you overcome it?

Do you need help in root cause analysis? Take a look at DIGR-ACT training. Or give me a call.

 

No Blame – Why is it so Difficult?

I have written before about the importance of removing blame when trying to get to the root causes of an issue. To quote W. Edwards Deming, “No one can put in his [/her] best performance unless he [/she] feels secure. … Secure means without fear, not afraid to express ideas, not afraid to ask questions. Fear takes on many faces.” But why is it so difficult to achieve? You can start a root cause analysis session by telling people that it’s not about blame, but there’s more to it than simply telling them.

It’s in the culture of an organization – which is not easy to change. But you can encourage “no blame” by your questioning technique and approach too. If significant issues at an investigative site have been uncovered during an audit, the easiest thing might be to “blame” the CRA. Why didn’t he/she find the problems and deal with them earlier? What were they doing? Why didn’t they do it right? If I was the CRA and this appeared to be the approach to get to root cause, I certainly would be defensive. Yes, I got it wrong and I need to do better next time. Please don’t sack me! I would be fearful. Would it really help to get to the root causes?

Would it be better to start by saying that QC is not 100% effective – that we all miss things? What actually happens before, during and after a monitoring visit to this site? Are the staff cooperative? Do they follow up quickly with questions and concerns? And the key question – “What could be done differently to help make it more likely that these issues would have been detected and dealt with sooner?” This is really getting at Gilbert’s Behavior Engineering Model categories. Are site staff and the CRA given regular feedback? Are the tools and resources there to perform well? Do people have the right knowledge and skills?

This is where you’re likely to start making progress. Perhaps the site has not run a clinical trial before; they are research-naïve. We haven’t recognised this as a high-risk site and are using our standard monitoring approach. The CRA has limited experience. There’s been no co-monitoring visit and no-one’s been reviewing the Monitoring Visit Reports – because there’s a lack of resources due to high CRA turnover and higher-than-expected patient enrollment. And so on and so on… To quote W. Edwards Deming again, “Nobody goes to work to do a bad job.”

Don’t just tell people it’s not about blame. Show that you mean it by the questions you ask.

 

Want to find out more about effective root cause analysis in clinical trials? Visit www.digract.com today.

 

Text: © 2019 DMPI Ltd. All rights reserved.