Blog

Please FDA – Retraining is NOT the Answer!

The FDA has recently issued a draft Q&A Guidance Document on “A Risk-Based Approach to Monitoring of Clinical Investigations”. Definitely worth taking a look. There are 8 questions and answers. Two that caught my eye:

Q2. “Should sponsors monitor only risks that are important and likely to occur?”

The answer mentions that sponsors should also “consider monitoring risks that are less likely to occur but could have a significant impact on the investigation quality.” These are the High Impact, Low Probability events that I talked about in this post. The simple model of calculating risk by multiplying Impact and Probability gives a High Impact, Low Probability event the same priority as a Low Impact, High Probability event. But many experts in risk management think these should not be prioritised equally: High Impact, Low Probability events should be prioritised higher. So I think this is a really interesting answer.
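To make that concrete, here is a minimal sketch (the 1-to-5 scales and the scores are invented purely for illustration) of why the simple multiplication model can't tell the two kinds of event apart:

```python
# Illustration only: a simple Impact x Probability score treats these two risks identically.
risks = {
    "High impact, low probability": {"impact": 5, "probability": 1},
    "Low impact, high probability": {"impact": 1, "probability": 5},
}

for name, risk in risks.items():
    score = risk["impact"] * risk["probability"]
    print(f"{name}: risk score = {score}")

# Both print a score of 5, yet many risk managers would give the first the higher priority.
```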

Q7. “How should sponsors follow up on significant issues identified through monitoring, including communication of such issues?”

One part of the answer here has left me aghast. “…some examples of corrective and preventive actions that may be needed include retraining…” I have helped investigate issues in clinical trials so many times, and run root cause analysis training again and again. I always tell people that retraining is not a corrective action. Corrective actions should be based on the root cause(s). See a previous post on this and the confusing terminology. If you think someone needs retraining, ask yourself “why?” Could it be:

      • They were trained but didn’t follow the training. Why? Could it be that one or more of the Behavior Engineering Model categories was not supported, e.g. they didn’t have time, they didn’t have the right tools, or they weren’t given regular feedback to tell them how they were doing? If it’s one of these, then focus on that. Retraining will not be effective.
      • They never received the training in the first place. Why? Maybe they were absent when the rest of the staff was trained and there was no plan to make sure they caught up later. They don’t need retraining – they were never trained. They need training. And might there be others in this situation? Who else might have missed the training and needs it now? Maybe at other sites too.
      • There was something missing from the training (as increasingly looks likely to be one possible root cause in the tragic case of the Boeing 737 Max). Then the training needs to be modified. And it’s not about retraining one person or one site on training they had already received; it’s about training everyone on the revised material. Of course, later on, you might want to try to understand why an important component was missing from the training in the first place.

I firmly believe retraining is never the answer. There must be something deeper going on. If your only action is retraining, then you’ve not got to the root cause. I can accept reminding as an immediate action – but it’s not based on a root cause. It is more about providing feedback and is only going to have a short-term effect. An elephant may never forget but people do.

Got questions or comments? Interested in training options? Contact me.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Beyond Human Error

One of my most frequently viewed posts is on human errors. I am intrigued by this. I’ve run training on root cause analysis a number of times and occasionally someone will question my claim that human error is not a root cause. Of course, it may be in the chain of cause and effect, but why did the error occur? And you can be sure it’s not the first time the error has occurred – so why has it occurred on other occasions? What could be done to make the error less likely to occur? Using this line of questioning is how we can make process improvements and learn from things that go wrong rather than just blame someone for making a mistake and “re-training” them.

There is another approach to errors which I rather like. I was introduced to it by SAM Sather of Clinical Pathways. It comes from Gilbert’s Behavior Engineering Model and describes six categories of support that need to be in place for an individual to perform well in a system:

    • Expectations & Feedback: Is there a standard for the work? Is there regular feedback?
    • Tools, Resources: Is there enough time to perform well? Are the right tools in place?
    • Incentives & Disincentives: Are incentives contingent on good performance?
    • Knowledge & Skills: Is there a lack of knowledge or skill for the tasks?
    • Capacity & Readiness: Are people the right match for the tasks?
    • Motives & Preferences: Is there recognition of work well done?

 

Let’s take an example I’ve used a number of times: getting documents into the TMF. As you consider Gilbert’s Behavior Engineering Model you might ask:

    • Do those submitting documents know what the quality standard is?
    • Do they have time to perform the task well? Does the system help them to get it right first time?
    • Are there any incentives for performing well?
    • Do they know how to submit documents accurately?
    • Are they detail-oriented and likely to get it right?
    • Does the team celebrate success?

I have seen TMF systems where the answer to most of those questions is “no”. Is it any wonder that there are rejection rates of 15%, cycle times of many weeks and TMFs that are never truly “inspection ready”?
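As a rough sketch (the data structure and the pessimistic answers are mine, purely for illustration), here is one way those six questions could be captured as a checklist so the gaps stand out:

```python
# Hypothetical checklist pairing Gilbert's six categories with the TMF questions above.
bem_checklist = {
    "Expectations & Feedback": "Do those submitting documents know what the quality standard is?",
    "Tools, Resources": "Do they have time to do the task well, with a system that helps them get it right first time?",
    "Incentives & Disincentives": "Are there any incentives for performing well?",
    "Knowledge & Skills": "Do they know how to submit documents accurately?",
    "Capacity & Readiness": "Are they detail-oriented and likely to get it right?",
    "Motives & Preferences": "Does the team celebrate success?",
}

# Example review data: in the TMF processes described above, most answers come back "no".
answers = {category: False for category in bem_checklist}
answers["Knowledge & Skills"] = True   # perhaps training is the one thing that is in place

gaps = [category for category, supported in answers.items() if not supported]
print("Categories needing attention:")
for category in gaps:
    print(f" - {category}: {bem_checklist[category]}")
```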

After all, “if you always do what you’ve always done, you will always get what you’ve always got”. Time to change approach? Let’s get beyond human error.

Got questions or comments? Interested in training options? Contact me.

 

Text: © 2019 DMPI Ltd. All rights reserved.

DIGR-ACT® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Picture: Based on Gilbert’s Behavior Engineering Model

What My Model of eTMF Processing Taught Me (Part II)

In a previous post, I described a model I built for 100% QC of documents as part of an eTMF process. We took a look at the impact of the rejection rate for documents jumping from 10% to 15%. It was not good! So, what happens when an audit is announced and suddenly the number of documents submitted doubles? In the graph below, weeks 5 and 6 had double the number of documents. Look what it does to the inventory and cycle time:

The cycle time has shot up to around 21 days after 20 weeks. The additional documents have simply added to the backlog and that increases the cycle time because we are using First In, First Out.

So what do we learn overall from the model? In a system like this, with 100% QC, it is very easy to turn a potential bottleneck into an actual bottleneck. And when that happens, the inventory and cycle time will quickly shoot upwards unless additional resource is added (e.g. overtime). But, you might ask, do we really care about cycle time? We definitely should: if the study team can’t access documents until they have gone through QC, those documents are unavailable for 21 days on average. That’s not going to encourage everyday use of the TMF to review documents (as the regulators expect). And might members of the study team send in duplicates because they can’t see the documents that are awaiting processing, adding further documents and pushing inventory and cycle time up still further?

And this is not a worst-case scenario, as I’m only modelling one TMF here – typically a Central Files group will be managing many TMFs and may be prioritizing one over another (i.e. not First In, First Out). This spreads out the distribution of cycle times and will lead to many more documents that are severely delayed in processing.
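If you’d like to experiment with the model without rebuilding the spreadsheet, here is a minimal sketch of the same idea in Python. It uses the assumptions from the previous post (1000 documents submitted per week, capacity of 1100 per week, +/-5% random variation, rejected documents reprocessed the following week, First In, First Out) and doubles submissions in weeks 5 and 6. I’ve assumed the 15% rejection rate carries over from the earlier scenario, and the cycle time is approximated with Little’s law, so it won’t reproduce the spreadsheet exactly:

```python
import random

random.seed(1)  # reproducible example run

WEEKS = 20
SUBMITTED_PER_WEEK = 1000   # documents submitted each week
CAPACITY_PER_WEEK = 1100    # documents Central Files can QC each week
REJECTION_RATE = 0.15       # assumption: the 15% rate from the earlier scenario
SPIKE_WEEKS = {5, 6}        # audit announced: submissions double in these weeks

def jitter(n):
    """Apply the +/-5% weekly random variation used in the spreadsheet model."""
    return round(n * random.uniform(0.95, 1.05))

backlog = 0   # documents waiting for QC
rework = 0    # rejected documents returning for processing the following week

for week in range(1, WEEKS + 1):
    submitted = jitter(SUBMITTED_PER_WEEK) * (2 if week in SPIKE_WEEKS else 1)
    backlog += submitted + rework
    processed = min(backlog, jitter(CAPACITY_PER_WEEK))
    rework = round(processed * REJECTION_RATE)  # these come back next week
    backlog -= processed
    # Rough cycle-time estimate via Little's law: inventory divided by weekly throughput.
    cycle_time_days = 7 * backlog / processed if processed else 0
    print(f"Week {week:2d}: inventory {backlog:5d}, cycle time ~{cycle_time_days:4.1f} days")
```

Even a rough model like this shows the pattern described above: the backlog from the audit weeks never clears, and the cycle time climbs steadily towards three weeks.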

“But we need 100% QC of documents because the TMF is important!” I hear you shout. But do you really? As the great W Edwards Deming said, “Inspection is too late. The quality, good or bad, is already in the product.” Let’s get quality built in in the first place. You should start by looking at that 15% rejection rate. What on earth is going on to get a rejection rate like that? What are those rejections? Are those carrying out the QC doing so consistently? Do those uploading documents know the criteria? Is there anyone uploading documents who gets it right every time? If so, what is it that they do differently to others?

What if you could get the rejection rate down to less than 1%? At what point would you be comfortable taking a risk-based approach – one that assumes those uploading documents get it right the first time, with a random QC to look for systemic issues that could then be tackled? How much more efficient this would be. See the diagram in this post. And you’d remove that self-imposed bottleneck. You’d get documents in much quicker, at lower cost and with improved quality. ICH E6 (R2) asks us to think of quality not as checking 100% of everything but as concerning ourselves with the errors that matter. Are we brave enough as an industry to apply this to the TMF?
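Here is a hedged sketch of what that random QC might look like in practice (the 5% sampling fraction, the document IDs and the alert threshold are all invented for illustration, not a recommendation):

```python
import random

random.seed(7)  # reproducible example

documents = [f"DOC-{i:04d}" for i in range(1, 1001)]  # one week's uploads (hypothetical IDs)
SAMPLE_FRACTION = 0.05                                # assumption: review 5% chosen at random
ALERT_THRESHOLD = 0.02                                # assumption: investigate above 2% rejections

sample = random.sample(documents, k=int(len(documents) * SAMPLE_FRACTION))
# In reality each sampled document would be reviewed by QC; here the outcome is simulated
# with an assumed ~1% underlying defect rate.
rejected = [doc for doc in sample if random.random() < 0.01]

observed_rate = len(rejected) / len(sample)
print(f"Sampled {len(sample)} of {len(documents)} documents; "
      f"observed rejection rate {observed_rate:.1%}")
if observed_rate > ALERT_THRESHOLD:
    print("Investigate: possible systemic issue with document uploads")
```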

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

Searching For Unicorns

I read recently that we have reached “peak unicorn”. I wonder if that is true. I joined a breakout discussion at SCOPE in Florida last month entitled “RBM and Critical Reasoning Skills”, and the conversation shifted to unicorns. The discussion was about how difficult it is to find people with the right skills and experience for central monitoring. They need to understand the data and the systems. They need to have an understanding of processes at investigator sites. And they need to have the critical reasoning skills to make sense of everything they are seeing, to dig into the data and to escalate concerns to a broader group for consideration. Perhaps this is why our discussion turned to unicorns – these are people who are perhaps impossible to find.

It does strike me, though, how much our industry focuses on the need for experience. Experience can be very valuable, of course, but it can also lead to “old” ways of thinking without the constant refreshing of a curious mind, new situations and people. And surely we don’t have to rely on experience alone? Can’t we train people as well? After all, training is more than reading SOPs and having it recorded in your training record for auditors to check. It should be more than just the “how” for your current role. It should give you some idea of the “why” too and even improve your skills. I asked the group in the breakout discussion whether they thought critical reasoning skills can be taught – or do they come only from experience? Or are they simply innate? The group seemed to think it was rather a mixture, but the people who excel at this are those who are curious – who want to know more. Those who don’t accept everything at face value.

If we can help to develop people’s skills in critical reasoning, what training is available? Five Whys is often mentioned. I’ve written about some of the pitfalls of Five Whys previously. I’m excited to announce that I’ve been working with SAM Sather of Clinical Pathways to develop a training course to help people with those critical thinking skills. We see this as a gap in the industry and have developed a new, synthesized approach to help. If you’re interested in finding out more, go to www.digract.com.

Unfortunately, looking for real unicorns is a rather fruitless exercise. But by focusing on skills, perhaps we can help to train future central monitors in the new ways they need to think as they are presented with more and more data. And then we can leave the unicorns to fairy tales!

 

Text: © 2019 DMPI Ltd. All rights reserved.

What My Model of eTMF Processing Taught Me

On a recent long-haul flight, I got to thinking about the processing of TMF documents. Many organisations and eTMF systems seem to approach TMF documents with the idea that every one must be checked by someone other than the document owner. Sometimes, the document owner doesn’t even upload their own documents but provides them, along with metadata, to someone else to upload and index. And then their work is checked. There are an awful lot of documents in the TMF and going through multiple steps of QC (or inspection, as W Edwards Deming would call it) seems rather inefficient – see my previous posts. But we are a risk-averse industry – even having been given the guidance to use risk-based approaches in ICH E6 (R2), many organisations still seem to use this approach.

So what is the implication of 100% QC? I decided to model it in an Excel spreadsheet. My assumptions:

    • 1000 documents are submitted per week, and each document requires one round of QC.
    • The staff in Central Files can process up to 1100 documents per week.
    • Both numbers vary randomly by +/-5% each week (real variation is much greater than this, I realise).
    • 10% of documents are rejected at QC, and the updated documents are processed the next week.
    • Processing is First In, First Out.

My model tracks the inventory at the end of each week and the average cycle time for processing. It looks like this:

It’s looking reasonably well in control. The cycle time hovers around 3 days after 20 weeks, which seems pretty good. If you had a TMF process like this, you’d probably be feeling pretty pleased.

So what happens if the rejection rate is 15% rather than 10%?

Not so good! It’s interesting just how sensitive the system is to the rejection rate. This is clearly not a process in control any more and both inventory and cycle time are heading upwards. After 20 weeks, the average cycle time sits around 10 days.
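A back-of-the-envelope calculation (a sketch of the arithmetic, assuming resubmitted documents face the same rejection rate, which may differ from the spreadsheet) shows why the system is so sensitive: rework turns 1000 submissions into roughly 1000 / (1 - rejection rate) documents needing QC each week, and a small change in the rate pushes that total past the 1100-per-week capacity:

```python
SUBMITTED_PER_WEEK = 1000
CAPACITY_PER_WEEK = 1100

for rejection_rate in (0.10, 0.15):
    # Geometric-series rework: every rejected document must go back through QC.
    weekly_qc_workload = SUBMITTED_PER_WEEK / (1 - rejection_rate)
    surplus = weekly_qc_workload - CAPACITY_PER_WEEK
    print(f"Rejection rate {rejection_rate:.0%}: about {weekly_qc_workload:.0f} documents "
          f"need QC per week (capacity {CAPACITY_PER_WEEK}, surplus {surplus:+.0f})")
```

At 10% the workload sits right at the edge of capacity, which is why the baseline looks roughly in control; at 15% it exceeds capacity by around 75 documents every week, so the backlog, and with it the cycle time, can only grow.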

Having every document go through a QC like this forms a real constraint on the system – a potential bottleneck in terms of the Theory of Constraints. And it’s really easy to turn this potential bottleneck into a real bottleneck. And a bottleneck in a process leads to regular urgent requests, frustration and burn-out. Sound familiar?

In my next post, I’ll take a look at what happens when an audit is announced and the volume of documents to be processed jumps for a couple of weeks.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Picture: CC BY 2.0 Remko Van Dokkum

Performance Appraisal – A Better Way

I wrote previously about the waste of the annual performance appraisal. Perhaps you’ve just gone through yours or you’re about to. As I wrote at the time, “With employees and managers hating the process of annual performance appraisals, isn’t it about time we ditched them in favour of a continuous assessment approach and an ongoing focus on goals – for both the employee and organization?” A reasonable criticism is – but how would that process work? And wouldn’t it suffer from the same problems?

My friend, Linda Sullivan, recommended a book for me to read recently – John Doerr’s “Measure What Matters”. It’s about a process called OKRs (Objectives and Key Results) and Part 2 of the book is about moving away from annual appraisals to continuous performance management. Worth a read if you want to see another way. As Doerr says, “individuals cannot be reduced to numbers”. Something we all know really. Some ideas that I think could be revolutionary in workplaces that focus on the annual performance appraisal and goal-setting:

    • Objectives should have a short cycle time – maybe only 3 months
    • Objectives shouldn’t be between just the employee and manager. They should be shared broadly. This makes it clear what the priorities are – if your request isn’t within my priorities, don’t be surprised if I put it off for now. And if I’m struggling to meet objectives, please help me!
    • Don’t stick to objectives just because they were agreed at the start. Things change. Objectives sometimes need to change too.
    • Don’t link the achievement of objectives directly to remuneration. This encourages “sandbagging” and meeting objectives at all costs.
    • Regular employee meetings should focus on learning, coaching, understanding barriers and development.

For next year, could you persuade your management and HR department to get rid of the hated annual performance appraisals and goal setting?

Want to learn more about using KPIs correctly? Drop me a line! Or take a look at the training opportunities.

Happy New Year!

Text: © 2022 Dorricott MPI Ltd. All rights reserved.

Picture: Marco Verch  (CC BY 2.0)

Save me from the snow – a perspective on risk

I recently attended and presented at the MCC Clinical Trial Risk and Performance Management Summit in Princeton. It was a fantastic event – always great to meet people you’ve been talking with on the phone, and there was a real energy and desire to exchange ideas and learn. Around noon on day two, snow started to fall. And it kept falling. I wasn’t concerned. After all, snow is hardly unusual in these parts and I assumed it would all be sorted out fine. Unfortunately, this was not to be the case. Our taxi was around an hour late arriving to take us to Newark airport. And the drive that should have taken 45 minutes took four hours. There were plenty of accidents and broken-down vehicles on the way. When we got near to the airport, things seemed to get worse and at one point we were stuck, not moving, for around an hour. At the airport itself there was plenty of confusion as flight after flight was cancelled. The queue at the Customer Service Desk of people waiting to rebook flights and find a hotel was around 400 long. Based on the processing time, I estimated it would take around 10 hours for the person at the end of the queue to be seen. My flight was delayed by five hours but did leave. Other delegates from the conference had flights cancelled and ended up in the airport overnight.

It did get me thinking about the whole thing from a risk perspective. This was, apparently, a rare event – so much snow settling in November. The probability of such an event was low. But the impact on people trying to get anywhere was significant, and many people’s plans were badly disrupted. This is one of those high impact, low probability events which are actually rather difficult to manage from a risk perspective. Much more extreme examples are the 2011 Fukushima nuclear plant meltdown following a tsunami caused by an earthquake, and the possibility of a large asteroid hitting the earth. There’s even a UK government report on these high impact, low probability events from 2011, where a range of experts reviewed the latest research and different approaches. It’s important not to simply dismiss these risks – in particular because the probability is actually rather uncertain. The events happen rarely, which makes determining the true probability difficult. One approach is to improve detection – if you can detect early enough to take action, you can reduce the impact. And you can always have contingencies in place.

So back to the snow. I wonder, could they have predicted earlier that there was going to be so much snow? And that it would actually settle rather than melt away? Why didn’t they have better contingencies in place (e.g. gritting of roads, snow ploughs, better practices to deal with customers whose flights have been cancelled)? And here’s a scary thought – the probability of such events may be low. But it is uncertain. And with climate change, could it be that weather-related high impact, low probability events are actually becoming more common? Perhaps we need to improve our detection and contingencies for such events in the future.

On a final note, I will say I was very impressed by the stoicism of those impacted. I saw no-one getting angry. I saw people queuing in apparently hopeless queues without complaint. And there was plenty of good humour to go around. Enough to lift the spirits as we head into the holiday season!

 

Text and Picture: © 2018 Dorricott MPI Ltd. All rights reserved.

Wearables, Virtual Trials, Yes. But What About the Basics?

I was lucky enough to be presenting at and attending the SCOPE Europe conference recently. It started with some fascinating presentations and discussion on wearables and virtual trials. We all know technology is moving fast and some of the potential impacts in clinical trials are phenomenal. There was also a presentation by an extraordinary woman – Victoria Abbott-Fleming. She started her own charity for sufferers of Complex Regional Pain Syndrome (Burning Nights CRPS), having been diagnosed with the condition herself. She had found it difficult to obtain information from her health professionals in the NHS. Talking with Victoria and her husband, it was shocking to hear of the daily challenges and prejudices she encounters through insensitive actions and comments because she is young and confined to a wheelchair. On top of this, she has taken on an activist role, trying to cajole the NHS and government into helping get the support she and others like her need.

Victoria was presenting on the challenges patients face in getting onto a clinical trial. And it really makes you wonder how we can improve patient access. Often it is a real challenge to find out about, understand and access clinical trials. Victoria herself has wanted to go on a clinical trial for 15 years but has not managed it – if you’re not being treated by a physician who participates in clinical trials, your opportunities are limited. She has discovered clinical trials, but too late to actually participate. When we talk about being patient-centred, this should be a clear concern. TJ Sharpe also speaks powerfully on this topic from a patient perspective.

Of course, wearables and virtual trials might hold some of the answers to including more patients in clinical trials but you can’t help thinking something is wrong at a basic level if we can’t match up patients desperately wanting to participate in a clinical trial with trials that are actually available.

The charity Victoria founded: Burning Nights CRPS.

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Deliver Us From Delivery Errors

I returned home recently to find two packages waiting for me. They had been delivered while I was out. One was something I was expecting. The other was not – it was addressed to someone else, and at a completely different address (except the house number). How did that happen, I wondered? I called the courier company. After I had waited 15 minutes to get through, the representative listened to the problem and was clearly perplexed, as the item had been signed for on the system. Eventually he started, “Here’s what I can do for you…” and went on to explain how they could pick it up and deliver it to the right address. Problem solved.

Except that it caused me inconvenience (e.g. a 20-minute call) for which no apology ever came. Their customer did not receive the service they paid for (the package would now be late). The package was put at risk – I could have kept it and no-one would have known. There was no effort to understand how the error was made; they seemed to be too busy for that. It has damaged their reputation – I would certainly not use that delivery firm. It was simply seen as a problem to resolve. Not an opportunity to improve.

The next day, a neighbour came round to hand over a mis-delivered parcel. You guessed it: the same courier company had delivered a separate package that was meant for us to a neighbour instead. It’s great our neighbour brought it round. But the company will never hear of that error.

So many learnings from this! If the company were customer-focused, they would really want to understand how such errors occur (by carrying out root cause analysis). And they would want to learn from the problems rather than just resolving each one individually. They should take a systemic approach. They should also recognise that the data they hold on the number of errors (mis-deliveries in this case) is incomplete; helpful people sort out mis-deliveries for them every day without them even knowing. When they review data on the scale of the problem, they should be aware that it is an underestimate. And as for customer service, I can’t believe I didn’t even get a “sorry for the inconvenience”. According to a recent UK survey, 20% of people have had a parcel lost during delivery in the last 12 months. This is, after all, a critical error. Any decent company would want to really understand the issue and put systems in place to try to prevent future occurrences.

To me, this smacks of a culture of cost-cutting and lack of customer focus. Without a culture of continuous improvement, they will lose ground against their competitors. I have dealt with other courier companies and some of them are really on the ball. Let’s hope their management realises they need to change sooner rather than later…

 

Text: © 2018 Dorricott MPI Ltd. All rights reserved.

Have You Asked the Regulators?

To quote W Edwards Deming, “Every system is perfectly designed to give you exactly what you are getting today.” We all know our industry needs radical innovation and we are seeing it in many places – as you can see when attending DIA. I wonder, though, why innovation seems to be so slow in our industry compared with others.

I was talking to a systems vendor recently about changing the approach to QC for documents going into the TMF. I was taken aback by the comment “Have you asked the regulators about it? I’m not sure what they would think.” Regulation understandably plays a big part in our industry, but have we learned to fear it? If every time someone wants to try something new, the first response is “But what would the regulators think?”, doesn’t that limit innovation and improvement? I’m not arguing for ignoring regulation; of course, it is there for a very important purpose. But does our attitude to it stifle innovation?

When you consider the update to ICH E6 (R2), it is not exactly radical when compared with other industries. Carrying out a formal risk assessment has been standard for Health & Safety in factories and workplaces for years. ISO – not a body known for moving swiftly – introduced its risk management standard, ISO 31000, in 2009. The financial sector started developing its approach to risk management in the 1980s (although that didn’t seem to stop the 2008 financial crash!). And, of course, insurance has been based on understanding and quantifying risk for decades before that.

There has always been a level of risk management in clinical trials – but usually rather informal and based on the knowledge and experience of the individuals involved in running the trial. Implementing ICH E6 (R2) brings a more formal approach and encourages lessons learned to be used as part of risk assessment, evaluation and control for other trials.

So, if ICH E6 (R2) is not radical, why did our industry not have a formal and developed approach to risk management beforehand? Could it be this fear of the regulator? Do we have to wait until the regulators tell us it is OK to think the unthinkable (such as not having 100% SDV)?

What do you think? Is our pace of change right? Does fear of regulators limit our horizons?

Text: © 2018 Dorricott MPI Ltd. All rights reserved.