In a previous post, I discussed whether retraining is ever a good answer to an issue. Short answer – NO! So what about that other common corrective action: adding more QC?
An easy corrective action to put in place is to add more QC. Get someone else to check. In reality, this is often a band-aid because you haven’t got to the root cause and are not able to tackle it directly. So you’re relying on catching errors rather than stopping them from happening in the first place. You’re not trying for “right first time” or “quality by design”.
“Two sets of eyes are better than one!” is the common defence of multiple layers of QC. After all, if someone misses an error, someone else might find it. Sounds plausible. And it does make sense for processes that occur infrequently and have unique outputs (like a Clinical Study Report). But for processes that repeat rapidly this approach becomes highly inefficient and ineffective. Consider a process like that below:
Specialist I carries out work in the process – perhaps entering metadata in relation to a scanned document (investigator, country, document type etc.). They check their work and modify it if they see errors. Then they pass it on to Specialist II, who checks it and modifies it if they see any errors. Then Specialist II passes it on to the next step. Two sets of eyes. What are the problems with this approach?
- It takes a long time. The two steps have to be carried out in series, i.e. Specialist II can’t QC the same item at the same time as Specialist I. Everything goes through two steps and a backlog forms between the Specialists. This means it takes much longer to get to the output.
- It is expensive. A whole process develops around managing the workflow, with some items fast-tracked due to an impending audit. It takes the time of two people (plus management) to carry out the task. More resources mean more money.
- The quality is not improved. This may seem odd, but think it through. There is no feedback loop for Specialist I to learn from errors that escape to Specialist II, so Specialist I continues to let those errors pass. And Specialist II will also make errors – in fact, the rework they do might actually introduce new ones. The two may not even agree on what counts as an error. This is not a learning process. And what if the process is under stress from lack of resources and tight timelines? With people rushing, do they check properly? Specialist I knows that Specialist II will pick up any errors, so doesn’t check thoroughly. And Specialist II knows that Specialist I always checks their work, so doesn’t check thoroughly. The result can be more errors escaping than if Specialist II had not been there at all (the short sketch below puts some illustrative numbers on this). Having everything go through a second QC as part of the process takes away accountability from the primary worker (Specialist I).
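To put some illustrative numbers on that last point: if the second check causes both people to check less carefully, the combined process can let through more errors than a single diligent check. Here is a minimal Python sketch; the catch rates (95%, 70%) are hypothetical figures chosen for illustration, not measurements from any real process.

```python
# Minimal sketch with hypothetical catch rates (illustrative only):
# an error escapes only if every checker in the series misses it.

def escape_rate(*catch_rates):
    """Fraction of errors that slip past every checker in the series."""
    miss = 1.0
    for p in catch_rates:
        miss *= (1.0 - p)
    return miss

# One accountable Specialist checking carefully, catching 95% of errors:
print(round(escape_rate(0.95), 2))        # 0.05 -> 5% of errors escape

# Two Specialists, each relying on the other, so each only catches 70%:
print(round(escape_rate(0.70, 0.70), 2))  # 0.09 -> 9% escape, worse than one careful check
```

The numbers are made up, but the mechanism is the point: the second check only helps if it doesn’t dilute the diligence (and accountability) of the first.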
So let’s recap. A process like this takes longer, costs more and delivers the same (or worse) quality as a one-step process. So why would anyone implement a process like this? Because “two sets of eyes are better than one!”
What might a learning approach with better quality and improved efficiency look like? I will propose an approach in my next post. As a hint, it’s risk-based!
Text: © 2018 Dorricott MPI Ltd. All rights reserved.
When is part 2? 🙂
My original post was rather long. So I split it into two parts. Also creates a bit of suspense. It is coming out soon…
Cool 🙂
Looking forward to seeing the solution in the next post 🙂
Thanks Oleg. I hope you like my solution. I’ll be interested to hear any critique.
Well explained Keith. Unfortunately, what you have pointed out is widely prevalent and accepted as the truth. It takes many people to point this out and much empirical evidence to prove old misconceptions false.
Thanks Sajid. That’s true – but it is industry specific. Manufacturing learned long ago not to do this. Deming was teaching it 50 years or more ago. It’s odd that this industry seems so slow to adopt these sorts of efficiencies.