
Big Data – Garbage in, garbage out?

Change of plan for this post… I visited the dentist recently, and before the consultation I was handed an iPad with a form to complete. I was sure I had completed the same form on my last visit – and the receptionist confirmed it has to be completed every six months. So I had completed it before. It was a long form asking all sorts of details about medical history, medicines being taken and so on, including questions about lifestyle – how much exercise you get, whether you smoke, how much alcohol you drink. It all seemed rather over the top to complete every six months, and such an inefficient process, prone to error: every patient answering all these detailed questions, often in a rush, with no way to check their previous answers. Wouldn’t it be nice if the form were pre-filled with my previous answers and I could just make any adjustments? All a little frustrating really. So I asked the receptionist why all this was needed.

“The government needs it,” was the reply. Really? What on earth do they do with it all, I wondered? I have to admit, that answer made me try a little experiment: I tried to see if the form would submit without me entering anything. It didn’t – it told me I had to sign the form first. So I signed it, and sure enough it was accepted. I handed the iPad back to the receptionist and she thanked me for being so quick. Off I went to my appointment and all was fine. And I felt as though I had struck a very small blow for freedom.

I wonder what does happen to all the data. Does it really go to “the government”? What would they do with it? Is it a case of gathering big data that can then be mined for trends – how the various factors affect dental health, maybe? Well, one thing’s for sure: I wouldn’t trust the conclusions, given how easy it seems to be to dupe the system. What guarantee is there of the accuracy of any of the data? It seems to me a case of garbage in, garbage out.

As we are all wowed by what Big Data can do and the incredible neural networks and algorithms teams can develop to help us (see previous blog), we do need to think about the source of the Big Data. Where has it come from? Could it be biased (almost certainly)? And in what way? How can we guard against the impact of that bias? There has been a lot in the news recently about the dangers of bias – for example in Time and the Guardian. If we’re not careful, we build bias into the algorithms and simply continue with the discrimination we already have. Our best defence is scepticism – just as it is in root cause analysis when an expert is quoted as evidence. As Edward Hodnett says: “Be sceptical of assertions of fact that start, ‘J. Irving Allerdyce, the tax expert, says…’ There are at least ten ways in which these facts may not be valid. (1) Allerdyce may not have made the statement at all. (2) He may have made an error. (3) He may be misquoted. (4) He may have been quoted only in part….”

Being sceptical and asking questions can help us avoid erroneous conclusions. Ask questions like: “how do you know that?”, “do we have evidence for that?” and “could there be bias here?”

Big Data has huge potential. But let’s not be so wowed by it that we stop questioning. Be sceptical. Remember, it could be another case of garbage in, garbage out.

Image: Pixabay

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

To Err is Human But Human Error is Not a Root Cause

In a recent post I talked about Human Factors and different error types. You don’t necessarily need to classify human errors into these types, but splitting them out this way helps us think about the different sorts of errors there are. It also moves us beyond stopping at ‘human error’ when carrying out root cause analysis (using DIGR® or another method). Part of the problem with ‘human error’ as a root cause is that there isn’t much you can do with the conclusion – to err is human, after all, so let’s move on to something else. But people make errors for a reason, and trying to understand why they made the error can lead us down a much more fruitful path to actions we can implement to try to prevent recurrence. If a pilot makes an error that leads to a near disaster or worse, we don’t just conclude that it was human error and there is nothing we can do about it. In a crash involving a self-driving car, we want to go beyond “human error” as a root cause to understand why the error might have occurred. As more self-driving cars take to the road, we want to learn from every incident.

By getting beyond human error and considering the different error types, we can start to think of actions we could implement to try to stop the errors occurring (“corrective actions”). Ideally, we want processes and systems to be easy and intuitive, and the people to be well trained. When people are well trained but the process and/or system is complex, there are likely to be errors from time to time. As W. Edwards Deming once said, “A bad system will beat a good person every time.”

Below are examples of each of the error types described in my last post, along with example corrective actions.

| Error Type | Example | Example Corrective Action |
| --- | --- | --- |
| Action errors (slips) | Entering data into the wrong field in EDC | Error and sense checks to flag a possible error |
| Action errors (lapses) | Forgetting to check the fridge temperature | A checklist that shows when the fridge was last checked |
| Thinking errors (rule-based) | Reading a date written in American format as European (3/8/16 being 8-Mar-2016 rather than 3-Aug-2016) | Use an unambiguous date format such as dd-mmm-yyyy (see the sketch below) |
| Thinking errors (knowledge-based) | Incorrect use of a scale | Ensure proper training and testing on use of the scale; only those trained can use it |
| Non-compliance (routine, situational and exceptional) | Not noting down details of the drug used in the Accountability Log due to rushing | Regular checking by staff, and consequences for not noting appropriately |
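
To make that date-format fix concrete, here is a minimal sketch (in Python, which I’ll use for illustrations) of how the same string parses to two different dates under the two conventions – and how dd-mmm-yyyy removes the ambiguity:

```python
from datetime import datetime

# "3/8/16" is ambiguous: under the American convention it is 8-Mar-2016,
# under the European convention it is 3-Aug-2016 -- same string, two dates.
ambiguous = "3/8/16"
as_american = datetime.strptime(ambiguous, "%m/%d/%y")  # month first
as_european = datetime.strptime(ambiguous, "%d/%m/%y")  # day first
print(as_american.date(), as_european.date())  # 2016-03-08 2016-08-03

# Written as dd-mmm-yyyy, the month can only be read one way.
print(as_american.strftime("%d-%b-%Y"))  # 08-Mar-2016
```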

These are just examples – you can probably think of other possible corrective actions. But which ones would you actually implement? The most effective and efficient ones, of course. You want your actions focused on the root cause – or on the chain of cause and effect that leads to the problem.

The most effective actions are those that eliminate the problem completely – such as an automated calculation of BMI (Body Mass Index) from height and mass, rather than expecting staff to calculate it correctly. If it can’t go wrong, it won’t go wrong (the corollary of Murphy’s Law). This is mistake-proofing.
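
As a sketch of what that mistake-proofing might look like – the function and the plausibility ranges are my own invention, not a real EDC feature:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Derive BMI = mass (kg) / height (m) squared, so staff never calculate it."""
    # Sense checks: flag implausible values rather than storing a wrong BMI.
    if not 1.0 < weight_kg < 500.0:
        raise ValueError("implausible weight -- check the units")
    if not 0.3 < height_m < 2.8:
        raise ValueError("implausible height -- check the units")
    return round(weight_kg / height_m ** 2, 1)

print(bmi(70.0, 1.75))  # 22.9 -- calculated, not hand-entered
```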

The next most effective actions are ones that help people to get it right. Drop-down lists and clear, concise instructions are examples of this – although instructions do have their limitations (as I will discuss in a future post). “No-one goes to work to do a bad job!” (W. Edwards Deming again), so let’s help them do a good job.

The least effective actions are ones that rely on a check catching an error right at the end of the process – for example, the nurse checking the expiry date on a vial before administering it. That’s not to say these checks should not be there, but rather that they should be thought of as the “last line of defence”.

Ideally, you also want some sort of check to make sure the revised process is working. This check is an early signal as to whether your actions are effective at fixing the problem.

Got questions or comments? Interested in training options? Contact me.


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

“To err is human” – Alexander Pope

Where’s My Luggage?

On a recent flight, I had a transfer in Dublin. My arriving flight was delayed because there weren’t enough available stands at the airport. I made it to my connecting flight, but evidently my hold luggage did not. Have you ever been there? Standing by the baggage reclaim watching the bags come out, as they are slowly collected by their owners who disappear off, until you are left watching the one or two unclaimed bags go round and round – and yours is not there? Not great.

The process of finding my luggage and delivering it home the next day was actually all pretty efficient. I filled in a form, my details were entered in the system and then I got regular updates via email and text on what was happening. The delivery company called me 30 minutes before arriving at my house to check I was in. But it was still frustrating not having my luggage for 24 hours. It got me thinking…

How often does this happen? Apparently, on average, less than 1% of bags are lost. Given the number of bags flown, that is still a lot, and it explains why the process of locating and delivering them seems so well refined, with specific systems to track and communicate. But what is the risk on specific journeys and transfers? When I booked the flight, the airline had recommended the relatively short transfer time in Dublin. My guess is that luggage missing the connecting flight on the schedule I was on is not that unusual – it only takes a delay of 30 minutes or more, it seems, for your luggage to miss the transfer. And as we all know, a 30-minute delay is not unusual.

This is a process failure, and it has a direct cost: administration (forms, personnel entering data into a system, a help line, labelling), IT (a specific system with customer access) and transport (from the airport to my home). I would guess at US$200 minimum. This must easily wipe out the profit on the sale of my ticket (cost: US$600). So that gives some idea of the frequency – it cannot be so high as to negate all the profit from selling tickets. It must be a cost-benefit analysis by the airline. Perhaps luggage misses this particular connecting flight 5% of the time and they accept the direct cost. The benefit is that customers prefer the shorter transfer time, which makes the overall travel time less. So far so good.
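
Putting those back-of-envelope numbers together (all of them guesses from above):

```python
p_missed = 0.05        # guess: 5% of bags miss this tight connection
handling_cost = 200.0  # guess: US$200 to trace and deliver one bag
ticket_price = 600.0

expected_cost = p_missed * handling_cost
print(f"Expected handling cost per ticket: ${expected_cost:.2f}")     # $10.00
print(f"As a share of the fare: {expected_cost / ticket_price:.1%}")  # 1.7%
```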

But what about the cost of the 24 hours I spent without my luggage? I’m sure that’s not factored into the cost-benefit analysis, because it’s not a cost the airline can quantify. Is my frustration enough to make me decide not to fly with that airline again? I heard recently of someone whose holiday was completely messed up by delayed luggage. They had travelled to a country planning to hire a car and drive to a neighbouring country the next day. But the airline said it could only deliver the delayed luggage within the country of arrival – and it would take 48 hours. The direct cost to the airline was fairly small, but the impact on the customer was significant.

So how about this for an idea: we’re in the information age, and the data on delayed luggage must already be captured. When I book a flight with a short transfer time in future, I’d like to know the likelihood (based on past data) of my luggage not making the transfer. Instead of the airline being the only one to carry out the cost-benefit analysis, I want in on the decision too – but based on data. If the risk looks small, I might decide to take it. As we all have our own tolerance for risk, we might make different decisions. But at least we would be more in control, rather than leaving it all to the airline. That would be empowerment.
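
Here is a rough sketch of the kind of figure I have in mind – the delay history is invented, and I’ve assumed a bag misses the connection whenever the inbound flight is 30 or more minutes late:

```python
# Minutes of delay on past inbound flights for this route (made up).
past_delays_min = [0, 5, 0, 45, 10, 0, 35, 20, 0, 60, 15, 0, 5, 30, 0, 10]
min_connection_buffer = 30  # assumption: 30+ minutes late means a missed bag

missed = sum(1 for d in past_delays_min if d >= min_connection_buffer)
risk = missed / len(past_delays_min)
print(f"Estimated risk of delayed luggage: {risk:.0%}")  # 25%
```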

We can’t ensure everything always goes right. But we can use past performance to estimate risk and take our own decisions accordingly.


Photo: Kenneth Lu (license)

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

Would You Give Me 10 out of 10?

After a recent intercontinental flight, my luggage didn’t turn up on the carousel. Not a great feeling! I was eventually reunited with my bag – more about that in a future post. The airline sent me a survey about the flight and offered a small incentive to complete it. I felt I had something to say, so I clicked the button to answer the ‘short’ survey. It went on for page after page, asking about the booking process, how I obtained my boarding card, whether my check-in experience was acceptable, the on-board entertainment, the meals and so on. After the first few pages I gave up. And I’m sure I’m not the only one to give up part way through. Why do companies go over the top when asking for feedback? What do they do with all the data?

I’ve come across a number of examples where data from surveys is not really used. At one company, whenever someone resigned, they were asked to complete an exit survey online. I asked HR if I could see the results, as we were concerned about staff retention and I wondered if the survey might be a useful source of information. They said they had no summary because no-one had ever analysed the data. No-one ever analyses the data? It is disrespectful of people’s time, and misleading, to ask them to complete a survey and then ignore their responses. What on earth were they running the survey for? This is an extreme version of a real danger with surveys – running them without knowing how you plan to use the data. If you don’t know before you run the survey, don’t run it!

Of course, there are also cases where you know the survey data itself is misleading. I heard a story of someone who worked as a bank teller and was asked to make sure every customer completed a paper survey. They had to get at least 10 completed every day. These were then all forwarded to head office to be entered into a system and analysed. The problem was that the customers did not want to complete the surveys – they were all too busy. So what did the bank tellers do? They got their friends and family to complete them so that they met their 10 per day target. I wonder how many hours were spent analysing the data from those surveys, reporting on them, making decisions and implementing changes. When running a survey, be mindful of how you gather the data – using the wrong incentives might lead to very misleading results.

Another way that incentives can skew your data is by tying financial consequences to the results. At Uber (in 2015 at least), you needed an average score of 4.6 out of 5 to continue as a driver. So if a passenger gives you 4 out of 5 (which they might think of as a reasonable score), you need another two passengers to give you 5 out of 5 to make up for it. And if a passenger gives you a 3, you need another four passengers to give you a 5 to get back to a 4.6 average. What behaviour does that drive? Some good, for sure – trying to improve the passenger experience. But could there also be drivers who make it clear to the passenger how their livelihood depends on getting a top mark of 5, as is apparently common in car dealerships? This data set is surely skewed.
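
You can check that arithmetic in a few lines – a minimal sketch of how many 5s it takes to pull an average back up to the cut-off:

```python
def fives_needed(low_score: float, cutoff: float = 4.6) -> int:
    """How many 5s are needed after one low score to reach the cut-off average?"""
    fives = 0
    while (low_score + 5 * fives) / (1 + fives) < cutoff:
        fives += 1
    return fives

print(fives_needed(4))  # 2 -- one 4 needs two 5s: (4 + 5 + 5) / 3 = 4.67
print(fives_needed(3))  # 4 -- one 3 needs four 5s: (3 + 20) / 5 = 4.6
```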

It’s easy to come up with questions and set up a survey. But it’s much more difficult to do it well. Here’s a great article on the “10 big mistakes people make when running customer surveys”, along with good suggestions on how to analyse your survey data using Excel.

Talking of surveys, please make sure you ‘like’ this post!


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

Don’t blame me! The corrosive effect of blame

Root cause analysis (RCA) is not always easy. And there is frequently not enough time. So where it is done, it is common for people to take short cuts. The easiest short cuts are:

  1. to assume this problem is the same as one you’ve seen before and that the cause is the same (I mentioned this in a previous post). Of course, you might be right. But it might be worth taking a little extra time to make sure you’ve considered all options. The DIGR® approach to RCA can really help here as it takes everyone through the facts and process in a logical way.
  2. to blame someone (or a department, site etc)

Blame is corrosive. As soon as that game starts being played, everyone clams up. Most people don’t want to open up in that sort of environment because they risk every word they utter being used against them. So once blame comes into the picture you can forget getting to root cause.

To help guard against blame, it’s useful to know a little about the field of Human Factors. This is an area of science focused on designing products, systems, or processes to take proper account of the interaction between them and the people who use them. It is used extensively in the airline industry and has helped them get to their current impressive safety record. The British Health and Safety Executive has a great list of different error types.

This is based on the Human Factors Analysis and Classification System (HFACS). The error types are split into:

| Error Type | Example |
| --- | --- |
| Action errors (slips) | Turning the wrong switch on or off |
| Action errors (lapses) | Forgetting to lock a door |
| Thinking errors (rule-based) – a known rule is misapplied | Ignoring an evacuation alarm because of previous false alarms |
| Thinking errors (knowledge-based) – lack of prior knowledge leads to a mistake | Using an out-of-date map to plot an unfamiliar route |
| Non-compliance (routine, situational and exceptional) | Speeding in a car (knowingly ignoring the speed limit because everyone else does) |

So how can Human Factors help us? Consider a hypothetical situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. It might be easiest to blame the nurse administering or the pharmacist prescribing: they should have taken more care and checked the expiry date properly. What could the human errors have been?

They might have forgotten (a lapse). Or they might have read the expiry date as European date format when it was written in American format (a rule-based thinking error). Or they might have been rushing and not had time (non-compliance). Of course, we know the error occurred on multiple occasions and by different people, as it happened at multiple sites. This suggests a systemic issue – and that reminding or retraining staff will have only a limited effect.

Maybe it would be better to make sure that expired drug can’t reach the point of being dispensed or administered, so that we don’t rely on the final check by the pharmacist and nurse. We still want them to check, but we should not expect them to find expired vaccine.
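
As a sketch of the idea (vial data and field names are invented), the dispensing system would simply never offer expired stock, so the pharmacist’s and nurse’s checks become a true last line of defence:

```python
from datetime import date

stock = [
    {"vial_id": "A001", "expiry": date(2017, 3, 31)},
    {"vial_id": "A002", "expiry": date(2018, 1, 15)},
]

def dispensable(stock, on):
    """Return only vials still in date -- expired ones never reach the nurse."""
    return [v for v in stock if v["expiry"] >= on]

print(dispensable(stock, date(2017, 6, 1)))  # only A002 is offered
```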

After all, as W. Edwards Deming said, “No-one goes to work to do a bad job!”

In my next post I will talk about the different sorts of actions you can take to try to minimise the chance of human error.

And as an added extra, here’s a link to an astonishing story that emphasises the importance of taking blame out of RCA.


Photo: NYPhotographic

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Root Cause Analysis – A Mechanic’s View

My car broke down recently and I was stuck by the side of the road waiting for a recovery company. It gave me an opportunity to watch a real expert in root cause analysis at work.

He started by ascertaining exactly what the problem was – the car had just been parked and would now not start. Then he went into a series of questions. How much had the car been driven that day? Was there any history of the car not starting or being difficult to start? Next, he was clearly thinking through the process of how a car starts up – the electrics turning the motor, fuel being drawn into the engine, spark plugs igniting the fuel, pistons moving and the engine idling. He started at the beginning of the process. Could the immobiliser be faulty? Had I dropped the key? No. Maybe the battery was not providing enough power – so he attached a booster, but to no avail. What about the fuel? Maybe it had run out? But the gauge showed ½ tank – had I filled it recently? After all, the gauge might be faulty. Yes, I had filled it that day. Maybe the fuel wasn’t getting to the engine – so he tapped the fuel pipe to try to clear any blockage. No. Then he removed the fuel pipe and, hey presto, no fuel was coming through. It was a faulty fuel pump, and it must have just failed. This all took about 10 minutes.

The mechanic was demonstrating very effective root cause analysis – it’s what he does every day, without thinking about how to do it. I asked him whether he had come across “Five Whys” – no, he hadn’t. And as I thought about Five Whys with this problem, I wondered how he might have gone about it. Why has the car stopped? Because it will not start. Why will the car not start? Erm. Don’t know. Without gathering information about the problem, he would not be able to get to root cause.

Contrast the Five Whys approach with the DIGR® method:

Define – the car will not start

Is/Is not – the problem has just happened. No evidence of a problem earlier.

Go step-by-step – Starter motor, battery, immobiliser, fuel, spark plugs.

Root cause – He went through all the DIGR® steps and it was when going through the process step-by-step that he discovered the cause. He had various ideas en route and tested them until he found the cause. He could have kept going of course – why did the fuel pump fail? But he had gone far enough, to a cause he had control over and could fix.

Of course, he hadn’t heard of DIGR® and didn’t need it. But he was following the steps. In clinical trials, there is often not a physical process we can see and testing our ideas may not be quite so easy. But we can still follow the same basic steps to get to a root cause we can act on.
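
If it helps, the walk-through can even be captured as a simple template. Here is a sketch of the mechanic’s session recorded DIGR®-style – the structure below is my own way of noting it down, not part of the method itself:

```python
# A DIGR(R)-style record of the mechanic's diagnosis (illustrative only).
rca = {
    "define": "Car will not start, immediately after being parked",
    "is_is_not": "Has just happened; no history of starting trouble",
    "go_step_by_step": [
        ("immobiliser / key", "ok"),
        ("battery", "ok -- booster made no difference"),
        ("fuel in tank", "ok -- filled that day, gauge reads half"),
        ("fuel reaching engine", "FAIL -- no flow from the fuel pipe"),
    ],
    "root_cause": "faulty fuel pump",
}

for step, finding in rca["go_step_by_step"]:
    print(f"{step:<22} {finding}")
```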

If you don’t carry out root cause analysis every day like this mechanic, perhaps DIGR® can help remind you of the key steps you should take. If you’re interested in finding out more, please feel free to contact me.


Photo: Craig Sunter (License)

Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott Metrics & Process Improvement Ltd.

Let’s Stop Confusing Everyone With CAPA!

I am really not a fan of the term “CAPA”. I think people’s eyes glaze over at the mention of it. It is seen as an administrative burden that the Quality Department and regulators foist onto the people actually trying to do the real work. And I think it is a misnamed term. CAPA stands for Corrective Action, Preventive Action. When a serious issue arises in a clinical trial, a CAPA is raised. This is meant to get beyond the immediate fire-fighting of the situation and to get to root cause, so that corrective and/or preventive actions can be put in place. Sounds sensible. But what if I ask you what the difference is between a corrective and a preventive action?

ISO9001:2008 defines them as:

Corrective Actions – “The organization shall take action to eliminate the causes of nonconformities in order to prevent recurrence.”

Preventive Actions – “The organization shall determine action to eliminate the causes of potential nonconformities in order to prevent their occurrence.”

Not very easy to get your head around, in part because of the use of the word ‘prevent’ in both definitions. And if a Preventive Action is designed to prevent occurrence, then the nonconformity (error) cannot already have occurred – and yet a CAPA is raised when a nonconformity has occurred. So the PA part of CAPA seems wrong to me. The differing definitions of Corrective and Preventive have caused no end of confusion as organisations implemented ISO9001. The good news is that ISO9001:2015 brings a significant update in this area. When a significant issue (nonconformity) occurs, you are expected to implement immediate actions to contain it (termed Corrections) and also Corrective Actions to try to prevent recurrence. The Preventive Actions, though, are no longer associated with the issue. They now fit into an overall risk approach: by assessing risks in processes up-front, and then continuously through their life-cycle, you are expected to develop ways to reduce the risk. These are the Preventive Actions – or, in risk language, the Mitigations.

Sound familiar? In clinical trials, of course, we have the ICH addendum (ICH E6 R2) bringing in significant language on risk, which brings it more into line with the revised ISO9001:2015 standard – a welcome change. What is odd is that the addendum includes the following in section 5.20.1:

If noncompliance that significantly affects or has the potential to significantly affect human subject protection or reliability of trial results is discovered, the sponsor should perform a root cause analysis and implement appropriate corrective and preventive actions.

This, unfortunately, mentions preventive actions next to corrective ones without any explanation of the difference, and with no link to the approach to risk in section 5.0. So it seems the confusion will remain in our area of work – confusion compounded by our use of the CAPA terminology.

I would vote to get rid of the CAPA term altogether and talk instead about CARs (Corrective Action Requests) and Risk. Maybe along with that, we could rehabilitate the whole approach. Done well, with good root cause analysis and corrective actions, CARs are an important part of a learning organisation. They should not be seen as tedious administration that the Quality Department is requesting.

What do you think? Perhaps it’s all clear to you and you think CAPA is a great term?

In my next post I want to go back into the root cause analysis (RCA) process itself – whether DIGR® or another method. I’ll talk more about the corrosive effect of blame on RCA and how to overcome it.


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott MPI Ltd.

Picture: ccPixs.com

Process Improvement: Let’s Automate Our Processes!

I came across an example of a process in need of improvement recently. Like you, I come across these pretty regularly in everyday life. But this one has an interesting twist…

I was applying for a service via a broker. The broker recommended a company, and he was excited because this company had a new process using electronic signatures. They had ‘automated the process’ rather than needing paper forms, snail mail and so on. I was intrigued too, and pleased to give it a go. The email arrived, and it was a little disconcerting: it warned that any error I made in the electronic signature process was my fault and might invalidate it – they would not check for accuracy. When I clicked on the link, there was a problem because the broker had entered my landline number into the system rather than my mobile number, and the phone number was needed to send an authentication text. So he attempted to correct that, and a new email arrived. When I clicked the link this time, it told me that “the envelope is being updated”. I had no idea what envelope it was talking about – a pretty useless error message. I wasn’t feeling great about this process improvement by now.

The broker said, “Let’s go back to the paper way then,” and emailed me a 16-page form to complete. I had to get it signed by four different people in a particular order. It was a pretty challenging form that needed to be completed, scanned and emailed back. As I completed it, I did wonder just how often there must be errors in completion (including, possibly, my own) – there seemed to be hundreds of opportunities for error. It makes sense, I thought, to implement a process improvement and use electronic signatures – to ‘automate the process’. Where they had failed was clearly in the implementation: they had not trained the broker or given adequate instructions to the end user (me), and error messages using IT jargon were of no help. It reminded me of an electronic filing system I saw implemented some years ago, when a company decided to ‘automate the process’ of filing. The IT Department was over the moon because it had implemented the system one week ahead of schedule. But no-one was actually using it: they hadn’t been trained, the roll-out had not been properly considered, and there was no thought of reinforcing behaviours or monitoring actual use. No change management considerations at all. A success for IT but a failure for the company!

Anyway, back to the story. After completing the good old paper-based process, I was talking some more with the broker and he said: “Their quote for you was good but their application process is lousy. Other companies have a much easier way of doing it – for most of them, the broker completes the information online and then sends a two-page form via email to print, review, sign (once), scan and return. After that, a confirmation pack comes through and the consumer has the chance to correct errors at that stage. But it’s all assumed to be right at the start.” These companies had a simple and efficient process, and no need to ‘automate’ it with electronic signatures.

Hang on – why does the company I used need a 16-page form and four signatures, I hear you ask? Who knows! They had clearly recognised that their process needed improving but had headed down the route of ‘let’s automate it’. They could have saved themselves an awful lot of the cost of implementing their new, improved process if they had talked with the broker about his experience first.

The lesson here is: don’t just take a bad process and try to ‘automate’ it with IT. Start by challenging the process. Why is it there? Does it have to be done that way? There might even be other companies out there with a slick process already – do you know how your competition solves the problem? Even more intriguingly, perhaps another industry has solved a similar problem in a clever way that you could copy. If you discover that a process is actually unnecessary and you can dramatically simplify it, then you’re mistake-proofing it. Taking out unnecessary steps means they can’t go wrong.

In my next post I will explore the confusion surrounding the term CAPA.

Breaking News – the broker just got back to me to tell me I had got one of the pages wrong on the 16-page form. This is definitely a process in need of improvement!


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

Go Step-By-Step to get to Root Cause

In an earlier post, I described my DIGR® method of root cause analysis (RCA):

Define

Is – Is Not

Go Step By Step

Root Cause

In this post, I wanted to look more at Go Step By Step and why it is so powerful.

“If you can’t describe what you’re doing as a process, you don’t know what you’re doing” – a wonderful quote from W. Edwards Deming! And there is a lot of truth to it. In this blog, I’ve been using a hypothetical situation to help illustrate my ideas. Consider the situation where you are the Clinical Trial Lead on a vaccine study. Information is emerging that a number of the injections of trial vaccine have actually been administered after the expiry date of the vials. This has happened at several sites. You’ve taken actions to contain the situation for now. And have started using DIGR® to try to get to the root cause. It’s already brought lots of new information out and you’ve got to Go Step By Step. As you start to talk through the process, it becomes clear that not everyone has the same view of what each role in the process should do. A swim-lane process map for how vaccine should be quarantined shows tasks split into roles and helps the team to focus on where the failures are occurring:

In going step-by-step through the process, it becomes clear that the Clinical Research Associates (CRAs) are not all receiving the emails, nor are they clear what to do with them when they do. The CRA role here, however, is really a QC role – the primary process takes place in the other two swimlanes. And it was the primary process that broke down: the email going from the Drug Management System to the site (the step highlighted in red).

So we now have a focus for our efforts to stop recurrence. You can probably see ways to redesign the process. That might work for future clinical trials, but it could lead to undesired effects in the current one. So a series of checks might be needed instead – for example, sending test emails from the system to confirm receipt by site and CRA, or regular checks for bounced emails. Ensuring CRAs know what to do when they receive an email would also help – perhaps the text in the email can be made clearer.
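
As a rough illustration of the bounced-email check, using Python’s standard imaplib – the mail host, account and search term are all assumptions, and real bounce detection is messier than this:

```python
import imaplib

# Assumed host and credentials for the system's outbound mailbox.
with imaplib.IMAP4_SSL("imap.example.org") as mail:
    mail.login("drug-management-system", "app-password")
    mail.select("INBOX", readonly=True)
    # Many mail servers flag delivery failures with subjects like this.
    status, data = mail.search(None, '(SUBJECT "Undelivered Mail")')
    bounces = data[0].split()
    if bounces:
        print(f"WARNING: {len(bounces)} expiry notifications may not have arrived")
```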

By going step-by-step through the process as part of DIGR®, we bring the team back to what they have control of. We have moved away from blaming the pharmacists or the nurses at the two sites. Going down the blame route is never good in RCA, as I will discuss in a future post. Reviewing the process as it should be also helps to combat cognitive bias, which I’ve mentioned before.

As risk assessment, control and management are more clearly laid out in ICH GCP E6 (R2), process maps can help with risk identification and reduction too. To quote from section 5.0: “The sponsor should identify risks to critical trial processes and data.” Here we have discovered a process that is failing and could have significant effects on subject safety. By reviewing process maps of such critical processes, consideration can be given to the identification, prioritisation and control of risks. This might involve tools such as Failure Mode and Effects Analysis (FMEA), and redesign where possible in an effort to mistake-proof the process. This shows one way in which RCA and risk connect: the RCA led us to understand a risk better, and we can then put in controls to reduce the risk (by reducing the likelihood of occurrence). We can even consider how, in future trials, we might modify the process to make similar errors much less likely, and so reduce the risk from the start. This is true prevention of error.
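
To illustrate how FMEA might prioritise what we’ve found – the failure modes come from the scenario above, but the severity, occurrence and detectability scores (each 1-10) are invented:

```python
# Failure modes from the scenario above; scores are illustrative only.
failure_modes = [
    ("expiry email not delivered to site", 9, 4, 7),
    ("CRA unsure what to do with the email", 6, 6, 5),
    ("site ignores quarantine request", 9, 2, 4),
]

# Risk Priority Number = severity x occurrence x detectability.
for mode, sev, occ, det in sorted(
        failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {sev * occ * det:3d}: {mode}")
```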

In my next post I will talk about how (not) to ‘automate’ a process.


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott MPI Ltd.

Lies, Damned Lies and Statistics

Change of plan for this post after receiving a mailshot from a local estate agent…

Statistics are at the heart of the scientific method. They help us to prove or disprove hypotheses (to a certain level of confidence) and so make the discussion about facts rather than opinion. They have huge power – both to reveal the truth and, when used wrongly, to hide it.

When I was 16, I received, as a present, the book “How to Lie with Statistics” by Darrell Huff. OK, so perhaps I was rather an odd teenager, but I thought this book was fantastic. I am pleased to see it is still available from good bookstores. It has stood me in good stead for many years: I always go straight to the graphs in any article I read and wonder what the author is trying to show (or hide). So when a mailshot full of pretty graphs from an estate agent came through the door recently, I was impressed to see they had managed to demonstrate so many of Huff’s observations in one glossy sheet of paper.

The first graph in the mailshot is what Huff calls a “gee whiz graph” – it’s the one below. They state that they have done some “spatial interpolation of property price data, also known as number crunching!” They go on to explain helpfully that “for every 0.25km you live closer to the station, the average property price rose by £2700.” Do you believe them?

Huff describes this use of statistics as “statisticulation”, which I rather like. Of course, what they have done is “suppress zero” on the y-axis without any warning – cutting off 89% of the bar on the left and 98% of the bar on the right. The bar on the left ends up nine times the height of the bar on the right, even though the numerical difference is just 10%. But the graph begs many more questions. What sort of average is shown? (See Huff’s “well-chosen average”.) Is the difference statistically significant? (See Huff’s “much ado about nothing”.) How many properties are included in the figures? Is the mix of properties the same within both radii? And what if I tell you that within 5km of the particular train station they are talking about there are actually another 9 train stations – including at least one where many more commuter trains stop regularly? And that there is a well-regarded school nearby that many parents want to live close to, to increase the chance of their child attending? Could that be a factor?
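
You can recreate the trick in a few lines. In this sketch the two prices are invented and 10% apart; suppressing zero makes the left bar tower over the right:

```python
import matplotlib.pyplot as plt

labels = ["within 1km", "1-5km from station"]
prices = [330_000, 300_000]  # invented averages, 10% apart

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, bottom, title in [(axes[0], 295_000, "Zero suppressed (gee whiz!)"),
                          (axes[1], 0, "Zero shown")]:
    ax.bar(labels, prices)
    ax.set_ylim(bottom=bottom)  # the whole trick is in this line
    ax.set_title(title)
axes[0].set_ylabel("Average price (£)")
plt.tight_layout()
plt.show()
```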

Of course, even if they were able to prove a correlation between distance from the station and property price (which they certainly haven’t with the data above), we know that “correlation does not imply causation” – Huff covers this in his chapter “Post hoc rides again”. It reminds me of the annual story of how living near Waitrose (a top-end UK supermarket) can increase the value of your home. Could it be, instead, that wealthier people tend to shop at high-end supermarkets, and so high-end supermarkets locate where wealthier people live – in more expensive properties?

Another of the graphs is shown below, along with the text: “People are at lots of different stages of their lives. The largest number of people are Retired which accounts for 20.5% of the total. This is 0.4% lower than the national average.” Is that really the most interesting feature of the graph?

When I look at the graph (I am assuming the data is accurate – they claim it comes from the Office for National Statistics, so I think that’s OK), the tiny difference between the red and grey bars for Retired is not what strikes me. I would say it looks as though this area has more families and Empty Nesters than the average. But I don’t really know, because I don’t know whether the differences are statistically significant (see Huff’s “much ado about nothing” again). We can be reasonably confident that the larger differences are significant, because the sample is large. But could we really say that there are 0.4% fewer Retired households than the national average? I think it likely this is within the range of error and that we can’t really say whether there is any difference at all – but I don’t know, of course, because no sample sizes are shown. We only have percentages.

It also starts me wondering how the data is collected. What about a house with grown-up children where one of those grown-up children has had a child (three generations in one house) – which category does that fall into? And a couple without children – are they a Young Family? What if they are older but not retired? Or a split family where one parent looks after the children one week and the other the next? And how does the Office for National Statistics know what type of family is living in each property? After all, people are moving all the time – buying and selling, but also moving in with others or moving out.
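
To see why the sample size matters here, a quick sketch of the 95% margin of error on a proportion – the sample sizes are invented, since the mailshot gives none:

```python
from math import sqrt

p = 0.205  # 20.5% Retired
for n in (500, 5_000, 50_000):
    margin = 1.96 * sqrt(p * (1 - p) / n)  # normal approximation
    print(f"n = {n:>6}: 20.5% +/- {margin:.1%}")

# With n = 500 the margin is about 3.5%, so a 0.4% gap is pure noise;
# only around n = 50,000 does the margin shrink to about 0.4%.
```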

You get the point.

Statistics and data can tell us so much. They are the bedrock of the scientific method. But we must always be sceptical and question them. Who is telling us? How do they know? What’s missing? Does it make sense? Or, as Huff puts it, “talk back to a statistic”!

In my next post I will go back to the DIGR® method of root cause analysis, looking in more detail at the G of DIGR® – how using process maps can really help everyone involved to Go step-by-step and start to see where a process might fail.


Text: © 2017 Dorricott MPI Ltd. All rights reserved.

DIGR® is a registered trademark of Dorricott MPI Ltd.