
When is a test not a test?

First, I hope you are keeping safe in these disorienting times. This is certainly a time none of us will forget.

There have been lots of really interesting examples during this pandemic of the challenge of measurement. We know that science is key to us getting through this with the minimum impact and measurement is fundamental to science. I described a measurement challenge in my last post. Here’s another one that caught my eye. Deceptively simple and yet…

On 2-Apr-2020, the UK Government announced a target of 100,000 COVID-19 tests a day by the end of April. On 30-Apr-2020, they reported 122,347 tests. So they met the target, right? Well, maybe. To quote the great Donald J. Wheeler's First Principle for Understanding Data: "No data have meaning apart from their context." So let's be sceptical for a moment and see if we can understand what these 122,347 counted tests actually are. Would it be reasonable to include the following in the total?

    • Tests that didn’t take place – but where there was the capacity to run those tests
    • Tests where a sample was taken but has not yet been reported on as positive or negative
    • The number of swabs taken within a test – so a test requiring two swabs which are both analysed counts as two tests
    • Multiple tests on the same patient
    • Test kits that have been sent out by post on that day but have not yet been returned (and may never be returned)

You might think that including some of these is against the spirit of the target of 100,000 COVID-19 tests a day. Of course, it depends on what question the measurement is trying to answer. Is it the number of people who have received test results? Or is it the number of tests supplied (whether results are in or not)? In fact, you could probably list many different questions – each of which would give a different number. The Government's reporting doesn't go into this level of detail, so we're not sure what they include in their count. And we're not really sure what question they are asking.

And these differences aren't just academic. The 122,347 tests include 40,369 test kits that were sent out on 30-Apr-2020 but had not (yet) been returned. And 73,191 individual patients were tested, i.e. a significant number of tests were repeat tests on the same patients.
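Even a rough back-of-the-envelope calculation shows how much the answer depends on the question asked. Here is a minimal sketch using the figures above – how the official total was actually composed (and how these figures overlap) is not fully published, so the derived numbers are illustrative only:

    # Reported figures for 30-Apr-2020 (as quoted above). How they overlap is not
    # fully published, so treat the derived numbers as rough illustrations only.
    reported_total = 122_347          # headline "tests" reported for the day
    kits_posted_unreturned = 40_369   # kits mailed out that day, not yet returned
    people_tested = 73_191            # individual patients tested

    tests_excluding_posted_kits = reported_total - kits_posted_unreturned
    implied_repeat_tests = tests_excluding_posted_kits - people_tested

    print(f"Headline count:               {reported_total:,}")
    print(f"Excluding unreturned kits:    {tests_excluding_posted_kits:,}")  # 81,978
    print(f"Individual patients tested:   {people_tested:,}")
    print(f"Implied repeat tests (rough): {implied_repeat_tests:,}")         # 8,787

Three reasonable questions – tests reported, tests actually processed, people tested – give three very different numbers. That is exactly Wheeler's point about context.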

So, we should perhaps not take this at face value, and we need to ask a more fundamental question – what is the goal we are trying to achieve? Then we can develop measurements that focus on telling us whether the goal has been achieved. If the goal is to have tests performed for everyone who needs them, then a simple count of the number of tests is not much use on its own.

And is it wise to set an arbitrary target for a measurement that seems of such limited value? To quote Nicola Stonehouse, professor in molecular virology at the University of Leeds: "In terms of 100,000 as a target, I don't know where that really came from and whether that was a plucked out of thin air target or whether that was based on any logic." On 6-May-2020, the UK Government announced a target of 200,000 tests a day by the end of May.

Stay safe.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

Picture – The National Guard

Metric Challenges With COVID-19

Everyone's talking about the novel coronavirus, COVID-19. It is genuinely scary. And it's people's lives and livelihoods that are being affected. But with all the numbers flying around, I realised it's quite a good example of how metrics can be miscalculated and can mislead.

For example, the apparently simple question – what is the mortality rate? – is actually really difficult to answer during an epidemic. We need to determine the numerator and the denominator to estimate it. For the numerator, the number of deaths seems the right place to start. The denominator is a little more challenging, though. Should it be the total population? Clearly not – so let's take those who are known to be infected. But we know this will not be accurate: not everyone has been tested, some people have very mild symptoms, etc. There is also the challenge of obtaining accurate data in such a fast-moving situation. We would need to make sure the data for the numerator and denominator are both as accurate as possible at the same time point.

Once the epidemic has run its course, scientists will be able to determine the actual mortality rate. For example, if tests can be developed to determine population exposure (testing for antibodies to COVID-19), then a much better estimate of the mortality rate will be possible.

But during the epidemic, there is another challenge with this metric. It actually impacts the numerator. We don’t know whether those who are infected and not yet recovered will die. It can take 2-8 weeks to know the outcome. Some of those infected will sadly die from their infection in the future. And so, the numerator is actually an underestimate.
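A small sketch shows how much the choice of numerator and denominator matters mid-epidemic. The figures below are invented purely for illustration – they are not COVID-19 data:

    # Invented figures, for illustration only - not real COVID-19 data.
    deaths = 500
    confirmed_cases = 20_000              # known infections to date
    recovered = 9_500                     # cases with a known (good) outcome
    resolved_cases = deaths + recovered   # cases where the outcome is known

    # Naive estimate: many outcomes are still pending, so this tends to run low.
    naive_rate = deaths / confirmed_cases

    # Resolved-case estimate: only counts cases with a known outcome, so it tends
    # to run high early on if deaths are reported faster than recoveries.
    resolved_rate = deaths / resolved_cases

    print(f"Naive mortality rate:         {naive_rate:.1%}")     # 2.5%
    print(f"Resolved-case mortality rate: {resolved_rate:.1%}")  # 5.0%

Same epidemic, same day – and the two estimates differ by a factor of two. Neither is "the" mortality rate.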

As we measure processes in clinical trials, we can have similar issues with metrics. If we are trying to use metrics to predict the final drop-out rate of an ongoing trial (the rate of patients who discontinue treatment during the trial), dividing the number of drop-outs to date by the number of patients randomized will give a poor (low) estimate. A patient who has just started treatment has had little opportunity to drop out yet, whereas a patient who has nearly completed treatment has little opportunity left to drop out. At the end of the trial, the drop-out rate will be easy to calculate. But during the trial, we need to take account of the amount of time patients have been in treatment: we should weight a patient more if they have completed, or nearly completed, treatment and less if they have just started. We would also want to be sure that the numerator and denominator were accurate at the same time point. If data on drop-outs are delayed then, again, our metric will be too low. By considering carefully the way we calculate the metric, we can ensure we have a leading indicator that helps to predict the final drop-out rate (assuming things stay as they are). That can provide an early warning signal so that action can be taken early to reduce a drop-out rate that would otherwise end up invalidating the trial results.
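Here is a minimal sketch of that weighting idea – not a validated method, and the patient data, weighting scheme and 24-week treatment length are all assumptions for illustration:

    # Each patient: (weeks on treatment so far, dropped_out). Planned treatment
    # length is assumed to be 24 weeks; all values are invented for illustration.
    PLANNED_WEEKS = 24
    patients = [
        (24, False), (24, True), (20, False), (12, False),
        (8, True), (4, False), (2, False), (1, False),
    ]

    drop_outs = sum(dropped for _, dropped in patients)

    # Naive estimate: drop-outs to date over everyone randomized to date.
    naive_rate = drop_outs / len(patients)

    # Weighted estimate: each ongoing patient counts in the denominator only in
    # proportion to the treatment time completed; a drop-out counts in full
    # because their outcome is already known.
    weighted_denominator = sum(
        1.0 if dropped else min(weeks / PLANNED_WEEKS, 1.0)
        for weeks, dropped in patients
    )
    weighted_rate = drop_outs / weighted_denominator

    print(f"Naive drop-out rate:    {naive_rate:.1%}")     # 25.0%
    print(f"Weighted drop-out rate: {weighted_rate:.1%}")  # ~43%

The naive figure looks reassuring; the weighted figure is the early warning signal.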

In the meantime, let's hope the news about this virus starts to improve soon.

Much more detailed analysis of the Case Fatality Rate of COVID-19 is available here.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

What No-one Tells You About Root Cause Analysis

When something significant goes wrong, we all know that getting to the root cause is an important start to understanding and helping to prevent the same issue recurring. I’ve talked many times in this blog about methods of root cause analysis and, of course, I recommend DIGR-ACT®. But there are other methods too. The assumption with all these methods is that you can actually get to the root cause(s).

I was running process and risk training for the Institute of Clinical Research recently. The training includes root cause analysis. And one of the trainees gave an example of a Principal Investigator (PI) who had randomized a patient, received the randomization number and then picked the wrong medication lot for the patient – she should have selected the lot that matched the randomization number. This was discovered later in the trial when Investigational Product accountability was carried out by the CRA visiting the site. By this time, of course, the patient had potentially been put at risk and the results could not be included in the analysis. So why had this happened? It definitely seemed to be human error. But why had that error occurred?

The PI was experienced in clinical trials. She knew what to do. This error had not occurred before. There was no indication that she was particularly rushed or under pressure on that day. The number was clear and in large type. How was it possible to misread the number? The PI simply said she made a mistake. And mistakes happen. That's true, of course, but would we accept that of an aeroplane pilot? We'd still want to understand how it happened. Human error is not a root cause. But if human error isn't the root cause, what is?

Sometimes, we just don’t know. Root cause analysis relies on cause and effect. If we don’t understand the cause and effect relationships, we will not be able to get to true root causes. But that doesn’t mean we just hold up our hands and hope it doesn’t happen again. That would never pass in the airline industry. So what should we do in this case?

It's worth trying to see, first, how widespread a problem this is. Has it happened before at other sites? On other studies? What are the differences between sites/studies where this has and has not happened? This may still not be enough to lead you to root cause(s). If not, then maybe we could modify the process to make the error less likely to recur. Could we add a QC step, such as having the PI write the medication lot number down next to the randomization number – this should highlight a difference if there is one? Or perhaps they could enter the number into a system so that it can be checked automatically. Or maybe someone else at the site has to check at the point of dispensing.

A rarely mentioned secret of root cause analysis is that sometimes you can't get to the root cause(s). There are occasions when you simply don't have enough information to be able to get there. In these cases, whatever method you use, you cannot establish the root cause(s). Of course, if you can establish them, it will help in determining effective actions to stop recurrence. But even without establishing root cause(s), there are still actions you can take to reduce the likelihood of recurrence.

 

Text: © 2020 Dorricott MPI Ltd. All rights reserved.

What’s Swiss Cheese got to do with Root Cause Analysis?

“There can be only one true root cause!” Let's examine this oft-made statement with an example of a root cause analysis. Many patients in a study have been found at database lock to have been mis-stratified – causing difficulties with analysis and potentially invalidating the whole study. We discover that at randomization, the health professional is asked “Is the BMI ≤ 25? Yes/No”. In talking with CRAs and sites we realise that at a busy site, where English is not the first language, this is rather easy to answer incorrectly. If we wanted to make it easier for the health professional to get it right, why not simply ask for the patient's height and weight? Once those are entered, the IXRS could calculate the BMI and determine whether it is less than or equal to 25. This would be much less likely to lead to error. So, we determine that the root cause is that “the IXRS was set up without considering how to reduce the likelihood of user error.” We missed an opportunity to prevent the error occurring. That's definitely actionable. Unfortunately, of course, it's too late for this study but we can learn from the error for existing and future studies. We can look at other studies to see how they stratify patients and whether a similar error is likely to occur. We can update the standards for IXRS for future studies. Great!
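The calculation the IXRS would need is trivial – which is rather the point. A hedged sketch (the function names and the default threshold simply mirror the study rule described above):

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body Mass Index = weight (kg) divided by height (m) squared."""
        return weight_kg / (height_m ** 2)

    def stratum(weight_kg: float, height_m: float, threshold: float = 25.0) -> str:
        """Assign the stratification arm from raw height and weight, so the
        user never has to answer the error-prone Yes/No question."""
        return "BMI <= 25" if bmi(weight_kg, height_m) <= threshold else "BMI > 25"

    # Example: 70 kg and 1.75 m gives a BMI of about 22.9, so "BMI <= 25".
    print(stratum(70, 1.75))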

But is there more to it? Were there other actions that might have helped prevent the issue? Why was this not detected earlier? Were there opportunities to save this study? As we investigate further, we find:

  1. During user acceptance testing, this same error occurred but was put down to user error.
  2. There were several occasions during the study where a CRA had noticed that the IXRS question had been answered incorrectly. They modified the setting in EDC but were unable to change the stratification as this is set at randomization. No-one had realized that this was a systemic issue (i.e. it had been detected at several sites and was not a one-off due to a special cause).

Our one root cause definitely takes us forward. But there is more to learn from this issue. Perhaps there are some other root causes too. Such as "the results of user acceptance testing were not evaluated for the potential for user error". And "issues detected by CRAs were not recognised as systemic because there is no standard way of pulling out common issues found at sites." These could both lead to additional actions that might help to reduce the likelihood of the issue recurring. And notice that actions on these root causes might also help reduce the likelihood of other issues occurring too.

In my experience, root cause analysis rarely leads to one root cause. In a recent training course I was running for the Institute of Clinical Research, one of the delegates reminded me of the "Swiss cheese" model of root causes. There are typically many hazards, such as a user entering data into an IXRS. These hazards don't normally end up as issues because we put preventive measures in place (such as standards, user acceptance testing, training). You can think of each of these preventive measures as a slice of Swiss cheese – they prevent many hazards becoming issues but won't prevent everything. Sometimes, a hazard can get through a hole in the cheese. We also put detection methods in place (such as source data verification, edit checks, listing review). You can think of each of these as additional slices of cheese which prevent issues growing more significant but won't catch everything. It's when the holes in each of the layers of prevention and detection line up that a hazard can become a significant issue that might even lead to the failure of a study. So, in our example, the IXRS was set up poorly (a prevention layer failed), the user acceptance testing wasn't reviewed considering user error (another prevention layer failed), and CRA issues were not reviewed systematically (a detection layer failed). All these failures together led to the study potentially being lost.
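One way to see the model's force is to treat each layer as having some chance of letting a hazard through: if the layers fail independently (a simplification), the chance of the holes lining up is simply the product of those chances. The probabilities below are invented for illustration:

    # Invented failure probabilities for the three layers in the example above.
    layers = {
        "IXRS designed to prevent user error": 0.10,
        "UAT results reviewed for user error": 0.20,
        "CRA-reported issues reviewed systemically": 0.30,
    }

    p_holes_line_up = 1.0
    for p_fail in layers.values():
        p_holes_line_up *= p_fail

    # 0.10 * 0.20 * 0.30 = 0.006, i.e. about 0.6% of hazards get all the way through.
    print(f"Chance a hazard becomes a significant issue: {p_holes_line_up:.1%}")

Each layer is leaky on its own; it is the stack of layers that provides the protection – which is why losing even one layer matters.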

So if, in your root cause analysis, you have only one root cause, maybe it’s time to take another look. Are there maybe other learnings you can gain from the issue? Are there other prevention or detection layers that failed?

Do you need help in root cause analysis? Take a look at DIGR-ACT training. Or give me a call.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Oh No – Not Another Audit!

It has always intrigued me, this fear of the auditor. Note that I am separating out the auditor from the (regulatory) inspector here. Our industry has had an over-reliance on auditing for quality rather than on building our processes to ensure quality right first time. The Quality Management section of ICH E6 (R2) is a much-needed change in approach. And this is reinforced by the draft ICH E8 (R1): "Quality should rely on good design and its execution rather than overreliance on retrospective document checking, monitoring, auditing or inspection". The fear of the auditor has led to some very odd approaches.

Trial Master File (TMF) is a case in point. I seem to have become involved with TMF issues and improving TMF processes a number of times in CROs, and more recently have helped facilitate the Metrics Champion Consortium TMF Metrics Work Group. The idea of an inspection-ready TMF at all times comes around fairly often. But to me, that misses the point. An inspection-ready (or audit-ready) TMF is a by-product of the TMF processes working well – not an aim in itself. We should be asking – what is the TMF for? The TMF is there to help in the running of the trial (as well as to document it, to be able to demonstrate that processes, GCP etc. were followed). It should not be an archive gathering dust until an audit or inspection is announced, at which point a mad panic ensues to make sure the TMF is inspection-ready. It should be in use all the time – a fundamental source of information for the study team. Used this way, gaps, misfiles etc. will be noticed and corrected on an ongoing basis. If the TMF is being used correctly, there shouldn't be significant audit findings. Of course, process and monitoring (via metrics) need to be set up around this to make sure it works. This is process thinking.

And then there are those processes that I expect we have all come across. No-one quite understands why there are so many convoluted steps. Then you discover that at some point in the past there was an audit and, to close the audit finding (or CAPA), additional steps were added. No-one knows the point of the additional steps any more but they are sure they must be needed. One example I have seen was of a large quantity of documents being photocopied prior to sending to another department. This was done because documents had got lost on one occasion and an audit had discovered this. So someone now spent 20% of their day photocopying documents in case they got lost in transit. Not a good use of time and not good for the environment. Better to redesign the process and then consider the risk. How often do documents get lost en route? Why? What is the consequence? Are some more critical than others? Etc. Adding the additional step to the process due to an audit finding was the easiest thing to do (like adding a QC step). But it was the least efficient response.

I wonder if part of the issue is that some auditors appear to push their own solutions too hard. The process owner is the person who understands the process best. It is their responsibility to demonstrate they understand the audit findings, to challenge where necessary, and to argue for the actions they think will address the real issues. They should focus on the 'why' of the process.

Audit findings can be used to guide you in improving the process to take out risk and make it more efficient. Root cause analysis, of course, can help you with the why for particular parts of the process. And again, understanding the why helps you to determine much better actions to help prevent recurrence of issues.

Audits take time, and we would rather be focusing on the real work. But they also provide a valuable perspective from outside our organisation. We should welcome audits and use the input provided by people who are neutral to our processes to help us think, understand the why and make improvements in quality and efficiency. Let’s welcome the auditor!

 

Image: Pixabay

Text: © 2019 Dorricott MPI Ltd. All rights reserved.

Hurry Up and Think Critically!

At recent conferences I've attended and presented at, the topic of critical thinking has come up. At the MCC Summit, there was consternation that apparently some senior leaders think progress in Artificial Intelligence will negate the need for critical thinking. No-one at the conference agreed with those senior leaders. And at the Institute of Clinical Research "Risky Business Forum", everyone agreed on the importance of fostering critical thinking skills. We need people to take a step back and think about future issues (risks) rather than just the pressing issues of the day. Most people (except those senior leaders) would agree we need more people to be developing and using critical thinking skills in their day-to-day work. We need to teach people to think critically and not "spoon-feed" them the answers with checklists. But there's much more to this than tools and techniques. How great to see, then, in the draft revision of ICH E8: "Create a culture that values and rewards critical thinking and open dialogue about quality and that goes beyond sole reliance on tools and checklists." And that culture needs to include making sure people have time to think critically.

Think of those Clinical Research Associates on their monitoring visits to sites. At a CRO it's fairly common to expect them to be 95% utilized. This leaves only 5% of their contracted time for all the other "stuff" – the training, the 1:1s, the departmental meetings, the reading of SOPs etc. Do people in this situation have time to think? Are they able and willing to take the time to follow up on leads and hunches? As I've mentioned previously, root cause analysis needs critical thinking. And it needs time. If you are under pressure to come up with results now, you will focus on containing the issue so you can rush on to the next one. You'll make sure the site staff review their lab reports and mark clinical significance – but you won't have time to understand why they didn't do that in the first place. You will not learn the root cause(s) and will not be able to stop the issue from recurring. The opportunity to learn is lost. This is relevant in other areas too, such as risk identification, evaluation and control. With limited time for risk assessment on a study, would you be tempted to start with a list from another study, have a quick look over it and move on quickly to the next task? You would know it wasn't a good job but hopefully it was good enough.

Even worse, some organizations, in effect, punish those thinking critically. If you can see a way of improving the process, of reducing the likelihood of a particular issue recurring, what should you do? Some organizations make it a labyrinthine process to make the change. You might have to go off to QA and complete a form requesting a change to an SOP. And hope it gets to the right person – who has time to think about it and consider the change. And how should you know about the form? You should have read the SOP on SOP updates in your 5% of non-utilized time!

Organizations continue to put pressure on employees to work harder and harder. It is unreasonable to expect employees to perform tasks needing critical thinking well without allowing them the time to do so.

Do you and others around you have time to think critically?

 

Text: © 2019 DMPI Ltd. All rights reserved. (With thanks to Steve Young for the post title)

Picture: Rodin – The Thinker (Andrew Horne)

Why Can’t You Answer My Simple Question?

Often during a Root Cause Analysis session, it’s easy to get lost in the detail. The issues are typically complex and there are many aspects that need to be considered. Something that really doesn’t help is when people seem to be unable to answer a simple question. For example, you might ask “At what point would you consider escalating such an issue?” and you get a response such as “I emphasised the importance of the missing data in the report and follow-up letter.” The person seems to be making a statement about something different and has side-stepped your question. Why might that be?

Of course, it might be simply that they didn’t understand the question. Maybe English isn’t their first language, or the phone line is poor. Or they were distracted by an urgent email coming in. If you think this is the reason, it’s worth asking again – perhaps re-wording and making sure you’re clear.

Or maybe they don't know the answer but feel they need to answer anyway. A common questioning technique is to ask an open question and then stay silent to try to draw out a response. People tend not to like silence, so they fill the gap. An unintended consequence of this might be that they fill the gap with something that doesn't relate to the question you asked. They may feel embarrassed that they don't know the answer and feel they should try to answer with something. You will need to listen carefully to the response and, if it appears they simply don't know the answer, you could ask them whether anyone else might. Perhaps the person who knows is not at the meeting.

Another possibility is that they are fearful. They might fear the reaction of others. Perhaps procedures weren't followed and they know they should have been. But admitting it might bring them, or their colleagues, trouble. This is probably more difficult to ascertain. To understand whether this is going on, you'll need to build a rapport with those involved in the root cause analysis. Can you help them by asking them to think of the factors in Gilbert's Behavior Engineering Model that support good performance? Was the right information available at the right time to carry out the task? What about appropriate, well-functioning tools and resources? And were those involved properly trained? See if you can get them thinking about how to stop the issue recurring – as they come up with ideas, that might lead to a root cause of the actual issue. For example, if they think the escalation plan could be clearer, might one root cause be that the escalation plan was unclear?

“No-one goes to work to do a bad job!” [W. Edwards Deming] People want to help improve things for next time. If they don't seem to be answering your question – what do you think the root cause of that might be? And how can you overcome it?

Do you need help in root cause analysis? Take a look at DIGR-ACT training. Or give me a call.

 

No Blame – Why is it so Difficult?

I have written before about the importance of removing blame when trying to get to the root causes of an issue. To quote W. Edwards Deming, “No one can put in his [/her] best performance unless he [/she] feels secure. … Secure means without fear, not afraid to express ideas, not afraid to ask questions. Fear takes on many faces.” But why is it so difficult to achieve? You can start a root cause analysis session by telling people that it's not about blame, but there's more to it than that.

It's in the culture of an organization – which is not easy to change. But you can encourage "no blame" by your questioning technique and approach too. If significant issues at an investigative site have been uncovered during an audit, the easiest thing might be to "blame" the CRA. Why didn't he/she find the problems and deal with them earlier? What were they doing? Why didn't they do it right? If I were the CRA and this appeared to be the approach to getting to root cause, I certainly would be defensive. Yes, I got it wrong and I need to do better next time. Please don't sack me! I would be fearful. Would it really help to get to the root causes?

Would it be better to start by saying that QC is not 100% effective – we all miss things? What actually happens before, during and after a monitoring visit to this site? Are the staff cooperative? Do they follow up quickly with questions and concerns? And the key question – "What could be done differently to help make it more likely that these issues would have been detected and dealt with sooner?" This is really getting at Gilbert's Behavior Engineering Model categories. Are site staff and CRA given regular feedback? Are the tools and resources there to perform well? Do people have the right knowledge and skills?

This is where you're likely to start making progress. Perhaps the site has not run a clinical trial before – they are research-naïve. We haven't recognised this as a high-risk site and are using our standard monitoring approach. The CRA has limited experience. There's been no co-monitoring visit and no-one's been reviewing the Monitoring Visit Reports – because there's a lack of resources due to high CRA turnover and higher than expected patient enrollment. And so on and so on… To quote W. Edwards Deming again, "Nobody goes to work to do a bad job."

Don’t just tell people it’s not about blame. Show that you mean it by the questions you ask.

 

Want to find out more about effective root cause analysis in clinical trials? Visit www.digract.com today.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Lack of Formal Documentation – Not a Root Cause

When conducting root cause analysis, “Lack of formal documentation” is a suggested root cause I have often come across. It seems superficially like a good, actionable root cause. Let’s get some formal documentation of our process in place. But, I always ask, “Will the process being formally documented stop the issue from recurring?” What if people don’t follow the formally documented process? What if the existing process is poor and we are simply documenting it? It might help, of course. But it can’t be the only answer. Which means this is not the root cause – or at least it’s not the only root cause.

When reviewing a process, I always start off by asking those in the process what exactly they do and why. They will tell you what really happens. Warts and all. When you send the request but never get a response back. When the form is returned but the signature doesn't match the name. When someone goes on vacation while their work is in progress and no-one knows what's been done or what's next. Then I take a look at the Standard Operating Procedure (SOP), if there is one. It never matches.

So, if we get the SOP to match the actual process, our problems will go away, won't they? Of course not. You don't only need a clearly defined process. You need people who know the process and follow it. And you also want the defined process to be good. You want it carefully thought through and the ways it might fail considered. You can then build an effective process – one that is designed to handle the possible failures. And there is a great tool for this – Failure Mode and Effects Analysis (FMEA). Those who are getting used to risk-based quality management as part of implementing section 5.0 of ICH E6 (R2) will be used to the approach of scoring risks by Likelihood, Impact and Detectability. FMEA takes you through each of the process steps to develop your list of risks and prioritise them prior to modifying the process to make it more robust. This is true preventive action – trying to foresee issues and stop them from ever occurring. If you send a request but don't get a response back, why might that be? Could the request have gone into spam? Could it have gone to the wrong person? How might you handle it? Etc. Etc.
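As a small sketch of how FMEA prioritisation works in practice – the failure modes, the 1–5 scales and the scores are all invented for illustration (real FMEAs often use 1–10 scales):

    # Score each failure mode for Likelihood (L), Impact (I) and Detectability (D)
    # and rank by Risk Priority Number (RPN) = L * I * D. Values are illustrative.
    failure_modes = [
        {"step": "Send request", "failure": "Request lands in spam",        "L": 3, "I": 3, "D": 4},
        {"step": "Send request", "failure": "Request sent to wrong person", "L": 2, "I": 3, "D": 3},
        {"step": "Return form",  "failure": "Signature doesn't match name", "L": 2, "I": 4, "D": 2},
    ]

    for fm in failure_modes:
        fm["RPN"] = fm["L"] * fm["I"] * fm["D"]

    # Highest RPN first: these are the failure modes to design out of the process.
    for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
        print(f'{fm["RPN"]:>3}  {fm["step"]}: {fm["failure"]}')

The highest-scoring failure modes are the ones to design out of the process before they ever occur – true preventive action.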

Rather than the lack of a formal documented process being a root cause, it’s more likely that there is a lack of a well-designed and consistently applied process. And the action should be to agree the process and then work through how it might fail to develop a robust process. Then document that robust process and make sure it is followed. And, of course, monitor the process for failures so you can continuously improve. Perhaps more easily said than done. But better to work on that than spend time formally documenting a failing process and think you’ve fixed the problem.

Here are more of my blog posts on root cause analysis where I describe a better approach than Five Whys. Got questions or comments? Interested in training options? Contact me.

 

Text: © 2019 DMPI Ltd. All rights reserved.

Image: Standard Operating Procedures – State Dept, Bill Ahrendt

Do Processes Naturally Become More Complex?

I have been taking a fascinating course on language by Professor John McWhorter. One of his themes is that languages naturally become more complex over time. There are many processes that cause this as languages are passed through the generations and slowly mutate – vowel sounds change and consonants can be added to the ends of words, for example. And meanings are constantly changing too. He discusses the Navajo language, which is phenomenally complex with, incredibly, almost no regular verbs.

It got me wondering whether processes, like languages, have a tendency to get more complex over time too. I think perhaps they do. I remember walking through a process with a Project Research Associate (assistant to the Project Manager) as she explained each of the steps involved with a green light package used for approving a site for drug shipment. One of the steps was to photocopy all the documents before returning them to the Regulatory Department. These photocopies were then stored in a bulging set of filing cabinets. The documents were often multi-page, double-sided and stapled, and there were many of them – so it took over an hour for each site. I asked what the purpose was, but the Project Research Associate didn't know. No-one had told her. It was in the Work Instruction so that's what she did. The only reason I could think of for this was that at some point in the past, a pack of documents had been lost in transit to the Regulatory Department and fingers of blame were pointed in each direction. So the solution? Add a Cover-Your-Arse step to the process for every pack from then on. More complexity, and the reason lost in time.

I’ve seen the same happen in reaction to an audit finding. A knee-jerk change made to an SOP so that the finding can be responded to. But making the process more complicated. Was it really needed? Was it the most effective change? Not necessarily – but we have to get the auditors off our back!

Technology can also lead to increasing complexity of processes if we’re not careful. That wonderful new piece of technology is to be used for new studies but the previous ones have to continue in the “old” technology. And those working in the process have to cope with processes for both the old and the new. More complexity.

There is a set of languages which are much simpler than most, though – languages that have somehow shed their complexity. These are creoles. They develop where several languages are brought together and children grow up learning them. The creole ends up as a mush of the different languages but tends to lose much of the complexity along the way.

Perhaps processes have an analogy to creoles. Those people joining your organisation from outside – they do things somewhat differently. Maybe by pulling these ideas in and really examining your processes, you can take some of the complexity out and make it easier for people to follow? For true continuous improvement, we need to be open to those outside ideas and not dismiss them with “that’s not the way we do things here!” People coming in with fresh eyes looking at how you do things can be frustrating but can also lead to real improvements and perhaps simplification (like getting rid of the photocopying step!)

What do you think? Do processes tend to become more complex over time? How can we counter this trend?

 

Text: © 2019 DMPI Ltd. All rights reserved.

Image: Flag of the Navajo Nation (Himasaram)