Transcript of: Evidence-Based Medicine: What is it? Why Should We Care?

Presented by Dr. Miller in November 2020

[Soft Music]
And welcome to everybody, thanks for joining us. Sam, my thing is not advancing here.
If you hover towards the bottom left of the PowerPoint, I think there's a little arrow.
Uh-huh, there we go. Okay, thank you. Well, I hope everybody's doing well, thanks again for being here.
I hope everybody's healthy and doing well. I will apologize in advance if I start to cough; it is a little bit
smoky here in Colorado, and even though I'm not using an evidence-based strategy with these throat
lozenges, hopefully they'll keep me from interrupting the talk today. So what I'd like to do is start off with a
scenario. It's not one that I'm wishing on any of you. But let's say you go to your doctor, you have a fever of
103.4, you receive an injection of penicillin and a penicillin prescription, and 24 hours later your
temperature is 100.2.
So the question is, did the penicillin treat the disease or reduce your fever? So we have three options
here. One, yes, you believe the penicillin cured you; two, no, the penicillin was not of value. And for those
of you that are convinced that I'm up to no good if I'm starting off a talk with a quiz, you don't have to
commit either way. So, Samuel, put up the poll, people can go ahead and vote, and we'll see what people
think about this.
Okay, looks like I've got some trust issues, most of you don't trust me here. And then everybody
else is pretty evenly divided in terms of yes or no. The answer, in my opinion, is maybe, and maybe not.
It could be that the penicillin took care of whatever was causing your fever. But it could be that even if you hadn't
taken the penicillin, your fever might have gone down anyway; let's say it was some sort of viral
infection, and it quickly resolved on its own. So to figure that out, a lot of times we need a lot of
ancillary data to figure out what's going on.
So why don't we go ahead and modify this question a little bit here. You go in, you have the fever, you
receive an injection of penicillin, your temperature is 100.2 24 hours later, and you also have a
COVID-19 test. And then four days later, the results come back, and it says you're positive.
So the question that some people may throw out is, did the penicillin cure the COVID-19?
Probably not, because penicillin is more active against bacterial infections and COVID-19 is a virus, though it
wouldn't be the first time that a drug had unintended or unanticipated benefits. But the reality is, it probably was
not of any value, and those that think that the penicillin cured them of their COVID-19 are
probably not entirely right there. So I'm sorry, I didn't move the slide ahead quickly enough. But anyway, these
sorts of scenarios hopefully illustrate the challenges that go along with evidence-based
medicine, and why it's important.
So some of the topics that we'll be going over: one, what is evidence-based medicine? Two, how did we
get here? What's the history in terms of why evidence-based medicine came to be? And then, what types
of clinical research are there, and how much confidence can we have in them to support evidence-based medicine? I've got that
in bold and underlined because that's pretty much going to be the emphasis of what we're going to be
talking about here. But there are other elements of evidence-based medicine. Then we'll talk a little bit
about outcomes as well, which we just went over with our scenarios in terms of the fever going down
and hopefully you're feeling better at that point. And it's not the holy grail for everything, so we'll
also be talking a little bit about some of the criticisms of evidence-based medicine as well. So what is
evidence-based medicine? This definition from Masic is the one that I really like because it emphasizes
that not only is it the conscientious and reasonable use of the best available evidence, but it's also explicit and
judicious, meaning you're pretty upfront about, not hiding, what sort of biases you have in mind.
So what I'd like you to do is burn this image that I just brought up into your mind, because even though I'm
going to be emphasizing the science, and hopefully you guys can see my cursor there, the other
elements of evidence-based medicine are clinical judgment and patient values, and the sweet spot of
evidence-based medicine is where everything meets. So again, I'll be emphasizing the science, but as a
clinical judgment example, let's say the patient is on other medications, and the preferred
medication for a given condition ends up either counteracting or not working in harmony
with the medications they're already on, or potentially exacerbating clinical conditions that are
also already there. The patient values are a real big one. An example of that is, let's say I have
declined treatment for cancer due to my own personal values, or because of my personal
assessment of the trade-offs, the strengths and weaknesses, of going down that path.
So let's have a little bit of history, since history is always something that we want to get into. Back
in the not so great old days, bloodletting was a therapy for a lot of diseases, and it probably wasn't that
effective in most cases. Then, during roughly the same time period, we also had scurvy. You might
have heard of the limeys, the sailors. Scurvy was resolved with lemons or oranges,
meaning basically vitamin C, and that was reported by Lind in 1753. That particular treatment has
stood the test of time.
So the question is, which diagnostic tests are we going to be using that stand the test of time, as well as
treatments like these that we just talked about? That's where evidence-based medicine is of value. So
again, here is a selective and condensed history of medicine. Back in the 19th and 20th centuries, the Germans
were the ones that were in the lead in terms of medical school training because they integrated clinical
and research training. And there are a lot of big names that many of you have probably heard: Albert
Schweitzer, who was a great physician as well as being a humanitarian; Virchow, who was the
father of pathology and also had a number of other talents; and Koch, who discovered the bacteria behind TB,
cholera, and anthrax, for whom you know about Koch's postulates, and many others.
So back in that time, if you wanted a good doctor and you were in the United States, what you wanted to do
was get somebody that was trained in Germany, or even better yet, go to Germany for treatment. Now,
that all changed back in 1910 with the Flexner report. What that did was set certain
standards for medical training in the United States, and that ended up leaving mainly the well-known
university-based schools; that's where universities like Johns Hopkins, Harvard, and
the like ended up developing their medical programs and starting to train doctors. And that ended up
leading to the United States having much better medical care. Some will say we've been leaders for a
long time.
But even back then, there was a guy named Ernest A. Codman; I read some of his work from, I think, as far
back as 1912. He was a surgeon in the Boston area, and he didn't win a lot of fans, because even back
then he was saying, "Hey, everybody's kind of doing their own thing. Some things work, some things
don't, we need to have better standards." So he was really big on having more efficient medical
treatments and therapies, and having more consistency. Jumping ahead in my highly selective history
here, into the 1980s, there was a guy named David Eddy, who spoke quite a bit about errors in clinical
decision making and had a strong influence on how people approach things in
medicine. There is David Sackett, one of my personal favorites, who had a big influence on me on how to
apply epidemiological methods in clinical practice, and who trained physicians in using epidemiological
methods.
And then there was the RAND Health Insurance Experiment from 1974 to 1982, which was funded by the
United States Department of Health, Education and Welfare. It was an insurance program, and what
they were doing was looking at better ways of cost-sharing, comparing different groups. It was a
randomized controlled trial, in which they randomly assigned different people to
different insurance plans. And what they wanted to do was figure out, where should
we be applying our medical care best? And where can we use it when it's needed, as opposed to
sometimes, where maybe it's by personal whim, in terms of using these medical resources. At that time,
we also had professional guidelines starting to be developed for various conditions by
professional societies like the American Cancer Society, the American Heart Association, and many others.
So jumping up to the '90s, there's the Cochrane Collaboration, which again is one of my favorites. What
they did was set up systematic reviews and guidelines, with the idea being that we wanted
healthcare that was evidence-informed, effective, and sustainable. But again, there's that big word,
transparent, in terms of what the evidence is. What's the research behind it? Why
are we saying X, Y, or Z? A certain part of that transparency is also depending on non-commercial
funding, because even though drug companies or others have the money to support a lot of this research,
there's strong potential for bias in those situations. The Cochrane Collaboration has been successful
enough that the World Health Organization has adopted them as a partner for a lot of their decision-making
processes.
There were other processes developed in the '90s as well, such as one out of Oxford, and then GRADE,
the Grading of Recommendations Assessment, Development and Evaluation, for looking at research studies.
GRADE actually ended up being adopted and incorporated into Cochrane, as we'll be talking about a little bit.
One aside that I will mention: I'm not saying anything one way or another in terms of pros or
cons about socialized medicine. But one of the benefits of socialized medicine is that you do have these
large centralized databases that are perfect for evaluating the diagnostic and treatment selections that
clinicians make, and the outcomes. And as a consequence, some of the leaders in evidence-based
medicine seem to be concentrated in those countries with socialized medicine. Again, I'm not
pushing it one way or another, but basically pointing out that it is a fact.
So anyway, let's get back to the definition. As we talked about, what's important is that we're being
transparent, we're being clear. Also, a reminder that we're looking at not only the research, as I'll be
getting into a little bit later, but again, the patient values and the clinical experience of the clinician are
important. The strategy is also how we incorporate this high-quality research on a more regular basis,
a lot like what Ernest Codman was talking about over 100 years ago. There's also a need for
some skills; there are strategies for how to incorporate the clinical literature into a given case.
And you also always want to be raising the bar: not only do you want to be always learning, but
hopefully, research is always progressing. We're also hopefully going to have the
advantage of systematic reviews, meta-analyses, and other critical analyses that help us along the process
here, and hopefully progress our medicine, so that we're not just doing things like bloodletting that are
probably less than effective. So to pound the point in a little bit more, why are we using evidence-
based medicine? Well, we're looking for improved outcomes, we're looking for cost-effectiveness. And then
again, as I may not have made clear, we're interested in diagnostic methods, prognosis for given
conditions, and treatment.
So I may blur the boundaries between these, but just keep in mind that they're all part of evidence-
based medicine, and all targets for improving what we do and how we do it. So how do we get there?
Dawes summarized this in 2005 as the five steps of evidence-based medicine. So one is translating
uncertainty into an answerable question. An example of that would be, what are the odds of
success with one treatment versus another? What is the survivorship, meaning how long can we expect
somebody to survive with a given condition?
So a lot of that gets back to the critical questions that we ask, and looking at the literature in terms of
the study design and how confident we are in the results. As I mentioned a few minutes ago, there is a
strategy for making sure you get the best evidence available, using those tools, and then critically
appraising the evidence. So you're looking at the internal validity, meaning, in terms of the study itself,
is there a strong association between cause and effect based on the research
design? Is it clinically relevant? As an example, I've done some research where we came up with
statistically significant results, but clinically, the values that we were looking at
were within the reference range, so it really wasn't of much value and wasn't of much
applicability. You're also looking for different errors, such as selection bias: are you selecting only people from a
particular economic group or racial group? And is there a large enough effect size that we can feel confident
in the results?
And then, getting back to the validity of the results, there's also external validity, meaning, are the
research findings that we have applicable to the larger population in general? Maybe certain results
are good in one country, but in a different country, with different genetics, culture, diet, and
other cultural practices, maybe things aren't quite so applicable. We always want to be evaluating
performance, critically evaluating how things are going: are we still on the right path, are things still
working? And then again, as part of the evidence-based medicine strategy for clinicians, one of the
recommendations is that they also keep a log of what they're doing and the questions that they're asking, as a
way of quality control and mapping how things are coming along.
So let's get to the types of studies, and this is what I'm going to be emphasizing a bit for this talk. As
I've alluded to, and unfortunately my cursor is not coming up, but I'm sure you can look at the arrow and
see, there are studies that you're not going to have a lot of confidence in, and then there are studies
where you'll have a bit more confidence in the evidence. A lot of you may have seen these types of
things, but I'm going to go over it anyway, just to kind of burn it into your brain. The lowest level of
confidence is somebody's editorial or opinion, basically saying, "Hey, I think this works." Until we
have data to support it, it's not going to be one of those things that we incorporate into evidence-based
medicine as having a lot of confidence.
So going up to a little bit more confidence, we can have a case report, kind of like the first scenario that we
talked about, where, "Hey, this patient got better with penicillin." Maybe it works. Better yet is a case series,
where there's a series of individuals that got better on penicillin. Next, there are cross-sectional studies,
where we're looking at a population at one specific time and comparing whatever outcomes
of interest we have. Next, there's case-control, meaning we have individuals that are cases, that
have the condition that we're interested in, and then the controls, those that don't, and we're going to be
trying to match them based on age, where they live, all that sort of thing.
The next level of confidence is cohort studies. Cohort studies are more of a longitudinal study, following
populations over time and looking at cohorts that have something in common, whether it be the
condition, the age, or whatever confounding factors may be of interest. And then we have randomized
clinical trials, which everybody knows are kind of what we're shooting for a lot of times. Ideally, the patient
and those involved in the trial are blinded, so it's a double-blind study; that gives us the most confidence
in a controlled setting.
But what if we have different controlled trials that have come to different conclusions and have different
results? That's where a systematic review comes into play, a little bit of
what I was talking about with Cochrane, where there's a method to the madness that's consistent
for evaluating these studies, hopefully drawing some good conclusions and working out exactly
what it is we have with the results that are available. Or, even better yet, if we can combine
multiple studies' results to have a much larger sample size in a meta-analysis, then that's a way we can
potentially have even more confidence in the results. Now, nothing's 100%, of course, but again, we've
gone from the lowest to the highest level of confidence with these different types of studies.
So why is this important? What do we do with it? Well, then we can go ahead and classify the
confidence that we have in study results by categories, such as level one, where there's at least one
high-quality randomized clinical trial that you can hang your hat on for your results. Then there's level
two, and there are several sub-levels within there. There you have controlled trials with some
randomization. The next level down is where we have cohort or case-control studies, ideally with more than one center
involved. One other case for that level is if we have really dramatic results in uncontrolled trials;
those would fall in that category. Then there's another sub-level where there are multiple time-series studies. And
then level three, which is opinion. So ideally, if you're a patient, or if you're a clinician making a decision,
hopefully you're working with evidence that gives you level-one confidence in your data there.
So let's talk a little bit more about systematic reviews. You need to have some input into those
systematic reviews. One, as I've kind of alluded to, you have observational studies as one potential
input, but they tend to have a high risk of bias, just because you're not necessarily understanding all the
factors that are involved with that particular study. And then, of course, as I mentioned earlier, there are the
randomized clinical trials, which have a lower risk of bias but aren't completely without bias.
So what Cochrane has done is set up a system for increasing the rigor. They've got various
methods for assessing different studies as part of the systematic reviews, and they also have some
software that helps walk you through the process if you choose to use it. The process there is that
when a group or a panel of professionals looks at different studies, what they do is subjectively
assess the risk of bias as low, high, or uncertain. And, you know, that is a judgment call, of course. And then,
of course, there are different sources of bias that we'll be talking about a little bit later here.
Some of them are allocation: is everybody randomly allocated? Or are you comparing different
populations, maybe different incomes, different geographic areas, different cultures, things of that
nature? Again, as I talked about earlier, blinding is good: ideally, both the participants and the
researchers, when they make their assessments of what they think the results are, whether from
the researcher's, the clinician's, or the patient's viewpoint, aren't biased by whether they
think they got the treatment, which is the placebo effect that maybe some of you have heard of. And there may be
many other risks of bias as well.
And then, as I think I alluded to earlier, one of the nice things about Cochrane is that they also provide a way
of doing meta-analysis, so you can use statistics to get a little bit better feel for the confidence that
you can have in the results of that analysis. As I mentioned before, GRADE was originally developed
independently of Cochrane, but then was incorporated as a part of assessing studies. So, going into a little
bit more detail on what we were just talking about: even though this is a
subjective process, and there really isn't a good strategy, at least that I've seen, where you can just look
at a study and come up with a number that's very objective and reproducible to classify your confidence
in a study, the goal here is that it's very transparent, and again, it's a way of being upfront about what is being done.
Now, there are different ways of modifying the rating, based on the study designs that we talked about earlier, in
terms of increasing the confidence that you may have in that study design. And this reference here
is something you can find on the web from the British Medical Journal. Anyway, you can increase the
rating for a given study, even though it may be something like a cohort study that's maybe not
rated quite as high as a randomized clinical trial. If you're able to demonstrate a dose
response, a really huge effect, something like that, GRADE allows for that and allows you to increase
your confidence in the results. And then there are also criteria for downgrading studies as well.
One is if there's a lot of range, if there's not a real precise estimate. As an example, in the
studies, what you're going to see are the standard deviation, the coefficient of variation, the range, things like
that, that tell you there's a lot of variation. So if you're making a judgement for a given test, or a given
treatment, the outcome is going to be a lot more iffy: there are going to be some people that do really
well, or where you'll have a lot of confidence in the test results, and others, not so much.
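As a rough illustration of how that imprecision shows up in the numbers, here is a small Python sketch that computes the standard deviation, coefficient of variation, range, and an approximate confidence interval for two hypothetical sets of measurements; the values are made up, and the point is simply that a wider spread translates into a less precise estimate.

```python
import statistics

# Hypothetical outcome measurements from two studies of the same treatment.
# The numbers are made up; the point is how spread shows up in the summaries
# you see reported: standard deviation, coefficient of variation, range.
tight_study = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
noisy_study = [6.0, 14.5, 9.0, 12.5, 7.5, 10.5]

for label, values in [("tight", tight_study), ("noisy", noisy_study)]:
    mean = statistics.mean(values)
    sd = statistics.stdev(values)               # sample standard deviation
    cv = sd / mean * 100                        # coefficient of variation, %
    spread = max(values) - min(values)          # range
    # Wider spread -> wider confidence interval -> less precise estimate,
    # which is what gets counted against a body of evidence as imprecision.
    half_ci = 1.96 * sd / len(values) ** 0.5    # rough 95% CI half-width
    print(f"{label}: mean={mean:.1f}, SD={sd:.2f}, CV={cv:.0f}%, "
          f"range={spread:.1f}, 95% CI +/-{half_ci:.2f}")
```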
Indirectness is another one of their categories. The way I look at this is that a lot of times we study
what we can rather than what we really need to. So as an example, let's say we're concerned about how
effective a medication is for preventing bone fractures in the elderly. Let's say we don't have a lot of
time, we don't have a lot of money; maybe what we do is use bone density as a proxy for the real
question, which is the risk of fracture over a period of time. The risk of fracture over
five or 10 years is probably a better study for addressing that.
Another example in today's day and age: if you were to compare last August with all the Augusts before
that for the last 10 years, and you looked at all-cause mortality and it was increased, is that a
reasonable proxy for the effects of COVID-19? Or do we need to maybe look at test results, which are a
little bit quicker to obtain in some ways, where the trade-off is that maybe we're not as
confident in a lot of these tests that have been rushed out to market pretty quickly?
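For what that all-cause-mortality comparison looks like arithmetically, here is a back-of-the-envelope Python sketch; the death counts are hypothetical placeholders, and the calculation is just the excess over a ten-year August baseline, not an actual COVID-19 estimate.

```python
# Back-of-the-envelope version of the all-cause-mortality proxy described
# above. All of the counts are hypothetical placeholders, not real data.

august_deaths_2010_2019 = [8100, 8250, 8000, 8300, 8150,
                           8200, 8050, 8400, 8350, 8200]
august_deaths_2020 = 9600

baseline = sum(august_deaths_2010_2019) / len(august_deaths_2010_2019)
excess = august_deaths_2020 - baseline
excess_pct = 100 * excess / baseline

# The excess over the historical baseline is the crude proxy; attributing it
# to COVID-19 (versus deferred care, etc.) is where the indirectness comes in.
print(f"Baseline August deaths: {baseline:.0f}")
print(f"Excess in August 2020: {excess:.0f} ({excess_pct:.1f}% above baseline)")
```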
Inconsistency, as I mentioned a little bit earlier: if there's a lot of variability in a study or across studies,
that gives you less confidence in the results, and maybe you're going to think a little bit more about
what path you're going to go down in terms of addressing a given condition. Then there's publication bias,
and probably the biggest thing there is non-publication. Because traditionally, what gets published
is significant results. But if you have a non-significant result, that can potentially be of value as well, in
terms of having a feel for how much confidence you can have in a given treatment or diagnostic test,
and those may not make it into the press. As a consequence, it's harder to find information and
have a more unbiased assessment of what path you want to go down.
So what GRADE does is classify things as high quality evidence, which basically means there's a
low probability that future research is going to change your conclusions; moderate quality
evidence, where maybe you're not quite as confident in the results, and maybe you're going to change
your opinion at some point; low quality evidence, where you really don't have a lot of
confidence and it's very likely that new results are going to change your opinion; and very low quality
evidence as the last category, where it's probably not much better than a case report, and it may or
may not be of any value, and may not influence what decisions you make.
So as I told you, I really got into the Cochrane systematic approach, but there are other methods. The
US Preventive Services Task Force also came up with a way of classifying research. They have
level A, in which basically there is good evidence that the benefits outweigh the risks if you're going down a
given path; there's B, where the evidence is maybe not quite so strong and you're going to think about it
maybe a little bit more.
There's level C, where the balance between the benefits and the risks is pretty close. So there,
particularly, you're going to be pulling in a lot of the ancillary evidence to decide how to go down a path.
And then there's level D, where the risk outweighs the benefit. As an example of that, I remember
there have been periods of time when medicine has been imported from overseas, and maybe the
benefits were questionable, and maybe there were a lot of contaminants, such as mercury or
other things, so even if there was some benefit, maybe the risks far outweighed those benefits. And then
there's also level I, where the evidence is kind of uncertain in terms of how you should proceed with
a given clinical strategy.
So as I mentioned earlier, evidence-based medicine is not the holy grail; there are some criticisms. One
is that the demand for evidence-based medicine far exceeds the supply. The amount of funding and
other resources needed to conduct these sorts of studies is limited, and the number of
conditions where we'd really like to have these evidence-based medicine strategies far exceeds where
we can actually go ahead and apply them, so you have to be selective, or you have to have other reasons why
you want to, or are able to, apply these resources to a given question.
Randomized trials may not always be of value, as I mentioned; there can be different ethnic groups,
different cultural, socio-economic, and environmental factors. So for people like me that have a strong
interest in epidemiology, we kind of look at the real world as our metric for how well some of these
things work. But that's not to discount randomized clinical trials; they are the starting point for a lot of these
things. As I mentioned earlier, there's publication bias, both because of what is published and what
is not, and then there's also the question of who's supporting these studies and the potential conflicts of interest there
because of industry.
There's a lag time; as most of you know, it's usually at least a year or two before research is published.
So there's that year or two where you have a certain finding but you're not able to apply it in the real world.
That's been sped up a little bit with COVID, but the trade-off there is that some of the stuff that's coming
out is probably not going to stand up to proof over time, and will go by the wayside as more
information comes in.
Patient values: that's always a really tough one, at least in my mind, to incorporate, because you never
know what somebody's personal values are, what their history is, what other family members have gone
through, friends, all those sorts of things that influence what a patient chooses in terms of what's best for
their particular situation.
And then there's hypocognition, where basically there's not a framework for incorporating new
information. My favorite example for that is, many of you have probably heard of mad cow disease from a
decade ago, Creutzfeldt-Jakob disease, scrapie, and many other diseases where, for a long time, we weren't
sure what the source was. But then the prion hypothesis came up, which is basically that it was an
infectious protein, not a bacteria, not a fungus, not a virus, not the usual things that we think of as
infective particles, but a protein. And I'm drawing a blank on who won the Nobel Prize for that. But
basically, we didn't have the framework to think of proteins as being infectious, so that's an example of
hypocognition.
So in summary, again, even though I emphasized study design, because that's something that
each of you has some capacity to use, either for yourself or in other settings, particularly if you're a
clinician, there is the issue of clinical judgment, which, as we talked about with the medical school
training, we're always trying to train, but it's always hard to make the perfect clinician that can
handle every situation and assess all the information correctly. And again, patient values are an important
part of the equation.
Updating systematic reviews and various guidelines, continuing to do the research, and keeping up
to date are all important. And then also continuing to use strategies in a clinical setting to
question what you're doing and try to improve, and hopefully have better outcomes as time goes along.
So I think we're reaching the end here, and if there are any questions, hopefully, people haven't slept
through there; I have no way of seeing whether people are awake or anything. We look forward to
hearing from you.
So if you have any questions for Dr. Miller, please feel free to put them in the Q&A box or in the chat.
But I guess just to start us off: how do you balance the science, the clinical judgment, and then the
patient values?
The answer is going to be very situational; it's going to depend on the patient and the physician and
their particular interactions. As I mentioned a few seconds ago, it depends on the training, how
committed the clinician is to taking in new research and applying these evidence-based approaches, and
also how much the clinician is open to changing. So I guess one of the examples I can think of for that:
I read studies from the '90s where chlorhexidine was, for a number of reasons and for many situations,
the preferred antiseptic for skin for many uses. And then just a couple of years ago, I heard an ear
physician giving a talk where he was begging people to switch from the iodine products to chlorhexidine
for most uses, and again, maybe beating the drum a little too hard. But the key is really
understanding the personal values, what's important in a given situation, and being able to get the
patient involved and sharing what's important to them, what their biases and concerns are.
Is there a difference between how industry and clinicians use evidence-based medicine?
That's an interesting question, and it hopefully brings kind of a more real-world twist to kind of a
theoretical topic. I guess for clinicians, I'd get back to the five steps of evidence-based medicine slide,
where you're looking at answerable questions, using the tools that you have to get the literature,
evaluating it, applying it, and then critically evaluating whether it's working, and really having that commitment
to continuing education and applying these tools. From an industry standpoint, for those of you that maybe
are interested in developing various tools, maybe keep these concepts in mind when you're
developing tools for clinicians. And then, in terms of the products that you're developing, whether it be a
medicine, a diagnostic test, or something else: do the quality research, have some transparency in terms of
presenting the outcomes and how you got to those results, and then effectively
communicate those results as well.
So how can someone learn more about the Cochrane Collaboration?
They have a website under the Cochrane Collaboration name. As I recall, there are also other websites;
even Wikipedia, when I looked at it a few years ago, had a nice summary. So depending on the questions that you're
interested in, you can do a web search for evidence-based medicine in general, going a little beyond
that question, and that's a way of also learning about different opportunities for training and continuing
education. I'll also put in a plug: I'll be giving a talk in a few weeks on clinical epidemiology that has
some relevance to evidence-based medicine as well, so make sure you sign up for that.
I'm not seeing any other incoming questions, but do you want to give any last final remarks or advice to
attendees?
I just want to thank everybody for their interest. Hopefully, there are still people out there. I wish
everybody the best during these tough times and hope this presentation was of some value to you.