We live in an amazing information era! People all over the world have been empowered by social media and the ability it gives them to share thoughts and opinions. This has afforded many people in the fitness, nutrition and wellness industry an amazing marketing opportunity. The unfortunate truth, however, is that people without adequate experience and education are calling themselves experts. I realize the financial opportunities this type of communication allows, but do these individuals have an ethical code?
I don’t believe people set out with the intention to mislead or take advantage of others. I think they are just trying to provide some value to the world and feel good about themselves! This is human nature—we want to feel intelligent, valued and like we mean something to the world. The question we have to ask ourselves is—at what cost? What am I willing to give up in order to feel good about myself and maybe make some money at the same time?

There is exponential growth in the online fitness, wellness and nutrition industry. Many people have figured out that if they look the part, they can have a voice. What usually follows is a quick and ill-prepared online search about a topic to present to the world. Many self-proclaimed experts and influencers do not have adequate education and training to truly understand the information they are reading. This lack of understanding leads to a “read and repeat” scenario where bad information becomes accepted on a large scale.
I want to be clear—this is not meant to dishearten all the people who, at their core, really do care about others. However, as a healthcare provider, I have to pick up the pieces when people go down the wrong path, following the poor advice they find online. Health and wellness, fitness and nutrition are areas that have a large impact on people’s lives. I will share a recent experience I had with a patient to illustrate just how important this is.

Less than a month ago, a young and active person came to my clinic for low back pain. She and her husband had spent nearly their entire savings trying to fix her low back pain. Over $80,000! They had been sold multiple online programs, and these programs included oxygen and ozone treatment, vitamins and minerals and expensive ointments—all treatments with NO EVIDENCE for treating low back pain. They also had several different online low back fitness experts and nutrition experts guiding them in the wrong direction and racking up their bill. She was given some research articles—mostly surveys and testimonials—to support many of these treatments as an evidence-based approach. If only my patient knew the research given to her was garbage, maybe she could have saved herself a lot of money and lost time.

I know any healthcare provider reading this has countless stories and examples of when bad information has had a negative impact on a person’s life. I imagine this isn’t specific to providers and that many people share a concern for this large-scale problem. The purpose of this article is to answer a massive pain point—the general public has a hard time understanding research and how to detect B.S.! How do you know if what you are hearing has any merit? Does it take a PhD to understand how to read and practically apply the research being thrown at you? What follows is a brief, easy-to-understand and practical blueprint for people wishing to expand their knowledge and confidence in reading and interpreting research. If you don’t have the time or passion to take courses on research methods and statistics but also don’t want to throw in the towel and believe everything you hear, this article is for you! If you aren’t a big reader and don’t want to sit through all the details, there is a quick checklist at the end, and please feel free to use this article as a reference in the future.

So—why does research matter? The purpose of research is to better the world with good information! Research can help with the following:
- Build knowledge and facilitate learning opportunities
- Guide business strategy
- Provide a foundation for truth and reasoning
- Give a platform for analyzing and sharing valuable information
- Provide nourishment for the mind
- Improve understanding of complex issues
- Raise public awareness
- Improve practice patterns
Okay—let’s break this down. I know I will lose some people here, but in order to understand research, you MUST understand basic research design. Hold on! Don’t leave yet, because I will not try to take you back to college! What I am providing is a VERY brief synopsis of research design. In less than five minutes you will have a good guide for understanding how to rank studies, as well as whether cause and effect can be established. Translation: whether what’s being stated is actually shown by the article being quoted!
This picture provides a nice hierarchy of different studies. A layman’s view of this pyramid is that good studies are at the top and bad studies are at the bottom—FALSE!
What this pyramid illustrates is a hierarchy of evidence related to a study’s ability to draw firm conclusions about topics. Randomized controlled trials (RCTs) and systematic reviews allow us to draw such conclusions, while surveys, expert opinions and many other research designs cannot. Let’s go over an example. Say a fitness blogger makes a comment like, “Research has shown you will burn more fat with high intensity interval training.” Let’s take an even bigger leap and say they reference an article titled, “HIIT training burns more fat according to fitness experts.” When you track down the article, you find out it’s a survey of 4 fitness experts. By applying the hierarchy of evidence, you immediately know a survey cannot show a cause-and-effect relationship. The comment may have had some merit if the blogger had provided a high-quality RCT comparing HIIT exercise to moderate-intensity cardiovascular exercise.
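If it helps to see that logic spelled out, here is a minimal sketch in Python. It simply restates the hierarchy described above; the design names, numeric rankings and example calls are purely illustrative, not a formal grading tool.

```python
# Illustrative only: rank common study designs by their ability to support
# a cause-and-effect claim, following the hierarchy of evidence described above.
EVIDENCE_RANK = {
    "expert opinion": 1,
    "case report": 2,
    "case series": 3,
    "survey": 3,
    "cohort study": 4,
    "randomized controlled trial": 5,
    "systematic review": 6,
}

def can_support_causal_claim(design: str) -> bool:
    """Per the hierarchy above, only RCTs and systematic reviews of them qualify."""
    return EVIDENCE_RANK.get(design.lower(), 0) >= 5

# The HIIT example: a survey of 4 fitness experts cannot back a causal claim.
print(can_support_causal_claim("survey"))                       # False
print(can_support_causal_claim("randomized controlled trial"))  # True
```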
Now that we have a baseline with this pyramid, we need to know what all these different studies are! Below I have provided the definitions for different study designs. For each one, the first definition is a little longer and more formal, and the translation that follows is my attempt to make the information easier to understand. I am doing this to maintain the attention of two very different types of readers! Feel free to skim through this and refer back to it in the future!

Editorials and expert opinion
An author’s personalized report and interpretation of their findings, without a process of discriminating facts.
Translation- one person’s view and opinion of what they found.
Case reports
A detailed report of the signs, symptoms, diagnosis, treatment, and follow-up of an individual. Case reports may contain demographic information about the patient, but usually describe an unusual or novel occurrence. Some case reports also contain a research review of other reported cases.
Translation
- A professionally written report that dives into a specific condition or a specific individual
- Provides feedback on current practice guidelines
- Offers a framework for early signals of things like effectiveness, adverse events and cost
- Cannot determine a cause-and-effect relationship
Case Series
A type of medical research study that tracks subjects with a known exposure, such as patients who have received a similar treatment, or examines their medical records for exposure and outcome.
Translation
- Similar to case reports, but with many people involved
- Mostly about an intervention being given, rather than about the people receiving it.
- A next step after a case report to “bump up the study quality”
- No cause-and-effect relationship established, and the results may not apply to other people.
Cohort studies
A particular form of longitudinal study that samples a cohort (typically people who experienced a common event in a selected period, such as birth or graduation), performing cross-sectional evaluations at intervals through time.
Translation
- “Cohort” is just a group of people with similar characteristics
- Looks at one or more groups, follows them over a period of time and performs evaluations for a disease or outcome (watches how the group is doing).
- Used to determine if the similarities of the group (risk factors, birth date, living area, etc.) are associated with a disease or outcome.
- No cause-and-effect relationship established.
Randomized controlled trial (RCT)

An RCT is a type of experiment which aims to reduce bias when testing a new treatment. The people participating in the trial are randomly allocated either to the group receiving the treatment (treatment group) or to the group receiving standard or placebo treatment (control group). Randomization minimizes selection bias, and the different comparison groups allow the researchers to determine any effects of the treatment when compared with the control group, while other variables are kept constant. The RCT is often considered the gold standard for a clinical trial. RCTs are often used to test the effectiveness of various types of medical treatments and may provide information about adverse effects.
Translation
- Randomly assigns participants into an experimental group or a control group.
- The only expected difference between the control and experimental groups in an RCT is the variable being studied (a drug or type of exercise, for example)
- Able to establish a cause-and-effect relationship
- Considered the highest level of evidence for an individual trial
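For readers who like to see an idea made concrete, the short sketch below shows what random allocation looks like. It is purely illustrative; the participant labels and the 50/50 split are made up, and real trials use more careful allocation procedures.

```python
import random

# Minimal illustration of random allocation in an RCT: every participant has an
# equal chance of landing in either group, which is what guards against
# selection bias. The participant labels here are made up.
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)            # randomize the order
half = len(participants) // 2
treatment_group = participants[:half]   # receives the intervention
control_group = participants[half:]     # receives placebo or standard care

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```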

I think we should pause here for a minute and discuss the bias I just created. I have basically set up all RCTs as being gospel, but there is a very important qualifier here: not all RCTs are created equal! I don’t want people thinking that RCT equates with truth. Let’s discuss how to determine a study’s quality! This is the ultimate “are you really a nerd?” test!
I’ll kick this off by looking at validity: the extent to which a measurement, concept or conclusion is well-founded and likely to apply to the real world. When reading a research study, it is important to understand internal and external validity.
Internal validity refers to how well the study design was controlled so the results can be attributed to the variable being tested and not some other cause. In other words, it establishes a strong cause-and-effect relationship. This should be strong in an RCT, but that doesn’t mean it always is!
For example, let’s suppose you ran an experiment to see if people lost weight with a gluten-free diet. You would have to control for a number of variables to account for other things that might cause weight loss (other changes in diet, disease, age, exercise, etc.). In other words, because you accounted for the confounding variables that might affect your data, your experiment has higher internal validity. Below is a list of threats to a study’s internal validity. This is not an exhaustive list!
- Size of subject population- if the sample size is too small, it may not accurately represent a larger group of people
- Subject variability- are all the people in the study a similar age, sex, gender and race?
- Study attrition- how many people drop out of the study
- Maturation- any biological or psychological changes that occur within subjects during the study; these changes may account in part or in total for the effects seen in the study. If 4 of 8 people in the study come down with the flu and we are studying an antiviral agent, this will affect the outcome.
- Instrumentation- the extent to which the instrument is accurate in its measurement
- Blinding- when information about the research is masked from the participant and/or the researcher until after a trial outcome is known. This is an attempt to avoid or eliminate bias—with the understanding that bias may be intentional or subconscious. If only the participants are blinded, it is called a single-blind study. If the participants and the study staff are blinded, it is called a double-blind study. Triple-blinded studies also extend blinding to the data analysts. A trial in which no blinding is used and all parties are aware of the treatment groups is called open label or unblinded.
External validity refers to the degree to which the results of an investigation can be generalized to and across different individuals, settings and times. Factors that affect external validity are listed below. Again, this is not an exhaustive list!
- Population characteristics- how representative is the sample of the population? Are we studying 8 y/o children and applying the results to 42 y/o adults?
- The effect of the research environment- a lab is quite different from the real world!
- Researcher or experimenter effects- the way the researcher performed the study can affect the outcome. If the study is about strength training and the testers got a one-rep-max from the participants, how much did the tester motivate the participants to push as hard as they could? Is this documented?
- The effect of time- changes that occur naturally over time need to be accounted for. If a study is looking at arthritis and the subjects were studied over a period of 20 years, did the authors account for arthritic changes that naturally occurred over time?
- Fidelity- how well the intervention (treatment, etc.) was delivered in the study. If we are looking at the effects of hamstring stretching, was the stretch performed in a way consistent with best-practice methods? If people performed the stretch for 2 sets of 2-second holds, we wouldn’t expect this to do much.
Systematic review
Systematic reviews are a type of research review that uses systematic methods to collect secondary data, critically appraise research studies, and synthesize these studies. Systematic reviews formulate research questions that are broad or narrow and identify and synthesize studies that directly relate to the systematic review’s question. They are designed to provide a complete, exhaustive summary of current evidence relevant to a research question. Systematic reviews of RCTs are crucial for any evidence-based practice.
Translation-
- Highest level of evidence
- May review RCTs or other trials, like prospective studies
- A clearly stated set of objectives with predefined eligibility criteria for studies
- An assessment of the validity of the findings of the included studies (for example, assessing the level of bias in a group of studies)
- Assumes authors have systematically searched for, appraised, and summarized all of the available studies for a specific topic.
- May include a meta-analysis: the use of statistical methods to summarize the results of the studies included in the review.

Congratulations on getting through that review of research study validity! You are now on the downhill slope! Just a few more questions to ask in order to assess research quality.
Was the research peer reviewed?
Peer-reviewed research studies have already been evaluated by experienced researchers with relevant expertise. Peer-reviewed research is usually of high quality, but unfortunately, this is not always the case. Every person reading research needs to be aware of predatory journals! Although there is no agreed-upon definition, a good one has been provided by Shamseer et al., who state that predatory journals “actively solicit manuscripts and charge publication fees without providing robust peer review and editorial services.” In other words, the research isn’t thoroughly reviewed and may be GARBAGE! Here is a reference for lists of known predatory journals:
https://predatoryjournals.com/journals/
So, we mentioned predatory journals, but how do you assess the quality of the journals that aren’t deemed predatory? Impact Factor is the most widely recognized method for attempting to gauge a journal’s rank or importance. The impact factor is based on two figures: the number of citations to a given journal over the previous two years (A) and the number of research articles and review articles published by that journal over the same two-year period (B), so: A/B = Impact Factor. To be clear, there are certainly quality articles that come from journals with lower impact factors, but this does help provide a general guide, and using the impact factor should be done with care. A quick worked example with made-up numbers follows the link below. If anyone wants more information on this topic, it can be found here:
https://libguides.bc.edu/c.php?g=44457&p=281381#13039110
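Here is that worked example as a tiny Python snippet; the citation and article counts are hypothetical, not taken from any real journal.

```python
# Hypothetical numbers, purely to show the arithmetic of A / B = Impact Factor.
citations_to_journal = 500    # A: citations over the previous two years
articles_and_reviews = 200    # B: research and review articles over the same period

impact_factor = citations_to_journal / articles_and_reviews
print(impact_factor)          # 2.5
```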
Can a study’s quality be evaluated with the information provided?
Every study should include a description of the methods. This describes things like the population of interest, how the people were recruited, definitions of key concepts and variables, how things were measured and potential biases. If you are not given any information about a study’s methods, you really can’t interpret the quality of the results.
Now—if I wasn’t able to get you to read through all the details and you skipped to the end, I still want you to learn something here! Below you will find a checklist you can use as a reference when you are given research or told research has shown something.

There you have it. Anyone trained in research methods will note there is a lot more information that could be added here; however, I have tried to keep it as short as I can, while still providing a guide people will actually use and refer to. You shouldn’t have to dedicate weeks or months to improving your B.S. radar and getting a few tips on how to practically interpret research. This article is meant to empower you—every person trying to make sense of all the information thrown at them. If you found value in this article and know someone who could use a guide like this, please share it. Remember, the purpose of research is to better the world with good information!