Article Appraisal 1


Critical appraisal checklist for Randomized Controlled Trials (RCT)

Adapted from the JBI website (https://joannabriggs.org/), Critical Appraisal Tools: https://joannabriggs.org/critical-appraisal-tools

Article Title: Marín-Pagán C, Blazevich AJ, Chung LH, Romero-Arenas S, Freitas TT, Alcaraz PE.
Acute Physiological Responses to High-Intensity Resistance Circuit Training vs. Traditional
Strength Training in Soccer Players. Biology (Basel). 2020;9(11):383. Published 2020 Nov 7.
doi:10.3390/biology9110383
Reviewer: Erin Brink

Yes No Unclear NA
1. Was true randomization used for assignment of participants to treatment groups? x
2. Was allocation to treatment groups concealed? x
3. Were treatment groups similar at the baseline? x
4. Were participants blind to treatment assignment? x
5. Were those delivering treatment blind to treatment assignment? x
6. Were outcomes assessors blind to treatment assignment? x
7. Were treatment groups treated identically other than the intervention of interest? x
8. Was follow up complete and if not, were differences between groups in terms of their follow up adequately described and analyzed? x
9. Were participants analyzed in the groups to which they were randomized? x
10. Were outcomes measured in the same way for treatment groups? x
11. Were the instruments used to measure outcomes reliable and valid? x
12. Was appropriate statistical analysis used? x
13. Was the trial design appropriate, and any deviations from the standard RCT design (individual randomization, parallel groups) accounted for in the conduct and analysis of the trial? x

Overall credibility of the article results per your assessment, on a scale of 0-10, where 0 = "I don't trust the results, as the intervention study outcomes are questionable" and 10 = "I will definitely use the results of the study in planning interventions for my patients":
8 out of 10. I do trust the results of the study, but I could trust them even more with a larger participant pool.

© JBI, 2020. All rights reserved. JBI grants use of these tools for research purposes only. All other enquiries
should be sent to jbisynthesis@adelaide.edu.au.
Explanation of the critical appraisal of a Randomized Controlled Trial (RCT)

Brief and structured summary of the article in the form of an abstract


The question this study set out to answer is whether high-intensity resistance circuit training or traditional strength training produces better acute physiological responses in athletes, specifically soccer players. The point of the study is to determine whether high-intensity resistance circuits are more beneficial to overall performance than the conventional strength training players already do during training sessions. According to the authors, strength training does not bring the same benefits as other types of training throughout a season, so they wanted to see whether high-intensity circuit training alone could make up for those gaps. The study used a randomized cross-over design in which every participant completed the same protocol. Each participant came to the lab for four separate training sessions: one to determine loads and become familiar with the lab, one focused on maximal oxygen consumption, and two resistance training protocols (one high-intensity circuit and one traditional strength session). Results across all participants were compared with paired t-tests. Oxygen consumption during the high-intensity circuit was 75% higher than during strength training, heart rate was 39% higher, and blood lactate levels were also higher in the high-intensity condition. In addition, the respiratory exchange ratio was almost 7% higher and energy cost was about 66% higher in the high-intensity condition compared with strength training. In conclusion, this study showed that high-intensity resistance circuit training is a more time-efficient training strategy that produces greater cardiorespiratory and metabolic responses in athletes, specifically soccer players.

1. Was true randomization used for assignment of participants to treatment groups?

The subjects of this study were 10 volunteers who had more than 3 years of soccer experience. They were not randomly chosen. However, the order of treatments was completely random, and subjects were unaware of which treatment test they would be doing during the 3rd and 4th sessions. The type of randomization was not mentioned in the article.
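For illustration only, the short Python sketch below shows one way the order of the two protocols in a cross-over design like this could be randomized per participant. The participant labels, seed, and protocol names are hypothetical and are not taken from the article.

    # Hypothetical sketch: randomizing the order of the two training protocols
    # for each participant in a cross-over design (not the authors' actual procedure).
    import random

    protocols = ["high-intensity circuit", "traditional strength"]
    participants = [f"P{i:02d}" for i in range(1, 11)]  # 10 volunteers, as in the study

    random.seed(42)  # fixed seed only so the example output is reproducible
    schedule = {p: random.sample(protocols, k=2) for p in participants}

    for p, order in schedule.items():
        print(p, "-> session 3:", order[0], "| session 4:", order[1])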

2. Was allocation to groups concealed?

Within this study, the participants were kept from knowing which treatment they would be doing in which session; they found out only when they arrived at each training session. In addition, not all participants followed the same session order, which helped keep the results accurate.

3. Were treatment groups similar at the baseline?

All subjects had more than 3 years of experience playing soccer and were 19-30 years old. Participants also could not have taken any performance-enhancing drugs, could not have had a musculoskeletal injury in the last 6 months, and could not have any ongoing cardiorespiratory issues. Differences could include body type as well as current physical status: if one participant already has higher values in an area being tested, their results might not show as much change as those of someone with lower pre-test values.

4. Were participants masked (or blinded) to treatment assignment?

The participants were aware of the four different tests but did not know when they would be doing each one. They all started with the same first two baseline sessions but had no idea which protocol they would be doing in sessions 3 and 4.

5. Were those delivering treatment blind to treatment assignment?

Those delivering the treatment were not blind to the treatment assignment. They needed to know which session they were administering ahead of time so they could run the correct protocol and have the participants complete the right tests to collect the data for the final results. This helped them avoid mistakes in the measurements that could lead to inaccurate results.

6. Were outcomes assessors blind to treatment assignment?

The outcome assessors were the same people who delivered the treatment. Because of this, they needed to be aware of the assignments so that they could administer the correct tests and collect accurate results. If they had been unaware, they most likely would not have been able to apply the correct testing parameters.

7. Were treatment groups treated identically other than the intervention of interest?

Each group was given the exact same setup. Every participant went through the two baseline sessions, and no matter what order they completed their final two sessions in, they were given the same tests. Regardless of playing position, all participants performed the same sets and reps; the only difference was the load, which was determined by their baseline testing in session one.

8. Was follow up complete and if not, were differences between groups in terms of their follow
up adequately described and analyzed?

It is unclear whether any follow-up was completed. Reading through the whole article, I could not find information about any follow-up. The authors do mention that some measurements were taken 20 minutes after each test to capture part of the final results on how the session did or did not work. Because follow-up is not specifically discussed, it is too unclear to assume whether it was or was not done.

9. Were participants analyzed in the groups to which they were randomized?

Since participants were not split into separate groups, they were analyzed individually and, at the same time, as a whole. During each session, the individual was analyzed right then; after all tests were completed, the ten participants were analyzed together and the results were compared and recorded.

10. Were outcomes measured in the same way for treatment groups?

All participants were measured using the same equipment, and, from my understanding, the same tester ran all of the tests for each athlete. Because all participants completed the same tests so that specific aspects of their physical condition could be compared, each participant used the same equipment, which kept the measurements consistent across all of the tests. All participants used the same treadmill, monitors, and face masks, which were examined before testing to make sure they would produce accurate results.

11. Were the instruments used to measure outcomes reliable and valid?

Since machines were used to measure each outcome the study designers wanted to test, the results coming from those tests were largely reliable. However, it is unclear whether the testers were consistent in every detail with every participant; even slight variability could affect the validity, and potentially the reliability, of the outcomes. As mentioned above, each instrument was checked before use to make sure it was valid and reliable.

12. Was appropriate statistical analysis used?

The researchers used statistical software to analyze all data collected during testing. They report using means and standard deviations to describe the physical characteristics of the participants. Normality and homogeneity of variance were checked with two tests, the Kolmogorov-Smirnov and Levene tests. Differences between protocols were analyzed with paired t-tests, except for the EPOC data, which were analyzed with a repeated-measures ANOVA. Statistical significance was set at p < 0.001. Given this description of the statistical analysis, it would be considered appropriate for comparing the data they collected.
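As a rough illustration of this analysis workflow, the sketch below uses Python (SciPy and statsmodels) to run a Kolmogorov-Smirnov normality check, a Levene test, a paired t-test, and a repeated-measures ANOVA. The reviewer does not name the software the researchers used, and every variable name and number here is made up for illustration; nothing below is the study's data.

    # Hypothetical illustration of the statistical workflow described above
    # (made-up numbers, not the study's data).
    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    n = 10  # ten participants, as in the study

    # Paired measurements of one outcome (e.g., oxygen consumption) under each protocol.
    vo2_circuit = rng.normal(35, 4, n)    # high-intensity resistance circuit
    vo2_strength = rng.normal(20, 3, n)   # traditional strength training

    # Normality (Kolmogorov-Smirnov on the standardized paired differences)
    # and homogeneity of variance (Levene).
    diff = vo2_circuit - vo2_strength
    print(stats.kstest((diff - diff.mean()) / diff.std(ddof=1), "norm"))
    print(stats.levene(vo2_circuit, vo2_strength))

    # Paired t-test comparing the two protocols within the same participants.
    print(stats.ttest_rel(vo2_circuit, vo2_strength))

    # Repeated-measures ANOVA, e.g., for EPOC measured at several time points.
    long_df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), 3),
        "timepoint": np.tile(["t0", "t10", "t20"], n),
        "epoc": rng.normal(10, 2, n * 3),
    })
    print(AnovaRM(long_df, depvar="epoc", subject="subject", within=["timepoint"]).fit())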

13. Was the trial design appropriate for the topic, and any deviations from the standard RCT design accounted for in the conduct and analysis?

I would say that this design was appropriate for the topic. Aside from the cross-over format described above, there were, to my knowledge, no deviations from the standard design, except perhaps that there were not many participants and that they were not randomly selected. All results were available and were meaningful and useful for future use. No changes were made to the study design throughout the process.

Additional consideration

The two main authors of this article are Cristian Marín-Pagán and Anthony J. Blazevich. Marín-Pagán earned his PhD in sports sciences at Universidad Católica San Antonio de Murcia and works at the Centro de Investigación en Alto Rendimiento Deportivo. Blazevich works at Edith Cowan University, where he is a faculty member in the School of Medical and Health Sciences. I found this article through PubMed, which led me to MDPI, a publisher with many strong, peer-reviewed journals on similar topics. Looking through these journals, many articles focused on why high-intensity training is good, but I chose this one because it actually compared that type of training with typical sports training. The point of this article was to find out what effects high-intensity training has compared with typical training, and the results showed a strongly positive outcome for overall fitness with high-intensity training.
Why you should or should not use this evidence?

More than likely, I would use this article as evidence. It would not be my main evidence, but it is something I would use, along with other articles, to help strengthen my point. I think the article is lacking in some places, even though the study checks many of the criteria boxes. I would have liked to see more participants and a longer training regimen to better show how this approach would benefit athletes across the board, not just soccer players. I think this study would be useful for a physical therapist going into a sports medicine-based clinic. It makes very good points and the results seem reliable. It could be useful for athletes coming back from an injury who have the goal of returning to the same physical shape as their teammates; a physical therapist could take this kind of training into consideration once the athlete is cleared for high-intensity activities.
