Thursday, 19 September 2019

Comment on: “Incidence, Severity, Aetiology and Prevention of Sports Injuries: A Review of Concepts”

Resistance Priming to Enhance Neuromuscular Performance in Sport: Evidence, Potential Mechanisms and Directions for Future Research

Abstract

Recent scientific evidence supports the use of a low-volume strength–power ‘resistance priming’ session prior to sporting competition in an effort to enhance neuromuscular performance. Though research evidence relating to this strategy is presently limited, it has been shown to be effective in improving various measures of neuromuscular performance within 48 h. Post-activation potentiation strategies have previously been shown to enhance strength–power performance within 20 min of completing maximal or near-maximal resistance exercise. Comparably, a delayed potentiation effect has been demonstrated following ‘resistance priming’ at various times between 1 and 48 h in upper- and lower-body performance measures. This may have significant implications for a range of athletes when preparing for competition. Various exercise protocols have been shown to improve upper- and lower-body neuromuscular performance measures in this period. In particular, high-intensity resistance exercise through high loading (≥ 85% 1 repetition maximum) or ballistic exercise at lower loads appears to be an effective stimulus for this strategy. Although current research has identified the benefits of resistance priming to some physical qualities, many questions remain over the application of this type of session, as well as the effects that it may have on a range of specific sporting activities. The aims of this brief review are to assess the current literature examining the acute effects (1–48 h) of resistance exercise on neuromuscular performance and discuss potential mechanisms of action as well as provide directions for future research.

Test–Retest Reliability of the Yo-Yo Test: A Systematic Review

Abstract

Background

The Yo-Yo test is widely used both in the practical and research contexts; however, its true test–retest reliability remains unclear.

Objective

The present systematic review aims to identify studies that have examined the test–retest reliability of the Yo-Yo test and summarize their results.

Methods

A search of ten databases was performed to find studies that have investigated test–retest reliability of any variant of the Yo-Yo test. The COSMIN checklist was employed to assess the methodological quality of the included studies.

Results

Nineteen studies of excellent or moderate methodological quality were included. Across all variants of the Yo-Yo test, the included studies reported intra-class correlation coefficients for test–retest reliability ranging from 0.78 to 0.98; 62% of all intra-class correlation coefficients were higher than 0.90, and 97% were higher than 0.80. The coefficients of variation ranged from 3.7 to 19.0%. Regardless of the test variant, the participants’ familiarization with the test, and previous sport experience, the intra-class correlation coefficients were generally high (≥ 0.90) and the coefficients of variation low (< 10%).
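The coefficient of variation quoted for test–retest data is commonly derived from the typical error of measurement (the standard deviation of the paired difference scores divided by √2, expressed relative to the grand mean). A minimal sketch of that standard approach; the function name and the trial data are illustrative, not taken from the review:

```python
from math import sqrt
from statistics import mean, stdev

def cv_percent(trial1, trial2):
    """Test-retest coefficient of variation (%) via the typical error:
    SD of the difference scores / sqrt(2), relative to the grand mean."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    typical_error = stdev(diffs) / sqrt(2)
    grand_mean = mean(trial1 + trial2)
    return typical_error / grand_mean * 100

# Hypothetical Yo-Yo distances (m) for three athletes tested twice:
cv = cv_percent([1000, 1200, 1400], [1040, 1160, 1440])  # roughly 2.7%
```

A CV below 10%, as reported for most included studies, indicates that the typical trial-to-trial variation is small relative to the mean score.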

Conclusions

The results of this review indicate that the Yo-Yo test (in all its variants) generally has good-to-excellent test–retest reliability. The evidence concerning reliability arises from 19 included studies that were of moderate or high methodological quality. Considering that most of the included studies examined the Yo-Yo intermittent recovery level 1 test while including Association Football players, more reliability studies examining Yo-Yo intermittent recovery level 2 test and Yo-Yo intermittent endurance level 1 and level 2 tests, and in the context of sports other than Association Football as well as in non-athletic populations, are required. Finally, future studies should explicitly state the type of intra-class correlation coefficient used for the reliability data analysis to allow for better between-study comparisons.

The Impact of Different Types of Exercise Training on Peripheral Blood Brain-Derived Neurotrophic Factor Concentrations in Older Adults: A Meta-Analysis

Abstract

Background

As the prevalence of neurodegenerative diseases (such as dementia) continues to increase with population aging, it is essential to understand the role of exercise in maintaining/improving brain health.

Objectives

To analyse the impact of aerobic, strength and combined aerobic/strength exercise training on peripheral brain-derived neurotrophic factor (BDNF) concentrations in older adults (minimum age 60 years).

Methods

This meta-analysis adhered to PRISMA guidelines. Inclusion criteria were: (i) studies with subjects aged ≥ 60 years, (ii) completing a single exercise bout or an exercise programme, with (iii) measurements of blood BDNF in the periphery; (iv) with comparison between (a) an intervention and control group or (b) two intervention groups, or (c) pre- and post-measurements of an exercise intervention without control group. Studies with specific interest in known chronic co-morbidities or brain diseases affecting the peripheral and/or central nervous system, except for dementia, were excluded.

Results

In general, peripheral blood BDNF concentrations increased significantly after a single aerobic/strength exercise bout (Z = 2.21, P = 0.03) as well as after an exercise programme (Z = 4.72, P < 0.001). However, when comparing the different types of exercise within these programmes, the increase in the peripheral BDNF concentrations was significant after strength training (Z = 2.94, P = 0.003) and combined aerobic/strength training (Z = 3.03, P = 0.002) but not after (low-to-moderate intense) aerobic exercise training (Z = 0.82, P = 0.41).
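The Z statistics and p values quoted above follow the standard two-sided normal-test relationship; a quick check in Python (the helper name is ours, not from the meta-analysis):

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic Z."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Reproduces the figures reported in the abstract:
# two_sided_p(2.21) ~= 0.03, two_sided_p(2.94) ~= 0.003,
# two_sided_p(3.03) ~= 0.002, two_sided_p(0.82) ~= 0.41
```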

Conclusions

Based on current evidence, strength training and combined aerobic/strength training are effective for increasing peripheral blood BDNF concentrations in older adults. More studies are needed to examine the impact of aerobic exercise training.

Motor Competence Levels and Developmental Delay in Early Childhood: A Multicenter Cross-Sectional Study Conducted in the USA

Abstract

Background and Objectives

Developmental delay in motor competence may limit a child’s ability to successfully participate in structured and informal learning/social opportunities that are critical to holistic development. Current motor competence levels in the USA are relatively unknown. The purposes of this study were to explore motor competence levels of US children aged 3–6 years, report percentages of children demonstrating developmental delay, and investigate both within and across childcare site predictors of motor competence, including sex, race, geographic region, socioeconomic status, and body mass index percentile classification. Potential implications from results could lead to a greater awareness of the number of children with developmental delay, the impetus for evidence-based interventions, and the creation of consistent qualification standards for all children so that those who need services are not missed.

Methods

Participants included children (N = 580, 296 girls) aged 3–6 years (Mage = 4.97, standard deviation = 0.75) from a multi-state sample. Motor competence was assessed using the Test of Gross Motor Development, Second Edition and the 25th and 5th percentiles were identified as developmental delay-related cutoffs.

Results

For both Test of Gross Motor Development, Second Edition subscales, approximately 77% of the entire sample qualified as at risk for developmental delay (≤ 25th percentile), while 30% of the entire sample scored at or below the 5th percentile. All groups (e.g., sex, race, socioeconomic status) were prone to developmental delay. Raw object control scores differed by sex.

Conclusions

Developmental delay in motor competence is an emerging epidemic that needs to be systematically acknowledged and addressed in the USA. By shifting norms based upon current data, there may be a lower standard of “typical development” that may have profound effects on factors that support long-term health.

Use of Social or Behavioral Theories in Exercise-Related Injury Prevention Program Research: A Systematic Review

Abstract

Background

The use of social or behavioral theories within exercise-related injury prevention program (ERIPP) research may lead to a better understanding of why adherence to the programs is low and inform the development of interventions to improve program adherence. There is a need to determine which theories have been used within the literature and at what level theory was used to further the field.

Objective

To determine which social or behavioral science theories have been incorporated within ERIPP research and assess the level at which the theories were used. The key question guiding the search was “What social or behavioral theories have been used within ERIPP research?”

Methods

A systematic review of the literature was completed with an appraisal of bias risk using a custom critical appraisal tool. An electronic search of EBSCOhost (Academic Search Complete, CINAHL, Medline, Psychology and Behavioral Sciences Collection) and PubMed was completed from inception to October 2018. Studies investigating attitudes towards ERIPP participation with the use of a social or behavioral theoretical model or framework were eligible for inclusion.

Results

The electronic search returned 7482 results and two articles were identified through a hand search, which resulted in ten articles meeting inclusion criteria. Four different behavioral or social theoretical models or frameworks were identified, including the health action process approach model, health belief model, self-determination theory, and theory of planned behavior. Six studies utilized the theory at the B level, meaning a theoretical construct was measured, while four utilized the theory at the C level, meaning the theory was tested. The mean critical appraisal score was 78%, indicating a majority of the studies were of higher quality.

Conclusion

There has been an increase in the use of theory within literature that is specific to ERIPP participation. Additionally, the use of theory has shifted from guiding program design to the measurement of theoretical constructs and testing of the theoretical models.

Match and Training Injuries in Women’s Rugby Union: A Systematic Review of Published Studies

Abstract

Background

There is a paucity of studies reporting on women’s injuries in rugby union.

Objective

The aim of this systematic review was to describe the injury epidemiology for women’s rugby-15s and rugby-7s match and training environments.

Methods

Systematic keyword searches of the PubMed, SPORTDiscus, Web of Science Core Collection, Scopus, CINAHL (EBSCO) and ScienceDirect databases were performed.

Results

Ten articles addressing the incidence of injury in women’s rugby union players were retrieved and included. The pooled incidence of injuries in women’s rugby-15s was 19.6 (95% CI 17.7–21.7) per 1000 match-hours (h). Injuries in women’s rugby-15s varied from 3.6 (95% CI 2.5–5.3) per 1000 playing-h (including training and games) to 37.5 (95% CI 26.5–48.5) per 1000 match-h. Women’s rugby-7s had a pooled injury incidence of 62.5 (95% CI 54.7–70.4) per 1000 player-h and the injury incidence varied from 46.3 (95% CI 38.7–55.4) per 1000 match-h to 95.4 (95% CI 79.9–113.9) per 1000 match-h. The tackle was the most commonly reported injury cause with the ball carrier recording more injuries at the collegiate [5.5 (95% CI 4.5–6.8) vs. 3.5 (95% CI 2.7–4.6) per 1000 player-game-h; χ2(1) = 6.7; p = 0.0095], and Women’s Rugby World Cup (WRWC) [2006: 14.5 (95% CI 8.9–23.7) vs. 10.9 (95% CI 6.2–19.2) per 1000 match-h; χ2(1) = 0.6; p = 0.4497; 2010: 11.8 (95% CI 6.9–20.4) vs. 1.8 (95% CI 0.5–7.3) per 1000 match-h; χ2(1) = 8.1; p = 0.0045] levels of participation. Concussions and sprains/strains were the most commonly reported injuries at the collegiate level of participation.
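Rates such as "19.6 (95% CI 17.7–21.7) per 1000 match-hours" are conventionally computed as the injury count divided by exposure hours, with a confidence interval from the log-transformed Poisson rate. A minimal sketch under that standard assumption; the function name and example counts are hypothetical, not figures from the review:

```python
from math import exp, sqrt

def incidence_per_1000h(injuries, exposure_hours):
    """Injury incidence per 1000 player-hours with an approximate 95% CI
    from the log-transformed Poisson rate (SE of log(rate) ~ 1/sqrt(count))."""
    rate = injuries / exposure_hours * 1000
    half_width = 1.96 / sqrt(injuries)
    return rate, rate * exp(-half_width), rate * exp(half_width)

# Hypothetical season: 50 injuries recorded over 2500 match-hours
rate, lower, upper = incidence_per_1000h(50, 2500)  # rate = 20.0 per 1000 h
```

Because the interval width scales with 1/√(injury count), the wide intervals reported for the smaller rugby-7s datasets follow directly from their lower injury counts.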

Discussion

Women’s rugby-7s had a higher un-pooled injury incidence than women’s rugby-15s based on rugby-specific surveys and hospitalisation data. The incidence of injury in women’s rugby-15s and rugby-7s was lower than in men’s professional rugby-15s and rugby-7s competitions but similar to that of male youth rugby-15s players. Differences in reporting methodologies limited comparison of results.

Conclusion

Women’s rugby-7s resulted in a higher injury incidence than women’s rugby-15s. The head/face was the most commonly reported injury site. The tackle was the most common cause of injury in both rugby-7s and rugby-15s at all levels. Future studies are warranted on injuries in women’s rugby-15s and rugby-7s.

PROSPERO registration number

CRD42018109054 (last updated on 17 January 2019).

Should Competitive Bodybuilders Ingest More Protein than Current Evidence-Based Recommendations?

Abstract

Bodybuilding is an aesthetic sport whereby competitors aspire to achieve a combination of high levels of muscularity combined with low levels of body fat. Protein is an important macronutrient for promoting muscle growth, and meeting daily needs is necessary to optimize the accretion of lean mass. Current recommendations for muscle hypertrophy suggest a relative protein intake ranging from 1.4 g/kg/day up to 2.0 g/kg/day is required for those involved in resistance training. However, research indicates that the actual ingestion of protein in competitive bodybuilders is usually greater than advocated in guidelines. The purpose of this current opinion article is to critically evaluate the evidence on whether higher intakes of protein are warranted in competitive bodybuilders. We conclude that competitive bodybuilders may benefit from consuming a higher protein intake than what is generally prescribed for recreationally trained lifters; however, the paucity of direct research in this population makes it difficult to draw strong conclusions on the topic.

Frequency and Magnitude of Game-Related Head Impacts in Male Contact Sports Athletes: A Systematic Review and Meta-Analysis

Abstract

Background

Sensor devices have enabled estimations of head impact kinematics across contact sports.

Objectives

To quantitatively report the magnitude (linear and rotational acceleration) and frequency of game-related head impacts recorded in male contact sports athletes.

Methods

A systematic review was conducted in June 2017. Inclusion criteria were English-language in vivo studies published after 1990 with a study population of male athletes aged ≥ 16 years, in any sport, where athletes were instrumented with an accelerometer device for measuring head impacts. Study populations were not limited to players with a clinical diagnosis of concussion.

Results

Twenty-one studies met the inclusion criteria with 12 conducted on American Football athletes. Six of these studies were included for meta-analysis. At a threshold of 10g, amateur rugby players sustained the most impacts per player per game (mean = 77, SD = 42), followed by amateur Australian Football (mean = 29, SD = 37) and collegiate lacrosse athletes (mean = 11.5, SD = 3.6). At thresholds of greater than 14.4g, high school American Football athletes sustained between 19 (SD = 19.1) and 24.4 (SD = 22.4) impacts per player per game. Statistically significant heterogeneity was observed among the included studies, and meta-analysis of impact magnitude was limited.

Conclusions

The frequency of “head acceleration events” was quantified and demonstrated substantial variation in methodology and reporting of results. Future research with standardised reporting of head impacts and inclusion of non-helmeted sports is warranted to enable more robust comparisons across sports.

PROSPERO ID

CRD42017070065.
