Probiotics have immunomodulatory effects; however, little is known about their potential benefit for the inflammation that follows strenuous exercise. In a double-blind, randomized, placebo-controlled, crossover design separated by a 21-day washout, 15 healthy resistance-trained men ingested an encapsulated probiotic containing Streptococcus (S.) thermophilus FP4 and Bifidobacterium (B.) breve BR03, at a concentration of 5 billion live cells (AFU) each, or a placebo, daily for 3 weeks prior to muscle-damaging exercise (ClinicalTrials.gov NCT02520583). Isometric strength, muscle soreness, range of motion and girth, and blood interleukin-6 (IL-6) and creatine kinase (CK) concentrations were measured from pre- to 72 h post-exercise. Statistical analysis used mixed models and magnitude-based inference applied to the standardized difference. Probiotic supplementation resulted in an overall decrease in circulating IL-6, which was sustained to 48 h post-exercise. In addition, probiotic supplementation likely enhanced isometric average peak torque production at 24 to 72 h into the recovery period following exercise (probiotic–placebo point effect ±90% CI: 24 h, 11% ± 7%; 48 h, 12% ± 18%; 72 h, 8% ± 8%). Probiotics also likely moderately increased resting arm angle at 24 h (2.4% ± 2.0%) and 48 h (1.9% ± 1.9%) following exercise, but effects on soreness, flexed arm angle, and CK were unclear. These data suggest that dietary supplementation with the probiotic strains S. thermophilus FP4 and B. breve BR03 attenuates performance decrements and muscle tension in the days following muscle-damaging exercise.
The purpose of this study was to evaluate the intrasession reliability of countermovement jump (CMJ) and isometric mid-thigh pull (IMTP) force–time characteristics, as well as relationships between CMJ and IMTP metrics. Division I sport and club athletes (n = 112) completed two maximal-effort CMJ and IMTP trials, in that order, on force plates. Relative and absolute reliability were assessed against thresholds of intraclass correlation coefficient (ICC) > 0.80 and coefficient of variation (CV) < 10%. Intrasession reliability was acceptable for the majority of the CMJ force–time metrics, the exceptions being concentric rate of force development (RFD), eccentric impulse and RFD, and lower-limb stiffness. The IMTP's time to peak force, instantaneous force at 150 ms, instantaneous net force, and RFD measures were not reliable. Statistically significant weak-to-moderate relationships (r = 0.20–0.46) existed between allometrically scaled CMJ and IMTP metrics, with the exception of CMJ eccentric mean power, which was not related to IMTP performance. A majority of CMJ and IMTP metrics met acceptable reliability standards, except RFD measures, which should be used with caution. Given that the CMJ and IMTP are indicative of distinct physical fitness capabilities, it is suggested that athlete performance be monitored in both tests via changes in those variables that demonstrate the greatest degree of reliability.
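The reliability thresholds above (ICC > 0.80, CV < 10%) can be computed from a subjects-by-trials matrix. The sketch below uses hypothetical two-trial jump-height data and a standard ICC(3,1) ANOVA decomposition; it illustrates the definitions only and is not the study's actual dataset or analysis software.

```python
import numpy as np

def icc_3_1(X):
    """Two-way mixed-effects, consistency, single-measure ICC(3,1).
    X: (subjects, trials) array of a force-time metric."""
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between trials
    ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
    bms = ss_rows / (n - 1)              # between-subjects mean square
    ems = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

def mean_cv(X):
    """Mean within-subject coefficient of variation (%)."""
    return (X.std(axis=1, ddof=1) / X.mean(axis=1) * 100).mean()

# hypothetical jump heights (cm): 6 athletes x 2 trials
jumps = np.array([[40.1, 39.8], [35.2, 36.0], [44.5, 43.9],
                  [38.0, 38.4], [41.2, 40.6], [33.9, 34.5]])
print(f"ICC(3,1) = {icc_3_1(jumps):.2f}, CV = {mean_cv(jumps):.1f}%")
```

With tight repeated trials relative to the between-athlete spread, this metric would pass both acceptance criteria.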
Commercial off-the-shelf (COTS) wearable devices continue to develop at unprecedented rates. An unfortunate consequence of their rapid commercialization is the lack of independent, third-party accuracy verification for reported physiological metrics of interest, such as heart rate (HR) and heart rate variability (HRV). To address these shortcomings, the present study examined the accuracy of seven COTS devices in assessing resting-state HR and the root mean square of successive differences (rMSSD). Five healthy young adults generated 148 total trials, each of which compared the COTS devices against a validation standard, a multi-lead electrocardiogram (mECG). All devices accurately reported mean HR according to absolute-percent-error summary statistics, although the highest mean absolute percent error (MAPE) was observed for CameraHRV (17.26%); the next-highest MAPE for HR was nearly 15 percentage points lower (HRV4Training, 2.34%). When measuring rMSSD, MAPE was again highest for CameraHRV [112.36%; concordance correlation coefficient (CCC): 0.04], while the lowest MAPEs observed were from HRV4Training (4.10%; CCC: 0.98) and OURA (6.84%; CCC: 0.91). Our findings support the extant literature exposing varying degrees of veracity among COTS devices. To thoroughly address questionable claims from manufacturers, elucidate the accuracy of data parameters, and maximize the real-world applicative value of emerging devices, future research must continually evaluate COTS devices.
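The two statistics reported above, rMSSD and MAPE, are straightforward to compute. The sketch below uses hypothetical RR intervals and HR readings (not the study's data) purely to illustrate both definitions.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))

def mape(device, criterion):
    """Mean absolute percent error of device readings vs. criterion."""
    d = np.asarray(device, dtype=float)
    c = np.asarray(criterion, dtype=float)
    return np.mean(np.abs((d - c) / c)) * 100

# hypothetical RR intervals (ms) from a short resting ECG strip
rr = [812, 798, 845, 830, 822, 840, 805]
print(f"rMSSD = {rmssd(rr):.1f} ms")

# hypothetical device vs. mECG heart rates (bpm)
print(f"HR MAPE = {mape([61, 59, 63], [60, 60, 62]):.2f}%")
```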
Purpose: The commercial market is saturated with technologies that claim to collect proficient, free-living sleep measurements, despite a severe lack of independent third-party evaluations. Therefore, the present study evaluated the accuracy of various commercial sleep technologies during in-home sleeping conditions. Materials and Methods: Data collection spanned 98 separate nights of ad libitum sleep from five healthy adults. Prior to bedtime, participants utilized nine popular sleep devices while concurrently wearing a previously validated electroencephalography (EEG)-based device. Data collected from the commercial devices were extracted for later comparison against EEG to determine degrees of accuracy. Sleep and wake summary outcomes as well as sleep-staging metrics were evaluated, where available, for each device. Results: Total sleep time (TST), total wake time (TWT), and sleep efficiency (SE) were measured with greater accuracy (lower percent errors) and limited bias by the Fitbit Ionic [mean absolute percent error, bias (95% confidence interval); TST: 9.90%, 0.25 (−0.11, 0.61); TWT: 25.64%, −0.17 (−0.28, −0.06); SE: 3.49%, 0.65 (−0.82, 2.12)] and the Oura smart ring [TST: 7.39%, 0.19 (0.04, 0.35); TWT: 36.29%, −0.18 (−0.31, −0.04); SE: 5.42%, 1.66 (0.17, 3.15)], whereas all other devices demonstrated a propensity to over- or underestimate at least one, if not all, of the aforementioned sleep metrics. No commercial sleep technology appeared to accurately quantify sleep stages. Conclusion: Commercial sleep technologies generally displayed lower error and bias values when quantifying sleep/wake states than when quantifying sleep-staging durations. Still, these findings revealed a remarkably high degree of variability in the accuracy of commercial sleep technologies, which further emphasizes that continuous evaluations of newly developed sleep technologies are vital.
End-users may then be able to determine more accurately which sleep device is most suited for their desired application(s).
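The "bias (95% confidence interval)" values reported for each device come from paired device-minus-criterion differences. A minimal sketch with hypothetical total-sleep-time pairs follows; it uses a normal-approximation CI (z = 1.96), whereas a t critical value would be more exact at small sample sizes.

```python
import numpy as np

def bias_ci(device, criterion, z=1.96):
    """Mean bias (device - criterion) with an approximate 95% CI
    via the normal approximation for the mean difference."""
    diff = np.asarray(device, dtype=float) - np.asarray(criterion, dtype=float)
    bias = diff.mean()
    half_width = z * diff.std(ddof=1) / np.sqrt(diff.size)
    return bias, (bias - half_width, bias + half_width)

# hypothetical total-sleep-time pairs (hours): device vs. EEG criterion
dev = [7.2, 6.8, 7.9, 6.5, 7.4]
eeg = [7.0, 6.9, 7.5, 6.6, 7.1]
b, (lo, hi) = bias_ci(dev, eeg)
print(f"bias = {b:.2f} h (95% CI {lo:.2f}, {hi:.2f})")
```

A CI that excludes zero (as for the Oura TST bias above) indicates a systematic over- or underestimate rather than random scatter.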
To address the food assistance crisis during the COVID-19 pandemic, the United States Department of Agriculture (USDA) launched a multi-billion-dollar "Farmers to Families Food Box program" (Box program), working with approved suppliers (or distributors) to purchase fresh produce, dairy, and meat directly from farmers and package them into boxes. Because food banks did not have spare capacity to support the Box program, a key question arose: how should these food boxes be distributed to people in need? The USDA developed a novel solution by asking (a) suppliers to distribute food boxes directly to agencies (shelters, food pantries, and soup kitchens), and (b) food banks to serve as "virtual intermediaries" coordinating supply and demand between suppliers and agencies. However, because food banks were overwhelmed with their regular operations of distributing donated food during the pandemic, the Los Angeles Regional Food Bank (LARFB) found it difficult to develop and deploy a Decision Support System (DSS) to support the Box program with limited manpower and expertise. In this study, we describe a DSS co-developed by LARFB, Salesforce, and UCLA. Unlike other DSSs developed under normal circumstances, the development and deployment of this DSS were conducted virtually within 45 days. Without this DSS, it would have been impossible for LARFB to support the Box program. Because this DSS was developed in record time, we discuss several limitations and suggest future research opportunities for managing food bank operations during a pandemic.
The necessarily high standard for physical readiness in tactical environments is often accompanied by a high incidence of injury due to the overaccumulation of neuromuscular fatigue (NMF). To account for instances of overtraining stimulated by NMF, close monitoring of neuromuscular performance is warranted. Previously validated tests, such as the countermovement jump, are useful means of monitoring performance adaptations, resiliency to fatigue, and risk of injury. Performing such tests on force plates provides an understanding of the movement strategy used to obtain the resulting outcome (e.g., jump height). Further, force plates afford numerous objective tests that are valid and reliable for monitoring upper- and lower-extremity muscular strength and power (and are thus sensitive to NMF) with less fatiguing and safer methods than traditional one-repetition-maximum assessments. Force plates provide numerous software and testing application options that can be applied to military training but, to be effective, require practitioners to have sufficient knowledge of their functions. Therefore, this review aims to explain the functions of force plate testing, describe current best practices for utilizing force plates in military settings, and disseminate protocols for valid and reliable testing to collect key variables that translate to physical performance capacities.
Askow, AT, Merrigan, JJ, Neddo, JM, Oliver, JM, Stone, JD, Jagim, AR, and Jones, MT. Effect of strength on velocity and power during back squat exercise in resistance-trained men and women. J Strength Cond Res 33(1): 1–7, 2019—The purpose was to examine the load-velocity and load-power relationships of the back squat in resistance-trained men (n = 20, 21.3 ± 1.4 years, 183.0 ± 8.0 cm, 82.6 ± 8.0 kg, 11.5 ± 5.0% body fat) and women (n = 18, 20.0 ± 1.0 years, 166.5 ± 6.9 cm, 63.9 ± 7.9 kg, 20.3 ± 5.0% body fat). Body composition testing was performed, followed by determination of the back squat 1-repetition maximum (1RM). After at least 72 hours of recovery, subjects returned to the laboratory and completed 2 repetitions at each of 7 separate loads (30, 40, 50, 60, 70, 80, and 90% 1RM) in random order. During each repetition, peak and average velocity and power were quantified using a commercially available linear position transducer. Men produced higher absolute peak and average power and velocity at all loads, and significant differences remained when power output was normalized to body mass. However, when normalizing to strength, no significant differences were observed between sexes. Furthermore, when subjects were subdivided into strong and weak groups, those above the median 1RM produced higher peak power, but only at loads greater than 60% 1RM. It was concluded that differences between men and women may be a result of strength rather than biological sex. Furthermore, training for maximal strength may be an appropriate method to augment maximal power output in athletes who exhibit low levels of strength.
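Load-velocity relationships like the one examined here are typically summarized by a straight-line fit of movement velocity against relative load. The sketch below fits hypothetical back-squat velocities (not the study's data) with a simple linear regression to recover the two profile parameters often reported in practice.

```python
import numpy as np

# hypothetical mean concentric velocities (m/s) at each %1RM load
loads = np.array([30, 40, 50, 60, 70, 80, 90])                  # %1RM
vel = np.array([1.30, 1.17, 1.05, 0.92, 0.78, 0.65, 0.52])      # m/s

# load-velocity profiles are well described by a straight line
slope, intercept = np.polyfit(loads, vel, 1)
v0 = intercept              # extrapolated velocity at 0% 1RM
l0 = -intercept / slope     # extrapolated load at zero velocity (%1RM)
print(f"v = {slope:.4f}*load + {intercept:.3f}")
print(f"v0 = {v0:.2f} m/s, L0 = {l0:.0f}% 1RM")
```

Fitting one such line per athlete is one common way to individualize velocity targets across the 30–90% 1RM loading spectrum used in this study.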
Comparisons of countermovement jump force-time characteristics among NCAA Division I American football athletes: use of principal component analysis. J Strength Cond Res 36(2): 411–419, 2022—This study aimed to reduce the dimensionality of countermovement jump (CMJ) force-time characteristics and evaluate differences among positional groups (skills, hybrid, linemen, and specialists) within National Collegiate Athletic Association (NCAA) Division I American football. Eighty-two football athletes performed 2 maximal-effort, no-arm-swing CMJs on force plates. The average absolute and relative (e.g., power/body mass) metrics were analyzed using analysis of variance and principal component analysis procedures (p < 0.05). Linemen had the heaviest body mass and produced greater absolute forces than hybrid and skills but had lower propulsive abilities, demonstrated by longer propulsive phase durations and greater eccentric-to-concentric mean force ratios. Skills and hybrid produced the most relative concentric and eccentric forces and power, as well as modified reactive strength indexes (RSImod). Skills (46.7 ± 4.6 cm) achieved the highest jump height compared with hybrid (42.8 ± 5.5 cm), specialists (38.7 ± 4.0 cm), and linemen (34.1 ± 5.3 cm). Four principal components explained 89.5% of the variance in force-time metrics. The dimensions were described as (a) explosive transferability to concentric power (RSImod, concentric power, and eccentric-to-concentric forces), (b) powerful eccentric loading (eccentric power and velocity), (c) countermovement strategy (depth and duration), and (d) jump height and power. The many positional differences in CMJ force-time characteristics may inform strength and conditioning program designs tailored to each position and identify important explanatory metrics to routinely monitor by position.
The overwhelming number of force-time metrics to select from may be reduced using principal component analysis methods, although practitioners should still consider each metric's applicability and reliability.
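The dimensionality-reduction step above (four components explaining 89.5% of the variance) can be sketched with a standard PCA on z-scored metrics. The data matrix below is random placeholder data, not the study's 82-athlete dataset, so the variance figures it prints are illustrative only.

```python
import numpy as np

def pca_explained(X, n_components=4):
    """PCA via SVD on z-scored columns; returns the proportion of total
    variance explained by each of the first n_components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # singular values of the standardized matrix give component variances
    s = np.linalg.svd(Z, full_matrices=False, compute_uv=False)
    var = s ** 2 / (X.shape[0] - 1)
    return var[:n_components] / var.sum()

# placeholder matrix: 82 athletes x 10 CMJ force-time metrics
rng = np.random.default_rng(0)
X = rng.normal(size=(82, 10))
ratios = pca_explained(X)
print("variance explained by first 4 components:", np.round(ratios, 3))
print("cumulative:", round(ratios.sum(), 3))
```

On correlated real-world force-time metrics, the first few components would absorb far more variance than they do on this uncorrelated placeholder data, which is exactly what makes PCA useful for trimming redundant metrics.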