The aim of this study was to compare performance in the Yo-Yo IR1, 20-meter sprint, change-of-direction (COD) test, and loaded and unloaded lower-limb muscle power tests (squat jump [SJ], countermovement jump [CMJ], and jump squat [JS] tests), as well as resting and post-exercise heart rate variability parameters, in high-level senior professional and under-20 (U-20) futsal players.
All players (18 senior and 15 U-20 male players) performed the Yo-Yo Intermittent Recovery Test level 1 (Yo-Yo IR1), 20-m sprint, COD test, and loaded and unloaded lower-limb power tests (SJ, CMJ, and JS tests), and underwent resting and post-exercise recordings of the log-transformed root-mean-square of successive differences between normal RR intervals (lnRMSSD). The t-test for independent samples and magnitude-based inference were used to compare the groups.
Senior players were likely to very likely superior to U-20 players in the Yo-Yo IR1 (1506.7±287.1 vs. 1264.0±397.9 m, P<0.05), resting lnRMSSD (3.43±0.32 vs. 3.21±0.37 ms), and post-exercise lnRMSSD (2.95±0.39 vs. 2.48±0.59 ms, P<0.05). Conversely, U-20 players performed very likely to almost certainly better than seniors in relative mean propulsive power (10.39±1.60 vs. 9.05±1.57 W/kg, P<0.05), 20-m sprint time (2.92±0.10 vs. 3.05±0.10 s, P<0.05), and COD time (5.50±0.15 vs. 5.71±0.22 s, P<0.05).
Findings from this cross-sectional study indicate that long-term exposure to futsal may lead to improvements in aerobic fitness and cardiac autonomic regulation, while impairing the muscle power and speed performance of players. Future longitudinal studies are necessary to confirm the occurrence of such concurrent training adaptations.
Building robust deep learning-based models requires large quantities of diverse training data. In this study, we investigate the use of federated learning (FL) to build medical imaging classification models in a real-world collaborative setting. Seven clinical institutions from across the world joined this FL effort to train a model for breast density classification based on the Breast Imaging Reporting and Data System (BI-RADS). We show that despite substantial differences among the datasets from all sites (mammography system, class distribution, and dataset size) and without centralizing data, we can successfully train AI models in federation. The results show that models trained using FL perform on average 6.3% better than their counterparts trained on an institution's local data alone. Furthermore, we show a 45.8% relative improvement in the models' generalizability when evaluated on the other participating sites' testing data.
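The federated training described above can be illustrated with a minimal sketch. The abstract does not state which aggregation algorithm was used, so federated averaging (FedAvg) is assumed here as a common choice; the synthetic "sites", the least-squares linear model standing in for the real classifier, and all hyperparameters are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one site's local data
    (least-squares regression as a stand-in for the real classifier)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, sites):
    """One communication round: each site trains on its own data,
    then the server averages the models weighted by local dataset size.
    Raw data never leaves a site; only model parameters are shared."""
    n_total = sum(len(y) for _, y in sites)
    w_new = np.zeros_like(w_global)
    for X, y in sites:
        w_local = local_update(w_global, X, y)
        w_new += (len(y) / n_total) * w_local
    return w_new

# Synthetic sites with different sizes, mimicking the dataset
# heterogeneity across institutions described in the abstract.
w_true = np.array([2.0, -1.0])
sites = []
for n in (50, 120, 80):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(20):          # communication rounds
    w = fedavg_round(w, sites)
```

After enough rounds, the globally averaged model approaches the solution that pooled training would find, without any site ever exchanging its data, which is the property the study exploits across the seven institutions.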