Journal CiteScores 2023: citation metrics from the Scopus database
Will G Hopkins, Internet Society for Sport Science, Auckland, New Zealand. Email.

Download a workbook of the current year of CiteScores from Elsevier's Scopus site for journals in sport and exercise medicine and science. Please email me with any journal titles I have missed, and I will update the workbook.

This year Elsevier did not provide a complete spreadsheet of citation metrics that I could filter for journals in our disciplines. Instead, I figured out that I could select one or more "subject areas" to retrieve the statistics in a browser, then download the resulting limited set of journals and metrics. Unfortunately, sport and exercise science are not listed as subject areas, so I used the following: Orthopedics And Sports Medicine; Physiology; Neuropsychology And Physiological Psychology; Applied Psychology; Psychology (Miscellaneous); and Physical Therapy, Sports Therapy And Rehabilitation. I then filtered out what I considered to be irrelevant titles.

The In-brief item for 2021 provides an explanation of the CiteScore and a comparison with the traditional impact factor. Elsevier has provided several other potentially useful citation metrics in the spreadsheet (the SNIP and the SJR), although you will see that they are highly correlated with the CiteScore. I have included Elsevier's definitions of these metrics on a tab in the workbook. Here are the top 10 journals based on the CiteScore:
The Future of this Site: a further call for expressions of interest
Will G Hopkins, Internet Society for Sport Science, Auckland, New Zealand. Email.

I am still interested in working part-time for an institution for another year or two, in some reasonably informal capacity, especially if there is an individual or group within the institution who could take over the Sportscience site. If no-one comes forward, this edition of Sportscience will be the last. I will keep the site active while download royalties pay for the site hosting and for the sportsci.org and newstats.org URLs. Possible new developments at the site include paying for DOIs for the most important articles, extending the content to exercise generally, and adding resources for machine learning and artificial intelligence. If interested, please email me.

A Spreadsheet for Technical Error and Biological Variability using two devices
Will G Hopkins, Basilio Pueo; Internet Society for Sport Science, Auckland, New Zealand; University of Alicante, Spain. Email. Sportscience 27, ii-iii, 2024 (sportsci.org/2023/inbrief.htm#techbio). Published April 2024. Updated October 2024.

A technical error could be substantial when considered on its own, but when it is combined with biological variability, its contribution to the typical error will be smaller and could even be negligible. The spreadsheets therefore now include estimation and assessment of the magnitude of the increase in the typical error due to the technical error (the typical error minus the biological variability). Whether the magnitude thresholds for this increase should be the same as the usual thresholds for standard deviations (half those for means) is an open question.

In a straightforward reliability study, subjects are measured on two or more occasions, and the changes within subjects are analyzed as the most important measure of reliability, a standard deviation known as the typical or standard error of measurement (Hopkins, 2000). This SD consists of random biological variability, which each subject exhibits every time they are measured, combined with random technical error, which the measuring device adds to every measurement. Estimating the technical error separately from biological variability would allow assessment of device reliability independent of subject variability, which inevitably differs between types of subjects (young adults, athletes, the elderly, and so on). Estimating the technical error would also allow it to be removed from the between-subject SD used to standardize differences and changes between means, as described in an article/slideshow in this issue (Hopkins & Rowlands, 2024).

For some kinds of measurement, such as the concentration of a biomarker in blood samples, you can estimate the technical error separately by splitting the samples and analyzing the splits as if they were test and retest measurements. The resulting error of measurement is the coefficient of variation you often see in the Methods section of studies using such biomarkers. This approach is not directly applicable to measurements of performance or other human behaviors, because ostensibly you can't split the behavior. But there is a sense in which you can: simply measure the behavior simultaneously on each occasion with two units of the device! They can even be two different devices. The idea is that the usual test-retest error of measurement for each device considered separately provides an estimate of biological variability plus the device's technical error, whereas the device-to-device error of measurement on the first or second testing occasion estimates the combination of the two technical errors, with no contribution from biological variability. From these estimates of standard errors of measurement, you can solve for standard deviations representing biological variability and either two technical errors (in the case of two different devices) or one technical error (in the case of two units of the same device); the variance algebra is sketched below.

We adopted this approach to determine the technical errors of different methods for measuring jump height (Pueo et al., 2017), including use of videos at different frame rates (Pueo et al., 2023). We used mixed modeling to do the analyses, but mixed modeling is still a challenge for sports practitioners who cannot access a statistics expert. We have therefore implemented the analysis with a spreadsheet.
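To make the variance algebra concrete, here is a minimal sketch in Python, assuming the errors combine as independent variances. The numerical values are hypothetical, and the device-to-device term is taken here as the SD of the difference scores between the two devices measured simultaneously on one occasion; this is an illustration of the idea, not the spreadsheet itself.

```python
import math

# Hypothetical observed errors, in the units of measurement:
# e_A, e_B = test-retest typical errors for devices A and B; each reflects
#            biological variability combined with that device's technical error.
# d_AB     = SD of the difference scores between devices A and B measured
#            simultaneously on one occasion; reflects the two technical errors
#            only, with no contribution from biological variability.
e_A, e_B, d_AB = 1.9, 2.1, 1.2

# Independent random errors add as variances:
#   e_A^2  = bio^2 + t_A^2
#   e_B^2  = bio^2 + t_B^2
#   d_AB^2 = t_A^2 + t_B^2
# Three equations, three unknowns; solve for the variances and take square roots:
t_A = math.sqrt((d_AB**2 + e_A**2 - e_B**2) / 2)   # technical error, device A
t_B = math.sqrt((d_AB**2 + e_B**2 - e_A**2) / 2)   # technical error, device B
bio = math.sqrt((e_A**2 + e_B**2 - d_AB**2) / 2)   # biological variability

print(f"t_A = {t_A:.2f}, t_B = {t_B:.2f}, bio = {bio:.2f}")
```

Note that with real data, sampling variation can make one of the bracketed terms negative, in which case the corresponding variance estimate is negative and the simple square root fails; this is one reason proper estimation of the sampling uncertainty (described in the next item) matters.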
It was a simple matter to estimate all the relevant standard deviations: technical error(s), biological variability, and true differences between subjects (the differences free of technical error and also free of biological variability, if required). Deriving the sampling uncertainty in the estimates, expressed as confidence limits and probabilistic assertions about the true values, was much harder. Mixed models provide the uncertainties, but we could not devise equations for use in the spreadsheet. We therefore resorted to bootstrapping, by adapting a spreadsheet introducing the concept that is linked in an article at this site. We also wrote a program in the Statistical Analysis System (SAS) to replicate the spreadsheet analysis and to analyze the same data with a mixed model. The analyses were repeated thousands of times on data simulated with known true values of the various SDs. By comparing the mean results of the analyses with the true SDs, we were able to derive factors to correct small-sample bias. It was also necessary to correct the width of the 90% confidence intervals provided by the bootstrapping, so that the intervals included the true values in 90% of simulations.

Download the workbook (18 MB), which consists of two spreadsheets: one for analysis of raw data and one for log-transformed data. They are designed for two devices and two testing occasions; for more devices or testing occasions, you will have to do pairwise analyses or a full analysis with a mixed model. The article on validity and reliability at this site also has a link to access the workbook. This research will be presented at the annual meeting of the European College of Sport Science in Glasgow this July. A full article is in preparation.
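For readers who want the flavor of the bootstrap step, here is a bare-bones percentile-bootstrap sketch in Python with simulated data. It is not the spreadsheet or our SAS program: it omits the small-sample bias factors and the confidence-interval width corrections described above, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-device, two-occasion reliability study (all values hypothetical).
n = 20                                  # subjects
true = rng.normal(50, 5, n)             # true between-subject values
bio = rng.normal(0, 1.5, (n, 2))        # biological variability, occasions 1 and 2
t_A, t_B = 0.8, 1.2                     # true technical errors of devices A and B
a = true[:, None] + bio + rng.normal(0, t_A, (n, 2))   # device A, both occasions
b = true[:, None] + bio + rng.normal(0, t_B, (n, 2))   # device B, both occasions

def derived_sds(a, b):
    # Typical-error variances from test-retest change scores: var(change)/2.
    eA2 = np.var(a[:, 1] - a[:, 0], ddof=1) / 2
    eB2 = np.var(b[:, 1] - b[:, 0], ddof=1) / 2
    # Device-to-device difference variance (t_A^2 + t_B^2), averaged over occasions.
    dAB2 = (np.var(a[:, 0] - b[:, 0], ddof=1) + np.var(a[:, 1] - b[:, 1], ddof=1)) / 2
    tA2, tB2 = (dAB2 + eA2 - eB2) / 2, (dAB2 + eB2 - eA2) / 2
    bio2 = (eA2 + eB2 - dAB2) / 2
    # Sampling variation can make a variance estimate negative; keep the sign.
    v = np.array([tA2, tB2, bio2])
    return np.sign(v) * np.sqrt(np.abs(v))

# Percentile bootstrap: resample subjects with replacement and re-derive the SDs.
boot = np.array([derived_sds(a[idx], b[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
lo, hi = np.percentile(boot, [5, 95], axis=0)
print("estimates (t_A, t_B, bio):", np.round(derived_sds(a, b), 2))
print("90% lower limits:", np.round(lo, 2))
print("90% upper limits:", np.round(hi, 2))
```

———–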