Review article - (2026) 25, 58-83. DOI: https://doi.org/10.52082/jssm.2026.58
The Role of Machine Learning in Talent Identification for Team Sports: A Systematic Review
Qingrong Tang1, Xiufang Wei2, Bo Tan3
Key words: Youth athletes, talent development, predictive modeling, sports analytics, artificial intelligence
Key Points
METHODS
The review was conducted and reported in accordance with PRISMA 2020 recommendations to ensure transparent and reproducible synthesis (Page et al., 2021).
Eligibility criteria
PICO criteria
Studies were considered eligible if they addressed the use of ML methods for TID in team sports. Eligibility was defined using a modified PICO framework as follows:

Population (P): Youth athletes (≤21 years) engaged in organized team sports (e.g., soccer, basketball, rugby, hockey, handball, volleyball, American football, baseball, and other team sports). Studies were eligible regardless of competitive level (grassroots, academy, sub-elite, or elite youth), and no restrictions were imposed on sex. Studies focusing exclusively on adult/professional-only cohorts or on individual sports were excluded. We defined “youth” as athletes ≤21 years to align with established competitive tiers and developmental transition points in team sports. In football and other codes, U21 is the terminal youth category preceding senior squads; research shows that experience and performance at U21 best predict subsequent senior participation compared with earlier youth levels, situating age 21 as the practical boundary of the youth pathway (Herrebrøden and Bjørndal).

Intervention/Exposure (I): Application of ML algorithms (supervised, unsupervised, reinforcement, or hybrid approaches) to support talent identification or selection processes (e.g., prediction of selection vs. deselection, progression to higher competitive levels, role-agnostic player clustering, or position-specific profiling in youth athletes). Studies limited to traditional statistical analyses without ML components were excluded.

Comparators (C): Comparator groups were not mandatory. Where applicable, comparators could include traditional scouting, expert coach assessment, or alternative analytic approaches (e.g., regression, rule-based classification).

Outcomes (O): Eligible studies had to report at least one youth TID-related outcome, such as predictive accuracy of selection, identification of key features contributing to progression, classification of athlete profiles, or algorithmic discrimination of performance tiers within youth cohorts. Studies were excluded if ML was applied exclusively to non-TID outcomes (e.g., injury prediction, workload monitoring, or tactical analysis), if ML was applied only in adult/professional samples, or if results were not disaggregated to allow extraction of youth TID-specific findings.
Study design and setting
All quantitative empirical studies employing ML algorithms for TID were included, regardless of design (cross-sectional, longitudinal, retrospective, or prospective). Proof-of-concept studies, validation studies, and applied analyses in real-world settings were all eligible. Qualitative studies, narrative commentaries, editorials, opinion pieces, and reviews were excluded, though their reference lists were screened for potentially eligible primary studies.
Report characteristics
Only peer-reviewed journal articles were included to ensure methodological rigor. Grey literature, preprints, conference abstracts, theses, and unpublished reports were excluded due to limitations in methodological detail and peer review. Only studies published in English were considered eligible. No restrictions were placed on the year of publication.
Information sources
The literature search was conducted across three major bibliographic databases to ensure coverage of relevant studies: PubMed, Scopus, and the Web of Science Core Collection. No restrictions were applied with respect to publication year, study design, or participant age at the search stage. The final searches of all databases were completed on October 15, 2025. To complement the electronic database searches, the reference lists of all studies meeting the eligibility criteria were manually examined to identify additional articles not retrieved in the initial search. Reference lists of previous systematic and narrative reviews relevant to talent identification, sports analytics, or the application of machine learning in sport were also screened. Furthermore, backward and forward citation searches were conducted using the Web of Science Core Collection for all included studies to capture any additional eligible publications. No study registers, trial registries, organizational repositories, or grey literature sources were searched. Only peer-reviewed journal publications retrieved through the databases and reference list searches were included for screening.
Search strategy
The search strategy was designed to capture all available studies addressing the use of ML for TID in team sports. The strategy combined controlled vocabulary terms and free-text words related to "machine learning," "artificial intelligence," and "talent identification" with sport-specific terms, following iterative piloting and refinement to balance sensitivity and specificity. The conceptual structure of the strategy was based on a modified PICO approach, focusing on the population of team sport athletes and the intervention or exposure of machine learning applications for talent identification outcomes. The following search strategy was employed: ("machine learning" OR "artificial intelligence" OR "deep learning" OR "supervised learning" OR "unsupervised learning" OR "neural network*" OR "support vector machine*" OR "random forest*" OR "gradient boosting" OR "learning algorithms" OR "bayesian logistic regression" OR "random forest" OR "random forests" OR "trees" OR "elastic net" OR "ridge" OR "lasso" OR "boosting" OR "predictive modeling") AND (talent* OR "talent identification" OR "talent detection" OR "talent development" OR "player selection" OR "athlete selection" OR "talent promotion") AND ("team sport*" OR "soccer" OR "football" OR "basketball" OR "rugby" OR "handball" OR "volleyball" OR "hockey" OR "baseball" OR "softball" OR "lacrosse" OR "water polo").
Selection process
All records identified through database searching were imported into a Microsoft Excel spreadsheet, and duplicates were removed prior to screening. Two reviewers independently assessed the eligibility of studies against the predefined inclusion and exclusion criteria, first at the title/abstract stage and then at full-text screening. Disagreements between reviewers were resolved through discussion. The reasons for excluding studies at the full-text stage were documented and reported.
Data collection process
Two reviewers independently extracted data from each study. The extracted information was subsequently compared, and any discrepancies were resolved through discussion. No automation tools or machine learning–based systems were used for data collection. Only information explicitly reported in tables, text, or graphs was included.
Data items
The domain of interest was the performance of machine learning models applied to talent identification in youth team sports. Within this domain, data were extracted on predictive or classification performance metrics reported by each study. These included, where available, overall accuracy, sensitivity (recall), specificity, precision, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and area under the precision–recall curve (AUC-PR). When studies reported multiple metrics, all available values were collected to allow for a comprehensive synthesis.

Other domains included talent-related predictions and classifications such as selection versus deselection, progression to higher competition levels, clustering of players into performance profiles, and position- or role-specific identification. Where studies reported longitudinal prediction outcomes, all time points were collected, and no restrictions were applied to the follow-up period. In cases where results were presented using different analysis strategies (e.g., cross-validation folds, test set performance, external validation), all eligible outcomes were extracted, with priority given to independent test set or external validation results when synthesizing evidence. No changes were made during the review process to the inclusion or definition of outcome domains. All outcome domains compatible with TID were considered equally relevant at the data extraction stage. However, in the interpretation of findings, external validation performance and transparent reporting of prediction quality were considered most critical, as these outcomes are directly aligned with the review’s objectives of evaluating methodological robustness and generalizability.

In addition to outcomes, other variables were extracted from each study to support subgroup analyses and contextual interpretation. Study characteristics included publication year and country of origin. Participant characteristics comprised sample size, sex distribution, age range, competitive context (e.g., grassroots, academy, or elite youth), and where available, indicators of biological maturation. Sport type was also recorded. Data characteristics included the domain of features used (e.g., anthropometric, physical, technical, perceptual–cognitive, psychosocial, or multi-domain) and the methods of data acquisition (e.g., field-based tests, questionnaires, match-derived tracking data). Machine learning–related variables included the class of algorithms applied (e.g., supervised, unsupervised, ensemble, deep learning), model development strategies (e.g., feature selection, dimensionality reduction), training and validation procedures (e.g., cross-validation, independent test set, external validation), and performance metrics reported. Where available, reporting of interpretability approaches (e.g., feature importance, SHapley Additive exPlanations, Local Interpretable Model-agnostic Explanations) was also extracted. When information was missing or unclear, we recorded it as “not reported” without making assumptions.
Study risk of bias assessment
The methodological quality and risk of bias of all included studies were assessed using the Prediction model Risk Of Bias Assessment Tool (PROBAST, version 1.0), which is specifically designed for evaluating studies that develop, validate, or update predictive models (de Jong et al.). The PROBAST framework consists of four domains - participants, predictors, outcome, and analysis (Wolff et al., 2019).

Two reviewers independently performed the risk of bias assessment for each included study. Discrepancies in judgments were resolved through discussion. All judgments were based exclusively on information reported in the published articles. Given the particularities of machine learning research, special attention was given to signaling questions within the analysis domain, including handling of class imbalance, prevention of data leakage, adequacy of validation strategies, and transparency of reporting model performance metrics.
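To illustrate the kind of analysis-domain safeguards these signaling questions probe, the following minimal sketch shows nested cross-validation on synthetic, imbalanced data: hyperparameters are tuned only within inner folds and preprocessing is refit on each training fold, which limits optimistic performance estimates and preprocessing leakage. The data, model choice, and parameter grid are illustrative assumptions, not a reconstruction of any included study.

```python
# Minimal sketch (synthetic, imbalanced data): nested cross-validation as one
# safeguard against the optimism and leakage probed by PROBAST's analysis domain.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced outcome, as is typical of selected vs. deselected youth cohorts.
X, y = make_classification(n_samples=200, n_features=15, weights=[0.8, 0.2],
                           random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Preprocessing lives inside the pipeline, so it is re-fit on each training fold
# (no information from validation folds leaks into scaling or tuning).
model = GridSearchCV(
    make_pipeline(StandardScaler(),
                  RandomForestClassifier(class_weight="balanced", random_state=0)),
    param_grid={"randomforestclassifier__max_depth": [3, 5, None]},
    scoring="roc_auc", cv=inner)

# Outer folds estimate performance; inner folds are used only for tuning.
scores = cross_val_score(model, X, y, scoring="roc_auc", cv=outer)
print("Nested-CV AUC-ROC: %.2f ± %.2f" % (scores.mean(), scores.std()))
```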
Effect measures
For the outcome domain - predictive performance of machine learning models for talent identification in team sports - we extracted and reported all performance metrics provided by the original studies. Given the diversity of machine learning methods and outcome definitions, no single effect measure was imposed a priori. Instead, the following effect measures were prioritized based on their frequency of use and interpretability in predictive modeling research.

For binary classification outcomes (e.g., selected vs. deselected, progressed vs. not progressed), the principal effect measures were overall accuracy, sensitivity (recall), specificity, precision (positive predictive value), F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). Where reported, the area under the precision–recall curve (AUC-PR) was also extracted to account for class imbalance, which is common in talent identification contexts. For multi-class or clustering outcomes (e.g., player profiles, position-specific categories), measures such as overall classification accuracy, macro- and micro-averaged F1-scores, and adjusted Rand index were extracted. For continuous outcomes (e.g., predictive regression of performance scores or advancement probabilities), effect measures included mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²).

Where multiple metrics were presented for the same model, all were recorded, but in synthesis greater emphasis was placed on metrics reflecting generalizability, particularly those derived from independent test sets or external validation cohorts. No thresholds for minimally important differences were defined a priori, as such benchmarks do not currently exist for talent identification in team sports. Instead, results were interpreted with reference to established conventions in machine learning research (e.g., AUC-ROC values of 0.50 indicating no discrimination, 0.70-0.80 acceptable, 0.80-0.90 excellent, and >0.90 outstanding performance) while acknowledging the limitations of applying generic thresholds to heterogeneous sporting contexts. No re-expression of results into alternative effect measures was required, as extracted metrics were analyzed in their originally reported form. The choice to retain multiple performance measures was justified by the heterogeneous reporting practices in the field and by the need to provide a transparent overview of predictive model performance rather than privileging a single effect measure.
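As a concrete reference point for readers less familiar with these measures, the following minimal sketch (hypothetical labels and predicted probabilities, not data from any included study) shows how the binary-classification metrics listed above can be computed; the 0.5 operating point is an illustrative assumption.

```python
# Minimal sketch: computing the binary-classification effect measures used in
# this review from a hypothetical set of test-set predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, average_precision_score,
                             confusion_matrix, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Hypothetical labels (1 = selected/progressed) and model probabilities.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.81, 0.34, 0.12, 0.66, 0.45, 0.72, 0.20, 0.55, 0.58, 0.08])
y_pred = (y_prob >= 0.5).astype(int)               # assumed 0.5 operating point

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))            # recall
print("Specificity:", tn / (tn + fp))
print("Precision  :", precision_score(y_true, y_pred))         # PPV
print("F1-score   :", f1_score(y_true, y_pred))
print("AUC-ROC    :", roc_auc_score(y_true, y_prob))
print("AUC-PR     :", average_precision_score(y_true, y_prob)) # precision-recall
```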
Synthesis methods
Data from included studies were extracted into structured evidence tables designed to enable consistent cross-study comparison. Extraction focused on: (i) study identification details (sport, competitive level, and sample characteristics); (ii) input data domains (e.g., anthropometric, physical, technical, perceptual–cognitive, psychosocial, or multi-domain); (iii) machine learning approach (e.g., supervised classification, regression, ensemble learning, clustering, or deep learning methods); (iv) type of outcome predicted (e.g., selection vs. deselection, progression, positional classification, performance prediction, profiling, or maturation); (v) validation strategy and performance metrics; (vi) interpretability analyses or insights reported by authors; and (vii) main results and conclusions. If studies tested multiple algorithms, results were extracted for each model, though synthesis tables emphasized the best-performing or most interpretable approach. No data transformations, imputations, or re-analyses were performed; where performance metrics or validation details were missing, these were reported as “not reported.”

To facilitate synthesis, studies were grouped according to their primary analytic aim rather than by sport or algorithm. Each table followed a standardized column structure (General Aim, Outcomes Predicted, Key Performance Metrics, Interpretability/Key Insights, and Main Results & Conclusions). To improve clarity, abbreviation glossaries were provided for each table, and narrative overviews were written to introduce and contextualize the included studies.

Given the heterogeneity of sports, data modalities, machine learning methods, and outcome definitions, statistical pooling or meta-analysis was not feasible. Instead, a structured narrative synthesis was undertaken. This narrative integrated the tabular evidence with cross-cutting themes, focusing on: (i) recurring methodological patterns; (ii) relative strengths and limitations of different ML approaches; (iii) the role of interpretability in practical application; and (iv) conceptual insights into how ML has been used in talent identification and development.
RESULTS
Study selection
A total of 228 records were identified through database searches (PubMed, n = 28; Scopus, n = 128; Web of Science, n = 72). After removal of 83 duplicates, 145 records were screened by title and abstract, of which 63 were excluded. The remaining 82 reports were retrieved in full text, with none unretrievable. Following detailed eligibility assessment, 55 reports were excluded, primarily due to population not meeting inclusion criteria (n = 53) or intervention/outcomes not relevant (n = 2). Ultimately, 27 studies fulfilled all criteria and were included in the systematic review.
Study characteristics
Across the 27 studies included in this review, most (n = 13) focused exclusively on football (soccer), reflecting its global prominence in youth talent pathways (Zhao et al.). In terms of data domains, studies frequently combined anthropometric and physical performance measures (Craig and Swinton).
Risk of bias in studies
Across the 27 included studies, the PROBAST assessment identified the analysis domain as the most frequent source of high risk of bias.
Synthesis of studies
DISCUSSION
This systematic review synthesized evidence on the application of ML methods in sport TID and development. Across the included studies, ML was employed for diverse purposes, ranging from predicting selection and performance outcomes to supporting team formation, profiling, maturation assessment, and scouting. The findings highlight the challenges of applying ML in this domain: on one hand, advanced algorithms can capture complex, multidimensional patterns that traditional statistical approaches may overlook; on the other, the heterogeneity of data types, small sample sizes, and lack of external validation continue to limit their translational value. This capacity to model multidimensional structure aligns closely with the ecological dynamics view of talent development, in which performance emerges from interaction-dominant rather than variable-dominant processes. ML’s real strength lies not merely in detecting correlations among isolated predictors but in uncovering higher-order patterns that emerge from the interaction of biological, psychological, and environmental constraints (Reis et al.).
Selection prediction
The synthesis of selection-focused studies demonstrates that ML models can capture important physical, technical, psychological, and socio-cultural factors associated with advancement or deselection in talent pathways. Models such as XGBoost, neural networks, and one-class SVMs achieved moderate to high predictive validity in academy soccer (Jauhiainen et al.).

Nevertheless, these studies highlight important limitations. Predictive accuracies often fell below thresholds typically required for decision-making in practice (e.g., AUC < 0.70; Altmann et al.). Many models also relied heavily on physical test data, which limits interpretability when predicting long-term success within already selected elite groups (Craig and Swinton). Moreover, the dominance of soccer-based studies likely shapes the implicit model priors in this field, since features that are salient in invasion games (e.g., intermittent high-speed running, rapid change of direction, spatial–temporal awareness, and transition behaviors) are overrepresented in training data and outcome labels. As a result, ML models - and the feature-engineering conventions they normalize - may capture sport-specific regularities that do not readily transfer to sports with different task dynamics. This concentration can narrow ecological validity, as the performer–environment couplings and constraint sets underpinning soccer differ from those governing performance in sports such as volleyball. Expanding the evidence base beyond invasion games and encouraging cross-sport external validation would therefore strengthen the domain generalizability of ML applications in TID.
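To make the one-class framing mentioned above more tangible, the sketch below trains a one-class SVM only on the profiles of previously selected players and then flags whether new athletes fall inside or outside that profile. The feature set, values, and parameters are hypothetical illustrations, not a re-implementation of any reviewed study.

```python
# Minimal sketch (hypothetical data): one-class SVM as a novelty-detection view
# of selection - learn the profile of selected players, flag departures from it.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Assumed feature columns: 30 m sprint (s), countermovement jump (cm), Yo-Yo IR1 (m)
selected = rng.normal([4.20, 38.0, 1900.0], [0.15, 4.0, 250.0], size=(60, 3))
new_players = np.array([[4.15, 41.0, 2100.0],    # similar to the selected profile
                        [4.80, 28.0, 1100.0]])   # clearly outside that profile

scaler = StandardScaler().fit(selected)
model = OneClassSVM(kernel="rbf", nu=0.1).fit(scaler.transform(selected))

flags = model.predict(scaler.transform(new_players))        # +1 inside, -1 outlier
scores = model.decision_function(scaler.transform(new_players))
for flag, score in zip(flags, scores):
    label = "inside selected profile" if flag == 1 else "outside profile"
    print(label, round(float(score), 3))
```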
Performance prediction
Studies applying ML to performance prediction showed promising results in linking physiological and technical markers with skill-based and in-game outcomes, beginning with early work by Cornforth et al.

Despite these advances, performance prediction studies also face challenges. The use of laboratory or field-test performance outcomes raises questions about ecological validity for predicting actual match performance. Furthermore, over-reliance on physiological data may neglect tactical, cognitive, and psychosocial contributors to performance. While explainable ML techniques provide useful insight into feature importance, few studies validated whether these insights align with real-world coaching expertise. To enhance translation, future work should integrate multimodal data sources and conduct prospective validation in competitive environments.
Team formation & position classification
The reviewed studies demonstrate that ML can approximate and in some cases outperform coach-derived decisions regarding position classification and team formation. For example, Random Forest and Multilayer Perceptron models achieved >90% accuracy in predicting player positions and generating lineups closely resembling coaches’ choices in youth soccer (Abidin).

However, most models were trained on small or academy-level datasets, limiting their generalizability across contexts, as illustrated, for instance, by work in Australian football (Woods et al.).
Profiling, development, scouting & maturation
Studies beyond direct selection and performance prediction illustrate the expanding scope of ML in talent identification and development. Morphological and neuromuscular profiling models showed value for orienting youth into appropriate sports (de Almeida-Neto et al.).

Nevertheless, several limitations constrain the translation of these broader applications. Many studies remain proof-of-concept, conducted with small or single-institution datasets (de Almeida-Neto et al.).
Limitations on ML reporting
Across the included studies, the analysis domain emerged as the most frequent source of high risk of bias, primarily due to small samples, reliance on internal validation, or use of synthetic/augmented data without adequate safeguards against optimism (e.g., Abidin).

A second recurrent issue relates to the applicability of predictors and outcomes, especially where subjective or indirect measures were used, for instance in studies that relied on coach-rated assessments as input variables (Abidin).
Limitations of this systematic review, future research and practical applications
This review has limitations that should be acknowledged. Despite a comprehensive search and systematic screening process, it is possible that relevant studies were missed, particularly those published in grey literature (e.g., technical reports, theses). The exclusion of grey literature was a deliberate methodological choice to maintain peer-reviewed quality standards; however, it introduces the possibility of publication bias, as studies reporting weaker or non-significant results are less likely to appear in indexed journals. Consequently, the synthesized evidence may overrepresent positive findings and potentially overestimate ML model performance. This limitation may be important, as it reflects a broader tendency within data-driven research toward selective visibility of success - a phenomenon that underscores the need for greater transparency, data sharing, and preregistration in ML-based sports science. Moreover, the heterogeneity of sports, outcome measures, and machine learning approaches precluded meta-analysis and restricted the synthesis to a structured narrative. The reliance on published results also meant that incomplete reporting of performance metrics or validation methods could not be clarified or supplemented, further limiting interpretability. Finally, as many included studies were exploratory, single-sample, or lacked external validation, the evidence base summarized here represents an emerging rather than mature field.

Interpretability emerged as one of the least consistently addressed dimensions across studies, yet it represents a continuum of conceptual transparency rather than a binary property. At the most basic level, interpretability can involve global feature importance or coefficient-based rankings that indicate which variables most influence predictions. More advanced methods, such as SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations), allow for instance-level attribution, showing how specific inputs contribute to individual outcomes. At the highest tier, counterfactual reasoning provides actionable insight by simulating how changes in certain features might alter selection probabilities or developmental trajectories. Viewing interpretability hierarchically underscores that transparency in ML is scalable - from descriptive feature inspection to causal exploration - and that its depth should align with the practical stakes of decision-making in TID.

Looking ahead, future research should prioritize larger, longitudinal, and multi-sport datasets that allow for robust model development and both statistical and ecological external validation. In addition to conventional hold-out or cross-cohort testing, ecological external validation involves evaluating model performance across different clubs, regions, and competition levels to ensure contextual robustness and ecological realism. Such cross-setting validation helps determine whether predictive patterns reflect genuine developmental principles or context-specific artifacts, bridging methodological rigor with the complex, adaptive nature of sport environments. Standardized reporting of ML pipelines - including feature engineering, calibration assessment, validation strategies, and interpretability methods - would improve transparency and comparability across studies. Greater integration of multidimensional data is also needed to capture the complexity of talent development.
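To ground the lower and upper tiers of the interpretability continuum described above, the sketch below computes global permutation importance and a simplified single-feature counterfactual probe on synthetic data; instance-level attribution with SHAP or LIME would sit between these tiers and is not reproduced here. All feature names and values are illustrative assumptions.

```python
# Minimal sketch (synthetic data; feature names are assumptions): two tiers of
# interpretability - global permutation importance, and a simple counterfactual
# probe asking how one athlete's predicted selection probability would change
# if a single feature improved.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
feature_names = ["sprint_30m", "cmj_height", "yoyo_ir1", "coach_rating"]  # assumed
X = rng.normal(size=(300, 4))
y = (0.8 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0, 1, 300) > 0.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Tier 1: global feature importance via permutation (illustrative, on training data).
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.3f}")

# Simplified counterfactual probe for one athlete: what happens to the predicted
# probability if Yo-Yo IR1 performance improves by one standard deviation?
athlete = X[:1].copy()
baseline = model.predict_proba(athlete)[0, 1]
athlete[0, 2] += 1.0
print(f"Selection probability: {baseline:.2f} -> {model.predict_proba(athlete)[0, 1]:.2f}")
```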
Beyond these methodological priorities, collaboration between sport scientists, data scientists, and practitioners will be essential to ensure that models are not only accurate but also interpretable, ethically sound, and practically relevant. By embracing open science practices and methodological rigor, the field can move beyond optimism bias toward a more cumulative, self-correcting body of evidence that meaningfully informs talent identification and development systems.

To enhance reproducibility and comparability, future ML studies in talent identification should, at a minimum, clearly describe their data partitioning strategy, including whether splits were performed at the athlete or trial level; outline steps for leakage control to prevent information overlap between training and testing sets; report how class imbalance was handled within validation folds; and include both discrimination and calibration metrics (e.g., AUC, Brier score, calibration slope). In addition, transparency around fairness auditing - such as assessing model performance across relative-age quartiles, sex, or maturation status - will improve interpretability and ethical accountability. Consistent reporting of these elements would substantially strengthen the methodological quality, transparency, and applied trustworthiness of ML research in youth talent identification.

To promote equitable predictions across subpopulations, we propose a minimal fairness framework specifying the main covariates that should be recorded, modeled, and audited in youth TID - for example, birth quarter/relative age, biological maturation status (e.g., PHV indicators), and socio-economic background (e.g., school type or deprivation index), alongside sex and playing context (e.g., region/club resource level). These variables should be (i) pre-specified in protocols, (ii) considered as features or stratification factors where appropriate, and (iii) subjected to subgroup and intersectional audits reporting discrimination, calibration, and error-rate parity at a stated operating point. If disparities are detected, studies should apply bias-mitigation procedures (e.g., reweighting, stratified sampling, threshold adjustment, post-hoc recalibration) and re-report subgroup metrics.

From a practical standpoint, the findings of this review suggest that ML may have potential to complement, rather than replace, traditional talent identification and development practices. Current evidence indicates that ML models can highlight patterns across large, multidimensional datasets and may assist coaches and scouts in refining their decisions or monitoring athlete development. However, given the frequent limitations of small sample sizes, context-specific data, and limited external validation, these tools should be viewed as exploratory decision-support aids rather than definitive selection instruments. Practitioners are advised to use ML outputs in conjunction with expert judgment, holistic evaluation of athletes, and awareness of potential biases (e.g., relative age, socio-cultural influences). This complementary role can be understood along two interconnected pathways: an operational pathway, in which ML assists practitioners with data-driven screening, workload monitoring, and early flagging of developmental trends to enhance decision efficiency, and a discovery pathway, where ML identifies novel, interaction-based patterns among physical, technical, and psychosocial constraints that can inform longitudinal experimentation and theory development.
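Returning to the reporting checklist and fairness audits proposed above, the following minimal sketch (entirely synthetic data; variable names are assumptions) illustrates athlete-level partitioning to limit identity leakage, joint reporting of discrimination (AUC-ROC) and calibration (Brier score), and a simple relative-age subgroup audit of pooled out-of-fold predictions.

```python
# Minimal sketch (synthetic data): athlete-level splits, discrimination plus
# calibration reporting, and a relative-age subgroup audit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_features = 600, 8
athlete_id = rng.integers(0, 120, n_trials)     # several trials per athlete
birth_quarter = athlete_id % 4 + 1              # hypothetical Q1-Q4 label
X = rng.normal(size=(n_trials, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_trials) > 0.8).astype(int)

oof_prob = np.zeros(n_trials)
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(class_weight="balanced", random_state=0))

# GroupKFold keeps all trials from one athlete on the same side of each split,
# preventing identity leakage between training and test folds.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=athlete_id):
    model.fit(X[train_idx], y[train_idx])
    oof_prob[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print("Overall AUC-ROC:", round(roc_auc_score(y, oof_prob), 3))
print("Overall Brier  :", round(brier_score_loss(y, oof_prob), 3))
for q in (1, 2, 3, 4):                          # subgroup (fairness) audit
    mask = birth_quarter == q
    print(f"Q{q}: AUC={roc_auc_score(y[mask], oof_prob[mask]):.3f}, "
          f"Brier={brier_score_loss(y[mask], oof_prob[mask]):.3f}")
```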
These two pathways - operational and discovery - illustrate that the value of ML lies not in replacing human expertise but in augmenting it, bridging empirical discovery with applied decision-making in youth talent systems. Careful integration in practice may enhance efficiency and provide additional perspectives, but overreliance on unvalidated models risks reinforcing existing inequalities or producing misleading conclusions.

To operationalize these findings, practitioners could adopt tiered decision protocols in which ML models are first used for broad early screening - prioritizing high sensitivity to avoid missing potential talent - followed by structured expert evaluation emphasizing context, adaptability, and psychosocial maturity. Such hybrid frameworks can combine algorithmic efficiency with human interpretive depth, ensuring that automated outputs inform but do not dictate selection. In this way, ML functions as an evidence-based triage tool that supports individualized monitoring, facilitates ongoing re-evaluation, and helps direct coaching resources toward athletes with emerging potential rather than early advantage.

From a practitioner perspective, the implementation of ML in TID can also be conceptualized as a sequential decision pathway encompassing model development, validation, deployment, and monitoring. During development, multidisciplinary teams should ensure data representativeness, apply rigorous leakage control, and use nested cross-validation to optimize model tuning. Validation should progress from internal to independent external testing to evaluate transportability and calibration before any operational use. In deployment, ML outputs should serve as decision-support tools within structured selection frameworks - for instance, as high-sensitivity screening aids that prompt subsequent expert evaluation. Finally, ongoing monitoring is essential to detect model drift, reassess fairness across athlete subgroups, and recalibrate performance metrics as data and populations evolve. This cyclical process ensures that ML models remain methodologically sound, contextually relevant, and ethically aligned with the developmental principles of youth sport.
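As an illustration of the high-sensitivity screening tier described above, the sketch below picks an operating threshold on a validation set so that sensitivity stays at or above a target value; the 0.95 target and the synthetic probabilities are assumptions for illustration only, not recommendations from any reviewed study.

```python
# Minimal sketch (synthetic validation data): choosing a screening threshold that
# keeps sensitivity high, so few potentially talented athletes are filtered out
# before expert review.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_val = rng.integers(0, 2, 300)                                   # validation labels
p_val = np.clip(y_val * 0.35 + rng.normal(0.4, 0.2, 300), 0, 1)   # model probabilities

fpr, tpr, thresholds = roc_curve(y_val, p_val)
target_sensitivity = 0.95                       # assumed screening target
ok = tpr >= target_sensitivity
# Among thresholds meeting the sensitivity target, take the one with the fewest
# false positives (i.e., the least additional expert-review workload).
best = np.argmin(fpr[ok])
threshold = thresholds[ok][best]
print(f"Screening threshold: {threshold:.3f} "
      f"(sensitivity={tpr[ok][best]:.2f}, false-positive rate={fpr[ok][best]:.2f})")
```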
CONCLUSION
This systematic review found that research applying ML in sport talent identification remains limited in scope but is expanding. The majority of available studies focused on selection prediction tasks, particularly in soccer and other team sports, where algorithms were used to forecast admission, progression, or draft success. A smaller but growing body of work addressed performance prediction, leveraging physiological, anthropometric, or cognitive markers to estimate test results or in-game performance. Fewer studies explored team formation and positional classification, and an emerging set of contributions examined broader applications such as profiling, maturation, and scouting support. Across domains, Random Forest, gradient boosting methods, and neural networks were the most frequently applied, often achieving moderate to high internal accuracy. However, very few studies provided external validation, and most were conducted on relatively small, single-sport or academy-specific datasets, limiting generalizability.

The findings suggest that while ML offers clear potential to enrich talent identification and development systems, its current role should be viewed as exploratory and complementary rather than decisive. The predominance of selection-focused studies highlights a narrow evidence base, with underrepresentation of longitudinal designs, female athletes, and diverse sporting contexts. Moreover, interpretability methods - although increasingly adopted - remain inconsistently applied, and socio-cultural or psychological factors are still less frequently integrated than physical and technical measures. Future progress will depend on larger, multi-sample datasets, standardized reporting of algorithms and metrics, and collaborative efforts to embed interpretability and equity within predictive pipelines. Until such methodological and theoretical maturity is achieved, the use of ML in practice should remain cautious, serving as a support to - not a substitute for - expert judgment and holistic athlete evaluation. Ultimately, in youth TID, transparency, transportability, and theoretical coherence are the pillars upon which meaningful ML applications must be built.
ACKNOWLEDGEMENTS
This study was supported by the Project of China West Normal University, Project Number: [CWNUJG2024098]. The authors report no actual or potential conflicts of interest. The datasets generated and analyzed in this study are not publicly available, but are available from the corresponding author who organized the study upon reasonable request. All experimental procedures were conducted in compliance with the relevant legal and ethical standards of the country where the study was performed.
AUTHOR BIOGRAPHY
REFERENCES