Since its launch, the chatbot ChatGPT has gained significant popularity and may serve as a valuable resource for evidence-based exercise training advice. However, its ability to provide accurate and actionable exercise training information has not been systematically evaluated. This study assessed ChatGPT’s proficiency by comparing its responses to those of human personal trainers. Nine currently active personal trainers (PTs), qualified at level 4 of the European Qualifications Framework (EQF), submitted the exercise training questions they are asked most frequently, along with their own answers; the same questions were then posed to ChatGPT (version 3.5). Responses from both sources were evaluated by 18 PTs and 9 topic experts, who rated them on scientific correctness, actionability, and comprehensibility. Scores on the three criteria were averaged into an overall score, and group means were compared using permutation tests. ChatGPT outperformed the PTs on six of nine questions overall, receiving higher ratings for scientific correctness (5/9), comprehensibility (6/9), and actionability (5/9). In contrast, the PTs’ responses were not rated higher than ChatGPT’s on any question or metric. Our results suggest that ChatGPT can be used as a tool to answer questions frequently posed to PTs, and that chatbots may be useful for delivering informational support relating to physical exercise.
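As a minimal sketch of the kind of permutation test described here, assuming a two-sided test on the difference in mean ratings (the abstract does not specify the test statistic, the number of permutations, or the rating scale), the Python snippet below shuffles group labels and computes an empirical p-value. The scores shown are hypothetical illustrations, not study data.

```python
import numpy as np

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Pools the ratings, repeatedly shuffles the group labels, and counts
    how often the shuffled mean difference is at least as extreme as
    the observed one.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = pooled[: len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # The +1 correction keeps the empirical p-value strictly positive.
    return (count + 1) / (n_permutations + 1)

# Hypothetical overall ratings for one question (not study data).
chatgpt_scores = [4.5, 4.0, 5.0, 4.5, 4.0]
pt_scores = [3.5, 4.0, 3.0, 3.5, 4.0]
print(f"p = {permutation_test(chatgpt_scores, pt_scores):.4f}")
```

A permutation test suits rating data like this because it makes no normality assumption: the null distribution is built directly from the observed scores by reshuffling group membership.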