“…4B), but with more overlap of different behavior classes. Performing segmentation via clustering on this fully unsupervised behavioral embedding – a standard approach [4, 26, 27] – may therefore result in misclassified behaviors.…”
Section: Results
“…For example, [26] use an autoencoder RNN to produce a behavioral embedding from pose estimates, apply UMAP [30] to further reduce the dimensionality, then apply k-means clustering to perform unsupervised behavioral segmentation. Other recent works use different combinations of algorithms for embedding, dimensionality reduction, and clustering [4, 47, 17, 3, 27]. Our work expands this pipeline to include hand and heuristic labels, and we show that this semi-supervised approach can produce higher quality segmentations.…”
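The embed-reduce-cluster pipeline described above can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation: a random matrix stands in for the learned RNN embedding, and PCA stands in for UMAP so the example stays self-contained with scikit-learn alone.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for an autoencoder-RNN behavioral embedding:
# 500 video frames, each mapped to a 32-dimensional feature vector.
embedding = rng.normal(size=(500, 32))

# Dimensionality reduction (UMAP in the cited pipeline; PCA here
# purely so the sketch runs without extra dependencies).
reduced = PCA(n_components=2).fit_transform(embedding)

# Unsupervised segmentation: k-means assigns one discrete
# behavior label per frame. k=4 is an arbitrary choice here.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)
```

Each frame ends up with a cluster index, which is then interpreted as a behavior class; the overlap problem noted above arises exactly at this step, when distinct behaviors fall into the same cluster.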
A popular approach to quantifying animal behavior from video data is through discrete behavioral segmentation, wherein video frames are labeled as containing one or more behavior classes such as walking or grooming. Sequence models learn to map behavioral features extracted from video frames to discrete behaviors, and both supervised and unsupervised methods are common. However, each approach has its drawbacks: supervised models require a time-consuming annotation step where humans must hand label the desired behaviors; unsupervised models may fail to accurately segment particular behaviors of interest. We introduce a semi-supervised approach that addresses these challenges by constructing a sequence model loss function with (1) a standard supervised loss that classifies a sparse set of hand labels; (2) a weakly supervised loss that classifies a set of easy-to-compute heuristic labels; and (3) a self-supervised loss that predicts the evolution of the behavioral features. With this approach, we show that a large number of unlabeled frames can improve supervised segmentation in the regime of sparse hand labels and also show that a small number of hand labeled frames can increase the precision of unsupervised segmentation.
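The three-term loss described in this abstract can be sketched as a weighted sum. This is a hedged NumPy illustration under assumed conventions (not the paper's code): `-1` marks unlabeled frames, the heuristic labels share the classifier head with the hand labels, the self-supervised term is a simple next-step mean-squared error, and the weights `w_hand`, `w_heur`, `w_pred` are hypothetical hyperparameters.

```python
import numpy as np

def semi_supervised_loss(logits, hand_labels, heuristic_labels,
                         pred_next, next_feats,
                         w_hand=1.0, w_heur=0.5, w_pred=0.1):
    """Composite loss sketch: (1) cross-entropy on sparse hand labels,
    (2) cross-entropy on heuristic labels, (3) MSE between predicted
    and actual next-step behavioral features. -1 marks unlabeled frames."""
    def masked_ce(logits, labels):
        mask = labels >= 0                       # skip unlabeled frames
        if not mask.any():
            return 0.0
        z = logits[mask] - logits[mask].max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(mask.sum()), labels[mask]].mean()

    return (w_hand * masked_ce(logits, hand_labels)
            + w_heur * masked_ce(logits, heuristic_labels)
            + w_pred * np.mean((pred_next - next_feats) ** 2))
```

The masking is what makes the approach semi-supervised: frames with no hand label still contribute through the heuristic and self-supervised terms, which is how a large pool of unlabeled frames can sharpen a classifier trained on only sparse annotations.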
“…performance is plateauing despite the modifications to training. Note that both subtyping and health trajectory are visualised with dimensionality reduction techniques such as t-SNE or UMAP to make high-dimensional nonlinear models human interpretable [43,423,517,782]. (B) In strength training, the equivalent to a one-size-fits-all approach are the training program templates offered often online with little customisation to the athlete [443].…”
In strength training, personalised (autoregulation) approaches have been used to individualise exercise programs, with monitoring of athletes and dynamic adjustment based on their responses to training. While this transition from a tradition-based to an evidence-based training framework has improved training practices, we argue that the future of strength training will also incorporate deep learning models powered by data. We refer to this data-driven framework as precision strength training, inspired by the similar modeling frameworks used in precision medicine. In contrast to current personalised training, in which the acquired athlete data is often subject to human expert decision-making, we anticipate the rise of human-in-the-loop systems with an augmented coach who makes decisions collaboratively with the machine. Similar to other precision frameworks, such as precision health, we envision such a future taking decades to be realised, and we focus here on practical short-term targets on the way to long-term realisation. In this chapter, we review the measurement technology needed for continuous data acquisition from an individual during training/physical activity, how to acquire these datasets for the development of such systems, and how a proof-of-concept system could be developed for powerlifting training with applicability to general strength and conditioning (S&C) and physical rehabilitation purposes. Additionally, we evaluate how the user experience (UX) of the system's feedback and visualisation could be designed.
“…Unilateral stimulation of VTA neurons during presentation of a sensory cue elicits learned cue-approach behavior, while unilateral stimulation of SNc neurons during cue presentation elicits learned cue-triggered rotational behavior (56). Increasingly sophisticated rodent motor learning assays (46)(47)(48)(49)(50)(51) and tools for quantifying motor behavior (57,58) have been developed in recent years, but for the most part these tools have not yet been applied to the question of dopaminergic contributions to motor learning. However, their development greatly expands the universe of questions that can be asked within the mouse model system.…”
Motor learning is a core aspect of human life, and appears to be ubiquitous throughout the animal kingdom. Dopamine, a neuromodulator with a multifaceted role in synaptic plasticity, may be a key signaling molecule for motor skill learning. Though typically studied in the context of reward-based associative learning, dopamine appears to be necessary for some types of motor learning. Mesencephalic dopamine structures are highly conserved among vertebrates, as are some of their primary targets within the basal ganglia, a subcortical circuit important for motor learning and motor control. With a focus on the benefits of cross-species comparisons, this review examines how "model-free" and "model-based" computational frameworks for understanding dopamine's role in associative learning may be applied to motor learning. The hypotheses that dopamine could drive motor learning by functioning as a reward prediction error, through passive facilitation of normal basal ganglia activity, or through other mechanisms are examined in light of new studies using humans, rodents, and songbirds. Additionally, new paradigms that could enhance our understanding of dopamine's role in motor learning by bridging the gap between the theoretical literature on motor learning in humans and other species are discussed.