Envisioned communication densities in Internet of Things (IoT) applications are increasing continuously. Because these wireless devices are often battery powered, specific energy-efficient (low-power) solutions are needed. Moreover, these smart objects use low-cost hardware with possibly weak links, leading to a lossy network. Once deployed, these Low-power and Lossy Networks (LLNs) are expected to collect the intended measurements while handling transient faults, topology changes, etc. Consequently, validation and verification during protocol development are of prime importance. A wide range of theoretical and practical tools is available for performance evaluation. A theoretical analysis may demonstrate that performance guarantees are respected, while simulations or experiments aim to estimate the behaviour of a set of protocols in real-world scenarios. In this article, we review the various parameters that should be taken into account during such a performance evaluation. Our primary purpose is to provide a tutorial that specifies guidelines for conducting performance evaluation campaigns of network protocols in LLNs. We detail the general approach adopted to evaluate the performance of layer-2 and layer-3 protocols in LLNs, specify the methodology that should be followed during the performance evaluation, and review the numerous models and tools available to the research community.
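Evaluation campaigns of this kind typically report metrics such as the packet delivery ratio (PDR). As a minimal illustration, the following sketch computes PDR from a hypothetical event trace; the `("sent"/"recv", packet_id)` log format is an assumption for this example, not a format prescribed by the article.

```python
def packet_delivery_ratio(events):
    """Compute PDR from a list of (event, packet_id) tuples.

    "sent" marks a transmission at the source, "recv" a delivery at
    the sink; retransmissions reuse the same packet_id, so sets
    deduplicate them. (Hypothetical trace format for illustration.)
    """
    sent = {pid for ev, pid in events if ev == "sent"}
    received = {pid for ev, pid in events if ev == "recv"}
    if not sent:
        return 0.0
    return len(received & sent) / len(sent)

# Toy trace: packets 1 and 3 are delivered, packet 2 is lost.
trace = [("sent", 1), ("sent", 2), ("recv", 1),
         ("sent", 3), ("recv", 3)]
pdr = packet_delivery_ratio(trace)
```

In a real campaign the trace would come from a simulator log or testbed serial output, and PDR would be reported alongside energy and latency figures.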
Jazz improvisation over a given lead sheet with chords is an interesting scenario for studying the behaviour of artificial agents as they collaborate with humans. In jazz improvisation specifically, the role of the accompanist is crucial: reflecting the harmonic and metric characteristics of a jazz standard while identifying, in real time, the intentions of the soloist and adapting the accompanying performance parameters accordingly. This paper presents a study on a basic implementation of an artificial jazz accompanist that provides accompanying chord voicings to a human soloist, conditioned on the soloing input and on the harmonic and metric information in a lead sheet chart. The artificial agent includes a separate model for predicting the intentions of the human soloist, towards providing proper accompaniment to the human performer in real time. Simple implementations of Recurrent Neural Networks are employed both for modeling the predictions of the artificial agent and for modeling its expectations of human intention. A publicly available dataset is modified with a probabilistic refinement process to include all the information necessary for the task at hand, and test-case compositions on two jazz standards show the ability of the system to comply with the harmonic constraints in the chart. Furthermore, the system is shown to provide varying output under different soloing conditions, with no significant sacrifice of “musicality” in the generated music, as indicated by subjective evaluations. Some important limitations that need to be addressed to obtain more informative results on the potential of the examined approach are also discussed.
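To make the recurrent-prediction idea concrete, here is one step of a vanilla RNN cell in pure Python. This is a generic sketch of the kind of simple recurrent computation the abstract mentions, not the authors' implementation; all dimensions and weights below are illustrative assumptions.

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One step of a vanilla RNN cell: h' = tanh(W_xh·x + W_hh·h + b).

    x: input vector (e.g. an encoding of the soloist's activity at one
    time step); h: previous hidden state. Pure-Python sketch with
    illustrative sizes; a real agent would stack such steps over time.
    """
    n = len(h)
    return [
        math.tanh(
            sum(W_xh[i][j] * x[j] for j in range(len(x)))
            + sum(W_hh[i][j] * h[j] for j in range(n))
            + b[i]
        )
        for i in range(n)
    ]

# Toy dimensions: 2-dim input, 3-dim hidden state, hand-picked weights.
x = [1.0, 0.5]
h = [0.0, 0.0, 0.0]
W_xh = [[0.1, 0.2], [0.0, -0.1], [0.3, 0.1]]
W_hh = [[0.0] * 3 for _ in range(3)]
b = [0.0, 0.0, 0.0]
h = rnn_step(x, h, W_xh, W_hh, b)
```

In practice the hidden state would feed a readout layer that scores candidate chord voicings at each metric position.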
Automatically synthesizing dance motion sequences is an increasingly popular research task in the broader field of human motion analysis. Recent approaches have mostly used recurrent neural networks (RNNs), which are known to suffer from prediction error accumulation, usually limiting models to synthesizing short choreographies of fewer than 100 poses. In this paper we present a multimodal convolutional autoencoder that combines 2D skeletal and audio information through an attention-based feature fusion mechanism and is capable of generating novel dance motion sequences of arbitrary length. We first validate the ability of our system to capture the temporal context of dancing in a unimodal setting, considering only skeletal features as input. According to 1440 rating answers provided by 24 participants in our initial user study, the best performance was achieved by the model trained on input sequences of 500 poses. Based on this outcome, we train the proposed multimodal architecture with two different approaches, namely teacher forcing and self-supervised curriculum learning, to counteract the autoregressive error accumulation phenomenon. In our evaluation campaign, we generate 1800 sequences and compare our method against two state-of-the-art approaches. Through qualitative and quantitative experiments we demonstrate the improvements introduced by the proposed multimodal architecture in terms of realism, motion diversity and multimodality, reducing the Fréchet Inception Distance (FID) by 0.39. Subjective results confirm the effectiveness of our approach in synthesizing diverse dance motion sequences, reporting a 6% increase in style-consistency preference according to 1800 answers provided by 45 evaluators.
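The core idea of attention-based feature fusion is to weight each modality's embedding by a softmax over relevance scores before combining them. The following is a minimal pure-Python sketch of that mechanism under assumed fixed scores; in the actual architecture the scores would be produced by learned layers, and the vectors would be skeletal and audio embeddings.

```python
import math

def attention_fuse(features, scores):
    """Fuse per-modality feature vectors with softmax attention weights.

    features: list of equal-length vectors (e.g. a skeletal embedding
    and an audio embedding); scores: one scalar relevance score per
    modality. Illustrative stand-in for a learned attention module.
    """
    m = max(scores)                          # subtract max for stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]           # softmax over modalities
    dim = len(features[0])
    fused = [
        sum(w * f[d] for w, f in zip(weights, features))
        for d in range(dim)
    ]
    return fused, weights

skeletal = [0.2, 0.8, 0.1]
audio = [0.5, 0.4, 0.9]
# Equal scores give equal weights, i.e. the element-wise mean.
fused, w = attention_fuse([skeletal, audio], scores=[1.0, 1.0])
```

Raising one modality's score shifts the fused vector toward that modality, which is how such a mechanism lets the decoder rely more on audio or on skeletal context at different points in a sequence.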