Two genes with a common region characteristic of the TPSI1/Mt4 family were cloned from a Pi-starvation-induced cDNA library of rice roots using suppression subtractive hybridization (SSH). Based on the consensus sequence of these two genes, members of the TPSI1/Mt4 family were identified in maize, wheat and barley. BLAST and cluster analyses of the eight monocot members of the TPSI1/Mt4 family revealed two classes of four genes each. The first rice gene was designated OsIPS1 based on a comparison of the consensus sequence with AtIPS1, and consequently the second gene, previously reported as OsPI1, was designated OsIPS2. Accumulation of OsIPS1/2 mRNA was examined by northern blotting and quantitative reverse transcriptase-polymerase chain reaction in whole-root and split-root experiments under treatment with phosphate (Pi) and the Pi analogue phosphite (Phi). OsIPS1 showed much higher mRNA accumulation than OsIPS2 in roots, whereas the opposite trend was seen in shoots. OsIPS1/2 showed both systemic and local responses to Pi starvation, and less than 10% of the overall induced mRNA level was due to the local Pi concentration in roots. The results indicate that Phi may interfere with earlier events in roots associated with a local Pi signalling pathway. An analysis of transgenic plants showed that OsIPS1/2 respond independently to Pi signalling and are mainly expressed in lateral roots and in the vascular cylinder of the primary root. Exogenous cytokinin (6-BA) almost completely suppressed systemic Pi starvation signalling and partially suppressed local Pi signalling. Exogenous abscisic acid markedly reduced Pi starvation signalling. In contrast, exogenous auxin enhanced Pi signalling, especially local Pi signalling in roots. Exogenous ethylene (ethephon) and the ratio of auxin to cytokinins did not appear to affect the expression of these two genes.
BRASSINAZOLE RESISTANT 1 (BZR1), the critical regulator of brassinosteroid (BR) response, participates in various BR-mediated developmental processes. However, the roles of BZR1 in stress tolerance are less clear. Here, we found that BZR1-like protein in tomato controls BR response and is involved in thermotolerance by regulating the FERONIA (FER) homologs. The CRISPR-bzr1 mutant showed reduced growth and was not responsive to 24-epibrassinolide (EBR) with regard to the promotion of plant growth. Mutation in BZR1 impaired the induction of RESPIRATORY BURST OXIDASE HOMOLOG1 (RBOH1), production of H2O2 in the apoplast and heat tolerance. Exogenous H2O2 recovered the heat tolerance of the tomato bzr1 mutant. Overexpression of BZR1 enhanced the production of apoplastic H2O2 and heat stress responses. However, silencing of RBOH1 abolished the BZR1-mediated heat tolerance. Further analysis showed that BZR1 bound to the promoters of FERONIA2 (FER2) and FER3 and induced their expression. Silencing of FER2/3 suppressed BZR1-dependent BR signaling for the induction of RBOH1 transcripts, accumulation of apoplastic H2O2 and heat tolerance. These results indicate that BZR1 regulates heat stress responses in tomato through RBOH1-dependent reactive oxygen species (ROS) signaling, which is at least partially mediated by FER2 and FER3.
Source separation is the task of separating an audio recording into individual sound sources, and it is fundamental to computational auditory scene analysis. Previous work on source separation has focused on separating particular sound classes such as speech and music, and much of it requires pairs of mixtures and clean sources for training. In this work, we propose a source separation framework trained with weakly labelled data. Weakly labelled data contains only the tags of an audio clip, without the occurrence times of sound events. We first train a sound event detection system with AudioSet. The trained sound event detection system is used to detect segments that are most likely to contain a target sound event. Then a regression is learned from a mixture of two randomly selected segments to a target segment, conditioned on the audio tagging prediction of the target segment. Our proposed system can separate 527 sound classes from AudioSet within a single system. A U-Net is adopted for the separation system and achieves an average SDR of 5.67 dB over the 527 sound classes in AudioSet.

Index Terms—Source separation, weakly labelled data, computational auditory scene analysis, AudioSet.
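The training-example construction described in this abstract (mix two detector-selected segments, regress the target conditioned on its tagging prediction) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, segment length, and sample rate are our assumptions.

```python
import numpy as np

def make_training_example(seg_a, seg_b, tag_a):
    """Build one (mixture, condition, target) triple for weakly supervised
    separation: two segments chosen by a pretrained sound event detector are
    mixed additively, and the separator is trained to regress the target
    segment conditioned on the tagging prediction of that segment.
    (Sketch only; names and shapes are illustrative.)"""
    mixture = seg_a + seg_b   # additive mixture of the two segments
    condition = tag_a         # tagging prediction identifying the target source
    target = seg_a            # regression target for the separator
    return mixture, condition, target

# Hypothetical 1-second segments at 32 kHz and a 527-class tag vector.
rng = np.random.default_rng(0)
seg_a = rng.standard_normal(32000).astype(np.float32)
seg_b = rng.standard_normal(32000).astype(np.float32)
tag_a = np.zeros(527, dtype=np.float32)
tag_a[0] = 1.0                # e.g. class 0 is the target sound class

mixture, condition, target = make_training_example(seg_a, seg_b, tag_a)
assert mixture.shape == target.shape == (32000,)
assert condition.shape == (527,)
```

In the paper's setup, a conditional U-Net would take `mixture` and `condition` as input and be trained to output `target`; because the tag vector selects the class, a single model can cover all 527 classes.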
The control of apical dominance involves auxin, strigolactones (SLs), cytokinins (CKs), and sugars, but the mechanistic controls of this regulatory network are not fully understood. Here, we show that brassinosteroid (BR) promotes bud outgrowth in tomato through the direct transcriptional regulation of BRANCHED1 (BRC1) by the BR signaling component BRASSINAZOLE-RESISTANT1 (BZR1). Attenuated responses to the removal of the apical bud, the inhibition of auxin, SL or gibberellin synthesis, or treatment with CK and sucrose were observed in bud outgrowth and in the levels of BRC1 transcripts in the BR-deficient or bzr1 mutants. Furthermore, the accumulation of BR and the dephosphorylated form of BZR1 was increased by apical bud removal, inhibition of auxin and SL synthesis, or treatment with CK and sucrose. These responses were decreased in the DELLA-deficient mutant. In addition, CK accumulation was inhibited by auxin and SLs, and decreased in the DELLA-deficient mutant, but it was increased in response to sucrose treatment. CK promoted BR synthesis in axillary buds through the action of the type-B response regulator RR10. Our results demonstrate that BR signaling integrates multiple pathways that control shoot branching. Local BR signaling in axillary buds is therefore a potential target for shaping plant architecture.
This paper presents a novel supervised approach to detecting the chorus segments in popular music. Traditional approaches to this task are mostly unsupervised, with pipelines designed to target some quality that is assumed to define "chorusness," which usually means seeking the loudest or most frequently repeated sections. We propose to use a convolutional neural network with a multi-task learning objective, which simultaneously fits two temporal activation curves: one indicating "chorusness" as a function of time, and the other the location of the boundaries. We also propose a post-processing method that jointly takes into account the chorus and boundary predictions to produce binary output. In experiments using three datasets, we compare our system to a set of public implementations of other segmentation and chorus-detection algorithms, and find our approach performs significantly better.
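The multi-task objective described above, fitting a per-frame "chorusness" curve and a boundary curve simultaneously, can be sketched as a weighted sum of two binary cross-entropy terms. This is an illustrative sketch under our own assumptions (binary cross-entropy per curve, a weighting factor `w`), not the paper's exact loss.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted and target activation curves."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def multitask_loss(chorus_pred, boundary_pred, chorus_gt, boundary_gt, w=1.0):
    """Joint objective over the two temporal activation curves:
    per-frame chorusness plus (weighted) boundary probability."""
    return (binary_cross_entropy(chorus_pred, chorus_gt)
            + w * binary_cross_entropy(boundary_pred, boundary_gt))

# Toy 8-frame example: frames 2-5 are chorus; boundaries at frames 2 and 6.
chorus_gt   = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)
boundary_gt = np.array([0, 0, 1, 0, 0, 0, 1, 0], dtype=float)

good = multitask_loss(chorus_gt, boundary_gt, chorus_gt, boundary_gt)
bad  = multitask_loss(1 - chorus_gt, 1 - boundary_gt, chorus_gt, boundary_gt)
assert good < bad  # perfect predictions score lower than inverted ones
```

The post-processing step would then threshold and reconcile the two predicted curves to emit binary chorus segments.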
Automatic music transcription (AMT) is the task of transcribing audio recordings into symbolic representations such as Musical Instrument Digital Interface (MIDI) files. Recently, neural network-based methods have been applied to AMT and have achieved state-of-the-art results. However, most previous AMT systems predict the presence or absence of notes in the frames of audio recordings, so their transcription resolution is limited to the hop size between adjacent frames. In addition, previous AMT systems are sensitive to misaligned onset and offset labels in audio recordings. For high-resolution evaluation, previous works have not investigated AMT systems evaluated with different onset and offset tolerances. For piano transcription, there is a lack of research on building AMT systems with both note and pedal transcription. In this article, we propose a high-resolution AMT system trained by regressing the precise times of onsets and offsets. At inference, we propose an algorithm to analytically calculate the precise onset and offset times of note and pedal events. We build both note and pedal transcription systems with our high-resolution AMT system. We show that our AMT system is robust to misaligned onset and offset labels compared with previous systems. Our proposed system achieves an onset F1 of 96.72% on the MAESTRO dataset, outperforming the onsets-and-frames system from Google, which achieves 94.80%. Our system achieves a pedal onset F1 score of 91.86%, the first benchmark result on the MAESTRO dataset. We release the source code of our work at https://github.com/bytedance/piano_transcription.
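The key idea of regressing precise onset times instead of binary frame labels can be sketched as follows: each frame near an onset receives a target that decays with its distance to the exact onset time, so the sub-frame time can later be recovered by looking at values around the peak. This is our own simplified sketch; the parameter names (`frames_per_second`, `J`) and the linear decay are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def regression_targets(onset_times, num_frames, frames_per_second=100, J=5):
    """Per-frame regression targets encoding precise onset times.

    Instead of a 0/1 label per frame, each frame within J frames of an onset
    gets a value that decays linearly with its distance (in frames) to the
    exact onset time. The continuous distance is what makes sub-frame
    resolution recoverable at inference. (Illustrative sketch.)"""
    targets = np.zeros(num_frames, dtype=np.float32)
    frame_times = np.arange(num_frames) / frames_per_second
    for t in onset_times:
        dist = np.abs(frame_times - t) * frames_per_second  # distance in frames
        targets = np.maximum(targets, np.clip(1.0 - dist / J, 0.0, 1.0))
    return targets

# An onset at 0.123 s with 10 ms frames: the binary label would snap it to
# frame 12 (0.120 s), losing 3 ms; the regression target preserves it.
targets = regression_targets([0.123], num_frames=50)
peak = int(np.argmax(targets))   # frame nearest the onset
```

At inference, the analytical step in the paper recovers the precise time from the shape of the predicted values around such a peak (e.g. by interpolation), rather than reading off the peak frame index alone.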
Transformer is a successful deep neural network (DNN) architecture that has shown its versatility not only in natural language processing but also in music information retrieval (MIR). In this paper, we present a novel Transformer-based approach to tackle beat and downbeat tracking. This approach employs SpecTNT (Spectral-Temporal Transformer in Transformer), a variant of Transformer that models both the spectral and temporal dimensions of a time-frequency input of music audio. A SpecTNT model uses a stack of blocks, where each block consists of two levels of Transformer encoders. The lower-level (or spectral) encoder handles the spectral features and enables the model to pay attention to harmonic components of each frame. Since downbeats indicate bar boundaries and are often accompanied by harmonic changes, this step may help downbeat modeling. The upper-level (or temporal) encoder aggregates useful local spectral information to pay attention to beat/downbeat positions. We also propose an architecture that combines SpecTNT with a state-of-the-art model, Temporal Convolutional Networks (TCN), to further improve the performance. Extensive experiments demonstrate that our approach can significantly outperform TCN in downbeat tracking while maintaining comparable results in beat tracking.
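The two-level structure of a SpecTNT block, a spectral encoder attending over the frequency axis within each frame followed by a temporal encoder attending over the time axis, can be illustrated with a bare-bones self-attention over the two axes of a (time, frequency, channel) tensor. This is a much-simplified shape sketch: the real SpecTNT uses learned projections, multiple heads, and frequency-class tokens to pass information between the two levels, none of which are modeled here.

```python
import numpy as np

def attention(x):
    """Scaled dot-product self-attention over the second-to-last axis,
    with no learned weights (shape illustration only)."""
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)                 # softmax
    return w @ x

def spectnt_block(x):
    """One SpecTNT-style block (sketch). x has shape (time, freq, channels).
    The spectral encoder attends across frequency bins within each frame;
    the temporal encoder then attends across time for each frequency band."""
    x = attention(x)                                 # spectral: over freq, per frame
    x = attention(x.swapaxes(0, 1)).swapaxes(0, 1)   # temporal: over time, per band
    return x

x = np.random.default_rng(0).standard_normal((8, 16, 4))  # (T, F, C)
y = spectnt_block(x)
assert y.shape == x.shape
```

Stacking several such blocks gives the model described in the abstract: frame-local harmonic structure is summarized first, then attended over time to localize beats and downbeats.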