Any polar-ordered material with a spatially uniform polarization field is internally frustrated: symmetry dictates a local preference for nonuniform, locally bouquet-like or "splayed" polarization. However, splay of a preferred sign cannot be achieved everywhere in space unless appropriate defects are introduced into the field. Typically, in materials such as ferroelectric crystals or liquid crystals, these defects are not thermally stable, so the local preference is globally frustrated and the polarization field remains uniform. Here, we report a class of fluid polar smectic liquid crystals in which local splay prevails in the form of periodic supermolecular-scale polarization-modulation stripes coupled to layer-undulation waves. The polar domains are locally chiral and organized into patterns of alternating handedness and polarity. The fluid-layer undulations enable an extraordinary menagerie of filament and planar structures that identify such phases.
Binary mixtures of the aggregating helicene 1 with dodecane have, at dodecane concentrations >30 vol %, a nematic liquid crystalline phase and, at dodecane concentrations ≲5 vol %, a hexagonal columnar liquid crystalline phase. The helical structure with donor and acceptor groups at opposite ends gives rise to a molecular dipole moment that is parallel to the helix axis and a positive dielectric anisotropy. Accordingly, the helix axes orient parallel to an applied electric field, providing the basis for electro-optic switching.
We propose the Jointly trained Duration Informed Transformer (JDI-T), a feed-forward Transformer with a duration predictor that is jointly trained without explicit alignments, to generate an acoustic feature sequence from an input text. Inspired by the recent success of duration-informed networks such as FastSpeech and DurIAN, we simplify their sequential, two-stage training pipeline into a single-stage training. Specifically, we extract the phoneme durations from the autoregressive Transformer on the fly during the joint training, instead of pretraining the autoregressive model and using it as a phoneme duration extractor. To the best of our knowledge, this is the first implementation to jointly train the feed-forward Transformer without relying on a pre-trained phoneme duration extractor in a single training pipeline. We evaluate the effectiveness of the proposed model on the publicly available Korean Single-speaker Speech (KSS) dataset, comparing it with baseline text-to-speech (TTS) models trained with ESPnet-TTS.
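As a rough illustration of the on-the-fly duration extraction described above, the sketch below derives per-phoneme durations from an autoregressive Transformer's text-to-spectrogram attention matrix: each output frame is assigned to the phoneme it attends to most, and the durations are the resulting frame counts. This is a minimal, hypothetical sketch of the general idea (the function name and the hard-argmax assignment are assumptions, not the paper's exact procedure).

```python
import numpy as np

def durations_from_attention(attn: np.ndarray) -> np.ndarray:
    """Sketch: derive per-phoneme durations from an attention matrix.

    attn has shape (n_frames, n_phonemes); each row is the attention
    distribution of one output frame over the input phonemes. A frame
    is assigned to its most-attended phoneme, and a phoneme's duration
    is the number of frames assigned to it. Durations sum to n_frames.
    """
    n_frames, n_phonemes = attn.shape
    assignment = attn.argmax(axis=1)  # most-attended phoneme per frame
    return np.bincount(assignment, minlength=n_phonemes)

# Toy example: 6 output frames attending over 3 phonemes.
attn = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.3, 0.6],
    [0.0, 0.2, 0.8],
])
print(durations_from_attention(attn))  # -> [2 2 2]
```

In a joint training loop, durations obtained this way from the autoregressive branch would serve as targets for the duration predictor, removing the need for a separately pre-trained duration extractor.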