Does Investor Misvaluation Drive the Takeover Market?

This paper tests the hypothesis that irrational market misvaluation affects firms' takeover behavior. We employ two contemporaneous proxies for market misvaluation: pre-takeover book/price ratios and pre-takeover ratios of residual income model value to price. Misvaluation of bidders and targets influences the means of payment chosen, the mode of acquisition, the premia paid, target hostility to the offer, the likelihood of offer success, and bidder and target announcement-period stock returns. The evidence is broadly supportive of the misvaluation hypothesis.
This paper uses pre-offer market valuations to evaluate the misvaluation and "Q" theories of takeovers. Bidder and target valuations (price-to-book, or price-to-residual-income-model-value) are related to means of payment, mode of acquisition, premia, target hostility, offer success, and bidder and target announcement-period returns. The evidence is broadly consistent with both hypotheses. The evidence for the "Q" hypothesis is stronger in the pre-1990 period than in the 1990-2000 period, whereas the evidence for the misvaluation hypothesis is stronger in the 1990-2000 period than in the pre-1990 period. Copyright 2006 by The American Finance Association.
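Both abstracts rest on the ratio of residual income model value to price as a misvaluation proxy. As a hedged sketch of how such a proxy can be computed (not the paper's exact specification), the code below values a share as book value plus discounted forecast residual income, capitalizing the final year's residual income as a flat perpetuity; the forecast horizon, ROE path, and cost of equity are illustrative assumptions.

```python
# Residual income model: V = B_0 + sum_t (ROE_t - r) * B_{t-1} / (1+r)^t,
# with the last term treated as a perpetuity. All inputs are illustrative.

def residual_income_value(book_value, roe_forecasts, cost_of_equity):
    """Per-share value from forecast ROEs; the final residual income
    is capitalized as a flat perpetuity."""
    v = book_value
    b = book_value
    n = len(roe_forecasts)
    for t, roe in enumerate(roe_forecasts, start=1):
        ri = (roe - cost_of_equity) * b  # residual income in year t
        if t < n:
            v += ri / (1 + cost_of_equity) ** t
            b *= 1 + roe  # clean surplus with full retention assumed
        else:
            # terminal residual income as a perpetuity, discounted to today
            v += ri / (cost_of_equity * (1 + cost_of_equity) ** (t - 1))
    return v

# V/P below 1 flags potential overvaluation under this proxy
price = 30.0
v = residual_income_value(book_value=20.0,
                          roe_forecasts=[0.15, 0.14, 0.13],
                          cost_of_equity=0.10)
print(round(v / price, 3))  # → 0.939
```

A firm trading above this intrinsic-value estimate (V/P below one) would be classified as relatively overvalued, which under the misvaluation hypothesis predicts, for example, a greater propensity to bid with stock.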
As an essential component of understanding human emotional behavior, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER depends heavily on finding good affect-related, discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of the CNN involves two stages. In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. In the second stage, the LIF are used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features.

Index Terms: Affect-salient discriminative feature analysis, convolutional neural networks, feature learning, speech emotion recognition.
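The first training stage described above fits an auto-encoder on unlabeled samples with a reconstruction penalty. As a minimal NumPy sketch of that stage's objective only (the paper's exact SAE variant, sparsity penalty, and hyperparameters are not specified here, so the KL sparsity term, sizes, `rho`, and `beta` below are illustrative assumptions), one can combine reconstruction error with a penalty on hidden activations:

```python
import numpy as np

# Single-hidden-layer auto-encoder loss: mean squared reconstruction
# error plus beta * KL(rho || mean hidden activation). Architecture
# sizes and penalty weights are illustrative, not the paper's settings.

rng = np.random.default_rng(0)
n_vis, n_hid = 64, 16                 # e.g. a spectrogram patch -> 16 features
W = rng.normal(0, 0.1, (n_hid, n_vis))
b_h = np.zeros(n_hid)
b_v = np.zeros(n_vis)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sae_loss(X, rho=0.05, beta=3.0):
    """Reconstruction error + beta * KL sparsity penalty (tied weights)."""
    H = sigmoid(X @ W.T + b_h)        # hidden code
    X_hat = sigmoid(H @ W + b_v)      # reconstruction
    recon = np.mean((X - X_hat) ** 2)
    rho_hat = H.mean(axis=0).clip(1e-6, 1 - 1e-6)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

X = rng.random((32, n_vis))           # a batch of unlabeled "patches"
print(sae_loss(X) > 0)                # loss is positive before training
```

Minimizing this loss by gradient descent would yield the unsupervised features that stage two then refines with the saliency, orthogonality, and discrimination terms.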
We developed and validated a GAN model that takes a single T1-weighted MR image as input and generates robust, high-quality synCTs in seconds. Our method offers strong potential for supporting near-real-time MR-only treatment planning in the brain.
Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using a semi-CNN. The training of the semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features with a contractive convolutional neural network with reconstruction penalization. In the second stage, the candidate features are used as the input to the semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion) and outperforms several well-established SER features.