Question answering models struggle to generalize to novel compositions of training patterns, such as longer sequences or more complex test structures. Current end-to-end models learn a flat input embedding which can lose input syntax context. Prior approaches improve generalization by learning permutation-invariant models, but these methods do not scale to more complex train-test splits. We propose Grounded Graph Decoding, a method to improve compositional generalization of language representations by grounding structured predictions with an attention mechanism. Grounding enables the model to retain syntax information from the input, thereby significantly improving generalization over complex inputs. By predicting a structured graph containing conjunctions of query clauses, we learn a group-invariant representation without making assumptions on the target domain. Our model significantly outperforms state-of-the-art baselines on the Compositional Freebase Questions (CFQ) dataset, a challenging benchmark for compositional generalization in question answering. Moreover, we effectively solve the MCD1 split with 98% accuracy. All source code is available at https://github.com/gaiyu0/cfq.
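The abstract does not detail the grounding mechanism itself, but the core idea — letting each decoded graph node attend over the input tokens so that syntax context is not lost in a flat embedding — can be sketched minimally. All names below are hypothetical, and this is an illustration of attention-based grounding in general, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ground_node(node_state, token_states):
    # Scaled dot-product attention of one decoder (graph-node) state
    # over the input token embeddings (hypothetical sketch).
    d = node_state.shape[-1]
    scores = token_states @ node_state / np.sqrt(d)   # (T,) similarity scores
    weights = softmax(scores)                         # attention distribution
    context = weights @ token_states                  # input-grounded context
    return np.concatenate([node_state, context])      # node state + grounding

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # 5 input token embeddings, dim 8
node = rng.normal(size=8)          # one graph-node decoder state
grounded = ground_node(node, tokens)
print(grounded.shape)              # concatenated vector of dim 16
```

Because the attention weights are a distribution over input tokens, the decoded node stays tied to specific input positions rather than to a single pooled sentence vector, which is the intuition behind retaining syntax information.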
Purpose
The Group Method of Data Handling (GMDH) neural network has demonstrated good performance in data mining, prediction, and optimization. Scholars have used it to forecast stock and real estate investment trust (REIT) returns in some countries and regions, but not in the United States (US) REIT market. The primary goal of this study is to predict the US REIT market using GMDH and then compare its accuracy with that of the traditional prediction method.

Design/methodology/approach
To forecast the return on the US REIT index, this study used the GMDH neural network and the generalized autoregressive conditional heteroscedasticity (GARCH) model. In this test, the training samples, testing samples, and kernel functions of the GMDH model are varied to investigate their impact on the accuracy of the machine learning approach. Corresponding experiments were performed using the GARCH model, and the accuracies of the two approaches were compared.

Findings
Compared with GARCH, GMDH's accuracy is much higher, indicating that the machine learning approach can provide a highly accurate prediction of REIT prices. The size of the training samples and the kernel functions in the GMDH model affect the accuracy of the prediction results. In particular, the kernel function has a significant impact on prediction accuracy: the linear and linear-covariance kernel functions are simple to train and yield accurate predictions, whereas the quadratic function is difficult to train. Even with small training samples, GMDH can outperform GARCH in prediction accuracy.

Research limitations/implications
Although GMDH shows good performance in predicting the US REIT return, it is still a black-box model, and the algorithm is difficult for financial analysts to develop and customize. The data used in this study come from the US REIT market, which is the world's largest and most liquid market.
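The abstract does not give the study's exact GMDH configuration, but the classic GMDH building block — fitting a small polynomial "partial model" on every pair of input features and keeping the pairs that do best on a held-out set — can be sketched as follows. The lagged synthetic returns, the `keep` parameter, and all function names are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from itertools import combinations

def fit_partial(x1, x2, y):
    # Quadratic partial model (one common GMDH kernel):
    # y ≈ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_partial(coef, x1, x2):
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return X @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    # Fit a partial model on every feature pair and rank by the
    # "external criterion": mean squared error on the validation split.
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        coef = fit_partial(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict_partial(coef, X_val[:, i], X_val[:, j]) - y_val) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]  # surviving partial models feed the next layer
    new_train = np.column_stack([predict_partial(c, X_train[:, i], X_train[:, j])
                                 for _, i, j, c in best])
    new_val = np.column_stack([predict_partial(c, X_val[:, i], X_val[:, j])
                               for _, i, j, c in best])
    return new_train, new_val, best

# Illustrative data: 4 lagged values of a synthetic daily return series.
rng = np.random.default_rng(1)
r = rng.normal(scale=0.01, size=300)
lags = np.column_stack([r[k:k + 290] for k in range(4)])
target = r[4:294]
Xtr, Xva = lags[:200], lags[200:]
ytr, yva = target[:200], target[200:]
tr2, va2, best = gmdh_layer(Xtr, ytr, Xva, yva, keep=3)
print(tr2.shape, va2.shape)
```

Stacking such layers, each trained on the previous layer's surviving outputs, is what makes GMDH self-organizing; swapping the quadratic terms for purely linear ones corresponds to the simpler kernel functions the abstract reports as easier to train.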
Social implications
This research shows that the GMDH model outperforms the GARCH model in forecasting REIT returns. Hence, investors can use the machine learning approach to make more accurate predictions of the target REITs' returns and thus better investment decisions. Future investors and researchers may use GMDH to forecast the performance of REITs in other markets.

Originality/value
This is the first study to apply the GMDH neural network to the US REIT market and to determine the impact of the two factors (training-sample size and kernel function) on its performance. For example, this research is the first to discuss the impact of kernel functions on the US REIT market using the GMDH neural network. It also includes short-term daily prediction returns that were not previously considered, making it a valuable reference for financial industry analysts.