Purpose
Software developers extensively use Stack Overflow (SO) for knowledge sharing on software development. Consequently, software engineering researchers have started mining the structured and unstructured data present in software repositories, including the Q&A software developer community SO, with the aim of improving software development. The purpose of this paper is to show how academics and practitioners can benefit from the valuable user-generated content shared on online social networks, specifically the Q&A community SO, for software development.
Design/methodology/approach
A comprehensive literature review was conducted, and 166 research papers on SO related to software development, published from the inception of SO until June 2016, were categorized.
Findings
Most of the studies revolve around a limited number of software development tasks. Approximately 70 percent of the papers used data sets of millions of posts, applied basic machine learning methods, and conducted semi-automated, quantitative investigations. Future research should therefore focus on overcoming the identified challenges and gaps.
Practical implications
The work on SO is classified into two main categories: “SO design and usage” and “SO content applications.” These categories not only give Q&A forum providers insights into the shortcomings in the design and usage of such forums but also suggest ways to overcome them in the future. They also enable software developers to exploit such forums for the identified under-utilized software development tasks.
Originality/value
The study is the first of its kind to explore work on SO related to software development; it makes an original contribution by presenting a comprehensive review, identifying design and usage shortcomings of Q&A sites, and outlining future research challenges.
Relation extraction suffers a dramatic performance decrease when a model trained on one genre is applied directly to a new genre, owing to distinct feature distributions. Previous studies address this problem by discovering a shared space across genres using manually crafted features, which requires great human effort. To effectively automate this process, we design a genre-separation network that applies two encoders, one genre-independent and one genre-shared, to explicitly extract genre-specific and genre-agnostic features. We then train a relation classifier on the genre-agnostic features of the source genre and apply it directly to the target genre. Experimental results on three distinct genres of the ACE dataset show that our approach achieves up to a 6.1% absolute F1-score gain over previous methods. By incorporating a set of external linguistic features, our approach outperforms the state of the art by a 1.7% absolute F1 gain. We make all programs of our model publicly available for research purposes.
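The core idea above can be illustrated with a toy sketch. This is not the paper's implementation: the linear "encoders," weights, and threshold below are hypothetical, chosen only to show why a classifier that consumes genre-agnostic features transfers unchanged to a new genre.

```python
# Toy sketch (hypothetical, not the paper's code): two encoders map the same
# input to genre-specific and genre-shared features; the relation classifier
# sees only the genre-shared features, so it applies directly to a new genre.

def encode(x, weights):
    """Toy linear encoder: dot product of the input with each weight row."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]

# Hypothetical weights: W_specific reads only a genre-marker feature,
# W_shared reads only genre-agnostic features.
W_specific = [[1.0, 0.0, 0.0]]
W_shared   = [[0.0, 1.0, 1.0]]

def features(x):
    return {"specific": encode(x, W_specific),
            "shared":   encode(x, W_shared)}

def classify_relation(x, threshold=1.0):
    # The classifier consumes only the genre-shared representation.
    shared = features(x)["shared"]
    return "related" if shared[0] > threshold else "unrelated"

# Source-genre input (genre marker = 1.0) vs. target-genre input (marker = 0.0):
src = [1.0, 0.8, 0.9]
tgt = [0.0, 0.8, 0.9]
# The shared features, and hence the prediction, are identical across genres.
assert features(src)["shared"] == features(tgt)["shared"]
print(classify_relation(src), classify_relation(tgt))
```

In the actual model the encoders are learned networks and adversarial/orthogonality objectives keep the two feature spaces separated; the sketch only shows the transfer property that separation buys.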
Hashtags of microblogs can provide valuable information for many natural language processing tasks, and how to recommend reliable hashtags automatically has attracted considerable attention. However, existing studies assume that the entire training corpus crawled from social networks is labelled correctly, while large-sample statistics on real social media show that 8.9% of microblogs with hashtags carry wrong labels. The notable influence of such noisy data on the classifier has previously been ignored. Meanwhile, recency also plays an important role in microblog hashtags, but this information is not used in existing studies: some temporal hashtags, such as World Cup, ignite at a particular time, while at other times the number of people talking about them drops sharply. To address these two shortcomings, the authors propose a long short-term memory (LSTM)-based model that uses temporally enhanced selective sentence-level attention to reduce the influence of wrongly labelled microblogs on the classifier. Experimental results on a dataset of 1.7 million microblogs collected from SINA Weibo demonstrate that the proposed method achieves significantly better performance than state-of-the-art methods.
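The combination of selective attention and a temporal signal can be sketched as follows. This is a deliberate simplification, not the paper's LSTM model: the confidence scores, exponential recency bonus, and decay rate are all assumed, and serve only to show how a softmax over combined scores downweights noisy or stale posts.

```python
import math

# Toy sketch of selective sentence-level attention with a temporal signal
# (hypothetical simplification): each candidate microblog for a hashtag gets a
# score from label confidence plus a recency bonus; a softmax over the scores
# downweights likely mislabelled or stale posts.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(confidences, days_since_post, decay=0.1):
    # Recency bonus decays exponentially with post age (assumed functional form).
    scores = [c + math.exp(-decay * d)
              for c, d in zip(confidences, days_since_post)]
    return softmax(scores)

# Three posts tagged #WorldCup: two on-topic and recent, one mislabelled and old.
conf = [0.9, 0.8, 0.1]   # model confidence that the hashtag label is correct
age  = [1, 2, 300]       # days since posting
w = attention_weights(conf, age)
print([round(x, 3) for x in w])
assert w[2] < w[1] < w[0]   # the noisy, stale post gets the least weight
```

In the real model the per-sentence confidences come from LSTM representations rather than being given, but the weighting mechanism plays the same role.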
Jointly learning embeddings of a knowledge graph (KG) and of text has recently attracted considerable interest. However, previous work fails to integrate the complex structural signals (from the structure representation) with the semantic signals (from the text representation). This paper proposes a novel text-enhanced knowledge graph representation model that utilizes textual information to enhance knowledge representations. In particular, a mutual attention mechanism between the KG and text is proposed to learn more accurate textual representations, further improving the knowledge graph representation within a unified parameter-sharing semantic space. Unlike conventional joint models, no complicated linguistic analysis or strict alignment between the KG and text is required to train the model. In addition, the proposed model can fully incorporate multi-directional signals. Experimental results show that the proposed model achieves state-of-the-art performance on both link prediction and triple classification tasks, and significantly outperforms previous text-enhanced knowledge representation models.
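A minimal sketch of mutual attention between KG and text representations, under assumptions: the two-dimensional embeddings and dot-product scoring below are invented for illustration and are not the paper's architecture. The point is only that attention runs in both directions, each side reweighting the other.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys):
    """Dot-product attention: weight each key by its similarity to the query,
    and return the weighted sum of keys plus the weights themselves."""
    w = softmax([dot(query, k) for k in keys])
    dim = len(keys[0])
    pooled = [sum(wi * k[d] for wi, k in zip(w, keys)) for d in range(dim)]
    return pooled, w

# Hypothetical embeddings: a KG relation vector and word vectors of its
# textual description (two on-topic words, one off-topic word).
relation = [1.0, 0.0]
words = [[0.9, 0.1], [0.0, 1.0], [0.8, 0.2]]

# Text -> KG direction: the relation attends over words to build a textual
# relation representation that suppresses off-topic words.
text_repr, w_text = attend(relation, words)

# KG -> text direction: a word attends over candidate relation vectors (the
# true relation vs. a distractor) -- the other half of the mutual attention.
distractor = [0.0, 1.0]
_, w_kg = attend(words[0], [relation, distractor])

assert w_text[1] == min(w_text)   # off-topic word gets the least weight
assert w_kg[0] > w_kg[1]          # on-topic word aligns with the true relation
```

In the full model both directions are trained jointly in a shared-parameter space, so each representation sharpens the other during learning.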