Online user feedback has become an essential mechanism for software organizations to gain insight into user concerns and to recognize areas for improvement. In software platform ecosystems, staying abreast of user feedback is particularly challenging due to the multitude of feedback channels and the complex interplay with third-party applications. In this paper we report on a mixed-method study of user feedback covering over 40,000 relevant reviews from 139 SECO platforms, drawn from 2.4 million online user reviews scraped from 283 retrieved SECO platforms. Through thematic analysis and high-accuracy machine learning classifiers, we identified and analyzed six categories of user challenges in the areas of Integration, Customer Support, Design & Complexity, Privacy & Security, Cost & Pricing, and Performance & Compatibility. Our analysis also shows significant growth of SECO user feedback over the past five years, highlighting the importance of understanding such user feedback, as well as of research methodologies to automatically study online user concerns in software ecosystems. To further understand mitigation strategies for the challenges reported by end users, we interviewed four executives from large ecosystems and describe their strategies for addressing the identified challenges. This research is a first large-scale study of user feedback in software ecosystems; the categories of user concerns can guide platforms in designing and fostering better software ecosystems. Our methodology for automatically classifying SECO-related user feedback can also serve as guidance for future studies that further advance our understanding of user feedback and how to integrate it into improved software ecosystems.
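The abstract above does not disclose the paper's actual classifier, so as a rough illustration of routing reviews into the six challenge categories, here is a minimal keyword heuristic; the keyword lists are assumptions chosen for the example, not the study's trained model:

```python
# Illustrative keyword heuristic (not the paper's trained classifier) that
# assigns a review to one of the six challenge categories from the study.
CATEGORY_KEYWORDS = {
    "Integration": ["integrate", "api", "sync", "plugin"],
    "Customer Support": ["support", "helpdesk", "ticket", "response"],
    "Design & Complexity": ["interface", "confusing", "complex", "ui"],
    "Privacy & Security": ["privacy", "security", "breach", "permission"],
    "Cost & Pricing": ["price", "expensive", "subscription", "billing"],
    "Performance & Compatibility": ["slow", "crash", "compatib", "lag"],
}

def classify_review(text: str) -> str:
    """Return the category whose keywords occur most often in the review."""
    text = text.lower()
    scores = {
        cat: sum(text.count(kw) for kw in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Uncategorized"

print(classify_review("The app is slow and keeps crashing on my phone"))
# → Performance & Compatibility
```

In practice a study like this would train a supervised text classifier on the thematically coded reviews; the heuristic above only sketches the labeling task itself.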
Asking and answering questions are common activities in both the workplace and everyday life. Knowledge-sharing websites have become a popular resource for obtaining instant and searchable answers. However, users of these sites may encounter challenges in acquiring timely and appropriate content from user-provided answers owing to factors such as limited expertise, spam, and time constraints. Identifying trustworthy experts who can provide relevant and reliable answers in knowledge-sharing communities is crucial to overcome this issue. In this study, we propose a solution to the problem of identifying credible experts on knowledge-sharing sites by introducing the CredibleExpertRank algorithm. Our algorithm calculates a CredibleExpert score based on two main factors: activity and credibility. The credibility score is determined by analyzing users' interactions related to questioning, answering, recommending, and mining users' opinions, while the activity score reflects the user's level of participation on the platform. We conducted experiments to evaluate the performance of the CredibleExpertRank algorithm, using user satisfaction measures for answers to given questions. Our findings confirmed that the credible experts identified by our algorithm provided more relevant and timely answers compared to other ordinary users. The timely nature of the credible experts' answers was due to the reflection of their activity factor, while the superior performance in relevance was attributed to the high recommendation rate of their answers and positive evaluations received from opinion mining results. Our study undertakes an extensive investigation focused on the identification and prioritization of credible experts, revealing their profound advantages in significantly enhancing the overall quality of knowledge-sharing platforms. 
We proposed the CredibleExpertRank algorithm as a powerful method for effectively identifying trustworthy experts and giving priority to their answers. Through a meticulous process of experimental evaluation, we provide compelling evidence that this approach leads to substantial improvements in both search efficiency and reliability on knowledge-sharing sites. By highlighting the potential benefits derived from the identification of credible experts, our study underscores their pivotal role in elevating the overall performance of knowledge-sharing platforms.
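The abstract states that a CredibleExpert score combines an activity factor (participation level) with a credibility factor (recommendation rate and opinion-mining results) but does not give the formula. The sketch below shows one plausible way to combine them; the weights, normalization, and field names are assumptions for illustration, not the paper's actual algorithm:

```python
# Hypothetical CredibleExpert-style scoring: combine an activity factor with
# a credibility factor. Weights (0.4 / 0.6) and normalization are assumed.
def credible_expert_score(user: dict, w_act: float = 0.4, w_cred: float = 0.6) -> float:
    # Activity: participation volume (questions + answers), squashed into
    # [0, 1) so prolific users do not dominate without bound.
    activity = 1.0 - 1.0 / (1 + user["questions"] + user["answers"])
    # Credibility: share of answers that were recommended, weighted by an
    # opinion-mining sentiment score in [0, 1] for feedback on the user.
    recommended_rate = user["recommended"] / max(user["answers"], 1)
    credibility = recommended_rate * user["sentiment"]
    return w_act * activity + w_cred * credibility

users = {
    "alice": {"questions": 5, "answers": 40, "recommended": 30, "sentiment": 0.9},
    "bob":   {"questions": 2, "answers": 10, "recommended": 2,  "sentiment": 0.5},
}
ranking = sorted(users, key=lambda u: credible_expert_score(users[u]), reverse=True)
print(ranking)  # → ['alice', 'bob']
```

Ranking users by such a score is what lets the platform prioritize answers from credible experts, which the evaluation links to both timelier (activity factor) and more relevant (credibility factor) answers.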