Empirical insights into promising commercial sentiment analysis solutions that go beyond their vendors' claims are rare. Moreover, because the field evolves constantly, previous studies are far from reflecting the current situation. The goal of this article is to evaluate and compare current solutions in two experimental studies. In the first study, based on tweets about airline service quality, we test the solutions of six vendors with different market power, namely Amazon, Google, IBM, Microsoft, Lexalytics, and MeaningCloud, and report their accuracy, precision, recall, (macro) F1, time performance, and service level agreements (SLAs). In the second study, we compare two of the services, the Google Cloud Natural Language API and the MeaningCloud Sentiment Analysis API, in depth across multiple data sets and over time. To evaluate changes over time, we reuse the data set from November 2020; in addition, we use further topic-specific and general Twitter data sets. The experiments show that IBM Watson NLU and the Google Cloud Natural Language API may be preferred when negative text detection is the primary concern. When retested in July 2022, the Google Cloud Natural Language API remained the clear winner over the MeaningCloud Sentiment Analysis API, but only on the airline service quality data set; on the other data sets, each service showed specific benefits and drawbacks. Furthermore, we detected changes in the sentiment classification of both services over time. Our results show that an independent, critical, and longitudinal experimental analysis of sentiment analysis services can provide insights into their overall reliability and classification accuracy beyond marketing claims, allowing practitioners to compare solutions on real data and to analyze potential weaknesses and margins of error before making an investment.
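The metrics reported in the study can be illustrated with a minimal sketch: macro F1 is the unweighted mean of the per-class F1 scores, so each sentiment class counts equally regardless of how often it occurs. The three-class label set and the five-tweet gold/predicted labels below are hypothetical toy data, not the airline data set used in the article:

```python
def macro_f1(y_true, y_pred, labels=("negative", "neutral", "positive")):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1_scores = []
    for label in labels:
        # Per-class confusion counts against the gold labels.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical gold labels and API predictions for five tweets.
gold = ["negative", "negative", "neutral", "positive", "positive"]
pred = ["negative", "neutral", "neutral", "positive", "negative"]
print(round(macro_f1(gold, pred), 3))  # → 0.611
```

Because each class contributes equally to the average, macro F1 penalizes a service that performs well on the majority class but poorly on a rare one, which matters for skewed sentiment distributions such as complaint-heavy airline tweets.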
Knowledge-based capital is a key factor for productivity growth. Over the past 15 years, it has been increasingly recognised that knowledge-based capital comprises much more than technological knowledge and that these other components are essential for understanding the productivity developments and competitiveness of both firms and economies. We develop selected indicators of knowledge-based capital, often denoted as intangible capital, on the basis of publicly available data from online platforms. These indicators, based on data from Facebook and the employer branding and review platform Kununu, are compared via OLS regressions with firm-level survey data from the Mannheim Innovation Panel (MIP). All regressions show a positive and significant relationship between survey-based firm-level expenditures for marketing and on-the-job training and the respective information from the online platforms. We therefore explore the possibility of predicting brand equity and firm-specific human capital with machine learning methods.
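The regression step described above can be sketched in its simplest form. The abstract does not specify the model, so this is a minimal bivariate OLS via the closed-form normal equations; the paper presumably uses multivariate specifications with controls, and the firm-level numbers below are purely illustrative, not MIP or Facebook data:

```python
def ols_fit(x, y):
    """Closed-form simple OLS for y = a + b*x (normal equations)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Slope: covariance of x and y over variance of x.
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical firm-level observations: log platform-based indicator
# (e.g. Facebook activity) vs. log survey-based marketing expenditure.
log_platform = [2.0, 3.1, 4.2, 5.0, 5.9]
log_survey = [1.1, 1.8, 2.6, 3.0, 3.7]
a, b = ols_fit(log_platform, log_survey)
print(f"intercept={a:.2f}, slope={b:.2f}")  # a positive slope mirrors the reported relationship
```

A positive, significant slope in such a regression is what links the platform-based indicators to the survey-based expenditures and motivates the subsequent machine learning prediction step.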