2019
DOI: 10.24251/hicss.2019.111
Towards Computational Assessment of Idea Novelty

Abstract: On crowdsourcing ideation websites, companies can easily collect a large number of ideas. Screening such a volume of ideas is costly and challenging, necessitating automatic approaches. It would be particularly useful to automatically evaluate idea novelty, since companies commonly seek novel ideas. Three computational approaches were tested, based on Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), and term frequency-inverse document frequency (TF-IDF), respectively. These three appro…
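The abstract's TF-IDF approach can be illustrated with a minimal sketch: represent each idea as a TF-IDF vector and score novelty as one minus the maximum cosine similarity to any other idea in the pool. This is an assumption about the method, not the authors' implementation — the paper's tokenization, weighting, and aggregation choices may differ, and the function names here are hypothetical.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF vectors (sparse dicts) for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vec = {t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse TF-IDF vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty_scores(ideas):
    """Novelty of each idea = 1 - max cosine similarity to any other idea."""
    docs = [idea.lower().split() for idea in ideas]   # naive tokenization (assumption)
    vecs = tfidf_vectors(docs)
    return [
        1.0 - max(cosine(v, w) for j, w in enumerate(vecs) if j != i)
        for i, v in enumerate(vecs)
    ]
```

Under this sketch, an idea that shares little vocabulary with the rest of the pool receives a novelty score near 1.0; the LSA and LDA variants mentioned in the abstract would replace the raw TF-IDF vectors with latent-space representations before the same distance step.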

Cited by 10 publications (8 citation statements); references 32 publications.
“…Computers actively participate in decision-making processes (Jarrahi, 2018) and assess the novelty of ideas (K. Wang et al, 2019). Within the research field of computational creativity, studies and artifacts show how AI-based systems can take over different creative tasks and thus act as active partners in creative processes (Nass et al, 1996).…”
Section: Discussion
“…In our research, we specifically refer to the process of idea evaluation and whether it would be beneficial to use AI to assess and comment on ideas. AI-based systems are already used for idea evaluation in various domains, where the evaluation can be compared to that by a human expert (Maher & Fisher, 2012; Varshney et al, 2019; K. Wang et al, 2019).…”
Section: Discussion
“…Thus, similar to traditional workplaces, in digital work settings, it might be easier to evaluate and spot the novelty of ideas but not their usefulness. It has been shown that human-technology interaction is enhancing one's novelty in ideas (Shuxin et al, 2017), and novelty overall evokes happiness in people, and it is making things interesting and easier for outsourcers to understand and evaluate (Wang et al, 2019). Such underlying mechanism might not work as effectively when outsourcers need to evaluate the usefulness of gig workers' ideas, because these might need either outsourcers' close expertise in the field to understand the ideas or practical implication of ideas, taking additional time and resources for the outsourcers.…”
Section: Creative Self-Efficacy and Accuracy of Creativity Evaluations
“…Little research has addressed this approach. A growing number of studies shed some light on the semantic features of ideas in crowdsourcing by computational techniques: Schaffhausen and Kowalewski (2016) developed the semantic similarity algorithm to estimate idea similarity in an open innovation platform; Wang et al (2019) used topic modeling to measure idea novelty; Hoornaert et al (2017) applied latent semantic indexing to predict IT product implementation in the Mendeley community; and Rhyn and Blohm (2017) used a machine learning approach to filter the quality of ideas from crowdsourcing. However, at present, research focusing on popular contributor prediction by user-generated content is still largely uninvestigated.…”
Section: Literature Review