2019
DOI: 10.2196/preprints.16691
Preprint

The Influence of Doctors’ Online Reputation on the Sharing of Outpatient Experiences: Empirical Study (Preprint)

Abstract: BACKGROUND: Reviews are important for consumers making informed decisions in online communities and for organizations predicting future sales. Most existing studies have been conducted in product domains, with little attention paid to healthcare. Whether patients prefer to use these new platforms to discuss the reputation of doctors has so far remained an open question. OBJECTIVE: We investig…


Cited by 1 publication (1 citation statement)
References 47 publications
“…Intriguing chain-of-thought (CoT) techniques have greatly exploited the emergent abilities of LLMs by eliciting them to decompose multi-step reasoning. Recent work in this field can be broadly classified into four categories: (i) improving performance on general-purpose reasoning tasks (Wei et al., 2022; Kojima et al., 2022a; Wang et al., 2022b; Zhou et al., 2022; Fu et al., 2022), i.e., arithmetic, symbolic, logical, and commonsense reasoning; (ii) applying CoT to domain-specific reasoning, such as multi-modality, or to purely linguistic tasks, such as translation (He et al., 2023), summarization, sentiment analysis (Fei et al., 2023), question answering, etc.; (iii) analyzing the mechanics and interpretability of CoT (Wang et al., 2022a; Lyu et al., 2023); (iv) distilling CoT techniques into smaller models (Ho et al., 2022; Kim et al., 2023).…”
Section: Related Work
confidence: 99%
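The quoted statement describes how CoT prompting elicits an LLM to decompose multi-step reasoning. As a minimal illustrative sketch, assuming a generic text-completion backend, the zero-shot CoT recipe of Kojima et al. (2022) can be reduced to two prompts: one with a step-by-step trigger to draw out the reasoning, and a second to extract the final answer. The names generate and zero_shot_cot are hypothetical placeholders, not APIs from any of the cited works.

# Sketch of zero-shot chain-of-thought (CoT) prompting, assuming a generic
# text-completion backend (Kojima et al., 2022 style, two-stage prompting).

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call; replace with a real client."""
    raise NotImplementedError("plug in your LLM backend here")

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit multi-step reasoning with the CoT trigger phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(reasoning_prompt)

    # Stage 2: extract a concise final answer from the generated reasoning.
    answer_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the answer is"
    return generate(answer_prompt)

if __name__ == "__main__":
    print(zero_shot_cot("If a pen costs 3 dollars, how much do 4 pens cost?"))

In practice the same two-stage pattern is what the cited zero-shot CoT work evaluates on arithmetic, symbolic, and commonsense benchmarks; few-shot CoT variants replace the trigger phrase with worked examples in the prompt.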