2023
DOI: 10.1111/1911-3846.12832

FinBERT: A Large Language Model for Extracting Information from Financial Text*

Abstract: We develop FinBERT, a state‐of‐the‐art large language model that adapts to the finance domain. We show that FinBERT incorporates finance knowledge and can better summarize contextual information in financial texts. Using a sample of researcher‐labeled sentences from analyst reports, we document that FinBERT substantially outperforms the Loughran and McDonald dictionary and other machine learning algorithms, including naïve Bayes, support vector machine, random forest, convolutional neural network, and long short‐term memory…

Cited by 157 publications (85 citation statements)
References 82 publications (138 reference statements)
“…A. Huang et al. (2022) use human-annotated financial-text sentences in three sentiment classes (positive, neutral, and negative) to fine-tune FinBERT for sentiment analysis. They report that the overall accuracy of FinBERT is 88.2% for their test sample, substantially higher than that of the dictionary approach (62.1%), NB (73.6%), SVM (72.6%), RF (71.9%), CNN (75.1%), and LSTM (76.3%).…”
Section: Machine Learning Methods (mentioning)
confidence: 99%
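The overall accuracy reported in the statement above is the fraction of test sentences whose predicted class matches the researcher-assigned label. A minimal sketch of that metric (the labels below are hypothetical illustrations, not data from the study):

```python
def overall_accuracy(gold, pred):
    """Fraction of sentences whose predicted sentiment class
    matches the gold (human-annotated) label."""
    assert len(gold) == len(pred), "label lists must align"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)


# Hypothetical three-class labels (positive / neutral / negative),
# standing in for researcher-labeled analyst-report sentences.
gold = ["positive", "neutral", "negative", "neutral", "positive"]
pred = ["positive", "neutral", "positive", "neutral", "positive"]

print(overall_accuracy(gold, pred))  # 0.8
```

The same metric, computed over each model's predictions on a shared test sample, yields the 88.2% vs. 62.1%–76.3% comparison quoted above.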
“…A. Huang et al. (2022) report the sensitivity of model performance to varying training-sample sizes. FinBERT maintains an 22.…”
Section: Machine Learning Methods (mentioning)
confidence: 99%
“…A recent article analyzes FinBERT’s sentiment performance, as well as its advantages over classic BERT large language models (LLMs) [43]. FLANG models [44] expand upon FinBERT and benefit from training on the Financial Language Understanding Evaluation (FLUE) benchmark.…”
Section: Related Work (mentioning)
confidence: 99%
“…The most successful labeling functions we tested utilize large language models (Devlin et al., 2018; Brown et al., 2020; Hoffmann et al., 2022) to achieve one or both of the above tasks. Large language models can achieve impressive results on passage retrieval and information extraction tasks (Agrawal et al., 2022; Huang et al., 2022), although not without limitations and biases (Bender et al., 2021).…”
Section: Core Features and User Interface (mentioning)
confidence: 99%