2024
DOI: 10.1039/d3dd00202k
Harnessing GPT-3.5 for text parsing in solid-state synthesis – case study of ternary chalcogenides

Maung Thway,
Andre K. Y. Low,
Samyak Khetan
et al.

Abstract: In solid-state thermoelectrics, optimally doped single-phase compounds are necessary for advancing the state-of-the-art. Thermoelectric devices, which convert heat into electricity and vice versa, rely on bulk materials, usually made using solid-state...

Cited by 3 publications (3 citation statements)
References 42 publications
“…[1][2][3][4][5][6][7] These models have received substantial attention for the fact that they can be intuitively "programmed" or "taught" using daily conversational language, thereby assisting with diverse chemistry research tasks. [8][9][10][11][12][13][14][15][16][17][18][19][20] It is envisioned that the evolution from text-only to more dynamic, multi-modal LLMs will result in even more powerful and convenient AI assistants across various applications. 5,[21][22][23] The recent introduction of GPT-4V, with 'V' denoting its vision capability, stands as a testament to this progress.…”
Section: Introduction
confidence: 99%
“…The recent success of general-purpose large language models (LLMs) offers a new direction for chemical data extraction. While LLMs can achieve notable performance on NER–REL extraction with task-specific prompt engineering (obtaining an F1-score of ∼90% with GPT-4), their results can be inconsistent and highly dependent on the task and instructions provided. Fine-tuning LLMs for chemical tasks offers potential improvements, but this process demands large data sets of labeled examples and remains computationally intensive.…”
Section: Introduction
confidence: 99%
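The F1-score quoted in the statement above is the standard entity-level metric: the harmonic mean of precision and recall over the (entity, label) tuples an extractor returns versus a gold annotation. A minimal sketch of how such a score is computed — the example entities are hypothetical, not taken from the paper:

```python
def f1_score(predicted: set, gold: set) -> float:
    """Entity-level F1 over extracted (mention, label) tuples."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # exact tuple matches count as true positives
    precision = tp / len(predicted)     # fraction of extractions that are correct
    recall = tp / len(gold)             # fraction of gold entities recovered
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold annotations vs. LLM extractions for one synthesis sentence
gold = {("Bi2Te3", "MATERIAL"), ("ball milling", "OPERATION"), ("673 K", "TEMPERATURE")}
pred = {("Bi2Te3", "MATERIAL"), ("ball milling", "OPERATION"), ("12 h", "TIME")}
print(round(f1_score(pred, gold), 3))  # → 0.667
```

Exact-match scoring like this is strict: a boundary or label mismatch costs both a false positive and a false negative, which is one reason prompt-engineered LLM extraction scores vary so much across tasks and instructions.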
“…General-purpose large language models (LLMs) are a form of generative artificial intelligence (AI), pretrained on a broad data set so they can be applied to many different tasks using natural language. Pretrained LLMs have been investigated for a wide variety of chemical tasks, such as extracting structured data from the literature, writing numerical simulation software, and education. LLM-based workflows have been used to plan syntheses of organic molecules and metal–organic frameworks (MOFs). Recent work has benchmarked materials science and general chemical knowledge of existing LLMs, and there are efforts to develop chemistry/materials-specific LLMs. Fine-tuning LLMs on modest amounts of data improves performance for specific tasks, while still taking advantage of the general pretraining to provide basic symbol interpretation and output formatting guidance.…”
confidence: 99%