2022
DOI: 10.48550/arxiv.2206.01962
Preprint

Formal Specifications from Natural Language

Abstract: We study the ability of language models to translate natural language into formal specifications with complex semantics. In particular, we fine-tune off-the-shelf language models on three datasets consisting of structured English sentences and their corresponding formal representation: 1) First-order logic (FOL), commonly used in software verification and theorem proving; 2) linear-time temporal logic (LTL), which forms the basis for industrial hardware specification languages; and 3) regular expressions (rege…
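To make the task concrete, the sketch below shows what pairs of structured English sentences and their formal counterparts (FOL, LTL, regular expressions) might look like. These pairs are invented for illustration and are not taken from the paper's datasets; only the regex target is checked mechanically, since Python's standard library can evaluate it.

```python
import re

# Hypothetical examples of the translation task: a structured English
# sentence paired with a formal specification in each target formalism.
pairs = {
    "FOL": ("every student passed some exam",
            "forall x. (student(x) -> exists y. (exam(y) & passed(x, y)))"),
    "LTL": ("globally, if request then eventually grant",
            "G (request -> F grant)"),
    "regex": ("lines consisting of one or more digits",
              r"[0-9]+"),
}

# For the regex target we can check that the formal output actually
# matches the behavior the English sentence describes.
_, pattern = pairs["regex"]
assert re.fullmatch(pattern, "2022")
assert not re.fullmatch(pattern, "12a45")
```

A fine-tuned model would map the left element of each pair to the right; evaluating FOL and LTL outputs requires a theorem prover or model checker rather than a library call, which is part of what makes the task semantically demanding.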

Cited by 6 publications (7 citation statements) | References 41 publications
“…Related Work Existing works fall under two buckets: one which admits support for a range of LTL formulas but compromises on the expressiveness of the input (Hahn et al 2022;Schmitt 2022;Narizzano et al 2018;Narizzano and Vuotto 2017), and the other which admits natural language inputs but were built for a particular domain like robotics and are not readily useful as a general purpose package for practitioners (Wang et al 2020;Wang 2020;Nikora and Balcom 2009;Dwyer, Avrunin, and Corbett 1998;Kim, Banks, and Shah 2017;Lignos et al 2015). Furthermore, among these works, other than (Wang 2020;Narizzano and Vuotto 2017;Schmitt 2022), none have publicly available code and are therefore not readily usable for practitioners.…”
Section: Natural Language and LTL
confidence: 99%
“…Other approaches include an interactive method using SMT solving and semantic parsing [16], or structured temporal aspects in grounded robotics [46] and planning [33]. Neural networks have only recently been used to translate into temporal logics, e.g., by training a model for STL from scratch [22], fine-tuning language models [20], or an approach to apply GPT-3 [14,30] in a one-shot fashion, where [14] output a restricted set of declare templates [34] that can be translated to a fragment of LTLf [10]. Translating natural language to LTL has especially been of interest to the robotics community (see [17] for an overview), where datasets and application domains are, in contrast to our setting, based on structured natural language.…”
Section: Natural Language to Linear-time Temporal Logic
confidence: 99%
“…LLMs are Transformers [43], which is the state-of-the-art neural architecture for natural language processing. Additionally, Transformers have shown remarkable performance when being applied to classical problems in verification (e.g., [19,41,26,9]), reasoning (e.g., [28,51]), as well as the auto-formalization [36] of mathematics and formal specifications (e.g., [50,20,22]).…”
Section: Large Language Models
confidence: 99%