2016
DOI: 10.1016/j.jnt.2015.09.020
Newton polygons of L functions of polynomials x^d + ax

Cited by 10 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting); References 9 publications.
“…For f(x) = x^d + ax, Zhu, Liu–Niu and Ouyang–J. Yang obtained the slopes in [Z2, Theorem 1.1], [LN1, Theorem 1.10] and [OY, Theorem 1.1]; see also R. Yang [Y, §1 Theorem] for earlier results.…”
Section: Results (mentioning, confidence: 99%)
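The excerpts here rely on the classical Hodge bound for one-variable exponential sums; the following LaTeX sketch states that bound (a standard fact, reproduced only for context, not a claim of the cited paper):

```latex
% Hodge bound: for f of degree d with p not dividing d, the
% L-function L^*(f,t) is a polynomial of degree d-1, and its
% q-adic Newton polygon lies on or above the Hodge polygon
% with slopes 1/d, 2/d, ..., (d-1)/d.
\[
  \mathrm{NP}_q(f,t) \;\ge\; \mathrm{HP}(d),
  \qquad
  \mathrm{HP}(d)\ \text{has vertices}\
  \Bigl(k,\ \frac{k(k+1)}{2d}\Bigr),\quad 0 \le k \le d-1.
\]
```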
“…In [N], Niu gave a lower bound for the Newton polygon NP_q(f, χ, t). In [OY, Theorem 4.3], Ouyang–Yang showed that if the Newton polygon of L*(f, t) is sufficiently close to its Hodge polygon, the slopes of NP_q(f, χ, t) for general χ follow from the slopes of NP_q(f, t). As a consequence, they obtained the slopes of NP_q(x^d + ax, χ, t) when p is larger than an explicit bound depending only on d and h.…”
Section: Results (mentioning, confidence: 99%)
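To make "sufficiently close to its Hodge polygon" concrete, here is an illustrative computation (ours, not taken from [OY]) of the Hodge polygon for the smallest nontrivial degree:

```latex
\[
  d = 3:\qquad
  \mathrm{HP}(3)\ \text{has vertices}\ (0,0),\ \bigl(1,\tfrac{1}{3}\bigr),\ \bigl(2,1\bigr),
  \qquad \text{slopes}\ \tfrac{1}{3},\ \tfrac{2}{3}.
\]
```

Per the excerpt, when the slopes of NP_q(f, t) are close enough to these Hodge slopes, the twisted slopes follow; the closeness requirement is what forces p to exceed an explicit bound depending only on d and h.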
“…Instruction-tuned large language models (LLMs) have been successful at knowledge retrieval, text extraction, summarization, and reasoning tasks without requiring domain-specific fine-tuning. Prompting LLMs with instructions and data contexts described in natural language has emerged as a means of task and domain specification as well as of controlling model behavior.…”
Section: Introduction (mentioning, confidence: 99%)
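The "instruction + data context" prompting pattern described above is easy to make concrete; the minimal Python sketch below (all names and strings are illustrative assumptions, not from the cited work) assembles such a prompt as a chat-style message list:

```python
# Minimal sketch of instruction + data-context prompting: the task
# specification (instruction) and the data to operate on (context)
# are packed into natural-language messages for an instruction-tuned LLM.

def build_prompt(instruction: str, data_context: str, query: str) -> list[dict]:
    """Assemble a chat-style message list using the common
    system/user role convention of instruction-tuned LLM APIs."""
    return [
        {"role": "system", "content": instruction},  # task/domain specification
        {"role": "user",
         "content": f"Context:\n{data_context}\n\nQuestion: {query}"},
    ]

if __name__ == "__main__":
    for m in build_prompt(
        instruction="You are a chemistry assistant. Answer only from the context.",
        data_context="Compound A melts at 114 C and is soluble in ethanol.",
        query="What is the melting point of compound A?",
    ):
        print(f"[{m['role']}] {m['content']}")
```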
“…Among these models, ChatGPT (OpenAI) has emerged as a particularly powerful tool based on GPT-3.5, designed specifically to generate natural and contextually appropriate responses in a conversational setting. Building on GPT-3, GPT-3.5 was trained on a larger corpus of textual data and with additional training techniques such as Reinforcement Learning from Human Feedback (RLHF), which incorporates human knowledge and expertise into the model. This chatbot is an implementation of GPT-3.5 fine-tuned on conversational data, allowing it to generate appropriate responses to user input in a conversational context.…”
Section: Introduction (mentioning, confidence: 99%)
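RLHF, as summarized in this excerpt, hinges on a reward model trained from human preference comparisons; the toy Python sketch below shows the standard Bradley–Terry preference loss often used for that step (a generic formulation we assume here, not a description of OpenAI's training code):

```python
import math

# Toy Bradley-Terry preference loss for RLHF reward modeling:
# given scalar reward scores for a human-preferred response and a
# rejected one, minimize -log(sigmoid(r_chosen - r_rejected)).

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

print(preference_loss(2.0, 0.5))  # small loss: ranking agrees with the human
print(preference_loss(0.5, 2.0))  # large loss: ranking violates the preference
```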