2019 IEEE 13th International Conference on Semantic Computing (ICSC)
DOI: 10.1109/icosc.2019.8665639

Real-Time Optimized N-Gram for Mobile Devices

Abstract: With the increasing number of mobile devices, there has been continuous research on generating optimized Language Models (LMs) for soft keyboards. In spite of advances in this domain, building a single LM that serves low-end feature phones as well as high-end smartphones is still a pressing need. Hence, we propose a novel technique, Optimized N-gram (Op-Ngram), an end-to-end N-gram pipeline that utilises mobile resources efficiently for faster Word Completion (WC) and Next Word Prediction (NWP). Op-Ngram applies Stupid…
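The abstract is cut off after "Stupid", which most likely refers to Stupid Backoff smoothing (Brants et al., 2007). As an illustration only, here is a minimal sketch of Stupid Backoff scoring with plain in-memory count tables; this is not Op-Ngram's optimized on-device representation, whose details are truncated here.

```python
# Minimal sketch of Stupid Backoff scoring (Brants et al., 2007).
# Illustrative only; not the paper's implementation.
from collections import Counter

ALPHA = 0.4  # fixed backoff weight from the original Stupid Backoff paper

def train(corpus, max_n=3):
    """Count all n-grams up to max_n over tokenized sentences."""
    counts = Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return counts

def score(counts, word, context):
    """S(word | context): relative frequency if seen, else back off."""
    ngram = tuple(context) + (word,)
    if counts[ngram] > 0:
        if context:
            return counts[ngram] / counts[tuple(context)]
        total = sum(c for g, c in counts.items() if len(g) == 1)
        return counts[ngram] / total
    if not context:
        return 0.0  # word never seen at all
    return ALPHA * score(counts, word, list(context)[1:])

corpus = [["i", "like", "tea"], ["i", "like", "coffee"], ["you", "like", "tea"]]
counts = train(corpus)
print(score(counts, "tea", ["i", "like"]))      # seen trigram -> 1/2
print(score(counts, "coffee", ["you", "like"])) # backs off -> 0.4 * 1/3
```

Stupid Backoff deliberately returns unnormalized relative-frequency scores rather than true probabilities, which keeps scoring cheap; that economy is consistent with the paper's focus on resource-constrained devices.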


Cited by 7 publications (6 citation statements: 0 supporting, 6 mentioning, 0 contrasting).
References 6 publications.
“…We use Op-Ngram [9] as the underlying language model (LM) for suggestions, which has a vocabulary of top-100k words for each language, stored in a Marisa Trie [26], ranked according to their statistical probability in the training corpus. This trie is referred to as VocabTrie.…”
Section: घर (citation type: mentioning)
confidence: 99%
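To make the quoted design concrete, below is a minimal sketch of such a rank-ordered vocabulary trie, assuming the Python marisa_trie package. The word list, counts, and the complete() helper are hypothetical illustrations, not taken from Op-Ngram.

```python
# A minimal sketch of a "VocabTrie"-style lookup, assuming the Python
# `marisa_trie` package; words and counts below are hypothetical.
import marisa_trie

# Hypothetical unigram counts from a training corpus; a real pipeline
# would keep the top-100k words per language, as the citing paper notes.
counts = {"the": 1200, "they": 340, "theory": 55, "home": 410}
TOP_K = 100_000

# Rank words by corpus frequency and store word -> rank in a RecordTrie
# ("<I" packs each rank as an unsigned 32-bit integer).
ranked = sorted(counts, key=counts.get, reverse=True)[:TOP_K]
vocab_trie = marisa_trie.RecordTrie("<I", [(w, (r,)) for r, w in enumerate(ranked)])

def complete(prefix, n=3):
    """Word completion: candidates sharing `prefix`, best-ranked first."""
    cands = vocab_trie.items(prefix)  # [(word, (rank,)), ...]
    cands.sort(key=lambda kv: kv[1][0])
    return [w for w, _ in cands[:n]]

print(complete("the"))  # ['the', 'they', 'theory']
```

Storing the rank rather than the raw probability is one plausible way to keep the value payload small while still ordering completion candidates, which matches the quote's description of words "ranked according to their statistical probability".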
“…where n_c denotes the number of actual characters in the test set and n_k denotes the number of keystrokes that were needed to produce the test set with a soft keyboard. The NWP metric [9] refers to the probability that, given a sequence of the first (k − 1) words from a sentence in test data, the next word in that sentence, w_k, will appear among the word prediction candidates. Mathematically,…”
Section: E. Evaluation (citation type: mentioning)
confidence: 99%
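Both formulas are truncated in the quote above. Under the commonly used definitions of these metrics (an assumption, since the quoted equations are cut off), they can be written as:

```latex
% Keystroke Saving Ratio: percentage of keystrokes saved relative to
% typing every character (standard definition; quoted formula truncated).
\mathrm{KSR} = \frac{n_c - n_k}{n_c} \times 100\%

% NWP: fraction of prediction events where the true next word w_k
% appears in the candidate list C built from the preceding words.
\mathrm{NWP} = \frac{1}{N} \sum_{i=1}^{N}
  \mathbb{1}\left[ w_k^{(i)} \in \mathcal{C}\left( w_1^{(i)}, \ldots, w_{k-1}^{(i)} \right) \right]
```

Here N is the number of next-word prediction events in the test set; the indicator form of NWP and the percentage form of KSR are reconstructions consistent with the quoted definitions, not the citing paper's verbatim equations.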
“…A language model (LM) calculates the probability of the current word or character given a previous word or character sequence [1]. For the sequence W = (w_1, w_2, …, w_n), the probability assigned by the LM is denoted as P(W).…”
Section: Introduction (citation type: mentioning)
confidence: 99%
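For context, the quoted P(W) is conventionally factorized by the chain rule, and an N-gram model such as Op-Ngram approximates each history by the previous N − 1 words:

```latex
% Chain-rule factorization of P(W), followed by the N-gram
% approximation that truncates each history to the last N-1 words.
P(W) = \prod_{i=1}^{n} P\left( w_i \mid w_1, \ldots, w_{i-1} \right)
     \approx \prod_{i=1}^{n} P\left( w_i \mid w_{i-N+1}, \ldots, w_{i-1} \right)
```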