1999
DOI: 10.1590/s0104-93131999000100011
[No Title Available]

Cited by 3 publications (2 citation statements)
References 0 publications
“…ID/250 and ID/1000 run an image denoise algorithm to remove Gaussian noise from 2D grayscale images of dimension 250 by 250 and 1000 by 1000. FBP/C1 and FBP/C3 perform belief propagation on a factor graph provided by the cora-1 and cora-3 datasets [79,91]. ALS/N runs an alternating least squares algorithm on the NPIC-500 dataset [81].…”
Section: Parallelizing Dynamic Data-graph Computations (mentioning)
confidence: 99%
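For context, the benchmarks in the excerpt above are all dynamic data-graph computations: an update function is applied per vertex, and a vertex is rescheduled only when a neighbor's value changes enough to matter. The sketch below is a minimal, generic illustration of that scheduling pattern; the graph, update rule, and tolerance are hypothetical and are not taken from the cited benchmarks.

```python
from collections import deque

def run_dynamic_datagraph(neighbors, values, update, tol=1e-3):
    """Minimal dynamic data-graph computation: apply `update` to active
    vertices and reactivate neighbors of any vertex whose value changed
    by more than `tol`."""
    active = deque(neighbors)           # start with every vertex active
    in_queue = set(active)
    while active:
        v = active.popleft()
        in_queue.discard(v)
        new_val = update(v, values, neighbors[v])
        if abs(new_val - values[v]) > tol:
            values[v] = new_val
            for u in neighbors[v]:      # a large change reactivates the neighbors
                if u not in in_queue:
                    active.append(u)
                    in_queue.add(u)
    return values

# Toy example: iterative local averaging on a 4-cycle, standing in for the
# denoise / belief-propagation / ALS updates mentioned in the excerpt.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = {0: 0.0, 1: 1.0, 2: 0.0, 3: 1.0}
avg = lambda v, vals, nbrs: sum(vals[u] for u in nbrs) / len(nbrs)
print(run_dynamic_datagraph(neighbors, values, avg))
```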
“…The LDA generative model assumes that documents contain a mixture of topics and that each topic is a distribution over words; since the words in a document are observed, the latent topic variables can be estimated through Gibbs sampling. We used the implementation of the LDA algorithm provided by the Mallet package [6], adjusting one parameter (alpha ~ 0.30) to favor fewer topics per document, since individual utterance updates tend to contain fewer topics than the typical documents (newspaper or encyclopedia articles) to which LDA is applied. In addition, to avoid unnecessary model complexity while keeping the results interpretable, we reduced the LDA vocabulary to the 500 most frequent words, excluding stopwords, in the Reddit corpus.…”
Section: Topic (mentioning)
confidence: 99%
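A rough sketch of the topic-modelling setup described in the excerpt, using gensim's LdaModel in Python as a stand-in for the Mallet implementation the authors used (gensim fits LDA with variational inference rather than Mallet's Gibbs sampling). The toy corpus, the topic count, and the exact preprocessing are assumptions for illustration only.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Hypothetical stand-in corpus: each "document" is one tokenized utterance,
# with stopwords assumed to have been removed already.
docs = [
    ["game", "update", "patch", "server"],
    ["patch", "notes", "balance", "server"],
    ["thanks", "devs", "great", "update"],
]

num_topics = 10  # assumed; the excerpt does not state the number of topics

# Keep only the 500 most frequent words, mirroring the vocabulary cut
# described in the excerpt.
dictionary = Dictionary(docs)
dictionary.filter_extremes(no_below=1, no_above=1.0, keep_n=500)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# alpha ~ 0.30 per topic favours documents that mix few topics,
# as the excerpt describes for short utterance updates.
lda = LdaModel(
    corpus=corpus,
    id2word=dictionary,
    num_topics=num_topics,
    alpha=[0.30] * num_topics,
    passes=10,
    random_state=0,
)

# Inspect a few topics as lists of their top words.
for topic_id, words in lda.show_topics(num_topics=3, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])
```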