Self-Organizing Nebulous Growths for Robust and Incremental Data Visualization
2021 · DOI: 10.1109/tnnls.2020.3023941

Cited by 13 publications (8 citation statements) · References 26 publications
Citation statements by type: 0 supporting, 8 mentioning, 0 contrasting
“…The pseudocode for the incremental learning algorithm is given below. The functionality of the SONG [14] algorithm can be summarised in three primary iterative steps: 1) Vector Quantization, 2) Self-Organization, and 3) Dimensionality Reduction.…”
Section: Incremental Learned Dimensionality Reduction Model
Citation type: mentioning
confidence: 99%
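The three steps named in this statement can be illustrated with a short sketch. The Python below is a minimal, assumption-laden illustration, not the citing paper's pseudocode and not the SONG reference implementation: the function name song_iteration and the parameters lr and n_neighbors are hypothetical, the codebook is fixed-size (real SONG grows it), and the embedding update here is attraction-only rather than SONG's full attraction-repulsion scheme.

```python
import numpy as np

def song_iteration(X, coding_vectors, embeddings, lr=0.1, n_neighbors=3):
    """One illustrative pass over the inputs: quantize, organize, embed."""
    for x in X:
        # 1) Vector quantization: find the coding vector nearest to x.
        dists = np.linalg.norm(coding_vectors - x, axis=1)
        winner = np.argmin(dists)

        # 2) Self-organization: pull the winner and its nearest coding
        #    vectors toward x so the codebook tracks the input density.
        nbrs = np.argsort(dists)[:n_neighbors]
        coding_vectors[nbrs] += lr * (x - coding_vectors[nbrs])

        # 3) Dimensionality reduction: move the low-dimensional images of
        #    those same vectors toward the winner's embedding, keeping
        #    points that quantize together close in the output space.
        embeddings[nbrs] += lr * (embeddings[winner] - embeddings[nbrs])
    return coding_vectors, embeddings

# Illustrative usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
codebook = X[rng.choice(200, size=20, replace=False)].copy()
layout = rng.normal(size=(20, 2))
codebook, layout = song_iteration(X, codebook, layout)
```

Because each pass only touches the codebook and embedding near the presented samples, the same loop can be re-run on a new increment of data without recomputing the existing layout, which is what makes the three steps naturally incremental.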
“…Nonlinear dimensionality reduction techniques such as t-Distributed Stochastic Neighbor Embedding (t-SNE) [12] and Uniform Manifold Approximation and Projection (UMAP) [13] are designed as static dimensionality reduction techniques, often failing to insert new data points (an increment) into an already learned model without distorting the structure of the existing representation [14]. This makes it difficult to relate visualizations generated at different timepoints of the same experiment, particularly in longitudinal studies (Supplementary Fig. 1).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
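To make the "static" limitation concrete, the sketch below shows the usual workaround with the umap-learn package: fit the reducer once, then project a later batch with transform(). The data and parameter choices are synthetic assumptions; the point, per the quoted statement, is that projecting an increment from a shifted distribution into a frozen map can distort the combined layout relative to refitting on all the data.

```python
# Fit a static reducer once, then place a later increment into the
# frozen map. Synthetic data; parameters are arbitrary choices.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
X_initial = rng.normal(size=(500, 20))             # first timepoint
X_increment = rng.normal(loc=2.0, size=(100, 20))  # later, shifted batch

reducer = umap.UMAP(n_neighbors=15, random_state=0).fit(X_initial)
Y_initial = reducer.embedding_  # learned 2-D layout of the first batch

# transform() projects new points relative to the frozen embedding;
# an increment drawn from a shifted distribution may land in a layout
# that no longer reflects its relation to the original structure.
Y_increment = reducer.transform(X_increment)
```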
“…Most ML research on increasing interpretability of ML systems (the Y-axis) and accessibility through increased AI model design automation (the X-axis) (Figure 1) lies along or close to the axes. Research into interpretable neural networks designed with minimal expert intervention (FAIR AI) will increasingly close a significant knowledge gap, informed also by relevant studies, for example, ML with continuous and life-long learning capability (Senanayake, Wang, Naik, & Halgamuge, 2021). This paper is organised with an example of the scientific interpretation capability of ML models using differential equations, followed by relevant applications of ML in three areas of importance, namely Food Processing, Agriculture, and Health: examples of uninterpretable ML models applied to food drying; partially interpretable Convolutional Neural Networks (CNNs) and XAI in plant disease detection, including in rice cultivation; the safety of taking multiple pharmaceutical drugs and the reuse of existing drugs for new diseases using semi-automated unsupervised ML model construction; and shedding some light on the still largely unexplored world of microbes, including viruses, using semi-automated ML model construction.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…The parametric mapping of the local structure has been learnt by parametric t-SNE in the latent space [35]. Self-organizing nebulous growths have been adopted to support incremental data for t-SNE, adding new data into the previously learned data distribution [36]. However, this has the disadvantage that a local linear approximation ignores the global distribution of the data, while a global linear approximation might lose detailed information.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
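The "parametric mapping" idea mentioned in this statement can be illustrated with a small sketch: instead of optimizing one fixed embedding, learn a function f(x) -> 2-D that can embed unseen points. The code below approximates this by regressing an MLP onto a one-off t-SNE layout; it is an illustrative stand-in, not the actual parametric t-SNE objective of [35], and all data and hyperparameters are assumptions.

```python
# Fit a reusable mapping f(x) -> 2-D by regressing onto a one-off
# t-SNE layout; a rough stand-in for the parametric t-SNE idea.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))

Y = TSNE(n_components=2, random_state=0).fit_transform(X)  # target layout
f = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                 random_state=0).fit(X, Y)

# The learned network can now place unseen points without re-running
# t-SNE, which is what makes the mapping "parametric".
Y_new = f.predict(rng.normal(size=(5, 10)))
```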