2021
DOI: 10.1016/j.watres.2021.117001

Prediction of antibiotic-resistance genes occurrence at a recreational beach with deep learning models

Abstract: Antibiotic resistance genes (ARGs) have been reported to threaten the public health of beachgoers worldwide. Although ARG monitoring and beach guidelines are necessary, substantial efforts are required for ARG sampling and analysis. Accordingly, in this study, we predicted the occurrence of ARGs, which are primarily found on the coast after rainfall, using a conventional long short-term memory (LSTM) model, an LSTM-convolutional neural network (CNN) hybrid model, and an input attention (IA)-LSTM model. To develop the models, 10 categories…
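The abstract names three sequence models (a conventional LSTM, an LSTM-CNN hybrid, and an IA-LSTM) trained on 10 categories of input data. As a rough illustration of the simplest of these, below is a minimal Keras sketch of a conventional LSTM regressor over a lookback window of environmental inputs; the feature names, window length, and layer sizes are assumptions for illustration, not values from the paper.

```python
# Minimal sketch (not the authors' code): a conventional LSTM that maps a
# lookback window of environmental inputs (e.g., rainfall, water temperature)
# to ARG abundance at the next time step. Window length and layer sizes are
# illustrative assumptions; only the 10 input categories come from the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

LOOKBACK = 7      # assumed: 7 past observations per sample
N_FEATURES = 10   # the abstract mentions 10 input categories

def build_lstm(lookback: int = LOOKBACK, n_features: int = N_FEATURES) -> tf.keras.Model:
    """Single-output regression LSTM for one ARG target."""
    model = models.Sequential([
        layers.Input(shape=(lookback, n_features)),
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # predicted ARG abundance (e.g., log gene copies)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Synthetic stand-in data just to show the expected tensor shapes.
    X = np.random.rand(200, LOOKBACK, N_FEATURES).astype("float32")
    y = np.random.rand(200, 1).astype("float32")
    model = build_lstm()
    model.fit(X, y, epochs=2, batch_size=16, verbose=0)
    print(model.predict(X[:3]).shape)  # -> (3, 1)
```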

Cited by 25 publications (20 citation statements) · References 40 publications
“…2). The Experiment class can be sub-classed to … We conducted an experiment to compare the performance of classic machine learning algorithms in predicting antibiotic-resistant genes (ARGs) at a recreational beach (Jang et al., 2021). The results of this experiment are shown in Fig. …”
Section: Model Comparison With Experiments
confidence: 99%
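The excerpt above refers to an Experiment class from the citing software paper (not reproduced here) used to compare classic machine-learning algorithms on this ARG-prediction task. A hedged scikit-learn sketch of such a comparison, with synthetic data and arbitrary model choices, might look like this:

```python
# Hedged sketch only: illustrates comparing classic ML regressors on an
# ARG-prediction task. Data are synthetic and the model list is arbitrary;
# the citing paper's Experiment class and results are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((150, 10))          # 10 environmental input features (assumed)
y = rng.random(150)                # ARG abundance target (synthetic)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svr": SVR(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```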
“…All three models’ predictions were compared to the observed emergence of ARGs, and thus “loss” values were calculated. One NN model showed enhanced performance in detecting single ARGs, whereas another showed superior performance in predicting multiple ARGs and allowed identification of the importance of the input variables [47].…”
Section: Future Perspectives
confidence: 99%
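The excerpt notes that each model's predictions were compared to observed ARG emergence via a "loss" value. A minimal sketch of that comparison, assuming mean squared error as the loss and using synthetic numbers rather than the study's data:

```python
# Minimal sketch (assumptions, not the cited study's code): compare several
# models' predicted ARG occurrence against observations by computing a loss
# value per model, as the excerpt above describes.
import numpy as np

def mse(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Mean squared error between observed and predicted ARG levels."""
    return float(np.mean((observed - predicted) ** 2))

observed = np.array([1.2, 0.8, 2.5, 3.1, 0.4])          # synthetic observations
predictions = {
    "LSTM":     np.array([1.0, 0.9, 2.2, 3.4, 0.5]),
    "LSTM-CNN": np.array([1.3, 0.7, 2.6, 3.0, 0.3]),
    "IA-LSTM":  np.array([1.1, 0.8, 2.4, 3.2, 0.4]),
}
for name, pred in predictions.items():
    print(f"{name}: loss = {mse(observed, pred):.3f}")
```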
“…During optimization, the learning rate converged from a large value to a small value, implying that our model preferred a small step size when adjusting the weights and biases. Jang et al. [89] and Yun et al. [90] also recommended smaller learning rates for simulating water quality. In addition, the lookback was also an influential factor in the model results.…”
Section: Hyper-parameter Optimization
confidence: 99%
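The excerpt singles out two influential hyper-parameters: the learning rate and the lookback (how many past time steps each sample sees). A hedged sketch, assuming a Keras LSTM and illustrative candidate values, of how lookback windowing and a small grid search over learning rates might be set up:

```python
# Hedged sketch of tuning the two hyper-parameters the excerpt highlights.
# Candidate lookbacks and learning rates are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_windows(series: np.ndarray, targets: np.ndarray, lookback: int):
    """Slice a multivariate series into (samples, lookback, features) windows."""
    X = np.stack([series[i : i + lookback] for i in range(len(series) - lookback)])
    y = targets[lookback:]
    return X.astype("float32"), y.astype("float32")

series = np.random.rand(300, 10)   # synthetic: 10 input features over time
targets = np.random.rand(300)      # synthetic ARG abundance

best = None
for lookback in (3, 7, 14):
    for lr in (1e-2, 1e-3, 1e-4):
        X, y = make_windows(series, targets, lookback)
        model = models.Sequential([
            layers.Input(shape=(lookback, series.shape[1])),
            layers.LSTM(32),
            layers.Dense(1),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
        hist = model.fit(X, y, epochs=5, batch_size=16, verbose=0, validation_split=0.2)
        val = hist.history["val_loss"][-1]
        if best is None or val < best[0]:
            best = (val, lookback, lr)

print(f"best val_loss={best[0]:.4f} at lookback={best[1]}, lr={best[2]}")
```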
“…8 shows the attention score map used to temporally interpret the attention LSTM model. The plots represent the weights with which the input data affect the model output [89]. On the attention score map, the color bar indicates the importance of the dataset [64].…”
Section: Model Interpretability With Attentions
confidence: 99%
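The attention score map described above plots how strongly each input time step is weighted in the model output. The cited model's architecture is not reproduced here; the sketch below shows one common way to expose per-timestep attention weights from a small attention LSTM built with Keras so they can be plotted as a score map.

```python
# Illustrative sketch (an assumption, not the cited model): an attention LSTM
# whose per-timestep attention weights can be read out and visualized as a
# score map, similar in spirit to the attention map the excerpt describes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

lookback, n_features = 7, 10

inputs = layers.Input(shape=(lookback, n_features))
hidden = layers.LSTM(32, return_sequences=True)(inputs)
scores = layers.Dense(1)(hidden)                     # one score per time step
weights = layers.Softmax(axis=1, name="attention")(scores)
context = tf.reduce_sum(weights * hidden, axis=1)    # attention-weighted summary
output = layers.Dense(1)(context)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="mse")

# After training, a second model exposes the attention weights; each row of
# the resulting (samples x lookback) array can be plotted as a score map.
attn_model = tf.keras.Model(inputs, model.get_layer("attention").output)
X = np.random.rand(5, lookback, n_features).astype("float32")
attn = attn_model(X).numpy().squeeze(-1)
print(attn.shape, attn.sum(axis=1))  # (5, 7); each row sums to ~1
```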