2016 7th IEEE International Conference on Software Engineering and Service Science (ICSESS)
DOI: 10.1109/icsess.2016.7883073
User-based Autoencoder for QoS Prediction

Cited by 1 publication (7 citation statements) | References 8 publications
“…The activation functions used by the nodes of the hidden and output layers are reported in 21 of the 26 works. For the standard AEs the Sigmoid function is always used (Hau et al., 2016; Sakurai et al., 2017; Jia et al., 2017), but in the work of Sakurai et al. (2017) a custom linear function is used on the output layer. For multi-layer networks the Sigmoid is also used in some works (Duan et al., 2014, 2016; Xie et al., 2019; Sánchez-Morales et al., 2020), but ReLU is the one more often applied (Gondara and Wang, 2017; Ryu et al., 2020; McCoy et al., 2018; Boquet et al., 2019, 2020; Xie et al., 2019; Saeed et al., 2018; Fortuin et al., 2020), sometimes through the Leaky ReLU variant (Ryu et al., 2020; Saeed et al., 2018).…”
Section: Network Structure
Confidence: 99%
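To make the reported activation choices concrete, here is a minimal sketch, assuming PyTorch. The class names, layer widths, and the plain linear output of the deeper model are illustrative assumptions, not configurations taken from any of the surveyed works.

```python
# Minimal sketch, assuming PyTorch; sizes and names are illustrative only.
import torch
import torch.nn as nn

class SigmoidAE(nn.Module):
    """Standard single-hidden-layer AE: Sigmoid on hidden and output layers."""
    def __init__(self, n_features=32, n_hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

class DeepLeakyReluAE(nn.Module):
    """Multi-layer AE: Leaky ReLU on the hidden layers."""
    def __init__(self, n_features=32, widths=(16, 8)):
        super().__init__()
        h1, h2 = widths
        self.encoder = nn.Sequential(
            nn.Linear(n_features, h1), nn.LeakyReLU(),
            nn.Linear(h1, h2), nn.LeakyReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(h2, h1), nn.LeakyReLU(),
            nn.Linear(h1, n_features),  # plain linear output for reconstruction
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Replacing nn.LeakyReLU() with nn.ReLU() or nn.Sigmoid() reproduces the other hidden-layer configurations mentioned in the quotation.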
“…The training phase of an AE depends on the same aspects as any other ANN: an optimization algorithm, a loss function and the maximum number of epochs. Of the 17 works that describe the optimization algorithm used, 5 use the well-known Stochastic Gradient Descent (Sánchez-Morales et al., 2017; Hau et al., 2016; Sánchez-Morales et al., 2019; Xie et al., 2019; Lai et al., 2019) and 6 use one of its variants, Adam (El Esawey et al., 2015; Ryu et al., 2020; Boquet et al., 2019, 2020; Saeed et al., 2018; Fortuin et al., 2020). Other algorithms are used less often, namely Nesterov's Accelerated Gradient (Gondara and Wang, 2018), the Scaled Conjugate Gradient algorithm (Sakurai et al., 2017) and RMSProp (McCoy et al., 2018).…”
Section: Training
Confidence: 99%
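The training ingredients listed above (optimizer, loss function, epoch budget) can be sketched as follows, again assuming PyTorch; the function name train_autoencoder, the learning rate and the epoch count are hypothetical placeholders rather than settings reported by the cited papers.

```python
# Minimal training-loop sketch, assuming PyTorch; hyperparameters are placeholders.
import torch
import torch.nn as nn

def train_autoencoder(model, batches, optimizer_name="adam", lr=1e-3, max_epochs=100):
    # Loss function: mean squared reconstruction error.
    loss_fn = nn.MSELoss()
    # Optimization algorithm: plain SGD, its Adam variant, or RMSProp.
    optimizers = {
        "sgd": torch.optim.SGD,
        "adam": torch.optim.Adam,
        "rmsprop": torch.optim.RMSprop,
    }
    optimizer = optimizers[optimizer_name](model.parameters(), lr=lr)

    # Train for at most `max_epochs` passes over the data.
    for epoch in range(max_epochs):
        for x in batches:                # `batches`: iterable of input tensors
            optimizer.zero_grad()
            loss = loss_fn(model(x), x)  # reconstruct the input itself
            loss.backward()
            optimizer.step()
    return model
```

For example, train_autoencoder(SigmoidAE(), batches, optimizer_name="sgd") would train the standard AE from the earlier sketch with Stochastic Gradient Descent; swapping the optimizer_name changes only which torch.optim class is instantiated.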