2018
DOI: 10.14569/ijacsa.2018.090973
A Controlled Environment Model for Dealing with Smart Phone Addiction

Abstract: Smart phones are commonly used in most parts of the world, and it is difficult to find a society untouched by smart phone culture. But smart phone usage is crossing the limit from being a facility toward a high level of abnormal dependency on the phone. This dependency can reach the point where we no longer have control over the overuse, and hence over the negative impacts it can cause to our lives. The worst situation is that people do not even consider that this dependency is actually a…

Cited by 9 publications (13 citation statements)
References 21 publications
“…That is, ITN, is the development of nets, defining the population coverage (a_ITN ∈ (0, 1]). The second
(3) Initialize target network Q′ with weights θ^Q′ ← θ^Q
(4) Initialize target network μ′ with weights θ^μ′ ← θ^μ
(5) Initialize replay buffer R
(6) while for every episode do
(7)     Randomly initialize N for exploration
(8)     Get initial observation state s_1
(9)     while for every step in the episode do // repeat until s is terminal
(10)        Select action a_t = μ(s_t | θ^μ) + N_t as per the current policy and exploration strategy
(11)        Perform action a_t and observe reward r_t and new state s_{t+1}
(12)        Store (s_t, a_t, r_t, s_{t+1}) in R
(13)        Sample a randomly selected minibatch of N transitions (s_i, a_i, r_i, s_{i+1}) from R
(14)…”
Section: Simulation and Discussion
confidence: 99%
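The sampling loop in the quoted pseudocode (target-network copies, replay buffer, noisy action selection, minibatch sampling) can be sketched as follows. This is a minimal Python sketch only: the one-dimensional toy environment, the linear stand-in for the policy μ, and the Gaussian noise scale are illustrative assumptions, since the excerpt does not specify an environment or network architecture.

```python
import random
from collections import deque

# Hypothetical 1-D toy environment and linear policy, for illustration only.
def policy(state, theta_mu):
    return theta_mu * state          # stand-in for mu(s | theta^mu)

def env_step(state, action):
    next_state = state + action      # toy transition dynamics
    reward = -abs(next_state)        # reward is highest when the state reaches 0
    done = abs(next_state) < 1e-3
    return next_state, reward, done

random.seed(0)
theta_mu = 0.5                       # actor weights theta^mu
theta_mu_target = theta_mu           # step (4): target network mu' starts as a copy
R = deque(maxlen=10_000)             # step (5): replay buffer R
minibatch_size = 4

for episode in range(3):             # step (6): for every episode
    s = random.uniform(-1.0, 1.0)    # step (8): initial observation s_1
    for t in range(20):              # step (9): for every step in the episode
        noise = random.gauss(0.0, 0.1)        # step (7): exploration noise N_t
        a = policy(s, theta_mu) + noise       # step (10): a_t = mu(s_t) + N_t
        s_next, r, done = env_step(s, a)      # step (11): act, observe r_t, s_{t+1}
        R.append((s, a, r, s_next))           # step (12): store transition in R
        if len(R) >= minibatch_size:          # step (13): sample a random minibatch
            batch = random.sample(R, minibatch_size)
        s = s_next
        if done:                              # repeat until s is terminal
            break

print(len(R) >= minibatch_size)
```

The `deque` with `maxlen` gives the fixed-capacity replay buffer for free: once full, appending a new transition silently evicts the oldest one.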
“…The parameter x is tuned for the policy by the actor, as given in equation (10). Using the Temporal Difference (TD) error, the policy computed by the actor is evaluated by the critic, as demonstrated in equation (11). The policy decided by the actor is denoted by v. DDPG uses the ideas of experience replay and a separate target network as utilized by the Deep Q Network (DQN) [83].…”
Section: Deep Deterministic Policy
confidence: 99%
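The critic's evaluation of the actor's policy via the TD error, mentioned in the excerpt, can be sketched with scalar stand-ins. This is a toy numeric sketch: the discount factor, learning rate, and the scalar values standing in for Q(s_t, a_t) and the target networks' estimate Q′(s_{t+1}, μ′(s_{t+1})) are all assumptions for illustration, not values from the cited work.

```python
# Toy critic evaluation via the temporal-difference (TD) error.
GAMMA = 0.99              # discount factor (assumed; not stated in the quote)
ALPHA = 0.1               # critic learning rate (assumed)

# Scalar stand-ins for the critic and the target networks' estimates.
q_current = 0.50          # critic's value for (s_t, a_t)
q_target_next = 0.80      # target value for (s_{t+1}, mu'(s_{t+1}))
reward = 1.0              # observed r_t

# TD error: gap between the critic's estimate and the bootstrapped target.
td_error = reward + GAMMA * q_target_next - q_current

# The critic nudges its estimate toward the target by a fraction ALPHA.
q_updated = q_current + ALPHA * td_error

print(round(td_error, 4), round(q_updated, 4))
```

A large TD error means the critic's current estimate disagrees with the reward-plus-discounted-target signal, so the update moves it proportionally further; using the separate target network for `q_target_next` keeps that bootstrapped target stable between updates.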
“…The deep learning algorithm was used to learn the patterns in the big data made available by the GTD, using recent optimization techniques, and to make reasonable predictions and classifications. Even though many researchers have worked on AI solutions for counterterrorism, no one has studied an effective mechanism for understanding the factors of terrorism using deep learning, which has become very popular recently with increased data and increased computational power [53, 54]. To the best of the authors' knowledge, no comprehensive work is dedicated to predicting and classifying the factors of terrorism using deep learning algorithms.…”
Section: Dealing With Unbalanced Classes
confidence: 99%
“…The main objective of this research work is a novel technique for identifying the different parameters that can contribute to absenteeism, preprocessing the data so it can be handled efficiently by deep learning algorithms, and then devising a deep learning algorithm with recent optimization techniques that can predict absenteeism with reasonable accuracy. Even though many researchers have worked on absenteeism and demonstrated Artificial Intelligence-based solutions for it, no one has studied an effective mechanism for understanding the factors of absenteeism using deep learning, which has become very popular recently with increased data and increased computational power [58-62]. To the best of the authors' knowledge, no comprehensive work is dedicated to absenteeism prediction using deep learning algorithms.…”
Section: Backward Propagation
confidence: 99%