2023
DOI: 10.3390/info14080470
Advancements in On-Device Deep Neural Networks

Kavya Saravanan,
Abbas Z. Kouzani

Abstract: In recent years, rapid advancements in both hardware and software technologies have resulted in the ability to execute artificial intelligence (AI) algorithms on low-resource devices. The combination of high-speed, low-power electronic hardware and efficient AI algorithms is driving the emergence of on-device AI. Deep neural networks (DNNs) are highly effective AI algorithms used for identifying patterns in complex data. DNNs, however, contain many parameters and operations that make them computationally inten…
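The abstract's point that DNNs contain many parameters and operations can be made concrete with a rough back-of-the-envelope count. The sketch below is illustrative only; the layer sizes are hypothetical and not taken from the paper.

```python
# Illustrative only: why DNNs are heavy for low-resource devices.
# Counts parameters and multiply-accumulate operations (MACs) for one
# fully connected layer with hypothetical sizes.

def dense_layer_cost(inputs: int, outputs: int) -> tuple[int, int]:
    """Return (parameter count, MACs per forward pass) for a dense
    layer with a bias term."""
    params = inputs * outputs + outputs  # weights plus biases
    macs = inputs * outputs              # one MAC per weight
    return params, macs

params, macs = dense_layer_cost(4096, 4096)
print(params)  # 16781312
print(macs)    # 16777216
```

A single 4096x4096 layer already needs about 16.8 million parameters and as many MACs per input, which is why compression and efficient hardware are central to on-device AI.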

Cited by 2 publications (1 citation statement)
References 35 publications (68 reference statements)
“…Recently developed large language models (LLMs) are becoming popular due to their benefits, such as efficiency, customization, understanding different languages, and automating tasks. But LLMs require large computing resources; hence, all the data move to the cloud for computation [73][74][75][76]. Currently developed hybrid AI architectures can recommend different offload options based on factors like model complexity and query size to distribute processing among the cloud and other computing devices [77,78].…”
Section: Mobile Devices
confidence: 99%
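The citing passage above describes hybrid AI architectures that choose an offload target from factors such as model complexity and query size. A minimal sketch of such a decision rule, with entirely hypothetical names and thresholds, might look like this:

```python
# Hypothetical sketch of a hybrid AI offload decision: route a query
# on-device or to the cloud based on model complexity and query size.
# The function name, parameters, and thresholds are illustrative only,
# not an API from the cited works.

def choose_target(model_params_m: float, query_tokens: int,
                  device_limit_m: float = 7000,
                  token_limit: int = 512) -> str:
    """Return "device" when the model size (in millions of parameters)
    and query length fit a hypothetical on-device budget, else "cloud"."""
    if model_params_m <= device_limit_m and query_tokens <= token_limit:
        return "device"
    return "cloud"

print(choose_target(3000, 128))   # small model, short query -> device
print(choose_target(70000, 128))  # model too large -> cloud
```

A real hybrid system would also weigh latency, battery state, and privacy constraints, but the core idea is this kind of per-query routing between local and cloud compute.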