2019
DOI: 10.13052/jicts2245-800x.726
Machine Learning: The Panacea for 5G Complexities

Abstract: It is no myth that the transition to next-generation technology brings with it a set of exciting applications as well as challenges for the telecom ecosystem, and in turn paves the way for new revenue streams. 5G enables ultra-high data rates and exceptionally low latencies, which allow telecom operators to facilitate developments such as IoT and next-generation industrial enhancements like autonomous vehicles, connected mines, connected agriculture, and mission-critical communications by enhancing infrastructu…

Cited by 3 publications (1 citation statement). References 0 publications.
“…The rise of popularity of Machine Learning (ML)/ Artificial Intelligence (AI) applications to boost performance at all levels [1], [2] has raised the need to make available computational capacities at different points of the whole network and communication infrastructure so as to be able to execute said applications, either in the form of small edge computational sites or larger data center sites. Indeed, ML/AI applications impose an intense usage of computational resources in the form of CPU/GPU power and memory, as well as large storage spaces, for the training of the algorithms, with a still notorious, although reduced, usage of computational resources during the inference phase [3], [4].…”
Section: Introduction
confidence: 99%