2017
DOI: 10.1007/s10994-017-5676-y

Distributed multi-task classification: a decentralized online learning approach

Abstract: Although dispersing a single task to distributed learning nodes has been intensively studied by previous research, multi-task learning on distributed networks is still an area that has not been fully explored, especially under decentralized settings. The challenge lies in the fact that different tasks may have different optimal learning weights, while communication through the distributed network forces all tasks to converge to a unique classifier. In this paper, we present a novel algorithm to overcome…

Cited by 34 publications (32 citation statements)
References 29 publications (35 reference statements)
“…Moreover, as our work focuses on a feature-based MTL model with sparsity (Obozinski et al. 2010, 2011; Wang and Ye 2015), it enables us to design a tailored feature screening technique to further reduce the communication cost. Unlike our framework, decentralized MTL methods have also been studied in Wang et al. (2018), Bellet et al. (2018), Vanhaesebrouck et al. (2017), and Zhang et al. (2018). However, these approaches may incur heavier communication cost because frequent communication is often required between tasks in MTL.…”
Section: Introduction
confidence: 98%
“…An analogy would be a school of fish tracking a food source: all elements in the fish school sense distance and direction to the same food source and are interested in approaching it. On the other hand, multi-task networks [9,10,11,12,13,14,15,16,17] involve agents sensing data arising from different models and different clusters of agents may be interested in identifying separate models. A second analogy is a school of fish sensing information about multiple food sources.…”
Section: Introduction and Related Work
confidence: 99%
“…In past studies, keyword matching has commonly been used to identify malicious URLs, but this technique is not scalable. In Chi Zhang et al. [1], the information-sharing scheme of the algorithm is split into two phases: multi-task information is first shared within each node, and then the whole network is pushed towards a common minimizer by communication among the different nodes. This enables learning multiple tasks simultaneously on a decentralized distributed network.…”
Section: Introduction
confidence: 99%
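The two-phase scheme described in this citation statement (local multi-task updates inside each node, followed by communication that pushes the network toward a common minimizer) can be sketched as a decentralized gradient method with consensus averaging. This is a minimal illustration under assumptions, not the authors' algorithm: the ring topology, the per-node least-squares tasks, the step size, and the mixing weights are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3

# Ring topology: each node mixes with itself and its two neighbours.
# W is doubly stochastic, so consensus averaging preserves the network mean.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

# Each node holds a different local task: minimise ||A_i x - b_i||^2.
A = rng.normal(size=(n_nodes, 8, dim))
b = rng.normal(size=(n_nodes, 8))
x = np.zeros((n_nodes, dim))   # one weight vector per node
lr = 0.01

for _ in range(200):
    # Phase 1: local gradient step on each node's own task.
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    x = x - lr * grads
    # Phase 2: consensus averaging pulls all nodes toward a common minimiser.
    x = W @ x

# Disagreement between nodes shrinks to the order of the step size.
spread = np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))
print(round(float(spread), 3))
```

Because the tasks differ, the local gradient steps pull the nodes apart while the mixing step contracts their disagreement, so the network settles near a common classifier — exactly the tension the paper's abstract identifies between per-task optima and network-wide consensus.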