2021
DOI: 10.1101/2021.04.10.439301
Preprint

DDN2.0: R and Python packages for differential dependency network analysis of biological systems

Abstract: Data-driven differential dependency network analysis identifies, within a complex and often unknown overall molecular circuitry, a network of differentially connected molecular entities (pairwise selective coupling or uncoupling depending on the specific phenotypes or experimental conditions). Such differential dependency networks are typically used to assist in the inference of potential key pathways. Based on our previously developed Differential Dependency Network (DDN) method, we report here the fully implemente…
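The core idea in the abstract — detecting pairwise couplings that appear in one condition but not another — can be illustrated with a minimal sketch. This is not the DDN2.0 API; the data, the simple correlation-thresholding estimator, and the threshold value are all stand-ins chosen for illustration.

```python
import numpy as np

def dependency_edges(X, thresh=0.5):
    """Edges (i, j) whose absolute Pearson correlation exceeds `thresh`.

    X: samples-by-features matrix for one condition.
    """
    C = np.corrcoef(X, rowvar=False)
    p = C.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(C[i, j]) > thresh}

def differential_edges(X_a, X_b, thresh=0.5):
    """Pairwise couplings present in one condition but not the other."""
    e_a = dependency_edges(X_a, thresh)
    e_b = dependency_edges(X_b, thresh)
    return e_a ^ e_b  # symmetric difference: condition-specific edges

rng = np.random.default_rng(0)
# Condition A: features 0 and 1 strongly coupled; condition B: all independent.
z = rng.normal(size=200)
X_a = np.column_stack([z, z + 0.1 * rng.normal(size=200),
                       rng.normal(size=200)])
X_b = rng.normal(size=(200, 3))
diff = differential_edges(X_a, X_b)
print(diff)  # → {(0, 1)} with this seed
```

Real DDN analysis estimates sparse conditional-dependency structure jointly across conditions rather than thresholding marginal correlations, but the output has the same flavor: a set of condition-specific edges.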

Cited by 2 publications (2 citation statements)
References 35 publications
“…As discussed above, the proposed EPGN (or NL-EPGN) consists of K phases, where each phase imitates one iteration in the accelerated extra gradient Algorithm 1 with proximal steps (6b) and (6e) replaced by (11) (or (12) for NL-EPGN). The flowchart of variables in the kth phase is shown in Figure 3.…”
Section: Network Training
confidence: 99%
“…Our goal in this paper is to propose an efficient extra proximal gradient algorithm that employs Nesterov's acceleration technique and the extra gradient scheme, and to unroll this algorithm into a deep neural network called the extra proximal gradient network (EPGN) to solve a class of inverse problems (1). Motivated by the least absolute shrinkage and selection operator (LASSO) [11, 12, 13], our EPGN implicitly adopts an ℓ1-type regularization in (1) with a nonlinear sparsification mapping learned from data. The proximal operator of this regularization is elaborated by several linear convolutions, nonlinear activation functions, and shrinkage operations for robust sparse feature selection in EPGN.…”
Section: Introduction
confidence: 99%
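The learned proximal block this citation describes — linear convolutions, nonlinear activations, and a shrinkage step for sparse feature selection — can be sketched in a simplified 1-D form. The filters below are random stand-ins for learned weights, and the encode/shrink/decode layout is an illustrative assumption, not the cited architecture.

```python
import numpy as np

def shrink(v, theta):
    """Soft-thresholding: prox of theta * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def learned_prox(x, w_enc, w_dec, theta=0.1):
    """Toy stand-in for a learned proximal block:
    convolve -> nonlinearity -> shrink (sparsify) -> convolve back."""
    z = np.convolve(x, w_enc, mode="same")   # linear "sparsifying" transform
    z = np.tanh(z)                           # nonlinear activation
    z = shrink(z, theta)                     # sparse feature selection
    return np.convolve(z, w_dec, mode="same")  # map back to the signal domain

rng = np.random.default_rng(2)
x = rng.normal(size=64)
w_enc, w_dec = rng.normal(size=5), rng.normal(size=5)
y = learned_prox(x, w_enc, w_dec, theta=0.5)
print(y.shape)  # (64,)
```

In a trained network the convolution weights and threshold would be learned end-to-end, so the block implements the proximal operator of an implicitly defined sparsity-promoting regularizer rather than a hand-chosen one.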