Proceedings of the 26th International Conference on World Wide Web 2017
DOI: 10.1145/3038912.3052644

Linear Additive Markov Processes

Abstract: We introduce LAMP: the Linear Additive Markov Process. Transitions in LAMP may be influenced by states visited in the distant history of the process, but unlike higher-order Markov processes, LAMP retains an efficient parameterization. LAMP also allows the specific dependence on history to be learned efficiently from data. We characterize some theoretical properties of LAMP, including its steady-state and mixing time. We then give an algorithm based on alternating minimization to learn LAMP models from data. Fin…
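As a concrete reading of this definition, here is a minimal sketch of one LAMP transition step in Python. The names (M, w, history) and the sampling scheme are our own illustration of the mixture form described in the abstract, not the authors' code:

```python
import numpy as np

# Minimal sketch of a LAMP transition, assuming the formulation in the
# abstract: sample a lag i with probability w[i], then step from the
# state visited i steps ago through a single first-order transition
# matrix M. All names here are illustrative.
def lamp_step(M, w, history, rng):
    lag = rng.choice(len(w), p=w)       # how far back to look
    past_state = history[-(lag + 1)]    # state visited `lag` steps ago
    return rng.choice(M.shape[0], p=M[past_state])

rng = np.random.default_rng(0)
M = np.array([[0.9, 0.1], [0.2, 0.8]])  # shared first-order matrix
w = np.array([0.7, 0.2, 0.1])           # weights over lags 0, 1, 2
history = [0, 1, 0]
for _ in range(5):
    history.append(lamp_step(M, w, history, rng))
print(history)
```

Because every lag shares the same matrix M, the model needs only one n × n matrix plus one weight per lag, which is what keeps the parameterization efficient compared to a full higher-order chain.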

Cited by 8 publications (14 citation statements, published between 2017 and 2024). References 37 publications.

“…Thus, approaches employing higher-order Markov processes have a cubic space complexity and are for this reason inefficient. To cope with this issue, more efficient parameterization approaches, such as the Linear Additive Markov Process (LAMP [28]) and the Retrospective Higher-Order Markov Process (RHOMP [44]), have been proposed. In contrast to the higher-order Markov process, the number of maintained parameters in the LAMP model and the RHOMP model grows linearly, which makes them more suitable for streaming data.…”
Section: The Candidate Change Point (Detection) Approach
confidence: 99%
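To make the parameter-count argument in this quote concrete, here is a minimal sketch; the counts are standard bookkeeping (one probability row per history for a full order-k chain, versus one shared matrix plus lag weights for a LAMP/RHOMP-style model), not figures taken from [28] or [44]:

```python
# Hypothetical illustration of the efficiency argument quoted above.

def full_order_k_params(n: int, k: int) -> int:
    # One row of n - 1 free probabilities for each of the n**k histories.
    return n**k * (n - 1)

def lamp_style_params(n: int, k: int) -> int:
    # One shared n x n stochastic matrix plus k lag weights.
    return n * (n - 1) + k

for k in (1, 2, 3):
    print(k, full_order_k_params(1000, k), lamp_style_params(1000, k))
```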
“…In the proposed method, we introduce an adaptive estimation for detecting changes: we use the CCP heritage of the CCP model to estimate the mean CCP distribution of the streaming data. This method can be compared to linear higher-order Markov-chain models over states [28,44].…”
Section: Adaptive Estimation Of Data In Streaming
confidence: 99%
“…First, our RHOMP model defines a specific form of the Additive Markov Process (AMP) [21], where the transition probability is a summation of a series of memory functions, each restricted to the next state and one history state. Applications of the AMP include LAMP [20] (see Section 1), the gravity models [39], and some dynamical systems in physics [23,35] where the memory function is empirically estimated for the application of binary states. In addition to the AMP, recent innovations include new recovery results on mixtures of Markov chains [18] (a special case of HMM), which assume a small set of Markov chains that model various classes of latent intent; and the spacey random walk [3,4,36], a non-Markovian stochastic process that utilizes higher-order information based on the empirical occupation of states.…”
Section: Related Work
confidence: 99%
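For concreteness, the additive form quoted above can be written as follows; this is a sketch of the AMP definition as paraphrased in the quote, with notation of our own choosing rather than that of [21]:

```latex
% Additive Markov Process (AMP), notation ours: the next-state
% probability is a sum of per-lag memory functions f_i, each depending
% only on the candidate next state j and the single history state
% visited i steps ago.
\[
  \Pr\left(X_{t+1} = j \mid x_t, x_{t-1}, \dots\right)
  \;=\; \sum_{i \ge 0} f_i\left(j,\, x_{t-i}\right).
\]
% LAMP is then the special case f_i(j, s) = w_i M_{s,j} for a single
% stochastic matrix M and lag weights w_i summing to one.
```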
“…[20] proposed the Linear Additive Markov Process (LAMP), which is closely related to our framework. Specifically, our RHOMP model has the same formulation as the generalized extension GLAMP from that paper [20]. We learned about this paper as we were finalizing our submission to arXiv.…”
Section: Introduction
confidence: 99%
“…The LAMP model, proposed by Kumar et al. [7], overcomes these challenges. LAMP can fit a measured autocorrelation structure even for sequences that exhibit long-range dependence.…”
Section: Introduction
confidence: 99%
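As a rough illustration of that claim, the sketch below simulates a two-state LAMP chain with heavy-tailed lag weights and estimates its autocorrelation at a few lags. The weight exponent, matrix, and sequence lengths are assumptions chosen for illustration, not values from Kumar et al. [7]:

```python
import numpy as np

# Toy check of the quoted claim: heavy-tailed lag weights give a LAMP
# sequence slowly decaying autocorrelation. All constants below are
# illustrative assumptions.
rng = np.random.default_rng(1)
K = 200                                   # number of lags the model uses
w = 1.0 / np.arange(1, K + 1) ** 1.5      # power-law lag weights
w /= w.sum()
M = np.array([[0.9, 0.1], [0.1, 0.9]])    # shared 2-state transition matrix

x = [int(s) for s in rng.integers(0, 2, size=K)]   # seed history
for _ in range(20000):
    lag = int(rng.choice(K, p=w))         # sample how far back to look
    x.append(int(rng.choice(2, p=M[x[-(lag + 1)]])))

z = np.asarray(x[K:], dtype=float)
z -= z.mean()
for lag in (1, 10, 100):
    r = np.dot(z[:-lag], z[lag:]) / np.dot(z, z)
    print(f"estimated autocorrelation at lag {lag}: {r:.3f}")
```

With geometrically decaying weights the autocorrelation would fall off quickly; the power-law weights used here are what let the mixture over distant history produce the slow decay associated with long-range dependence.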