The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as the VOT2014 dataset, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) an extension of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
A PAR-1–mediated bias in microtubule organization in the Drosophila oocyte underlies posterior-directed mRNA transport.
In this paper, we study a general optimization model, which covers a large class of existing models for many applications in imaging sciences. To solve the resulting possibly nonconvex, nonsmooth and non-Lipschitz optimization problem, we adapt the alternating direction method of multipliers (ADMM) with a general dual step-size to solve a reformulation that contains three blocks of variables, and analyze its convergence. We show that for any dual step-size less than the golden ratio, there exists a computable threshold such that if the penalty parameter is chosen above this threshold and the sequence thus generated by our ADMM is bounded, then any cluster point of the sequence gives a stationary point of the nonconvex optimization problem. We achieve this via a potential function specifically constructed for our ADMM. Moreover, we establish global convergence of the whole sequence if, in addition, this potential function is a Kurdyka-Łojasiewicz function. Furthermore, we present a simple strategy for initializing the algorithm to guarantee boundedness of the sequence. Finally, we perform numerical experiments comparing our ADMM with the proximal alternating linearized minimization (PALM) proposed in [5] on the background/foreground extraction problem with real data. The numerical results show that our ADMM with a nontrivial dual step-size is efficient.

Examples of the penalty include the bridge penalty [27,28]; the bridge penalty and the logistic penalty have also been considered in [13]. Finally, the linear map A can be suitably chosen to model different scenarios. For example, A can be chosen to be the identity map for extracting L and S from noisy data D, and the blurring map for blurred data D. The linear map B can be the identity map or some "dictionary" that spans the data space (see, for example, [34]), and C can be chosen to be the identity map or the inverse of a certain sparsifying transform (see, for example, [40]).
More examples of (1.1) can be found in [8-10, 13, 41, 47]. One representative application that is frequently modeled by (1.1), via a suitable choice of Φ, Ψ, A, B and C, is the background/foreground extraction problem, an important problem in video processing; see [6,7] for recent surveys. In this problem, one attempts to separate the relatively static information, called the "background", from the moving objects, called the "foreground", in a video. The problem can be modeled by (1.1), and such models are typically referred to as RPCA-based models. In these models, each image is stacked as a column of a data matrix D; the relatively static background is then modeled as a low-rank matrix, while the moving foreground is modeled as sparse outliers. The data matrix D is thus decomposed (approximately) as the sum of a low-rank matrix L ∈ R^{m×n} modeling the background and a sparse matrix S ∈ R^{m×n} modeling the foreground. Various approximations are then used to induce low rank and sparsity, resulting in different RPCA-based models, most of which take the form of (1.1). One example is to set Ψ to be the nuclear norm of L, ...
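As a concrete illustration of the RPCA-style decomposition D ≈ L + S described above, the following is a minimal sketch of the classic convex surrogate (nuclear norm for L plus an l1 penalty for S), solved with a standard single-loop ADMM/augmented-Lagrangian scheme. This is not the paper's nonconvex three-block ADMM; the parameter heuristics and function names are assumptions chosen for illustration only.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Soft thresholding: proximal operator of tau * l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(D, lam=None, n_iter=300):
    """Decompose D into low-rank L and sparse S via min ||L||_* + lam*||S||_1
    subject to D = L + S, using single-step augmented-Lagrangian updates."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # common default
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)                # penalty heuristic
    S = np.zeros_like(D)
    Y = np.zeros_like(D)   # Lagrange multiplier for the constraint D = L + S
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        mu = min(mu * 1.05, 1e7)  # gradually enforce feasibility D = L + S
    return L, S
```

On synthetic data with a genuinely low-rank background and sparse foreground, this recovers the two components to good accuracy; the paper's nonconvex penalties replace the nuclear norm and l1 terms to reduce estimation bias.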
We present an algorithm to directly solve numerous image restoration problems (e.g., image deblurring, image dehazing, and image deraining). These problems are highly ill-posed, and existing methods usually rely on heuristic image priors. In this paper, we find that these problems can be solved by generative models with adversarial learning. However, the basic formulation of generative adversarial networks (GANs) does not generate realistic images, and some structures of the estimated images are usually not preserved well. Motivated by the observation that the estimated results should be consistent with the observed inputs under the physics models, we propose a physics-model-constrained learning algorithm that guides the estimation for the specific task within the conventional GAN framework. The proposed algorithm is trained in an end-to-end fashion and can be applied to a variety of image restoration and related low-level vision problems. Extensive experiments demonstrate that our method performs favorably against state-of-the-art algorithms.
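The physics-consistency idea can be sketched for the deblurring case: re-applying the known blur to the estimated sharp image should reproduce the observed input. The sketch below is a hypothetical, simplified data term only (the function names and the plain MSE form are assumptions for illustration); in the paper such a constraint accompanies an adversarial loss on a generator network.

```python
import numpy as np

def conv2d_same(img, kernel):
    # Naive same-size 2D correlation with edge padding. For the symmetric
    # blur kernels typical in deblurring, correlation equals convolution.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    P = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(P[i:i + kh, j:j + kw] * kernel)
    return out

def physics_consistency_loss(est_sharp, observed, kernel):
    # Penalize disagreement between the re-blurred estimate and the
    # observed blurry input: a degradation-model data term.
    return np.mean((conv2d_same(est_sharp, kernel) - observed) ** 2)
```

The true sharp image yields zero loss under the exact degradation model, so adding this term steers the generator toward estimates that remain consistent with the observation.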
Intercellular communication is commonly mediated by the regulated fusion, or exocytosis, of vesicles with the cell surface. SNARE (soluble N-ethylmaleimide sensitive factor attachment protein receptor) proteins are the catalytic core of the secretory machinery, driving vesicle and plasma membrane merger. Plasma membrane SNAREs (tSNAREs) are proposed to reside in dense clusters containing many molecules, thus providing a concentrated reservoir to promote membrane fusion. However, biophysical experiments suggest that a small number of SNAREs are sufficient to drive a single fusion event. Here we show, using molecular imaging, that the majority of tSNARE molecules are spatially separated from secretory vesicles. Furthermore, the motilities of individual tSNAREs are constrained in membrane micro-domains, maintaining a non-random molecular distribution and limiting the maximum number of molecules encountered by secretory vesicles. Together, our results provide a new model for the molecular mechanism of regulated exocytosis and demonstrate the exquisite organization of the plasma membrane at the level of individual molecular machines.
Fluorescence imaging of dynamical processes in live cells often results in a low signal-to-noise ratio. We present a novel feature-preserving non-local means approach to denoise such images to improve feature recovery and particle detection. The commonly used non-local means filter is not optimal for noisy biological images containing small features of interest because image noise prevents accurate determination of the correct coefficients for averaging, leading to over-smoothing and other artifacts. Our adaptive method addresses this problem by constructing a particle feature probability image, which is based on Haar-like feature extraction. The particle probability image is then used to improve the estimation of the correct coefficients for averaging. We show that this filter achieves higher peak signal-to-noise ratio in denoised images and has a greater capability in identifying weak particles when applied to synthetic data. We have applied this approach to live-cell images resulting in enhanced detection of end-binding-protein 1 foci on dynamically extending microtubules in photo-sensitive Drosophila tissues. We show that our feature-preserving non-local means filter can reduce the threshold of imaging conditions required to obtain meaningful data.