G-protein-coupled receptors (GPCRs) represent an important group of targets for pharmaceutical therapeutics. The completion of the human genome revealed a large number of putative GPCRs; however, the identification of their natural ligands, and especially of peptide ligands, suffers from low discovery rates, impeding development of therapeutics based on these potential drug targets. We describe the discovery of novel GPCR ligands encrypted in the human proteome. Hundreds of candidate peptide ligands were predicted by machine learning algorithms. In vitro screening of 33 selected peptides against a set of 152 GPCRs, including a group of designated orphan receptors, was conducted using intracellular calcium measurements and cAMP assays. The screening revealed eight novel peptides as potential agonists that specifically activated six different receptors in a dose-dependent manner. Most of the peptides showed distinct stimulatory patterns targeting both designated and orphan GPCRs. Further analysis demonstrated a significant in vivo effect for one of the peptides in a mouse inflammation model.
The results revealed that only half of the extracellular proteolytic sites are currently annotated, leaving over 3600 unannotated. Furthermore, we found that only 6% of the unannotated sites are similar to known proteolytic sites, whereas the remaining 94% share no significant similarity with any annotated proteolytic site. The computational challenges in these two cases are very different: while precision in detecting the former group is close to perfect, a mere 22% of the latter group were detected at a precision of 80%. The applicability of the classifier is demonstrated on members of the FGF family, for which we verified the conservation of physiologically relevant proteolytic sites in homologous proteins.
In this work we lower-bound the individual-sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, Q_T, and on the learning rate. In particular, we show that any learning rate that guarantees a regret upper bound of O(√Q_T) necessarily implies an Ω(√Q_T) anytime regret on any sequence with quadratic variation Q_T. The algorithms we consider are online linear optimization forecasters whose weight vector at time t+1 is the gradient of a concave potential function of the cumulative losses at time t. We show that this family includes all linear Regularized Follow the Leader algorithms. We prove our result for potentials with negative definite Hessians, and for potentials in the best-expert setting satisfying some natural regularity conditions. In the best-expert setting, we state our result in terms of the translation-invariant relative quadratic variation. We apply our lower bounds to Randomized Weighted Majority and to linear-cost Online Gradient Descent. Finally, we show that our analysis generalizes to measures of variation beyond quadratic variation, and we apply this generalized analysis to Online Gradient Descent with a regret upper bound that depends on the variance of the losses.
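To make the class of algorithms concrete, here is a minimal sketch (not the paper's analysis) of two potential-based forecasters in the sense described above: the weight vector at time t+1 is the gradient of a concave potential of the cumulative losses at time t. The potentials and learning rates below are standard illustrative choices, not values taken from this work.

```python
import numpy as np

def rwm_weights(cum_losses, eta):
    """Randomized Weighted Majority: the weight vector is the gradient of
    the concave potential Phi(L) = (1/eta) * log(sum_i exp(-eta * L_i)),
    i.e. a softmax over negated cumulative losses."""
    z = -eta * (cum_losses - cum_losses.min())  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def ogd_weights(cum_losses, eta):
    """Unconstrained linear-cost Online Gradient Descent: the weight vector
    is the gradient of the concave potential Phi(L) = -(eta/2) * ||L||^2,
    which gives w_{t+1} = -eta * L_t (the cumulative gradient step)."""
    return -eta * cum_losses

# Toy run: T = 10 rounds, 2 experts, losses in [0, 1].
rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 1.0, size=(10, 2))
L = np.cumsum(losses, axis=0)          # cumulative losses per expert
p = rwm_weights(L[-1], eta=0.5)        # probability vector over experts
w = ogd_weights(L[-1], eta=0.1)        # unconstrained weight vector
```

Both updates are instances of linear Regularized Follow the Leader: the softmax potential corresponds to entropic regularization, the quadratic potential to Euclidean regularization.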