Decoding random linear codes is a well-studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball-collision technique of Bernstein, Lange and Peters lowered the complexity of Stern's information set decoding algorithm to 2^{0.0556n}. Using representations, this bound was improved to 2^{0.0537n} by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^{0.0494n}.
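The algorithms above all accelerate the same baseline: plain information-set decoding in the style of Prange. As a point of reference, here is a minimal sketch of that baseline (function names are ours, not from the paper): guess that the error vector is supported on r randomly chosen columns of the parity-check matrix, solve the resulting linear system over GF(2), and accept if the solution has small Hamming weight.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.
    Returns the solution vector, or None if A is singular."""
    A = (A % 2).astype(np.uint8)
    b = (b % 2).astype(np.uint8)
    r = A.shape[0]
    for col in range(r):
        piv = next((i for i in range(col, r) if A[i, col]), None)
        if piv is None:
            return None
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        for i in range(r):
            if i != col and A[i, col]:
                A[i] ^= A[col]
                b[i] ^= b[col]
    return b

def prange_isd(H, s, w, iters=2000, seed=1):
    """Plain information-set decoding: repeatedly guess that the error is
    supported on r random columns of H, solve H[:, cols] x = s over GF(2),
    and keep the result if its Hamming weight is at most w."""
    rng = np.random.default_rng(seed)
    r, n = H.shape
    for _ in range(iters):
        cols = rng.permutation(n)[:r]
        x = gf2_solve(H[:, cols], s)
        if x is None or int(x.sum()) > w:
            continue
        e = np.zeros(n, dtype=np.uint8)
        e[cols] = x
        return e
    return None
```

Stern's algorithm, ball collision, and the representation technique all improve on this loop by allowing a few error positions inside the information set and finding them with birthday-style collision searches.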
To solve the approximate nearest neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height 1 − α, where a vector survives a filter if it is contained in the corresponding spherical cap, and where ideally each filter has an independent, uniformly random direction. For small α, these filters are very similar to the spherical locality-sensitive hash (LSH) family previously studied by Andoni et al. For larger α bounded away from 0, these filters potentially achieve a superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas existing LSH schemes are limited by a performance parameter of ρ ≥ 1/(2c^2 − 1) to solve approximate NNS with approximation factor c, with spherical LSF we potentially achieve smaller asymptotic values of ρ, depending on the density of the data set. For sparse data sets where the dimension is super-logarithmic in the size of the data set, we asymptotically obtain ρ = 1/(2c^2 − 1), while for a logarithmic dimensionality with density constant κ we obtain asymptotics of ρ ∼ 1/(4κc^2). To instantiate the filters and prove the existence of an efficient decoding oracle, we replace the independent filters by filters taken from certain structured random product codes. We show that the additional structure in these concatenation codes allows us to decode efficiently using techniques similar to lattice enumeration, and we can find the relevant filters with low overhead, while at the same time not significantly changing the collision probabilities.
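The survival condition above has a simple linear-algebra form: a unit vector v lies in the spherical cap of height 1 − α around direction d exactly when ⟨v, d⟩ ≥ α. The sketch below (names ours) uses independent uniformly random directions, i.e. the idealized filters; the paper's structured random product codes replace the brute-force scan with an efficient decoding oracle.

```python
import numpy as np

def make_filters(m, dim, seed=0):
    """Sample m filter directions uniformly from the unit sphere.
    (A stand-in for the structured random product codes of the paper,
    which make finding relevant filters efficient.)"""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal((m, dim))
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def surviving_filters(v, filters, alpha):
    """A unit vector v survives the filter with direction d iff v lies in
    the spherical cap of height 1 - alpha around d, i.e. <v, d> >= alpha."""
    return np.flatnonzero(filters @ v >= alpha)
```

Two nearby unit vectors tend to survive many filters in common, while far-apart vectors rarely do, which is what makes bucketing by surviving filters useful for NNS.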
Abstract. At Eurocrypt 2010, Howgrave-Graham and Joux described an algorithm for solving hard knapsacks of density close to 1 in time Õ(2^{0.337n}) and memory Õ(2^{0.256n}), thereby improving a 30-year-old algorithm by Shamir and Schroeppel. In this paper we extend the Howgrave-Graham–Joux technique to get an algorithm with running time down to Õ(2^{0.291n}). An implementation shows the practicability of the technique. Another challenge is to reduce the memory requirement. We describe a constant-memory algorithm based on cycle finding with running time Õ(2^{0.72n}); we also show a time–memory tradeoff.
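Both the Shamir–Schroeppel algorithm and the representation technique refine the basic meet-in-the-middle idea for subset sum. As context, here is a toy Horowitz–Sahni-style sketch of that Õ(2^{n/2}) baseline (not the Õ(2^{0.291n}) algorithm of the paper; names ours): enumerate the subset sums of each half of the weights, then look for a pair of half-sums matching the target.

```python
def subset_sums(weights):
    """Map every achievable subset sum of `weights` to one subset
    (as a tuple of indices) achieving it."""
    sums = {0: ()}
    for i, w in enumerate(weights):
        for s, sub in list(sums.items()):
            sums.setdefault(s + w, sub + (i,))
    return sums

def knapsack_mitm(weights, target):
    """Meet-in-the-middle subset sum: split the weights in halves,
    enumerate 2^(n/2) sums per half, and match pairs against the target.
    Time and memory O(2^{n/2})."""
    n = len(weights)
    left = subset_sums(weights[: n // 2])
    right = subset_sums(weights[n // 2:])
    for s, sub in left.items():
        rem = target - s
        if rem in right:
            return sub + tuple(i + n // 2 for i in right[rem])
    return None
```

The representation technique gains over this baseline by writing the solution as a sum of two overlapping half-solutions in exponentially many ways and keeping only a fraction of each enumeration.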
Contrary to large areas in Amazonia of tropical moist forests with a pronounced dry season, tropical wet forests in Costa Rica do not depend on deep roots to maintain an evergreen forest canopy through the year. At our Costa Rican tropical wet forest sites, we found a large carbon stock in the subsoil of deeply weathered Oxisols, even though only 0.04–0.2% of the measured root biomass (>2 mm diameter) to 3 m depth was below 2 m. In addition, we demonstrate that 20% or more of this deep soil carbon (depending on soil type) can be mobilized after forest clearing for pasture establishment. Microbial activity between 0.3 and 3 m depth contributed about 50% of the total microbial activity in these soils, confirming the importance of the subsoil in C cycling. Depending on soil type, forest clearing for pasture establishment resulted in anywhere from no change to a slight addition of carbon in the topsoil (0–0.3 m depth). However, this effect was countered by a substantial loss of C stocks in the subsoil (1–3 m depth). Our results show that large stocks of relatively labile carbon are not limited to areas with a prolonged dry season, but can also be found in deeply weathered soils below tropical wet forests. Forest clearing in such areas may produce unexpectedly high C losses from the subsoil.
Abstract. Combining the efficient cross-polytope locality-sensitive hash family of Terasawa and Tanaka with the heuristic lattice sieve algorithm of Micciancio and Voulgaris, we show how to obtain heuristic and practical speedups for solving the shortest vector problem (SVP) on both arbitrary and ideal lattices. In both cases, the asymptotic time complexity for solving SVP in dimension n is 2^{0.298n+o(n)}. For any lattice, hashes can be computed in polynomial time, which makes our CPSieve algorithm much more practical than the SphereSieve of Laarhoven and De Weger, while the better asymptotic complexities imply that this algorithm will outperform the GaussSieve of Micciancio and Voulgaris and the HashSieve of Laarhoven in moderate dimensions as well. We performed tests to show this improvement in practice. For ideal lattices, by observing that the hash of a shifted vector is a shift of the hash value of the original vector and constructing rerandomization matrices which preserve this property, we obtain not only a linear decrease in the space complexity, but also a linear speedup of the overall algorithm. We demonstrate the practicability of our cross-polytope ideal lattice sieve IdealCPSieve by applying the algorithm to cyclotomic ideal lattices from the ideal SVP challenge and to lattices which appear in the cryptanalysis of NTRU.
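The core of the cross-polytope hash is cheap: map a vector to the closest vertex ±e_i of the cross-polytope, i.e. to its largest-magnitude coordinate with its sign. A minimal sketch (in practice the input is first multiplied by a pseudo-random rotation, and for ideal lattices the paper's rerandomization matrices are chosen to preserve the shift property):

```python
import numpy as np

def cp_hash(x):
    """Cross-polytope hash: encode the vertex +/- e_i of the
    cross-polytope closest to x/||x|| as a signed 1-based index,
    i.e. the position and sign of the largest-magnitude coordinate."""
    i = int(np.argmax(np.abs(x)))
    return (i + 1) if x[i] > 0 else -(i + 1)
```

The shift property mentioned above is visible directly: cyclically rotating the coordinates of x cyclically shifts the hash index, which is what allows one stored vector to stand in for all of its rotations in the ideal-lattice sieve.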
In this paper, we present a heuristic algorithm for solving exact, as well as approximate, shortest vector and closest vector problems on lattices. The algorithm can be seen as a modified sieving algorithm for which the vectors of the intermediate sets lie in overlattices or translated cosets of overlattices. The key idea is hence no longer to work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in an overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice. Finally, we obtain solution vector(s) in the initial lattice as a sum of vectors of an overlattice. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case. This new approach allows us to solve not only shortest vector problems, but also closest vector problems, in lattices of dimension n in time 2^{0.3774n} using memory 2^{0.2925n}. Moreover, the algorithm is straightforward to parallelize on most computer architectures.
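The "take sums to move down the tower" step can be illustrated with a toy sketch (ours, not the paper's implementation): given a list of short vectors in an overlattice, form pairwise sums and differences and keep the nonzero ones that land in the sublattice and remain short. The sublattice membership test is passed in as a predicate, and the Gaussian-heuristic control of list sizes and radii is omitted.

```python
import numpy as np

def descend(sample, in_sublattice, radius):
    """One step down the tower of lattices: combine pairs u +/- v of
    overlattice vectors from `sample` and keep the nonzero results that
    lie in the sublattice and have norm at most `radius`."""
    kept = []
    for i, u in enumerate(sample):
        for v in sample[i + 1:]:
            for w in (u + v, u - v):
                if in_sublattice(w) and 0 < np.linalg.norm(w) <= radius:
                    kept.append(w)
    return kept
```

For example, with the overlattice Z^2 and the index-2 sublattice of vectors with even coordinate sum, sums of sampled short vectors with odd coordinate sums land in the sublattice while staying short; iterating this step down the tower eventually yields short vectors of the original lattice.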
<div>Abstract. Purpose:<p>Elevation of L-2-hydroxyglutarate (L-2-HG) in renal cell carcinoma (RCC) is due in part to reduced expression of L-2-HG dehydrogenase (L2HGDH). However, the contribution of L-2-HG to renal carcinogenesis and insight into the biochemistry and targets of this small molecule remain to be elucidated.</p>Experimental Design:<p>Genetic and pharmacologic approaches to modulate L-2-HG levels were assessed for effects on <i>in vitro</i> and <i>in vivo</i> phenotypes. Metabolomics was used to dissect the biochemical mechanisms that promote L-2-HG accumulation in RCC cells. Transcriptomic analysis was utilized to identify relevant targets of L-2-HG. Finally, bioinformatic and metabolomic analyses were used to assess the L-2-HG/L2HGDH axis as a function of patient outcome and cancer progression.</p>Results:<p>L2HGDH suppresses both <i>in vitro</i> cell migration and <i>in vivo</i> tumor growth and these effects are mediated by L2HGDH's catalytic activity. Biochemical studies indicate that glutamine is the predominant carbon source for L-2-HG via the activity of malate dehydrogenase 2 (MDH2). Inhibition of the glutamine-MDH2 axis suppresses <i>in vitro</i> phenotypes in an L-2-HG–dependent manner. Moreover, <i>in vivo</i> growth of RCC cells with basal elevation of L-2-HG is suppressed by glutaminase inhibition. Transcriptomic and functional analyses demonstrate that the histone demethylase KDM6A is a target of L-2-HG in RCC. Finally, increased L-2-HG levels, <i>L2HGDH</i> copy loss, and lower L2HGDH expression are associated with tumor progression and/or worsened prognosis in patients with RCC.</p>Conclusions:<p>Collectively, our studies provide biochemical and mechanistic insight into the biology of this small molecule and provide new opportunities for treating L-2-HG–driven kidney cancers.</p></div>