The three types refer to polynomial, trigonometric, and hyperbolic splines. In this paper, we unify and extend them by a new kind of spline (UE-spline for short) defined over the space {cos t, sin t, 1, t, ..., t^l, ...}, where l is an arbitrary nonnegative integer. Existing splines, such as the usual polynomial B-splines, CB-splines, HB-splines, NUAT splines, AH splines, FB-splines, and the third-form FB-splines, are all special cases of UE-splines. UE-splines inherit most properties of the usual polynomial B-splines and enjoy some further properties advantageous for modelling. They can exactly represent classical conics, the catenary, the helix, and even the eight curve, a kind of snake-like curve.
A new kind of spline with variable frequencies, called ωB-spline, is presented. It not only unifies B-splines, trigonometric polynomial B-splines, and hyperbolic polynomial B-splines, but also produces new types of splines. ωB-spline bases are defined in the space spanned by {cos ωt, sin ωt, 1, t, ..., t^n, ...} with the sequence of frequencies ω, where n is an arbitrary nonnegative integer. ωB-splines preserve all the desirable properties of B-splines. Furthermore, they have some special properties advantageous for modeling free-form curves and surfaces.

Keywords: ωB-splines, frequencies, B-splines, trigonometric polynomial B-splines, hyperbolic polynomial B-splines
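The spanning space above mixes two trigonometric functions with a polynomial part. As an illustration only (the choice of ω = 1 and degree n = 2 is an assumption, not a value from the paper), the functions spanning this space can be evaluated as follows:

```python
import math

def ue_basis(t, omega=1.0, n=2):
    """Evaluate the spanning functions {cos ωt, sin ωt, 1, t, ..., t^n}
    of the ωB-spline space at parameter t.

    omega and n are illustrative choices; the paper allows an arbitrary
    sequence of frequencies and any nonnegative integer n.
    """
    trig_part = [math.cos(omega * t), math.sin(omega * t)]
    poly_part = [t ** k for k in range(n + 1)]  # 1, t, ..., t^n
    return trig_part + poly_part
```

A curve in this space is a linear combination of these functions; choosing different frequency values ω per knot interval is what gives the spline its "variable frequencies".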
Cross-modal retrieval has attracted great attention due to the increasing demand for tremendous amounts of multimodal data in recent years. These retrievals can be either text-to-image or image-to-text. To address the problem of mismatched information between images and texts, we propose two cross-modal retrieval techniques built on a dual-branch neural network defined over a common subspace and on hashing learning. First, a cross-modal retrieval technique based on a multilabel information deep ranking model (MIDRM) is provided. In this method, we introduce a triplet-loss function into the dual-branch neural network model. This function exploits the semantic information of the two modalities, attending not only to the similarities between matching image and text features but also to the distances between dissimilar images and texts. Second, we establish a new cross-modal hashing technique called the deep regularized hashing constraint (DRHC). In this method, a regularization function replaces the binary constraint, and the discrete values are constrained to a certain numerical range so that the network can be trained end to end. Overall, the time complexity is greatly improved, and the required storage space is also greatly reduced. Experiments on the proposed MIDRM and DRHC models demonstrate performance superior to state-of-the-art methods on two widely used data sets. The experimental results show that our approach also increases the mean average precision of cross-modal retrieval.
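The triplet loss described above, which pulls a matching image/text pair together while pushing a non-matching pair apart, can be sketched generically. The squared-Euclidean distance and the margin value 0.2 are assumptions for illustration; the paper's exact formulation may differ.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Generic triplet ranking loss over embedding vectors (plain lists).

    anchor:   embedding of a query (e.g. an image feature)
    positive: embedding of its matching item (e.g. the paired text)
    negative: embedding of a non-matching item
    The loss is zero once the negative is at least `margin` farther
    from the anchor than the positive, in squared Euclidean distance.
    """
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)
```

In a dual-branch network, the two branches map images and texts into the common subspace, and this loss is applied to triplets drawn across the two modalities so that both similarity and dissimilarity shape the shared embedding.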