Abstract: The sum-rank metric naturally extends both the Hamming and rank metrics in coding theory over fields. It measures the error-correcting capability of codes in multishot matrix-multiplicative channels (e.g. linear network coding or the discrete memoryless channel on fields). Although this metric has already been shown to be of interest in several applications, not much is known about it. In this work, sum-rank supports for codewords and linear codes are introduced and studied, with emphasis on duality. The lattice st…
“…In the last few years, a deep mathematical theory of sum-rank-metric codes was developed in a series of papers by Martínez-Peñas [26,29,30]. These codes can be seen as a generalization of Hamming-metric codes and rank-metric codes.…”
Section: Introduction
“…We recall a slightly different notion of sum-rank metric code, in which the codewords are vectors with entries from an extension field F_{q^m} rather than matrices over F_q. The interested reader is referred to [26,28,29,31,34] for a more detailed description of this setting. The F_q-rank of a vector v = (v_1, .…”
Section: Introduction
“…Hence, we can give the following definitions of sum-rank support, rank-list and rank-profile for elements in F_{q^m}^n. For a deeper understanding of these notions we refer the reader to [26]. Definition 2.10.…”
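As a concrete illustration of the matrix-space view behind these definitions (not taken from the cited papers), the sum-rank weight of a codeword can be computed as the sum of the ranks of its matrix blocks over the base field. The sketch below works over F_2; the helper names `rank_gf2` and `sum_rank_weight` are illustrative, not from the source.

```python
def rank_gf2(rows):
    """Rank of a binary matrix (rows of 0/1 ints) via Gaussian elimination mod 2."""
    rows = [r[:] for r in rows]  # work on a copy
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # eliminate this column from all other rows (XOR = addition in GF(2))
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def sum_rank_weight(blocks):
    """Sum-rank weight of a tuple of matrix blocks: the sum of their ranks."""
    return sum(rank_gf2(b) for b in blocks)

# A codeword split into two 2x2 blocks over GF(2):
blocks = [[[1, 0], [0, 1]],   # identity block, rank 2
          [[1, 1], [1, 1]]]   # repeated-row block, rank 1
print(sum_rank_weight(blocks))  # prints 3
```

With a single block this reduces to the rank weight; with 1x1 blocks it reduces to the Hamming weight, matching the sense in which the sum-rank metric interpolates between the two.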
We provide a geometric characterization of k-dimensional F_{q^m}-linear sum-rank metric codes as tuples of F_q-subspaces of F_{q^m}^k. We then use this characterization to study one-weight codes in the sum-rank metric. This leads us to extend the family of linearized Reed-Solomon codes in order to obtain a doubly-extended version of them. We prove that these codes are still maximum sum-rank distance (MSRD) codes and, when k = 2, they are one-weight, as in the Hamming-metric case. We then focus on constant rank-profile codes in the sum-rank metric, which are a special family of one-weight codes, and derive constraints on their parameters with the aid of an associated Hamming-metric code. Furthermore, we introduce the n-simplex codes in the sum-rank metric, which are obtained as the orbit of a Singer subgroup of GL(k, q^m). They turn out to be constant rank-profile, and hence one-weight, and generalize the simplex codes in both the Hamming and the rank metric. Finally, we focus on 2-dimensional one-weight codes, deriving constraints on the parameters of those which are also MSRD, and we find a new construction of one-weight MSRD codes when q = 2.
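The one-weight property that these sum-rank simplex codes generalize can be checked directly in the classical Hamming-metric case: the binary simplex code of dimension k, whose generator matrix has all nonzero vectors of F_2^k as columns, has every nonzero codeword of weight 2^(k-1). A brute-force check of this fact (an illustrative sketch, not code from the source):

```python
from itertools import product

k = 3
# Generator matrix columns: all 2^k - 1 nonzero binary k-vectors.
cols = [v for v in product([0, 1], repeat=k) if any(v)]

weights = set()
for msg in product([0, 1], repeat=k):
    if not any(msg):
        continue  # skip the zero message (zero codeword)
    # Coordinate at column c is the inner product <msg, c> mod 2.
    codeword = [sum(m * c for m, c in zip(msg, col)) % 2 for col in cols]
    weights.add(sum(codeword))

print(weights)  # prints {4}: every nonzero codeword has weight 2^(k-1)
```

The same computation with rank weights in place of Hamming weights is what underlies the constant rank-profile verification for the sum-rank simplex codes described above.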
“…The sum-rank distance, the distance that has been widely considered for multishot network coding, was first introduced in [27] under the name of extended rank distance. We note that the sum-rank distance has also been used in the context of block codes to reduce their decoding complexity; see for instance [21].…”
Section: Metrics for Multi-shot Network Coding
“…Thus, some sequences are not considered in the time interval [0, j], and therefore this metric is not sufficient to guarantee decoding within a time interval. Hence, a new distance, called the sum-rank distance, was introduced as a generalization of the active column rank distance and the rank distance used for one-shot network coding (see [19], [21] and [27]). This new distance has proven to be the proper notion for dealing with networks that are delay-free.…”
Let F[D] be the polynomial ring over a finite field F. Convolutional codes are submodules of F[D]^n that can be described by left prime polynomial matrices. In the last decade there has been great interest in convolutional codes equipped with a rank metric, called the sum-rank metric, due to their wide range of applications in reliable linear network coding. However, this metric is suitable only for delay-free networks. In this work we continue this thread of research and introduce a new metric that overcomes this restriction and is therefore suitable for handling more general networks. We study this metric and provide characterizations of the distance properties in terms of the polynomial matrix representations of the convolutional code. Convolutional codes that are optimal with respect to this new metric are investigated, and concrete constructions are presented. These codes are the analogues of maximum distance profile convolutional codes in the context of network coding. Moreover, we show that they can be built upon a class of superregular matrices, with entries in an extension field, that preserve their superregularity properties even after multiplication by certain matrices with entries in the ground field.
The main purpose of this paper is to further study the structure, parameters and constructions of the recently introduced minimal codes in the sum-rank metric. These objects form a bridge between the classical minimal codes in the Hamming metric, the subject of intense research over the past three decades partly because of their cryptographic properties, and the more recent rank-metric minimal codes. We prove some bounds on their parameters, existence results, and, via a tool that we name the geometric dual, we manage to construct minimal codes with few weights. A generalization of the celebrated Ashikhmin-Barg condition is proved and used to ensure the minimality of certain constructions.