We demonstrate that the structure of a 3D point set with a single bilateral symmetry can be reconstructed from an uncalibrated affine image, modulo a Euclidean transformation, up to a four-parameter family of symmetric objects that could have given rise to the image. If the object has two orthogonal bilateral symmetries, the shape can be reconstructed modulo similarity. Both results are demonstrated using real images with uncalibrated cameras.
Quantization is a popular way of increasing the speed and lowering the memory usage of Convolutional Neural Networks (CNNs). When labelled training data is available, network weights and activations have successfully been quantized down to 1 bit. The same cannot be said when labelled training data is not available, e.g. when quantizing a pre-trained model, where current approaches show, at best, no loss of accuracy at 8-bit quantization. We introduce DSConv, a flexible quantized convolution operator that replaces single-precision operations with their far less expensive integer counterparts while maintaining the probability distributions over both the kernel weights and the outputs. We test our model as a plug-and-play replacement for standard convolution on the most popular neural network architectures (ResNet, DenseNet, GoogLeNet, AlexNet and VGG-Net) and demonstrate state-of-the-art results, with less than 1% loss of accuracy, without retraining, using only 4-bit quantization. We also show how a distillation-based adaptation stage with unlabelled data can improve results even further.
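The core idea of low-bit post-training quantization can be sketched as follows. This is a minimal illustration, not the DSConv operator itself: `quantize_int4`, the block size, and the per-block scaling scheme are all assumptions chosen for clarity. Each block of weights is mapped to signed 4-bit integers with a floating-point scale, so the expensive multiply-accumulates can run in integer arithmetic while the dequantized values stay close to the originals.

```python
import numpy as np

def quantize_int4(w, block_size=32):
    """Quantize a weight tensor to signed 4-bit integers with one
    floating-point scale per block, then dequantize back.
    Returns the dequantized tensor and the max absolute error."""
    flat = w.ravel()
    pad = (-flat.size) % block_size
    flat = np.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block_size)
    # Scale each block so its max magnitude maps to the int4 range [-7, 7].
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(blocks / scale), -7, 7).astype(np.int8)
    deq = (q * scale).ravel()[:w.size].reshape(w.shape)
    return deq, np.abs(deq - w).max()

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 3, 3, 3)).astype(np.float32)  # conv kernel
deq, err = quantize_int4(w)
```

Because the scale is derived from each block's own maximum, the rounding error per element is at most half a quantization step, which is why accuracy can survive such aggressive compression without retraining.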
Prior quantization methods focus on producing networks for fast and lightweight inference. However, the cost of unquantized training is overlooked, despite requiring significantly more time and energy than inference. We present a method for quantizing convolutional neural networks for efficient training. Quantizing gradients is challenging because they require higher granularity and their values span a wider range than the weights and feature maps. We propose an extension of the Channel-wise Block Floating Point format that allows for quick gradient computation with minimal quantization overhead. This is achieved by sharing an exponent across both the depth and batch dimensions, so that tensors are quantized once and reused during backpropagation. We test our method using standard models such as AlexNet, VGG, and ResNet, on the CIFAR-10, SVHN and ImageNet datasets. We show no loss of accuracy when quantizing AlexNet weights, activations and gradients to only 4 bits when training on ImageNet.
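The shared-exponent scheme described above can be sketched in a few lines. This is an illustrative block-floating-point quantizer under assumed conventions, not the paper's exact format: `bfp_quantize`, the choice of 4 mantissa bits, and the axes shared here are stand-ins. One power-of-two exponent is computed per block spanning the batch and channel dimensions, and every element in the block keeps only a short signed mantissa.

```python
import numpy as np

def bfp_quantize(x, mantissa_bits=4, shared_axes=(0, 1)):
    """Block floating point: one power-of-two exponent is shared across
    `shared_axes` (here batch and channel), and each element keeps a
    signed `mantissa_bits`-bit mantissa. Returns the dequantized tensor."""
    max_abs = np.max(np.abs(x), axis=shared_axes, keepdims=True)
    max_abs = np.where(max_abs == 0.0, 1.0, max_abs)   # guard empty blocks
    exp = np.ceil(np.log2(max_abs))                    # shared exponent
    step = 2.0 ** (exp - (mantissa_bits - 1))          # LSB value per block
    m_max = 2 ** (mantissa_bits - 1) - 1               # e.g. 7 for 4 bits
    mantissa = np.clip(np.round(x / step), -m_max - 1, m_max)
    return mantissa * step

rng = np.random.default_rng(1)
grads = rng.standard_normal((8, 16, 4, 4)).astype(np.float32)  # N, C, H, W
q = bfp_quantize(grads)
err = np.max(np.abs(q - grads))
```

Because the exponent is shared over both batch and channel, a tensor is quantized once and the same low-bit representation can be reused for the several matrix products of backpropagation, which is where the training-time savings come from.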
The need for electronic messaging services and the potential for participants in the European Electronic Interchange market are considered. It is argued that the future growth and popularisation of EDI and global messaging will be considerable and that EDI has a significant role to play in the actual realisation of a single European market in 1992.