Convolutional neural networks (CNNs) have achieved remarkable success in many areas in recent years, and VGG is a widely used CNN model. However, it has limitations: its large number of parameters occupies significant memory, which restricts its application in resource-constrained scenarios such as mobile devices and embedded systems. In CNN models, convolutional layers at different depths extract features of different granularity, which carry different levels of importance for image recognition. Here, we propose a new VGG architecture that combines features of different granularity from block1, block2, block3, block4, and block5 of VGG. Each block is followed by a local fully connected layer that reduces the dimensionality of its coarse or fine features, and the five resulting feature vectors are concatenated as the input of the first global fully connected layer. By combining features from different blocks, the architecture allows information to flow directly from lower layers to a fully connected layer and increases feature reuse without adding too many connections. Because the five local fully connected layers add parameters, we reduce the number of neurons in the two global fully connected layers to keep the overall parameter count down. The well-known CIFAR-10 and MNIST datasets were used to evaluate the network's classification performance. The experimental results show that the proposed model achieves better training and testing performance than the traditional VGG while reducing the number of parameters.
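The trade-off this abstract describes, extra local fully connected layers offset by smaller global ones, can be made concrete with a back-of-the-envelope parameter count. The block output sizes, the 128-unit local layers, and the 256-unit global layers below are illustrative assumptions (a 32x32 CIFAR-10-sized input and standard VGG16 channel widths), not the paper's actual dimensions:

```python
def fc_params(n_in, n_out):
    """Weights plus biases of one fully connected layer."""
    return n_in * n_out + n_out

# Assumed flattened output size of each VGG block for a 32x32 input
# (channels x height x width after each 2x2 max-pool).
block_dims = [
    64 * 16 * 16,   # block1
    128 * 8 * 8,    # block2
    256 * 4 * 4,    # block3
    512 * 2 * 2,    # block4
    512 * 1 * 1,    # block5
]

# Plain VGG16-style head: block5 features -> 4096 -> 4096 -> 10 classes.
baseline_params = (fc_params(512, 4096)
                   + fc_params(4096, 4096)
                   + fc_params(4096, 10))

# Proposed head: one local FC per block (here 128 units each), the five
# outputs concatenated, then two smaller global FCs (here 256 units).
local_params = sum(fc_params(d, 128) for d in block_dims)
concat_dim = 5 * 128
proposed_params = (local_params
                   + fc_params(concat_dim, 256)
                   + fc_params(256, 256)
                   + fc_params(256, 10))

print(baseline_params, proposed_params)
```

Under these assumed sizes, the multi-granularity head needs only a fraction of the fully connected parameters of the plain 4096-unit head, which is the motivation for shrinking the global layers.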
The cognitive activities of human beings are complicated and diverse. So far, no universal cognitive model exists; each cognitive model generally represents cognitive features in only one or a few aspects. Therefore, based on the theory and principles of granular computing, and taking attributes and their laws of change as the main objects of the cognitive process, this paper proposes a cognitive model that can both describe a thing through its attributes to represent its law of change and simulate the cognitive decision-making process with respect to changes in the thing's attributes. Attribute granular computing, based on qualitative mapping, can simulate cognitive functions of the human brain such as granulation, organization, and causation. Petri nets have asynchronous, concurrent, and uncertain characteristics, similar to those of some cognitive activities in the human thinking process. In this paper, the Petri net is extended based on the basic concepts and logical calculation rules of attribute granular computing. Basic elements of a cognitive system, such as knowledge representation, reasoning, learning, and memory modes, are initially demonstrated in the extended Petri net. The results show that this method can reflect the cognitive process of uncertainty identification and decision-making to a certain extent.
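As background for the extension described above, the classical place/transition Petri net firing rule can be sketched in a few lines. This is a generic illustration of ordinary Petri nets, not the attribute-granule extension proposed in the paper, and all place and transition names are hypothetical:

```python
class PetriNet:
    """Minimal classical Petri net: places hold tokens, a transition fires
    when every input place holds at least one token."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place, add one to each output."""
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True
```

The asynchronous, concurrent behavior the abstract mentions comes from this rule: any enabled transition may fire, and independent transitions can fire in any order.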
Convolutional neural networks (CNNs) have achieved great success in image classification tasks. In a convolutional operation, a larger input area captures more context information. Stacking several convolutional layers enlarges the receptive field, but it also increases the number of parameters. Most CNN models use pooling layers to extract important features, but pooling operations cause information loss. Transposed convolution can increase the spatial size of the feature maps to recover the lost low-resolution information. In this study, we used two branches with different dilation rates to obtain features of different sizes. Dilated convolution can capture richer information, and the outputs of the two channels are concatenated as input for the next block. The small feature maps of the top blocks are upsampled by transposed convolution to recover low-resolution prediction maps. We evaluated the model, CDTNet, on three image classification benchmark datasets (CIFAR-10, SVHN, and FMNIST) against four state-of-the-art models, namely VGG16, VGG19, ResNeXt, and DenseNet. The experimental results show that CDTNet achieved lower loss, higher accuracy, and faster convergence in both the training and test stages. The average test accuracy of CDTNet increased by 54.81% at most on SVHN with VGG19 and by 1.28% at least on FMNIST with VGG16, which shows that CDTNet has better performance, strong generalization ability, and fewer parameters.
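The receptive-field argument in this abstract, that stacking layers widens context at the cost of parameters while dilation widens it with the same kernel size, can be checked with the standard stride-1 receptive-field formula. The dilation rates below are illustrative choices, not the two branch rates used in the paper:

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs.
    For stride 1, each layer adds (kernel_size - 1) * dilation pixels.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three plain 3x3 convolutions vs. three 3x3 convolutions with
# dilation rates 1, 2, 4 -- same weight count, wider receptive field.
plain = receptive_field([(3, 1), (3, 1), (3, 1)])
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])
print(plain, dilated)
```

Both stacks hold the same number of weights (three 3x3 kernels per channel pair), yet the dilated stack sees a much larger input area, which is why dilated branches can capture richer context without adding parameters.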