The effect of early feed restriction on metabolic programming and compensatory growth was studied in broiler chickens. A total of 480 female 1-d-old broiler birds (Aconred) were randomly allocated to ad libitum and feed-restricted groups, each of which was replicated 6 times with 40 birds per replicate. Broilers were provided commercial diets. Feed-restricted broilers were deprived of feed for 4 h per day from 1 to 21 d of age. Effects of treatments were determined at 21 and 63 d of age. In feed-restricted birds at 21 d of age, BW, average daily gain, average daily feed intake, and breast muscle yield (P < 0.01), as well as carcass yield (P < 0.05) and abdominal fat yield (P < 0.05), were decreased. Ether extract content in breast muscle was increased (P < 0.01), whereas CP content was slightly decreased. Serum triiodothyronine (P < 0.01) and thyroxine (P < 0.05) were decreased. Serum free fatty acid and very low density lipoprotein were slightly increased, whereas triglyceride and glucose were decreased (P < 0.01). Activities of NADPH-generating enzymes in liver, including malic dehydrogenase, isocitrate dehydrogenase, and glucose-6-phosphate dehydrogenase, remained unchanged relative to ad libitum birds, whereas hormone-sensitive lipase activity was increased (P < 0.01). In feed-restricted birds at 63 d of age, BW, average daily gain, average daily feed intake, carcass yield, breast muscle yield, and serum triiodothyronine and thyroxine were similar to those of ad libitum birds, whereas abdominal fat yield was increased (P < 0.05). Ether extract content in breast muscle was decreased (P < 0.01), whereas CP content was increased (P < 0.05). Activities of NADPH-generating enzymes in liver and abdominal fat were significantly increased, except for malic dehydrogenase in abdominal fat, whereas hormone-sensitive lipase activity was decreased (P < 0.01) in both tissues. Lipoprotein lipase activity was increased (P < 0.05) in abdominal fat.
In summary, early feed restriction severely affected growth performance and lipid metabolism in broilers during the early period. Because there was no statistical difference in final BW between the groups, nearly full compensatory growth was achieved. In addition, early feed restriction might have induced prolonged metabolic programming in chicks and led to adult obesity.
Optical computing provides unique opportunities in terms of parallelization, scalability, power efficiency and computational speed, and has attracted major interest for machine learning. Diffractive deep neural networks have been introduced earlier as an optical machine learning framework that uses task-specific diffractive surfaces designed by deep learning to all-optically perform inference, achieving promising performance for object classification and imaging. Here we demonstrate systematic improvements in diffractive optical neural networks based on a differential measurement technique that mitigates the strict non-negativity constraint of light intensity. In this differential detection scheme, each class is assigned to a separate pair of detectors behind the diffractive optical network, and the class inference is made by maximizing the normalized signal difference between the photodetector pairs. Using this differential detection scheme, involving 10 photodetector pairs behind 5 diffractive layers with a total of 0.2 million neurons, we numerically achieved blind testing accuracies of 98.54%, 90.54% and 48.51% for the MNIST, Fashion-MNIST and grayscale CIFAR-10 datasets, respectively. Moreover, by utilizing the inherent parallelization capability of optical systems, we reduced the cross-talk and optical signal coupling between the positive and negative detectors of each class by dividing the optical path into two jointly-trained diffractive neural networks that work in parallel. We further made use of this parallelization approach, and divided individual classes in a target dataset among multiple jointly-trained diffractive neural networks.
Using this class-specific differential detection in jointly-optimized diffractive neural networks that operate in parallel, our simulations achieved blind testing accuracies of 98.52%, 91.48% and 50.82% for the MNIST, Fashion-MNIST and grayscale CIFAR-10 datasets, respectively, coming close to the performance of some of the earlier generations of all-electronic deep neural networks, e.g., LeNet, which achieves classification accuracies of 98.77%, 90.27%, and 55.21% on the same datasets, respectively. In addition to these jointly-optimized diffractive neural networks, we also independently optimized multiple diffractive networks and utilized them in a way that is similar to ensemble methods practiced in machine learning; using 3 independently-optimized differential diffractive neural networks that optically project their light onto a common output/detector plane, we numerically achieved blind testing accuracies of 98.59%, 91.06% and 51.44% for the MNIST, Fashion-MNIST and grayscale CIFAR-10 datasets, respectively. Through these systematic advances in designing diffractive neural networks, the reported classification accuracies set the state of the art for an all-optical neural network design, and the presented framework might be useful to bring optical neural network-based low power solutions for various machine learning applications and help us design new computational c...
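The differential readout described above can be sketched numerically as follows. This is a minimal illustration of the decision rule only (the trained diffractive layers themselves are not modeled, and the function and variable names are my own assumptions, not the authors' code): each class score is the normalized intensity difference between its positive and negative detectors, and the predicted class maximizes that score.

```python
import numpy as np

def differential_class_scores(pos: np.ndarray, neg: np.ndarray) -> np.ndarray:
    """Normalized signal difference per class: (I+ - I-) / (I+ + I-)."""
    eps = 1e-12  # guards against division by zero for dark detector pairs
    return (pos - neg) / (pos + neg + eps)

def infer_class(pos: np.ndarray, neg: np.ndarray) -> int:
    """Inference rule: the class whose detector pair maximizes the score."""
    return int(np.argmax(differential_class_scores(pos, neg)))

# Example intensities for 10 classes; pair 3 has the largest normalized difference.
pos = np.array([0.2, 0.3, 0.1, 0.9, 0.4, 0.2, 0.5, 0.3, 0.2, 0.1])
neg = np.array([0.3, 0.2, 0.2, 0.1, 0.5, 0.3, 0.4, 0.4, 0.3, 0.2])
predicted = infer_class(pos, neg)  # 3
```

Because the score is a difference of two non-negative intensities, it can take either sign, which is how this scheme mitigates the non-negativity constraint of direct intensity detection.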
Research on optical computing has recently attracted significant attention due to the transformative advances in machine learning. Among different approaches, diffractive optical networks composed of spatially-engineered transmissive surfaces have been demonstrated for all-optical statistical inference and performing arbitrary linear transformations using passive, free-space optical layers. Here, we introduce a polarization-multiplexed diffractive processor to all-optically perform multiple, arbitrarily-selected linear transformations through a single diffractive network trained using deep learning. In this framework, an array of pre-selected linear polarizers is positioned between trainable transmissive diffractive materials that are isotropic, and different target linear transformations (complex-valued) are uniquely assigned to different combinations of input/output polarization states. The transmission layers of this polarization-multiplexed diffractive network are trained and optimized via deep learning and error-backpropagation by using thousands of examples of the input/output fields corresponding to each one of the complex-valued linear transformations assigned to different input/output polarization combinations. Our results and analysis reveal that a single diffractive network can successfully approximate and all-optically implement a group of arbitrarily-selected target transformations with a negligible error when the number of trainable diffractive features/neurons (N) approaches $$N_pN_iN_o$$, where $$N_i$$ and $$N_o$$ represent the number of pixels at the input and output fields-of-view, respectively, and $$N_p$$ refers to the number of unique linear transformations assigned to different input/output polarization combinations. This polarization-multiplexed all-optical diffractive processor can find various applications in optical computing and polarization-based machine vision tasks.
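The scaling condition above follows from a dimension-counting argument, which can be sketched as follows (my own illustration, not code from the paper): each complex-valued linear transform mapping $$N_i$$ input pixels to $$N_o$$ output pixels carries $$N_iN_o$$ free complex parameters, so encoding $$N_p$$ independent transforms in one network requires on the order of $$N_pN_iN_o$$ trainable diffractive features.

```python
def required_diffractive_features(n_p: int, n_i: int, n_o: int) -> int:
    """Dimension-counting estimate: Np independent complex linear transforms,
    each with Ni*No free complex entries, need about Np*Ni*No trainable
    diffractive features for negligible approximation error."""
    return n_p * n_i * n_o

# e.g. 4 input/output polarization combinations over 8x8 fields of view
n = required_diffractive_features(4, 8 * 8, 8 * 8)  # 16384
```

This is why adding polarization channels multiplies, rather than adds to, the number of neurons the diffractive volume must provide.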
A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D2NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D2NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
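The pruning step described above, selecting a small ensemble out of many independently trained D2NNs, can be illustrated with one common strategy, greedy forward selection (a minimal sketch under that assumption; the paper's actual pruning algorithm may differ, and all names here are hypothetical): repeatedly add the model whose inclusion most improves the accuracy of the averaged class scores.

```python
import numpy as np

def greedy_ensemble_prune(member_scores, labels, max_size):
    """Greedy forward selection over candidate models.

    member_scores: list of (n_samples, n_classes) score arrays, one per model.
    Returns the indices of the selected ensemble and its accuracy."""
    selected, best_acc = [], -1.0
    for _ in range(max_size):
        best_j = None
        for j in range(len(member_scores)):
            if j in selected:
                continue
            # Accuracy of the ensemble-averaged scores if model j were added.
            avg = np.mean([member_scores[k] for k in selected + [j]], axis=0)
            acc = float(np.mean(np.argmax(avg, axis=1) == labels))
            if acc > best_acc:
                best_acc, best_j = acc, j
        if best_j is None:  # no remaining model improves accuracy; stop early
            break
        selected.append(best_j)
    return selected, best_acc

# Toy example: model 1 classifies all four samples correctly, model 0 only two.
labels = np.array([0, 1, 1, 0])
scores_a = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.1, 0.9]])
scores_b = np.array([[0.6, 0.4], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
selected, acc = greedy_ensemble_prune([scores_a, scores_b], labels, max_size=2)
# selected == [1], acc == 1.0
```

The key property this sketch shares with the reported approach is that members are kept only when they help the *collective* decision, which is how a pruned ensemble can beat the average of its individual members.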