A light‐responsive system constructed from hydrogen‐bonded azo‐macrocycles demonstrates precisely controlled molecular encapsulation and release. DFT calculations and traveling‐wave ion mobility mass spectrometry reveal a significant decrease in cavity size in the course of the E→Z photoisomerization. These macrocyclic hosts exhibit a rare 2:1 host–guest stoichiometry and guest‐dependent slow or fast exchange on the NMR timescale. By combining slow host–guest exchange with the switchable shape change of the cavity, quantitative release and capture of bipyridinium guests is achieved, with a maximum release of 68 %. This work underscores the importance of slow host–guest exchange in realizing accurate, stepwise release of organic cations under light irradiation. The light‐responsive system established here could advance the design of novel photoresponsive molecular switches and mechanically interlocked molecules.
Through a combinatorial screening of 35 candidate phase-selective monopeptide-based organogelators, readily prepared at low cost, we identified five with high gelling ability toward aprotic aromatic solvents in powder form. The best of them (Fmoc-V-6) instantly and phase-selectively gels benzene, toluene, and xylenes in the presence of water at room temperature at a gelator loading of 6% w/v. This allows the gelled aromatics to be separated by filtration and both the aromatics and the gelling material to be recovered by distillation. We also identified Fmoc-I-16 as the best gelator for benzyl alcohol, and the corresponding organogel removes 82−99% of toxic dye molecules from highly concentrated aqueous solutions. This efficient removal of toxic organic solvents and dyes from water suggests promising applications in remediating contaminated water resources.
With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are being deployed on mobile devices to meet users' diverse and personalized needs. Unfortunately, deploying and running inference with deep learning models on resource-constrained devices is challenging. The traditional cloud-based approach runs the deep learning model on a cloud server, but transmitting large amounts of input data to the server over the WAN incurs substantial service latency, which is unacceptable for today's latency-sensitive and computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. Cogent operates in two stages: an automatic pruning and partition stage and a containerized deployment stage. It uses reinforcement learning (RL) to automatically predict pruning and partition strategies based on feedback from the hardware configuration and system conditions, so that the pruned and partitioned model better adapts to the system environment and the user's hardware. The resulting model is then deployed in containers on the device and the edge server to accelerate inference. Experiments show that this learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall inference process while largely maintaining accuracy, achieving speedups of up to 8.89× with an accuracy loss of no more than 7%.
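To make the partition decision concrete, the sketch below shows a brute-force search over split points for a layer-sequential network, of the kind an RL agent such as Cogent's would explore more efficiently. This is an illustrative assumption, not Cogent's actual algorithm: the per-layer latencies, activation sizes, and bandwidth value are hypothetical inputs that a real system would profile at runtime.

```python
# Hypothetical device-edge partition search (illustrative, not Cogent's code).
# Layers [0, split) run on the device; layers [split, n) run on the edge
# server; the activation produced at the split point is sent over the network.

def best_partition(device_ms, edge_ms, act_kb, bandwidth_kbps):
    """Return (split_index, latency_ms) minimizing estimated total latency.

    device_ms[i], edge_ms[i] -- per-layer latency on each side (ms).
    act_kb[s] -- data transmitted when splitting at s (act_kb[0] is the
                 raw input; act_kb[n] is the final, device-computed output).
    """
    n = len(device_ms)
    best = (0, float("inf"))
    for split in range(n + 1):
        dev_time = sum(device_ms[:split])            # device-side compute
        edge_time = sum(edge_ms[split:])             # edge-side compute
        tx_time = act_kb[split] / bandwidth_kbps * 1000.0  # transfer (ms)
        total = dev_time + tx_time + edge_time
        if total < best[1]:
            best = (split, total)
    return best
```

For example, with device latencies [5, 10, 20] ms, edge latencies [1, 2, 4] ms, activation sizes [100, 50, 10, 1] kb, and a 1000 kbps link, the search favors running the first two layers on the device and shipping the small intermediate activation to the edge, which is exactly the trade-off between compute placement and transmission cost that the abstract describes.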