The fluorescence quenching property of graphene oxide (GO) has recently been demonstrated and applied in fluorescence imaging and biosensing. In this work, a new nanostructure was designed to systematically study the quenching ability of GO. The key element of this design is a rigid silica spacer of adjustable thickness for manipulating the distance between GO and the fluorophores. First, a silica core modified with organic dye molecules was prepared, followed by the formation of a silica shell of tunable thickness. The GO was then wrapped around the silica nanoparticles through the electrostatic interaction between the negatively charged GO and the positively charged silica. The quenching efficiency of GO toward different dye molecules was studied at various spacer thicknesses and GO concentrations. Fluorescence lifetimes of the fluorophores were measured to determine the quenching mechanism. We found that the quenching efficiency of GO remained around 30% even when the distance between the dyes and GO exceeded 30 nm, indicating the long-range quenching ability of GO and confirming previous theoretical calculations. Quenching mechanisms were proposed schematically based on our experimental results. We expect that the proposed nanostructure can serve as a feasible model for studying the quenching property of GO and shed light on the design of GO-based fluorescence sensing systems.
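The long-range quenching reported above can be illustrated with the d⁻⁴ distance scaling that theory predicts for energy transfer from a fluorophore to a 2D sheet such as graphene. This is only a sketch: the characteristic distance `d0_nm` below is an illustrative value chosen so that the efficiency is roughly 30% at 30 nm, not a parameter fitted to the measurements in this work.

```python
def quenching_efficiency(d_nm, d0_nm=24.3, n=4):
    """Generic distance-dependent quenching efficiency, E(d) = 1 / (1 + (d/d0)^n).

    n = 4 reflects the d^-4 scaling predicted for energy transfer to a 2D
    sheet; d0_nm is an illustrative characteristic distance, not a value
    measured in this study.
    """
    return 1.0 / (1.0 + (d_nm / d0_nm) ** n)

# Efficiency falls off slowly with distance compared with point-dipole FRET:
for d in (5, 10, 20, 30, 40):
    print(f"d = {d:2d} nm -> E = {quenching_efficiency(d):.2f}")
```

With these illustrative parameters the model retains roughly 30% efficiency at 30 nm, consistent with the long-range behavior described above.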
Currently, there are few therapeutic options for the treatment of metastatic disease, as it often remains undetected until the disease burden is too high. Microporous poly(ε-caprolactone) biomaterials have been shown to attract metastasizing breast cancer cells in vivo early in tumor progression. To enhance the therapeutic potential of these scaffolds, they were modified so that infiltrating cells could be eliminated with non-invasive focal hyperthermia. Metal disks were incorporated into poly(ε-caprolactone) scaffolds to generate heat through electromagnetic induction by an oscillating magnetic field within a radiofrequency coil. Heat generation was modulated by varying the size of the metal disk, the strength of the magnetic field (at a fixed frequency), or the type of metal. When implanted subcutaneously in mice, the modified scaffolds were biocompatible and integrated well with the host tissue. Optimal parameters for in vivo heating were identified through a combination of computational modeling and ex vivo characterization to both predict and verify heat transfer dynamics and cell death kinetics during inductive heating. In vivo inductive heating of implanted, tissue-laden composite scaffolds led to tissue necrosis, as shown by histological analysis. The ability to thermally ablate captured cells non-invasively using biomaterial scaffolds has the potential to extend the application of focal thermal therapies to disseminated cancers.
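The heating behavior described above (induced power balanced against heat loss to surrounding tissue) can be sketched with a simple lumped-capacitance model, m·c·dT/dt = P − hA·(T − T_amb). All parameter values below are illustrative stand-ins, not measured properties of the scaffolds or the tissue in this work.

```python
def simulate_heating(power_W, hA_W_per_K, mass_kg=1e-4, c_J_per_kgK=3500.0,
                     T_amb_C=37.0, t_end_s=600.0, dt_s=0.1):
    """Lumped-capacitance heating: m*c*dT/dt = P - hA*(T - T_amb).

    Forward-Euler integration; parameters are illustrative only. The
    temperature plateaus at T_amb + P/hA once generation balances loss.
    """
    T = T_amb_C
    for _ in range(int(t_end_s / dt_s)):
        T += dt_s * (power_W - hA_W_per_K * (T - T_amb_C)) / (mass_kg * c_J_per_kgK)
    return T

# Larger disks or stronger fields -> more induced power -> higher plateau:
for P in (0.02, 0.05, 0.10):
    print(f"P = {P:.2f} W -> T_final = {simulate_heating(P, hA_W_per_K=0.005):.1f} C")
```

The plateau temperature P/hA above ambient is what the disk size, field strength, and metal type effectively tune in the experiments described above.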
Deep reinforcement learning (RL) is a data-driven method capable of discovering complex control strategies for high-dimensional systems, making it promising for flow control applications. In particular, the present work is motivated by the goal of reducing energy dissipation in turbulent flows, and the example considered is the spatiotemporally chaotic dynamics of the Kuramoto–Sivashinsky equation (KSE). A major challenge associated with RL is that substantial training data must be generated by repeatedly interacting with the target system, making it costly when the system is computationally or experimentally expensive. We mitigate this challenge in a data-driven manner by combining dimensionality reduction via an autoencoder with a neural ODE framework to obtain a low-dimensional dynamical model from just a limited data set. We substitute this data-driven reduced-order model (ROM) in place of the true system during RL training to efficiently estimate the optimal policy, which can then be deployed on the true system. For the KSE actuated with localized forcing ("jets") at four locations, we demonstrate that we are able to learn a ROM that accurately captures the actuated dynamics as well as the underlying natural dynamics just from snapshots of the KSE experiencing random actuations. Using this ROM and a control objective of minimizing dissipation and power cost, we extract a control policy from it using deep RL. We show that the ROM-based control strategy translates well to the true KSE and highlight that the RL agent discovers and stabilizes an underlying forced equilibrium solution of the KSE system. We show that this forced equilibrium captured in the ROM and discovered through RL is related to an existing known equilibrium solution of the natural KSE.
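The model-building stage of the pipeline above (snapshots under random actuation → dimensionality reduction → learned latent dynamics → rollout) can be sketched in a minimal linear analogue: a toy full-order system in place of the KSE, an SVD/PCA projection in place of the autoencoder, and a least-squares linear fit in place of the neural-ODE vector field. The RL stage is omitted; every name and dimension here is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-order system whose dynamics live on a low-dimensional subspace,
# standing in for the actuated KSE (n state dims, m actuators, r latent dims).
n, m, r = 20, 4, 3
W, _ = np.linalg.qr(rng.standard_normal((n, r)))    # invariant subspace
S = -np.eye(r) + 0.2 * rng.standard_normal((r, r))  # stable latent dynamics
A_full, B_full = W @ S @ W.T, W @ rng.standard_normal((r, m))

# Step 1: collect snapshots while driving the system with random actuation,
# the analogue of the random "jet" forcing used to generate training data.
dt, steps = 0.01, 1500
X = np.zeros((steps + 1, n))
U = rng.standard_normal((steps, m))
for k in range(steps):
    X[k + 1] = X[k] + dt * (A_full @ X[k] + B_full @ U[k])

# Step 2: linear "autoencoder" via SVD -> encoder P, decoder P.T.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:r]
Z = X @ P.T                                         # latent trajectory

# Step 3: fit latent dynamics z' = A_r z + B_r u by least squares,
# a linear stand-in for training the neural-ODE vector field.
dZ = (Z[1:] - Z[:-1]) / dt
G = np.hstack([Z[:-1], U])
coef, *_ = np.linalg.lstsq(G, dZ, rcond=None)
A_r, B_r = coef[:r].T, coef[r:].T

# Step 4: roll out the ROM and check it reproduces the latent trajectory;
# in the paper, RL training would now interact with this cheap surrogate.
z = Z[0].copy()
for k in range(steps):
    z = z + dt * (A_r @ z + B_r @ U[k])
rom_err = np.linalg.norm(z - Z[-1]) / np.linalg.norm(Z[-1])
print(f"relative ROM rollout error: {rom_err:.2e}")
```

Because the toy dynamics are exactly low-dimensional, the fitted ROM reproduces the latent trajectory to machine precision; for the chaotic KSE the autoencoder and neural ODE play the same roles but must capture nonlinear structure.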