Antibody disulfide bond reduction during monoclonal antibody (mAb) production is a phenomenon that has been attributed to reducing enzymes from CHO cells acting on the mAb during the harvest process. However, the impact of antibody reduction on the downstream purification process has not been studied. During the production of an IgG2 mAb, antibody reduction was observed in the harvested cell culture fluid (HCCF), resulting in high fragment levels. In addition, aggregate levels increased during the low pH treatment step in the purification process. A correlation between the level of free thiol in the HCCF (a result of antibody reduction) and aggregation during the low pH step was established, wherein higher levels of free thiol in the starting sample resulted in increased levels of aggregates during low pH treatment. The elevated free thiol levels did not decrease over the course of purification, resulting in carry-over of high free thiol content into the formulated drug substance. When the drug substance with high free thiols was monitored for product degradation at room temperature and 2–8°C, faster rates of aggregation were observed compared to the drug substance generated from HCCF that was purified immediately after harvest. Further, when antibody reduction mitigations (e.g., chilling, aeration, and addition of cystine) were applied, HCCF could be held for an extended period of time while providing the same product quality/stability as material that had been purified immediately after harvest. Biotechnol. Bioeng. 2017;114: 1264–1274. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals Inc.
Goal: To develop and validate a field-based data collection and assessment method for human activity recognition in the mountains, with variations in terrain and fatigue, using a single accelerometer and a deep learning model. Methods: The protocol generated an unsupervised labelled dataset of various long-term field-based activities, including run, walk, stand, lay, and obstacle climb. Activity was voluntary, so transitions could not be determined a priori. Terrain variations included slope, river crossings, obstacles, and surfaces including road, gravel, clay, mud, long grass, and rough track. Fatigue levels were modulated from rested to physical exhaustion. The dataset was used to train a deep learning convolutional neural network (CNN) capable of being deployed on battery-powered devices. The human activity recognition results were compared to a lab-based dataset with 1,098,204 samples and six features, uniform smooth surfaces, non-fatigued supervised participants, and activity labelling defined by the protocol. Results: The trail run dataset had 3,829,759 samples with five features. The mix of repetitive activities and single-instance activities required hyperparameter tuning to reach an overall accuracy of 0.978, with a minimum class precision for the one-off activity (climbing a gate) of 0.802. Conclusion: The experimental results showed that the CNN deep learning model performed well under terrain and fatigue variations compared to the lab equivalents (accuracy 97.8% vs. 97.7% for trail vs. lab). Significance: To the authors' knowledge, this study demonstrated the first successful human activity recognition (HAR) in a mountain environment. A robust and repeatable protocol was developed to generate a validated trail running dataset with no observers present and activity types changing on a voluntary basis across variations in terrain surface and both cognitive and physical fatigue levels.
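A common preprocessing step for accelerometer-based CNN classifiers like the one described is to cut the continuous sample stream into fixed-length, overlapping windows, each labelled by majority vote, before training. The sketch below shows only that windowing idea; the window length of 128 samples, the 50% overlap, and the `segment` helper are illustrative assumptions, not values or code from the study.

```python
# Sketch of the windowing step that typically precedes a 1D-CNN HAR
# classifier: fixed-length, overlapping windows cut from a continuous
# accelerometer stream, each labelled by majority vote over its samples.
# Window length and 50% overlap are illustrative, not the study's values.
from collections import Counter

def segment(samples, labels, window=128, overlap=0.5):
    """Return a list of (window_of_samples, majority_label) pairs."""
    step = max(1, int(window * (1 - overlap)))
    out = []
    for start in range(0, len(samples) - window + 1, step):
        chunk = samples[start:start + window]
        chunk_labels = labels[start:start + window]
        majority = Counter(chunk_labels).most_common(1)[0][0]
        out.append((chunk, majority))
    return out

# Tiny demo stream: 300 tri-axial samples spanning two activities.
stream = [(0.0, 0.0, 1.0)] * 150 + [(0.5, 0.1, 0.9)] * 150
acts = ["walk"] * 150 + ["run"] * 150
windows = segment(stream, acts)
print(len(windows), windows[0][1], windows[-1][1])
```

Each windowed chunk would then be fed to the CNN as one training example, with the majority label as its target.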
As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated, perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed‐batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1647–1661, 2017
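As a toy illustration of the system-identification step described above, the sketch below fits a first-order discrete model y[t+1] = a·y[t] + b·u[t] to synthetic step-change data by least squares. The model form, the coefficients, and the `fit_first_order` helper are all assumptions for illustration; the study's actual model structure and data are not reproduced here.

```python
# Sketch of system identification from step-change experiments: fit a
# first-order model y[t+1] = a*y[t] + b*u[t] by least squares (closed-form
# 2x2 normal equations). Model form and coefficients are illustrative.

def fit_first_order(y, u):
    """Solve the 2x2 normal equations for (a, b)."""
    syy = sum(yi * yi for yi in y[:-1])
    suu = sum(ui * ui for ui in u[:-1])
    syu = sum(yi * ui for yi, ui in zip(y[:-1], u[:-1]))
    sy1y = sum(y1 * yi for y1, yi in zip(y[1:], y[:-1]))
    sy1u = sum(y1 * ui for y1, ui in zip(y[1:], u[:-1]))
    det = syy * suu - syu * syu
    a = (sy1y * suu - sy1u * syu) / det
    b = (sy1u * syy - sy1y * syu) / det
    return a, b

# Synthetic step-change experiment with known dynamics (a=0.9, b=0.2):
# the input (e.g., a feed concentration) steps up at t=10.
u = [0.0] * 10 + [1.0] * 30
y = [0.0]
for t in range(len(u) - 1):
    y.append(0.9 * y[-1] + 0.2 * u[t])

a_hat, b_hat = fit_first_order(y, u)
print(round(a_hat, 3), round(b_hat, 3))
```

With noiseless synthetic data the fit recovers the generating coefficients; the identified model is then what a model predictive controller would roll forward to choose inputs.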
Electronic medical records (EMRs) help in identifying disease archetypes and progression. A very important part of EMRs is the presence of time-domain data, because these help with identifying trends and monitoring changes over time. Most time-series data come from wearable devices monitoring real-time health trends. This review focuses on the time-series data needed to construct complete EMRs by identifying paradigms that fall within the scope of the application of artificial intelligence (AI) based on the principles of translational medicine. (1) Background: The question addressed in this study is: what taxonomies are present in the field of the application of machine learning to EMRs? (2) Methods: Scopus, Web of Science, and PubMed were searched for relevant records, which were then filtered using a PRISMA review process. The taxonomies were identified after reviewing the selected documents. (3) Results: A total of five main topics were identified, and their subheadings are discussed in this review. (4) Conclusions: Each aspect of the medical data pipeline needs constant collaboration and updating for the proposed solutions to be useful and adaptable in real-world scenarios.
Aim: To determine whether an AI model and a single sensor measuring acceleration and ECG could model cognitive and physical fatigue during a self-paced trail run. Methods: A field-based protocol of continuous fatigue, repeated hourly, induced physical (~45 min) and cognitive (~10 min) fatigue in one healthy participant. The physical load was a 3.8 km trail run with 200 m of vertical gain, with acceleration and electrocardiogram (ECG) data collected using a single sensor. The cognitive load was a Multi-Attribute Test Battery (MATB), and a separate assessment battery included the Finger Tap Test (FTT), Stroop, Trail Making A and B, Spatial Memory, the Paced Visual Serial Addition Test (PVSAT), and a vertical jump. A fatigue prediction model was implemented using a convolutional neural network (CNN). Results: When the fatigue test battery results were compared for sensitivity to the protocol load, FTT right hand (R² 0.71) and jump height (R² 0.78) were the most sensitive, while the other tests were less sensitive (R² values: Stroop 0.49, Trail Making A 0.29, Trail Making B 0.05, PVSAT 0.03, spatial memory 0.003). The best prediction results were achieved with a rolling average of 200 predictions (102.4 s) during set activity types: mean absolute error for 'walk up' (MAE200 12.5%) and range of absolute error for 'run down' (RAE200 16.7%). Conclusion: We were able to measure cognitive and physical fatigue using a single wearable sensor during a practical field protocol, incorporating contextual factors in conjunction with a neural network model. This research has practical application to fatigue research in the field.
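The rolling average over a stream of per-window predictions can be sketched as follows. The study averages 200 predictions (≈102.4 s); the smaller window size and the synthetic, noisy prediction stream below are illustrative assumptions only.

```python
# Sketch of rolling-average smoothing over a stream of per-window
# fatigue predictions (in %). The window size and the synthetic
# prediction stream are illustrative, not the study's data.
import random
from collections import deque

def rolling_mean(preds, n=200):
    """Running mean over the last n predictions (fewer at the start)."""
    buf, out = deque(maxlen=n), []
    for p in preds:
        buf.append(p)
        out.append(sum(buf) / len(buf))
    return out

# Noisy synthetic fatigue estimates drifting from ~0% toward ~100%.
random.seed(0)
raw = [min(100.0, i / 4 + random.uniform(-20, 20)) for i in range(400)]
smooth = rolling_mean(raw, n=50)
print(round(smooth[0], 1), round(smooth[-1], 1))
```

Averaging trades responsiveness for stability: the smoothed trace lags sudden changes by up to the window length but suppresses the per-window noise that would otherwise dominate the error metrics.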
Biomanufacturing exhibits inherent variability that can lead to variation in performance attributes and batch failure. To help ensure process consistency and product quality, the development of predictive models and integrated control strategies is a promising approach. In this study, a feedback controller was developed to limit excessive lactate production, a widespread metabolic phenomenon that is negatively associated with culture performance and product quality. The controller was developed by applying machine learning strategies to historical process development data, resulting in a forecast model that could identify whether a run would result in lactate consumption or accumulation. In addition, this exercise identified a correlation between increased amino acid consumption and low observed lactate production, leading to the mechanistic hypothesis that there is a deficiency in the link between glycolysis and the tricarboxylic acid cycle. Using the correlative process parameters to build mechanistic insight and applying this to predictive models of lactate concentration, a dynamic model predictive controller (MPC) for lactate was designed. This MPC was implemented experimentally on a process known to exhibit high lactate accumulation and successfully drove the cell cultures towards a lactate-consuming state. In addition, an increase in specific titer productivity was observed compared with non-MPC-controlled reactors.
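The closed-loop pattern described above can be sketched in miniature. Everything below is an illustrative assumption: the one-state linear lactate model, its coefficients, the prediction horizon, and the candidate feed rates stand in for the study's data-driven model, and only demonstrate the MPC idea of picking the input whose predicted trajectory lands nearest a set point.

```python
# Minimal MPC sketch: a hypothetical one-state linear lactate model
# (lactate[t+1] = lactate[t] + A*feed[t] + B). Coefficients, horizon,
# and candidate feed rates are illustrative, not values from the study.

A, B = -0.8, 0.5                           # assumed feed effect and drift
SETPOINT = 1.0                             # target lactate (illustrative)
CANDIDATES = [i / 10 for i in range(11)]   # feed rates 0.0 .. 1.0

def predict(lactate, feed, horizon=3):
    """Roll the linear model forward, holding the feed rate constant."""
    for _ in range(horizon):
        lactate = lactate + A * feed + B
    return lactate

def mpc_step(lactate):
    """Pick the feed whose predicted trajectory ends nearest the set point."""
    return min(CANDIDATES, key=lambda f: (predict(lactate, f) - SETPOINT) ** 2)

# Simulated closed loop starting from high lactate accumulation.
lac = 4.0
for _ in range(10):
    lac = lac + A * mpc_step(lac) + B
print(round(lac, 2))
```

At each step the controller re-optimizes over the candidate inputs and applies only the first move, which is the receding-horizon behavior that lets MPC absorb disturbances the model did not anticipate.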
As NASA prepares for crewed lunar missions over the next several years, plans are also underway to journey farther into deep space. Deep space exploration will require a paradigm shift in astronaut medical support toward progressively Earth-independent medical operations (EIMO). The Exploration Medical Capability (ExMC) element of NASA’s Human Research Program (HRP) is investigating the feasibility and value of advanced capabilities to promote and enhance EIMO. Currently, astronauts rely on real-time communication with ground-based medical providers. However, as the distance from Earth increases, so do communication delays and disruptions. Moreover, resupply and evacuation will become increasingly complex, if not impossible, on deep space missions. In contrast to today’s missions in low Earth orbit (LEO), where most medical expertise and decision-making are ground-based, an exploration crew will need to autonomously detect, diagnose, treat, and prevent medical events. Due to the sheer amount of pre-mission training required to execute a human spaceflight mission, there is often little time to devote exclusively to medical training. One potential solution is to augment the long-duration exploration crew’s knowledge, skills, and abilities with a clinical decision support system (CDSS). An analysis of preliminary data indicates the potential benefits of a CDSS to mission outcomes when augmenting cognitive and procedural performance of an autonomous crew performing medical operations, and we provide an illustrative scenario of how such a CDSS might function.