Background: Virus-associated cell membrane proteins acquired by HIV-1 during budding may provide information on the cellular source of circulating virions. In the present study, by applying immunosorting of the virus and of the cells with antibodies targeting monocyte (CD36) and lymphocyte (CD26) markers, it was possible to directly compare HIV-1 quasispecies archived in circulating monocytes and T lymphocytes with those present in plasma virions originating from the same cell types. Five chronically HIV-1-infected patients who underwent therapy interruption after prolonged HAART were enrolled in the study. The analysis was performed by the powerful technology of ultra-deep pyrosequencing after PCR amplification of part of the env gene, coding for the viral glycoprotein (gp) 120, encompassing the tropism-related V3 loop region. V3 amino acid sequences were used to establish heterogeneity parameters, to build phylogenetic trees and to predict co-receptor usage.
Background: Optimal adherence to antiretroviral therapy is critical to prevent an epidemic of HIV drug resistance (HIVDR). The objective of the study was to identify the best-performing adherence assessment method for predicting virological failure in resource-limited settings (RLS).

Methods: This single-centre prospective cohort study enrolled 220 HIV-infected adult patients attending an HIV/AIDS Care and Treatment Centre in Dar es Salaam, Tanzania, in 2010. Adherence was measured by pharmacy refill, self-report (a visual analogue scale [VAS] and the Swiss HIV Cohort Study adherence questionnaire), pill count, and appointment keeping. Univariate logistic regression (LR) was used to identify, for each method, the cut-off giving the best trade-off between sensitivity and specificity and the highest area under the receiver operating characteristic curve (AUC) for predicting virological failure. Additionally, the adherence models were evaluated by fitting multivariate LR with stepwise functions, decision trees, and random forest models, assessed by 10-fold multiple cross-validation (MCV). Patient factors associated with virological failure were determined using LR.

Results: Viral load measurements at baseline and one year after recruitment were available for 162 patients, of whom 55 (34%) had a detectable viral load and 17 (10.5%) had immunological failure at one year after recruitment. The optimal cut-off points significantly predictive of virological failure were 95%, 80%, 95% and 90% for VAS, appointment keeping, pharmacy refill, and pill count adherence, respectively. The AUCs for these methods ranged from 0.52 to 0.61, with pharmacy refill giving the best performance (AUC 0.61). Multivariate LR with boost stepwise MCV had a higher AUC (0.64) than all univariate adherence models, except the univariate pharmacy refill model, which was comparable to the multivariate model (AUC = 0.64). Decision tree and random forest models were inferior to the boost stepwise model. Pharmacy refill adherence (<95%) emerged as the best method for predicting virological failure. Other significant predictors in multivariate LR were a baseline CD4+ T-lymphocyte count < 200 cells/μl, being unable to recall the diagnosis date, and higher weight.

Conclusion: Pharmacy refill adherence has the potential to predict virological failure and to identify patients to be considered for viral load monitoring and HIVDR testing in RLS.

Electronic supplementary material: The online version of this article (doi:10.1186/1471-2458-14-1035) contains supplementary material, which is available to authorized users.
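The cut-off search described above can be sketched in a few lines of plain Python: for each candidate adherence cut-off, adherence below the cut-off is classified as predicted failure, and the resulting sensitivity/specificity trade-off is scored, here via the Youden index (J = sensitivity + specificity − 1). This is a minimal sketch under stated assumptions: the records and the Youden criterion are invented for illustration, not the study's actual dataset or selection rule.

```python
# Hypothetical sketch: choosing an adherence cut-off that balances
# sensitivity and specificity for predicting virological failure.
# The (adherence %, failed?) records below are invented example data.

def confusion_at_cutoff(records, cutoff):
    """Treat adherence below the cut-off (%) as predicted failure."""
    tp = sum(1 for adh, fail in records if adh < cutoff and fail)
    fn = sum(1 for adh, fail in records if adh >= cutoff and fail)
    fp = sum(1 for adh, fail in records if adh < cutoff and not fail)
    tn = sum(1 for adh, fail in records if adh >= cutoff and not fail)
    return tp, fn, fp, tn

def youden_index(records, cutoff):
    """J = sensitivity + specificity - 1; higher is a better trade-off."""
    tp, fn, fp, tn = confusion_at_cutoff(records, cutoff)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity + specificity - 1.0

# (adherence %, virological failure?) — invented, not study data
records = [(98, False), (97, False), (92, True), (96, False),
           (85, True), (99, False), (88, True), (94, False),
           (75, True), (91, False), (90, True), (100, False)]

best = max([80, 85, 90, 95], key=lambda c: youden_index(records, c))
print(best)  # prints 95 on this toy data
```

On this toy data the <95% cut-off wins because it captures every failure at a modest cost in specificity, mirroring the kind of trade-off the univariate models above were tuned for.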
We evaluated factors associated with normalization of the absolute CD4+ T-cell count, the percentage of CD4+ T cells and the CD4+/CD8+ T-cell ratio. A multicentre observational study was carried out in patients with sustained HIV RNA <50 copies/mL. Outcomes were: a CD4+ T-cell count >500 cells/mm³ and multiple T-cell marker recovery (MTMR), defined as CD4+ T cells >500 cells/mm³ plus %CD4+ T cells >29% plus a CD4+/CD8+ T-cell ratio >1. Kaplan-Meier survival analysis and Cox regression were performed to predict the likelihood of achieving these outcomes. Three hundred and fifty-two patients were included and followed up for a median of 4.1 (IQR 2.1-5.9) years; 270 (76.7%) achieved a CD4+ T-cell count >500 cells/mm³ and 197 (56%) achieved MTMR. Using three separate Cox models for both outcomes, we demonstrated that independent predictors were: both absolute CD4+ and CD8+ T-cell counts, %CD4+ T cells, a higher CD4+/CD8+ T-cell ratio, and age. A likelihood-ratio test showed significant improvements in fit for the prediction of either a CD4+ count >500 cells/mm³ or MTMR when the other baseline immune markers, besides the absolute CD4+ count alone, were included in the multivariable analysis. In addition to the baseline absolute CD4+ T-cell count, the pretreatment %CD4+ T cells and CD4+/CD8+ T-cell ratio influence recovery of T-cell markers, and their consideration should inform the decision to start antiretroviral therapy. However, owing to the small sample size, further studies are needed to confirm these results in relation to clinical endpoints.
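As an illustration of the time-to-event analysis described above, here is a minimal, stdlib-only sketch of the Kaplan-Meier product-limit estimator. The follow-up times and event indicators are invented; a real analysis would use a dedicated survival package and would also fit the Cox models.

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative only).
# An "event" here would be reaching the outcome, e.g. a CD4+ count
# >500 cells/mm^3; False means the patient was censored.

def kaplan_meier(times, events):
    """Return [(time, survival)] pairs at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e)  # events at t
        n = sum(1 for tt, e in data if tt == t)        # all leaving at t
        if d:
            survival *= 1 - d / at_risk
            curve.append((t, survival))
        at_risk -= n
        i += n
    return curve

# Invented follow-up times (years) and event indicators
times = [0.5, 1.0, 1.0, 2.0, 3.0, 4.0, 5.0]
events = [True, True, False, True, False, True, False]
print(kaplan_meier(times, events))
```

Note that censored observations (the `False` entries) shrink the risk set without stepping the curve down, which is the defining feature of the product-limit estimator.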
HLA-B*5701 is the host factor most strongly associated with slow HIV-1 disease progression, although rates can vary within this group. The underlying mechanisms are not fully understood but likely involve both immunological and virological dynamics. The present study investigated HIV-1 in vivo evolution and epitope-specific CD8+ T-cell responses in six HLA-B*5701 patients who had not received antiretroviral treatment, monitored from early infection for up to 7 years. The subjects were classified as high-risk progressors (HRPs) or low-risk progressors (LRPs) based on baseline CD4+ T-cell counts. The dynamics of HIV-1 Gag p24 evolution and of multifunctional CD8+ T-cell responses were evaluated by high-resolution phylogenetic analysis and polychromatic flow cytometry, respectively. In all subjects, substitutions occurred more frequently in flanking regions than in HLA-B*5701-restricted epitopes. In LRPs, p24 sequence diversity was significantly lower; sequences exhibited a higher degree of homoplasy and more constrained mutational patterns than those of HRPs. The HIV-1 intrahost evolutionary rate was also lower in LRPs and followed a strict molecular clock, suggesting neutral genetic drift rather than positive selection. Additionally, polyfunctional CD8+ T-cell responses, particularly to the TW10 and QW9 epitopes, were more robust in LRPs, who also showed significantly higher interleukin-2 (IL-2) production in early infection. Overall, the findings indicate that HLA-B*5701 patients with higher CD4 counts at baseline have a lower risk of HIV-1 disease progression because of the interplay between specific HLA-linked immune responses and the rate and mode of viral evolution. The study highlights the power of a multidisciplinary approach, integrating high-resolution evolutionary and immunological data, to understand the mechanisms underlying HIV-1 pathogenesis.
Viral/arboviral infections were characterized by a pattern of recurrent outbreaks and case clusters, with CHIKV representing just one of several arboviral agents moving through the population. Although the clinical presentations of these agents are similar, arthralgias are highly suggestive of CHIKV infection.
Background: To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) "black-box" approaches, for the automated diagnosis of age-related macular degeneration (AMD).

Methods: Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, the presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macular thickness, macular scar, subretinal fibrosis). Interpretable "white-box" techniques, including logistic regression and decision trees, as well as less interpretable "black-box" techniques, such as support vector machines (SVMs), random forests and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was a confirmed diagnosis of AMD by physicians. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were used to assess performance.

Results: The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression and AdaBoost showed a mean performance of 0.92, followed by SVMs and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD.

Conclusions: Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into the EHR to provide physicians with real-time (interpretable) decision support.

Electronic supplementary material: The online version of this article (doi:10.1186/1471-2415-15-10) contains supplementary material, which is available to authorized users.
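To illustrate the white-box end of the spectrum compared above, here is a one-level decision tree (a "stump") that picks the single binary clinical sign best predicting AMD. The feature names echo the abstract, but the records and the accuracy-based selection rule are invented assumptions for illustration, not the study's models or data.

```python
# Hypothetical white-box sketch: a decision stump that predicts AMD
# from the presence of a single clinical sign. Each record is
# ({sign: present?}, has_AMD). All data below are invented.

def train_stump(records, features):
    """Pick the binary feature whose presence best matches AMD status."""
    def accuracy(feat):
        correct = sum(1 for x, y in records if x[feat] == y)
        return correct / len(records)
    return max(features, key=accuracy)

records = [
    ({"soft_drusen": True,  "subretinal_fluid": False}, True),
    ({"soft_drusen": True,  "subretinal_fluid": True},  True),
    ({"soft_drusen": False, "subretinal_fluid": False}, False),
    ({"soft_drusen": False, "subretinal_fluid": True},  False),
    ({"soft_drusen": True,  "subretinal_fluid": False}, True),
    ({"soft_drusen": False, "subretinal_fluid": True},  True),
]
best_sign = train_stump(records, ["soft_drusen", "subretinal_fluid"])
print(best_sign)  # prints soft_drusen on this toy data
```

The appeal of such a model is exactly what the abstract highlights: the learned rule ("predict AMD when soft drusen are present") is directly readable by a clinician, unlike the internals of a random forest or SVM.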