Global investment in biomedical research has grown significantly over recent decades, reaching approximately a quarter of a trillion US dollars in 2010. However, not all of this investment is distributed evenly by gender. It follows, arguably, that scarce research resources may not be optimally invested (either by not supporting the best science or by failing to investigate topics that benefit women and men equitably). Women across the world tend to be significantly underrepresented in research, both as researchers and as research participants; they receive less research funding and appear less frequently than men as authors on research publications. There is also some evidence that women are relatively disadvantaged as beneficiaries of research, in terms of its health, societal and economic impacts. Historical gender biases may have created a path dependency that skews the research system and the impacts of research towards male researchers and male beneficiaries, making gender bias inherently difficult (though not impossible) to eliminate. In this commentary, we – a group of scholars and practitioners from Africa, America, Asia and Europe – argue that gender-sensitive research impact assessment could become a force for good in moving science policy and practice towards gender equity. Research impact assessment is the multidisciplinary field of scientific inquiry that examines the research process in order to maximise the scientific, societal and economic returns on investment in research. It encompasses many theoretical and methodological approaches that can be used to investigate gender bias and to recommend actions for change that maximise research impact. We offer a set of recommendations to research funders, research institutions and research evaluators who conduct impact assessment on how to include and strengthen analysis of gender equity in research impact assessment, and we issue a global call for action.
As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming under increasing scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines to guide research impact assessment are not yet available. In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment, applicable to all research disciplines and oriented towards practice. This statement systematises expert knowledge and practitioner experience from designing and delivering the International School on Research Impact Assessment (ISRIA). It brings together insights from over 450 experts and practitioners from 34 countries who participated in the school during its 5-year run (from 2013 to 2017), and shares a set of core values from the school’s learning programme. These insights are distilled into ten-point guidelines, which relate to (1) context, (2) purpose, (3) stakeholders’ needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice. The guidelines can help practitioners improve and standardise the process of research impact assessment, but they are by no means exhaustive and require evaluation and continuous improvement.
The prima facie effectiveness of the guidelines is based on the systematised expert and practitioner knowledge of the school’s faculty and participants, derived from their practical experience and research evidence. The current knowledge base has gaps in geographical and scientific-discipline coverage, as well as in stakeholder coverage and representation. The guidelines can be further strengthened through evaluation and continuous improvement by the global research impact assessment community.
In the context of avoiding research waste, conducting a feasibility study before a clinical trial should reduce the risk that further resources will be committed to a trial that is likely to ‘fail’. However, there is little evidence indicating whether feasibility studies add to or reduce waste in research. Feasibility studies funded by the National Institute for Health Research’s (NIHR) Research for Patient Benefit (RfPB) programme were examined to determine how many had published their findings, how many had applied for further funding for a full trial, and the timeframe in which both of these occurred. A total of 120 feasibility studies that had closed by May 2016 were identified, and each Principal Investigator (PI) was sent a questionnaire; 89 responses were received and deemed suitable for analysis. Based on the PIs’ self-reported answers, 57 feasibility studies were judged feasible, 20 were judged not feasible, and for 12 it was uncertain whether a full trial was feasible. The RfPB programme had spent approximately £19.5m on the 89 feasibility studies, from which 16 further studies had subsequently been funded at a total cost of £16.8m. The 20 feasibility studies judged not feasible potentially saved up to approximately £20m of further research funding on trials that would likely not have completed successfully. The average RfPB feasibility study took 31 months (range 18 to 48) to complete and cost £219,048 (range £72,031 to £326,830), and the average full trial funded from an RfPB feasibility study took 42 months (range 26 to 55) to complete and cost £1,163,996 (range £321,403 to £2,099,813).
The average combined timeframe of feasibility study and full trial was 72 months (range 56 to 91); in addition, an average of 10 months (range -7 to 29) elapsed between the end of the feasibility study and the application for the full trial, and a further average of 18 months (range 13 to 28) between the application for the full trial and the start of the full trial. Approximately 58% of the 89 feasibility studies had published their findings, with the majority of the remaining studies still planning to publish. Given the long timeframes involved, a number of studies were still in the process of publishing their feasibility findings and/or applying for a full trial. Feasibility studies are potentially useful for avoiding waste and de-risking funding investments in more expensive full trials; however, there is a clear time delay, and therefore some potential waste, in the existing research pathway.
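The cost and timeline figures above can be combined into two back-of-the-envelope calculations. The sketch below uses only the averages reported in the abstract (not the underlying study data), so the results are illustrative approximations: multiplying the 20 non-feasible studies by the average full-trial cost gives an upper bound of the same order as the up-to-£20m saving reported, and summing the average stage durations shows the end-to-end pathway from feasibility start to full-trial completion approaches a decade.

```python
# Illustrative arithmetic using the averages reported in the abstract.
# These are approximations from the text, not the study's own calculations.

avg_full_trial_cost = 1_163_996  # average cost of a full trial (GBP)
not_feasible = 20                # feasibility studies judged not feasible

# Upper-bound estimate of further funding avoided by stopping at the
# feasibility stage; of the same order as the up-to-£20m figure reported.
potential_saving = not_feasible * avg_full_trial_cost
print(f"Potential saving: ~£{potential_saving / 1e6:.1f}m")

# Average elapsed time along the whole pathway, summing the reported
# stage averages (each average is over a slightly different subset of
# studies, so the sum differs a little from the combined 72-month figure).
feasibility_months = 31    # average feasibility study duration
gap_to_application = 10    # average gap: feasibility end -> full-trial application
application_to_start = 18  # average gap: application -> full-trial start
full_trial_months = 42     # average full trial duration

total_months = (feasibility_months + gap_to_application
                + application_to_start + full_trial_months)
print(f"Average end-to-end pathway: ~{total_months} months")
```

This makes the abstract's closing point concrete: even when a feasibility study "succeeds", roughly 28 months of the ~100-month pathway is spent on the feasibility study and the two funding gaps before the full trial even begins.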
Objectives: This pilot study investigated general practitioner (GP) perceptions and experiences in the referral of mentally ill and behaviourally disturbed children and adolescents.
Design: Quantitative analyses of patient databases were used to ascertain the sources of referrals into Child and Adolescent Mental Health Services (CAMHS) and to identify the relative contribution from GP practices. Qualitative semistructured interviews were then used to explore the challenges GPs face in referring to CAMHS.
Setting: GPs were chosen from the five localities that deliver CAMHS within the local Trust (Peterborough City, Fenland, Huntingdon, Cambridge City and South Cambridgeshire).
Participants: For the quantitative portion, data on 19 466 separate referrals were used. Seven GPs took part in the qualitative interviews.
Results: The likelihood of a GP referral being rejected by CAMHS was more than three times higher than that of all other referral sources combined within the Cambridgeshire and Peterborough NHS Foundation Trust. Interviews showed that detecting the signs and symptoms of mental illness in young people is a challenge for GPs. Communication with referral agencies varies and depends on individual relationships. GPs decide whether to refer based on a mixture of the presenting condition and the perceived likelihood of acceptance by CAMHS; the criteria for the latter were poorly understood by the interviewed GPs.
Conclusions: There are longstanding structural weaknesses in services for children and young people in general, reflected in poor multiagency cooperation at the primary care level. GP-friendly guidelines and standards are required to aid decision-making and to help GPs understand the referrals process.
We look to managers of both commissioning and providing organisations, as well as to future research, to drive forward the development of tools, protocols and health service structures that aid the recognition and treatment of mental illness in young people.
The CLAHRCs pursued a strategy that can be categorized as one of flexible comprehensiveness; i.e. their programmes have been flexible and responsive and they have used a range of approaches that seek to match the diverse aspects of the complex issues they face. Key features include their work on combining a range of knowledge transfer and exchange strategies, their efforts to promote cultural change, and the freedom to experiment, learn and adapt. Although the CLAHRCs do not, by themselves, have the remit or resources to bring about wholesale service improvement in health care, they do have features that would allow them to play a key role in some of the wider initiatives that encourage innovation.
Background: In 2008, the National Institute for Health Research (NIHR) in England established nine Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) to develop partnerships between universities and local NHS organisations focused on improving patient outcomes through the conduct and application of applied health research.
Objectives: The study explored how effectively the CLAHRCs supported the ‘translation’ of research into patient benefit, and developed ways of doing applied research that maximised its chances of being useful to the service and the capacity of the NHS to respond. It focused on three issues: (1) how the NHS influenced the CLAHRCs, and vice versa; (2) how effective multistakeholder and multidisciplinary research and implementation teams were built in the CLAHRCs; (3) how the CLAHRCs supported the use of research knowledge to change commissioning and clinical behaviour for patient benefit.
Methods: The study adopted an adaptive and emergent approach and incorporated a formative evaluation. An initial phase mapped the landscape of all nine CLAHRCs and the context within which they were established, using document analysis, workshops and interviews, and a literature review. This mapping exercise identified the three research questions that were explored in phase 2 through a stakeholder survey of six CLAHRCs, in-depth case studies of two CLAHRCs, validation interviews with all nine CLAHRCs and the NIHR, and document review.
Results: (1) The local remit and the requirement for matched NHS funding enhanced NHS influence on the CLAHRCs. The CLAHRCs achieved positive change among those most directly involved, but the larger issue of whether or not the CLAHRCs can influence others in and across the NHS remains unresolved. (2) The CLAHRCs succeeded in engaging different stakeholder groups, and explored what encouraged specific groups to become involved. Being responsive to people’s concerns and demonstrating ‘quick wins’ were both important.
(3) There was some evidence that academics were becoming more interested in needs-driven research, and that commissioners were seeing the CLAHRCs as a useful source of support. A growing number of completed projects had demonstrated an impact on clinical practice.
Conclusions: The CLAHRCs have included NHS decision-makers in research and researchers in service decision-making, and encouraged research-informed practice. All the CLAHRCs (as collaborations) adopted relationship models. However, as the complexities of the challenges they faced became clearer, it became obvious that a focus on multidisciplinary relationships was necessary, but not sufficient on its own. Attention also has to be paid to the systems within and through which these relationships operate.
Recommendations for research: Future research should compare areas with both an Academic Health Science Network (AHSN) and a CLAHRC against areas with just an AHSN, to understand the difference CLAHRCs make. There should be work on understanding implementation, such as the balancing of rigour and relevance in intervention studies; systemic barriers to and facilitators of implementation; and the tailoring of improvement interventions. There is also a need to better understand the factors that support the explicit use of research evidence across the NHS, and the processes and mechanisms that support the sustainability and scale-up of implementation projects. Research should place emphasis on examining the role of patient and public involvement in CLAHRCs and the relationship between CLAHRCs and NHS commissioners.
Funding: The NIHR Health Services and Delivery Research programme.
The potential benefits of travelling across national borders to obtain medical treatment include improved care, decreased costs and reduced waiting times. However, medical travel involves additional risks compared with obtaining treatment domestically. We review the publicly available evidence on medical travel. We suggest that medical travel needs to be understood in terms of its potential risks and benefits so that patients seeking care can evaluate it against alternatives. We propose three domains (quality standards; informed decision-making; and economic and legal protection) in which better evidence could support the development of medical travel policies.
While robust evidence is one ingredient in the policymaking process, it is by no means the only one. Engaging with policymakers and the policymaking process requires collaborative working models, navigating the experiences, values and perspectives of policymakers and other stakeholders, and communicating evidence in an accessible manner. In response to these requirements, over recent years there has been a proliferation of activities that engage producers of evidence (specifically, academics), policymakers, practitioners and the public in policy formulation, implementation and evaluation. In this article, we describe one engagement approach for facilitating the uptake of research evidence into policy and practice, an activity called a ‘Policy Lab’, as conducted by the team at The Policy Institute at King’s College London on numerous policy challenges over the past four years. Drawing on our experience of running 15 Policy Labs between January 2015 and September 2019, we (a) provide a guide to how we have run Policy Labs, sharing our learning on what has worked best in conducting them, and (b) demonstrate how these Labs can contribute to bringing evidence closer to policymaking, by comparing their characteristics with enablers identified in the literature. While this approach to Policy Labs is not the only one of its kind, we suggest that these types of Labs manifest characteristics identified in previous studies as influencing the policymaking process; namely: providing a forum for open, honest conversations around a policy topic; creating new networks, collaborations and partnerships between academics and policymakers; synthesising available evidence on a policy topic in a robust and accessible format; and providing timely access to evidence relevant to a policy issue.
We recognise the limitations of measuring and evaluating how these Labs change policy in the long-term and recommend viewing the Policy Lab as part of a process for engaging evidence and policymaking and not an isolated activity. This process serves to build a coalition through participation of diverse communities (thereby establishing 'trust'), work on the language and presentation of evidence (thereby enabling effective 'translation' of evidence) and engage policymakers early to respond when policy windows emerge (thereby taking into account 'timing' for creating policy action).