Interventions aren’t described
Few published intervention evaluations refer to formal documentation describing the content and delivery of an intervention, and interventions are seldom reported by researchers or practitioners in enough detail to replicate them [5, 6]. Reviews of nearly 1,000 behaviour change outcome studies [7-10] found that interventions were described in detail in only 5% to 30% of the experimental studies. Even when the intervention was documented (e.g., a detailed manual was available), only a few investigators actually measured the presence or strength of the intervention in practice, and fewer still included such measures in the analyses of the results. Thus, we are often left knowing very little about the details of an intervention or the functional relationship between the components of the intervention and outcomes. Knowing these details and functional relationships is critical to any future introduction and scale-up of effective interventions: this knowledge informs what to teach to new practitioners, how to transform or reorganise healthcare processes, and what to include in the assessment of practitioner performance (fidelity measures), all key features of successful implementation [11, 12].
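To make the idea of a fidelity measure concrete, the sketch below scores a single delivery session against a documented checklist of core components. It is a minimal illustration only: the component names and the observation record are invented, not drawn from any cited study.

```python
# Minimal sketch of a simple fidelity measure: the proportion of an
# intervention's documented core components actually observed in one
# session. Component names and observations are invented for illustration.
core_components = ["agenda setting", "goal review", "barrier discussion",
                   "action planning", "follow-up scheduling"]

observed_in_session = {"agenda setting", "goal review", "action planning"}

fidelity = sum(c in observed_in_session
               for c in core_components) / len(core_components)
print(f"fidelity = {fidelity:.0%}")  # 60% of core components delivered
```

In practice such scores would be aggregated over sessions and practitioners and, ideally, entered into the outcome analyses alongside the intervention indicator.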
Among the studies that do provide a detailed account of the intervention, terminology is used inconsistently, which limits meta-analyses and the accumulation of scientific knowledge. For example, ‘behavioural counselling’, ‘academic detailing’, and ‘outreach’ can mean very different things depending on the group delivering or evaluating the intervention, leaving potential users confused. Having consistent terminology and sufficient information for replication appears to be more problematic for behavioural and organisational interventions than for pharmacological ones. Twenty-six multidisciplinary researchers attending a workshop were presented with a set of behavioural or pharmacological intervention protocols and asked whether they had sufficient information to be able to deliver them in practice settings. They were less confident about being able to replicate behavioural interventions compared with pharmacological interventions (t = 6.45, p < 0.0001) and judged that they would need more information in order to replicate behavioural interventions (U = 35.5, p = 0.022) [13]. A more detailed protocol description of the intervention did not increase confidence, suggesting that, in this situation at least, more information does not, per se, make intervention descriptions easier to interpret and use for replication.
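The two comparisons reported in [13] correspond to standard tests. The following sketch shows how such an analysis might be run; the ratings are randomly generated stand-ins (the original data, scales, and exact test variants are not available to us, so the pairing structure here is an assumption).

```python
# Illustrative sketch only: the ratings below are invented, not the data
# from reference [13], and the test variants used there are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-7 confidence ratings from 26 raters, each rating both
# a behavioural and a pharmacological protocol (paired design assumed).
conf_pharma = rng.integers(4, 8, size=26)
conf_behav = rng.integers(2, 6, size=26)

# Paired t-test: is confidence lower for behavioural interventions?
t_stat, t_p = stats.ttest_rel(conf_pharma, conf_behav)
print(f"paired t = {t_stat:.2f}, p = {t_p:.4f}")

# Mann-Whitney U on 'information needed' ratings
# (independent samples assumed here).
info_behav = rng.integers(4, 8, size=13)
info_pharma = rng.integers(1, 5, size=13)
u_stat, u_p = stats.mannwhitneyu(info_behav, info_pharma)
print(f"U = {u_stat:.1f}, p = {u_p:.4f}")
```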
The lack of attention to providing useful descriptions of behavioural interventions may in part reflect the low investment in this area of research (compared to the investment in pharmacological research); it may also reflect limitations in current scientific practice. Intervention development methods and content are often based on simple, mostly unstated models of human behaviour or, at best, are ‘informed’ by theory using methods that are tenuous and intuitive rather than systematic [14, 15]. This means that each new intervention and each new evaluation occurs in relative isolation, and the opportunity to build an incrementally improving ‘technology’ of behaviour change is constrained. If a more explicitly theoretical approach were taken to deciding how to design and report interventions, more effects might be revealed and more understanding of their functional mechanisms gained. Arguably, better reporting of interventions that are poorly (and implicitly) conceptualised will not improve the situation. Advantages of using explicit rather than implicit theoretical models include providing a consistent and generalisable framework within which to gather evidence; promoting the understanding of causal mechanisms that both enrich theory and facilitate the development of more effective interventions [16]; and suggesting moderating variables that would guide the user in adapting the intervention to different patients or population subgroups [4, 17]. The extent to which these advantages are realised will depend on the development of more sophisticated methods of applying theory to intervention design and evaluation [18].
The advantages of reporting interventions better
To implement interventions that provide benefits to the intended populations, the functional components of interventions must be known and clearly described. For example, in pharmacology the active ingredient of aspirin is very different from the active ingredient of statins, and each is known to affect physiological and pathological outcomes in different ways. To accumulate evidence of outcome effectiveness and of processes of behavioural change, accurate replication of such interventions across multiple studies is required. An analysis of 49 highly cited clinical research studies found that, of the 45 claiming an effective intervention, only 20 (44%) had their findings replicated by subsequent research [19]. Replication requires accurate and detailed reporting of the interventions. Such replication generates scientific knowledge, allows unhelpful or even harmful interventions to be avoided, and provides the detail that allows effective interventions to be subsequently introduced and scaled up to provide population benefits. There is evidence that the more clearly the effective core components of an intervention are known and defined, the more readily the programme or practice can be introduced successfully [20-22]. The core intervention components are, by definition, essential to achieving good outcomes for those targeted by the intervention. This is as true for modes of delivery and intervention settings as it is for intervention content. As a simple example, a core component of Multi-systemic Therapy (MST), Homebuilders, and Nurse-Family Partnership (NFP) interventions is that they are delivered in the homes of children [23-25]. It is not MST, Homebuilders, or NFP unless this fundamental feature is present. However, in a large-scale attempt to replicate Homebuilders across the United States, many of the replication sites delivered services in their offices, not family homes, and, predictably, the outcomes were disappointing [26]. The philosophy and values of Homebuilders were adopted, but the core intervention components were not used. Thus, the specification of effective core intervention components becomes very important to the subsequent introduction of innovations on a scale useful to society and to their evaluation in practice (e.g., [4, 27, 28]).
Knowing the effective core intervention components may allow for more efficient and cost-effective introduction of interventions and lead to confident decisions about the non-core components that can be adapted to suit conditions at each local site. Not knowing the effective core intervention components leads to time and resources wasted in attempting to introduce a variety of non-functional elements. Clear descriptions of core components allow for evaluations of the functions of those procedures. Some specific procedures and sub-components may be difficult and costly to evaluate using randomised group designs (e.g., [29]), but within-person or within-organisation research designs offer an efficient way to experimentally determine the function of individual components of evidence-based practices and programmes [18, 30-34]. For interventions supported by a series of randomised controlled trials (RCTs) that are theoretically and methodologically consistent across studies, Bloom has suggested meta-analytic strategies that exploit naturally-occurring variations across RCTs to discern the components that are effective for different types of participant and setting. Of course, as with any meta-analysis, the results depend on the investigators having ‘guessed right’ about the core components for which measures are included.
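As an illustration of the subgroup logic behind such a meta-analytic strategy, the sketch below pools trial effects separately for trials that did and did not include a candidate core component, using simple inverse-variance (fixed-effect) weighting. The effect sizes, standard errors, and the home_delivery flag are hypothetical and do not come from any of the cited studies.

```python
# Minimal fixed-effect meta-analysis sketch using only numpy. The trial
# effect sizes, standard errors, and the 'home_delivery' component flags
# are hypothetical; they are not drawn from any of the cited studies.
import numpy as np

effect = np.array([0.42, 0.35, 0.10, 0.48, 0.05, 0.38])   # per-trial effects
se     = np.array([0.10, 0.12, 0.11, 0.15, 0.10, 0.13])   # standard errors
home_delivery = np.array([1, 1, 0, 1, 0, 1], dtype=bool)  # component present?

def pooled(eff, se):
    """Inverse-variance weighted pooled effect and its standard error."""
    w = 1.0 / se**2
    est = np.sum(w * eff) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

for label, mask in [("with component", home_delivery),
                    ("without component", ~home_delivery)]:
    est, pooled_se = pooled(effect[mask], se[mask])
    print(f"{label}: {est:.2f} (95% CI {est - 1.96 * pooled_se:.2f} "
          f"to {est + 1.96 * pooled_se:.2f})")
```

A real analysis would also test the difference between the two pooled estimates and consider random-effects weighting, but the core idea, letting naturally-occurring variation in components act as the moderator, is the same.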
Current reporting guidelines
Guidelines for researchers to improve the transparent and accurate reporting of interventions in health research are summarised on the EQUATOR Network website http://www.equator-network.org. They include the well-established CONSORT guidelines for reporting evaluation trials, which advise that evaluators report ‘precise details of interventions [as] actually administered’ [35]. The extension of these guidelines to non-pharmacological trials [36], the TREND Statement for the transparent reporting of evaluations with non-randomised designs [37], and the STROBE Statement for strengthening the reporting of observational studies [38] all call for intervention content to be described, as do the SQUIRE guidelines for quality improvement reporting [39, 40]. However, only recently have groups such as the Workgroup for Intervention Development and Evaluation Research (WIDER) begun to address what to report about intervention content and components, and how to report it. Their current recommendations for improving the reporting of the content of behaviour change interventions are available at http://interventiondesign.co.uk.
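One way to operationalise such recommendations is to capture intervention descriptions in a structured, machine-readable form. The sketch below is hypothetical: the field names are our own and do not follow CONSORT, WIDER, or any published schema; they merely mirror the kinds of detail those guidelines ask for, using the Homebuilders example discussed above.

```python
# Hypothetical sketch of a machine-readable intervention description.
# The field names are our own invention, not a published schema; they
# loosely mirror the kinds of detail CONSORT and the WIDER
# recommendations ask reporters to include.
from dataclasses import dataclass, field

@dataclass
class InterventionDescription:
    name: str
    core_components: list[str]        # functional content judged essential
    mode_of_delivery: str             # e.g., face-to-face, telephone, web
    setting: str                      # e.g., family home, clinic, office
    provider: str                     # who delivers it, with what training
    dose: str                         # frequency, duration, intensity
    fidelity_measures: list[str]      # how delivery-as-intended is checked
    adaptable_elements: list[str] = field(default_factory=list)

# Illustrative values only; not an authoritative Homebuilders protocol.
homebuilders = InterventionDescription(
    name="Homebuilders",
    core_components=["intensive in-home family support"],
    mode_of_delivery="face-to-face",
    setting="family home",            # the core feature lost in replication
    provider="trained family therapist",
    dose="multiple home visits per week over several weeks",
    fidelity_measures=["site visit checklist", "supervision logs"],
)
```

A record like this makes the core components explicit, so that a replication site delivering sessions in its offices would visibly depart from the specification.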
The relationship between post-hoc and ante-hoc description
The reporting guidelines cited above are intended to be used as a post-hoc set of descriptors. However, to maximise the scientific advantages inherent in better description, we argue that an ‘ante-hoc’ process is needed that informs the building of the intervention in the first place. This is consistent with the growing practice among researchers in healthcare implementation studies of describing study and intervention protocols in BMC journals such as Implementation Science; because there is no formal space limit, intervention materials such as leaflets, brochures, websites, and training schedules can easily be included using facilities such as Additional Files.