IHI advocates rapid-cycle testing for spreading positive change throughout an organization

Hospitalist Management Advisor, December 1, 2006

Physicians are no strangers to the terms “best practices” and “practice guidelines.” However, the Institute for Healthcare Improvement (IHI) asserts that few hospitals and practitioners are changing their patient care practices to align them with these initiatives.

The great challenge for hospitalists and other leaders is encouraging fellow physicians to close the gap between best practices and common practices, says Carol Haraden, PhD, vice president at the IHI in Cambridge, MA.

As a result of the widespread hesitation to adopt change, the IHI has developed a new area of focus—the science of spread. The science of spread examines how new practices and ideas are diffused throughout an organization, explains Haraden.

The IHI is encouraging institutions to develop a strategy for promoting the widespread adoption of best practices and practice guidelines that takes into account such factors as the organization’s infrastructure, culture, size, social system, and operational system.

In a recent visit to a hospital that had a high rate of adherence to congestive heart failure guidelines, Haraden says she and her colleagues were disappointed to discover that the hospital’s secret to success was that it hired a nurse to do nothing but ensure that the five steps of those guidelines were followed with all patients.

This strategy is not a sustainable practice for meeting goals. Spreading genuine change throughout an organization should not depend on one person, Haraden says.

Rapid-cycle testing produces local results

Haraden says that when presented with a mountain of evidence from leading researchers, physicians will still be skeptical about whether a new practice will work locally.

“Small tests of change build local support and local evidence,” says Haraden. “There are many who try to spread without first creating local success, and many who stop at the pilot and declare success without spreading.”

An important tool in creating a successful pilot and spreading change throughout an organization is rapid-cycle testing. Rapid-cycle testing allows organizations to test and refine ideas quickly and on a small scale.

Unlike more traditional quality improvement methods that involve collecting a large amount of data over a long period of time, rapid-cycle testing can produce quick feedback about the effectiveness of an intervention and allow for ongoing refinement.

Early positive results can help build momentum for spreading change and produce local data that is harder for physicians to refute.

Suzanne Dalton, RN, BS, EdM, quality improvement specialist for Healthcare Quality Strategies, Inc., a Medicare quality improvement organization in East Brunswick, NJ, says that hospitals can test an intervention on a very small scale, even with one patient.

Hospitals sometimes make the mistake of testing an intervention on a larger scale than is necessary, she says. One patient on the 3 p.m.–11 p.m. shift is enough to test a new discharge instruction form, for example. Next, the form can be tested with two patients, or with all patients to be discharged within the next hour.

This small-scale testing can produce useful information about making changes in forms before hospital committees begin the lengthy process of reviewing and approving changes, Dalton says.

Haraden agrees testing an intervention on one patient is legitimate. “If it doesn’t work on one, why would it work on three or five?” she asks.

The results from one patient can provide valuable information about an intervention, although Haraden notes that physicians, accustomed to heeding results only from large, randomized controlled studies, tend to underestimate the amount of useful information that can be obtained from a small-scale test.

Once the intervention is working well with one patient and one physician, then it can be ramped up to involve three clinicians and their patients, then five clinicians and their patients, and so on, she says.
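The ramp-up Haraden describes can be sketched as a simple loop. This is purely an illustration of the idea, not an IHI tool; the function name, the scale sequence, and the pass/fail test are all assumptions for the sketch:

```python
def ramp_up(test_at_scale, scales=(1, 3, 5)):
    """Run the intervention at each scale in order; stop to refine it
    as soon as a scale fails.

    `test_at_scale` takes the number of clinicians involved and returns
    True if the intervention worked at that scale (a hypothetical stand-in
    for observing real results on real patients)."""
    succeeded = []
    for n in scales:
        if not test_at_scale(n):
            break  # refine the intervention before retrying at this scale
        succeeded.append(n)
    return succeeded

# Example: an intervention that works with 1 and 3 clinicians but breaks
# down at 5 stops the ramp so it can be refined first.
print(ramp_up(lambda n: n < 5))  # [1, 3]
```

The point of the structure is that scale never increases past a failure: each failed cycle triggers refinement rather than wider rollout.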

24-hour turnaround

Haraden recommends 24-hour turnarounds on any refinements of the intervention when doing rapid-cycle testing. For example, if you are testing a new practice and a clinician reports that changes are needed on an order set, try to get the revised order set back for further testing within 24 hours, she says.

By quickly making refinements and ramping up to a few more clinicians and patients in testing the intervention, you can gather a lot of data in a two-week period, Haraden says.

As the intervention moves from one area of the hospital to another, different implementation issues emerge. In one New Jersey hospital, providers were testing interventions to ensure that colorectal surgery patients had normal temperatures within one hour of surgery, says Dalton.

They tested interventions on a Friday for one operating room (OR) and found that their interventions produced the desired outcome. But on Monday, the same interventions were used in the same OR and did not achieve the desired outcome. Staff considered changing the method of taking the patients’ temperatures and changing the interventions.

However, when staff investigated further, they discovered that maintenance staff had lowered the temperature of the OR area to 50 degrees over the weekend, which explained why Monday’s patients, unlike Friday’s, did not maintain normal temperatures.

Pilot clinicians

At the front end of the curve for the spread of change in an organization are the innovators and early adopters, who are receptive to improvements that have been reported at other institutions. At the back end of the curve are the late adopters and laggards, who will need to be swayed by local data and local evidence.

The first clinician that you choose to test the intervention should be the one who is most interested in participating in your test, Haraden says.

The world-class surgeon with enormous credibility who has no intention of changing anything is not a good starting point, nor is the junior staff member who has little credibility, she adds. Nor is it necessarily a good idea to pilot-test an intervention in a particularly problematic area, she says.

“The typical inclination is to start where the pain is the greatest, but you may not want to go where there’s a lot of pain, because there could also be a lot of resistance,” Haraden says. She emphasizes that willingness and interest are the two most important criteria in choosing clinicians to test an intervention.

Haraden says innovators tend not to be good collaborators because “they live in their own worlds.” However, early adopters may be the most willing and interested in testing a new intervention that has succeeded elsewhere in your institution.

Although physician resistance to new practices is often unfounded, local adaptations frequently are needed when adopting changes, Haraden says.

As hospitals work to meet guidelines for delivering antibiotics within 60 minutes of surgery, for example, they are taking many different approaches to that goal, placing responsibility on nurses or anesthesiologists depending on factors within each hospital.

Three questions

Before initiating rapid-cycle testing, it’s important to answer the following three questions:

  • What are you trying to accomplish?
  • How will you know that a change is an improvement?
  • What changes can you make that will result in improvement?

The aim statement should be specific, says Dalton. “Heal all patients, all the time,” is not an aim statement, she says. Haraden adds that it’s important for everyone around the table to be headed in the same direction, noting that many committees are “driving across the country” without a map or clear destination.

“Groups can be working together, thinking they share common objectives, only to discover later that many had not shared [their] assumptions with the group,” states a paper about rapid-cycle improvement by Healthcare Quality Strategies, Inc. “Working through these assumptions at the beginning is a must.”

Before initiating rapid-cycle testing, there should be a numerical measure for documenting improvement and collecting data. For diabetes patients, for example, Haraden says, there would have to be agreement on what exactly is being measured and improved.

Haraden advocates setting “aspirational goals, not dust ball goals.”

“We talk about ‘half-lives’ as benchmarks in meeting goals,” she says. “If the ultimate goal is zero, and your rate is 9.2, 4.6 is a first benchmark to celebrate.”

Meeting these half-life goals can create the energy and support needed for eventually meeting the ultimate goal, Haraden says.
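The half-life arithmetic behind those benchmarks can be written out in a few lines. This is an illustrative sketch of the idea only; the function name and defaults are assumptions, not an IHI formula:

```python
def half_life_benchmarks(current_rate, ultimate_goal=0.0, steps=4):
    """Each benchmark closes half of the remaining gap between the
    current rate and the ultimate goal."""
    benchmarks = []
    rate = current_rate
    for _ in range(steps):
        rate = ultimate_goal + (rate - ultimate_goal) / 2
        benchmarks.append(round(rate, 2))
    return benchmarks

# With an ultimate goal of zero and a current rate of 9.2, the first
# benchmark to celebrate is 4.6, matching Haraden's example.
print(half_life_benchmarks(9.2))
```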