How did the consortium come into existence?
We’ve always been interested in clinical outcomes of coronary intervention, and how to improve those outcomes. In 1997, it occurred to us that single institutions don’t have the ability to collect an adequate amount of data to allow a meaningful evaluation of the risk factors for adverse events from coronary interventions. Also, single institutions cannot compare themselves with others unless they have access to other institutions’ data and/or a collaborative effort with other institutions. As a result, we decided to make a proposal to the Blue Cross Blue Shield of Michigan Foundation. In 1997, we proposed the creation of a coronary intervention regional registry, focusing on data collection for risk assessment, benchmarking, and comparison of practice variations among different institutions. The overall goal of the project was to improve quality of care for patients undergoing percutaneous coronary interventions.
The Blue Cross Blue Shield of Michigan Foundation decided to fund the project. The Foundation is a nonprofit organization that funds quality improvement projects and outcome-related projects in the state of Michigan. It’s a funding agency for clinical studies and clinical protocols. We then proposed participation in this study and project to centers which were part of the Blue Cross Blue Shield Cardiovascular Centers of Excellence network. In 1997, we made a proposal to these hospitals to participate in this consortium, and eight agreed to participate at that time. The consortium now includes 17 participating centers, after expansion of the projects through additional funding from Blue Cross Blue Shield of Michigan.
How is data managed? Is it confidential?
Yes, absolutely. Data is confidential for every institution. We have a coding system in our database where every institution is assigned a number. Each operator is included in the database, also coded as a number, and the code resides with the institution supplying the data for that particular operator. So, for us, the University of Michigan coordinating center, data is blinded. We don’t know who is who in terms of centers and we don’t know who is who in terms of operators.
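The coding scheme described above can be sketched in a few lines. This is purely illustrative: the field names and numeric codes are assumptions for the example, not the consortium's actual database schema.

```python
# Illustrative sketch of a blinded coding scheme: the coordinating center
# receives only numeric codes, while the key linking codes to real names
# stays with the institution that supplied the data. All field names and
# code values here are hypothetical.

def blind_record(record, site_code, operator_code):
    """Replace identifying fields with numeric codes before submission."""
    blinded = dict(record)
    blinded.pop("hospital_name", None)   # identity remains at the local site
    blinded.pop("operator_name", None)
    blinded["site_id"] = site_code       # e.g., assigned once per institution
    blinded["operator_id"] = operator_code
    return blinded

record = {"hospital_name": "Example Hospital",
          "operator_name": "Dr. Example",
          "procedure": "PCI",
          "outcome": "no complication"}
blinded = blind_record(record, site_code=7, operator_code=42)
```

The point of the design is that the mapping from code 7 back to "Example Hospital" never leaves the submitting institution, so the coordinating center can analyze and benchmark without knowing who is who.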
We also are HIPAA-compliant in terms of patient confidentiality, and of course the study has been approved by our institutional IRB, as well as by local institutional review boards.
How many people at the University of Michigan are working on this project?
Our group includes myself as principal investigator of the project, and Eva Kline-Rogers, a trained research nurse, who has been working with us since the beginning. She’s the manager for the whole project. We also have a full-time biostatistician, 2 part-time data managers, a part-time computer programmer, and Cec Montoya, a quality improvement specialist who has recently joined us. She’s also been working for the ACC with the GAP project in Michigan (The American College of Cardiology Acute Myocardial Infarction Guidelines Applied in Practice Projects to Improve the Quality of AMI Care in Michigan, www.acc.org/gap/mi/ami_gap.htm).
How does the work of the consortium compare to that of the American College of Cardiology National Cardiovascular Data Registry (ACC-NCDR)?
There are several similarities, particularly in the data that are collected. Most of our data elements are also included in the ACC database, and we use the same definitions that have been made public by the ACC. However, we have some additional data elements that we felt might be important to collect. In addition, as a regional registry, we can make rapid changes to our data collection process. For example, drug-eluting stents have recently been introduced into the market. We were able to change our data collection form from one day to the next, in order to start collecting data on drug-eluting stents. The same approach applies to other devices like distal embolic protection devices, thrombectomy devices, and so on.
The major difference between our consortium and the ACC is that we are a regional program, and as such, we can have closer collaboration among participating centers. For example, we have a quarterly meeting where nurse coordinators, administrators and physicians from each participating center meet in order to discuss data collected by the consortium, projects for quality improvement and also potential analyses for scientific publication.
We have a very strict program to ensure that the data we are getting in the registry are of good quality. We are currently collecting data on paper forms; each form is checked by a research nurse for data validity and completeness. If we’re missing clinical outcomes, co-morbidities, or procedure variables, the form is sent back to the participating center. Additional diagnostic routines for data validity are included in the database. We also do site visits twice a year. During those visits, we perform random case reviews, and we pull out our database log and compare it against the cath lab log. We want to ensure that all the consecutive cases have been included in our database.
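The completeness check described above — a form missing clinical outcomes, co-morbidities, or procedure variables goes back to the participating center — could be automated along these lines. The required field names are invented for the example:

```python
# Hedged sketch of an automated completeness check, mirroring the manual
# review a research nurse performs on each paper form. The required
# field names below are illustrative assumptions.

REQUIRED_FIELDS = ["in_hospital_outcome", "comorbidities", "contrast_volume_ml"]

def missing_fields(form):
    """Return the required fields that are absent or left blank on a form."""
    return [f for f in REQUIRED_FIELDS if not form.get(f)]

form = {"in_hospital_outcome": "discharged alive",
        "comorbidities": ["diabetes"],
        "contrast_volume_ml": None}   # left blank on the paper form

to_return = missing_fields(form)      # ["contrast_volume_ml"] -> send form back
```

A form with a non-empty result would be returned to the submitting center, just as in the paper workflow.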
Are there plans to computerize your paper form process?
Yes. We’re in the process of doing so. For some centers, collection on paper forms is quite good, while for other centers, there’s a need to streamline data collection using an electronic format. We’re in the process of making this arrangement for participating centers that have made this request.
What are some of the quality improvement goals that you’ve achieved?
Our analysis of practice variation among participating hospitals is one major result achieved by our group. First, we were able to find that some practice variation may not be necessarily based on clinical evidence. Second, in the year 1998, we started discussing if there was room for improvement. At that time, we selected certain quality indicators that we felt were important. Among these quality indicators, for example, was the use of aspirin prior to the procedure. Nobody would argue that every patient who undergoes percutaneous coronary revascularization should be on aspirin. Yet we found that administration and documentation of pre-procedure aspirin use was at 90% at that time. I believe that it should be close to 98%, if not higher. Part of it was due to inadequate documentation, but part of it was due to the fact that some patients were not receiving aspirin.
It was the same story for another quality indicator. We found that there was substantial variation among different institutions in the total amount of contrast per case. In an analysis we published in the American Journal of Cardiology [1], we also found that exceeding a certain amount of contrast per case was the most important predictor of developing renal failure requiring dialysis. Once renal failure has occurred, it’s a complication that is also associated with an increased risk of in-hospital morbidity and mortality. The mortality rate is very high.
Again, we found significant variations in GP IIb/IIIa receptor blocker use. We also found variations in the frequency of emergency bypass surgery among the participating centers, as well as in the transfusion rate, vascular complications rates, and so on.
As a result of these observations, we selected a group of indicators for the consortium to work on, which included certain pre-procedure medications, total contrast per case, and certain clinical outcomes. Then we asked each participating center to select two or three quality indicators to focus on from this list. We collected data in 1998, and we collected it again in 2002, after the quality improvement program was implemented. We found that there has been a reduction in practice variation, which means that there’s much more standardized care. There has also been significant improvement in clinical outcomes, which is the most important result.
You presented several abstracts at the 2003 American Heart Association Meeting. Can you share some of your conclusions?
One of our abstracts concerned an analysis limited to patients with diabetes, and the identification of risk factors for in-hospital mortality in those patients [2]. Prior studies have shown that patients with diabetes mellitus are at increased risk of complications from coronary intervention. Diabetes is associated with an increased risk of contrast nephropathy, and with a substantial risk of in-hospital mortality. We developed a simplified risk prediction score for in-hospital mortality, which can be used at the bedside. It allows you to tell a patient with diabetes mellitus what his or her actual risk of death in the hospital may be after a coronary revascularization procedure. Of course, not all patients are the same. Some patients may have an increased number of co-morbidities that, regardless of what you may be doing with your coronary intervention, may be related to an increased risk of death in the hospital. The risk score allows you to provide a patient with more precise information about what the risk might be. It allows the patient to make a more informed decision, and it’s useful for clinicians, because it allows them to determine the risk for a particular patient.
Have any consortium hospitals implemented use of the risk score?
We haven’t yet implemented the risk score for patients with diabetes mellitus, but we have implemented the use of a risk score for a general patient population. We published a study in Circulation in 2001 [3], which has a risk score allowing you to predict in-hospital mortality for an individual patient. Each one of the participating hospitals has been using that risk score as a prognostic indicator.
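The published score itself is not reproduced in this interview, so the risk factors, point weights, and thresholds below are invented for illustration. The sketch only shows how a simple additive bedside tool of this kind works: each factor present adds points, and the total maps to a risk band.

```python
# Purely illustrative additive risk score. The factors, weights, and
# thresholds are NOT the published Circulation score -- they are
# placeholders showing the mechanics of an additive bedside tool.

POINTS = {"age_over_75": 2,
          "cardiogenic_shock": 5,
          "renal_insufficiency": 3,
          "acute_mi": 2}

def risk_score(patient):
    """Sum the points for each risk factor present on the patient record."""
    return sum(pts for factor, pts in POINTS.items() if patient.get(factor))

def risk_band(score):
    """Map a total score to a coarse risk category (thresholds invented)."""
    if score >= 7:
        return "high"
    if score >= 3:
        return "intermediate"
    return "low"

patient = {"age_over_75": True, "acute_mi": True}
score = risk_score(patient)   # 2 + 2 = 4
```

The appeal of an additive design is exactly what the interview emphasizes: it can be computed at the bedside with no calculator or model software, yet still gives the patient a concrete, individualized estimate.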
It sounds like much of the value in the consortium is not only in mining data trends, but also in bringing information to hospitals in such a way that they can utilize it appropriately.
You talked about improving outcomes in PCI. Do you think the success was primarily due to the fact that you were able to raise awareness of proper treatment among physicians?
There are two possibilities. One is that the improvement you see is just related to secular changes, that is, temporal trends in clinical outcomes. On the other hand, there is the possibility that part of the improvement observed in our study was in fact due to changes in the care process as a result of the quality improvement effort. We have some clear evidence that this second possibility is the case. Some of that evidence comes from contrast nephropathy requiring dialysis. We did a year-by-year analysis, from 1998 all the way to 2002, and we found that in 1998 and 1999, the incidence of nephropathy requiring dialysis was the same between the two years, and the percentage of patients exceeding a certain amount of contrast was the same. In 1999, we did an initial analysis, which showed that exceeding a certain amount of contrast was an important predictor of needing to go on dialysis. We started notifying the participating institutions of the findings of this study, which were presented in abstract form at the 1999 American Heart Association meeting. In 2001, we further refined the analysis and created a risk prediction tool that allows you to identify patients who may be at increased risk of contrast nephropathy. The tool also provides guidelines on how to prevent the development of contrast nephropathy. In 2001, we had a decrease in patients with nephropathy requiring dialysis, as well as a decrease in patients exceeding that amount of contrast, and we had a further decrease in 2002. This decrease did not happen by chance. There was an association with data analysis and with sharing risk prediction tools and guidelines. That association resulted in a reduction in contrast nephropathy [4]. It’s a nice example.
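Flagging cases that exceed a patient-adjusted contrast limit is straightforward to compute. The interview does not state which threshold the consortium used, so as an assumption the sketch below uses a widely cited weight- and creatinine-adjusted maximum (5 mL × weight in kg / serum creatinine in mg/dL, capped at 300 mL), often attributed to Cigarroa:

```python
# Sketch of flagging cases that exceed an adjusted maximum contrast dose.
# The formula here (5 mL x weight / serum creatinine, capped at 300 mL)
# is a commonly cited limit used as an ASSUMPTION -- the interview does
# not specify the consortium's actual threshold.

def max_adjusted_contrast_ml(weight_kg, serum_creatinine_mg_dl):
    """Weight- and renal-function-adjusted contrast ceiling, capped at 300 mL."""
    return min(5.0 * weight_kg / serum_creatinine_mg_dl, 300.0)

def exceeds_limit(contrast_used_ml, weight_kg, serum_creatinine_mg_dl):
    """True if the contrast actually used exceeds the adjusted ceiling."""
    return contrast_used_ml > max_adjusted_contrast_ml(weight_kg,
                                                       serum_creatinine_mg_dl)

# Example: an 80 kg patient with serum creatinine 2.0 mg/dL has a 200 mL
# ceiling, so a 250 mL case would be flagged for the nephropathy review.
ceiling = max_adjusted_contrast_ml(80, 2.0)   # 200.0 mL
flagged = exceeds_limit(250, 80, 2.0)         # True
```

Note how the ceiling falls as renal function worsens, which is exactly why a fixed per-case limit is a poorer predictor than an adjusted dose.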
We also have evidence of increased utilization of aspirin as part of an internal effort in some of the participating institutions, as well as evidence of a reduction in blood transfusions in institutions that decided to focus on this particular quality indicator.
Can you talk about your conclusions on the relevance of risk prediction models?
My personal philosophy is that you can look at risk adjustment in two ways. Not all patients are the same, and not all institutions and operators may be the same, in terms of their ability to provide a certain type of care. If you want to compare results of differing institutions and different operators, you need to have what we call risk-adjusted data. In other words, you need to be able to adjust for co-morbidities and estimate the predicted mortality in the patient population. The results will allow you to make some comparisons across different institutions and operators.
The problem is that when you do such an analysis, there are also going to be temporal changes in the patient population and in clinical outcomes. For example, look at what was happening in 1994, when we were using stents in very few patients. Today stenting is close to 85 to 90 percent. Outcomes in 1994 were driven by an older technology that resulted in increased risk, for example, in emergency bypass surgery and perhaps death in the hospital. If you use a risk adjustment model that was developed in 1994 and apply it to 2001 and later, your model will fall apart, because your outcomes are much better in 2001.
That’s why, if you want to compare different operators and institutions, you need to be able to update your models using contemporary data from coronary intervention.
On the other hand, if I want to compare the result of a process which aims toward improvement of outcomes, I need to collect my baseline data, develop my risk adjustment model, and then apply that risk adjustment model to data that I collected later and determine if my risk-adjustment outcomes are in fact better when compared to the baseline time period.
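The comparison described above — fit a model on baseline data, then check whether later risk-adjusted outcomes beat the model's predictions — is often summarized as an observed-to-expected (O/E) ratio. A minimal sketch, with the per-patient predicted probabilities invented for the example:

```python
# Minimal observed-to-expected (O/E) sketch: "expected" deaths are the sum
# of per-patient mortality probabilities from a risk model fit on baseline
# data. An O/E ratio below 1 suggests outcomes better than the baseline
# model predicts. All numbers below are invented.

def oe_ratio(observed_deaths, predicted_probs):
    """Observed deaths divided by the model's expected death count."""
    expected = sum(predicted_probs)
    return observed_deaths / expected

# Baseline-era model applied to a later cohort of five patients:
predicted = [0.01, 0.02, 0.05, 0.10, 0.02]   # expected deaths = 0.20
ratio = oe_ratio(observed_deaths=0, predicted_probs=predicted)
```

This also illustrates the point about stale models: if the 1994-era model overpredicts risk for a 2001 cohort, the "expected" denominator is inflated and every operator looks artificially good, which is why the model must be refit on contemporary data for cross-institution comparisons.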
What are some of your plans for the future?
The consortium started with 8 hospitals, and now we have a total of 17 hospitals in the state of Michigan. We will work on quality improvement with these added hospitals. We’ll also perform new analyses, looking at, for example, vascular closure devices. We are also collecting data now on the utilization of drug-eluting stents. It’s really a process that never ends, because there’s always something new in interventional cardiology, and you always discover something that may be related to better or worse outcomes. We’re always trying to keep up in a rapid fashion. The advantage of our consortium is that with 17 hospitals, we have access to a large amount of data very quickly.
All the hospitals are really working together beyond the barriers of market competition, and this collaborative spirit has led to major advancements.
1. Freeman RV, O’Donnell M, Share D, et al. Nephropathy requiring dialysis after percutaneous coronary intervention and the critical role of an adjusted contrast dose. Am J Cardiol 2002;90:1068-1073.
2. Mukherjee D, Eagle KA, Smith D, et al. A Simple Risk Score for Predicting Mortality in Diabetic Patients Undergoing Percutaneous Coronary Intervention. Circulation 2003;108(17): IV-717-718. 2003 AHA Scientific Sessions abstract.
3. Moscucci M, Kline-Rogers E, Share D, et al. Simple Bedside Additive Tool for Prediction of In-Hospital Mortality After Percutaneous Coronary Interventions. Circulation 2001;104:263-268.
4. Chetcuti SJ, Mukherjee D, Smith DE. Development of a Continuous Quality Improvement Program for the Reduction of Contrast Nephropathy After Percutaneous Coronary Intervention. Circulation 2003;108(17): IV-769. 2003 AHA Scientific Sessions abstract.