What We Can Learn From Our Patients

December 15, 2014
David Eagle, MD, and Bo Gamble

Oncology, Volume 28, Issue 12

Patient satisfaction data can give practices insight into their operations, allowing them to make specific, practice-level adjustments; aggregated data can also yield insights into global practice responsiveness and patient perceptions of care.

Measurement of healthcare quality continues to evolve. Currently, most measurements focus on care process and health outcomes.[1] Patient satisfaction surveys have become much more common in healthcare over the past several years, and many rightly consider assessment of patient satisfaction an important aspect of quality measurement (currently, patient satisfaction is included in the Hospital Compare database and is used to help determine hospital compensation).

What can be learned from the results of such surveys, and in what concrete ways can they lead to better healthcare and better health outcomes? Some perhaps hope that patient survey results might point the way to better care, or shed light on ways in which providers could save money, two areas in which improvement is avidly sought. However, it is unclear whether survey results reflect the quality of care delivered or simply patients’ perceptions of met expectations or the communication skills of the providers. How accurately high patient satisfaction reflects quality of care or predicts lower overall health spending is highly uncertain and an area of current debate.[2-4]

Here, to help answer questions of what and how we can learn from patient satisfaction surveys, we discuss a recently launched oncology-specific patient satisfaction survey that was designed for practices that had decided to participate in the “medical home” model of oncology care. Oncology-specific surveys are new. This article provides a “first look” at the process of administering such a survey, the opportunities it offers for practice improvement, and the data themselves. With time, experience, and more survey results, we can expect to learn more.

In 2012, the Community Oncology Alliance launched its medical home model of oncology care. One feature of this model was an oncology-specific patient satisfaction survey that was offered to participating practices. The survey drew upon the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey inventory of standard and optional questions and was designed by a group of eight oncology practice administrators involved in oncology medical home initiatives. The survey assesses four domains of patient satisfaction: timeliness, thoroughness, communication, and friendliness/helpfulness. The full survey is available online. Surveys can be distributed either in paper format or electronically. Although all survey tools are free to all sites of care, including private practices, hospital-based practices, and academic centers, practices must register to participate in national, regional, and other benchmarking.

So far, more than 40 practices have submitted over 24,000 surveys, and more than 1,100 oncology providers are currently registered to use the survey. Survey data can be entered directly by patients electronically, or hard copy survey results can be uploaded. Benchmarking results, which are the data derived from comparing a practice’s results with those of other practices, are immediately available to participating practices in real time. All patient responses are anonymous. Fourteen standard reports can be created to compare a practice as a whole to the national or state average, to compare providers within a practice, or to compare locations within a practice, as well as other options. The data derived from the surveys are for participating practices to use at their discretion. Practices may choose to share their survey results with payers if they desire and if this fits within specific payment model arrangements that a practice may negotiate. One caveat: patient satisfaction surveys can lead to physician job dissatisfaction and workforce attrition, particularly when results are used as a punitive tool.[4]

Our practice of four providers (three physicians and one nurse practitioner) just completed our first survey process. We surveyed all patients who came to either of our two office locations during the month of July. The only patients excluded were patients coming for their very first appointment. We received a total of 176 completed surveys. Benchmarking results were immediately available to our practice when the survey results were entered into the national database.

In general, we were very pleased with our results. We scored above average for all questions except two. We learned that we can do a better job of responding quickly to patient calls made after office hours. Also, we learned that we can get better at asking patients whether they will be able to follow our instructions regarding their plan of care. We met as a practice and have made changes to improve our performance in these areas. Physicians are making an effort to respond to calls more quickly. If this does not work, we may need to re-evaluate our answering service to determine whether unacceptable delays occur when relaying patient messages. Both physicians and staff will be more mindful of the importance of asking patients if they can follow our instructions. Quality improvement is an iterative process. We plan to repeat the survey every 6 months, thereby enabling us to measure our progress and find new opportunities for improvement.

The Table displays median figures for selected survey questions for the aggregated results of the approximately 24,000 surveys collected thus far. The answers to these questions have obvious value to individual practices. In addition, they speak to issues in which policymakers and insurers have an interest, such as timeliness of care, care coordination, and patient perception of quality of care. While attending various forums on oncology and healthcare delivery, we commonly hear comments about the perceived shortcomings of the way in which oncology offices provide care. However, in these conversations, anecdote and conjecture typically supersede any measurable data. While imperfect, the aggregate results of these surveys can allow for a more informed conversation on many issues of interest and importance.

The first two questions in the Table speak to how responsive a provider or office is to urgent patient issues. This is relevant, since patients whose urgent needs are not met are more likely to obtain care through emergency rooms, with possible subsequent hospitalizations. Importantly, some interpretation of the responses to these two questions is in order. For example, a suboptimal response may reflect a situation in which clerical staff or nursing staff offered an appointment the same day but it simply was not at a time slot that the patient was willing or able to accept. Also worth keeping in mind is the fact that oncology offices have had to increase patient volumes over the past decade to remain financially solvent; a busier patient schedule reduces a practice’s ability to address less predictable, urgent patient issues. In spite of all such extenuating circumstances, a lower score on these first two questions should prompt a practice to consider measures that will help it strike a better balance and minimize unnecessary emergency room visits, a hot-button issue for payers. Oncology-specific telephone triage systems, another oncology medical home concept, may help in achieving this balance.

The third question speaks for itself. All providers should strive for 100% honesty and credibility with patients. Falling short of 100%, and certainly falling below the average, should prompt further reflection and insight on the part of the individual providers involved. The fourth question is self-explanatory and can provide simple, direct feedback on how well patient needs are met.

The final question speaks to important aspects of care coordination and the ultimate transition of care back to primary care providers. Fragmented, poorly coordinated care is likely a key driver of increased health costs. Again, practices can now get direct feedback on this important element of care, consider opportunities for improvement, and measure progress through subsequent surveys.

This data set has limitations that stem from the methodology used. Practices volunteer to perform the survey, primarily because the chief purpose of the survey is the self-improvement of the practice. Each practice may implement the survey in a slightly different way, and the overall survey response rate is not known. Nevertheless, we can learn something from the opinions of more than 24,000 patients receiving care in hematology-oncology offices. With time, we expect to have well over 100,000 survey results. Large observational data sets such as this can be important tools if interpreted correctly and used with proper caution. Some of the questions speak to issues for which practically no data exist. Again, we can now have a more informed conversation regarding issues such as care coordination and practice responsiveness.

To conclude, the value of patient satisfaction data is twofold. First, practices can gain insight into their operations and make specific, practice-level adjustments accordingly. Second, the data can be aggregated, yielding insights into global practice responsiveness and patient perceptions of care. Careful interpretation is required, and not all data should simply be taken at face value. Still, having some data to work with is almost certainly better than having none.

Whatever the strengths and limitations of surveys may be, we should all want to hear from our patients. Furthermore, we should all strive to satisfy our patients, independent of whether satisfaction predicts quality or other global health metrics. We are here to serve them.

Financial Disclosure: The authors have no significant financial interest in or other relationship with the manufacturer of any product or provider of any service mentioned in this article.

The content of ONCOLOGY’s Practice & Policy department represents a joint venture between the editors of ONCOLOGY and the Community Oncology Alliance. Articles featured in Practice & Policy are supplied by the COA as part of its mission to educate oncologists about the policy issues that affect the nation's cancer care delivery system. Practice & Policy features reflect the views of their authors and not necessarily those of the COA.

References:

1. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction. Arch Intern Med. 2012;172:405-11.

2. Chang JT, Hays RD, Shekelle PG, et al. Patients’ global ratings of their health care are not associated with the technical quality of their care. Ann Intern Med. 2006;144:665-72.

3. Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med. 2013;368:201-3.

4. Zgierska A, Rabago D, Miller M. Impact of patient satisfaction ratings on physicians and clinical care. Patient Prefer Adherence. 2014;8:437-46.
