BEFORE TAKING ON THE ASSIGNMENT, PLEASE ENSURE THAT YOU UNDERSTAND ALL OF THE INFORMATION AND REQUIRED READINGS THAT GO ALONG WITH IT. MUST BE ORIGINAL WORK AND NO AI ASSISTANCE.
Using the information in the background readings as well as some research in peer-reviewed sources of your own, create a professional-looking PowerPoint presentation of 10-14 slides (not including the title slide or the reference slide) which clearly summarizes each of the items below. Include thorough speaker’s notes to further expand upon and explain your points. Use in-text citations on your slides, as well as a reference slide at the end of the presentation.
1. Summarize the three types of assessment to be completed during utilization review. What is involved in each process and who completes this work? What is the health care manager’s specific role in the review?
2. Provide an example of how each of the three types of assessment is performed in current health care settings. In your three examples, specifically explain how quality was/was not preserved, and how cost was/was not controlled.
3. Describe the role of communication between all stakeholders involved in utilization review. Who are the stakeholders, and how should this communication take place? How are decisions best made and communicated when the review involves competing interests among stakeholders (e.g., off-label drug use for terminally ill patients, handling bed shortages)?
Homework Module 2 SLP Assignment Expectations
1. Conduct additional research to gather sufficient information to support your presentation.
2. Provide 10-14 quality PowerPoint slides of bulleted-point information content (with speaker’s notes), not including title page and reference slides. Don’t forget to use in-text citations on the slides.
3. Support your presentation with peer-reviewed articles and reliable sources. Use at least two peer-reviewed sources. For additional information on how to recognize peer-reviewed journals, see:
Angelo State University Library. (n.d.). Library Guides: How to recognize peer-reviewed (refereed) journals. Retrieved from https://www.angelo.edu/services/library/handouts/peerrev.php
and for evaluating internet sources:
Georgetown University Library. (n.d.). Evaluating internet resources. Retrieved from
https://www.library.georgetown.edu/tutorials/research-guides/evaluating-internet-content
4. You may use the following source to assist in formatting your assignment:
Purdue Online Writing Lab. (n.d.). General APA guidelines. Retrieved from https://owl.english.purdue.edu/owl/resource/560/01/.
5. Paraphrase all source information into your own words carefully, and use in-text citations.
6. Be sure that you do not cut and paste material into your slides, but use proper quotations where needed, and also citations for all reference materials. The same expectations apply to PowerPoint presentations as to documents.
Quality management in healthcare organizations: Empirical evidence from the Baldrige data
Mahour Mellat Parast a,*, Davood Golmohammadi b
a Assistant Professor of Technology Management, North Carolina A&T State University, 1601 E Market Street, Greensboro, NC 27411, USA
b Associate Professor of Management Science and Information Systems, University of Massachusetts-Boston, 100 William T. Morrissey Blvd, Boston, MA 02125, USA
* Corresponding author.
ARTICLE INFO
Keywords: Quality management; Healthcare quality; Malcolm Baldrige National Quality Award (MBNQA); Structural equation modeling
ABSTRACT
The purpose of this paper is to investigate the determinants of customer satisfaction and quality results in the healthcare industry using the Baldrige data. We use publicly available data on quality assessment of healthcare organizations that applied for the Baldrige award to examine two research questions: 1) whether the Baldrige model is a reliable and valid model for assessment of quality practices in healthcare organizations, and 2) what the relationship is between quality practices and quality results in healthcare organizations. Using structural equation modeling, the findings suggest that the Baldrige model is a valid and reliable quality assessment model for healthcare organizations. Consistent with previous studies, the findings suggest that the main driver of the system is leadership, which has a significant effect on all quality practices in the healthcare industry. Controlling for the applicants' year, the findings indicate that 1) Information analysis and knowledge management has a significant impact on Quality results, 2) Workforce development and human resource management has a significant impact on both Customer focus and satisfaction and Quality results, and 3) Strategic planning for quality has a significant impact on Customer focus and satisfaction. The study provides insights and suggestions for healthcare organizations on how to improve their quality systems using the Baldrige model.
1. Introduction
The last few decades have seen a great emphasis on improving the quality of healthcare in the United States (Institute of Medicine, 2001; Chassin and Galvin, 1998; Jencks et al., 2003; McGlynn et al., 2003; Matthias and Brown, 2016; Nair and Dreyfus, 2018). As a result, a diverse array of approaches has been advocated to improve the quality of care. Some of these programs, such as pay-for-performance (Epstein et al., 2004; Lindenauer et al., 2007; Millenson, 2004) and public information disclosure (Chassin, 2002; Fung et al., 2008; Marshall et al., 2000; Williams et al., 2005), have garnered substantial academic interest. Other programs, however, have received limited investigation. This is concerning in that focus on only a few programs may result in healthcare policymakers developing and implementing suboptimal programs or making investment decisions that do not make the best use of resources. Research in healthcare quality and healthcare services could be valuable due to the challenges of complexity, co-production, and service intangibility inherent in service delivery in healthcare organizations (Vogus and McClelland, 2016; Halkjær and Lueg, 2017; Silander et al., 2017). The intrinsic complexity in healthcare organizations can provide a useful context for operations management scholars to study organizational processes in such a complex environment (Eisenhardt and Graebner, 2007; Farjoun and Starbuck, 2007). As indicated by Bardhan and Thouin (2013), "An important, and sometimes overlooked, dimension in the debate over healthcare reform is the quality angle."
Attention to quality in the healthcare industry gained momentum after the publication of the report "To Err is Human: Building A Safer Health System" by the National Academy (Kohn et al., 2000). Following the success of quality management programs in the manufacturing sector (Power et al., 2011), the healthcare industry was motivated to adopt quality management practices and principles to ensure delivery of proper care, reduce healthcare delivery costs, and increase patient satisfaction (Alexander et al., 2006; Macinati, 2008; Sabella et al., 2014; Russell et al., 2015; Um and Lau, 2018). Nevertheless, the outcome of implementing quality management in the healthcare industry is mixed and does not provide a clear picture of the effectiveness of quality management in improving healthcare quality. While many healthcare organizations faced significant challenges in successful implementation of quality management (Bringelson and Basappa, 1998; Zabada et al., 1998; Ennis and Harrington, 1999; Huq and Martin, 2000), there are examples of successful implementation in
the literature (Motwani et al., 1996; Klein et al., 1998; Chattopadhyay and Szydlowski, 1999; Jackson, 2001; Francois et al., 2003). Because of the limitations of these studies in terms of their small sample size, focus on a few departments in a healthcare organization, and focus on narrow aspects of organizational performance, it is unclear whether quality management can improve healthcare quality (Mosadeghrad, 2015).
Research in quality management was promoted by the development of national and international quality standards such as the European Foundation for Quality Management (EFQM) excellence model and ISO-9001 (Araújo and Sampaio, 2014; Castka, 2018; Martín-Gaitero and Escrig-Tena, 2018). These quality models provide a framework for organizations to assess the effectiveness of quality management practices, and to determine areas of improvement that are oriented towards accomplishing balanced results for all the stakeholders (Bou-Llusar et al., 2009). One initiative designed to improve quality that has not received empirical scrutiny is the Malcolm Baldrige National Quality Award (MBNQA), arguably the most prestigious quality award that can be attained by healthcare providers (Lin and Su, 2013). The MBNQA is given annually by the National Institute of Standards and Technology (NIST), a division of the Department of Commerce, to applicant organizations across six industry sectors (manufacturing, service, small business, education, healthcare, and non-profit). The MBNQA was created in 1987 to foster competitiveness of U.S. companies, and in 1999 was expanded to include healthcare providers after a limited four-year pilot that debuted in 1995. Since its introduction, over 1500 organizations across all categories have applied for the award, with 19 healthcare organizations winning the award since 2002 (Baldrige Award Recipient Information, 2015). Winning the MBNQA brings public acclaim (National Institute of Standards and Technology (NIST), 1995), with U.S. Commerce Secretary Penny Pritzker stating about the 2014 MBNQA award winners, "Today's honorees are the role models of innovation, sound management, employee and customer satisfaction, and results. I encourage organizations in every sector to follow their lead." Since the award expanded to the healthcare sector, healthcare providers have been particularly keen to apply for the MBNQA, and now compose approximately 50% of the applicants (Foster and Chenoweth, 2011). However, to date, there is limited evidence about the program's effectiveness in improving healthcare quality, primarily because the data on the Baldrige assessment of healthcare organizations was treated as confidential and consequently not publicly available.
A review of the literature in healthcare quality demonstrates the inherent complexity of quality in the healthcare industry, which makes the healthcare industry unique in terms of how quality management practices should be implemented (Vogus and McClelland, 2016; Bortolotti et al., 2018). First, providing a specific treatment path for each patient, along with the heterogeneity of customers (patients), requires a high level of customized care that increases complexity in the quality of care (Sofaer and Firminger, 2005). Second, the knowledge asymmetry (knowledge gap) between the healthcare provider and the patient adds to the complexity of the entire healthcare process (Dempsey et al., 2014). While healthcare organizations have tried to manage this complexity by engaging patients and their families in the process, doing so adds additional complexity to healthcare delivery and to ensuring healthcare quality (Abbott, 1991, 1993). Third, compared to other industries, both customers (the patients) and the organizations (healthcare providers) are exposed to high risks and costs associated with services performed, where the cost of failure is significant (Sofaer and Firminger, 2005; Vogus and McClelland, 2016). Fourth, healthcare organizations must operate under specific regulatory procedures and protocols to ensure high quality experiences for customers (Register, 2011). Finally, unlike other industries, service delivery can happen over a longer time horizon and may involve different forms of treatment, which may affect customers' perception of quality of care in the healthcare industry (Golin et al., 1996; Sofaer and Firminger, 2005). Thus, understanding the antecedents and drivers of healthcare quality and quality results has a significant impact on both healthcare organizations and patients.
The objective of this study is to examine the impact of quality practices associated with the Baldrige model on quality results in the healthcare industry using the Baldrige data (independent reviewers' scores). Surprisingly, empirical studies on the impact of quality management and the Baldrige model in the healthcare industry are rare. A comprehensive search of the EBSCO and ScienceDirect databases using the keyword "healthcare quality" suggests that Meyer and Collier (2001) conducted the only empirical assessment of the relationship between quality practices, using self-reported data on a survey instrument adapted from the Baldrige Healthcare Criteria. Thus, it is still uncertain whether using the Baldrige model and implementing the practices associated with quality management as prescribed by the Baldrige model can lead to improved quality results in healthcare organizations.
This paper addresses four major gaps in the literature on quality management and healthcare quality. First, in contrast to previous studies where surveys were used to collect data from firms regarding their quality practices, this is the first study that investigates the linkage between quality practices and quality results using independent reviewers' scores. Second, this study examines whether the Baldrige model is a valid and reliable model for assessment of quality in healthcare organizations, providing empirical evidence on how healthcare organizations can improve their quality through implementing the Baldrige criteria. Third, this study evaluates the determinants of quality results in the healthcare industry using data on quality performance of healthcare organizations that applied for the Baldrige award from 1999 to 2006, which responds to the call for more longitudinal studies in the assessment of service quality (Subramony, 2009). Finally, by using the independent reviewers' assessment of the quality of care, this study can control for rater bias in customer satisfaction, thereby improving the quality of the data and the rigor of the findings using more valid, reliable, and objective measures of service quality (Schneider et al., 2005; Hekman et al., 2010; Subramony and Pugh, 2015). By using a more comprehensive, objective, and fine-grained set of measures such as the Baldrige model to assess quality (Sofaer and Firminger, 2005), this study can provide a more nuanced explanation of how different organizational practices and mechanisms can improve quality of care (Jenkinson et al., 2002; Vogus and McClelland, 2016), and identify the best practices that organizations can use to improve quality results (Escrig and Menezes, 2015).
2. Previous studies of the Baldrige model in the healthcare industry
Shortell et al. (1995) conducted an empirical study of quality practices and quality outcomes in healthcare organizations, using a survey instrument they developed based on the Baldrige model. Their study shows that 54% of the variation in quality implementation is explained by culture and implementation approach (employee, physician, and administrative orientation and involvement) toward quality systems, indicating the importance of leadership and human resource management in quality implementation in hospitals. Carman et al. (1996) assessed the factors that lead to successful implementation of quality management systems in hospitals. Using the scales developed by Shortell et al. (1995), they did not find any significant relationship between the Baldrige constructs and performance outcomes. It should be noted that the constructs used by Shortell et al. (1995) and Carman et al. (1996) are only loosely based on the Baldrige Health Care Criteria; thus, their studies may not be an accurate representation of the application of the Baldrige model to healthcare organizations (Meyer and Collier, 2001).
In another study, Jennings and Westfall (1994) developed a self-assessment tool (survey instrument) for hospitals that can be used for benchmarking and improvement purposes based on the Baldrige
guidelines. They report that the survey instrument can be used as a valid and reliable tool to assess quality systems in hospitals; however, their study has two limitations: the assessment of the reliability and validity of the survey instrument is based solely on Cronbach's alpha values, and the reliability assessment relies on employee self-reported data.
Meyer and Collier (2001) conducted the only study that used the Baldrige model to examine the linkage between quality practices and quality results in healthcare organizations. Their findings provide important insights into the relevance of the Baldrige model in the healthcare industry. They report that 1) Leadership and Information and analysis are significantly linked with Organizational performance results, 2) Human resource development and management and Process management are significantly linked with Customer satisfaction, and 3) the Baldrige Health Care model is a valid and reliable measurement model. It should be noted that while Meyer and Collier (2001) provide important insights into ways to improve quality in healthcare organizations, they do not fully capture the essence and dynamics of the Baldrige Health Care Criteria, for several reasons:
1) They use the 1995 Health Care Pilot Criteria model, thereby failing to capture changes and modifications to the Baldrige Health Care Criteria in subsequent years; 2) while the survey instrument was developed based on the Baldrige criteria, certain modifications were made in order to make it suitable for cross-sectional survey administration; 3) the data are based on self-reported surveys from hospitals, which do not necessarily represent quality implementation based on the Baldrige guidelines; and 4) because of the cross-sectional nature of the study, it may not be able to capture whether the determinants of quality results in health care organizations remain stable over time. These limitations make it difficult to properly assess the relationships between the Baldrige criteria and their impact on quality results.
3. Assessment of theoretical foundations of the Baldrige model in the healthcare industry
Several studies have used quality management models and have established theoretical foundations underlying the Baldrige criteria. Using self-reported data on quality practices across several industries, Flynn and Saladin (2001) showed that the Baldrige criteria are sound, robust, and have been appropriately revised over time to address the evolving nature of quality. Nevertheless, all of these findings are based on data collected from survey studies associated with quality management, and the validity of the Baldrige model using the Baldrige data has not been examined. It is important to note that the data for survey research is obtained through a specific set of questions aimed at capturing the perceptions of quality managers on quality practices. In a Baldrige assessment, independent reviewers assign scores to the Baldrige criteria based on specific guidelines and procedures. These independent reviewers visit many firms, review documents and reports, and talk to managers and personnel; the reviewers are thus able to make an informed evaluation regarding the level of implementation of quality practices across an organization. There is consistency in the assessment, since independent reviewers follow specific guidelines to evaluate an organization's quality practices. This is a unique evaluation
Table 1
Baldrige model hypotheses.

H2a: Leadership is positively related to management of process quality.
Justification: According to the Baldrige model, leadership has a direct effect on process quality. Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that quality leadership is significantly related to process quality. The significant role of leadership in improving healthcare quality is addressed in the literature (Gilmartin and D'Aunno, 2007; Withanachchi et al., 2007; Naveh and Marcus, 2004).

H2b: Leadership is positively related to information and analysis.
Justification: According to the Baldrige model, leadership has a direct effect on information and analysis. Wilson and Collier (2000) and Meyer and Collier (2001) showed that quality leadership is significantly related to information and analysis.

H2c: Leadership is positively related to human resource development and management.
Justification: According to the Baldrige model, leadership has a direct effect on human resource development and management. Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that quality leadership is significantly related to human resource development and management.

H2d: Leadership is positively related to strategic quality planning.
Justification: According to the Baldrige model, leadership has a direct effect on strategic quality planning. Wilson and Collier (2000) showed that quality leadership is significantly related to strategic quality planning. In healthcare organizations, leadership has been shown to have a direct impact on strategic planning (Mosadeghrad, 2015).

H3a: Management of process quality is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that process quality is significantly related to customer focus and satisfaction.

H3b: Management of process quality is positively related to quality and operational results.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that process quality is significantly related to quality and operational results. Placing a low priority on continuous quality improvement has a negative impact on quality results (Chan and Ho, 1997).

H4a: Information and analysis is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that information and analysis is significantly related to customer focus and satisfaction. Healthcare organizations must find ways in which IT can assist the delivery of high quality patient care.

H4b: Information and analysis is positively related to quality and operational results.
Justification: Meyer and Collier (2001) showed that information and analysis is significantly related to quality and operational results. Bardhan and Thouin (2013) showed that information technology can improve quality and reduce operational costs in hospitals.

H5a: Human resource development and management is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that human resource management is significantly related to customer focus and satisfaction. Lack of incentives and human resources practices leads to ineffective quality outcomes in healthcare organizations (Alexander et al., 2006).

H5b: Human resource development and management is positively related to quality and operational results.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that human resource management is significantly related to quality and operational results. The centrality of strategic human resource management in improving quality results is discussed by Gowen et al. (2006a) and Savino and Batbaatar (2015).

H6a: Strategic quality planning is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Meyer and Collier (2001) showed that strategic quality planning is significantly related to customer focus and satisfaction.

H6b: Strategic quality planning is positively related to quality and operational results.
Justification: Wilson and Collier (2000) showed that strategic quality planning is significantly related to quality and operational results. To improve quality results, healthcare managers should integrate quality as a strategic priority in their organizations' vision, policies, and long-term strategies (Mosadeghrad, 2015).
process that does not exist in typical survey research designs (Evans, 2010; Parast, 2015).
To examine the validity of the Baldrige model, the first step is to assess its theoretical foundations, which can be done through an assessment of the measurement model. Based on the findings of previous studies on the Baldrige criteria using quality management survey instruments (Wilson and Collier, 2000; Flynn and Saladin, 2001; Pannirselvam et al., 1998; Pannirselvam and Ferguson, 2001), our first hypothesis is that the Baldrige model is a valid and reliable model:
H1. The Baldrige model for quality is a valid and reliable model for assessing quality management in the healthcare industry.
Several studies using the Baldrige criteria have established the link between leadership and the other Baldrige categories. With reference to the Baldrige model, and borrowing from previous studies that examined the linkage between the Baldrige criteria and quality performance (e.g., Pannirselvam et al. (1998), Wilson and Collier (2000), and Pannirselvam and Ferguson (2001)), we examine the relationships among the Baldrige criteria. To establish these relationships and to develop the hypotheses, we review the literature on the Baldrige model and quality management. Table 1 provides a review of previous studies that examined the relationship between quality management practices and organizational quality results. This leads to the development of several hypotheses that relate quality management practices to quality results within the Baldrige model (H2a through H6b).
Due to the lack of clarity on the relationships between the Baldrige criteria, previous studies have not provided a clear understanding of how the relationships between the Baldrige criteria influence firm performance and business results (Wilson and Collier, 2000; Evans, 2010). The Baldrige guidelines do not clearly specify the linkages between the criteria; the only clear relationship in the Baldrige model is the influence of leadership on the other categories. Thus, we build our structural model of the relationships among the Baldrige criteria based on the review of the literature in quality management and the Baldrige criteria, which is presented in Table 1. The structural model used for testing the hypotheses is provided in Fig. 1.
4. Variables and measures
The following variables are used to assess the relationships among the Baldrige criteria. Further details about the nature and scope of these variables can be obtained at the following link: https://www.nist.gov/baldrige/baldrige-criteria-commentary-health-care.
Leadership. This category assesses how senior leaders' personal actions and the organization's governance system guide and sustain the organization.
Strategy. This category assesses how organizations develop strategic objectives and action plans, implement them, change them if circumstances require, and measure progress. Assessments cover strategic planning pertaining to patient-focused excellence, operational performance improvement and innovation, and organizational learning and learning by workforce members.
Customers. This category asks how the organization engages patients and other customers for long-term marketplace success, including how the organization listens to the voice of the customer, serves and exceeds patients' and other customers' expectations, and builds relationships with patients and other customers.
Measurement, Analysis, and Knowledge Management. This category is the "brain center" for the alignment of an organization's operations with its strategic objectives. It is the main point within the Health Care Criteria for all key information on effectively measuring, analyzing, and improving performance and managing organizational knowledge to drive improvement, innovation, and organizational competitiveness. Central to this use of data and information are their quality and availability. Furthermore, since information, analysis, and knowledge management might themselves be primary sources of competitive advantage and productivity growth, this category also includes such strategic considerations.
Workforce. This category addresses key workforce practices—those directed toward creating and maintaining a high-performance environment and toward engaging an organization's workforce to enable it and the organization to adapt to change and succeed.
[Fig. 1. Structural model and hypotheses. The path diagram links Leadership to Management of Process Quality (H2a), Information and Analysis (H2b), HR Development and Management (H2c), and Strategic Quality Planning (H2d); each of these four practices is in turn linked to Customer Focus and Satisfaction (H3a, H4a, H5a, H6a) and to Quality and Operational Results (H3b, H4b, H5b, H6b).]
Operations. This category assesses how the organization focuses on its work, the design and delivery of health care services, innovation, and operational effectiveness to achieve organizational success now and in the future.
Results. This category provides a systems focus that encompasses all results necessary to sustaining an enterprise: the key process and health care results, the patient- and other customer-focused results, the workforce results, the leadership and governance system results, and the overall financial and market performance.
5. Methodology
The data for this study was collected from the National Institute of Standards and Technology (NIST) (http://www.nist.gov/baldrige/about/for_researchers.cfm), which provides the independent evaluators' scores from 1990 to 2006. This provides a unique opportunity to address theoretical and causal relationships of the Baldrige framework. The soundness, robustness, and objectivity of the review process ensure a high level of reliability in the data (Evans, 2010). Information about the Baldrige assessment for healthcare organizations is provided at the following link: https://www.nist.gov/baldrige/baldrige-criteria-commentary-health-care.
We used structural equation modeling (SEM) for model validation and assessment. SEM is a family of statistical methods that allow the simultaneous assessment of complex relationships between one or more independent variables and one or more dependent variables. In addition, SEM is a combination of factor analysis and multiple regression analysis that is used to analyze the structural relationships between measured variables and latent constructs (Byrne, 2009; Bagozzi and Yi, 2012).
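The paper does not name its SEM software. As an illustration only, the structural model in Fig. 1 could be specified and fit in Python with the semopy library; the file name and the column names (matching the construct abbreviations used in Table 2) are assumptions, not the authors' code.

```python
# Hypothetical sketch, not the authors' code: fitting the Fig. 1
# structural model with semopy. Column names follow the construct
# abbreviations in Table 2; the CSV file name is a placeholder.
import pandas as pd
from semopy import Model

# Leadership (LEA) drives the four system categories (H2a-H2d);
# the system categories drive the two results categories (H3a-H6b).
MODEL_DESC = """
OPR ~ LEA
INF ~ LEA
HRM ~ LEA
STR ~ LEA
CFS ~ OPR + INF + HRM + STR
RES ~ OPR + INF + HRM + STR
"""

df = pd.read_csv("baldrige_scores.csv")  # placeholder file of normalized scores
model = Model(MODEL_DESC)
model.fit(df)                 # maximum likelihood estimation by default
print(model.inspect())        # path estimates, standard errors, p-values
```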
5.1. Sample
The sample for this study consists of all healthcare organizations that applied for the Baldrige award between 1999 and 2006. We removed forty-four records for healthcare organizations that applied for the 1995 award, because that was the pilot program. In total, the publicly available data for MBNQA applicants in the healthcare category from 1999 to 2006 yield 161 observations for healthcare organizations, with the following sample sizes for each year: N1999 = 9, N2000 = 8, N2001 = 8, N2002 = 17, N2003 = 19, N2004 = 22, N2005 = 33, and N2006 = 45.
Table 2 provides descriptive statistics (mean and standard deviation) for the healthcare organizations for each year. Because the Baldrige model assigns weights to each construct, the data was normalized by dividing each independent examiner's score by the maximum score in order to provide consistency across the constructs. While the total available points for the Baldrige assessment is 1000, the distribution of the points does not weight the seven categories equally. For example, the Leadership category has 120 points, while Strategic Planning has 80 points. The normalization process gives each construct the same range of measurement and provides consistency to the data: we divide each observation by the total points allocated to the given construct by the Baldrige assessment (e.g., dividing the Leadership score by 120, or the Strategic Planning score by 80, to obtain the normalized values). Thus, with this normalization process, each construct value has a range between zero and one. To examine the normality of the data, we calculated statistics for skewness (asymmetry) and kurtosis (peakedness). Values for asymmetry and kurtosis between −1.5 and +1.5 are considered acceptable for establishing normal univariate distribution (Tabachnick and Fidell, 2013). For the Baldrige data, the statistics for skewness and kurtosis range between −0.715 and 0.605, suggesting that the requirement of normality is met.
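As an illustration, the normalization and normality screening described above can be expressed in a few lines. Only the two point allocations cited in the text are shown; the input file and column names are assumptions.

```python
# Minimal sketch of the normalization and normality screening described
# above. Only the two point allocations cited in the text (Leadership =
# 120, Strategic Planning = 80) are shown; file and columns are assumed.
import pandas as pd
from scipy.stats import skew, kurtosis

MAX_POINTS = {"LEA": 120, "STR": 80}  # extend with the remaining categories

raw = pd.read_csv("examiner_scores.csv")      # hypothetical raw examiner scores
normalized = raw.copy()
for col, max_pts in MAX_POINTS.items():
    normalized[col] = raw[col] / max_pts      # each value now lies in [0, 1]
    # scipy's kurtosis is excess kurtosis by default (normal = 0)
    print(col, skew(normalized[col]), kurtosis(normalized[col]))
```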
Some preliminary insights can be obtained from these statistics. As the descriptive statistics show, we see some improvement in the mean value for several constructs over time, which could be attributed to the widespread application of quality systems across healthcare organizations and the attention given by healthcare organizations to quality improvement (Meyer and Collier, 2001). For example, the average score for Leadership increased over seven years from 0.46 in 1999 to 0.53 in 2006. The same patterns are observed for the other key constructs of the Baldrige model: Strategic planning (from 0.37 to 0.49), Customer focus and satisfaction (from 0.40 to 0.52), Information and analysis (from 0.41 to 0.54), Human resource development and management (from 0.43 to 0.53), Process management (from 0.38 to 0.53), and Quality and operational results (from 0.34 to 0.43).
Fig. 2 provides a longitudinal overview of the change in the value of the seven constructs of quality management in the Baldrige model. As shown, all values improved from their initial assessment in 1999 through 2006. This suggests that over time, healthcare organizations were able to improve their quality management practices, as evidenced by the independent reviewers' scores.
Table 3 provides the mean, standard deviation, and correlations for the entire sample of 161 observations. As shown, the overall averages for the quality management practices are between 0.42 and 0.51 (out of a maximum score of 1.00 for each construct), which suggests significant gaps in quality management implementation by healthcare organizations. Another observation is the significant correlations among quality management practices, which further supports the Baldrige guidelines with respect to the interrelationships among variables in the Baldrige model.
5.2. Measurement model: validation and assessment
We begin our analysis with an assessment of the validity of the Baldrige model for the healthcare industry (H1). Construct validity measures the correspondence between a concept and the set of items used to measure the construct (Churchill, 1979; Brahma, 2009). This
Table 2
Descriptive statistics: mean (standard deviation) of normalized construct scores, by application year.

Construct | 1999 (N=9) | 2000 (N=8) | 2001 (N=8) | 2002 (N=17) | 2003 (N=19) | 2004 (N=22) | 2005 (N=33) | 2006 (N=45)
Leadership (LEA) | .46 (.10) | .45 (.14) | .44 (.09) | .49 (.10) | .48 (.09) | .52 (.10) | .54 (.10) | .53 (.09)
Strategic Planning (STR) | .37 (.11) | .42 (.13) | .39 (.11) | .43 (.11) | .42 (.10) | .48 (.09) | .49 (.12) | .49 (.11)
Customer Focus and Satisfaction (CFS) | .40 (.08) | .44 (.12) | .48 (.10) | .48 (.10) | .45 (.10) | .52 (.09) | .54 (.09) | .52 (.11)
Information and Analysis (INF) | .41 (.11) | .43 (.19) | .45 (.11) | .45 (.12) | .48 (.09) | .51 (.08) | .54 (.10) | .54 (.09)
Human Resource Development and Management (HRM) | .43 (.07) | .43 (.10) | .46 (.10) | .41 (.09) | .46 (.08) | .50 (.08) | .54 (.08) | .53 (.08)
Process Management (OPR) | .38 (.09) | .41 (.13) | .38 (.07) | .43 (.12) | .45 (.10) | .50 (.10) | .55 (.11) | .53 (.09)
Quality and Operational Results (RES) | .34 (.10) | .32 (.13) | .40 (.11) | .41 (.13) | .38 (.12) | .42 (.13) | .42 (.11) | .43 (.11)
process starts with the assessment of content validity (O'Leary-Kelly and Vokurka, 1998). Content validity refers to the extent to which a measure represents all aspects of a given concept (Nunnally and Bernstein, 1994; Rossiter, 2008). One approach to ensuring content validity is to review the literature and use experts' opinions on a given construct (Churchill, 1979; Kerlinger, 1986). The Baldrige model satisfies the requirements of content validity because 1) it has been developed based on the principles and theories of quality management, and 2) it has been reviewed by scholars and practitioners in quality management. This procedure for assessing content validity was used in previous studies on quality management (Dow et al., 1999). In addition, the Baldrige healthcare criteria are specifically developed to measure quality practices in healthcare organizations; thus, we conclude that the Baldrige healthcare criteria meet the requirements of content validity.
5.3. Confirmatory factor analysis for the model
Hair et al. (2009) pointed to the importance of conducting confirmatory factor analysis (CFA) for the full measurement model. We used confirmatory factor analysis for the full model, using a variety of goodness-of-fit statistics to determine the overall fit of the model (χ²/df = 1.66, RMSEA = 0.06, CFI = 0.97). All fit indices are within the recommended range, an indication of an acceptable measurement model (Kaynak, 2003; Hu and Bentler, 1999). Therefore, there is enough empirical evidence to accept the hypothesis that the Baldrige model is theoretically robust in terms of measuring quality practices in the healthcare industry, providing support for the first hypothesis (H1).
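Continuing the hypothetical semopy sketch from Section 5, the fit indices reported above can be read off semopy's fit-statistics helper; this is an illustration of the kind of output involved, not the authors' procedure.

```python
# Continuing the earlier hypothetical sketch: calc_stats returns a
# one-row table of fit indices for a fitted semopy Model, from which
# the chi-square/df ratio, RMSEA, and CFI can be read off.
from semopy import calc_stats

stats = calc_stats(model)  # `model` is the fitted Model from the sketch above
print(stats["chi2"].values[0] / stats["DoF"].values[0])   # chi-square / df
print(stats["RMSEA"].values[0], stats["CFI"].values[0])   # RMSEA, CFI
```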
An examination of the standardized loadings shows that they are all significant, providing initial evidence of convergent validity (Table 4). Next, reliability values (Cronbach's alpha) for the constructs were calculated. A reliability value of 0.7 or higher is acceptable for survey research (Nunnally and Bernstein, 1994; Hair et al., 2009). All reliability measures are within the acceptable range. To assess convergent validity, the average variance extracted (AVE) for the constructs was calculated; the values are presented in Table 4. The AVE is the mean variance extracted for the item loadings on a construct and is used as an indicator of convergence (Fornell and Larcker, 1981; Carlson and Herdman, 2012; Agarwal, 2013). A value of 0.5 or higher is an indication of good convergence (Hair et al., 2009). All constructs have AVEs above the recommended threshold of 0.5. To establish discriminant validity, the AVE values for any two constructs were compared with the square of the correlation estimate between the two constructs (Fornell and Larcker, 1981; Henseler et al., 2015). If the AVE values are greater than the squared correlation estimate between the constructs, discriminant validity of the two constructs is supported. An examination of the AVE values and the correlation estimates suggests the existence of discriminant validity (Harris, 2004). A review of the correlations shows that they are statistically significant. Therefore, the underlying assumption of the Baldrige model that "everything is related to everything else" appears to be valid in the healthcare industry.
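For reference, the convergent- and discriminant-validity checks described here follow the standard Fornell–Larcker formulas; a sketch with standardized loadings:

```latex
% AVE for construct j with k standardized indicator loadings \lambda_i
% (Fornell and Larcker, 1981):
\mathrm{AVE}_j = \frac{\sum_{i=1}^{k} \lambda_i^{2}}{k}
% Fornell-Larcker discriminant validity for constructs i and j:
\min\left(\mathrm{AVE}_i, \mathrm{AVE}_j\right) > r_{ij}^{2}
```

For example, the Leadership AVE in Table 4 is consistent with this formula: (.89² + .83²)/2 ≈ .74.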
5.4. Structural model and testing the hypotheses
Control Variables: We use two control variables in this study. The first control variable is the industry (healthcare organizations). The second control variable is the application year. To control for the application year, we construct a vector of seven dummy variables (Y2000 through Y2006), each representing an application year, using 1999 as the reference year.
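As an illustration, the year dummies described above can be built in one line with pandas; the column name `year` in the applicant data is an assumption.

```python
# Minimal sketch of the year controls described above: seven 0/1 dummies
# for 2000-2006, with 1999 as the omitted reference year. The `year`
# column name and input file are assumptions.
import pandas as pd

df = pd.read_csv("baldrige_scores.csv")               # hypothetical file
year_dummies = pd.get_dummies(df["year"], prefix="Y")
year_dummies = year_dummies.drop(columns=["Y_1999"])  # reference year
df = pd.concat([df, year_dummies], axis=1)
```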
[Fig. 2. Change in quality assessment in healthcare organizations 1999–2006. Line chart of mean normalized scores (0 to 0.6) by application year for Leadership (LEA), Strategic Planning (STR), Customer Focus and Satisfaction (CFS), Information and Analysis (INF), Human Resource Development and Management (HRM), Process Management (OPR), and Quality and Operational Results (RES).]
Table 3
Means, standard deviations, and correlations (N = 161).

Construct | Mean | S.D. | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Leadership | .51 | .09 | 1.00 | | | | | |
2. Strategic planning | .46 | .11 | .821*** | 1.00 | | | | |
3. Customer focus and satisfaction | .50 | .09 | .774*** | .776*** | 1.00 | | | |
4. Information and analysis | .52 | .10 | .764*** | .762*** | .733*** | 1.00 | | |
5. Human resource development and management | .50 | .09 | .729*** | .705*** | .725*** | .720*** | 1.00 | |
6. Process management | .49 | .11 | .745*** | .767*** | .694*** | .740*** | .721*** | 1.00 |
7. Quality and operational results | .42 | .12 | .722*** | .682*** | .643*** | .698*** | .654*** | .639*** | 1.00

***p < .01.
Statistical Procedure: To determine the relationship between quality practices and quality results, the structural model proposed in Fig. 1 is examined using structural equation modeling (SEM) with the maximum likelihood procedure.
Table 5 presents the estimate for each path (standardized regression coefficient) and the corresponding p-values. Consistent with our hypotheses (H2a–H2d), we find support for a significant relationship between Leadership and Management of process quality (β = 0.825, p < .01), Leadership and Information and analysis (β = 0.935, p < .01), Leadership and Human resource development (β = 0.788, p < .01), and Leadership and Strategic planning (β = 0.914, p < .01). We also find support for the relationships between Information and analysis and Quality results (H4b: β = 0.567, p < .10), Human resource management and Customer focus and satisfaction (H5a: β = 0.354, p < .05), Human resource management and Quality results (H5b: β = 0.285, p < .10), and Strategic planning and Customer focus and satisfaction (H6a: β = 0.344, p < .05). The proposed structural model explains 74% of the variability in Quality results and 87% of the variability in Customer focus and satisfaction. We discuss these findings, as well as the implications for theory and practice, in Section 6.
5.5. Robustness tests
Non-normality. While the assumption of normality is required for regression analysis, non-normality of the data does not affect the consistency of the parameter estimates in SEM (Bollen, 1989; Sharma et al., 1989; Lei and Lomax, 2005). Nevertheless, our assessment of normality indicated that the assumptions of normality of the data are met.
Heteroscedasticity. Heteroscedasticity refers to the case where the variance of the regression disturbances is not constant across observations, leading to inefficient estimates and biased standard errors (Greene, 2012). To address the potential bias associated with heteroscedasticity (inequality of the variance of the error term), we plotted the standardized residuals against the standardized predicted values. We did not find any evidence of heteroscedasticity in our data.
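As an illustration of this diagnostic, the residual-versus-predicted plot can be produced for one structural equation; `df` and its columns follow the earlier hypothetical sketches and are assumptions.

```python
# Hedged sketch of the residual diagnostic described above, applied to
# one structural equation (RES on the four system categories); `df` and
# its column names follow the earlier sketches and are assumptions.
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy.stats import zscore

X = sm.add_constant(df[["OPR", "INF", "HRM", "STR"]])
fit = sm.OLS(df["RES"], X).fit()

plt.scatter(zscore(fit.fittedvalues), zscore(fit.resid), s=12)
plt.axhline(0, linewidth=1)
plt.xlabel("Standardized predicted values")
plt.ylabel("Standardized residuals")
plt.show()   # a funnel or fan shape would indicate heteroscedasticity
```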
Multicollinearity. To ensure that the results are not sensitive to correlation among the variables, we examined multicollinearity using regression analysis. All the VIF values generated from the regression analysis are well below the recommended threshold, which indicates that multicollinearity is not a major concern in this study (Belsley et al., 1980; Hair et al., 2009).
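A sketch of this check with statsmodels (column names again follow the earlier hypothetical sketches):

```python
# Hedged sketch of the VIF check: one VIF per predictor column via
# statsmodels. Common rules of thumb flag VIF values above 5 or 10
# (VIF is always >= 1); column names are assumptions.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[["LEA", "STR", "INF", "HRM", "OPR"]])
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
```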
Comparisons with alternative SEM models. To evaluate the fit of our base model, we compared the base model's fit with that of alternative SEM models, all with freely estimated paths, so that the results could be compared. In the first alternative model, we added two more direct paths that have been suggested in prior studies in quality management: from Leadership to Customer focus and satisfaction and from Leadership to Quality results (Meyer and Collier, 2001). We wanted to see whether there is a direct effect of Leadership on Customer focus and satisfaction and on Quality results. Neither the path from Leadership to Customer focus nor the path from Leadership to Quality results shows a significant relationship. In addition, the R² value does not change as a result of adding the two paths, an indication of a less powerful model fit. Also, the standardized regression coefficient from Leadership to Quality results was negative, providing strong evidence of model misspecification. In a second alternative model (Alternative B), we added a direct path from Strategic planning to Information and analysis; this path is not significant. Thus, using competing models, we were able to demonstrate that the proposed structural model best explains the conceptual model.
Table 4
Properties of the measurement model.

Construct | α | Items (λ) | AVE
Leadership (LEA) | .85 | q11 (.89), q12 (.83) | .74
Strategic Planning (SP) | .91 | q21 (.94), q22 (.89) | .84
Customer Focus and Satisfaction (CFS) | .87 | q31 (.90), q32 (.86) | .77
Information and Analysis (INF) | .83 | q41 (.85), q42 (.84) | .71
Human Resource Development and Management (HRD) | .88 | q51 (.88), q52 (.84), q53 (.82) | .72
Process Management (OPR) | .84 | q61 (.88), q62 (.84) | .74
Quality and Operational Results (RES) | .89 | q71 (.85), q72 (.81), q73 (.83), q74 (.87) | .76
Table 5
Standardized regression coefficients (columns are the dependent variables).

Independent variables | CFS | RES | STR | INF | HRM | OPR
Controls:
Y2000 | .064 | −.049 | .075 | .004 | −.023 | .089
Y2001 | .158 | .140 | .080 | .011 | .070 | −.002
Y2002 | .256 | .193 | .087 | −.089 | −.170** | .098
Y2003 | .116 | .060 | .055 | .021 | .003 | .121
Y2004 | .133 | −.001 | .124 | .065 | .108 | .186**
Y2005 | .212 | −.077 | .148 | .053 | .199* | .369***
Y2006 | .160 | −.109 | .186 | .107 | .231** | .386***
Predictors:
Leadership | n.s. | n.s. | .914*** | .935*** | .788*** | .825***
Strategic quality planning | .344* | −.106 | | | |
Information and analysis | .457 | .567* | | | |
HR Development | .354** | .285* | | | |
Process Management | −.191 | .217 | | | |

*p < .10; **p < .05; ***p < .01; n.s. = hypothesis not stated.
6. Discussion
This study presents the first empirical assessment of the Baldrige model in the healthcare industry using the Baldrige data (independent reviewers' scores). It addresses two major gaps in the quality management literature. First, in contrast to previous studies where self-reported data were used, this study examines independent reviewers' scores on the Baldrige criteria, which provide a higher level of reliability and validity in terms of assessment, perceptions, and reporting. Second, this study assesses the impact of quality practices on quality results in healthcare organizations over time.
6.1. Theoretical contributions
This research makes several contributions to operations management, quality management, and healthcare quality. First, the study makes a strong case that the Baldrige model is theoretically sound and robust, and that over time it has maintained a high level of measurement capability to address quality management in the healthcare industry. To the best of our knowledge, this is the first empirical analysis of the Baldrige model using the Baldrige data. One important implication of this finding is that healthcare organizations can use the Baldrige model as a self-assessment tool to improve their quality results.
Second, our work builds on prior MBNQA research by explicitly studying applicants' quality scores using independent reviewers' scores. This complements prior research that examined the impact of the Baldrige model in improving quality practices in the healthcare industry using a cross-sectional survey (Meyer and Collier, 2001). In that respect, we provide a more nuanced assessment of the linkages between quality practices and quality results by incorporating into our research design and methodology two important factors that have not been addressed in prior studies: 1) Baldrige assessment data (independent reviewers' scores), which are a more objective, reliable, and valid source of data, and 2) quality performance data that span seven years, providing a more rigorous assessment of the relationship between quality practices and quality results in healthcare organizations.
A third way our manuscript contributes to theory is by improving our understanding of how healthcare organizations can improve their quality results using the Baldrige model. Consistent with previous studies of quality management implementation in the healthcare industry (Meyer and Collier, 2001; Mosadeghrad, 2015), we found that Leadership is the main driver of quality, with a significant impact on all other quality practices. There are two major findings from this research regarding the role of Leadership in the Baldrige Health Care causal model. First, Leadership has a direct causal influence on each of the components of the Baldrige System, including Information and analysis, Strategic planning, Human resource development and management, and Process management. Improvement in Leadership has a positive and direct impact on each of the Baldrige System categories. This result confirms the Baldrige theory that Leadership drives the system, and supports the findings of Batalden and Stoltz (1993) and Meyer and Collier (2001), where strong support of and commitment to quality from the senior administration in healthcare organizations is the key to quality improvement. The second research finding for healthcare organizations is the significance of the causal relationship from Leadership to Information and analysis. The point estimate of the influence of Leadership on Information and analysis (β = 0.935, p < .01) is the largest among the Baldrige criteria and represents leadership's strongest influence in comparison to the other system categories (Leadership → Process management: β = 0.825; Leadership → Human resource management: β = 0.788; Leadership → Strategic planning: β = 0.914). This suggests that in quality-driven healthcare organizations, leaders recognize the critical role of Information analysis and knowledge management, and the
importance of data-driven decision making.

Our fourth theoretical contribution relates to the significant impact of Information and analysis on Quality results in healthcare organizations. First, Information and analysis has a significant effect on Quality results (β = 0.567, p < .10), supporting the finding of Meyer and Collier (2001) and the Baldrige theory that "an effective health care system needs to be built upon a framework of measurement, information, data, and analysis" (National Institute of Standards and Technology (NIST), 1995). This suggests that healthcare organizations can improve their quality results by developing effective information systems, which enable them to make informed decisions to support performance outcomes. Second, Information and analysis has a direct impact on Customer focus and satisfaction, indicating that effective use of measurement, information, and data can contribute to the performance of healthcare organizations if healthcare organizations use information and data in their decision-making process. Our findings provide empirical support for the argument put forward by some OM scholars that coordination (information exchange relationships) among providers in healthcare delivery is necessary to achieve desirable patient outcomes (Boyer and Pronovost, 2010; Queenan et al., 2011). Thus, if quality improvement is a strategic concern (Fundin et al., 2018), healthcare organizations need to make proper investments in their information systems and knowledge management.
Our fifth contribution is the importance and centrality of Human resource development in improving Customer focus and satisfaction and Quality results in healthcare organizations (Gowen et al., 2006a). In that regard, our empirical findings support the existing anecdotal evidence in the literature that discusses how lack of attention to human resource management can lead to inefficient and poor-quality outcomes in healthcare organizations (Huq and Martin, 2000; Francois et al., 2003; Alexander et al., 2006; Withanachchi et al., 2007; Ozturk and Swiss, 2008). We showed that within the Baldrige model, organizational attention and investment in human resource management improves patient satisfaction with the quality of care.
Finally, our last contribution relates to the significant relationship between Strategic planning for quality and Customer focus and satisfaction in healthcare organizations. Earlier studies discuss a lack of attention in healthcare organizations to strategic planning for quality, and the pursuit of a "middle of the road" approach to avoid risks (Gibson et al., 1990; Calem and Rizzo, 1995). The complexity of the healthcare system, along with its highly departmentalized structure, has contributed to the ineffectiveness of many quality management programs and their poor implementation (Jabnoun, 2005; Naveh and Stern, 2005). Taking into account the hierarchical structure, departmentalized setting, and authoritative nature of healthcare organizations, strategic planning and implementation of quality is difficult to achieve (Francois et al., 2003; McNulty and Ferlie, 2002; Abd-Manaf, 2005). Nevertheless, as our empirical analysis suggests, Strategic planning for quality has a significant impact on Customer focus and satisfaction (β = 0.344, p < .05), an indication of its positive effect.
Surprisingly, we were not able to find a significant link between Process management and Quality and operational results in healthcare organizations. While this may be counterintuitive, there are two explanations. First, due to the heterogeneity of the customers in the healthcare industry, improvement in Quality results is best achieved through implementing customized healthcare delivery programs. Such programs require healthcare professionals to tailor their care services to the specific needs and conditions of the patients, adding complexity to healthcare delivery due to the customized nature of the service delivery (Sofaer and Firminger, 2005). Second, empirical studies suggest a trade-off between efficiency and service quality in the operations management literature (Pinker et al., 2000; Sampson and Froehle, 2006; Campbell and Frei, 2010; Xia and Zhang, 2010), and more particularly in healthcare organizations (Mennicken et al., 2011). This is empirically demonstrated by the negative standardized
path coefficient between Process management and Customer focus and satisfaction (β = −0.191). Thus, process management and efficiency improvements may not necessarily improve quality results, as a result of the customization of processes in healthcare organizations.
6.2. Implications for managers
This study provides several insights for managers and decision makers who are responsible for quality programs in healthcare organizations. First, quality managers can use the Baldrige model to reorganize, restructure, and streamline their quality improvement programs. If healthcare organizations are committed to improving their quality outcomes, the Baldrige model provides a robust and comprehensive assessment of quality systems in healthcare organizations. Second, healthcare organizations should recognize the importance of information systems, the availability of timely and accurate data, and the significance of decision making as related to healthcare operations and processes. We showed that Information and analysis has the strongest impact on Quality results in the healthcare industry. This provides evidence of the importance of timely and accurate information exchange to ensure proper decision making, as well as involvement of patients in the process, as evidenced by the shift in the healthcare industry from provider-centered care to patient-centered care (Vogus and McClelland, 2016). Third, healthcare organizations should also recognize the importance of human resource management in improving Customer focus and satisfaction and Quality results. With the understanding that healthcare systems, both technologically and administratively, are among the most complex systems in terms of service delivery, healthcare managers should develop human resource management practices to overcome challenges, such as a culture of professional dominance, that limit the ability of healthcare organizations to design and implement comprehensive human resource management practices (Mosadeghrad, 2015; Kimberly and Minvielle, 2000).
7. Limitations and future research
This study has several limitations that should be addressed. Availability of a larger sample of healthcare organizations could improve the model fit indices and provide more stable parameter estimates. It is important to recognize that recommendations for sample size depend on several criteria, such as the complexity of the model, the distribution of the data, and construct reliability. We should also keep in mind that independent examiners' scores were used in this study; examiners follow specific guidelines and procedures in assigning scores for each item. In that respect, the data have a very high level of reliability and consistency, particularly because they represent the evaluation of experts. In addition, a high level of construct validity and normality of the data, along with high construct reliability measures, supports the validity of the findings (Hair et al., 2009). In our data, the constructs exhibit high reliability (ranging from 0.83 to 0.91), and the assumption of normality is met. One rule of thumb is that the ratio of the number of observations to the number of parameters should be at least five to one (Russell et al., 1998), which our dataset meets. Thus, the results of this study are generalizable to a larger sample.
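As a quick worked illustration of the five-to-one rule of thumb cited above (Russell et al., 1998), the check is simple arithmetic; the counts below are placeholders rather than the study's actual values.

    # Observations-to-parameters rule of thumb (at least 5:1; Russell et al., 1998).
    # Placeholder counts, not the study's actual values.
    def meets_rule_of_thumb(n_observations, n_parameters, min_ratio=5.0):
        return n_observations / n_parameters >= min_ratio

    print(meets_rule_of_thumb(150, 30))  # True: 150/30 = 5.0, exactly at the threshold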
Another limitation of this study is the lack of access to the most recent data. The data for the Baldrige model for healthcare organizations are available from 1999 to 2006. It would be very helpful for academic researchers and practitioners to gain access to more recent Baldrige data; unfortunately, these data are not publicly available, and access is limited to data through 2006. In addition, the availability and inclusion of other variables in healthcare organizations, such as the type of healthcare organization, organizational size, and annual revenue, would provide valuable information on the effect of organizational and contextual variables on healthcare quality. In that regard, one possible research study would be to assess quality management practices at healthcare organizations that won the Baldrige award using a case study approach. Such a study would provide an in-depth understanding of how healthcare organizations were able to achieve superior quality results through pursuing the Baldrige model.
Care should be taken in generalizing the results of this study. While some argue that service quality in the healthcare industry addresses issues relevant to healthcare delivery (complexity, co-production, and intangibility) that have been shown to generalize to other service contexts (Subramony, 2009; Gittell et al., 2010), we are also mindful of the distinctive nature of service quality in healthcare organizations. We expect that the findings of this study are generalizable to industries with service expectations similar to those of the healthcare industry, where there is a significant knowledge gap and information asymmetry between the service provider and the customer (e.g., auto repair, consulting, or law firms).
8. Conclusion
For over three decades, the Baldrige model has been used as a framework for quality management, with the expectation of enhancing quality initiatives in organizations through implementation of practices that could improve quality in a systematic way. Our understanding of the relevance of the Baldrige criteria in the healthcare industry has been limited by the lack of data. Our objective in this paper was to provide more insight into the relevance of the Baldrige model in healthcare organizations and to assess how healthcare organizations can improve customer satisfaction and quality results. We are also cautious that our results should be viewed with respect to the data that are publicly available, which limited a more meaningful interpretation of the results. A more nuanced understanding of the relationships among the Baldrige criteria requires access to more detailed information about the organizations, which is not currently available.
Using the Baldrige reviewers' scores, this study addressed some of the key questions on the effectiveness of the Baldrige model and its impact on organizational quality in the healthcare industry. We showed that the Baldrige model is a valid and reliable model for healthcare organizations. Organizations can benefit from implementing the Baldrige model, especially when they have the commitment and support of their top management. Our empirical analysis showed that leadership has a significant impact on implementing quality practices in the Baldrige model, including strategic planning for quality, information and analysis, human resource development, and management of process quality. In addition, our empirical analysis shows that healthcare organizations can improve their quality of care through investment in information systems and human resource management. Healthcare organizations should recognize the importance of information systems, the availability of timely and accurate data, and the significance of decision making as related to healthcare operations and processes. Furthermore, healthcare organizations should also recognize the importance of human resource management in improving customer focus and satisfaction and in achieving quality results.
Appendix A. Baldrige Healthcare Assessment
A.1. Leadership (Category 1)
This category asks how senior leaders’ personal actions and your governance system guide and sustain your organization.
A.1.1. Senior Leadership
This item asks about the key aspects of your senior leaders' responsibilities, with the aim of creating an organization that is successful now and in the future.
A.1.2. Governance and Societal Responsibilities
This item asks about key aspects of your governance system, including the improvement of leaders and the leadership system. It also asks how the organization ensures that everyone in the organization behaves legally and ethically, how it fulfills its societal responsibilities, how it supports its key communities, and how it builds community health.
A.2. Strategy (Category 2)
This category asks how you develop strategic objectives and action plans, implement them, change them if circumstances require, and measure progress.
A.2.1. Strategy Development
This item asks how you establish a strategy to address your organization's challenges and leverage its advantages and how you make decisions about key work systems and core competencies. It also asks about your key strategic objectives and their related goals. The aim is to strengthen your overall performance, competitiveness, and future success.
A.2.2. Strategy Implementation
This item asks how you convert your strategic objectives into action plans to accomplish the objectives and how you assess progress relative to these action plans. The aim is to ensure that you deploy your strategies successfully and achieve your goals.
A.3. Customers (Category 3)
This category asks how you engage patients and other customers for long-term marketplace success, including how you listen to the voice of the customer, serve and exceed patients' and other customers' expectations, and build relationships with patients and other customers.
A.3.1. Voice of the Customer
This item asks about your processes for listening to your patients and other customers and determining their satisfaction and dissatisfaction. The aim is to capture meaningful information in order to exceed your patients' and other customers' expectations.
A.3.2. Customer Engagement
This item asks about your processes for determining and customizing health care service offerings that serve your patients, other customers, and markets; for enabling patients and other customers to seek information and support; and for identifying patient and other customer groups and market segments. The item also asks how you build relationships with your patients and other customers and manage complaints. The aim of these efforts is to improve marketing, build a more patient- and other customer-focused culture, and enhance patient and other customer loyalty.
A.4. Measurement, Analysis, and Knowledge Management (Category 4)
In the simplest terms, category 4 is the "brain center" for the alignment of your operations with your strategic objectives. It is the main point within the Health Care Criteria for all key information on effectively measuring, analyzing, and improving performance and managing organizational knowledge to drive improvement, innovation, and organizational competitiveness. Central to this use of data and information are their quality and availability. Furthermore, since information, analysis, and knowledge management might themselves be primary sources of competitive advantage and productivity growth, this category also includes such strategic considerations.
A.4.1. Measurement, Analysis, and Improvement of Organizational Performance
This item asks how you select and use data and information for performance measurement, analysis, and review in support of organizational planning and performance improvement. The item serves as a central collection and analysis point in an integrated performance measurement and management system that relies on clinical, financial, and other data and information. The aim of performance measurement, analysis, review, and improvement is to guide your process management toward the achievement of key organizational results and strategic objectives, anticipate and respond to rapid or unexpected organizational or external changes, and identify best practices to share.
A.4.2. Information and Knowledge Management
This item asks how you build and manage your organization's knowledge assets and ensure the quality and availability of data and information. The aim of this item is to improve organizational efficiency and effectiveness and stimulate innovation.
A.5. Workforce (Category 5)
This category addresses key workforce practices—those directed toward creating and maintaining a high-performance environment and toward engaging your workforce to enable it and your organization to adapt to change and succeed.
A.5.1. Workforce Environment
This item asks about your workforce capability and capacity needs, how you meet those needs to accomplish your organization's work, and how you ensure a supportive work climate. The aim is to build an effective environment for accomplishing your work and supporting your workforce.
A.5.2. Workforce Engagement
This item asks about your systems for managing workforce performance and developing your workforce members to enable and encourage all of them to contribute effectively and to the best of their ability. These systems are intended to foster high performance, to address your core competencies, and to help accomplish your action plans and ensure your organization's success now and in the future.
A.6. Operations (Category 6)
This category asks how you focus on your organization's work, the design and delivery of health care services, innovation, and operational effectiveness to achieve organizational success now and in the future.
A.6.1. Work Processes
This item asks about the management of your key health care services, your key work processes, and innovation, with the aim of creating value for your patients and other customers and achieving current and future organizational success.
A.6.2. Operational Effectiveness
This item asks how you ensure effective operations in order to have a safe workplace environment and deliver customer value. Effective operations frequently depend on controlling the overall costs of your operations and maintaining the reliability, security, and cybersecurity of your information systems.
A.7. Results (Category 7)
This category provides a systems focus that encompasses all results necessary to sustaining an enterprise: the key process and health care results, the patient- and other customer-focused results, the workforce results, the leadership and governance system results, and the overall financial and market performance.
A.7.1. Health Care and Process Results
This item asks about your key health care and operational performance results, which demonstrate health care outcomes, service quality, and value that lead to patient and other customer satisfaction and engagement.
A.7.2. Customer-Focused Results
This item asks about your patient- and other customer-focused performance results, which demonstrate how well you have been satisfying your patients and other customers and engaging them in loyalty-building relationships.
A.7.3. Workforce-Focused Results
This item asks about your workforce-focused performance results, which demonstrate how well you have been creating and maintaining a productive, caring, engaging, and learning environment for all members of your workforce.
A.7.4. Leadership and Governance Results
This item asks about your key results in the areas of senior leadership and governance, which demonstrate the extent to which your organization is fiscally sound, ethical, and socially responsible.
A.7.5. Financial and Market Results
This item asks about your key financial and market results, which demonstrate your financial sustainability and your marketplace achievements.
References
Abbott, A., 1991. The order of professionalization: an empirical analysis. Work Occup. 18 (4), 355–384.
Abbott, A., 1993. The sociology of work and occupations. Annu. Rev. Sociol. 19, 187–209.
Abd-Manaf, N.H., 2005. Quality management in Malaysian public health care. Int. J. Health Care Qual. Assur. 18 (3), 204–216.
Agarwal, V., 2013. Investigating the convergent validity of organizational trust. J. Commun. Manag. 17 (1), 24–39.
Alexander, J.A., Weiner, B.J., Griffith, J., 2006. Quality improvement and hospital financial performance. J. Organ. Behav. 27 (7), 1003–1029.
Araújo, M., Sampaio, P., 2014. The path to excellence of the Portuguese organizations recognized by the EFQM model. Total Qual. Manag. Bus. Excel. 25 (5/6), 427–438.
Bagozzi, R., Yi, Y., 2012. Specification, evaluation, and interpretation of structural equation models. J. Acad. Mark. Sci. 40 (1), 8–34.
Baldrige Award Recipient Information, 2015. National Institute of Standards and Technology. http://patapsco.nist.gov/Award_Recipients/index.cfm, Accessed date: 8 October 2018.
Bardhan, I.R., Thouin, M.F., 2013. Health information technology and its impact on the quality and cost of healthcare delivery. Decis. Support Syst. 55 (2), 438–449.
Batalden, P.B., Stoltz, P.K., 1993. A framework for the continual improvement of health care: building and applying professional and improvement knowledge to test changes in daily work. Joint Comm. J. Qual. Improv. 19 (10), 424–447.
Belsley, D.A., Kuh, E., Welsch, R.E., 1980. Regression Diagnostics: Identifying Influential Observations and Sources of Collinearity. John Wiley and Sons.
Bollen, K.A., 1989. Structural Equations with Latent Variables. Wiley-Interscience, New York, NY.
Bortolotti, T., Boscari, S., Danese, P., Medina, S., Hebert, A., Rich, N., 2018. The social benefits of kaizen initiatives in healthcare: an empirical study. Int. J. Oper. Prod. Manag. 38 (2), 554–578.
Bou-Llusar, J.C., Escrig-Tena, A.B., Roca-Puig, V., Beltrán-Martín, I., 2009. An empirical assessment of the EFQM Excellence Model: evaluation as a TQM framework relative to the MBNQA Model. J. Oper. Manag. 27 (1), 1–22.
Boyer, K., Pronovost, P., 2010. What medicine can't teach operations: what operations can teach medicine. J. Oper. Manag. 28 (5), 367–371.
Brahma, S.S., 2009. Assessment of construct validity in management research. J. Manag. Res. 9 (2), 59–71.
Bringelson, L.S., Basappa, L.S., 1998. TQM implementation strategies in hospitals: an empirical perspective. J. Soc. Health Syst. 5 (4), 50–62.
Byrne, B.M., 2009. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming, second ed. Routledge, New York, NY.
Calem, P.S., Rizzo, J.A., 1995. Competition and specialization in the hospital industry: an application of Hotelling's location model. South. Econ. J. 61 (4), 1182–1198.
Campbell, D., Frei, F.X., 2010. Cost structure, customer profitability, and retention implications of self-service distribution channels: evidence from customer behavior in an online banking channel. Manag. Sci. 56 (1), 4–24.
Carlson, K.D., Herdman, A.O., 2012. Understanding the impact of convergent validity on research results. Organ. Res. Methods 15 (1), 17–32.
Carman, J.M., Shortell, S.M., Foster, R.W., Hughes, E.F.X., Boerstler, H., O'Brien, J.L., O'Connor, E.J., 1996. Keys for successful implementation of total quality management in hospitals. Health Care Manag. Rev. 21 (1), 48–60.
Castka, P., 2018. Modelling firms' interventions in ISO 9001 certification: a configurational approach. Int. J. Prod. Econ. 201, 163–172.
Chan, Y.L., Ho, K., 1997. Continuous quality improvement: a survey of American and Canadian healthcare executives. Hosp. Health Serv. Adm. 42 (4), 525–544.
Chassin, M.R., 2002. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff. 21 (4), 40–51.
Chassin, M.R., Galvin, R.W., 1998. The urgent need to improve health care quality: Institute of Medicine national roundtable on health care quality. J. Am. Med. Assoc. 280 (11), 1000–1008.
Chattopadhyay, S.P., Szydlowski, S.J., 1999. TQM implementation for competitive advantage in healthcare delivery. Manag. Serv. Qual. 9 (2), 96–101.
Churchill Jr., G.A., 1979. A paradigm for developing better measures of marketing constructs. J. Mark. Res. 16 (1), 64–73.
Dempsey, C., McConville, E., Wojciechowski, S., Drain, M., 2014. Reducing patient suffering through compassionate connected care. J. Nurs. Adm. 44 (10), 517–524.
Dow, D., Samson, D., Ford, S., 1999. Exploding the myth: do all quality management practices contribute to superior quality performance? Prod. Oper. Manag. 8 (1), 1–27.
Eisenhardt, K.M., Graebner, M.E., 2007. Theory building from cases: opportunities and challenges. Acad. Manag. J. 50 (1), 25–32.
Ennis, K., Harrington, D., 1999. Quality management in Irish health care. Int. J. Health Care Qual. Assur. 12 (6), 232–243.
Epstein, A.M., Lee, T.H., Hamel, M.B., 2004. Paying physicians for high-quality care. N. Engl. J. Med. 350, 406–410.
Escrig, A.B., de Menezes, L.M., 2015. What characterizes leading companies within business excellence models? An analysis of "EFQM Recognized for Excellence" recipients in Spain. Int. J. Prod. Econ. 169, 362–375.
Evans, J.R., 2010. An exploratory analysis of preliminary blinded applicant scoring data from the Baldrige national quality program. Qual. Manag. J. 17 (3), 35–50.
Farjoun, M., Starbuck, W.H., 2007. Organizing at and beyond the limits. Organ. Stud. 28 (4), 541–566.
Flynn, B.B., Saladin, B., 2001. Further evidence on the validity of the theoretical models underlying the Baldrige criteria. J. Oper. Manag. 19 (3), 617–652.
Fornell, C., Larcker, D.F., 1981. Evaluating structural equations models with unobservable variables and measurement error. J. Mark. Res. 18 (1), 39–50.
Foster, D.A., Chenoweth, J., 2011. Comparison of Baldrige Applicants and Award Recipients with Peer Hospitals on a National Balanced Scorecard. Truven Health Analytics. http://www.nist.gov/baldrige/upload/baldrige-hospital-research-paper.pdf.
Francois, P., Peyrin, J.C., Touboul, M., 2003. Evaluating implementation of quality management system in a teaching hospital's clinical department. Int. J. Qual. Health Care 15 (1), 47–55.
Fundin, A., Bergquist, B., Eriksson, H., Gremyr, I., 2018. Challenges and propositions for research in quality management. Int. J. Prod. Econ. 199, 125–137.
Fung, C.H., Lim, Y.-W., Mattke, S., Damberg, C., Shekelle, P.G., 2008. Systematic review: the evidence that publishing patient care data improves quality of care. Ann. Intern. Med. 148 (2), 111–123.
Gibson, C.K., Newton, D.J., Cochran, D.S., 1990. An empirical investigation of the nature of hospital mission statements. Health Care Manag. Rev. 15 (3), 35–45.
Gilmartin, M.J., D'Aunno, T.A., 2007. Leadership research in healthcare: a review and roadmap. Acad. Manag. Ann. 1 (1), 387–438.
Gittell, J.H., Seidner, R., Wimbush, J., 2010. A relational model of how high-performance work systems work. Organ. Sci. 21 (2), 490–506.
Golin, C.E., DiMatteo, M.R., Gelberg, L., 1996. The role of patient participation in the doctor visit: implications for adherence to diabetes care. Diabetes Care 19 (10), 1153–1164.
Gowen III, C.R., McFadden, K.L., Tallon, W.J., 2006a. On the centrality of strategic human resource management for healthcare quality results and competitive advantage. J. Manag. Dev. 25 (8), 806–826.
Greene, W.H., 2012. Econometric Analysis, seventh ed. Pearson, New Jersey, NJ.
Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E., 2009. Multivariate Data Analysis, seventh ed. Pearson Education.
Halkjær, S., Lueg, R., 2017. The effect of specialization on operational performance: a mixed-methods natural experiment in Danish healthcare services. Int. J. Oper. Prod. Manag. 37 (7), 822–839.
Harris, R.D., 2004. Organizational task environments: an evaluation of convergent and discriminant validity. J. Manag. Stud. 41 (5), 857–882.
Hekman, D.R., Aquino, K., Owens, B.P., Mitchell, T.R., Schilpzand, P., Leavitt, K., 2010. An examination of whether and how racial and gender biases influence customer satisfaction. Acad. Manag. J. 53 (2), 238–264.
Henseler, J., Ringle, C., Sarstedt, M., 2015. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43 (1), 115–135.
Hu, L., Bentler, P.M., 1999. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model. 6 (1), 1–55.
Huq, Z., Martin, T.N., 2000. Workforce cultural factors in TQM/CQI implementation in hospitals. Health Care Manag. Rev. 25 (3), 80–93.
Institute of Medicine, 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press, Washington, D.C.
Jabnoun, N., 2005. Organizational structure for customer-oriented TQM: an empirical investigation. TQM Mag. 17 (3), 226–236.
Jackson, S., 2001. Successfully implementing total quality management tools within healthcare: what are the key actions? Int. J. Health Care Qual. Assur. 14 (4), 157–163.
Jencks, S.F., Huff, E.D., Cuerdon, T., 2003. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. J. Am. Med. Assoc. 289 (3), 305–312.
Jenkinson, C., Coulter, A., Bruster, S., 2002. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int. J. Qual. Health Care 14 (5), 353–358.
Jennings, K., Westfall, F., 1994. A survey-based benchmarking approach for health care using the Baldrige quality criteria. Joint Comm. J. Qual. Improv. 20 (9), 500–509.
Kaynak, H., 2003. The relationship between total quality management practices and their effects on firm performance. J. Oper. Manag. 21 (4), 405–435.
Kerlinger, F.N., 1986. Foundations of Behavioral Research. Holt, Rinehart and Winston, New York, NY.
Kimberly, J.R., Minvielle, E., 2000. The Quality Imperative: Measurement and Management of Quality in Health Care. Imperial College Press, London, UK.
Klein, D., Motwani, J., Cole, B., 1998. Continuous quality improvement, total quality management, and reengineering: one hospital's continuous quality improvement journey. Am. J. Med. Qual. 13 (3), 158–163.
Kohn, L.T., Corrigan, J.M., Donaldson, M.S., 2000. To Err Is Human: Building a Safer Health System. Institute of Medicine (US) Committee on Quality of Health Care in America, National Academies Press, Washington, D.C.
Lei, M., Lomax, R.G., 2005. The effect of varying degrees of nonnormality in structural equation modeling. Struct. Equ. Model. 12 (1), 1–27.
Lin, C.S., Su, C.T., 2013. The Taiwan national quality award and market value of the firms: an empirical study. Int. J. Prod. Econ. 144, 57–67.
Lindenauer, P.K., Remus, D., Roman, S., Rothberg, M.B., Benjamin, E.M., Ma, A., Bratzler, D.W., 2007. Public reporting and pay for performance in hospital quality improvement. N. Engl. J. Med. 356, 486–496.
Macinati, M.S., 2008. The relationship between quality management systems and organizational performance in the Italian National Health Service. Health Policy 85 (2), 228–241.
Marshall, M.N., Shekelle, P.G., Leatherman, S., Brook, R.H., 2000. The public release of performance data: what do we expect to gain? J. Am. Med. Assoc. 283 (14), 1866–1874.
Martín-Gaitero, J.P., Escrig-Tena, A.B., 2018. The relationship between EFQM levels of excellence and CSR development. Int. J. Qual. Reliab. Manag. 35 (6), 1158–1176.
Matthias, O., Brown, S., 2016. Implementing operations strategy through Lean processes within health care: the example of NHS in the UK. Int. J. Oper. Prod. Manag. 36 (11), 1435–1457.
McGlynn, E.A., Asch, S.M., Adams, J., Keesey, J., Hicks, J., DeCristofaro, A., Kerr, E.A., 2003. The quality of health care delivered to adults in the United States. N. Engl. J. Med. 348, 2635–2645.
McNulty, T., Ferlie, E., 2002. Reengineering Health Care: The Complexities of Organizational Transformation. Oxford University Press, Oxford, UK.
Mennicken, R., Kuntz, L., Schwierz, C., 2011. The trade-off between efficiency and quality in hospital departments. J. Health Organ. Manag. 25 (5), 564–577.
Meyer, S.M., Collier, D.A., 2001. An empirical test of the causal relationships in the Baldrige health care pilot criteria. J. Oper. Manag. 19 (4), 403–425.
Millenson, M.L., 2004. Pay for performance: the best worst choice. Qual. Saf. Health Care 13 (5), 323–324.
Mosadeghrad, A.M., 2015. Developing and validating a total quality management model for healthcare organizations. TQM J. 27 (5), 544–564.
Motwani, J.G., Cheng, C.H., Madan, M.S., 1996. Implementation of ISO 9000 in the healthcare sector: a case study. Health Market. Qual. 14 (2), 63–72.
Nair, A., Dreyfus, D., 2018. Technology alignment in the presence of regulatory changes: the case of meaningful use of information technology in healthcare. Int. J. Med. Inf. 110, 42–51.
Naveh, E., Marcus, A., 2004. When does the ISO 9000 quality assurance standard lead to performance improvement? Assimilation and going beyond. IEEE Trans. Eng. Manag. 51 (3), 352–363.
Naveh, E., Stern, Z., 2005. How quality improvement programs can affect general hospital performance. Int. J. Health Care Qual. Assur. 18 (4), 249–270.
Nunnally, J.C., Bernstein, I.H., 1994. Psychometric Theory, third ed. McGraw-Hill, New York, NY.
O'Leary-Kelly, S.W., Vokurka, R.J., 1998. The empirical assessment of construct validity. J. Oper. Manag. 16 (4), 387–405.
Ozturk, A.O., Swiss, J.E., 2008. Implementing management tools in Turkish public hospitals: the impact of culture, politics and role status. Publ. Adm. Dev. 28 (2), 138–148.
Pannirselvam, G.P., Ferguson, L.A., 2001. A study of the relationships between the Baldrige categories. Int. J. Qual. Reliab. Manag. 18 (1), 14–34.
Pannirselvam, G.P., Siferd, S.P., Ruch, W.A., 1998. Validation of the Arizona governor's quality award criteria: a test of the Baldrige. J. Oper. Manag. 16 (5), 529–550.
Parast, M., 2015. A longitudinal assessment of the linkages among the Baldrige criteria using independent reviewers' scores. Int. J. Prod. Econ. 164, 24–34.
Pinker, E.J., Shumsky, R.A., Simon, W.E., 2000. The efficiency-quality trade-off of cross-trained workers. Manuf. Serv. Oper. Manag. 2 (1), 32–48.
Power, D., Schoenherr, T., Samson, D., 2011. Assessing the effectiveness of quality management in a global context. IEEE Trans. Eng. Manag. 58 (2), 307–322.
Queenan, C.C., Angst, C.M., Devaraj, S., 2011. Doctors' orders–if they're electronic, do they improve patient satisfaction? A complements/substitutes perspective. J. Oper. Manag. 29 (7), 639–649.
Federal Register, 2011. Medicare Program; Hospital Inpatient Value-Based Purchasing Program; Final Rule. Government Printing Office, Washington, D.C.
Rossiter, J.R., 2008. Content validity of measures of abstract constructs in management and organizational research. Br. J. Manag. 19 (4), 380–388.
Russell, D.W., Kahn, J.H., Spoth, R., Altmaier, E.M., 1998. Analyzing data from experimental studies: a latent variable structural equation modeling approach. J. Couns. Psychol. 45 (1), 18–29.
Russell, R.S., Johnson, D.M., White, S.W., 2015. Patient perceptions of quality: analyzing patient satisfaction surveys. Int. J. Oper. Prod. Manag. 35 (8), 1158–1181.
Sabella, A., Kashou, R., Omran, O., 2014. Quality management practices and their relationship to organizational performance. Int. J. Oper. Prod. Manag. 34 (12), 1487–1505.
Sampson, S.E., Froehle, C.M., 2006. Foundations and implications of a proposed unified services theory. Prod. Oper. Manag. 15 (2), 329–343.
Savino, M.M., Batbaatar, E., 2015. Investigating the resources for integrated management systems within resource-based and contingency perspective in manufacturing firms. J. Clean. Prod. 104 (1), 392–402.
Schneider, B., Ehrhart, M.G., Mayer, D.M., Saltz, J.L., Niles-Jolly, K., 2005. Understanding organization-customer links in service settings. Acad. Manag. J. 48 (6), 1017–1032.
Sharma, S., Durvasula, S., Dillon, W.R., 1989. Some results on the behavior of alternate covariance structure estimation procedures in the presence of non-normal data. J. Mark. Res. 26 (2), 214–221.
Shortell, S.M., O'Brien, J.L., Carman, J.M., Foster, R.W., Hughes, E.F.X., Boerstler, H., O'Connor, E.J., 1995. Assessing the impact of continuous quality improvement/total quality management: concept versus implementation. Health Serv. Res. 30 (2), 377–401.
Silander, K., Torkki, P., Lillrank, P., Peltokorpi, A., Brax, S.A., 2017. Modularizing specialized hospital services: constraining characteristics, enabling activities and outcomes. Int. J. Oper. Prod. Manag. 37 (6), 791–818.
Sofaer, S., Firminger, K., 2005. Patient perceptions of the quality of health services. Annu. Rev. Public Health 26 (1), 513–559.
Subramony, M., 2009. A meta-analytic investigation of the relationship between HRM bundles and firm performance. Hum. Resour. Manag. 48 (5), 745–768.
Subramony, M., Pugh, S.D., 2015. Services management research: review, integration, and future directions. J. Manag. 41 (1), 349–373.
Tabachnick, B.G., Fidell, L.S., 2013. Using Multivariate Statistics, sixth ed. Pearson.
Um, K.H., Lau, A.K.W., 2018. Healthcare service failure: how dissatisfied patients respond to poor service quality. Int. J. Oper. Prod. Manag. 38 (5), 1245–1270.
Vogus, T.J., McClelland, L.E., 2016. When the customer is the patient: lessons from healthcare research on patient satisfaction and service quality ratings. Hum. Resour. Manag. Rev. 26 (1), 37–49.
Williams, S.C., Schmaltz, S.P., Morton, D.J., Koss, R.G., Loeb, J.M., 2005. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N. Engl. J. Med. 353, 255–264.
Wilson, D.D., Collier, D.A., 2000. An empirical investigation of the Malcolm Baldrige national quality award causal model. Decis. Sci. J. 31 (2), 361–390.
Withanachchi, N., Handa, Y., Karandagoda, K.K., 2007. TQM emphasizing 5-S principles: a breakthrough for chronic managerial constraints at public hospitals in developing countries. Int. J. Public Sect. Manag. 20 (3), 168–177.
Xia, Y., Zhang, G.P., 2010. The impact of the online channel on retailers' performances: an empirical evaluation. Decis. Sci. J. 41 (3), 517–554.
Zabada, C.P., Rivers, A., Munchus, G., 1998. Obstacles to the application of total quality management in health care organizations. Total Qual. Manag. 9 (1), 57–66.
HEALTH POLICY/ORIGINAL RESEARCH
The Effect of Utilization Review on Emergency Department Operations
Shoma Desai, MD*; Phillip F. Gruber, MD; Erick Eiting, MD, MMM; Seth A. Seabury, PhD; Wendy J. Mack, PhD; Christian Voyageur, BA; Veronica Vasquez, MD; Hyung T. Kim, MD; Sophie Terp, MD, MPH
*Corresponding Author. E-mail: [email protected].
Study objective: Increasingly, hospitals are using utilization review software to reduce hospital admissions in an effort to contain costs. Such practices have the potential to increase the number of unsafe discharges, particularly in public safety-net hospitals. Utilization review software tools are not well studied with regard to their effect on emergency department (ED) operations. We study the effect of prospectively used admission decision support on ED operations.
Methods: In 2012, Los Angeles County + University of Southern California Medical Center implemented prospective use of computerized admission criteria. After implementation, only ED patients meeting primary review (diagnosis-based criteria) or secondary review (medical necessity as determined by an on-site emergency physician) were assigned inpatient beds. Data were extracted from electronic medical records from September 2011 through December 2013. Outcomes included operational metrics, 30-day ED revisits, and 30-day admission rates. Excluding a 6-month implementation period, monthly summary metrics were compared pre- and postimplementation with nonparametric and negative binomial regression methods. All adult ED visits, excluding incarcerated and purely behavioral health visits, were analyzed. The primary outcomes were disposition rates. Secondary outcomes were 30-day ED revisits, 30-day admission rate among return visitors to the ED, and estimated cost.
Results: Analysis of 245,662 ED encounters was performed. The inpatient admission rate decreased from 14.2% to 12.8%. Increases in discharge rate (82.4% to 83.4%) and ED observation unit utilization (2.5% to 3.4%) were found. Thirty-day revisits increased (20.4% to 24.4%), although the 30-day admission rate decreased (3.2% to 2.8%). Estimated cost savings totaled $193.17 per ED visit.
Conclusion: The prospective application of utilization review software in the ED led to a decrease in the admission rate. This was tempered by a concomitant increase in ED observation unit utilization and 30-day ED revisits. Cost savings suggest that resources should be redirected to the more highly affected ED and ED observation unit, although more work is needed to confirm the generalizability of these findings. [Ann Emerg Med. 2017;70:623-631.]
INTRODUCTION

Since the Social Security Act of 1965, the Centers for Medicare & Medicaid Services (CMS) has issued retrospective payment denials for an increasing number of medical services deemed inappropriate.1 In 2013, CMS reported an improper payment rate of 8% for inpatient hospital services, with an estimated cost of approximately $9.4 billion.2 These errors in reimbursement account for nearly a quarter of the overall Medicare fee-for-service improper payment rate and are a focus of the national call to reduce excessive health care expenditures.1,2

As one solution to this crisis, utilization review has been increasingly used by hospitals, managed care organizations, and public and fee-for-service payers to ensure accuracy of care, time, place, and cost.3,4 To align with CMS recovery audit contractors by identifying inpatient stays that may subsequently be deemed inappropriate, hospital systems across the United States are incorporating commercial evidence-based admission decision support software into their daily operations. Such decision support tools are intended to reduce the number of denied days, minimize variations in care across hospital systems through standardized criteria, and improve transparency between health care providers and payers.5 They may be used in a prospective manner, such that medically unnecessary stays are avoided by screening before admission, or, more commonly, in a concurrent manner, such that inpatient admissions are reviewed daily to reduce denied days and optimize the level of care.
Editor’s Capsule Summary
What is already known on this topic
Reducing unnecessary admissions from the emergency department (ED) can help contain costs in resource-constrained hospital environments.

What question this study addressed
The authors analyzed the effect of prospective structured utilization review software on admissions in over 245,000 visits to a public hospital ED.

What this study adds to our knowledge
Review decreased ED inpatient admissions by an absolute 1.4%, whereas observation unit utilization and 30-day revisits increased by 0.9% and 4.0%, respectively. Estimated savings were $193 per ED visit.

How this is relevant to clinical practice
Utilization review appeared to decrease some disposition outcomes but increase others. More detailed understanding of costs versus benefits is needed.
Although utilization review is increasingly widespread, there is concern that it could potentially overreach and prioritize cost containment, possibly at the expense of patient outcomes.6 In 1989, the Institute of Medicine urged researchers to study the effect of use management on the delivery of patient care.7 Since then, few studies have been published on the effect of utilization review on cost containment, achieved through a reduction of admissions and denied days.8-11 As a result, relatively little is known about the influence of utilization review on the quality of patient care, patient safety, and operations. In this study, we examine the effect of prospective admission decision support software on patient disposition and 30-day ED revisits at a large, urban, safety-net hospital during a 22-month period.
MATERIALS AND METHODS

Los Angeles County + University of Southern California Medical Center is an urban, public, safety-net hospital with an emergency department (ED) patient volume of approximately 170,000 visits annually. In the fall of 2012, the center implemented admission decision support through InterQual (versions 2012 and 2012.2; McKesson, Newton, MA). Previously, the California Department of Health Care Services used a treatment authorization request process to perform 100% utilization review for Medicaid fee-for-service inpatient stays. In 2008 to 2010, the Department of Health Care Services piloted and expanded a program using standardized, evidence-based review criteria through decision support software.
This software package is linked in real time with the admissions process as follows: using the admission decision support tool, all requests for admission are screened by utilization review nurses located within the ED 24 hours a day. Admissions that meet decision support tool appropriateness criteria are allowed to proceed with bed assignment. Admissions that do not meet appropriateness criteria are referred to another on-site emergency physician to review for medical necessity. Cases deemed medically necessary on secondary review are then allowed to proceed with bed assignment. All patients being considered for admission were screened by utilization review nurses with this software regardless of insurance (eg, Medicare, Medicaid, uninsured). Because some private insurers (health maintenance organizations) do not require prospective admission review as a condition of reimbursement for inpatient stays, criteria were not applied for stabilized patients authorized for transfer to hospitals covered by private insurance. Patients who underwent and did not pass secondary review for medical necessity were either discharged or observed in an observation unit. Decision support criteria for observation level of care were not used.
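The two-stage screening sequence just described reduces to a simple decision flow. The sketch below renders that sequence in Python for clarity; it is not the InterQual product logic, and every name in it is hypothetical.

    # Minimal sketch of the two-stage admission screening flow described above.
    def disposition(meets_criteria, medically_necessary, suitable_for_observation):
        if meets_criteria:          # primary review: diagnosis-based criteria
            return "inpatient bed assignment"
        if medically_necessary:     # secondary review by an on-site emergency physician
            return "inpatient bed assignment"
        # failed both reviews: observe or discharge
        return "observation unit" if suitable_for_observation else "discharge"

    print(disposition(False, False, True))   # -> "observation unit"
    print(disposition(False, False, False))  # -> "discharge"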
This retrospective study included all adult patient (18 years and older) visits to the Los Angeles County þ University of Southern California Medical Center ED from September 2011 through December 2013. Incarcerated patients and behavioral health visits were excluded from this analysis. The study was approved by the University of Southern California Institutional Review Board.
Implementation of this utilization review process was initiated in the fall of 2012 in a stepwise fashion, starting with informal admission review and provider training. The formal review process, involving a "hard stop" on bed assignments pending approval, started in January 2013. To allow a clear comparison between pre- and postimplementation operations, patient visits in the 6-month rollout period (August 1, 2012, to January 31, 2013) were excluded. Figure 1 depicts the number of ED visits, exclusions by category, and final number of visits analyzed.
[Figure 1. Data analysis flowchart: of 406,936 ED visits (September 2011 to December 2013), 50,916 pediatric, 22,559 psychiatric, and 20,846 jail visits were excluded, along with 66,953 visits during the 6-month implementation period, leaving 245,662 visits analyzed.]

The following operational data, summarized by month, were abstracted from the ED Information System (Wellsoft, Somerset, NJ): patient demographics (monthly average age and sex distribution), initial Emergency Severity Index score, ED volume, average ED length of stay (defined as arrival to departure time), observation unit length of stay (defined as arrival to the ED to departure from the observation unit), 30-day ED revisit rate, and 30-day admission rate. The 30-day admission rate was defined as the admission rate for patients returning to the ED within 30 days. The 30-day revisit and admission rates were calculated for 2 groups: all patients visiting the ED and patients discharged from the ED during the first visit. Metrics from the Hospital Information System (QuadraMed Affinity, Reston, VA) included inpatient level of service (ie, unmonitored versus monitored), hospital admission rate, observation unit placement rate, transfer rate to other facilities, and hospital length of stay. "Unmonitored" beds were defined as inpatient ward beds, whereas "monitored" beds included telemetry, step-down, and ICU levels of care. Established in 2008 (before the study start date), the observation unit at this facility is split into surgical (11 beds) and medical (17 beds) areas. Both observation units are staffed by nurse practitioners or physician assistants (both under the supervision of attending physicians) and nurses, with social work services available daily. There were no significant changes to the capacity or staffing of observation units throughout the study period.
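The 30-day metrics defined above are straightforward to compute from visit-level records. One possible approach with pandas is sketched below; the data and column names are hypothetical, not the study's extraction code.

    # Sketch: 30-day revisit and 30-day admission rates from visit-level records,
    # with all ED visits as the denominator (as in the study's overall rates).
    import pandas as pd

    visits = pd.DataFrame({
        "patient_id": [1, 1, 2, 3, 3],
        "arrival": pd.to_datetime(
            ["2013-02-01", "2013-02-20", "2013-03-01", "2013-03-05", "2013-05-01"]),
        "admitted": [False, True, False, False, False],
    }).sort_values(["patient_id", "arrival"])

    next_visit = visits.groupby("patient_id").shift(-1)   # each patient's next visit
    days_to_return = (next_visit["arrival"] - visits["arrival"]).dt.days
    revisit_30d = days_to_return <= 30                    # NaN (no next visit) -> False
    admitted_on_return = revisit_30d & next_visit["admitted"].fillna(False).astype(bool)

    print(f"30-day revisit rate:   {revisit_30d.mean():.1%}")
    print(f"30-day admission rate: {admitted_on_return.mean():.1%}")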
To control for process changes that may have occurred during the study period unrelated to the implementation of utilization review, syncope and appendicitis were chosen as comparative conditions for analysis, using ED discharge International Classification of Diseases, Ninth Revision (ICD-9) codes. Syncope is a complex condition with multiple causes, ranging from benign to life threatening. The decision to admit patients with this condition to the hospital is subject to a substantial degree of practice variation. For example, an otherwise healthy 28-year-old woman presenting with syncope will often be discharged if diagnostic tests do not reveal any concerning findings. However, a 65-year-old woman with multiple comorbidities and unremarkable diagnostic test results may have a different disposition, depending on the ability to receive timely outpatient evaluation and on the emergency physician's clinical impression.
Thus, admission rates for syncope would be expected to be sensitive to standardized criteria. InterQual admission criteria (version 2014) for syncope of unknown cause require one or more of the following findings: structural or functional cardiac disease, ECG abnormalities, family history of sudden cardiac death, symptoms preceding syncope, and syncope during exertion or while supine. Also, management must involve both cardiac monitoring and syncope evaluation, which necessitates carotid artery massage (unless contraindicated), ECG, echocardiogram (unless recently performed), and orthostatic blood pressure measurement.12 Rates of admission for appendicitis, a condition for which patients are nearly universally admitted to the hospital, would not be expected to be influenced by admission criteria. Appendicitis therefore was chosen as a control condition.
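As described, the version 2014 syncope criteria amount to boolean logic: at least one qualifying finding plus both required management elements. A minimal sketch follows, with hypothetical field names rather than the product's actual data model.

    # Sketch of the InterQual (version 2014) syncope admission logic as
    # summarized in the text; all field names are hypothetical.
    QUALIFYING_FINDINGS = {
        "structural_or_functional_cardiac_disease", "ecg_abnormality",
        "family_history_of_sudden_cardiac_death", "symptoms_preceding_syncope",
        "syncope_during_exertion_or_while_supine",
    }
    REQUIRED_MANAGEMENT = {"cardiac_monitoring", "syncope_evaluation"}

    def meets_syncope_admission_criteria(findings, management):
        has_finding = bool(set(findings) & QUALIFYING_FINDINGS)  # one or more required
        return has_finding and REQUIRED_MANAGEMENT <= set(management)  # both required

    print(meets_syncope_admission_criteria(
        ["ecg_abnormality"], ["cardiac_monitoring", "syncope_evaluation"]))  # True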
Metrics were compared before (September 2011 to July 2012) and after (February 2013 to December 2013) the implementation of the decision support tool, with monthly summary metrics as units of analysis. A dichotomous variable for pre- and postdecision support implementation was the primary independent variable to test for differences in metrics. Comparisons between pre- and postimplementation periods on age and other continuous outcomes (such as ED metrics) used ANOVA and nonparametric comparisons. Outcome variables representing counts (hospital length of stay), rates, and proportions were tested for mean differences pre- and postdecision support implementation, using negative binomial regression because of Poisson overdispersion in many of the outcomes. In most cases, the denominator for rate variables was the monthly ED volume. Thirty-day revisit and admission rates were compared among all ED patients (using monthly ED volume as the denominator) and among patients discharged from the ED (using monthly ED discharges as the denominator).
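To illustrate the comparison just described, the sketch below fits a negative binomial model to monthly admission counts with statsmodels, using a pre/post indicator and monthly ED volume as the exposure. All numbers and variable names are made up for illustration; this is not the authors' analysis code.

    # Sketch: monthly admission counts vs. a postimplementation indicator,
    # negative binomial GLM with ED volume as exposure. Hypothetical data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    monthly = pd.DataFrame({
        "admissions": [1570, 1545, 1610, 1440, 1425, 1460],
        "ed_volume": [11000, 11100, 11050, 11300, 11250, 11400],
        "post": [0, 0, 0, 1, 1, 1],  # 0 = preimplementation, 1 = postimplementation
    })

    X = sm.add_constant(monthly[["post"]])
    model = sm.GLM(monthly["admissions"], X,
                   family=sm.families.NegativeBinomial(),
                   exposure=monthly["ed_volume"])
    result = model.fit()
    print("rate ratio, post vs. pre:", np.exp(result.params["post"]))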
Annals of Emergency Medicine 625
Utilization Review and Emergency Department Operations Desai et al
Table E1 (available online at http://www.annemergmed.com) presents a segmented regression for the interrupted time series used to estimate the effect of admission decision support in the face of ongoing secular trends in each of the metrics. Metric measures were regressed against time (month), adding an estimate for the acute change in level (change in the mean) from the end of the preimplementation period to the start of the postimplementation period. Both the immediate effects (changes in the level of the regression line) and the linear trends (slopes, estimated as monthly trends) in metrics were estimated and compared pre- and postimplementation. Results are reported as estimates (means and slopes) with 95% confidence intervals (CIs). Trends were graphically displayed by plotting monthly summary statistics over time (Figure 2).
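The segmented regression just described amounts to regressing each monthly metric on time, an indicator for the postimplementation period (the acute level change), and the months elapsed since implementation (the change in slope). A minimal sketch with made-up monthly rates:

    # Sketch: interrupted time series via segmented regression. Hypothetical data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({"rate": [14.3, 14.1, 14.0, 13.9, 12.9, 12.8, 12.7, 12.9]})
    df["month"] = range(len(df))                 # time, in months
    cutoff = 4                                   # hypothetical implementation month
    df["post"] = (df["month"] >= cutoff).astype(int)          # acute level change
    df["months_post"] = (df["month"] - cutoff).clip(lower=0)  # post-period slope change

    fit = smf.ols("rate ~ month + post + months_post", data=df).fit()
    print(fit.params)  # preexisting trend, level change, and change in trend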
[Figure 2. Trends in monthly admission, discharge, observation, and 30-day revisit rates. Monthly summary data are plotted over the course of the study period. The red lines delineate the 6-month implementation period (August 1, 2012, to January 31, 2013). Lowess curves are depicted.]

A simple, approximate cost analysis was performed, using the change in rates of admission, observation unit utilization, and revisits to the ED between the pre- and postimplementation periods. The average costs per person of an inpatient stay and an ED visit were obtained from the Agency for Healthcare Research and Quality's Medical Expenditure Panel Survey.13 The survey is a nationally representative one that combines information from households, individuals, insurers, and medical providers, and it represents the most complete, publicly available source of data on health care use and spending in the United States. The cost of an observation unit stay per person was obtained from the CMS Outpatient Prospective Payment System 2013 Rule, which lists Medicare reimbursement rates for specific services.14 For this study, we used the base reimbursement for an observation unit stay (Ambulatory Payment Classification 8009) and added the average payment for an ED stay.

RESULTS

From September 2011 through December 2013, there were a total of 406,936 ED visits. Excluding pediatric, incarcerated, and behavioral health visits and a 6-month implementation period, 245,662 patient encounters were analyzed (Figure 1). Table 1 shows monthly patient volumes and demographics. The average number of adult ED visits was 11,028 per month before implementation and 11,305 per month afterward. The mean age, sex, and distribution of initial Emergency Severity Index scores are also listed in Table 1. During the preimplementation period, encounters by payer were as follows: 60.9% uninsured, 30.2% Medicaid, 4.9% Medicare, 2.6% commercial insurance, and 1.4% other. During the postimplementation period, encounters by payer were as follows: 61.0% uninsured, 29.9% Medicaid, 5.3% Medicare, 2.6% commercial insurance, and 1.1% other.

Table 1. Monthly patient volume and demographics.
Variable | Preimplementation (95% CI) | Postimplementation (95% CI)
ED volume, no. visits | 11,028.4 (10,733.1–11,323.6) | 11,304.6 (11,002.0–11,607.0)
Mean age, y | 44.8 (44.7–44.9) | 45.4 (45.2–45.6)
Female patients, % | 45.8 (45.4–46.3) | 45.5 (45.1–46.0)
Initial ESI score 1, % | 1.4 (1.3–1.5) | 1.2 (1.1–1.3)
Initial ESI score 2–3, % | 76.3 (75.6–77.0) | 74.2 (73.5–74.9)
Initial ESI score 4–5, % | 22.3 (21.7–22.9) | 24.6 (23.9–25.3)
ESI, Emergency Severity Index. Descriptives by month for all adult ED patient visits during the preimplementation (September 2011 to July 2012) and postimplementation (February 2013 to December 2013) periods. The table includes monthly summary data and 95% CIs.

Table 2 summarizes monthly lengths of stay, disposition rates, and return visits. The mean ED and hospital lengths of stay were approximately 8.3 hours and 6.5 days, respectively, throughout the study period. Before implementation of admission criteria, ED discharges composed 82.4% (95% CI 81.8% to 82.9%), observations 2.5% (95% CI 2.3% to 2.7%), admissions 14.2% (95% CI 13.6% to 14.8%), and transfers 0.9% (95% CI 0.9% to 1.0%) of dispositions. After the implementation of admission criteria, the ED experienced increased rates of discharge (83.4%; 95% CI 82.8% to 83.9%) and observation unit utilization (3.4%; 95% CI 3.1% to 3.6%). There was also an increase in observation unit length of stay from 24.4 hours (95% CI 23.5 to 25.2) to 27.1 hours (95% CI 26.4 to 27.8). Meanwhile, the rates of admission (12.8%; 95% CI 12.3% to 13.4%) and transfer to outside hospitals (0.5%; 95% CI 0.5% to 0.6%) decreased.

Table 2. Patient dispositions.
Variable | Preimplementation, mean (95% CI) | Postimplementation, mean (95% CI)
ED LOS, h | 8.1 (7.8–8.5) | 8.5 (8.1–9.0)
Hospital LOS, days | 6.5 (6.1–6.9) | 6.5 (6.2–6.7)
Observation unit LOS, h | 24.4 (23.5–25.2) | 27.1 (26.4–27.8)
Admission rate (total), % | 14.2 (13.6–14.8) | 12.8 (12.3–13.4)
Admission rate (unmonitored), % | 10.9 (10.6–11.3) | 9.3 (9.0–9.6)
Admission rate (monitored), % | 3.3 (3.0–3.6) | 3.5 (3.2–3.8)
Discharge rate, % | 82.4 (81.8–82.9) | 83.4 (82.8–83.9)
Transfer rate, % | 0.9 (0.9–1.0) | 0.5 (0.5–0.6)
Observation rate, % | 2.5 (2.3–2.7) | 3.4 (3.1–3.6)
30-day revisit rate, total, % | 20.4 (19.9–20.9) | 24.4 (23.8–25.0)
30-day admission rate, total, % | 3.2 (3.1–3.4) | 2.8 (2.7–3.0)
30-day revisit rate, discharged, % | 20.3 (19.8–20.9) | 24.9 (24.3–25.6)
30-day admission rate, discharged, % | 2.3 (2.1–2.4) | 1.9 (1.8–2.0)
LOS, length of stay. Throughput, disposition rates, and revisits within 30 days for all adult ED patient visits during the preimplementation (September 2011 to July 2012) and postimplementation (February 2013 to December 2013) periods. Numbers are regression estimates, and data are number or percentage unless otherwise indicated. The 30-day admission rate is the admission rate on the second visit to the ED. The discharged sample includes only adult patients discharged from the ED on the first visit.

In the preimplementation period, approximately 10.9% (95% CI 10.6% to 11.3%) of adult ED patients were admitted to unmonitored beds, whereas 3.3% (95% CI 3.0% to 3.6%) were admitted to monitored beds. After implementation, there was a decrease in admissions to unmonitored beds (9.3%; 95% CI 9.0% to 9.6%), whereas the percentage of patients admitted to monitored settings did not change (3.5%; 95% CI 3.2% to 3.8%) (Table 2).

Before the implementation of admission criteria, the 30-day revisit rate to the ED was 20.4% (95% CI 19.9% to 20.9%). The admission rate on the second visit was 3.2% (95% CI 3.1% to 3.4%). After implementation, the overall 30-day revisit rate to the ED increased to 24.4% (95% CI 23.8% to 25.0%). The admission rate on the second visit decreased to 2.8% (95% CI 2.7% to 3.0%). These findings were similarly reflected by the subgroup of patients discharged from the ED on the first visit (Table 2).
To clarify the effect of the admission decision support tool on ED operations, a comparative analysis of patients who received a diagnosis of syncope (hypothesized to be sensitive to decision support tool implementation) and appendicitis (hypothesized to be insensitive to decision support tool implementation) was performed (Table 3). There were similar numbers of ED visits for syncope before and after implementation. However, the inpatient admission rate for syncope decreased from 16.9% (95% CI 13.8% to 20.7%) to 11.4% (95% CI 8.9% to 14.5%), whereas observation unit utilization increased from 12.1% (95% CI 9.2% to 15.9%) to 20.7% (95% CI 16.5% to 25.9%). The discharge rate and 30-day revisit rate of this subgroup increased slightly. The admission rate for patients returning to the ED within 30 days decreased from 2.7% (95% CI 1.6% to 4.5%) to 1.1% (95% CI 0.5% to 2.4%) after implementation.

Table 3. Syncope and appendicitis subgroup analysis.
Variable | Syncope, preimplementation (95% CI) | Syncope, postimplementation (95% CI) | Appendicitis, preimplementation (95% CI) | Appendicitis, postimplementation (95% CI)
ED volume, no. visits | 50.5 (45.6–55.3) | 51.2 (46.2–56.1) | 38.8 (35.1–42.6) | 37.2 (33.5–40.9)
ED stay, h | 11.7 (11.1–12.2) | 13.8 (12.2–15.5) | 13.1 (11.8–14.4) | 16.3 (14.3–18.2)
Hospital LOS, days | 2.3 (1.7–2.9) | 2.7 (1.6–3.8) | 2.6 (2.3–2.9) | 3.1 (2.6–3.5)
Admission rate, % | 16.9 (13.8–20.7) | 11.4 (8.9–14.5) | 96.7 (87.8–100) | 96.8 (87.7–100)
Discharge rate, % | 60.4 (54.2–67.2) | 62.3 (56.2–69.2) | 3.0 (1.8–5.2) | 1.5 (0.7–3.3)
Transfer rate, % | 10.6 (8.2–13.7) | 5.5 (3.9–7.8) | 0* | 0*
Observation rate, % | 12.1 (9.2–15.9) | 20.7 (16.5–25.9) | 0.2 (0.0–1.7) | 1.7 (0.8–3.6)
30-day revisit rate, % | 12.3 (9.7–15.5) | 13.9 (11.1–17.3) | 10.6 (7.8–14.3) | 9.0 (6.5–12.6)
30-day admission rate, % | 2.7 (1.6–4.5) | 1.1 (0.5–2.4) | 2.3 (1.2–4.5) | 2.7 (1.5–5.0)
Monthly ED volumes, lengths of stay, disposition rates, and overall 30-day revisit and admission rates are shown for all adult ED patients receiving a diagnosis of syncope or appendicitis during the preimplementation (September 2011 to July 2012) and postimplementation (February 2013 to December 2013) periods. Data are No. (%) unless otherwise indicated. LOS, length of stay.
*No transfers in either the pre- or postimplementation period.
There was no statistically significant difference in the number of patients receiving a diagnosis of appendicitis per month throughout the study period. The majority were admitted both before and after implementation. The rates of discharge, transfer, 30-day revisits, and admission on return visit were not significantly different between the pre- and postimplementation periods (Table 3).
Table 4 demonstrates the results of the cost analysis. The average reimbursement for an ED visit ($1,762), inpatient stay ($20,758), and observation (cost of ED visit [$1,762] plus cost of observation unit stay [$1,234] = $2,996) was multiplied by the difference in ED revisits in 30 days, admission rate, and observation rate, respectively, associated with the adoption of utilization review.13,14 For example, according to the 2014 Medical Expenditure Panel Survey,13
the average payment for a hospitalization was $20,758, so the reduction in admission rate yields an expected savings of $290.61 per ED visit. Combining this with the offsetting effect of a higher revisit rate and greater use of observation unit beds, the overall net cost savings were estimated to be $193.17 per ED visit. Based on the average number of adult ED visits per month after implementation (11,305), the projected total aggregate savings would be $26,204,515 per year in our hospital setting.
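As an illustration (ours, not part of the original analysis), the per-visit arithmetic in Table 4 can be reproduced in a few lines of Python; the payment figures are the cited MEPS/CMS estimates, and the variable names are our own:

```python
# Sketch of the per-visit cost arithmetic in Table 4. Payment figures are the
# published MEPS/CMS estimates cited in the text; variable names are ours.
payment = {"admission": 20_758, "ed_revisit": 1_762, "observation": 2_996}

# Pre- to postimplementation change in each rate, in percentage points.
change_pct_points = {"admission": -1.4, "ed_revisit": +4.0, "observation": +0.9}

# Cost per ED visit attributable to each component = payment x (change / 100).
cost_per_visit = {k: payment[k] * change_pct_points[k] / 100 for k in payment}
net = sum(cost_per_visit.values())

print(cost_per_visit)  # admission -290.61, ed_revisit +70.48, observation +26.96
print(round(net, 2))   # -193.17, ie, a net saving of about $193 per ED visit

# Scaling by postimplementation volume (11,305 adult visits/month, 12 months)
# reproduces the projected annual savings to within rounding of the published
# $26,204,515 figure.
print(round(-net * 11_305 * 12))  # approx. 26,205,000 per year
```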
The pre-post analysis discussed above is useful to test whether there was an association between the adoption of prospective utilization review and postimplementation outcomes. However, one limitation of the pre-post
design is that it assumes there is no confounding trend. To test this, we graphically displayed the pre- and postimplementation trends by month during the study period in Figure 2. In general, there is no clear evidence of preexisting trends in the 3 months before or after implementation, although there is some general decline in admissions at approximately 6 to 9 months before implementation. Additionally, there appears to be a slight drift toward a return to preimplementation rates over time.
LIMITATIONS
There are some limitations to this study. First, this is a single-center study involving a publicly funded, safety-net ED. Our institution serves patients with limited access to primary care and those with social or psychiatric issues. Though this is a unique patient population, the large volume of patient encounters studied strengthens our findings, which may be applicable to other public, urban, safety-net facilities.
Second, results were likely influenced somewhat by coding error, a limitation inherent to computerized data extraction. For example, we found a 1.5% to 3.0% discharge rate among patients with ICD-9 codes consistent with appendicitis, a condition for which inpatient admission is expected. On review of a random sample of these discharged patients, it was discovered that the majority had received a diagnosis of epiploic appendagitis, a benign condition that rarely requires inpatient admission or surgery. Also, the data were analyzed by month. Although this is the usual format for both operational and quality assurance reporting, this interval may be less precise than more granular data in reflecting variations by day of the week. However, we do not expect that such errors are likely to have affected our overall results.
Table 4. Cost analysis.13,14
Variable | Preimplementation, % | Postimplementation, % | Change, %* | Mean Payment/Person, $13,14 | Cost/Visit, $†
Admissions | 14.2 | 12.8 | –1.4 | 20,758 | –290.61
ED revisits, 30 days | 20.4 | 24.4 | +4.0 | 1,762 | +70.48
Observations | 2.5 | 3.4 | +0.9 | 2,996 | +26.96
Net cost per visit | | | | | –193.17

An approximate cost analysis based on percentage change in admission rate, ED revisit rate in 30 days, and observation unit utilization. Estimates of cost are extracted from the Agency for Healthcare Research and Quality Medical Expenditure Panel Survey (Western census region)13 and the CMS Outpatient Prospective Payment System.14 The observation payment includes the cost of an ED visit ($1,762) and the cost of an observation stay ($1,234). *The difference between pre- and postimplementation monthly percentage rates. †Approximate cost multiplied by percentage change in variable.
Third, our data may have been affected by other operational changes implemented during the study period. To better assess the influence of confounding trends, we performed a comparative analysis of patients who received a diagnosis of syncope (deemed admission criteria sensitive) and appendicitis (deemed admission criteria insensitive). Our hypothesis was that the dispositions of syncope patients would be greatly influenced by an admission decision support tool, whereas those of appendicitis patients would be relatively unchanged. As anticipated, the syncope admission rate decreased by approximately 5.5 percentage points, whereas the observation rate increased by approximately 8.6 percentage points. Meanwhile, the appendicitis disposition rates stayed relatively steady throughout the study period. Although the presence of confounding trends may affect the results of before-after studies, this comparative analysis supports our conclusions.
Fourth, detailed cost reporting was unavailable for a more sophisticated cost analysis, in part because of a difference in recordkeeping after the use of utilization review software, as well as this county facility’s use of flat-rate billing.
DISCUSSION
In this study, we sought to answer the question of how
prospective use of a contemporary utilization review tool affects the operations of a large, public ED. Examining computerized decision support software, the most contemporary iteration of utilization review, we found a marked shift in patient dispositions from the ED. Most notable was a statistically significant decrease in the monthly admission rate, from 14.2% to 12.8%, after implementation of prospective utilization review. The decrease in admission rate was accompanied by an increase in discharge rate, observation unit utilization, observation unit length of stay, and short-term return visits to the ED.
The decrease in admission rate corroborates results from a few previously published studies examining the effect of more traditional utilization review programs.15-19
Feldstein et al15 reported a statistically significant reduction
in hospital admissions (12.3%) with a preadmission and concurrent on-site utilization review program. Associated with this decline in hospital admissions was a net decrease in total medical expenditures of approximately 8% per insured person.15 Wickizer et al16,17 showed a similar reduction in admission rate and total expenditures with the application of utilization review among patients covered by 223 commercial insurance companies during a 3-year period.
Though some proponents of utilization review programs have equated the systematic review of admissions for appropriateness and optimization of level of care with good patient care, one could argue that return visits and hospitalizations are more direct markers of quality. In 1997, Milstein6 questioned the continued reduction of health care use without clear evidence of maintained patient safety. Soon after, Wickizer et al8-10 published a series of studies on privately insured patients under use management. In these studies, pediatric patients, mental health patients, and those admitted for cardiovascular procedures experienced an increased relative risk of readmission within 60 days when the length of stay was restricted.8-10
After implementation of prospective utilization review at our institution, a greater percentage of patients were discharged from the ED only to return within 30 days. In fact, the overall 30-day ED revisit rate increased from 20.4% preimplementation to 24.4% postimplementation. One could argue that admission criteria serve only to guide physician decisionmaking and that, as in our case, medical necessity may override “failures” of primary review. So why did we observe an increase in discharge rate postimplementation? One possible explanation is the application of standard criteria to a previously unchecked process. Before prospective utilization review implementation, admission decisions on stable “soft-call” patients were unopposed. Afterward, utilization review nurses were available continuously to provide real-time feedback. Another possibility for the increase in discharge rate is a change in physician behavior affected by the sheer
awareness of administrative oversight. Although there were no incentives (or disincentives) assigned to the secondary peer review process, physicians may have been indirectly influenced by the accountability inherent in both documentation and oversight by their peers.
Although ED revisits within 30 days increased, a smaller percentage of patients were actually admitted during the second ED visit, indicating that returning patients were not necessarily inappropriately discharged on the first visit. Presumably, if patients were prematurely discharged during the first ED visit, the return visit would prompt an admission either because of higher acuity or medical necessity. By contrast, an increased relative risk of readmission within 60 days after implementation of a utilization review program has been reported.8-10 This disagreement may be explained by those authors’ focus on concurrent review (ie, review of inpatient days) as opposed to prospective review (ie, review of admission appropriateness), an evolution in the evidence base of admission decision support criteria, or a difference in patient population.
Our medical center serves a unique patient population with a substantial proportion of uninsured, homeless, and psychiatric patients. Disposition decisions are often made within the context of limited access to primary care, poorly managed chronic conditions, and social obstacles for safe discharge. Though cost-cutting measures are expected to result in reduced overall health care costs, the statistically significant increase in 30-day revisits found in this study must be weighed carefully against potential benefits. Moreover, we did not have access to data at a county level and therefore cannot rule out the possibility that the higher discharge rate was associated with negative patient outcomes.
Previously, our institution has borne financial losses from denied days recognized in a retrospective and delayed fashion. To reach a treatment authorization request–free state, admission decision support software with the support of utilization review nurses was instituted on a continuous basis. To determine the overall financial effect of this utilization review program, we performed a simplified cost analysis by multiplying the percentage of change in admission rate, observation unit utilization, and ED revisits with the average reimbursement of an inpatient stay, observation stay, and ED visit, respectively.13,14 The balance was an overall cost savings of approximately $193.17 per visit.
Such monetary savings may be offset by the cost of the utilization review program itself. We were unable to obtain the cost of software implementation and maintenance, personnel hiring and training, or physician peer review time for a more comprehensive cost analysis. In 2003, Murray and Henriques20 estimated a cost of approximately
$300,000 annually (for both providers and payers) to support a utilization review program at an urban Midwest hospital (approximately 14,000 concurrent reviews/year). Adjusting for inflation, the cost of $300,000 amounts to just over $385,000 in 2014 (the date of the Medical Expenditure Panel Survey/CMS payment data13,14), or approximately $400,000 in 2017 (the date of this study’s publication). In 1991, Wickizer et al11 also reported an increase in outpatient expenditures with hospital-based utilization review programs. Although limited in scope, their study found that the reduction in hospital expenditures still outweighed the 20% increase in outpatient expenditures, resulting in net hospital savings.11
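As an illustration (ours, not the authors'), this inflation adjustment can be approximated with annual CPI-U averages; the index values below are approximate assumptions, since the study does not state which index it used:

```python
# Rough CPI-U-based inflation adjustment of the Murray and Henriques (2003)
# program cost estimate. Index values are approximate annual averages (our
# assumption, not figures from the study).
CPI_U = {2003: 184.0, 2014: 236.7, 2017: 245.1}

base_cost, base_year = 300_000, 2003
for year in (2014, 2017):
    adjusted = base_cost * CPI_U[year] / CPI_U[base_year]
    print(f"{year}: ${adjusted:,.0f}")
# Prints approx. $386,000 for 2014 and $400,000 for 2017, consistent with the
# figures quoted in the text.
```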
With a projected total aggregate savings of more than $26 million per year, it would be safe to expect a net cost savings to our health care system. With this in mind, one must consider the reallocation of resources beyond the financial support of the utilization review program itself. In addition to the increase in 30-day revisits to the ED, there was a synchronous postimplementation increase in the rate of observation unit placement (2.5% to 3.4%) and in median observation unit length of stay (24.4 to 27.1 hours). As the observation unit falls under the purview of emergency services at our hospital, such repercussions contribute to an additional workload for the ED. Because an increasing number of hospitals across the United States implement use programs to reduce inpatient admissions, short stays that are underreimbursed are likely to increase. Anticipating this shift by redirecting resources from inpatient care to emergency and observation services is prudent. However, more work needs to be done to verify both the accuracy and generalizability of these cost-savings estimates.
In the review of monthly disposition and 30-day revisit rates plotted over time (Figure 2), there is a discernable trend toward the end of the postimplementation period of a return to the preimplementation baseline. Although the changes in this interrupted time series are sustained through the study period, this finding indicates that further study is needed to determine the long-term effects of utilization review software on operations.
While noting a decrease in admission rate, this study reveals a concurrent increase in observation unit utilization, observation unit length of stay, and 30-day ED revisit rate. These effects influence the patient experience, exacerbate the burden on the ED, and temper overall cost savings. Although decreased health care costs are of national importance and translate to affordable health care for the individual, cost-cutting directives must be balanced with adjusted resource allocation to drive achievement of quality medical care.
Supervising editor: Daniel A. Handel, MD, MBA
Author affiliations: From the Department of Emergency Medicine (Desai, Gruber, Eiting, Seabury, Vasquez, Kim, Terp) and Department of Preventive Medicine (Mack), Keck School of Medicine, and the Leonard D Schaeffer Center for Health Policy and Economics (Seabury, Terp), University of Southern California, Los Angeles, CA; and Los Angeles County + University of Southern California Medical Center, Los Angeles, CA (Voyageur).
Author contributions: SD, PFG, EE, VV, and HTK conceived and designed the study and wrote the institutional review board application. SD, PFG, and VV obtained research funding. PFG, CV, HTK, and ST extracted data and performed basic analysis. SAS and WJM worked on advanced data analysis and statistics. SD drafted the article and all authors contributed substantially to its revision. SD takes responsibility for the paper as a whole.
All authors attest to meeting the four ICMJE.org authorship criteria: (1) Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND (2) Drafting the work or revising it critically for important intellectual content; AND (3) Final approval of the version to be published; AND (4) Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Funding and support: By Annals policy, all authors are required to disclose any and all commercial, financial, and other relationships in any way related to the subject of this article as per ICMJE conflict of interest guidelines (see www.icmje.org). The authors have stated that no such relationships exist. Research reported in this publication was partially supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award UL1TR000130 (formerly by the National Center for Research Resources, award UL1RR031986).
Publication dates: Received for publication September 29, 2016. Revision received March 15, 2017. Accepted for publication March 21, 2017. Available online May 27, 2017.
Presented at the Society for Academic Emergency Medicine annual meeting, May 2015, San Diego, CA.
REFERENCES
1. Chang Y, Ketterlin R, Laiben G. The $6 million question: can process improvement ensure appropriate hospitalizations? J Healthc Qual. 2008;30:15-24.
2. Centers for Medicare & Medicaid Services. Medicare fee-for-service 2013 improper payments report. 2013. Available at: http://www.cms.gov/Research-Statistics-Data-and-Systems/Monitoring-Programs/Medicare-FFS-Compliance-Programs/CERT/CERT-Reports-Items/Downloads/MedicareFee-for-Service2013ImproperPaymentsReport.pdf. Accessed September 20, 2015.
3. McKendry MJ, Van Horn J. Today’s hospital-based case manager: how one hospital integrated/adopted evidenced-based medicine using InterQual criteria. Lippincotts Case Manag. 2004;9:61-71.
4. Wickizer TM, Lessler D. Utilization management: issues, effects, and future prospects. Annu Rev Public Health. 2002;23:233-254.
5. Mitus AJ. The birth of InterQual: evidence-based decision support criteria that helped change healthcare. Prof Case Manag. 2008;13:228-233.
6. Milstein A. Managing utilization management: a purchaser’s view. Health Aff (Millwood). 1997;16:87-90.
7. Gray BH, Field MJ. Institute of Medicine (US) Committee on Utilization Management by Third Parties: Controlling Costs and Changing Patient Care? The Role of Utilization Management. Washington, DC: National Academies Press; 1989.
8. Wickizer TM, Lessler D, Boyd-Wickizer J. Effects of health care cost-containment programs on patterns of care and readmissions among children and adolescents. Am J Public Health. 1999;89:1353-1358.
9. Lessler D, Wickizer T. The impact of utilization management on readmissions among patients with cardiovascular disease. Health Serv Res. 2000;34:1315-1329.
10. Wickizer TM, Lessler D. Do treatment restrictions imposed by utilization management increase the likelihood of readmission for psychiatric patients? Med Care. 1998;36:844-850.
11. Wickizer TM, Wheeler JRC, Feldstein PJ. Have hospital inpatient cost containment programs contributed to the growth in outpatient expenditures? analysis of the substitution effect associated with hospitalization utilization review. Med Care. 1991;29:442-451.
12. Interqual® Level of Care Criteria [computer program]. San Francisco, CA: McKesson; 2014. Version 2014.
13. Agency for Healthcare Research and Quality. Medical expenditure panel survey. 2014. Available at https://meps.ahrq.gov/mepsweb. Accessed May 11, 2017.
14. Department of Health and Human Services; Centers for Medicare and Medicaid Services. Medicare and Medicaid programs: hospital outpatient prospective payment and ambulatory surgical center payment systems and quality programs. Fed Reg. 2014;79: 66812.
15. Feldstein PJ, Wickizer TM, Wheeler JR. Private cost containment: the effects of utilization review programs on health care use and expenditures. N Engl J Med. 1988;318:1310-1314.
16. Wickizer TM, Wheeler JR, Feldstein PJ. Does utilization review reduce unnecessary hospital care and contain costs? Med Care. 1989;27:632-647.
17. Wickizer TM, Feldstein PJ, Wheeler JR, et al. Reducing hospital use and expenditures through utilization review: findings from an outcome evaluation. Qual Assur Util Rev. 1990;5:80-85.
18. Murray ME, Darmody JV. Clinical and fiscal outcomes of utilization review. Outcomes Manag. 2004;8(1):19-25.
19. Rosenberg SN, Allen DR, Handte JS, et al. Effect of utilization review in a fee-for-service health insurance plan. N Engl J Med. 1995;333:1326-1331.
20. Murray ME, Henriques JB. An exploratory cost analysis of performing hospital-based concurrent utilization review. Am J Manag Care. 2003;9:512-518.
Table E1. Throughput, disposition rates, and revisits within 30 days for all adult ED visits during the preimplementation (September 2011 to July 2012) and postimplementation (February 2013 to December 2013) periods.

Variable | Preimplementation Mean (95% CI) | Postimplementation Mean (95% CI) | Preimplementation Monthly Trend (95% CI) | Postimplementation Immediate Change (95% CI) | Postimplementation Change in Monthly Trend (95% CI) | Postimplementation Monthly Trend (95% CI)
ED LOS, h | 8.1 (7.8 to 8.5) | 8.5 (8.1 to 9.0) | –0.02 (–0.17 to 0.14) | 0.47 (–1.60 to 2.53) | 0.03 (–0.15 to 0.21) | 0.01 (–0.09 to 0.12)
Hospital LOS, days | 6.5 (6.1 to 6.9) | 6.5 (6.2 to 6.7) | –0.05 (–0.17 to 0.08) | 0.78 (–0.43 to 2.00) | –0.00 (–0.14 to 0.14) | –0.05 (–0.11 to 0.02)
Observation unit LOS, h | 24.4 (23.5 to 25.2) | 27.1 (26.4 to 27.8) | 0.20 (–0.03 to 0.43) | 0.14 (–2.31 to 2.59) | –0.15 (–0.45 to 0.16) | 0.06 (–0.15 to 0.26)
Admission rate (total) | 14.2 (13.6 to 14.8) | 12.8 (12.3 to 13.4) | –0.12 (–0.22 to –0.02) | –2.09 (–3.49 to –0.70) | 0.45 (0.27 to 0.63) | 0.33 (0.19 to 0.48)
Admission rate (unmonitored) | 10.9 (10.6 to 11.3) | 9.3 (9.0 to 9.6) | –0.12 (–0.20 to –0.05) | –1.05 (–2.07 to –0.04) | 0.25 (0.14 to 0.36) | 0.13 (0.05 to 0.21)
Admission rate (monitored) | 3.3 (3.0 to 3.6) | 3.5 (3.2 to 3.8) | 0.00 (–0.04 to 0.04) | –1.04 (–1.65 to –0.43) | 0.20 (0.11 to 0.29) | 0.20 (0.12 to 0.29)
Discharge rate | 82.4 (81.8 to 82.9) | 83.4 (82.8 to 83.9) | 0.03 (–0.09 to 0.15) | 2.65 (1.05 to 4.24) | –0.38 (–0.56 to –0.19) | –0.34 (–0.48 to –0.20)
Transfer rate | 0.94 (0.89 to 1.00) | 0.51 (0.47 to 0.55) | –0.03 (–0.07 to 0.01) | –0.13 (–0.50 to 0.24) | 0.04 (–0.00 to 0.08) | 0.01 (–0.00 to 0.02)
Observation rate | 2.5 (2.3 to 2.7) | 3.4 (3.1 to 3.6) | 0.12 (0.06 to 0.17) | –0.42 (–1.11 to 0.26) | –0.11 (–0.19 to –0.04) | 0.00 (–0.05 to 0.06)
30-day revisit rate, total | 20.4 (19.9 to 20.9) | 24.4 (23.8 to 25.0) | –0.13 (–0.27 to 0.01) | 6.87 (4.95 to 8.79) | –0.12 (–0.29 to 0.05) | –0.25 (–0.35 to –0.14)
30-day admission rate, total | 3.2 (3.1 to 3.4) | 2.8 (2.7 to 3.0) | –0.06 (–0.09 to –0.02) | –0.13 (–0.63 to 0.37) | 0.11 (0.05 to 0.17) | 0.05 (0.00 to 0.10)
30-day revisit rate, discharged | 20.3 (19.8 to 20.9) | 24.9 (24.3 to 25.6) | –0.12 (–0.28 to 0.05) | 7.52 (5.25 to 9.80) | –0.17 (–0.36 to 0.04) | –0.28 (–0.39 to –0.16)
30-day admission rate, discharged | 2.3 (2.1 to 2.4) | 1.9 (1.8 to 2.0) | –0.03 (–0.07 to 0.00) | –0.07 (–0.50 to 0.37) | 0.05 (0.01 to 0.09) | 0.02 (–0.01 to 0.04)

Numbers are regression estimates, and data are presented as number or percentage unless otherwise indicated. The 30-day admission rate is the admission rate on the second visit to the ED. The discharged sample includes only adult patients discharged from the ED on the first visit. A segmented regression for the interrupted time series was used to estimate admission decision support in the face of ongoing secular trends in each of the metrics. Metric measures were regressed against time (month), with the addition of an estimate for the acute change in level (change in the mean) from the end of the preimplementation to the start of the postimplementation period. Both the immediate effects (change in level in the regression line) and linear trends (slopes, estimated as monthly trends) in metrics were estimated and compared pre- and postimplementation.
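As a sketch of the estimation strategy described in this footnote, a segmented regression on synthetic monthly data might look as follows (Python/statsmodels; all column names and simulated values are illustrative, not the study's):

```python
# Minimal sketch of a segmented (interrupted time series) regression on
# synthetic monthly data. All names and values here are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
month = np.arange(1, 23)                       # 11 pre + 11 post study months
post = (month > 11).astype(int)                # 0 pre-, 1 postimplementation
time_after = np.where(post == 1, month - 11, 0)

# Simulated admission rate (%): baseline level and trend, an immediate drop
# at implementation, and a change in slope afterward, plus noise.
rate = (14.2 - 0.12 * month - 2.0 * post + 0.45 * time_after
        + rng.normal(0, 0.3, month.size))

df = pd.DataFrame({"rate": rate, "month": month, "post": post,
                   "time_after": time_after})
fit = smf.ols("rate ~ month + post + time_after", data=df).fit()

# Coefficient mapping onto the columns of Table E1:
#   month      -> preimplementation monthly trend
#   post       -> immediate change in level at implementation
#   time_after -> change in monthly trend (post trend = month + time_after)
print(fit.params)
```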