BEFORE TAKING ON THE ASSIGNMENT, PLEASE ENSURE THAT YOU UNDERSTAND ALL OF THE INFORMATION AND REQUIRED READINGS THAT GO ALONG WITH IT. MUST BE ORIGINAL WORK AND NO AI ASSISTANCE

Assignment Overview: MHA540 Introduction to Quality Assurance 

With the consistently rising costs of health care services, utilization management and utilization review are routinely used in the vast majority of current health care settings. How are they evolving over time, and what is involved in these processes? What do health care managers need to be aware of as the processes play out? 

In the absence of sweeping policy change (or the complete restructuring of the United States health care system), utilization management and utilization review are more moderate processes that attempt to preserve the quality of care provided while controlling overall health care expenditures. Let’s learn a bit about each of these processes, and how they are implemented in our current health care environment.

Module 2 Homework Case Assignment

Using the information in the required readings as well as some additional research in peer reviewed sources, complete your Case Assignment by answering the following:

1.  Compare and contrast utilization review and utilization management in health care. What are the similarities and the differences between each type of assessment? (Hint: One is generally a prospective process, and the other is generally a retrospective process).

 Baker, J. (2017, May 1). Improving quality of care through utilization management [Video]. https://youtu.be/g-T-YeP53nY

2.  Explain the specific role of each method in providing value-based health care. Who benefits from the method—the health care system, the insurance company, and/or the patient?

3.  Articulate how individual case management is critical to a hospital’s long-term survival. In what way does this practice protect your patients while keeping your doors open for business?

4.  What are the ethical pitfalls to be aware of in performing these types of quality reviews? What must health care managers be aware of in terms of ethical pitfalls and also potential unintended negative consequences?

Assignment Expectations

1.  Conduct additional research to gather sufficient information to support your analysis.

2.  Provide a response of 3-5 pages, not including title page and references.

3.  Because multiple required items must be addressed, please use subheadings to show where you are responding to each required item and to ensure that none are omitted.

4.  Support your paper with peer-reviewed articles and reliable sources. Use at least three references, at least two of which must be from peer-reviewed sources. For additional information on how to recognize peer-reviewed journals, see: Angelo State University Library. (n.d.). Library Guides: How to recognize peer-reviewed (refereed) journals. Retrieved from https://www.angelo.edu/services/library/handouts/peerrev.php

and for evaluating internet sources: Georgetown University Library. (n.d.). Evaluating internet resources. Retrieved from  https://www.library.georgetown.edu/tutorials/research-guides/evaluating-internet-content

5.  You may use the following source to assist you in formatting your assignment: Purdue Online Writing Lab. (n.d.). General APA guidelines. Retrieved from https://owl.english.purdue.edu/owl/resource/560/01/.

6.  Paraphrase all source information into your own words carefully, and use in-text citations.


International Journal of Production Economics


Quality management in healthcare organizations: Empirical evidence from the Baldrige data

Mahour Mellat Parast a,∗, Davood Golmohammadi b

a Assistant Professor of Technology Management, North Carolina A&T State University, 1601 E Market Street, Greensboro, NC 27411, USA
b Associate Professor of Management Science and Information Systems, University of Massachusetts-Boston, 100 William T. Morrissey Blvd, Boston, MA 02125, USA

ARTICLE INFO

Keywords: Quality management; Healthcare quality; Malcolm Baldrige National Quality Award (MBNQA); Structural equation modeling

ABSTRACT

The purpose of this paper is to investigate the determinants of customer satisfaction and quality results in the healthcare industry using the Baldrige data. We use publicly available data on the quality assessment of healthcare organizations that applied for the Baldrige award to examine two research questions: 1) whether the Baldrige model is a reliable and valid model for the assessment of quality practices in healthcare organizations, and 2) the impact of quality practices on quality results in healthcare organizations. Using structural equation modeling, the findings suggest that the Baldrige model is a valid and reliable quality assessment model for healthcare organizations. Consistent with previous studies, the findings suggest that the main driver of the system is leadership, which has a significant effect on all quality practices in the healthcare industry. Controlling for the applicants’ year, the findings indicate that 1) Information analysis and knowledge management has a significant impact on Quality results, 2) Workforce development and human resource management has a significant impact on both Customer focus and satisfaction and Quality results, and 3) Strategic planning for quality has a significant impact on Customer focus and satisfaction. The study provides insights and suggestions for healthcare organizations on how to improve their quality systems using the Baldrige model.

1. Introduction

The last few decades have seen a great emphasis on improving the quality of healthcare in the United States (Institute of Medicine, 2001; Chassin and Galvin, 1998; Jencks et al., 2003; McGlynn et al., 2003; Matthias and Brown, 2016; Nair and Dreyfus, 2018). As a result, a diverse array of approaches has been advocated to improve the quality of care. Some of these programs, such as pay-for-performance (Epstein et al., 2004; Lindenauer et al., 2007; Millenson, 2004) and public information disclosure (Chassin, 2002; Fung et al., 2008; Marshall et al., 2000; Williams et al., 2005), have garnered substantial academic interest. Other programs, however, have received limited investigation. This is concerning in that focus on only a few programs may result in healthcare policymakers developing and implementing suboptimal programs or making investment decisions that do not make the best use of resources. Research in healthcare quality and healthcare services could be valuable due to the challenges of complexity, co-production, and service intangibility inherent in service delivery in healthcare organizations (Vogus and McClelland, 2016; Halkjær and Lueg, 2017; Silander et al., 2017). The intrinsic complexity in healthcare organizations can provide a useful context for operations management scholars to study organizational processes in such a complex environment (Eisenhardt and Graebner, 2007; Farjoun and Starbuck, 2007). As indicated by Bardhan and Thouin (2013), “An important, and sometimes overlooked, dimension in the debate over healthcare reform is the quality angle.”

Attention to quality in the healthcare industry gained momentum after the publication of the report “To Err is Human: Building A Safer Health System” by the National Academy (Kohn et al., 2000). Following the success of quality management programs in the manufacturing sector (Power et al., 2011), the healthcare industry was motivated to adopt quality management practices and principles to ensure delivery of proper care, reduce healthcare delivery costs, and increase patient satisfaction (Alexander et al., 2006; Macinati, 2008; Sabella et al., 2014; Russell et al., 2015; Um and Lau, 2018). Nevertheless, the outcome of implementing quality management in the healthcare industry is mixed and does not provide a clear picture of the effectiveness of quality management in improving healthcare quality. While many healthcare organizations faced significant challenges in the successful implementation of quality management (Bringelson and Basappa, 1998; Zabada et al., 1998; Ennis and Harrington, 1999; Huq and Martin, 2000), there are examples of successful implementation in the literature (Motwani et al., 1996; Klein et al., 1998; Chattopadhyay and Szydlowski, 1999; Jackson, 2001; Francois et al., 2003). Because of the limitations of these studies in terms of their small sample size, focus on a few departments in a healthcare organization, and focus on narrow aspects of organizational performance, it is unclear whether quality management can improve healthcare quality (Mosadeghrad, 2015).

https://doi.org/10.1016/j.ijpe.2019.04.011. Received 8 October 2018; received in revised form 12 April 2019; accepted 14 April 2019; available online 17 April 2019.

∗ Corresponding author. E-mail addresses: [email protected] (M.M. Parast), [email protected] (D. Golmohammadi).

International Journal of Production Economics 216 (2019) 133–144. 0925-5273/© 2019 Published by Elsevier B.V.

Research in quality management was promoted by the development of national and international quality standards such as the European Foundation for Quality Management (EFQM) excellence model and ISO 9001 (Araújo and Sampaio, 2014; Castka, 2018; Martín-Gaitero and Escrig-Tena, 2018). These quality models provide a framework for organizations to assess the effectiveness of quality management practices, and to determine areas of improvement that are oriented towards accomplishing balanced results for all stakeholders (Bou-Llusar et al., 2009). One initiative designed to improve quality that has not received empirical scrutiny is the Malcolm Baldrige National Quality Award (MBNQA), arguably the most prestigious quality award that can be attained by healthcare providers (Lin and Su, 2013). The MBNQA is given annually by the National Institute of Standards and Technology (NIST), a division of the Department of Commerce, to applicant organizations across six industry sectors (manufacturing, service, small business, education, healthcare, and non-profit). The MBNQA was created in 1987 to foster the competitiveness of U.S. companies, and in 1999 it was expanded to include healthcare providers after a limited 4-year pilot that debuted in 1995. Since its introduction, over 1500 organizations across all categories have applied for the award, with 19 healthcare organizations winning the award since 2002 (Baldrige Award Recipient Information, 2015). Winning the MBNQA brings public acclaim (National Institute of Standards and Technology (NIST), 1995), with U.S. Commerce Secretary Penny Pritzker stating about the 2014 MBNQA award winners, “Today's honorees are the role models of innovation, sound management, employee and customer satisfaction, and results. I encourage organizations in every sector to follow their lead.” Since the award expanded to the healthcare sector, healthcare providers have been particularly keen to apply for the MBNQA, with healthcare providers now comprising approximately 50% of the applicants (Foster and Chenoweth, 2011). However, to date, there is limited evidence about the program's effectiveness in improving healthcare quality. This is primarily because the data on the Baldrige assessment of healthcare organizations was treated as confidential and consequently not publicly available.

A review of the literature on healthcare quality demonstrates the inherent complexity of quality in the healthcare industry, which makes the healthcare industry unique in terms of how quality management practices should be implemented (Vogus and McClelland, 2016; Bortolotti et al., 2018). First, providing a specific treatment path for each patient, along with the heterogeneity of customers (patients), requires a high level of customized care that increases complexity in the quality of care (Sofaer and Firminger, 2005). Second, the knowledge asymmetry (knowledge gap) between the healthcare provider and the patient adds to the complexity of the entire healthcare process (Dempsey et al., 2014). While healthcare organizations have tried to manage this complexity by engaging patients and their families in the process, doing so adds further complexity to healthcare delivery and to ensuring healthcare quality (Abbott, 1991, 1993). Third, compared to other industries, both customers (patients) and organizations (healthcare providers) are exposed to high risks and costs associated with the services performed, where the cost of failure is significant (Sofaer and Firminger, 2005; Vogus and McClelland, 2016). Fourth, healthcare organizations must operate under specific regulatory procedures and protocols to ensure high-quality experiences for customers (Register, 2011). Finally, unlike other industries, service delivery can happen over a longer time horizon and may involve different forms of treatment, which may affect customers’ perception of the quality of care in the healthcare industry (Golin et al., 1996; Sofaer and Firminger, 2005). Thus, understanding the antecedents and drivers of healthcare quality and quality results has significant impact on both healthcare organizations and patients.

The objective of this study is to examine the impact of quality practices associated with the Baldrige model on quality results in the healthcare industry using the Baldrige data (independent reviewers’ scores). Surprisingly, empirical studies on the impact of quality management and the Baldrige model in the healthcare industry are rare. A comprehensive search of the EBSCO and ScienceDirect databases using the keyword “healthcare quality” suggests that Meyer and Collier (2001) conducted the only empirical assessment of the relationship between quality practices and quality results, using self-reported data on a survey instrument adapted from the Baldrige Healthcare Criteria. Thus, it is still uncertain whether using the Baldrige model and implementing the practices associated with quality management as prescribed by the Baldrige model can lead to improved quality results in healthcare organizations.

This paper addresses four major gaps in the literature on quality management and healthcare quality. First, in contrast to previous studies where surveys were used to collect data from firms regarding their quality practices, this is the first study that investigates the linkage between quality practices and quality results using independent reviewers' scores. Second, this study examines whether the Baldrige model is a valid and reliable model for the assessment of quality in healthcare organizations, providing empirical evidence on how healthcare organizations can improve their quality by implementing the Baldrige criteria. Third, this study evaluates the determinants of quality results in the healthcare industry using data on the quality performance of healthcare organizations that applied for the Baldrige award from 1999 to 2006, which responds to the call for more longitudinal studies in the assessment of service quality (Subramony, 2009). Finally, by using the independent reviewers’ assessment of the quality of care, this study can control for rater bias in customer satisfaction, thereby improving the quality of the data and the rigor of the findings using more valid, reliable, and objective measures of service quality (Schneider et al., 2005; Hekman et al., 2010; Subramony and Pugh, 2015). By using a more comprehensive, objective, and fine-grained set of measures such as the Baldrige model to assess quality (Sofaer and Firminger, 2005), this study can provide a more nuanced explanation of how different organizational practices and mechanisms can improve the quality of care (Jenkinson et al., 2002; Vogus and McClelland, 2016), and identifies best practices that organizations can use to improve quality results (Escrig and Menezes, 2015).

2. Previous studies of the Baldrige model in the healthcare industry

Shortell et al. (1995) conducted an empirical study of quality practices and quality outcomes in healthcare organizations, using a survey instrument they developed based on the Baldrige model. Their study shows that 54% of the variation in quality implementation is explained by culture and implementation approach (employee, physician, and administrative orientation and involvement) toward quality systems, indicating the importance of leadership and human resource management in quality implementation in hospitals. Carman et al. (1996) assessed the factors that lead to successful implementation of quality management systems in hospitals. Using the scales developed by Shortell et al. (1995), they did not find any significant relationship between the Baldrige constructs and performance outcomes. It should be noted that the constructs used by Shortell et al. (1995) and Carman et al. (1996) are only loosely based on the Baldrige Health Care Criteria; thus, their studies may not be an accurate representation of the application of the Baldrige model to healthcare organizations (Meyer and Collier, 2001).

In another study, Jennings and Westfall (1994) developed a self-assessment tool (survey instrument) for hospitals that can be used for benchmarking and improvement purposes based on the Baldrige guidelines. They report that the survey instrument can be used as a valid and reliable tool to assess quality systems in hospitals; however, their study poses two limitations: their assessment of the reliability and validity of the survey instrument is based solely on Cronbach's alpha values, and they use employee self-reported data to assess the reliability of the survey instrument.

Meyer and Collier (2001) conducted the only study that used the Baldrige model to examine the linkage between quality practices and quality results in healthcare organizations. Their findings provide important insights into the relevance of the Baldrige model in the healthcare industry. They report that 1) Leadership and Information and analysis are significantly linked with Organizational performance results, 2) Human resource development and management and Process management are significantly linked with Customer satisfaction, and 3) the Baldrige Health Care model is a valid and reliable measurement model. It should be noted that while Meyer and Collier (2001) provide important insights into ways to improve quality in healthcare organizations, they do not fully capture the essence and dynamics of the Baldrige Health Care Criteria, for several reasons:

1) They use the 1995 Health Care Pilot Criteria model, thereby failing to capture changes and modifications to the Baldrige Health Care Criteria in subsequent years; 2) while the survey instrument was developed based on the Baldrige criteria, certain modifications were made in order to make it suitable for cross-sectional survey administration; 3) the data are based on self-reported surveys from hospitals, which do not necessarily represent quality implementation based on the Baldrige guidelines; and 4) because of the cross-sectional nature of the study, it may not be able to capture whether the determinants of quality results in healthcare organizations remain stable over time. These limitations make it difficult to properly assess the relationships between the Baldrige criteria and their impact on quality results.

3. Assessment of theoretical foundations of the Baldrige model in the healthcare industry

Several studies have used quality management models and have established theoretical foundations underlying the Baldrige criteria. Using self-reported data on quality practices across several industries, Flynn and Saladin (2001) showed that the Baldrige criteria are sound, robust, and have been appropriately revised over time to address the evolving nature of quality. Nevertheless, all of these findings are based on data collected from survey studies associated with quality management, and the validity of the Baldrige model using the Baldrige data has not been examined. It is important to note that the data for survey research is obtained through a specific set of questions aimed at capturing the perceptions of quality managers on quality practices. In a Baldrige assessment, independent reviewers assign scores to the Baldrige criteria based on specific guidelines and procedures. These independent reviewers visit many firms, review documents and reports, and talk to managers and personnel; the reviewers are thus able to make an informed evaluation regarding the level of implementation of quality practices across an organization. There is consistency in the assessment, since independent reviewers follow specific guidelines to evaluate an organization's quality practices. This is a unique evaluation

Table 1
Baldrige model hypotheses and justifications.

H2a: Leadership is positively related to management of process quality.
Justification: According to the Baldrige model, leadership has a direct effect on process quality. Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that quality leadership is significantly related to process quality. The significant role of leadership in improving healthcare quality is addressed in the literature (Gilmartin and D'Aunno, 2007; Withanachchi et al., 2007; Naveh and Marcus, 2004).

H2b: Leadership is positively related to information and analysis.
Justification: According to the Baldrige model, leadership has a direct effect on information and analysis. Wilson and Collier (2000) and Meyer and Collier (2001) showed that quality leadership is significantly related to information and analysis.

H2c: Leadership is positively related to human resource development and management.
Justification: According to the Baldrige model, leadership has a direct effect on human resource development and management. Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that quality leadership is significantly related to human resource development and management.

H2d: Leadership is positively related to strategic quality planning.
Justification: According to the Baldrige model, leadership has a direct effect on strategic quality planning. Wilson and Collier (2000) showed that quality leadership is significantly related to strategic quality planning. In healthcare organizations, leadership has been shown to have a direct impact on strategic planning (Mosadeghrad, 2015).

H3a: Management of process quality is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000), Meyer and Collier (2001), and Pannirselvam and Ferguson (2001) showed that process quality is significantly related to customer focus and satisfaction.

H3b: Management of process quality is positively related to quality and operational results.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that process quality is significantly related to quality and operational results. Placing a low priority on continuous quality improvement has a negative impact on quality results (Chan and Ho, 1997).

H4a: Information and analysis is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that information and analysis is significantly related to customer focus and satisfaction. Healthcare organizations must find ways in which IT can assist the delivery of high-quality patient care.

H4b: Information and analysis is positively related to quality and operational results.
Justification: Meyer and Collier (2001) showed that information and analysis is significantly related to quality and operational results. Bardhan and Thouin (2013) showed that information technology can improve quality and reduce operational costs in hospitals.

H5a: Human resource development and management is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that human resource management is significantly related to customer focus and satisfaction. Lack of incentives and human resources practices leads to ineffective quality outcomes in healthcare organizations (Alexander et al., 2006).

H5b: Human resource development and management is positively related to quality and operational results.
Justification: Wilson and Collier (2000) and Pannirselvam and Ferguson (2001) showed that human resource management is significantly related to quality and operational results. The centrality of strategic human resource management in improving quality results is discussed by Gowen et al. (2006a) and Savino and Batbaatar (2015).

H6a: Strategic quality planning is positively related to customer focus and satisfaction.
Justification: Wilson and Collier (2000) and Meyer and Collier (2001) showed that strategic quality planning is significantly related to customer focus and satisfaction.

H6b: Strategic quality planning is positively related to quality and operational results.
Justification: Wilson and Collier (2000) showed that strategic quality planning is significantly related to quality and operational results. To improve quality results, healthcare managers should integrate quality as a strategic priority in their organizations' vision, policies, and long-term strategies (Mosadeghrad, 2015).



process that does not exist in typical survey research designs (Evans, 2010; Parast, 2015).

In order to examine the validity of the Baldrige model, the first step is to examine its theoretical foundations. These foundations can be examined through an assessment of the measurement model. Based on the findings of previous studies on the Baldrige criteria using quality management survey instruments (Wilson and Collier, 2000; Flynn and Saladin, 2001; Pannirselvam et al., 1998; Pannirselvam and Ferguson, 2001), our first hypothesis is that the Baldrige model is a valid and reliable model:

H1. The Baldrige model for quality is a valid and reliable model for assessing quality management in the healthcare industry.

Several studies using the Baldrige criteria have established the link between leadership and other Baldrige categories. With reference to the Baldrige model, and borrowing from previous studies that examined the linkage between the Baldrige criteria and quality performance (e.g., Pannirselvam et al. (1998), Wilson and Collier (2000), and Pannirselvam and Ferguson (2001)), we examine the relationships among the Baldrige criteria. To establish these relationships and to develop the hypotheses, we review the literature on Baldrige and quality management. Table 1 provides a review of previous studies that examined the relationship between quality management practices and organizational quality results. This leads to the development of several hypotheses that relate quality management practices to quality results for the Baldrige model (H2a through H6b).

Due to the lack of clarity on the relationships between the Baldrige criteria, previous studies have not provided a clear understanding of how the relationships between the Baldrige criteria influence firm performance and business results (Wilson and Collier, 2000; Evans, 2010). The Baldrige guidelines do not clearly specify the linkages between the criteria; the only clear relationship in the Baldrige model is the influence of leadership on the other categories. Thus, we build our structural model of the relationships among the Baldrige criteria based on a review of the literature on quality management and the Baldrige criteria, which is presented in Table 1. The structural model used for testing the hypotheses is provided in Fig. 1.
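As a compact way to see the hypothesized structure, the twelve paths in Table 1 (and Fig. 1) can be transcribed as source-target pairs. This encoding is purely illustrative, not part of the study; the construct names follow the paper.

```python
# Hypothesized paths H2a-H6b, transcribed from Table 1 and Fig. 1.
# The dict layout itself is just an illustrative encoding.
HYPOTHESES = {
    "H2a": ("Leadership", "Management of Process Quality"),
    "H2b": ("Leadership", "Information and Analysis"),
    "H2c": ("Leadership", "HR Development and Management"),
    "H2d": ("Leadership", "Strategic Quality Planning"),
    "H3a": ("Management of Process Quality", "Customer Focus and Satisfaction"),
    "H3b": ("Management of Process Quality", "Quality and Operational Results"),
    "H4a": ("Information and Analysis", "Customer Focus and Satisfaction"),
    "H4b": ("Information and Analysis", "Quality and Operational Results"),
    "H5a": ("HR Development and Management", "Customer Focus and Satisfaction"),
    "H5b": ("HR Development and Management", "Quality and Operational Results"),
    "H6a": ("Strategic Quality Planning", "Customer Focus and Satisfaction"),
    "H6b": ("Strategic Quality Planning", "Quality and Operational Results"),
}

# Leadership is the only exogenous driver; every other construct is
# downstream of it, mirroring the model's "only clear relationship".
sources = {s for s, _ in HYPOTHESES.values()}
targets = {t for _, t in HYPOTHESES.values()}
assert "Leadership" in sources and "Leadership" not in targets
```

Listing the paths this way also makes it easy to verify that the two outcome constructs (Customer Focus and Satisfaction, Quality and Operational Results) never appear as sources.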

4. Variables and measures

The following variables are used to assess the relationships among the Baldrige criteria. Further details about the nature and scope of these variables can be obtained at the following link: https://www.nist.gov/baldrige/baldrige-criteria-commentary-health-care.

Leadership. This category assesses how senior leaders’ personal actions and the organization’s governance system guide and sustain the organization.

Strategy. This category assesses how organizations develop strategic objectives and action plans, implement them, change them if circumstances require, and measure progress. The assessment covers strategic planning pertaining to patient-focused excellence, operational performance improvement and innovation, and organizational learning and learning by workforce members.

Customers. This category asks how the organization engages patients and other customers for long-term marketplace success, including how the organization listens to the voice of the customer, meets and exceeds patients’ and other customers’ expectations, and builds relationships with patients and other customers.

Measurement, Analysis, and Knowledge Management. This category is the “brain center” for the alignment of an organization’s operations with its strategic objectives. It is the main point within the Health Care Criteria for all key information on effectively measuring, analyzing, and improving performance and managing organizational knowledge to drive improvement, innovation, and organizational competitiveness. Central to this use of data and information are their quality and availability. Furthermore, since information, analysis, and knowledge management might themselves be primary sources of competitive advantage and productivity growth, this category also includes such strategic considerations.

Workforce. This category addresses key workforce practices: those directed toward creating and maintaining a high-performance environment and toward engaging an organization's workforce to enable it and the organization to adapt to change and succeed.

[Fig. 1. Structural model and hypotheses. The figure depicts Leadership driving Management of Process Quality (H2a), Information and Analysis (H2b), HR Development and Management (H2c), and Strategic Quality Planning (H2d); these four constructs in turn drive Customer Focus and Satisfaction (H3a, H4a, H5a, H6a) and Quality and Operational Results (H3b, H4b, H5b, H6b).]

Operations. This category assesses how an organization focuses on its work, the design and delivery of health care services, innovation, and operational effectiveness to achieve organizational success now and in the future.

Results. This category provides a systems focus that encompasses all results necessary to sustaining an enterprise: the key process and health care results, the patient- and other customer-focused results, the workforce results, the leadership and governance system results, and the overall financial and market performance.

5. Methodology

The data for this study was collected from the National Institute of Standards and Technology (NIST) (http://www.nist.gov/baldrige/about/for_researchers.cfm), which provides the independent evaluators’ scores from 1990 to 2006. This provides a unique opportunity to address theoretical and causal relationships of the Baldrige framework. The soundness, robustness, and objectivity of the review process ensure a high level of reliability in the data (Evans, 2010). Information about the Baldrige assessment for healthcare organizations is provided at the following link: https://www.nist.gov/baldrige/baldrige-criteria-commentary-health-care.

We used structural equation modeling (SEM) for model validation and assessment. SEM is a family of statistical methods that allow the simultaneous assessment of complex relationships between one or more independent variables and one or more dependent variables. In addition, SEM combines factor analysis and multiple regression analysis to analyze the structural relationships between measured variables and latent constructs (Byrne, 2009; Bagozzi and Yi, 2012).
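As a rough illustration of what estimating one structural path entails (not the authors' actual SEM procedure, which models all latent constructs simultaneously), the sketch below fits a single hypothesized path with ordinary least squares on synthetic normalized scores. All values are made up for illustration.

```python
import numpy as np

# Illustrative only: approximate one structural path
# (Leadership -> Management of Process Quality) with OLS.
rng = np.random.default_rng(0)
n = 161                                  # sample size reported in Section 5.1
leadership = rng.uniform(0.2, 0.9, n)    # synthetic normalized examiner scores
process_quality = 0.6 * leadership + rng.normal(0, 0.05, n)

# Design matrix: intercept column plus the predictor.
X = np.column_stack([np.ones(n), leadership])
beta, *_ = np.linalg.lstsq(X, process_quality, rcond=None)
# beta[1] recovers a path coefficient near the true value of 0.6.
```

A full SEM would instead estimate measurement loadings and all structural paths jointly, with fit indices; this regression stand-in only shows the directional-path idea behind hypotheses such as H2a.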

5.1. Sample

The sample for this study consists of all healthcare organizations that applied for the Baldrige award between 1999 and 2006. We removed forty-four records for healthcare organizations that applied for the 1995 award, because that was the pilot program. In total, the publicly available data for MBNQA applicants in the healthcare category from 1999 to 2006 yielded 161 observations for healthcare organizations. The sample sizes for each year are: N1999 = 9, N2000 = 8, N2001 = 8, N2002 = 17, N2003 = 19, N2004 = 22, N2005 = 33, and N2006 = 45.

Table 2 provides descriptive statistics (mean and standard deviation) for the healthcare organizations for each year. Because the Baldrige model assigns weights to each construct, the data was normalized by dividing the independent examiners' scores by the maximum score in order to provide consistency across the constructs. While the total available points for the Baldrige assessment is 1000, the distribution of the points does not weight the seven categories equally. For example, the Leadership category has 120 points, while Strategic Planning has 80 points. The normalization process gives each construct the same range of measurement and provides consistency to the data: each observation is divided by the total value allocated to the given construct by the Baldrige assessment (e.g., dividing an organization's Leadership score by 120, or its Strategic Planning score by 80, to obtain the normalized values). Thus, after normalization, each construct value ranges between zero and one. To examine the normality of the data, we calculated statistics for skewness (asymmetry) and kurtosis (peakedness). Values for skewness and kurtosis between −1.5 and +1.5 are considered acceptable evidence of a normal univariate distribution (Tabachnick and Fidell, 2013). For the Baldrige data, the skewness and kurtosis statistics range between −0.715 and 0.605, suggesting that the requirement of normality is met.
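The normalization and normality screen described above can be sketched in a few lines of Python. The point allocations (e.g., Leadership = 120) follow the Baldrige scheme cited in the text, but the raw examiner scores below are invented stand-ins for illustration; the study's actual data is not reproduced here.

```python
# Sketch of the normalization and normality screen described in the text.
# The raw scores are hypothetical; only the 120-point Leadership
# allocation comes from the Baldrige scheme.

def normalize(raw_scores, max_points):
    """Divide each examiner score by the category's allocated points,
    mapping every construct onto a common 0-1 range."""
    return [s / max_points for s in raw_scores]

def skewness(xs):
    """Population skewness (asymmetry)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

def excess_kurtosis(xs):
    """Population excess kurtosis (peakedness), 0 for a normal curve."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 4 for x in xs) / n - 3.0

leadership_raw = [55, 62, 48, 70, 58, 66, 60, 52]  # hypothetical scores out of 120
leadership = normalize(leadership_raw, 120)

assert all(0.0 <= x <= 1.0 for x in leadership)
# Accept the construct as approximately normal if both statistics
# fall in the +/-1.5 band used in the paper.
ok = (-1.5 <= skewness(leadership) <= 1.5
      and -1.5 <= excess_kurtosis(leadership) <= 1.5)
print(ok)  # True for this illustrative sample
```

Because skewness and kurtosis are scale-invariant, the normality screen gives the same verdict on raw and normalized scores; the normalization matters only for putting the seven constructs on a common 0-1 range.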

Some preliminary insights can be drawn from these statistics. As the descriptive statistics show, the mean values of several constructs improve over time, which could be attributed to the widespread application of quality systems across healthcare organizations and the attention given by healthcare organizations to quality improvement (Meyer and Collier, 2001). For example, the average score for Leadership increased over seven years from 0.46 in 1999 to 0.53 in 2006. The same pattern is observed for the other key constructs of the Baldrige model: Strategic planning (from 0.37 to 0.49), Customer focus and satisfaction (from 0.40 to 0.52), Information and analysis (from 0.41 to 0.54), Human resource development and management (from 0.43 to 0.53), Process management (from 0.38 to 0.53), and Quality and operational results (from 0.34 to 0.43).

Fig. 2 provides a longitudinal overview of the change in the values of the seven constructs of quality management in the Baldrige model. As shown, all constructs improved from their initial 1999 assessment to 2006. This suggests that, over time, healthcare organizations were able to improve their quality management practices, as evidenced by the independent reviewers' scores.

Table 3 provides the mean, standard deviation, and correlations for the entire sample of 161 observations. As shown, the overall averages for the quality management constructs lie between 0.42 and 0.51 (out of a maximum score of 1.00 for each construct), which suggests significant gaps in quality management implementation by healthcare organizations. Another observation is the significant correlations among quality management practices, which further supports the Baldrige guidelines with respect to the interrelationships among variables in the Baldrige model.

5.2. Measurement model: validation and assessment

We begin our analysis with an assessment of the validity of the Baldrige model for the healthcare industry (H1). Construct validity measures the correspondence between a concept and the set of items used to measure the construct (Churchill, 1979; Brahma, 2009).

Table 2
Descriptive statistics (mean, with standard deviation in parentheses).

Construct | 1999 (N=9) | 2000 (N=8) | 2001 (N=8) | 2002 (N=17) | 2003 (N=19) | 2004 (N=22) | 2005 (N=33) | 2006 (N=45)
Leadership (LEA) | .46 (.10) | .45 (.14) | .44 (.09) | .49 (.10) | .48 (.09) | .52 (.10) | .54 (.10) | .53 (.09)
Strategic Planning (STR) | .37 (.11) | .42 (.13) | .39 (.11) | .43 (.11) | .42 (.10) | .48 (.09) | .49 (.12) | .49 (.11)
Customer Focus and Satisfaction (CFS) | .40 (.08) | .44 (.12) | .48 (.10) | .48 (.10) | .45 (.10) | .52 (.09) | .54 (.09) | .52 (.11)
Information and Analysis (INF) | .41 (.11) | .43 (.19) | .45 (.11) | .45 (.12) | .48 (.09) | .51 (.08) | .54 (.10) | .54 (.09)
Human Resource Development and Management (HRM) | .43 (.07) | .43 (.10) | .46 (.10) | .41 (.09) | .46 (.08) | .50 (.08) | .54 (.08) | .53 (.08)
Process Management (OPR) | .38 (.09) | .41 (.13) | .38 (.07) | .43 (.12) | .45 (.10) | .50 (.10) | .55 (.11) | .53 (.09)
Quality and Operational Results (RES) | .34 (.10) | .32 (.13) | .40 (.11) | .41 (.13) | .38 (.12) | .42 (.13) | .42 (.11) | .43 (.11)


This process starts with the assessment of content validity (O'Leary-Kelly and Vokurka, 1998). Content validity refers to the extent to which a measure represents all aspects of a given concept (Nunnally and Bernstein, 1994; Rossiter, 2008). One approach to ensuring content validity is to review the literature and use experts' opinions on a given construct (Churchill, 1979; Kerlinger, 1986). The Baldrige model satisfies the requirements of content validity because 1) it has been developed based on the principles and theories of quality management, and 2) it has been reviewed by scholars and practitioners in quality management. This procedure for assessing content validity was used in previous studies on quality management (Dow et al., 1999). In addition, the Baldrige healthcare criteria are specifically developed to measure quality practices in healthcare organizations; thus, we conclude that the Baldrige healthcare criteria meet the requirements of content validity.

5.3. Confirmatory factor analysis for the model

Hair et al. (2009) pointed to the importance of conducting confirmatory factor analysis (CFA) for the full measurement model. We conducted confirmatory factor analysis for the full model using a variety of goodness-of-fit statistics to determine the overall fit of the model (χ2/df = 1.66, RMSEA = 0.06, CFI = 0.97). All fit indices are within the recommended range, an indication of an acceptable measurement model (Kaynak, 2003; Hu and Bentler, 1999). Therefore, there is enough empirical evidence to accept the hypothesis that the Baldrige model is theoretically robust in terms of measuring quality practices in the healthcare industry, providing support for the first hypothesis (H1).
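The fit indices reported above can be related by standard closed forms; the sketch below uses Steiger's usual RMSEA formula. The chi-square and degrees of freedom shown are hypothetical values chosen to reproduce a χ2/df near 1.66 (the paper does not report the raw χ2 or df), while N = 161 is the study's sample size.

```python
import math

def chi2_ratio(chi2, df):
    """Normed chi-square: model chi-square over its degrees of freedom."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Steiger's RMSEA from the model chi-square, degrees of freedom,
    and sample size (standard closed form; 0 when chi2 <= df)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical chi2 and df, chosen only to match chi2/df = 1.66;
# n = 161 is the sample size reported in Section 5.1.
chi2, df, n = 33.2, 20, 161
print(round(chi2_ratio(chi2, df), 2))  # 1.66
print(round(rmsea(chi2, df, n), 3))
```

A χ2/df below the conventional 2-3 cutoff and an RMSEA near 0.06 are the usual benchmarks for acceptable fit (Hu and Bentler, 1999), which is the check the passage describes.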

An examination of the standardized loadings shows that they are all significant, providing initial evidence of convergent validity (Table 4). Next, reliability values (Cronbach's alpha) for the constructs were calculated. A reliability value of 0.7 or higher is acceptable for survey research (Nunnally and Bernstein, 1994; Hair et al., 2009). All reliability measures are within the acceptable range. To assess convergent validity, the average variance extracted (AVE) for each construct was calculated; the values are presented in Table 4. The AVE is the mean variance extracted from the item loadings on a construct and is used as an indicator of convergence (Fornell and Larcker, 1981; Carlson and Herdman, 2012; Agarwal, 2013). A value of 0.5 or higher indicates good convergence (Hair et al., 2009). All constructs have AVEs above the recommended threshold of 0.5. To establish discriminant validity, the AVE values for each pair of constructs were compared with the square of the correlation estimate between the two constructs (Fornell and Larcker, 1981; Henseler et al., 2015). If the AVE values are greater than the squared correlation estimate between the constructs, discriminant validity of the two constructs is supported. An examination of the AVE values and the correlation estimates supports the existence of discriminant validity (Harris, 2004). A review of the correlations shows that they are statistically significant. Therefore, the underlying assumption of the Baldrige model that "everything is related to everything else" appears to be valid in the healthcare industry.
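The AVE computation and Fornell-Larcker discriminant-validity check described above can be sketched directly from the reported numbers. The item loadings and AVEs below come from Table 4 (Leadership items q11-q12, Strategic Planning items q21-q22) and the LEA-STR correlation from Table 3; the helper function names are our own.

```python
# Sketch of the Fornell-Larcker checks described in the text, using
# the loadings from Table 4 and the LEA-STR correlation from Table 3.

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def discriminant_ok(ave_a, ave_b, corr_ab):
    """Fornell-Larcker criterion: both AVEs must exceed the squared
    correlation between the two constructs."""
    return min(ave_a, ave_b) > corr_ab ** 2

ave_lea = ave([0.89, 0.83])   # Leadership items q11, q12
ave_str = ave([0.94, 0.89])   # Strategic Planning items q21, q22
corr = 0.821                  # LEA-STR correlation (Table 3)

print(round(ave_lea, 2))                       # 0.74, matching Table 4
print(discriminant_ok(ave_lea, ave_str, corr)) # True
```

Even for the most highly correlated pair (LEA-STR, r = .821, so r² ≈ .674), both AVEs (.74 and .84) exceed the squared correlation, which is exactly the discriminant-validity pattern the passage reports.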

5.4. Structural model and testing the hypotheses

Control Variables: We use two control variables in this study.

[Fig. 2 plots the normalized scores (y-axis from 0 to 0.6) of the seven constructs for each application year from 1999 to 2006: Leadership (LEA), Strategic Planning (STR), Customer Focus and Satisfaction (CFS), Information and Analysis (INF), Human Resource Development and Management (HRM), Process Management (OPR), and Quality and Operational Results (RES).]

Fig. 2. Change in quality assessment in healthcare organizations, 1999–2006.

Table 3
Correlations.

Construct | Mean | S.D. | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Leadership | .51 | .09 | 1.00 | | | | | |
2. Strategic planning | .46 | .11 | .821∗∗∗ | 1.00 | | | | |
3. Customer focus and satisfaction | .50 | .09 | .774∗∗∗ | .776∗∗∗ | 1.00 | | | |
4. Information and analysis | .52 | .10 | .764∗∗∗ | .762∗∗∗ | .733∗∗∗ | 1.00 | | |
5. Human resource development and management | .50 | .09 | .729∗∗∗ | .705∗∗∗ | .725∗∗∗ | .720∗∗∗ | 1.00 | |
6. Process management | .49 | .11 | .745∗∗∗ | .767∗∗∗ | .694∗∗∗ | .740∗∗∗ | .721∗∗∗ | 1.00 |
7. Quality and operational results | .42 | .12 | .722∗∗∗ | .682∗∗∗ | .643∗∗∗ | .698∗∗∗ | .654∗∗∗ | .639∗∗∗ | 1.00

∗∗∗p < .01.


The first control variable is the industry (healthcare organizations). The second control variable is the application year. To control for the application year, we construct a vector of seven dummy variables (Y2000 through Y2006), each representing an application year, using 1999 as the reference year.
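The dummy-variable coding just described can be sketched as follows: one 0/1 indicator per application year after the reference year, with a 1999 applicant coded as all zeros so its effect is absorbed into the intercept.

```python
# Sketch of the year-dummy coding described in the text: seven
# indicators for 2000-2006, with 1999 as the reference category.

YEARS = list(range(2000, 2007))  # Y2000 ... Y2006

def year_dummies(application_year):
    """Return the seven 0/1 indicators for an application year;
    a 1999 applicant gets all zeros (the reference category)."""
    return [1 if application_year == y else 0 for y in YEARS]

print(year_dummies(1999))  # [0, 0, 0, 0, 0, 0, 0]
print(year_dummies(2003))  # [0, 0, 0, 1, 0, 0, 0]
```

Dropping one category (here, 1999) avoids perfect collinearity with the intercept; the coefficient on each remaining dummy is then interpreted relative to 1999 applicants, as in Table 5.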

Statistical Procedure: To determine the relationship between quality practices and quality results, the structural model proposed in Fig. 1 is examined using structural equation modeling (SEM) with the maximum likelihood procedure.

Table 5 presents the estimate for each path (regression coefficient) and the corresponding p-values. Consistent with our hypotheses (H2a–H2d), we find support for a significant relationship between Leadership and Operational results (ß = 0.825, p < .01), Leadership and Information and analysis (ß = 0.935, p < .01), Leadership and Human resource development (ß = 0.788, p < .01), and Leadership and Strategic planning (ß = 0.914, p < .01). We also find support for the relationships between Information and analysis and Quality results (H4b: ß = 0.567, p < .10), Human resource management and Customer focus and satisfaction (H5a: ß = 0.354, p < .05), Human resource management and Quality results (H5b: ß = 0.285, p < .10), and Strategic planning and Customer focus and satisfaction (H6a: ß = 0.344, p < .05). The proposed structural model explains 74% of the variability in Quality results and 87% of the variability in Customer focus and satisfaction. We discuss these findings, as well as the implications for theory and practice, in Section 6.

5.5. Robustness tests

Non-Normality. While the assumption of normality is required for regression analysis, non-normality of the data does not affect the consistency of the parameter estimates in SEM (Bollen, 1989; Sharma et al., 1989; Lei and Lomax, 2005). Nevertheless, our assessment of normality indicated that the assumptions of normality of the data are met.

Heteroscedasticity. Heteroscedasticity refers to the case where the variance of the regression disturbances is not constant across observations, leading to inefficient estimates and biased standard errors (Greene, 2012). To address the potential bias associated with heteroscedasticity (or inequality of the variance of the error term), we plotted the standardized residuals against the standardized predicted values. We did not find any evidence of heteroscedasticity in our data.

Multicollinearity. To ensure that the results are not sensitive to correlation among variables, we examined multicollinearity among the variables using regression analysis. All the VIF values generated from the regression analysis are well below 0.50, which indicates that multicollinearity is not a major concern in this study (Belsley et al., 1980; Hair et al., 2009).
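A VIF check of the kind described above can be sketched for the two-predictor case, where VIF = 1 / (1 − r²) for the correlation r between the predictors (in general, R² comes from regressing each predictor on all the others). The correlation used below is the LEA-STR value from Table 3; note that a VIF is never below 1, and commonly published cutoffs are thresholds like 5 or 10.

```python
# Sketch of a variance inflation factor (VIF) check. For two
# predictors, VIF = 1 / (1 - r^2), where r is their correlation.
# r = .821 is the LEA-STR correlation from Table 3.

def vif_two_predictors(r):
    """VIF shared by two predictors with pairwise correlation r."""
    return 1.0 / (1.0 - r ** 2)

r_lea_str = 0.821
print(round(vif_two_predictors(r_lea_str), 2))  # 3.07
```

Even for the most strongly correlated pair of constructs, the implied VIF of about 3.1 sits below the conventional cutoff of 5, consistent with the conclusion that multicollinearity is not a major concern.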

Comparisons with alternative SEM models. To evaluate the fit of our base model, we compared it with alternative SEM models, all with freely estimated paths, so that the results could be compared. In the first alternative model, we added two more direct paths that have been suggested in prior studies in quality management: from Leadership to Customer focus and satisfaction and from Leadership to Quality results (Meyer and Collier, 2001). We wanted to see whether there is a direct effect from Leadership to Customer focus and satisfaction and from Leadership to Quality results. Neither path shows a significant relationship. In addition, the R2 value does not change as a result of adding the two paths, an indication that the added paths do not improve the model. Also, the standardized regression coefficient from Leadership to Quality results was negative, providing strong evidence of model misspecification. Alternative B is a model in which we add a direct path from Strategic planning to Information and analysis. This path is not significant. Thus, using competing models, we were able to demonstrate that the proposed structural model best explains the conceptual model.

Table 4
Properties of the model.

Construct | α | Item | λ | AVE
Leadership (LEA) | .85 | q11 | .89 | .74
 | | q12 | .83 |
Strategic Planning (SP) | .91 | q21 | .94 | .84
 | | q22 | .89 |
Customer Focus and Satisfaction (CFS) | .87 | q31 | .90 | .77
 | | q32 | .86 |
Information and Analysis (INF) | .83 | q41 | .85 | .71
 | | q42 | .84 |
Human Resource Development and Management (HRD) | .88 | q51 | .88 | .72
 | | q52 | .84 |
 | | q53 | .82 |
Process Management (OPR) | .84 | q61 | .88 | .74
 | | q62 | .84 |
Quality and Operational Results (RES) | .89 | q71 | .85 | .76
 | | q72 | .81 |
 | | q73 | .83 |
 | | q74 | .87 |

Table 5
Standardized regression coefficients.

Independent Variable | CFS | RES | STR | INF | HRM | OPR

Controls
Y2000 | .064 | −.049 | .075 | .004 | −.023 | .089
Y2001 | .158 | .140 | .080 | .011 | .070 | −.002
Y2002 | .256 | .193 | .087 | −.089 | −.170∗∗ | .098
Y2003 | .116 | .060 | .055 | .021 | .003 | .121
Y2004 | .133 | −.001 | .124 | .065 | .108 | .186∗∗
Y2005 | .212 | −.077 | .148 | .053 | .199∗ | .369∗∗∗
Y2006 | .160 | −.109 | .186 | .107 | .231∗∗ | .386∗∗∗

Predictors
Leadership | n.s. | n.s. | .914∗∗∗ | .935∗∗∗ | .788∗∗∗ | .825∗∗∗
Strategic quality planning | .344∗ | −.106 | | | |
Information and analysis | .457 | .567∗ | | | |
HR Development | .354∗∗ | .285∗ | | | |
Process Management | −.191 | .217 | | | |

∗p < .10, ∗∗p < .05, ∗∗∗p < .01. n.s. = hypothesis not stated.


6. Discussion

This study presents the first empirical assessment of the Baldrige model in the healthcare industry using the Baldrige data (independent reviewers' scores). It addresses two major gaps in the quality management literature. First, in contrast to previous studies that used self-reported data, this study examines independent reviewers' scores on the Baldrige criteria, which provides a higher level of reliability and validity in terms of assessment, perception, and reporting. Second, the study assesses the impact of quality practices on quality results in healthcare organizations over time.

6.1. Theoretical contributions

This research makes several contributions to operations management, quality management, and healthcare quality. First, the study makes a strong case that the Baldrige model is theoretically sound and robust, and that over time it has maintained a high level of measurement capability for quality management in the healthcare industry. To the best of our knowledge, this is the first empirical assessment of the Baldrige model using the Baldrige data. One important implication of this finding is that healthcare organizations can use the Baldrige model as a self-assessment tool to improve their quality results.

Second, our work builds on prior MBNQA research by explicitly studying applicants' quality scores using independent reviewers' scores. This complements prior research that examined the impact of the Baldrige model in improving quality practices in the healthcare industry using a cross-sectional survey (Meyer and Collier, 2001). In that respect, we provide a more nuanced assessment of the linkages between quality practices and quality results by incorporating into our research design and methodology two important factors that have not been addressed in prior studies: 1) Baldrige assessment data (independent reviewers' scores), which are a more objective, reliable, and valid source of data; and 2) quality performance data spanning seven years, providing a more rigorous assessment of the relationship between quality practices and quality results in healthcare organizations.

A third way our manuscript contributes to theory is by improving our understanding of how healthcare organizations can improve their quality results using the Baldrige model. Consistent with previous studies of quality management implementation in the healthcare industry (Meyer and Collier, 2001; Mosadeghrad, 2015), we found that Leadership is the main driver of quality, with a significant impact on all other quality practices. There are two major findings from this research regarding the role of Leadership in the Baldrige Health Care causal model. First, Leadership has a direct causal influence on each of the components of the Baldrige System, including Information and analysis, Strategic planning, Human resource development and management, and Process management. Improvement in Leadership has a positive and direct impact on each of the Baldrige System categories. This result confirms the Baldrige theory that Leadership drives the system, and supports the findings of Batalden and Stoltz (1993) and Meyer and Collier (2001) that strong support for and commitment to quality from the senior administration in healthcare organizations is the key to quality improvement. The second finding for healthcare organizations is the significance of the causal relationship from Leadership to Information and analysis. The point estimate of the influence of Leadership on Information and analysis (ß = .935, p < .01) is the largest among the Baldrige criteria and represents leadership's strongest influence in comparison to the other system categories (Leadership → Operational results: ß = 0.825; Leadership → Human resources management: ß = 0.788; Leadership → Strategic planning: ß = 0.914). This suggests that in quality-driven healthcare organizations, leaders recognize the critical role of Information and analysis and knowledge management, and the

importance of data-driven decision making.

Our fourth theoretical contribution relates to the significant impact of Information and analysis on Quality results in healthcare organizations. First, Information and analysis has a significant effect on Quality results (ß = 0.567, p < .10), supporting the finding of Meyer and Collier (2001) and the Baldrige theory that "an effective health care system needs to be built upon a framework of measurement, information, data, and analysis" (National Institute of Standards and Technology (NIST), 1995). This suggests that healthcare organizations can improve their quality results by developing effective information systems, which enable them to make informed decisions that support performance outcomes. Second, Information and analysis has a direct impact on Customer focus and satisfaction, indicating that effective use of measurement, information, and data can contribute to the performance of healthcare organizations if they use information and data in their decision-making process. Our findings provide empirical support for the argument put forward by some OM scholars that coordination (an information-exchange relationship) among providers in healthcare delivery is necessary to achieve desirable patient outcomes (Boyer and Pronovost, 2010; Queenan et al., 2011). Thus, if quality improvement is a strategic concern (Fundin et al., 2018), healthcare organizations need to make proper investments in their information systems and knowledge management.

Our fifth contribution is the importance and centrality of Human resource development in improving Customer focus and satisfaction and Quality results in healthcare organizations (Gowen et al., 2006a). In that regard, our empirical findings support the existing anecdotal evidence in the literature that a lack of attention to human resource management can lead to inefficient and poor-quality outcomes in healthcare organizations (Huq and Martin, 2000; Francois et al., 2003; Alexander et al., 2006; Withanachchi et al., 2007; Ozturk and Swiss, 2008). We showed that within the Baldrige model, organizational attention to and investment in human resource management improves patient satisfaction with the quality of care.

Finally, our last contribution relates to the significant relationship between Strategic planning for quality and Customer focus and satisfaction in healthcare organizations. Earlier studies discuss healthcare organizations' lack of attention to strategic planning for quality and their pursuit of a "middle of the road" approach to avoid risks (Gibson et al., 1990; Calem and Rizzo, 1995). The complexity of the healthcare system, along with its highly departmentalized structure, has contributed to the ineffectiveness of many quality management programs and their poor implementation (Jabnoun, 2005; Naveh and Stern, 2005). Given the hierarchical structure, departmentalized setting, and authoritative nature of healthcare organizations, strategic planning and implementation of quality is difficult to achieve (Francois et al., 2003; McNulty and Ferlie, 2002; Abd-Manaf, 2005). Nevertheless, as our empirical analysis suggests, Strategic planning for quality has a significant positive impact on Customer focus and satisfaction (ß = 0.344, p < .05).

Surprisingly, we were not able to find a significant link between Process management and Quality and operational results in healthcare organizations. While this may be counterintuitive, there are two explanations. First, due to the heterogeneity of customers in the healthcare industry, improvement in Quality results is best achieved through customized healthcare delivery programs. Such programs require healthcare professionals to tailor their care services to the specific needs and conditions of patients, adding complexity to healthcare delivery because of the customized nature of the service (Sofaer and Firminger, 2005). Second, empirical studies suggest a trade-off between efficiency and service quality in the operations management literature (Pinker et al., 2000; Sampson and Froehle, 2006; Campbell and Frei, 2010; Xia and Zhang, 2010), and more particularly in healthcare organizations (Mennicken et al., 2011). This is empirically demonstrated by the negative standardized


path coefficient between Process management and Customer focus and satisfaction (ß = −0.191, Table 5). Thus, operational results and efficiency improvements may not necessarily improve quality results, as a consequence of the customization of processes in healthcare organizations.

6.2. Implications for managers

This study provides several insights for managers and decision makers who are responsible for quality programs in healthcare organizations. First, quality managers can use the Baldrige model to reorganize, restructure, and streamline their quality improvement programs. If healthcare organizations are committed to improving their quality outcomes, the Baldrige model provides a robust and comprehensive assessment of quality systems in healthcare organizations. Second, healthcare organizations should recognize the importance of information systems, the availability of timely and accurate data, and the significance of decision making as related to healthcare operations and processes. We showed that Information and analysis has the strongest impact on Quality results in the healthcare industry. This provides evidence of the importance of timely and accurate information exchange to ensure proper decision making, as well as of the involvement of patients in the process, as evidenced by the shift in the healthcare industry from provider-centered care to patient-centered care (Vogus and McClelland, 2016). Third, healthcare organizations should also recognize the importance of human resource management in improving Customer focus and satisfaction and Quality results. With the understanding that healthcare systems, both technologically and administratively, are among the most complex systems in terms of service delivery, healthcare managers should develop human resource management practices that overcome challenges such as a culture of professional dominance, which limits the ability of healthcare organizations to design and implement comprehensive human resource management practices (Mosadeghrad, 2015; Kimberly and Minvielle, 2000).

7. Limitations and future research

This study has several limitations that should be acknowledged. Availability of a larger sample of healthcare organizations could improve the model fit indices and provide more stable parameter estimates. It is important to recognize that recommendations for sample size are based on several criteria, such as the complexity of the model, the distribution of the data, and construct reliability. We should keep in mind that independent examiners' scores were used in this study; the examiners follow specific guidelines and procedures in assigning scores for each item. In that respect, the data has a very high level of reliability and consistency, especially in terms of data quality and the fact that it represents the evaluation of experts. In addition, a high level of construct validity and normality of the data, along with high construct reliability measures, supports the validity of the findings (Hair et al., 2009). In the case of our data, the constructs exhibit high reliability (ranging from 0.83 to 0.91), and the assumption of normality is met. One rule of thumb is that the ratio of the number of observations to the number of parameters should be at least five to one (Russell et al., 1998), which our dataset meets. Thus, the results of this study are generalizable to a larger sample.

Another limitation of this study is the lack of access to the most recent data. The data for the Baldrige model for healthcare organizations is available from 1999 to 2006. It would be very helpful for academic researchers and practitioners to gain access to more recent Baldrige data. Unfortunately, this data is not publicly available; access is limited to data through 2006. In addition, the availability and inclusion of other variables for healthcare organizations, such as the type of healthcare organization, organizational size, and annual revenue, would provide valuable information on the effect of organizational and contextual variables on healthcare quality. In that regard, one possible research study would be to assess

quality management practices at healthcare organizations that won the Baldrige award using a case study approach. Such a study would provide an in-depth understanding of how healthcare organizations were able to achieve superior quality results by pursuing the Baldrige model.

Care should be taken in generalizing the results of this study. While some argue that service quality in the healthcare industry addresses issues relevant to healthcare delivery (complexity, co-production, and intangibility) that have been shown to generalize to other service contexts (Subramony, 2009; Gittell et al., 2010), we should also be mindful of the nature of service quality in healthcare organizations. We expect that the findings of this study can be generalized to industries with service expectations similar to those of the healthcare industry, where there is a significant knowledge gap and information asymmetry between the service provider and the customer (e.g., auto repair, consulting, or law firms).

8. Conclusion

For over three decades, the Baldrige model has been used as a framework for quality management, with the expectation of enhancing quality initiatives in organizations through the implementation of practices that can improve quality in a systematic way. Our understanding of the relevance of the Baldrige criteria in the healthcare industry was limited due to the lack of data. Our objective in this paper was to provide more insight into the relevance of the Baldrige model in healthcare organizations, and to assess how healthcare organizations can improve customer satisfaction and quality results. We caution that our results should be viewed in light of the publicly available data, which limited a more meaningful interpretation of the results. A more nuanced understanding of the relationships among the Baldrige criteria requires access to more detailed information about the organizations, which is not currently available.

Using the Baldrige reviewers' scores, this study addressed some of the key questions on the effectiveness of the Baldrige model and its impact on organizational quality in the healthcare industry. We showed that the Baldrige model is a valid and reliable model for healthcare organizations. Organizations can benefit from implementing the Baldrige model, especially when they have the commitment and support of their top management. Our empirical analysis showed that leadership has a significant impact on implementing the quality practices in the Baldrige model, including strategic planning for quality, information and analysis, human resource development, and management of process quality. In addition, our empirical analysis shows that healthcare organizations can improve their quality of care through investment in information systems and human resource management. Healthcare organizations should recognize the importance of information systems, the availability of timely and accurate data, and the significance of decision making as related to healthcare operations and processes. Furthermore, healthcare organizations should also recognize the importance of human resource management in improving customer focus and satisfaction and in achieving quality results.

Appendix A. Baldrige Healthcare Assessment

A.1. Leadership (Category 1)

This category asks how senior leaders’ personal actions and your governance system guide and sustain your organization.

A.1.1. Senior Leadership
This item asks about the key aspects of your senior leaders' responsibilities, with the aim of creating an organization that is successful now and in the future.

A.1.2. Governance and Societal Responsibilities
This item asks about key aspects of your governance system,


including the improvement of leaders and the leadership system. It also asks how the organization ensures that everyone in the organization behaves legally and ethically, how it fulfills its societal responsibilities, how it supports its key communities, and how it builds community health.

A.2. Strategy (Category 2)

This category asks how you develop strategic objectives and action plans, implement them, change them if circumstances require, and measure progress.

A.2.1. Strategy Development
This item asks how you establish a strategy to address your organization's challenges and leverage its advantages, and how you make decisions about key work systems and core competencies. It also asks about your key strategic objectives and their related goals. The aim is to strengthen your overall performance, competitiveness, and future success.

A.2.2. Strategy Implementation
This item asks how you convert your strategic objectives into action plans to accomplish the objectives and how you assess progress relative to these action plans. The aim is to ensure that you deploy your strategies successfully and achieve your goals.

A.3. Customers (Category 3)

This category asks how you engage patients and other customers for long-term marketplace success, including how you listen to the voice of the customer, serve and exceed patients' and other customers' expectations, and build relationships with patients and other customers.

A.3.1. Voice of the Customer
This item asks about your processes for listening to your patients and other customers and determining their satisfaction and dissatisfaction. The aim is to capture meaningful information in order to exceed your patients' and other customers' expectations.

A.3.2. Customer Engagement
This item asks about your processes for determining and customizing health care service offerings that serve your patients, other customers, and markets; for enabling patients and other customers to seek information and support; and for identifying patient and other customer groups and market segments. The item also asks how you build relationships with your patients and other customers and manage complaints. The aim of these efforts is to improve marketing, build a more patient- and other customer-focused culture, and enhance patient and other customer loyalty.

A.4. Measurement, Analysis, and Knowledge Management (Category 4)

In the simplest terms, category 4 is the "brain center" for the alignment of your operations with your strategic objectives. It is the main point within the Health Care Criteria for all key information on effectively measuring, analyzing, and improving performance and managing organizational knowledge to drive improvement, innovation, and organizational competitiveness. Central to this use of data and information are their quality and availability. Furthermore, since information, analysis, and knowledge management might themselves be primary sources of competitive advantage and productivity growth, this category also includes such strategic considerations.

A.4.1. Measurement, Analysis, and Improvement of Organizational Performance
This item asks how you select and use data and information for performance measurement, analysis, and review in support of organizational planning and performance improvement. The item serves as a central collection and analysis point in an integrated performance measurement and management system that relies on clinical, financial, and other data and information. The aim of performance measurement, analysis, review, and improvement is to guide your process management toward the achievement of key organizational results and strategic objectives, anticipate and respond to rapid or unexpected organizational or external changes, and identify best practices to share.

A.4.2. Information and Knowledge Management
This item asks how you build and manage your organization's knowledge assets and ensure the quality and availability of data and information. The aim of this item is to improve organizational efficiency and effectiveness and stimulate innovation.

A.5. Workforce (Category 5)

This category addresses key workforce practices—those directed toward creating and maintaining a high-performance environment and toward engaging your workforce to enable it and your organization to adapt to change and succeed.

A.5.1. Workforce Environment
This item asks about your workforce capability and capacity needs, how you meet those needs to accomplish your organization's work, and how you ensure a supportive work climate. The aim is to build an effective environment for accomplishing your work and supporting your workforce.

A.5.2. Workforce Engagement
This item asks about your systems for managing workforce performance and developing your workforce members to enable and encourage all of them to contribute effectively and to the best of their ability. These systems are intended to foster high performance, to address your core competencies, and to help accomplish your action plans and ensure your organization's success now and in the future.

A.6. Operations (Category 6)

This category asks how you focus on your organization's work, the design and delivery of health care services, innovation, and operational effectiveness to achieve organizational success now and in the future.

A.6.1. Work Processes
This item asks about the management of your key health care services, your key work processes, and innovation, with the aim of creating value for your patients and other customers and achieving current and future organizational success.

A.6.2. Operational Effectiveness
This item asks how you ensure effective operations in order to have a safe workplace environment and deliver customer value. Effective operations frequently depend on controlling the overall costs of your operations and maintaining the reliability, security, and cybersecurity of your information systems.

A.7. Results (Category 7)

This category provides a systems focus that encompasses all results necessary to sustaining an enterprise: the key process and health care results, the patient- and other customer-focused results, the workforce results, the leadership and governance system results, and the overall financial and market performance.


A.7.1. Health Care and Process Results
This item asks about your key health care and operational performance results, which demonstrate health care outcomes, service quality, and value that lead to patient and other customer satisfaction and engagement.

A.7.2. Customer-Focused Results
This item asks about your patient- and other customer-focused performance results, which demonstrate how well you have been satisfying your patients and other customers and engaging them in loyalty-building relationships.

A.7.3. Workforce-Focused Results
This item asks about your workforce-focused performance results, which demonstrate how well you have been creating and maintaining a productive, caring, engaging, and learning environment for all members of your workforce.

A.7.4. Leadership and Governance Results
This item asks about your key results in the areas of senior leadership and governance, which demonstrate the extent to which your organization is fiscally sound, ethical, and socially responsible.

A.7.5. Financial and Market Results
This item asks about your key financial and market results, which demonstrate your financial sustainability and your marketplace achievements.
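The seven-category structure described in this appendix can be captured in a small data structure. A minimal sketch follows; the point weights are the commonly cited 1,000-point Baldrige scoring allocation and are an assumption here, since the appendix text itself does not state point values.

```python
# Sketch: the seven Baldrige healthcare categories as a weighted scorecard.
# Point weights are an assumption (the commonly cited 1,000-point
# allocation), not taken from the appendix text.
CATEGORIES = {
    "Leadership": 120,
    "Strategy": 85,
    "Customers": 85,
    "Measurement, Analysis, and Knowledge Management": 90,
    "Workforce": 85,
    "Operations": 85,
    "Results": 450,
}

def overall_score(percent_scores: dict[str, float]) -> float:
    """Combine per-category percentage scores (0-100) into a 0-1000 total."""
    return sum(CATEGORIES[name] * percent_scores[name] / 100
               for name in CATEGORIES)

# Example: an organization scoring 50% in every category earns 500 points.
print(overall_score({name: 50 for name in CATEGORIES}))  # 500.0
```

Note how the weighting mirrors the appendix's emphasis: results (Category 7) dominate the total, while the six process categories share the remainder.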


160 www.greenbranch.com | 800-933-3711

PRACTICE MANAGEMENT

W e have a big problem with waste in this country. An estimated $750 billion—or 30 cents of every dollar spent on healthcare—is spent on unnecessary care, according to the

National Academy of Medicine, formerly the Institute of Medicine.1 Research indicates that almost every family in the United States has experienced over-testing and over- treatment, resulting in costs that hit the average American household hard. For some families, inappropriate care means less money to spend on food, clothing, and shelter.

In 2012, approximately one in six families (16.5%) had difficulty paying their medical bills within a 12-month period. Within that group, one in 11 families (8.9%) had outstanding medical bills they were unable to pay at all (Figure 1).2 Unwarranted drug prescriptions, surgeries, imaging, and laboratory tests are strangling America’s healthcare system, one patient at a time.

The 300 million people in the United States undergo about 15 million nuclear medicine scans, 100 million CT and MRI scans, and 7 to 10 billion laboratory tests annu- ally.3 Excessive testing not only is costly, but also can be harmful. Take, for example, imaging studies. Exposing patients to the risk of radiation that is not clinically indi- cated may be linked to an increase in some types of cancer. Furthermore, the clinical value of any test or procedure de- pends on the likelihood that an individual has a significant medical problem in the first place. If, for instance, some- one has deep central chest pain and shortness of breath,

there is a high probability of a serious cardiac event, and an electrocardiogram (ECG) not only is appropriate, but provides significant value. Conversely, in people with no signs or symptoms of heart trouble, an ECG in all likelihood provides no useful information. There is no justification for performing these tests on healthy people, yet millions are done each year.

The clinical laboratory also plays an important role in utilization related to cardiac events. In 2010, more than 17 million patients with chest pain seen in emergency depart- ments received cardiac biomarker testing. Researchers at Johns Hopkins Bayview Medical Center used a two-fold approach to significantly reduce unnecessary blood testing when evaluating symptoms of chest pain and heart attack.4 The team started by educating physicians and provided information about proven testing guidelines. Interventions then were implemented in the computerized provider or- der entry (CPOE) system. The objective of the study was to reduce the rate at which clinicians order biomarker testing for the diagnosis of acute coronary syndrome, using scien- tific evidence as the baseline.

Is That Test Necessary? The Key to Laboratory Utilization Management

Suzanne Carasso, MBA, MT (ASCP)*

America's healthcare system is in trouble. Decades of overspending have created an unsustainable and, frankly, dysfunctional industry in which patient outcomes do not reflect dollars spent. Healthcare organizations are under mounting pressure to increase value by decreasing costs and improving outcomes. Studies have shown that approximately one-third of laboratory tests are unnecessary, resulting in increased costs and, more importantly, potential harm to patients. Because they are on the front line of patient care, family practice physicians are uniquely positioned to positively impact care across the continuum and have substantial influence over testing decisions.

KEY WORDS: Overutilization; educate; value; laboratory; tests; stewardship; patients; physicians.

*Director, Business Solutions Consulting, ARUP Laboratories, 500 Chipeta Way, Salt Lake City, UT 84108-1221; phone: 800-242-2787 extension 3236; e-mail: [email protected]; website: www.aruplab.com. Copyright © 2017 by Greenbranch Publishing LLC.

www.greenbranch.com | 800-933-3711

Carasso | Unnecessary Tests 161

The study focused on levels of troponin, a protein that increases in the blood when heart muscle is damaged. Studies indicate that tests for troponin are performed up to four times within a 24-hour period; not only is this considered excessive, but troponin tests also are ordered in combination with other cardiac biomarkers, including creatine kinase (CK) and creatine kinase-MB (CKMB). At Johns Hopkins Bayview Medical Center, institutional guidelines were written to suggest ordering troponin alone, without CK or CKMB, for patients suspected of having acute coronary syndrome. Going one step further, the guidelines set restrictions on repeat orders, limiting those for troponin to no more than three within an 18- to 24-hour period. As a result of this study, overall cardiac biomarker test orders decreased by 66% over a 12-month period, with a corresponding reduction in patient charges of $1.25 million.4
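The repeat-order restriction described above is, at heart, a simple rule that a CPOE system can enforce at the point of order entry. The sketch below is a hypothetical illustration of such a rule: the function names, messages, and data structures are invented for this example and are not taken from the actual Hopkins system, and it assumes a flat 24-hour window and a cap of three troponin orders.

```python
from datetime import datetime, timedelta

# Hypothetical CPOE order-entry check modeled loosely on the guideline
# described above: troponin is ordered alone (no CK/CKMB) for suspected
# acute coronary syndrome, and repeat troponin orders are capped at
# three within a rolling 24-hour window. Names and thresholds are
# illustrative only.

MAX_REPEATS = 3
WINDOW = timedelta(hours=24)
DISCOURAGED_WITH_TROPONIN = {"CK", "CKMB"}

def check_biomarker_order(test_name, prior_orders, now):
    """Return (allowed, message) for a proposed cardiac biomarker order.

    prior_orders: list of (test_name, datetime) already placed for the patient.
    """
    if test_name in DISCOURAGED_WITH_TROPONIN:
        return (False, f"Guideline suggests troponin alone; {test_name} adds "
                       "little diagnostic value for suspected ACS.")
    if test_name == "troponin":
        recent = [t for name, t in prior_orders
                  if name == "troponin" and now - t <= WINDOW]
        if len(recent) >= MAX_REPEATS:
            return (False, "Repeat limit reached: no more than "
                           f"{MAX_REPEATS} troponin orders per 24 hours.")
    return (True, "Order accepted.")

# Example: a fourth troponin order within 24 hours is flagged.
now = datetime(2017, 1, 1, 18, 0)
history = [("troponin", now - timedelta(hours=h)) for h in (2, 8, 14)]
allowed, msg = check_biomarker_order("troponin", history, now)
print(allowed, msg)
```

The point of placing the check at order entry, as the study did, is that the clinician sees the guideline before the order is placed rather than after the charge is incurred.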

Looking at test utilization more broadly, an annual visit to a primary care physician (PCP) for most adults usually includes blood tests, a urinalysis, and an ECG. Many individuals and providers believe that there is no such thing as a bad screening test. If something is wrong, it's better to know in order to receive treatment to prevent morbidity and mortality, right? Wrong. In 2012, the American Board of Internal Medicine Foundation (ABIM) launched a campaign, Choosing Wisely, designed to initiate conversation between doctors and patients regarding the use of unnecessary tests and procedures. To date, 60 medical specialty societies have made more than 300 recommendations addressing overuse.5 The Society of General Internal Medicine, in conjunction with Choosing Wisely, states that "regularly scheduled general health checks without a specific cause, including the 'health maintenance' annual visit, have not been shown to be effective in reducing morbidity, mortality, or hospitalization, while creating a potential for harm from unnecessary testing."6 Spurious positive test results often lead to a cascade of more tests and procedures, increased downstream costs, and suboptimal outcomes for patients.

Under a grant from the ABIM, a group of internists, family practice physicians, and pediatricians, known as the Good Stewardship Working Group, identified a number of common primary care practices as overused7 (see the sidebars Top Five List in Internal Medicine and Top Five List in Family Medicine).

THE TOP FIVE LISTS IN PRIMARY CARE: MEETING THE RESPONSIBILITY OF PROFESSIONALISM

Using data from federal medical surveys, physicians from Mount Sinai Medical Center and Weill Cornell Medical College in New York estimated that unnecessary testing and treatment within the 12 primary care practices accounted for a staggering $6.8 billion in 2009. The single test most commonly ordered without justification was a complete blood count (CBC). In 56% of routine physical examinations, physicians inappropriately ordered CBCs and similar tests, adding up to almost $33 million in unnecessary costs.8 Field testing indicated support among physicians for the evidence supporting the practices, the potential positive impact on quality of care and cost, and the ease of implementation.

Figure 1. Percentage of families with selected financial burdens of medical care.

162 Medical Practice Management | November/December 2017

Cost aside, care that is not needed can be harmful to patients. Reducing unnecessary hospital admissions is not only cost effective, but can actually save lives. About one in three inpatients experience an adverse event while in the hospital, the vast majority of which require some kind of medical intervention. Further, 7% are irreversibly harmed or die as a result.9 Apply that same logic to laboratory testing, and it is clear that more harm than good can result from inappropriate testing. For example: in a routine urinalysis, clinicians look for protein or blood in the urine to check for chronic kidney disease. If the initial test is positive, an ultrasound of the kidney may be ordered, followed by a biopsy. Although the risk is relatively small, the biopsy can result in hemorrhage and, in worst-case scenarios, kidney removal. Kidney biopsies are reasonable and appropriate in patients with symptoms of kidney disease. However, looking for disease in an otherwise healthy patient and performing interventional procedures is wrong and can be dangerous.

UNNECESSARY TESTS AND PROCEDURES: THE PROBLEM, THE CAUSES, AND THE SOLUTIONS

Why, then, do physicians continue to order tests, procedures, and other expensive treatment options without supporting evidence? This question is especially perplexing given that nearly three out of four physicians believe that doctors order unnecessary tests at least once per week.10 Moreover, research from the 2014 study commissioned by ABIM reveals how physicians themselves view the problem, including that they would order a laboratory test for an insistent patient even when they knew it was unwarranted10:

- 73% of physicians say the frequency of unnecessary tests and procedures is a very or somewhat serious problem.

- 66% of physicians feel they have a great deal of responsibility to make sure their patients avoid unnecessary tests and procedures.

- 53% of physicians say that even if they know a medical test is unnecessary, they order it if a patient insists.

- 70% of physicians say that after they speak with a patient about why a test or procedure is unnecessary, the patient often avoids it.

- 58% of physicians say they are in the best position to address the problem, with the government a distant second (15%).

- 72% of physicians say the average medical doctor prescribes an unnecessary test or procedure at least once a week.

- 47% of physicians say their patients ask for an unnecessary test or procedure at least once a week.

The answer is multifaceted, ranging from lack of understanding of the diagnostic value of a test to general clinical uncertainty. At the national level, under fee-for-service reimbursement, the healthcare sector rewarded volume and fueled the "more-is-better" mindset. As stated by Doug Campos-Outcalt, a family physician in Phoenix, "Nobody ever gets sued from ordering unnecessary tests."11 Patients also bear some responsibility when they pressure physicians to order tests that may not be clinically indicated. Physicians, in turn, put pressure on labs to run tests to keep their patients happy. It's a vicious cycle that comes at great cost in both spending and outcomes.

Top Five List in Internal Medicine

1. Don't do imaging for low back pain in the first six weeks unless red flags are present. Red flags include, but are not limited to, severe or progressive neurological deficits or when serious underlying conditions such as osteomyelitis are suspected. Imaging of the lumbar spine before six weeks does not improve outcomes but does increase costs. Low back pain is the fifth most common reason for all physician visits.

2. Don't obtain blood chemistry panels (e.g., basic metabolic panel) or urinalyses for screening in asymptomatic, healthy adults. Only lipid screening yields significant numbers of positive results among asymptomatic patients. Screen for type 2 diabetes mellitus in asymptomatic adults with hypertension.

3. Don't order annual electrocardiograms or any other cardiac screening for asymptomatic, low-risk patients. There is little evidence that detection of coronary artery stenosis in asymptomatic patients at low risk for coronary heart disease improves health outcomes. The potential harm of this routine annual screening exceeds the potential benefit.

4. Use only generic statins when initiating lipid-lowering drug therapy. All statins are effective in decreasing mortality, heart attacks, and strokes when dosing is titrated to achieve appropriate low-density lipoprotein (LDL) cholesterol reduction. Switch to more expensive brand-name statins (atorvastatin [Lipitor] or rosuvastatin [Crestor]) only if generic statins cause clinical reactions or do not achieve LDL cholesterol goals.

5. Don't use DEXA screening for osteoporosis in women under age 65 or men under 70 years with no risk factors. Risk factors include, but are not limited to, fractures after age 50 years, prolonged exposure to corticosteroids, a diet deficient in calcium or vitamin D, cigarette smoking, alcoholism, and/or a thin and small build. Screening is not cost-effective in younger, low-risk patients, but is cost-effective in older patients.

Source: Reference 7.

Patients and physicians can overlook the fact that laboratory tests deliver value only if they provide meaningful information that leads to an accurate diagnosis and supports clinical decision making, drives patient-centered outcomes, and contributes to reduced healthcare costs. In short, laboratory tests provide value only if they are clinically valid, clinically efficacious, cost-effective, and properly interpreted.

Recent studies confirm what pathologists and medical laboratory professionals have long known: a substantial number of primary care physicians are uncertain about which test is the right test. Moreover, they are uncertain how to properly interpret the results of some tests. This uncertainty can lead to diagnostic errors, including delayed or inaccurate diagnoses, failure to use appropriate tests, use of obsolete tests or therapies, and failure to review and act on test results. A national study performed by Hickner et al.,12 which included almost 1800 internal medicine and family practice providers, attempted to identify challenges primary care physicians face in ordering and interpreting clinical laboratory tests. Over the past two decades, the number of available laboratory tests has increased to more than 3500. On average, one PCP sees just over 80 patients per week, ordering lab tests on approximately one-third of them. Survey results indicate PCPs are uncertain about ordering tests 15% of the time and uncertain with interpreting the results 8% of the time.12 Nationally, this translates to 500 million primary care patient visits annually, with degrees of uncertainty potentially impacting 23 million patients.
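As a rough sanity check, the rounded figures in the text can be combined in a back-of-envelope calculation; it lands near the 23 million cited, which presumably reflects the study's unrounded rates rather than these rounded ones.

```python
# Back-of-envelope check on the national uncertainty figures above,
# using the rounded numbers given in the text.

annual_pcp_visits = 500_000_000   # primary care visits per year (text)
share_with_lab_orders = 1 / 3     # visits where lab tests are ordered (text)
ordering_uncertainty = 0.15       # PCPs uncertain when ordering (text)

visits_with_orders = annual_pcp_visits * share_with_lab_orders
affected = visits_with_orders * ordering_uncertainty
print(f"{affected / 1e6:.0f} million visits")  # ~25 million with these rounded inputs
```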

What’s driving all this uncertainty? On the ordering side, the cost to patients and managed care restrictions are listed among the top challenges. In addition to the issue of deciphering large test menus with hundreds—sometimes thousands—of test options, clinicians are challenged with different names for the same test and ordering test panels when a single test is more appropriate. Test reporting poses challenges as well, with problems arising when physicians don’t receive the results or receive results in confusing re- port formats. Solutions to these complex problems are not simple. However, using information technology platforms, such as the CPOE system, to alert and inform physicians of an inappropriately ordered test at the point of order entry have proven successful. Use of algorithms and other decision support tools has been effective as well by creat- ing pathways that intuitively lead to the most appropriate test option. Unfortunately, physicians tend not to seek out assistance or consultation from laboratory professionals, even though these individuals are experts regarding the tests their labs perform.

Top Five List in Family Medicine

1. Don't do imaging for low back pain in the first six weeks unless red flags are present. Red flags include, but are not limited to, severe or progressive neurological deficits or when serious underlying conditions such as osteomyelitis are suspected. Imaging of the lumbar spine before six weeks does not improve outcomes but does increase costs. Low back pain is the fifth most common reason for all physician visits.

2. Don't routinely prescribe antibiotics for acute mild to moderate sinusitis unless symptoms (which must include purulent nasal secretions AND maxillary pain or facial or dental tenderness to percussion) last for seven or more days OR symptoms worsen after initial clinical improvement. Most maxillary sinusitis in the ambulatory setting is due to a viral infection that will resolve on its own. Despite consistent recommendations to the contrary, antibiotics are prescribed in over 80% of outpatient visits for acute sinusitis. Sinusitis accounts for 16 million office visits and $5.8 billion in annual healthcare costs.

3. Don't order annual electrocardiograms or any other cardiac screening for asymptomatic, low-risk patients. There is little evidence that detection of coronary artery stenosis in asymptomatic patients at low risk for coronary heart disease improves health outcomes. The potential harm of this routine annual screening exceeds the potential benefit.

4. Don't perform Pap tests on patients younger than 21 years or women status post hysterectomy for benign disease. Most dysplasia in adolescence regresses spontaneously; therefore, screening Pap tests done in this age group lead to unnecessary anxiety, morbidity, and cost. Pap tests have low yield in women after hysterectomy (for benign disease), and there is poor evidence for improved outcomes.

5. Don't use DEXA screening for osteoporosis in women under age 65 or men under 70 years with no risk factors. Risk factors include, but are not limited to, fractures after age 50 years, prolonged exposure to corticosteroids, a diet deficient in calcium or vitamin D, cigarette smoking, alcoholism, and/or a thin and small build. Screening is not cost-effective in younger, low-risk patients, but is cost-effective in older patients.

Source: Reference 7.

There is no panacea for improving care and reducing cost in a system riddled with inefficiency and waste. Doctors alone cannot solve the problem. It is up to every doctor, hospital, healthcare organization, medical educator, insurance company, and government agency to recognize the problem and explore ways to engage patients in decisions that affect their care, share useful information on treatment options, and ensure that services, when utilized, are necessary, appropriate, and beneficial for optimal care. Improving patient education and communication with physicians is central to changing practice patterns, but that is only the starting point. Goethe captures the path forward: "Knowing is not enough; we must apply. Willing is not enough; we must do."13

REFERENCES

1. The National Academies of Sciences, Engineering, Medicine. Transformation of health system needed to improve care and reduce costs. September 6, 2012. www.nationalacademies.org/hmd/Reports/2012/Best-Care-at-Lower-Cost-The-Path-to-Continuously-Learning-Health-Care-in-America/Press-Release-MR.aspx.

2. Cohen RA, Kirzinger WK. Financial burden of medical care: a family perspective. NCHS Data Brief. 2014 Jan;(142):1-8.

3. Gawande A. Annals of health care: overkill. The New Yorker; May 11, 2015.

4. Larochelle MR, Knight AM, Pantle H, Riedel S, Trost JC. Reducing excess cardiac biomarker testing at an academic medical center. J Gen Intern Med. 2014;29:1468-1474.

5. Kerr EA, Ayanian JZ. How to stop the overconsumption of health care. Harvard Business Review; December 11, 2014. https://hbr.org/2014/12/how-to-stop-the-overconsumption-of-health-care.

6. Sussman J, Beyth RJ. Choosing wisely: five things physicians and patients should question. Society of General Internal Medicine. https://www.sgim.org/File%20Library/JGIM/Web%20Only/Choosing%20Wisely/General-Health-Checks.pdf.

7. The Good Stewardship Working Group. The top 5 lists in primary care: meeting the responsibility of professionalism. Arch Intern Med. 2011;171:1385-1390.

8. Andrews M. $6.8 billion spent yearly on 12 unnecessary tests and treatments. Kaiser Health News; October 31, 2011. http://khn.org/news/michelle-andrews-on-unneccesary-tests-and-treatments/.

9. Gamble M. Study: one-third of patients experience adverse events during hospital stay. Becker's ASC Review; April 18, 2011. www.beckersasc.com/asc-accreditation-and-patient-safety/study-one-third-of-patients-experience-adverse-events-during-hospital-stay.html.

10. ABIM Foundation. Choosing Wisely. May 1, 2014. http://abimfoundation.org/what-we-do/choosing-wisely.

11. Andrews J. Doctors estimate $6.8 billion in unnecessary medical tests. Washington Post. October 11, 2011. https://www.washingtonpost.com/national/health-science/doctors-estimate-68-billion-in-unnecessary-medical-tests/2011/10/28/gIQANpEXZM_story.html?utm_term=.b7d9858ae297.

12. Hickner J, Thompson PJ, Wilkinson T, et al. Primary care physicians' challenges in ordering clinical laboratory tests and interpreting results. J Am Board Fam Med. 2014 Mar-Apr;27(2):268-74.

13. Horvath AR. From evidence to best practice in laboratory medicine. Clin Biochem Rev. 2013;34(2):47-60.
