Do right PLS and do PLS right: A critical review of the application of PLS-SEM in construction management research

Ningshuang ZENG , Yan LIU , Pan GONG , Marcel HERTOGH , Markus KÖNIG

Front. Eng., 2021, Vol. 8, Issue 3: 356–369. DOI: 10.1007/s42524-021-0153-5

REVIEW ARTICLE

Abstract

Partial least squares structural equation modeling (PLS-SEM) is a modern multivariate analysis technique with a demonstrated ability to estimate theoretically established cause-effect relationship models. This technique has been increasingly adopted in construction management research over the last two decades. Accordingly, a critical review of studies adopting PLS-SEM appears to be a timely and valuable endeavor. This paper offers a critical review of 139 articles that applied PLS-SEM from 2002 to 2019. Results show that the misuse of PLS-SEM can be avoided. Critical issues related to the application of PLS-SEM, research design, model development, and model evaluation are discussed in detail. This paper is the first to highlight the use and misuse of PLS-SEM in the construction management area and provides recommendations to facilitate the future application of PLS-SEM in this field.

Keywords

PLS / SEM / construction management / literature review / misuse

Cite this article

Ningshuang ZENG, Yan LIU, Pan GONG, Marcel HERTOGH, Markus KÖNIG. Do right PLS and do PLS right: A critical review of the application of PLS-SEM in construction management research. Front. Eng, 2021, 8(3): 356-369 DOI:10.1007/s42524-021-0153-5


1 Introduction

Structural equation modeling (SEM) has been extensively applied in theoretical explorations and empirical validations across many research disciplines since the early 1980s (Bentler, 1980; Bagozzi and Yi, 1988). In recent decades, SEM has evolved into a quasi-routine and an essential multivariate analysis technique. As an alternative to the frequently mentioned covariance-based SEM (CB-SEM), partial least squares SEM (PLS-SEM) is a causal modeling method that focuses on maximizing the explained variance of the dependent latent constructs instead of constructing a theoretical covariance matrix (Hair Jr et al., 2011). While CB-SEM analysis has been normatively applied in construction management for an extended period (Xiong et al., 2015), the application of PLS-SEM is relatively new in this field.

Given this philosophical distinction, research with a theory development objective calls for PLS-SEM rather than CB-SEM. PLS-SEM can estimate complex relationships and emphasizes prediction without imposing high demands on the data or requiring a full specification of relationships (Chin et al., 2008; Dijkstra, 2010). Specifically, PLS-SEM guarantees factor determinacy by directly estimating the latent variable scores, achieves factor identification by introducing a flexible residual covariance structure, and provides sound prediction under small sample sizes, asymmetric distributions, and interdependent observations (Chin, 1998; Wetzels et al., 2009). Moreover, well-developed PLS-SEM software packages with graphical user interfaces help researchers conduct their analyses accurately and conveniently (Ashraf, 2004; Hair Jr et al., 2011). In recent years, PLS-SEM has seen a series of advancements, such as confirmatory tetrad analysis, prediction-oriented segmentation, and finite mixture segmentation, all of which promote its application in various disciplines (Hahn et al., 2002; Gudergan et al., 2008; Becker et al., 2013; Sharma et al., 2019; Hair Jr et al., 2020).

Many articles have reviewed and analyzed the application of PLS-SEM in studies published in leading journals across several professional fields, such as marketing (Hair Jr et al., 2012), human resource management (Ringle et al., 2018), and information systems (Kante et al., 2018). These reviews highlight numerous instances where CB-SEM and PLS-SEM have been misapplied, such as using an incorrect type of observable variable, conducting an improper measurement model evaluation, or applying an ill-defined higher-order construct structure (Ashraf, 2004; Hair Jr et al., 2011; Ringle et al., 2012; Nitzl and Chin, 2017). Construction management studies abound with exploratory problems whose solutions are not yet supported by mature theories and models (Blomquist et al., 2010). In addition, construction management has not yet accumulated enough experience to apply PLS-SEM to a sufficient standard, and no reviews to date have investigated the use of PLS-SEM in construction management research. A critical review studies the literature extensively and critically evaluates its quality; the key value of this type of review lies in its “critical” component (Grant and Booth, 2009). Therefore, this paper provides a critical review of the current application of PLS-SEM in construction management research and discusses its proper use in terms of early mapping, research design, and model evaluation.

2 Methodology

Construction research can be seen as a combination of multiple disciplines covering both technical and managerial topics (Xiong et al., 2015). This review presents a comprehensive evaluation of PLS-SEM applications in the construction management field. A structured method is adopted to identify and assess significant outputs related to PLS-SEM that have been published in peer-reviewed English journals. The data used in this study were retrieved on 6 May 2019. The entire research process was divided into three steps.

In the first step, a comprehensive exploratory desktop search was conducted using the Scopus search engine. Titles, abstracts, and keywords containing the terms “partial least squares”, “PLS”, and “construction” were retrieved. Non-peer-reviewed document types (e.g., conference papers and book chapters) were eliminated from the database. To retrieve articles related to construction management and to filter out articles related to other disciplines, the database subject areas were set to 1) engineering, 2) business, management and accounting, 3) decision sciences, 4) economics, econometrics and finance, and 5) social sciences. The search yielded 255 articles.

In the second step, to reduce the risk of missing relevant publications, an additional targeted search of key journals was conducted without the “construction” keyword. Six journals were identified as publishing the largest number of PLS-SEM articles, namely, International Journal of Project Management, Journal of Construction Engineering and Management, Journal of Management in Engineering, Engineering, Construction and Architectural Management, Automation in Construction, and Construction Management and Economics. These journals are consistent with the construction management journal ranking lists published by Wing (1997) and Bröchner and Björk (2008). The targeted database search yielded 155 additional articles.

In the third step, the contents of these articles were checked to ensure selection quality, to verify that PLS-SEM was the primary research method, and to confirm that the application was related to the construction industry. Articles that used SmartPLS, a standard PLS-SEM software package, but did not actually analyze a PLS-SEM model were excluded.

As shown in Table 1, 139 articles published between 2002 and 2019 were selected for the analysis. Figure 1 shows the distribution of PLS-SEM articles by year.

In the construction management field, the first article to use PLS-SEM as the primary method for statistical analysis (Mohamed, 2002) was published in the Journal of Construction Engineering and Management in 2002 and focused on construction safety. The International Journal of Project Management, Journal of Construction Engineering and Management, Journal of Management in Engineering, and Engineering, Construction and Architectural Management published the largest number of papers with PLS-SEM applications. Other non-construction journals, such as the Journal of Cleaner Production and Sustainability, also published articles that applied PLS-SEM to construction management problems. Figure 1 shows that the number of articles using PLS-SEM surged between 2014 and 2019 compared with the 2002–2013 period. A comprehensive review was conducted on the 139 retrieved articles, and a series of critical issues are reported in the following sections.

3 Critical issues in the application of PLS-SEM

3.1 When to and why use PLS-SEM

The reasons for choosing PLS-SEM are examined and summarized in Table 2. Most of the reviewed articles explained, prior to the data analysis, why PLS-SEM was used, either by referring to the specific statistical features of this technique or by comparing PLS-SEM with similar techniques, such as CB-SEM, in the context of their research topic.

The reasons and motivations for adopting PLS-SEM are diverse. As shown in Table 2, the three most frequently mentioned reasons include small sample size (81 articles, 58.27%), non-normal data (56 articles, 40.29%), and initiation of exploratory research (44 articles, 31.65%). Meanwhile, critical reasons for applying PLS-SEM include formative latent variables (23 articles, 16.55%), addressing predictions (20 articles, 14.39%), and adopting complex models (20 articles, 14.39%). Among the reviewed articles, Wen et al. (2017) compared CB-SEM with PLS-SEM approaches and cited all of the above reasons in their research on construction management consultants. Other considerations for adopting PLS-SEM, such as theory development (10 articles, 7.19%), theory validation (9 articles, 6.47%), use of categorical variables (7 articles, 5.04%), and addressing the mediation effect (6 articles, 4.32%), were not frequently mentioned yet still played a role among the reviewed articles. Nadhim et al. (2018) provided an example to explain why they used PLS-SEM for the categorical variables in their model of safety climate and performance. A total of 11 articles (7.91%) did not specify any reason for using PLS-SEM.

3.2 Topic coverage of research using PLS-SEM

Figure 1 shows an increasing trend in the application of PLS-SEM in construction management research. As mentioned above, many of the reviewed PLS-SEM papers were exploratory in nature. One may ask which topics in the construction management field warrant the application of PLS-SEM. Keywords can be used to provide a clear and concise description of research content. All the selected articles were classified into the most suitable topics; an article may fall into several groups if it covers more than one research interest. Based on the outcome of the keyword grouping and the topic categories suggested by Themistocleous and Wearne (2000), eight research topics were identified, namely, project organization (40), performance measurement (35), safety, health and environment (21), procurement (21), success criteria (19), teamwork (18), risk management (16), and goals, objectives and strategies (16).

Project organization ranks first among eight topics, with 40 articles involved. The majority of these articles used PLS-SEM as an instrument to identify and evaluate specific inter-organizational relationships. These articles focused on the relationships among stakeholders and how they are influenced. PLS-SEM was also used as an analytical tool for performance measurement and for assessing the effects of other topics (e.g., procurement and critical success factors) on project performance. The other six topics received roughly the same amount of attention. These eight topics highlight the appeal of applying PLS-SEM in construction management research, whereas other topics in this area have received little or no attention.

3.3 Research design with PLS-SEM

3.3.1 Sampling size and characteristics

Selecting a proper sample size for testing the proposed model is critical before data collection and analysis. Testing the characteristics of the sample data is also necessary. Using a small sample size is the most prominent argument for applying PLS-SEM in construction management research (Ashraf, 2004). When the sample size is small or when the collected data do not meet the distributional assumptions of CB-SEM, construction management researchers may apply PLS-SEM instead (Hair Jr et al., 2011). The reviewed papers had a sample size ranging from 25 to 1387 (Table 3). Among the retrieved articles, 29.50% (41 of 139) of the models were derived from sample sizes of less than 100. Regarding the adequacy of the resulting sample size, 120 articles (86.33%) addressed non-response bias, and 74 articles (53.24%) evaluated the content validity of their data collection instruments.

3.3.2 Software application

Several software packages have been designed for conducting PLS-SEM analysis. Among the reviewed articles, 67.63% (94 of 139) of the models were explicitly stated to be built in SmartPLS, 7.19% (10 of 139) in PLS Graph, 2.16% (3 of 139) in WarpPLS, and 0.72% (1 of 139) using the PLS-PM package in R (Table 4). The remaining 22.30% of the articles did not mention the software packages or tools they used. Bootstrapping procedures, which draw a large number of subsamples (typically 5000) from the original data with replacement and re-estimate the model on each, are among the significant features of PLS-SEM (Ashraf, 2004; Hair Jr et al., 2011). This resampling method can help generate rigorous theory (Streukens and Leroi-Werelds, 2016) and has become a critical routine in the PLS-SEM process (Hair Jr et al., 2011). Among the reviewed papers, 55.40% (77 of 139) mentioned applying bootstrapping, and 35.25% (49 of 139) used 5000 or more subsamples.
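To make the resampling routine traceable, the following is a minimal sketch of a bootstrapping loop, assuming `estimate_pls_paths` is a hypothetical placeholder for whichever PLS estimator is used and that it returns the vector of path coefficients for a given data matrix; the function and variable names are illustrative, not part of any package cited above.

```python
import numpy as np

def bootstrap_paths(data, estimate_pls_paths, n_boot=5000, seed=1):
    """Bootstrap standard errors and t-values for PLS path coefficients."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    original = np.asarray(estimate_pls_paths(data))  # path coefficients on the full sample
    boot = np.empty((n_boot, original.size))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)             # draw n cases with replacement
        boot[b] = estimate_pls_paths(data[idx])      # re-estimate the model on the subsample
    se = boot.std(axis=0, ddof=1)                    # bootstrap standard errors
    t_values = original / se                         # |t| > 1.96 corresponds to p < 0.05 (two-tailed)
    return original, se, t_values
```

The dedicated packages listed in Table 4 perform this procedure internally; the sketch only shows where the reported t-values come from.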

3.3.3 Model characteristics

Among the reviewed articles, 141 models were obtained, of which 139 were primary and 2 were alternative models. The fundamental elements of the PLS-SEM structural and measurement models are the latent variables and their indicators. The descriptive statistics of the reviewed articles report an average of 7.20 latent variables and 33.45 indicators (Table 5). Regarding the mode of the measurement model, 75.89% (107 of 141) of these models employed only reflectively measured constructs, whereas 11.35% (16 of 141) employed both reflective and formative measures. Only a few of the reviewed PLS-SEM applications included latent variables with only formative measurement models (13 of 141 models; 9.22%). In the remaining cases (5 of 141 models; 3.55%), the measurement instrument was not distinguished, but reflective criteria were applied to evaluate these measurement models. As for the number of indicators per construct, the reflective (average of 4.57) and formative (average of 4.94) constructs did not show much difference.

As shown in Table 5, 43.97% (62 of 141) of the models contained mediators, and 14.89% (21 of 141) applied moderators. As for the structural model feature, 82.27% (116 of 141) of the models adopted single-item constructs, whereas only 15.60% (22 of 141) provided hierarchical constructs. A total of 7 models (4.96%) were modified during the course of the analysis.

3.4 Model evaluation

3.4.1 Reflective measurement model evaluation

PLS-SEM provides distinct sets of methods and related empirical test criteria for evaluating reflective and formative measurement models, respectively. Table 5 shows 123 specified reflective models in total, of which 16 are reflective–formative mixed models. Five non-specified models adopted the reflective evaluation criteria. In sum, 128 models are considered in this section.

Construct validity is critical to reflective measurement model testing and primarily covers the reliability, convergent validity, and discriminant validity of measures (Peter, 1981). In PLS-SEM, reliability analysis involves repeatability, indicator reliability, and internal consistency reliability tests (Spector, 1992; Harwood and Garry, 2003; Hair Jr et al., 2011). The repeatability evaluation involves the application of test–retest and alternate-form methods (Weir, 2005; Diamantopoulos et al., 2008). Among the reviewed articles, Hartmann and Hietbrink (2013) compared the expectations of two samples collected from different project phases to evaluate their reliability. The reliability of an indicator can be assessed from its loading, which is empirically suggested to exceed 0.7 (Hair Jr et al., 2011). As shown in Table 6, around 78.91% (101 of 128) of the reflective models validated indicator reliability by checking indicator loadings. The internal consistency reliability of CB-SEM models is conventionally tested against the widely used Cronbach’s α > 0.7 criterion (Cronbach, 1951; Nunnally, 1978). However, for exploratory PLS-SEM models, a Cronbach’s α test may underestimate the reliability of constructs because this test is sensitive to the number of items (Hair Jr et al., 2014). Therefore, PLS-SEM also applies composite reliability (CR) to evaluate the internal consistency reliability of constructs (Bagozzi and Yi, 1988; Hair Jr et al., 2014). Ringle et al. (2018) argued that Cronbach’s α represents the most conservative criterion, whereas CR is a more liberal one. CR values of 0.6 to 0.7 in exploratory research and 0.7 to 0.9 in more advanced research stages are considered satisfactory (Hair Jr et al., 2011). As shown in Table 6, 61.72% (79 of 128) of the reflective models were tested with both Cronbach’s α and CR, whereas 28.13% (36 of 128) used only CR to evaluate the measurement model.
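The two internal consistency criteria named above can be written out as follows; this is a minimal sketch, assuming `items` holds one construct's indicator scores (rows are respondents) and `loadings` its standardized outer loadings from the PLS estimation. Both variable names are illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n x k matrix of one construct's indicator scores (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized outer loadings of the construct's indicators."""
    squared_sum = loadings.sum() ** 2
    error_variances = (1 - loadings ** 2).sum()
    return squared_sum / (squared_sum + error_variances)

# Rules of thumb cited above: loadings > 0.7 for indicator reliability;
# CR of 0.6-0.7 acceptable in exploratory research, 0.7-0.9 in advanced stages.
```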

Convergent and discriminant validity are subcategories of construct validity (Peter, 1981). Convergent validity measures the degree to which the observable variables within a particular construct correlate with one another (Hulland, 1999). The average variance extracted (AVE) of measured constructs should be assessed for convergent validity (Fornell and Larcker, 1981; Comrey, 1993). In SEM-based research, the minimum acceptable AVE value ranges from 0.36 to 0.5 (Fornell and Larcker, 1981; Hair Jr et al., 2011; 2014). As shown in Table 6, around 95.31% (122 of 128) of the reviewed reflective models reported AVE.

Discriminant validity tests whether a construct is genuinely distinct from other constructs, and the Fornell–Larcker criterion (Fornell and Larcker, 1981) and cross-loadings have been proposed as its two main measures (Hair Jr et al., 2011). The Fornell–Larcker criterion tests whether a latent construct shares more variance with its assigned indicators than with any other latent variable (i.e., the square root of the AVE of each latent construct should be higher than its highest correlation with any other latent construct) (Hair Jr et al., 2011). As for the cross-loadings, an indicator’s loading on its associated latent construct should be higher than its loadings on all the remaining constructs (Hair Jr et al., 2011). Around 83.59% (107 of 128) of the reviewed reflective models performed a Fornell–Larcker criterion test, and only 53.13% (68 of 128) performed cross-loading tests.
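The sketch below illustrates these convergent and discriminant validity checks under illustrative assumptions: `loadings` are one construct's standardized loadings, `lv_corr` is the latent variable correlation matrix as a pandas DataFrame, `cross` is the indicator-by-construct cross-loading table, and `assigned` maps each indicator to its construct. All names are hypothetical.

```python
import numpy as np
import pandas as pd

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading (target of 0.36-0.5 or above)."""
    return float((loadings ** 2).mean())

def fornell_larcker_ok(aves: dict, lv_corr: pd.DataFrame) -> bool:
    """sqrt(AVE) of each construct must exceed its highest correlation with any other construct."""
    for c in lv_corr.columns:
        if np.sqrt(aves[c]) <= lv_corr[c].drop(c).abs().max():
            return False
    return True

def cross_loadings_ok(cross: pd.DataFrame, assigned: dict) -> bool:
    """Each indicator (row) should load highest on its assigned construct (column)."""
    return all(cross.loc[i].abs().idxmax() == assigned[i] for i in cross.index)
```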

3.4.2 Formative measurement model evaluation

Unlike with the reflective measurement model, conventional statistical evaluation criteria, such as the construct validity tests discussed above, cannot be directly transferred to formative indices (Ashraf, 2004; Petter et al., 2007; Hair Jr et al., 2011). The formative measurement approach generally minimizes the overlap between complementary indicators, where “omitting an indicator is omitting a part of the construct” (Bollen and Lennox, 1991). Formative indicators are not necessarily correlated (Hair Jr et al., 2011) and are assumed to be error-free (Bagozzi and Yi, 1988). A total of 29 formative models were retrieved, of which 16 were reflective–formative mixed models.

According to various guidelines for validating formative measurement models, reporting the indicator weights and their significance, such as t-values (or the corresponding p-values), is strictly required (Ashraf, 2004; Petter et al., 2007). The critical t-values for a two-tailed test are 1.65, 1.96, and 2.58, which correspond to significance levels of 90% (p<0.1), 95% (p<0.05), and 99% (p<0.01), respectively (Hair Jr et al., 2011). Multicollinearity should also be addressed for formative models. A rule of thumb is that the variance inflation factor (VIF) of each indicator should be less than 5 (Hair Jr et al., 2011).
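A minimal sketch of the collinearity check follows, assuming `X` is an n-by-k matrix of one formative construct's indicator scores (an illustrative name): each indicator is regressed on the remaining indicators of the same construct, and the resulting R² is converted into a VIF.

```python
import numpy as np

def formative_vifs(X: np.ndarray) -> np.ndarray:
    """X: n x k matrix of one formative construct's indicator scores; returns one VIF per indicator."""
    n, k = X.shape
    vifs = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])  # intercept + remaining indicators
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        residuals = y - Z @ beta
        r2 = 1 - residuals.var() / y.var()
        vifs[j] = 1.0 / (1.0 - r2)   # rule of thumb: each VIF should be below 5
    return vifs
```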

As shown in Table 7, 72.41% (21 of 29) of the formative models reported indicator weights, of which 3 models ignored the significance test of these weights. Meanwhile, 37.93% (11 of 29) of the formative models did not include a test of multicollinearity, and 4 formative models (13.79%) applied reflective evaluation criteria.

3.4.3 Structural model evaluation

The essence of structural model evaluation is to test the hypothesized relationships, that is, the path coefficients (b) and their significance. Unlike CB-SEM, PLS-SEM does not have a standard goodness-of-fit statistic but applies the following criteria to ensure the quality of the structural model: the coefficient of determination (R²) and its effect size f², and predictive relevance, including cross-validated redundancy (Q²) and its effect size q² (Ashraf, 2004; Hair Jr et al., 2014).

As shown in Table 8, all reviewed models calculated the absolute value of the path coefficients, whereas 7 models (4.96%) ignored the significance tests of these coefficients. A series of t-values was calculated by applying a bootstrapping procedure. Depending on their respective research topic and field, significance levels from 90% (p<0.1) to 99.9% (p<0.001) were set by the researchers subjectively. Nearly half (69 of 141; 48.94%) of the reviewed models chose the 95% significance level (p<0.05) (i.e., t-value = 1.96) as the acceptable level for supporting the hypotheses.

R² is a measure of the predictive accuracy of a model and represents the amount of variance in an endogenous construct explained by all exogenous constructs linked to it. A “rough” rule of thumb for an acceptable R² is widely adopted, with 0.75, 0.50, and 0.25 indicating substantial, moderate, and weak levels of predictive accuracy, respectively (Hair Jr et al., 2011; 2014). Around 83.69% (118 of 141) of the models were evaluated using R² (Table 8). Meanwhile, a pseudo-F-test (effect size f²) assesses how much one exogenous construct contributes to explaining a specific endogenous construct in terms of the change in R² (Lachenbruch and Cohen, 1989; Ashraf, 2004). However, only 26.24% (37 of 141) of the reviewed models applied the f² criterion.
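A minimal sketch of both criteria, assuming `observed` and `predicted` are an endogenous construct's latent variable scores and their model predictions, and that R² values from model runs with and without the exogenous construct of interest are available; the function names are illustrative.

```python
import numpy as np

def r_squared(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Variance in an endogenous construct's scores explained by its predictors."""
    ss_res = ((observed - predicted) ** 2).sum()
    ss_tot = ((observed - observed.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot   # 0.25 / 0.50 / 0.75 ~ weak / moderate / substantial

def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Effect size of one exogenous construct, from R2 with and without it in the model."""
    return (r2_included - r2_excluded) / (1 - r2_included)
```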

Q² is necessary for assessing the predictive relevance of a structural model, and the effect size q² represents the predictive relevance of an exogenous construct for a specific endogenous construct (Wold et al., 2001; Ashraf, 2004; Hair Jr et al., 2014). Q² values of 0.02, 0.15, and 0.35 generally indicate weak, moderate, and sound levels of predictive relevance, respectively (Chin, 2010). Around 21.99% (31 of 141) of the models reported Q², and only 2 (1.42%) reported the effect size q² (Table 8).
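The sketch below shows, under illustrative assumptions, how Q² and q² can be computed once the blindfolding predictions of the omitted data points are available (as produced by the standard blindfolding routine of a PLS-SEM package); indicators are assumed standardized, and one common formulation of the sum of squares is used.

```python
import numpy as np

def q_squared(omitted_observed: np.ndarray, omitted_predicted: np.ndarray) -> float:
    """Cross-validated redundancy Q2 = 1 - SSE/SSO over the blindfolded (omitted) data points."""
    sse = ((omitted_observed - omitted_predicted) ** 2).sum()  # squared prediction errors
    sso = (omitted_observed ** 2).sum()                        # squared omitted (standardized) values
    return 1 - sse / sso   # 0.02 / 0.15 / 0.35 ~ weak / moderate / sound predictive relevance

def q2_effect_size(q2_included: float, q2_excluded: float) -> float:
    """q2 effect size of one exogenous construct, analogous to f2."""
    return (q2_included - q2_excluded) / (1 - q2_included)
```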

Statistical criteria, such as R², f², Q², and q², have been used to highlight the predictive capabilities of a model (Ashraf, 2004; Chin, 2010). Moreover, bootstrap confidence intervals (CIs) and total effects have received much attention in recent PLS-SEM reviews as means of providing additional evidence and improving transparency (Hair Jr et al., 2014; Ringle et al., 2018). However, only 5 of the reviewed models (3.55%) reported bootstrap CIs, and 8 articles (5.67%) evaluated the total effects.
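A minimal sketch of these two reporting items, assuming `boot_estimates` holds the bootstrap distribution of one path coefficient and that the total effect is taken along a simple one-mediator chain; both assumptions are illustrative rather than a full implementation.

```python
import numpy as np

def percentile_ci(boot_estimates: np.ndarray, alpha: float = 0.05):
    """Percentile bootstrap confidence interval for one path coefficient."""
    low, high = np.percentile(boot_estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return low, high   # a 95% CI that excludes zero supports the hypothesized path

def total_effect(direct: float, a: float, b: float) -> float:
    """Total effect along a simple mediation chain: direct path plus indirect path a*b."""
    return direct + a * b
```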

4 Discussion and recommendations

An increasing number of PLS-SEM articles have been published by construction management scholars over the past few years. This review of PLS-SEM applications in construction management research reveals that both the selection and use of this approach are frequently not well justified.

4.1 Selection of PLS-SEM and comparison with CB-SEM

In their review of information systems research, Ringle et al. (2012) compared the numbers of latent variables and indicators and found that researchers preferred PLS-SEM over CB-SEM when handling model complexity with less stringent restrictions. However, the opposite is observed in the construction management field. Xiong et al. (2015) reported an average of 7.13 latent variables and 28.65 indicators per model in their CB-SEM review. The descriptive statistics of the articles reviewed here show an average of 7.20 latent variables and 33.45 indicators (Table 5), which does not differ substantially from the CB-SEM models. These findings echo the statistical results shown in Table 2, suggesting that model complexity is not a primary reason for construction management researchers to choose PLS-SEM.

As shown in Fig. 1, an increasing number of construction management publications have adopted PLS-SEM, and, as mentioned above, many of the reviewed PLS-SEM papers were exploratory in nature. As shown in Table 2, most of the identified PLS-SEM articles (81 of 139; 58.27%) chose PLS-SEM because of a small sample size. Around 58.02% (47 of 81) of these studies reported that they chose PLS-SEM because of both a small sample size and a non-normal distribution. In other words, PLS-SEM is frequently chosen in construction management research because the collected data cannot meet the minimum sample size and distributional assumptions of CB-SEM. However, whether PLS-SEM is applicable to exploratory research with a limited sample size depends on the specific research objectives. As suggested by Hair Jr et al. (2011) and Ringle et al. (2012), PLS-SEM is suitable for exploratory research rather than theory validation. Specifically, the latter has strict requirements on sample size and measurement configurations, whereas the former frequently adopts a small sample size and formative measures to explore causal relationships. Around 33.33% (27 of 81) of the articles that reported a small sample size also declared that they were exploratory in nature, whereas 7.41% (6 of 81) claimed to pursue theory validation. Adopting a small sample size or non-normal data for theory validation may generate implausible conclusions, suggesting that construction management researchers should consider these limitations instead of merely chasing a novel statistical method.

4.2 Misuse of PLS-SEM

A dominant criticism in other research fields (e.g., business research) is that researchers misuse PLS-SEM and believe that a single analytical technique can address any research problem (Ashraf, 2004). Construction management research should therefore heed this criticism. All of the reviewed studies were survey-based and followed the “models and hypotheses” paradigm in the construction management field (Fellows and Liu, 2015). These studies relied on statistical methods with observational data to make causal inferences (Goertz and Mahoney, 2012). However, depending too much on statistics may lead to a poor conceptualization and execution of surveys. Moreover, the misapplication of PLS-SEM may result in misinterpretations of the practical outcomes and generate false conclusions (Ashraf, 2004), which ultimately affects the overall quality of PLS-SEM applications in construction management research. This review proposes recommendations on how to improve this situation in concept framing, research design, and results analysis. This section reports the failures, mistakes, and biases observed in the use of the PLS-SEM method from a technical perspective.

Non-normative measurement model evaluation has been a dominant problem in recent studies even though the dangers of erroneous evaluation have been criticized previously (Ashraf, 2004; Petter et al., 2007; Ringle et al., 2012). Around 45.31% (58 of 128) of the reflective models were evaluated using only partial criteria, whereas only 51.72% (15 of 29) of the formative models were evaluated using normative criteria (Tables 6 and 7). Even among the more thoroughly evaluated measurement models, essential criteria (e.g., AVE) were frequently violated, and other essential PLS-SEM criteria were ignored. For example, around 46.88% (60 of 128) of the reflective models ignored cross-loadings. Meanwhile, inapplicable criteria were reported in many cases, such as Cronbach’s α for formative measurement models. Ringle et al. (2012) highlighted the importance of evaluating measures via normative PLS-SEM statistics because the parameter estimates depend on the specific set-up of the analyzed model. Among the reviewed articles, Ning and Ling (2013) provided a clear example of reflective measurement model evaluation, whereas Bjorvatn and Wald (2018) normatively elaborated on formative measurement model evaluation. Another significant observation regarding measurement model evaluation is the misuse of criteria for formative indicators: 13.79% (4 of 29) of the formative models applied test criteria intended for reflective models (Table 7). This misuse should be strictly avoided in future research by following measurement model evaluation guidance or normative examples.

A better understanding and careful use of hierarchical component models are also necessary. As shown in Table 5, 22 (15.60%) hierarchical component models were retrieved from the reviewed articles. Two articles did not mention that their models were hierarchical, two other articles did not report the type of hierarchical relationship, and one article misjudged the type of first-order indicators. However, the hierarchical relationship significantly influences the measurement model evaluation. Theoretically, four types of hierarchical component models exist, including the formative (lower order)–reflective (higher-order) type, formative–formative type, reflective–formative type, and reflective–reflective type (Wetzels et al., 2009). The lower-order components explain the variance of the higher-order components as predecessors (Henseler and Chin, 2010). In the review, 3 of the 22 research models (13.64%) adopted either the formative–reflective or formative–formative type and selected formative criteria for the measurement model evaluation. The reflective–formative and reflective–reflective types apply the reflective criteria, and 86.36% (19 of 22) of the reviewed models belonged to this case. Suprapto et al. (2016) presented an example by clarifying the hierarchical relationship, that is, the structure of higher-order constructs and their lower-order indicators, during the model development. In this example, the measurement evaluation criteria matched the hierarchical relationship type.

4.3 Future application suggestions and directions

4.3.1 Early mapping

In survey-based construction management research, the primary use of theory is to facilitate assumptions or predictions. Theory offers a framework of variables, relationships, and boundaries that must be mapped to the context and research questions to guide the entire research design (Klein and Müller, 2019). As a method suited to exploratory research, PLS-SEM can help identify future research directions in construction management. The reasons for choosing a theory can be diverse, depending on the motivation of the researcher and the features of a specific research area.

The first direction refers to an in-depth understanding of how the collaboration of various construction organizations characterizes construction management. Inter-organizational relationships have attracted more attention than intra-organizational relationships within a single project organization. This finding is in line with the recently emerging trend of viewing project stakeholder networks as temporary network-based organizations (Turner and Müller, 2003). Another direction involves research efforts examining the transformation of contractual focus from formal contract behaviors to relational behaviors. The last direction refers to complex project contexts and emerging themes. Information technologies (e.g., Building Information Modeling, project management information systems, Enterprise Resource Planning, and e-bidding), project complexity, and knowledge management are three themes that use PLS-SEM but are not covered by traditional project management topics (Themistocleous and Wearne, 2000). The relationships among these process benefits and project outcomes are seldom thoroughly examined.

Given that the early mapping of the context and research questions to theory substantially influences the whole PLS-SEM application, the following aspects should be considered:

(1) Theoretical contribution: The research questions and context should fit the chosen theory without severe anomalies (Klein and Müller, 2019). Any extension, modification, or questioning of the boundaries, variables, or relationships can also be made (Klein and Müller, 2019).

(2) Developing and combining theories: A combination of an additional theory or theories is frequently observed in the reviewed PLS-SEM studies when the primary theory does not support a variable or a relationship. This combination requires a clear explanation of why more than one theory is essential to answering the research questions (Gregor and Klein, 2014). In some cases, the competing theories should also be discussed to avoid potential dogmatic views (Fellows and Liu, 2015). Construction management researchers can address this problem by providing alternative research models. For example, Cao et al. (2014) tested original and alternative models by changing the relationships among the latent variables to explore the impacts of isomorphic pressures on Building Information Modeling adoption in construction projects.

4.3.2 Modeling and hypotheses development

Essential issues in the modeling stage include identifying the variables, establishing the hypotheses, and developing the constructs and their measurement items. A qualified early mapping provides the fundamentals of variable identification. Establishing hypothetical relationships among these variables requires an equilibrium between the precedence from prior theory and the logical development of novel insights (Klein and Müller, 2019), which are often enhanced by empirical observations and results. Construction management researchers should also pay attention to the distinction among various construct types, especially for tentative management notions. Future research should improve the judgment of constructs by closely following the recommendations provided by Fornell and Bookstein (1982), Chin (1998), Diamantopoulos and Winklhofer (2001), Rossiter (2002), Jarvis et al. (2003) and Bagozzi (2011). The following aspects are proposed to avoid the misuse of the PLS-SEM method in model development:

(1) Transparency of the source of constructs: Omitting essential information should be avoided. Constructs taken from previous studies on the same theory still need a declaration of how they fit the research context and problems. Evidence from the literature should be provided for both existing and newly developed constructs.

(2) Distinguishing reflective and formative constructs: The differences between reflective and formative constructs must be understood, and the reasons for adopting a particular type of construct should be explained. Formative constructs should be used sparingly and appropriately (Ringle et al., 2012; Klein and Müller, 2019).

(3) Wording and validation based on expertise and empirical evidence: The wording of constructs and their measurement items must be precise and accurate. A content review of the initial constructs and their measurement items should be conducted by experts, and performing pilot studies (e.g., model pre-test or alternative test) involving knowledgeable respondents is highly recommended.

4.3.3 Model evaluation and analysis

The misuse of evaluation criteria for both reflective and formative models should be avoided, and hierarchical component models require attention. Compared with CB-SEM, PLS-SEM is suitable for handling formative models, but a more careful formative model evaluation is needed. Future research should report the complete evaluation results as suggested by Hair Jr et al. (2011) and Ringle et al. (2012) to confirm their conclusions.

At the analysis level, researchers should not consider mathematical results in isolation. Related interview results, empirical observations, and other essential information should be used to address the research questions, implications, and contributions. Construction management scholars should offer descriptions of reality and provide solutions to the problems with the help of PLS-SEM.

5 Conclusions

To understand the application of PLS-SEM in construction management research, a critical review of 139 articles applying this technique between 2002 and 2019 was conducted. The adoption of PLS-SEM in the construction management field has accelerated over the past years. This paper examines critical issues related to the reasons for using PLS-SEM, research design, and model evaluation, distinguishes PLS-SEM from CB-SEM, and elaborates on the consequences of misusing PLS-SEM. Future research recommendations are provided to improve the application of PLS-SEM in the construction management area, especially in early mapping, modeling and hypotheses development, and model evaluation and analysis.

Articles using CB-SEM have been carefully reviewed elsewhere, and many scholars have clearly distinguished CB-SEM from PLS-SEM. To the best of the researchers’ knowledge, this literature review is the first to explore the use of PLS-SEM in construction management in response to its increasing application in this area over the past years. This timely, valuable endeavor is in line with previous PLS-SEM reviews in other disciplines and contributes to the evolution of a PLS-SEM literature contextualized for construction management. This review responds to the call of Klein and Müller (2019) for studying the overuse of questionable standards and offers methodological guidance for construction management researchers planning to use PLS-SEM.

Koskela (2017) argued that mathematical representations offer only an idealized version of industry practice. Following the guidelines can answer the “how” question, but the “why” question should be considered as well to guarantee proper adoption. This provocative comment calls the relevance of PLS-SEM research in construction management into question. According to Hair Jr et al. (2011), PLS-SEM can be a “silver bullet” for estimating causal relationships in various data and model situations. Several aspects are highlighted that deter construction management researchers from “shooting well” from the research design stage to the model evaluation stage. Compared with CB-SEM, PLS-SEM allows the use of a small sample size and formative measures in exploratory research. An underlying finding of this review is that many construction management articles adopting PLS-SEM focus too much on applying advanced statistics to a small sample and not enough on understanding the exploratory problem they are trying to solve. Construction management researchers should be aware that PLS-SEM is no panacea and that PLS-SEM applications need a better conceptualization and execution of surveys, careful model development and evaluation, and realistic analyses based on adequate reasoning and proof. This paper intends to contribute to improving the application of PLS-SEM in construction management research by highlighting the misunderstandings and limitations that construction management researchers face. It thereby offers entry points for the future application of PLS-SEM in this field.

References

[1]

Ashraf M (2004). A critical look at the use of group projects as a pedagogical tool. Journal of Education for Business, 79(4): 213–216

[2]

Bagozzi R P (2011). Measurement and meaning in information systems and organizational research: Methodological and philosophical foundations. Management Information Systems Quarterly, 35(2): 261–292

[3]

Bagozzi R P, Yi Y (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Science, 16(1): 74–94

[4]

Becker J M, Rai A, Ringle C M, Völckner F (2013). Discovering unobserved heterogeneity in structural equation models to avert validity threats. Management Information Systems Quarterly, 37(3): 665–694

[5]

Bentler P M (1980). Multivariate analysis with latent variables: Causal modeling. Annual Review of Psychology, 31(1): 419–456

[6]

Bjorvatn T, Wald A (2018). Project complexity and team-level absorptive capacity as drivers of project management performance. International Journal of Project Management, 36(6): 876–888

[7]

Blomquist T, Hällgren M, Nilsson A, Söderholm A (2010). Project-as-practice: In search of project management research that matters. Project Management Journal, 41(1): 5–16

[8]

Bollen K, Lennox R (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110(2): 305–314

[9]

Bröchner J, Björk B C (2008). Where to submit? Journal choice by construction management authors. Construction Management and Economics, 26(7): 739–749

[10]

Cao D, Li H, Wang G (2014). Impacts of isomorphic pressures on BIM adoption in construction projects. Journal of Construction Engineering and Management, 140(12): 04014056

[11]

Chin W W (1998). The partial least squares approach to structural equation modelling. In: Marcoulides G A, ed. Modern Methods for Business Research. London: Lawrence Erlbaum Associates, 295–336

[12]

Chin W W (2010). How to write up and report PLS analyses. In: Esposito Vinzi V, Chin W W, Henseler J, Wang H, eds. Handbook of Partial Least Squares. Berlin: Springer, 655–690

[13]

Chin W W, Peterson R A, Brown S P (2008). Structural equation modeling in marketing: Some practical reminders. Journal of Marketing Theory and Practice, 16(4): 287–298

[14]

Comrey A L (1993). A First Course in Factor Analysis. London: Psychology Press

[15]

Cronbach L J (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3): 297–334

[16]

Diamantopoulos A, Riefler P, Roth K P (2008). Advancing formative measurement models. Journal of Business Research, 61(12): 1203–1218

[17]

Diamantopoulos A, Winklhofer H M (2001). Index construction with formative indicators: An alternative to scale development. Journal of Marketing Research, 38(2): 269–277

[18]

Dijkstra T K (2010). Latent variables and indices: Herman Wold’s basic design and partial least squares. In: Esposito Vinzi V, Chin W W, Henseler J, Wang H, eds. Handbook of Partial Least Squares. Berlin: Springer, 23–46

[19]

Fellows R F, Liu A M M (2015). Research Methods for Construction. 4th ed. Hoboken, NJ: John Wiley & Sons

[20]

Fornell C, Bookstein F L (1982). Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. Journal of Marketing Research, 19(4): 440–452

[21]

Fornell C, Larcker D F (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1): 39–50

[22]

Goertz G, Mahoney J (2012). A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences. Princeton: Princeton University Press

[23]

Grant M J, Booth A (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2): 91–108

[24]

Gregor S, Klein G (2014). Eight obstacles to overcome in the theory testing genre. Journal of the Association for Information Systems, 15(11): 5

[25]

Gudergan S P, Ringle C M, Wende S, Will A (2008). Confirmatory tetrad analysis in PLS path modeling. Journal of Business Research, 61(12): 1238–1249

[26]

Hahn C, Johnson M D, Herrmann A, Huber F (2002). Capturing customer heterogeneity using a finite mixture PLS approach. Schmalenbach Business Review, 54(3): 243–269

[27]

Hair Jr J F, Howard M C, Nitzl C (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. Journal of Business Research, 109: 101–110

[28]

Hair Jr J F, Ringle C M, Sarstedt M (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2): 139–152

[29]

Hair Jr J F, Sarstedt M, Hopkins L, Kuppelwieser V G (2014). Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research. European Business Review, 26(2): 106–121

[30]

Hair Jr J F, Sarstedt M, Ringle C M, Mena J A (2012). An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3): 414–433

[31]

Hartmann A, Hietbrink M (2013). An exploratory study on the relationship between stakeholder expectations, experiences and satisfaction in road maintenance. Construction Management and Economics, 31(4): 345–358

[32]

Harwood T G, Garry T (2003). An overview of content analysis. Marketing Review, 3(4): 479–498

[33]

Henseler J, Chin W W (2010). A comparison of approaches for the analysis of interaction effects between latent variables using partial least squares path modeling. Structural Equation Modeling, 17(1): 82–109

[34]

Hulland J (1999). Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strategic Management Journal, 20(2): 195–204

[35]

Jarvis C B, MacKenzie S B, Podsakoff P M (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research, 30(2): 199–218

[36]

Kante M, Chepken C, Oboko R (2018). Partial least square structural equation modelling’s use in information systems: An updated guideline of practices in exploratory settings. Journal of Research & Innovation, 6(1): 49–67

[37]

Klein G, Müller R (2019). Quantitative research submissions to project management. Project Management Journal, 50(3): 263–265

[38]

Koskela L (2017). Why is management research irrelevant? Construction Management and Economics, 35(1–2): 4–23

[39]

Lachenbruch A, Cohen J (1989). Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates

[40]

Mohamed S (2002). Safety climate in construction site environments. Journal of Construction Engineering and Management, 128(5): 375–384

[41]

Nadhim E A, Hon C K H, Xia B, Stewart I, Fang D (2018). Investigating the relationships between safety climate and safety performance indicators in retrofitting works. Construction Economics and Building, 18(2): 110–129

[42]

Ning Y, Ling F Y Y (2013). Reducing hindrances to adoption of relational behaviors in public construction projects. Journal of Construction Engineering and Management, 139(11): 04013017

[43]

Nitzl C, Chin W W (2017). The case of partial least squares (PLS) path modeling in managerial accounting research. Journal of Management Control, 28(2): 137–156

[44]

Nunnally J C (1978). Psychometric Theory. 2nd ed. New York: McGraw-Hill

[45]

Peter J P (1981). Construct validity: A review of basic issues and marketing practices. Journal of Marketing Research, 18(2): 133–145

[46]

Petter S, Straub D, Rai A (2007). Specifying formative constructs in information systems research. Management Information Systems Quarterly, 31(4): 623–656

[47]

Ringle C M, Sarstedt M, Mitchell R, Gudergan S P (2018). Partial least squares structural equation modeling in HRM research. International Journal of Human Resource Management, 31(12): 1617–1643

[48]

Ringle C M, Sarstedt M, Straub D W (2012). A critical look at the use of PLS-SEM in MIS quarterly. Management Information Systems Quarterly, 36(1): iii–xiv

[49]

Rossiter J R (2002). The C-OAR-SE procedure for scale development in marketing. International Journal of Research in Marketing, 19(4): 305–335

[50]

Sharma P N, Shmueli G, Sarstedt M, Danks N, Ray S (2019). Prediction-oriented model selection in partial least squares path modeling. Decision Sciences, online, doi: 10.1111/deci.12329

[51]

Spector P (1992). Summated Rating Scale Construction. London: Sage Publications Inc.

[52]

Streukens S, Leroi-Werelds S (2016). Bootstrapping and PLS-SEM: A step-by-step guide to get more out of your bootstrap results. European Management Journal, 34(6): 618–632

[53]

Suprapto M, Bakker H L M, Mooi H G, Hertogh M J C M (2016). How do contract types and incentives matter to project performance? International Journal of Project Management, 34(6): 1071–1087

[54]

Themistocleous G, Wearne S H (2000). Project management topic coverage in journals. International Journal of Project Management, 18(1): 7–11

[55]

Turner J R, Müller R (2003). On the nature of the project as a temporary organization. International Journal of Project Management, 21(1): 1–8

[56]

Weir J P (2005). Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of Strength and Conditioning Research, 19(1): 231–240

[57]

Wen Q, Qiang M, An N (2017). Collaborating with construction management consultants in project execution: Responsibility delegation and capability integration. Journal of Construction Engineering and Management, 143(7): 04017021

[58]

Wetzels M, Odekerken-Schröder G, van Oppen C (2009). Using PLS path modeling for assessing hierarchical construct models: Guidelines and empirical illustration. Management Information Systems Quarterly, 33(1): 177–195

[59]

Wing C K (1997). The ranking of construction management journals. Construction Management and Economics, 15(4): 387–398

[60]

Wold S, Sjöström M, Eriksson L (2001). PLS-regression: A basic tool of chemometrics. Chemometrics and Intelligent Laboratory Systems, 58(2): 109–130

[61]

Xiong B, Skitmore M, Xia B (2015). A critical review of structural equation modeling applications in construction research. Automation in Construction, 49(Part A): 59–70

RIGHTS & PERMISSIONS

The Author(s) 2021. This article is published with open access at link.springer.com and journal.hep.com.cn
