Method

Sample and Procedure

An experimental study in a controlled environment was chosen to simulate an authentic integrated business process supported by a real ERP system (namely, SAP). Students enrolled in an introductory information systems (IS) course at a large Midwestern public university participated in the game as part of their course requirements. A total of 166 undergraduate students from six classes participated. Within each class, participants were randomly assigned to eight teams of two to four students. Approximately 38% of the sample was female, and the average age of the participants was 20.5 years.

The research methodology involved the use of a simulation game called ERPsim (Léger, 2006; Léger et al., 2011), designed to recreate a realistic business context in which participants manage the main business processes of an organization using the SAP ERP system. Several similar studies have used this ERP simulation (for example, Caya, Léger, Grebot, & Brunelle, 2014). The ERPsim product includes several different simulation games (for example, the distribution, logistics, and manufacturing games). We chose the Distribution Game (also called the Water Bottle Game) because the student participants had little prior knowledge of or experience with ERP systems. Because of this lack of experience, and because all subjects attended the same ERP training session, we were able to control for the effect of individuals' prior experience on the relationships we were testing. One week before the experiment, participants completed a pretest survey that measured their prior knowledge of ERP systems.

A threat to internal validity could arise from assigning participants to different teams, of different sizes, in different classes, which could produce groups of individuals with noticeably different characteristics. We therefore checked for assignment bias to rule out this possible confounding effect and found no significant differences in ERP knowledge across the six classes (F = .537, p = .780) or across team sizes (F = .761, p = .469), suggesting there was no assignment bias.
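The randomization check above is a one-way ANOVA. As a minimal sketch, with made-up pretest scores rather than the study's data, the F statistic can be computed directly:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across independent groups."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-groups and within-groups sums of squares:
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical pretest ERP-knowledge scores for three classes (illustrative only):
scores = [[3.0, 4.0, 5.0], [4.0, 5.0, 6.0], [3.5, 4.5, 5.5]]
f_stat = one_way_anova_f(scores)
```

A small F statistic whose p-value exceeds .05 (the p-value can be obtained from the F distribution, e.g. via `scipy.stats.f_oneway`) indicates no detectable assignment bias, as the authors report.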

Construct Measurement

The measurement items used in this study were adapted from previous studies as shown in Table 1.1.

Table 1.1 Measurement Items

Individual effort (adapted from Hong, Thong, & Tam, 2004; Oh & Jasper, 2006)
Seven-point scales anchored with "strongly disagree" and "strongly agree"; stem: "In achieving a higher performance:"
  IE1: I tried to make an accurate decision.
  IE2: I expended a large amount of effort in this simulation game.
  IE3: I paid a lot of attention while playing this simulation game.

Pre- and post-knowledge (adapted from Bhattacherjee & Sanford, 2006)
Seven-point scales anchored with "strongly disagree" and "strongly agree"; measured before and after the simulation game
  KN1: The level of my ERP knowledge is high.
  KN2: The level of my ERP experience is high.
  KN3: The level of my ERP competency is high.

Involvement with the simulation game (adapted from Bhattacherjee & Sanford, 2006)
Seven-point semantic differential scales; stem: "For me, learning the SAP ERP simulation game is"
  INV1: Unimportant/Important
  INV2: Irrelevant/Relevant
  INV4: Means nothing to me/Means a lot to me

Willingness to learn ERP systems (adapted from Davis et al., 1989)
Seven-point scales anchored with "strongly disagree" and "strongly agree"
  WL1: I intend to learn about ERP systems.
  WL2: I predict that I will learn about ERP systems.
  WL3: I am willing to learn about ERP systems.

Data Analysis and Results

We used structural equation modeling (SEM) to analyze the proposed model. SEM is a flexible technique, applicable to both experimental and nonexperimental data (Kline, 2011). We conducted the SEM analysis in AMOS 22.0, which estimates all model parameters simultaneously and accounts for measurement error in each indicator, improving accuracy (Kline, 2011).

Measurement Model

Before analyzing the structural model, a confirmatory factor analysis (CFA) was conducted in AMOS to check the reliability and validity of the constructs. Composite reliability (CR) is commonly used to assess the internal consistency of a construct. Table 1.2 shows the CR values of the constructs in the research model; all exceed .7, the generally accepted minimum (Hair, Black, Babin, & Anderson, 2010). As shown in Table 1.2, the average variance extracted (AVE) values are greater than .5, indicating that the model has convergent validity (Fornell & Larcker, 1981). Discriminant validity was assessed by checking that the square root of the AVE for each construct exceeds that construct's correlations with the other constructs (Chin, 1998). As Table 1.2 demonstrates, the constructs' discriminant validity is acceptable.
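CR and AVE are simple functions of the standardized factor loadings. A minimal sketch, using illustrative loadings within the range reported in Table 1.2 rather than the study's actual loadings:

```python
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where the standardized error variance of each item is 1 - loading^2.
    s = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error_var)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.72, 0.80, 0.86]   # hypothetical three-item construct
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
```

With these illustrative loadings, CR clears the .7 cutoff and AVE clears the .5 cutoff, mirroring the checks applied in the text.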

Table 1.2 Confirmatory Factor Analysis Results

| Construct                 | Mean | SD   | CR  | AVE | Factor Loading Range | (1) | (2) | (3) | (4) |
|---------------------------|------|------|-----|-----|----------------------|-----|-----|-----|-----|
| (1) Individual Effort     | 5.94 | .97  | .93 | .81 | .72-.86              | .90 |     |     |     |
| (2) Knowledge Update*     | 1.31 | 1.70 | .85 | .65 | .88-.95              | .44 | .81 |     |     |
| (3) Involvement           | 5.33 | 1.35 | .94 | .85 | .87-.94              | .33 | .29 | .92 |     |
| (4) Willingness to Learn  | 5.01 | 1.46 | .93 | .81 | .87-.94              | .67 | .34 | .37 | .90 |

Diagonal values (shown in bold in the original) are the square roots of AVE.
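The Fornell-Larcker comparison can be verified mechanically: each construct's square root of AVE must exceed its correlations with every other construct. The sketch below uses the values transcribed from Table 1.2:

```python
# Diagonal of Table 1.2: square roots of AVE, keyed by construct number.
sqrt_ave = {1: 0.90, 2: 0.81, 3: 0.92, 4: 0.90}

# Below-diagonal inter-construct correlations from Table 1.2.
correlations = {(2, 1): 0.44, (3, 1): 0.33, (3, 2): 0.29,
                (4, 1): 0.67, (4, 2): 0.34, (4, 3): 0.37}

def discriminant_ok(sqrt_ave, correlations):
    # Both constructs in every pair must have sqrt(AVE) above their correlation.
    return all(sqrt_ave[i] > r and sqrt_ave[j] > r
               for (i, j), r in correlations.items())

result = discriminant_ok(sqrt_ave, correlations)
```

Running this check on the reported values confirms the criterion holds for all four constructs, consistent with the conclusion in the text.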

To evaluate the CFA results, we checked several commonly used goodness-of-fit indices (Table 1.3). As Table 1.3 shows, all indices were satisfactory for both the measurement and structural models (Hair et al., 2010).

Table 1.3 Goodness-of-Fit Indices

| Model             | χ²(df)      | χ²/df | GFI  | AGFI | NFI  | CFI  | SRMR | RMSEA |
|-------------------|-------------|-------|------|------|------|------|------|-------|
| Good-fit range    |             | <3.0  | >.90 | >.80 | >.90 | >.90 | <.09 | <.08  |
| Measurement model | 64.08 (48)  | 1.34  | .94  | .91  | .96  | .99  | .036 | .045  |
| Structural model  | 117.32 (79) | 1.19  | .92  | .88  | .93  | .98  | .066 | .054  |
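The threshold comparison behind Table 1.3 can be expressed as a quick programmatic check. The index values below are transcribed from the table, and the cutoffs are the "good-fit range" row:

```python
# Cutoffs from Table 1.3: ("<", x) means the index must be below x, (">", x) above.
thresholds = {"chi2_df": ("<", 3.0), "GFI": (">", 0.90), "AGFI": (">", 0.80),
              "NFI": (">", 0.90), "CFI": (">", 0.90),
              "SRMR": ("<", 0.09), "RMSEA": ("<", 0.08)}

models = {
    "measurement": {"chi2_df": 1.34, "GFI": 0.94, "AGFI": 0.91,
                    "NFI": 0.96, "CFI": 0.99, "SRMR": 0.036, "RMSEA": 0.045},
    "structural":  {"chi2_df": 1.19, "GFI": 0.92, "AGFI": 0.88,
                    "NFI": 0.93, "CFI": 0.98, "SRMR": 0.066, "RMSEA": 0.054},
}

def fits_well(indices):
    # An index passes when it falls on the required side of its cutoff.
    return all(indices[name] < cut if op == "<" else indices[name] > cut
               for name, (op, cut) in thresholds.items())

all_fit = all(fits_well(m) for m in models.values())
```

Both models pass every cutoff, matching the "satisfactory fit" conclusion drawn from Table 1.3.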

Structural Model

We tested the hypothesized causal relationships among the constructs of the model. The model yielded a good fit to the data (see Table 1.3). Figure 1.2 shows the path diagram for the model, along with the estimated standardized path coefficients, the squared multiple correlations, and the significance levels of the paths. The findings support the conceptual model: all hypotheses were supported.

Figure 1.2 Results of the structural model

The path from individual effort to perceived knowledge update has a coefficient of .30 (p < .001), and the path from individual effort to involvement has a coefficient of .38 (p < .001); these results support H1 and H2. Knowledge update has a significant effect on both involvement (β = .21, p < .01) and willingness to learn (β = .20, p < .01), supporting H3 and H4. Finally, the path from involvement to willingness to learn is also significant and positive (β = .58, p < .001), supporting H5.
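For readers without AMOS, the five hypothesized paths can be written in lavaan-style model syntax, as accepted by open-source SEM tools such as the semopy Python package. The construct names below are our own shorthand, and this is a sketch of the hypothesized structure, not the authors' AMOS specification:

```python
# H1-H5 as lavaan-style regressions, one line per dependent construct.
model_desc = """
knowledge_update ~ individual_effort                    # H1
involvement ~ individual_effort + knowledge_update      # H2, H3
willingness_to_learn ~ knowledge_update + involvement   # H4, H5
"""
```

With semopy, something like `semopy.Model(model_desc).fit(data)` would estimate these paths from item-level data; here the string simply documents which regressions the structural model contains.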

The structural model shows that individual effort explains 8.8% of the variance in perceived knowledge update, and these two variables together explain 23.9% of the variance in involvement. Finally, perceived knowledge update and involvement jointly explain 47.5% of the variance in willingness to learn.
