
Preliminary Analyses

Before running "blind" statistical tests, I check that the assumptions underlying them are true. In addition, I like to get some first impressions of the data. My objective is not a complete understanding of all possible relationships among all the variables. For example, in Step 2, variable and model selection, I decided that my first goal was to determine which of the variables collected had an influence on effort. To achieve that goal, I follow the steps described in this section before building the multi-variable model (Step 4).

Graphs

Histograms  To start, I look at a graph of each numerical variable individually to see how many small, medium, and large values it has; that is, its distribution. Such graphs are called histograms.

Why Do It?  I want to see if the variables are normally distributed. Many statistical techniques assume that the underlying data is normally distributed, so you should check whether it is. A normal distribution is also known as a bell-shaped curve. Many of us were graded on such curves at large competitive universities. In a bell-shaped curve, most values fall in the middle, with few very high and very low values. For example, if an exam is graded and the results are fit to a normal distribution (Figure 1.1), most students will get a C. Fewer students will get a B or a D. And fewer still will receive an A or an F. The average test score will be the midpoint of the C grade, whether that score is 50 or 90 out of 100. That does not always seem very fair, does it? You can learn more about normal distributions and why they are important in Chapter 6.

Figure 1.1 Example of a normal distribution

How to Do It  To create a histogram for the variable t13 manually, you would count how many 1s there are, how many 2s, etc. Then, you would make a bar chart with either the number of observations or the percentage of observations on the y-axis for each value of t13. However, you don't need to waste your time doing this by hand.

Let a statistical analysis tool do it for you. You will need to learn how to use a statistical analysis tool to analyze data. I have used SAS, Excel, and Stata in my career. My opinions regarding each are: SAS was fine when I worked for large organizations, but far too expensive when I had to pay for it myself. Excel is not powerful or straightforward enough for my purposes. Stata is relatively inexpensive (no yearly licensing fee), does everything I need, and is very easy to use (see www.stata.com). However, no matter which statistical software you use, the output should always look the same, and it is the interpretation of that output on which this book focuses.
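
For instance, in Stata (version 8 or later) the histogram described above takes a single command. The following is a minimal sketch, assuming the chapter's dataset is already loaded in memory with the variable names used here:

. * One bar per integer score (1-5) thanks to "discrete";
. * "percent" puts the percentage of observations on the y-axis.
. histogram t13, discrete percent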

Example  The distributions of effort and size show that they are not normally distributed (Figures 1.2 and 1.3). The database contains few projects with very high effort or very large size, and many projects with low effort and small size. This is typical of a software development project database. Not only are effort and size not normally distributed in this sample, but we would not expect them to be normally distributed in the population of all software development projects.

Figure 1.2 Distribution of effort

Figure 1.3 Distribution of size

To approximate a normal distribution, we must transform these variables. A common transformation is to take their natural log (ln). Taking the natural log makes large values smaller and brings the data closer together. For example, take two project sizes of 100 and 3000 function points. 3000 is much bigger than 100. If I take the ln of these numbers, I find that ln(100) = 4.6 and ln(3000) = 8.0. These transformed sizes are much closer together. As you can see, taking the natural log of effort and size more closely approximates a normal distribution (Figures 1.4 and 1.5).
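
In Stata, this transformation takes one line per variable. The following is a minimal sketch, again assuming the chapter's variable names; ln() is Stata's natural log function:

. * Create the transformed variables used in the rest of the analysis.
. generate leffort = ln(effort)
. generate lsize = ln(size)
. * Reproduce the worked example from the text.
. display "ln(100) = " ln(100) "   ln(3000) = " ln(3000)
. * Re-check the distributions after transformation.
. histogram leffort, percent
. histogram lsize, percent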

Figure 1.4 Distribution of ln(effort)

Figure 1.5 Distribution of ln(size)

Graphs of staff application knowledge (t13) and staff tool skills (t14) look more normally distributed (Figures 1.6 and 1.7). Most projects have an average value of 3. Additionally, in the larger multi-company database from which this subset was taken, the distributions of these factors are approximately normal. In fact, the definitions of the factors were chosen especially so that most projects would be average. These variables do not need any transformation.

Figure 1.6 Distribution of t13

Figure 1.7 Distribution of t14

What to Watch Out For

  • Just because a variable's values are numbers does not mean they have any numerical sense. For example, application type (app) might have been coded as 1, 2, 3, and 4 instead of CustServ, MIS, TransPro, and InfServ. Application type (app) is a categorical variable with a nominal scale; that is, its values cannot be arranged in any meaningful order. I can arrange the values in any order I want: MIS before CustServ, for example. I suggest giving these types of variables meaningful names instead of numbers before you start analyzing the data. It will help you remember what they are. (You will learn more about variable types in Chapter 6.)

  • On the other hand, there may be categorical variables with meaningful names that do have numerical sense. For example, staff application knowledge (t13) could have been coded as very low, low, average, high, and very high instead of 1, 2, 3, 4, and 5 (often referred to as a Likert scale). Staff application knowledge (t13) is a categorical variable whose values can be arranged in a meaningful order. I suggest transforming these types of variables into ordered numbers before you start analyzing the data (see the sketch following this list). Then, check to see if they are normally distributed. If they are approximately normally distributed, I treat them as numerical variables for the rest of the analysis. If they are not normally distributed, and I have no good reason to expect that they would be in the population of all software development projects, I treat them as categorical variables. It is common practice in the market research profession to treat Likert-type variables as numerical data. As you will see, it is easier to analyze numerical-type data than true categorical data.
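
To illustrate both bullets, here is a hedged Stata sketch; the raw variable names appstring and t13string are hypothetical stand-ins for however the data actually arrived:

. * Nominal variable: encode attaches the value labels (CustServ, MIS, ...)
. * to arbitrary numeric codes, keeping the output readable.
. encode appstring, generate(app)
. * Likert-scale variable: map the ordered labels onto the numbers 1-5.
. generate t13 = 1 if t13string == "very low"
. replace t13 = 2 if t13string == "low"
. replace t13 = 3 if t13string == "average"
. replace t13 = 4 if t13string == "high"
. replace t13 = 5 if t13string == "very high"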

Two-Dimensional Graphs  I also make graphs of the dependent variable against each independent numerical variable. In this example, I am interested in the relationships between effort and size, effort and staff application knowledge (t13), and effort and staff tool skills (t14).

Why Do It?  A picture is worth a thousand words. I highly recommend visualizing any relationship that might exist between the dependent and independent variables before running "blind" statistical tests. It is important to see if the relationship is linear, as our statistical tests are based on linear relationships and will "ignore" non-linear ones. A relationship is linear if you can fit one straight line through the data points and that line represents them well.

Example  I plot these graphs using the transformed data. We can see in Figure 1.8 that there appears to be a linear relationship between ln(effort) and ln(size). As project size increases, the amount of effort needed increases. Figure 1.9 gives the impression that there is no relationship between effort and staff application knowledge (t13). Conversely, Figure 1.10 seems to suggest that less effort is required for projects with higher levels of staff tool skills (t14). These are first impressions that will be verified through statistical tests.

Figure 1.8 ln(effort) vs. ln(size)

Figure 1.9 ln(effort) vs. t13

Figure 1.10 ln(effort) vs. t14

Another good reason to use a log transformation is to make a non-linear relationship more linear. Figure 1.11 shows the relationship between the variables effort and size before the log transformation. As you can see, the relationship in Figure 1.8 is much more linear than the relationship in Figure 1.11.

Figure 1.11 effort vs. size
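
In Stata (version 8 or later), each of these two-dimensional graphs takes one command; a minimal sketch using the transformed variables created earlier:

. * Transformed relationship (Figure 1.8) and raw relationship (Figure 1.11).
. scatter leffort lsize
. scatter effort size
. * Effort against the two staff skill factors (Figures 1.9 and 1.10).
. scatter leffort t13
. scatter leffort t14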

What to Watch Out For

  • Non-linear relationships.

  • Outliers—that is, any data points (projects) far away from the others. In an extreme case, an outlier can distort the scale, causing all the other projects to look as if they are grouped together in a little cloud. All the straight lines fit to the data will try to go through the outlier, and will treat the cloud of data (that is, all the other projects) with less importance. Remove the outlier(s) and re-plot the data to see if there is any relationship hidden in the cloud. See Chapter 2 for an example where an outlier is detected and removed.

Tables

I make tables of the average value of the dependent variable and the number of observations it is based on for each value of each categorical variable. In this example, the tables will show the average value of effort for each application type, and for Telon use.

Why Do It?  We make tables to see if there is a big difference in the effort needed by category and to start formulating possible reasons for this.

Example  From Example 1.4, we learn that on average, transaction processing (TransPro) applications require the highest effort, then customer service (CustServ) applications, then MIS applications, and finally, information service (InfServ) applications. Why is this? Answering this question will be important for the interpretation phase of the analysis. Example 1.5 tells us that, on average, projects that used Telon required almost twice as much effort as projects that did not. Is this because they were bigger in size, or could there be another explanation?

Example 1.4

. table app, c(n effort mean effort)

Application Type    N(effort)    mean(effort)
CustServ                    6            7872
MIS                         3            4434
TransPro                   20           10816
InfServ                     5            4028

Example 1.5

. table telonuse, c(n effort mean effort)

Telon Use    N(effort)    mean(effort)
No                  27            7497
Yes                  7           13510

What to Watch Out For  Remember that we still need to check the relationships in Examples 1.4 and 1.5 to see if they are statistically significant. Even if there appears to be a big difference in the average values, it may not really be true: a single project with very high effort can inflate a category's average.
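
One quick way to see whether a single project is driving a category's average is to ask table for more than the mean. The following sketch uses the same table syntax as Examples 1.4 and 1.5; a median far below the mean, or a large standard deviation, suggests that one expensive project is pulling the average up:

. table app, c(n effort mean effort median effort sd effort)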

Correlation Analysis

Another assumption of the statistical procedure I use to build a multi-variable model is that independent variables are independent; that is, they are not related to each other. In our example, there should be no strong relationships among the variables: size, t13, t14, app, and telonuse. There is a very quick way to check if the numerical variables, size, t13, and t14, are independent: correlation analysis. If some of the numerical variables were collected using an ordinal or quasi-interval Likert scale (like t13 and t14), I use Spearman's rank correlation coefficient because it tests the relationships of orders rather than actual values. (See Chapter 6 for scale definitions.) Another important feature of Spearman's rank correlation coefficient is that it is less sensitive to extreme values than the standard Pearson correlation coefficient.

Two variables will be highly positively correlated if low ranked values of one are nearly always associated with low ranked values of the other, and high ranked values of one are nearly always associated with high ranked values of the other. For example, do projects with very low staff tool skills always have very low staff application knowledge, too; are average tool skills associated with average application knowledge, high tool skills with high application knowledge, etc.? If such a relationship is nearly always true, the correlation coefficient will be close to 1.

Two variables will be highly negatively correlated if low ranked values of one are nearly always associated with high ranked values of the other, and vice-versa. For example, do the smallest projects (smallest in size) always have the highest staff application knowledge, and do the biggest projects always have the lowest staff application knowledge? If such a situation is nearly always true, the correlation coefficient will be close to –1. Variables that are not correlated at all will have a correlation coefficient close to zero. You will learn more about correlation analysis in Chapter 6.

Why Do It?  Perform a correlation analysis as a quick check to see if there are possible violations of the independence assumption. Later, as I build the multi-variable model, I will use this information. For the moment, I only make note of it.

Example  Example 1.6 shows the statistical output for the Spearman's rank correlation coefficient test between the variables size and t13. The number of observations equals 34. The correlation coefficient is "Spearman's rho," which is 0.1952. Already it is clear that these two variables are not very correlated, as this number is closer to 0 than to 1. The "Test of Ho" tests if size and t13 are independent (i.e., not correlated). If Pr > |t| is greater than 0.05, then size and t13 are independent. Because 0.2686 > 0.05, we conclude that this is indeed the case. (Pr is an abbreviation for probability; t means that the t distribution was used to determine the probability. You will learn more about this in Chapter 6.)

Example 1.6

. spearman size t13

 Number of obs =      34
Spearman's rho =  0.1952

Test of Ho: size and t13 independent
    Pr > |t| = 0.2686

From Example 1.7, we learn that the variables size and t14 have a Spearman's correlation coefficient of -0.3599. We cannot accept that size and t14 are independent because 0.0365 is less than 0.05. Thus, we conclude that size and t14 are negatively correlated.

Example 1.7

. spearman size t14

 Number of obs =      34
Spearman's rho = -0.3599

Test of Ho: size and t14 independent
    Pr > |t| = 0.0365

We conclude from the results in Example 1.8 that t13 and t14 are not correlated.

Example 1.8

. spearman t13 t14

 Number of obs =      34
Spearman's rho = -0.0898

Test of Ho: t13 and t14 independent
    Pr > |t| = 0.6134
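
Rather than running the pairwise tests of Examples 1.6 through 1.8 one at a time, recent Stata versions also let spearman take a whole variable list; a sketch:

. * Prints a matrix with rho and its p-value for every pair of variables.
. spearman size t13 t14, stats(rho p)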

What to Watch Out For 

  • If the absolute value of Spearman's rho is greater than or equal to 0.75, and the Pr>|t| value equals 0.05 or less, then the two variables are strongly correlated and should not be included in the final model together.

  • Many statistical analysis packages will automatically calculate the standard Pearson correlation coefficient unless you state otherwise. Make sure you request Spearman's correlation.

  • It does not matter if you use the original variable (for example, size) or the transformed variable (ln(size)) to calculate Spearman's correlation; the results will be the same.

Categorical Variable Tests  Now that we have checked the numerical variables' independence with correlation analysis, perhaps you are asking: What about the categorical variables? It takes much more time to check the independence of every possible relationship between the categorical variables and between the categorical variables and numerical variables, especially in a large database. It is for this reason that I only carry out these checks on the independent variables present in the final multi-variable model in Step 5, when I check the model.

Stepwise Regression Analysis

Performing multiple regression analyses allows us to determine the relative importance of each independent numerical variable (ln(size), t13, t14) in explaining the dependent variable (ln(effort)).

Why Do It?  Because stepwise regression analysis is automatic and very simple to run, I always like to see how good a model can be built just with the numerical data. In addition to learning whether the non-categorical variables collected are very important indicators of effort, this also gives me a quick idea of what performance the categorical data is going to have to beat.

Example  The output in Example 1.9 shows the results of running a forward stepwise regression procedure on our data set. Forward stepwise regression means that the model starts "empty" and then the variables most related to leffort (abbreviation of ln(effort) in statistical output) are added one by one in order of importance until no other variable can be added to improve the model. You must run this procedure using the transformed variables.

You can see that first, lsize (abbreviation of ln(size) in statistical output) is added, then t14 is added. No further variation in leffort is explained by t13, so it is left out of the model. In Chapter 6, you will learn how to interpret every part of this output; for now, I will just concentrate on the values in bold. These are the values that I look at to determine the performance and significance of the model. I look at the number of observations (Number of obs) to see if the model was built using all the projects. The model was built using all 34 observations. I look at Prob > F to determine if the model is significant; in other words, can I believe this model? (Prob is an abbreviation for probability; F means that the F distribution was used to determine the probability. You will learn more about this in Chapter 6.) If Prob > F is a number less than or equal to 0.05, then I accept the model. Here it is 0.0000, so the model is significant. I look at the adjusted R-squared value (Adj R-squared) to determine the performance of the model. The closer it is to 1, the better. The Adj R-squared of 0.7288 means that this model explains nearly 73% (72.88%) of the variation in leffort. This is a very good result: even without the categorical variables, I am sure to come up with a model that explains 73% of the variation in effort. I am very interested in finding out more about which variables explain this variation.

I can see from the output that lsize and t14 are the right-hand-side (RHS) explanatory variables. I also check the significance of each explanatory variable and the constant (_cons) in the column P > |t|. If P > |t| is a number less than or equal to 0.05, then the individual variable is significant; that is, it is not in the model by chance. (P is yet another abbreviation for probability; t means that the t distribution was used to determine the probability.)

Example 1.9

. sw regress leffort lsize t13 t14, pe(.05)

begin with empty model
p = 0.0000 < 0.0500   adding   lsize
p = 0.0019 < 0.0500   adding   t14

Source       SS          df    MS                 Number of obs =      34
Model        25.9802069   2    12.9901035         F(2,31)       =   45.35
Residual     8.88042769  31    .286465409         Prob > F      =  0.0000
Total        34.8606346  33    1.05638287         R-squared     =  0.7453
                                                  Adj R-squared =  0.7288
                                                  Root MSE      =  .53522

leffort      Coef.       Std. Err.     t     P>|t|     [95% Conf. Interval]
lsize         .7678266   .1148813    6.684   0.000      .5335247   1.002129
t14          -.3856721   .1138331   -3.388   0.002     -.6178361   -.153508
_cons        5.088876    .8764331    5.806   0.000      3.301379   6.876373

The output in Example 1.10 shows the results of running a backward stepwise regression procedure on our data set. Backward stepwise regression means that the model starts "full" (with all the variables) and then the variables least related to effort are removed one by one in order of unimportance until no further variable can be removed to improve the model. You can see here that t13 was removed from the model. In this case, the results are the same for forward and backward stepwise regression; however, this is not always the case. Things get more complicated when some variables have missing observations.

Example 1.10

. sw regress leffort lsize t13 t14, pr(.05)

begin with full model
p = 0.6280 >= 0.0500   removing t13

Source       SS          df    MS                 Number of obs =      34
Model        25.9802069   2    12.9901035         F(2,31)       =   45.35
Residual     8.88042769  31    .286465409         Prob > F      =  0.0000
Total        34.8606346  33    1.05638287         R-squared     =  0.7453
                                                  Adj R-squared =  0.7288
                                                  Root MSE      =  .53522

leffort      Coef.       Std. Err.     t     P>|t|     [95% Conf. Interval]
lsize         .7678266   .1148813    6.684   0.000      .5335247   1.002129
t14          -.3856721   .1138331   -3.388   0.002     -.6178361   -.153508
_cons        5.088876    .8764331    5.806   0.000      3.301379   6.876373
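
A syntax note: in current Stata releases, the sw prefix shown in Examples 1.9 and 1.10 has been replaced by the stepwise prefix command. Hedged modern equivalents of the two procedures:

. * Forward selection: enter variables with p < .05.
. stepwise, pe(.05): regress leffort lsize t13 t14
. * Backward elimination: remove variables with p >= .05.
. stepwise, pr(.05): regress leffort lsize t13 t14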

What to Watch Out For  Watch out for variables with lots of missing values. The stepwise regression procedure only takes into account observations with non-missing values for all variables specified. For example, if t13 is missing for half the projects, then half the projects will not be used. Check the number of observations used in the model. You may keep coming up with models that explain a large amount of the variation for a small amount of the data. If this happens, run the stepwise procedures using only the variables available for nearly every project.
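
A quick way to spot this problem before modeling is to compare the number of non-missing observations per variable; a minimal sketch:

. * The Obs column reveals variables with many missing values.
. summarize leffort lsize t13 t14
. * Count the projects with every variable present, i.e., the
. * observations the stepwise procedure would actually use.
. count if !missing(leffort, lsize, t13, t14)

If that count is much smaller than 34, rerun the stepwise procedure using only the variables that are available for nearly every project.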
