
Development of a Questionnaire to Evaluate Turnover and Retention in the IT Work Force: Art or Science?

Peter L.T. HOONAKKER*, Pascale CARAYON*^, Jen S. SCHOEPKE^
*Center for Quality and Productivity Improvement, ^Department of Industrial and Systems Engineering, University of Wisconsin-Madison, WI, USA

Abstract
In order to evaluate turnover and retention in the Information Technology (IT) Work Force (ITWF), we developed a tailor-made questionnaire that addresses specific job and organizational design factors of importance to IT workers. Early analyses show the questionnaire to be reliable and valid. However, because some companies complained about the length of the questionnaire, we decided to develop a short version. In this paper we discuss the steps taken to develop the short version of the questionnaire, the criteria used to shorten it, and the consequences for the validity and reliability of the short questionnaire.

Keywords: methods, questionnaire development, IT workforce, turnover, web-based survey

1. Introduction

Turnover and retention of skilled information technology (IT) personnel are a major issue for employers and recruiters of the IT workforce: the departure of a company's IT employees not only means the loss of personnel, knowledge, and skills, but also the loss of business opportunities (Moore & Burke, 2002). In 2001, the Information Technology Association of America (ITAA) reported that IT firms lost 15% of their IT workers while non-IT companies lost 4% (ITAA, 2002). With 92% of IT workers employed by non-IT companies (ITAA, 2002), the issue of retention and turnover of IT workers is a problem for both IT and non-IT firms. Projections from the US Bureau of Labor Statistics estimate that between 2000 and 2010, 2.5 million new IT jobs will become available (U.S. Department of Commerce, 2003). Thus, a high demand for skilled IT workers is predicted for the next decade (U.S. Department of Commerce, 2003).

The most widely used method for evaluating job and organizational factors, quality of working life (QWL), and turnover is the questionnaire survey. There is therefore a need to adapt existing surveys so that they measure turnover intention, together with its causes and consequences, for the populations underrepresented in the IT workforce. In this paper, we report results on the development of a questionnaire survey to assess turnover intention among IT professionals, with specific attention to the role of gender and minority status. This survey is based on a conceptual framework that links job and organizational factors to QWL and turnover intention (Carayon, Haims, & Kraemer, 2001).

The quality of a questionnaire survey can be assessed by examining its validity and reliability. Validity refers to the content of measurement: are we measuring what we think we are measuring? Reliability refers to the consistency of measurement: do repeated measurements of the same concept yield the same results? There are several methods to evaluate reliability and validity (Carmines & Zeller, 1990; Carayon & Hoonakker, 2001). We can evaluate reliability by measuring a concept at two different times (test-retest reliability), by looking at the internal consistency of questions that are supposed to measure the same concept, and by comparing with other methods of measurement of equal or higher quality, for example standardized (and validated) questionnaires. A measure that is often used to evaluate internal consistency is Cronbach's alpha: it is a measure of the homogeneity of a group of items in a survey or questionnaire.

Three forms of validity can be distinguished (Nunnally, 1978): predictive validity, content validity and construct validity. The content validity of a measurement instrument can be established by examining the domain represented by the questions very carefully. To develop the domain of questions of our questionnaire survey, we chose the topics and issues to be measured from a review of the literature and from our conceptual framework (Carayon et al., 2001). We also conducted interviews with IT professionals to make sure the questions were interpreted by IT professionals as we intended (Carayon et al., 2005). Asking subject-matter experts (here, IT professionals) about the clarity and completeness of a questionnaire is an often-used method for establishing its content validity (McDowell & Newell, 1987). Thus, interviewing IT professionals, analyzing the data collected, and revising the questionnaire were all steps that ensured the content validity of the questionnaire.

The construct validity of a measurement instrument can be established by statistically analyzing the measures. The abstract concept (the construct) is typically operationalized by several questions. When results of statistical analyses show that the questionnaire items have a high degree of internal consistency, one can conclude that the different questions do indeed refer to one (underlying) construct. Structural equation modeling with primary and secondary confirmatory factor analysis of the scales in a questionnaire survey is another statistical method used to evaluate construct validity (Carayon et al., 2005).
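To make the internal-consistency measure concrete, the following is a minimal sketch of how Cronbach's alpha can be computed from a respondents-by-items score matrix; the formula is the standard one, and the item scores shown are invented for illustration only, not data from this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 respondents answering a 4-item scale (1-5 Likert).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

With these invented scores the function returns roughly 0.96, i.e., a highly internally consistent set of items.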

2. Method

The overall process used to develop the questionnaire is depicted in Figure 1. The process consisted of 12 steps in three stages (A, B and C). The first stage consisted of the development of the paper-and-pencil version of the questionnaire: A1) creating the initial questionnaire from a review of the literature and a review of existing valid and reliable scales (in order to assure reliability and validity, it is recommended to use established scales as much as possible (Carmines & Zeller, 1990)); A2) conducting a pilot study with interviews to test the questionnaire with this particular population; A3) modifying the questionnaire based on feedback from the pilot study; and A4) producing the final paper-and-pencil version of the questionnaire.

Figure 1. Stages in development of the questionnaire. Stage 1, paper-and-pencil version: creation of the initial questionnaire from a review of existing scales; pilot study with interviews to test the questionnaire; modifications to the questionnaire based on feedback from the pilot study; final version of the paper-and-pencil questionnaire. Stage 2, web-based survey: study on the development of web-based surveys; pilot testing of the web-based survey; modifications to the web-based survey based on pilot testing; implementation of the final version of the web-based survey. Stage 3, short version: demand from potential participants for a short version; development of criteria for selecting items; statistical analysis on the short version; final short version.

The second stage also consisted of four steps: B1) development of the first version of the web-based questionnaire (WBQ), based on the paper-and-pencil version and a literature study of web-based survey design; B2) pilot testing of the first version of the web-based survey; B3) modifying the questionnaire based on feedback from pilot testing; and B4) implementation of the final WBQ. For a full description of stages 1 and 2, see Carayon et al. (2005). The third and last stage also consisted of four steps: C1) responding to demands from potential participants for a short version; C2) establishing criteria for including and excluding items; C3) statistical analysis on the short version to confirm reliability and validity; and C4) implementation of the short version of the web-based survey. In the rest of this paper we focus on stage three.

2.1 Stage 3 of questionnaire development

After data from five companies had been collected, a total of 628 respondents populated the database. However, women and minorities were underrepresented in the database. Therefore, we decided to specifically target organizations employing women and minorities. These organizations, however, complained about the length of the questionnaire, so we had to shorten it. In order to come up with a shorter version of the questionnaire, we had to develop criteria to reduce the number of items. Evidently, reliability and validity were of great concern. With regard to reliability, we had to make sure that excluding items would not endanger the reliability of the existing scales. We performed reliability analysis and exploratory factor analysis to reduce the number of items. The most important criterion for the reliability of the scales was that the exclusion of items should not compromise Cronbach's alpha. We used exploratory factor analysis to make sure that the principle of unidimensionality (all items load on one factor) was maintained and to identify the items with the highest factor loadings. After having selected the items, we used confirmatory factor analysis to confirm the construct validity of the shortened scales.
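The item-reduction screening described above can be sketched roughly as follows. This is a simplified illustration, not the analysis actually run for the study: loadings are approximated by the first principal component of the item correlation matrix rather than by a full exploratory factor analysis, and the simulated 4-item scale is a placeholder.

```python
import numpy as np

def alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def screen_items(items, keep=2):
    """Rank the items of one scale by their loading on a single common factor
    (approximated by the eigenvector of the correlation matrix with the largest
    eigenvalue) and report alpha before and after keeping only the
    highest-loading items."""
    corr = np.corrcoef(items, rowvar=False)
    _, eigvecs = np.linalg.eigh(corr)
    loadings = np.abs(eigvecs[:, -1])      # component with the largest eigenvalue
    ranked = np.argsort(loadings)[::-1]    # highest loading first
    kept = ranked[:keep]
    return {
        "loadings": loadings,
        "kept_items": kept,
        "alpha_full": alpha(items),
        "alpha_short": alpha(items[:, kept]) if keep > 1 else None,
    }

# Hypothetical 4-item scale answered by 200 simulated respondents.
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
items = factor @ np.array([[0.9, 0.8, 0.7, 0.4]]) + rng.normal(scale=0.5, size=(200, 4))
print(screen_items(items, keep=2))
```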

With regard to validity, three major issues play a role. The first is construct validity. As described above, we used confirmatory factor analysis to test the construct validity of the shortened scales. A second important issue is criterion-related validity. The goal of the project is to identify (gender differences in) predictors of turnover. Therefore, items and scales that are highly predictive of turnover should be maintained. We used correlation analysis to study the relation between items and scales in the questionnaire and turnover intention. A third important issue is the gender and race/ethnicity specificity of some of the items and scales. As described above, the goal is to identify (gender and race/ethnicity differences in) predictors of turnover. Therefore, we did not want to exclude items that are specifically important to the work experience of female employees or minorities. We used correlation analysis to identify gender- and race/ethnicity-specific items.
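A rough sketch of this correlation screening is given below; the column names (e.g., "turnover_intention", "gender") and the simulated data frame are placeholders for illustration, not the study's variables or data.

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the survey data set (placeholder column names).
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "job_demands": rng.normal(50, 15, n),
    "supervisory_support": rng.normal(70, 20, n),
    "ethnic_discrimination": rng.normal(20, 15, n),
    "turnover_intention": rng.normal(40, 20, n),
    "gender": rng.choice([1, 2], n),     # 1 = male, 2 = female
    "minority": rng.choice([1, 2], n),   # 1 = white, 2 = other
})

predictors = ["job_demands", "supervisory_support", "ethnic_discrimination"]

# Criterion-related screening: keep scales/items that correlate with turnover intention.
print(df[predictors].corrwith(df["turnover_intention"]))

# Gender and race/ethnicity specificity: correlations with the demographic variables.
print(df[predictors].corrwith(df["gender"]))
print(df[predictors].corrwith(df["minority"]))
```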

3. Results

Table 1. Comparison between the long and short versions of the questionnaire: number of items, means (M), standard deviations (SD), Cronbach's alphas (α), paired t-tests (t) and correlations (r).

Scale | # items (long) | M | SD | α | # items (short) | M | SD | α | t | r
IT job demands | 7 | 56.8 | 19.7 | 0.86 | 4 | 46.2 | 22.5 | 0.79 | 33.3*** | 0.94
Role ambiguity | 4 | 29.8 | 20.3 | 0.88 | 2 | 32.3 | 21.8 | 0.84 | -7.9*** | 0.93
Decision control | 4 | 42.7 | 28.9 | 0.91 | 2 | 43.3 | 28.7 | 0.84 | -1.7 ns | 0.96
Challenge | 4 | 71.8 | 21.1 | 0.84 | 1 | 70.5 | 24.3 | n.a. | 2.8*** | 0.89
Supervisory support | 4 | 71.7 | 26.1 | 0.87 | 2 | 72.2 | 28.3 | 0.81 | -1.3 ns | 0.94
Support from colleagues | 4 | 68.9 | 20.4 | 0.78 | 2 | 65.7 | 23.6 | 0.72 | 7.8*** | 0.90
Support from home | 4 | 81.7 | 20.1 | 0.77 | 2 | 73.9 | 26.7 | 0.70 | 18.1*** | 0.93
Family spills over into job | 4 | 43.6 | 22.7 | 0.83 | 2 | 43.6 | 24.8 | 0.71 | 0.1 ns | 0.92
Job spills over into family | 4 | 41.4 | 20.6 | 0.68 | 1 | 53.8 | 30.1 | n.a. | -15.4*** | 0.75
Training opportunities | 8 | 56.1 | 21.1 | 0.93 | 3 | 52.2 | 25.0 | 0.93 | 10.7*** | 0.91
Developmental activities | 5 | 73.9 | 21.6 | 0.87 | 5 | 73.9 | 21.6 | 0.87 | = | 1.0
Career advancement | 10 | 52.3 | 16.9 | 0.82 | 4 | 56.3 | 22.1 | 0.84 | -10.3*** | 0.91
Ethnicity discrimination | 10 | 22.5 | 20.4 | 0.95 | 10 | 22.5 | 20.4 | 0.95 | = | 1.0
Corporate fit | 13 | 71.9 | 14.4 | 0.87 | 12 | 73.4 | 14.9 | 0.88 | -16.4*** | 0.99
Rewards | 8 | 58.7 | 19.4 | 0.88 | 7 | 59.4 | 20.0 | 0.88 | -5.5*** | 0.99
Concerns about future | 4 | 15.2 | 13.3 | 0.72 | 1 | 24.1 | 24.4 | n.a. | -13.0*** | 0.76
Job satisfaction | 5 | 75.1 | 23.8 | 0.82 | 3 | 77.6 | 23.9 | 0.81 | -7.29*** | 0.93
Organizational involvement | 3 | 80.1 | 15.7 | 0.48 | 2 | 85.5 | 16.6 | 0.86 | -10.2*** | 0.78
Fatigue | 3 | 31.1 | 25.8 | 0.88 | 3 | 31.1 | 25.8 | 0.88 | = | 1.0
Tension | 3 | 18.4 | 21.0 | 0.81 | 3 | 18.4 | 21.0 | 0.81 | = | 1.0
Emotional exhaustion | 6 | 34.4 | 37.1 | 0.91 | 6 | 34.4 | 22.1 | 0.91 | = | 1.0
Total # of items | 98 | | | | 68 | | | | |

*** statistically significant difference between versions; ns: not significant; =: scale identical in the long and short versions; n.a.: alpha not applicable to single-item scales.

The results show that in most cases the standard deviations have increased and the differences in mean scores are statistically significant, but that the correlations between the long and short versions of the questionnaire are high.
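For illustration, the comparison reported in Table 1 for a single scale could be computed along the following lines; the scores are simulated placeholders, and scipy's paired t-test and Pearson correlation stand in for the analyses described above.

```python
import numpy as np
from scipy import stats

# Simulated stand-in: one scale scored from its full item set and from the reduced item set.
rng = np.random.default_rng(2)
true_score = rng.normal(55, 18, 628)
long_version = true_score + rng.normal(0, 5, 628)
short_version = true_score * 0.9 + rng.normal(0, 7, 628)   # reduced scale drifts slightly

t, p = stats.ttest_rel(long_version, short_version)   # paired t-test on the means
r, _ = stats.pearsonr(long_version, short_version)    # correlation between long and short
print(f"t = {t:.1f}, p = {p:.3g}, r = {r:.2f}")
```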

Table 2. Examples of scales and items that are correlated with gender, ethnicity, age and having children (gender: 1 = male, 2 = female; ethnicity: 1 = white, 2 = other; age in years).

SCALE | Highest-loading item
DEMANDS | How often does your job require you to work very hard?
CHALLENGE | My job is very challenging
SUPERVISORY SUPPORT | How much can your supervisor be relied on when things get tough at work?
JOB SPILLS OVER INTO FAMILY LIFE | My job takes so much energy I don't feel up to doing things that need attention at home
CAREER OPPORTUNITIES | I regard my promotional opportunities in the future as good
DISCRIMINATION BASED ON ETHNICITY | At work, I feel that others exclude me from their activities because of my ethnic or cultural background
CORPORATE INTEGRATION | I am really a part of my work group
JOB FUTURE CONCERNS | How often are you concerned or bothered about losing your job or being laid off?
ORGANIZATIONAL INVOLVEMENT | To know that my own work had made a contribution to the good of the organization would please me

Correlations of scale scores and highest-loading items with gender, ethnicity, age and having children:

0.07 0.09* 0.04 0.04 -0.13** -0.08 0.01 0.08 0.05 0.01 0.19** 0.20** -0.02 -0.03 -0.07 -0.03 0.12** 0.08

-0.10* 0.10* -0.09 0.10* -0.08 0.16** -0.09* 0.13** 0.03 -0.12** 0.02 -0.02 0.00

0.08 0.01 0.07 0.06 -0.02

-0.12** -0.03 0.09* 0.19** 0.15* 0.10*

0.00 -0.22** -0.07 -0.02 -0.22** -0.09 0.16** 0.07 0.05 0.20** 0.06 0.00 -0.15** -0.08 0.00 -0.05 0.01 -0.04 0.02 0.27** 0.16** 0.04 -0.09* -0.07 0.20** 0.14** 0.13** 0.09* 0.05 0.06

Italic: significant at p < 0.10; *: significant at p < 0.05; **: significant at p < 0.01.

The results of the correlation analysis show that gender, minority status, age and having children at home are related to some of the scales and items in the questionnaire. Therefore, when excluding items, attention should be paid to these results. In order to examine possible changes in the relationships between the different variables, we conducted structural equation modeling with both the long and short versions of the questionnaire. Table 3 shows the results.

Table 3. Results of structural equation modeling analysis with the long and short versions.

Version | χ² | df | GFI | AGFI | CFI | RMR | RMSEA
Long version | 44.3 | 20 | .97 | .96 | .99 | .04 | .045
Short version | 63.0 | 20 | .98 | .95 | .98 | .05 | .06

The results show that χ² (a measure of misfit between the model and the data) increases. However, the change in the goodness-of-fit measures is very small.
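As a check on the fit statistics in Table 3, RMSEA can be recovered from χ², the degrees of freedom and the sample size with the standard formula sqrt(max(χ² − df, 0) / (df (N − 1))). A minimal sketch follows, assuming N = 628 respondents as reported in Section 2.1 (the exact N used in the SEM run, and whether the software divides by N or N − 1, are assumptions here).

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from the model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

n = 628  # respondents in the database (Section 2.1)
print(f"Long version:  RMSEA = {rmsea(44.3, 20, n):.3f}")   # ~0.044, reported as .045
print(f"Short version: RMSEA = {rmsea(63.0, 20, n):.3f}")   # ~0.059, reported as .06
```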

4. Discussion

In this paper we have described the stages we went through to develop a questionnaire survey to assess turnover intention among IT professionals, with specific attention to the role of gender and minority status. The title of this paper poses the question: is this a form of art or is it science? First, we have to emphasize that we followed a very systematic approach to develop the questionnaire. Many steps (see Figure 1) were taken to ensure that the questionnaire was both valid and reliable. Because of the demand for a shortened version of the questionnaire, we developed a set of criteria to reduce the number of items. Most of the criteria were based on statistical techniques: factor analysis to identify the highest-loading items, reliability analysis to ensure internal consistency, and correlation analysis to examine criterion-related validity (aimed at both predictive validity and possible gender- and race/ethnicity-related issues).

The results show that the short version of the questionnaire can no longer be used for benchmarking against the original data set: the differences in means are too great to ignore. The results also show that the relations between the concepts measured with the old and new versions of the questionnaire do not change very much; the correlation analysis and the structural equation modeling show that most of the relationships remain intact. In other words, the development of a short version of the questionnaire had consequences for the reliability of the questionnaire (increased standard deviations, statistically significant differences in means, lower internal consistencies), but not so much for its validity.

Second, we recommend using, as much as possible, standard questionnaires that have proven to be reliable and valid. We further recommend a very systematic approach to developing a tailor-made questionnaire. Finally, we recommend making as few changes to the questionnaire as possible. However, sometimes changes are necessary; when making them, we recommend doing so in a systematic way, using specific criteria. So, is it art or science? We do recommend a very systematic, scientific approach, but we also realize that part of it is indeed art.

References
Carayon, P., Haims, M. C., & Kraemer, S. (2001). Turnover and retention of the Information Technology workforce: The diversity issue. In M. J. Smith & G. Salvendy (Eds.), Systems, Social and Internationalization Design Aspects of Human-Computer Interaction (pp. 67-70). Mahwah, NJ: LEA.
Carayon, P., & Hoonakker, P. (2001). Survey design. In W. Karwowski (Ed.), International Encyclopedia of Ergonomics and Human Factors (Vol. III, pp. 1899-1902). London: Taylor & Francis.
Carayon, P., Schoepke, J., Hoonakker, P., Haims, M., & Brunette, M. (2005). Evaluating the causes and consequences of turnover intention among IT users: The development of a questionnaire survey. Accepted for publication in Behaviour and Information Technology (BIT).
Carmines, E. G., & Zeller, R. A. (1990). Reliability and validity assessment. Beverly Hills, CA: Sage.
ITAA. (2002). Bouncing back: Jobs, skills and the continuing demand for IT workers. ITAA.
McDowell, I., & Newell, C. (1987). Measuring Health: A Guide to Rating Scales and Questionnaires. Oxford, UK: Oxford University Press.
Moore, J. E., & Burke, L. (2002). How to turn around 'turnover culture' in IT. Communications of the ACM, 45(2), 73-78.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
