

Computers in Human Behavior 22 (2006) 885–898
www.elsevier.com/locate/comphumbeh

The effect of learning styles on the navigation needs of Web-based learners


Jens O. Liegle a,1, Thomas N. Janicki b,*

a Computer Information Systems Department, Georgia State University, Atlanta, GA 30302, USA
b Information Systems Department, University of North Carolina at Wilmington, Wilmington, NC 28403, USA

Available online 10 April 2004

Abstract

Web-based training, with all its potential benefits, is growing at a tremendous rate; however, most current systems provide a one-size-fits-all approach to the delivery of the material. Two approaches that try to improve end-user learning have emerged: adaptation of the material content and/or adaptation of the material presentation mode. Within the presentation approach, two modes have been discussed in the literature: learner control vs. system control. It has been argued that if the amount of learning depends on the material presentation mode and the learning style of the user, more effective systems that adapt to this relationship could be developed. This paper analyzes the results of an exploratory experiment completed by 58 subjects. It first measured their learning style preferences (using a version of the Kolb Learning Style Inventory) and compared them to their actual visits of linked Web pages. The study found that learners classified as "Explorers" tended to jump more and create their own path of learning (learner control), while subjects classified as "Observers" tended to follow the suggested path by clicking on the "Next" button (system control). In addition, test scores for Explorers who did jump were higher than for Explorers who did not jump, while conversely Observers who did not jump scored higher than Observers who did jump.

© 2004 Elsevier Ltd. All rights reserved.
Keywords: Learning styles; Learner diversity; Computer-based training

* Corresponding author. Tel.: +1 910 962 4077; fax: +1 910 962 3068. E-mail addresses: jliegle@gsu.edu (J.O. Liegle), janickit@uncw.edu (T.N. Janicki).
1 Tel.: +1 404 651 3850; fax: +1 404 651 3842.

0747-5632/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2004.03.024


1. Introduction

Computer-based training (CBT) and its newer complement, Web-based training (WBT), are growing rapidly. Academicians are placing more and more course material on-line to supplement their traditional in-class instruction. However, course management tools like BlackBoard™ and Web-CT™, as well as authoring tools like Authorware and Director, provide only a general one-size-fits-all approach and do not take into account the needs of different learners (Janicki & Liegle, 2001), which could be a reason for the lack of success of many of these systems (Martinson & Schindler, 1995).

1.1. Customizing learning modules to improve learning outcome

It has been argued that by customizing learning modules to differing types of learners, the learning outcome will be improved. Early experiments by Tennyson and Christensen (1988) with their MAIS (Minnesota Adaptive Instructional System) examined, among other things, the effect of an adaptive interface on the cognitive strategies of the learner in order to improve learning. Many have built on this research stream. Two approaches that try to improve end-user training have emerged in the area of software training research: (1) adaptation of the training material content to match the needs of individual learners, and (2) adaptation of the training material presentation mode in terms of the order and style of the presentation (Olfman & Mandviwalla, 1994). Further examining the second mode of adaptation (presentation), Bernstein (1998) reported that two different pedagogical approaches to lesson sequencing (the order and style of presentation) may work well on the Web. These pedagogical approaches are based on the behaviorist learning/teaching style and the constructivist theory (Bernstein, 1998; Brandt, 1997) and are commonly referred to as system control vs. learner control (Aleven & Koedinger, 2001).
System control guides a student through predefined steps, while learner control provides the navigational tools that permit learners to construct knowledge themselves. A variation of the learner-controlled approach in recent studies involving computer-delivered instruction has been to allow each learner to add or delete elements of instruction at frequent choice points as he or she works through the instructional materials (Schnackenberg, Sullivan, Leader, & Jones, 1998). The literature has not yet clearly defined which mode is superior. An overview of the research concerning learner vs. system control was presented by Schnackenberg et al. (1998). They reported that some arguments in favor of learner control are that: (1) learners best know their own instructional needs, (2) learner control can help students become independent learners, and (3) learners construct their own knowledge in the context of their own needs and experiences and require control over the learning process to do so. Critics claim that learner control distracts learners because it forces them to interrupt their learning and pay attention to the sequencing of material. They claim that beginners are unable to make the right sequence choices: "students cannot be expected to select learning tasks and topics efficiently in domains they are just beginning to learn about" (Murray, 1998), or simply suffer from disorientation (Chalmers, 2003). Different learning strategies might further influence the use of navigation (Namlu, 2003). This view is supported by Lieberman and Linn (1991), who reviewed the literature on learner vs. system control and found that novices should benefit from system control, but more advanced students could benefit from learner control. Others go even further, claiming that the


degree of learner control should depend on the learner's familiarity with the topic as well as the learner's motivation, aptitude, and attitude (Merrill, Li, & Jones, 1992). It has been argued that individuals differ in their preference for system vs. learner control. Wilson (1986) has summarized the theory in terms of four basic propositions: "(a) learning is a cyclic process involving four kinds of styles [...]; (b) all normal adults possess and use all of the four styles; (c) the level of these styles and preferences for their use vary among individuals; and (d) an individual's learning-style preference can be assessed through the use of the LSI [Kolb's Learning Style Inventory]" (Cornwell & Manfredo, 1994). Others support this view; e.g., Bostrom, Olfman, and Sein (1990, 1993) used Kolb's Learning Style Inventory (see Table 1) and found that in the design of training, it is essential to match training methods to individual difference variables. One experiment concerning navigational control was conducted by Melara (1996), who examined the effect of learning style (based on Kolb's Learning Style Inventory) on learner performance within two different hypertext structures: hierarchical and network. Her experiment showed no significant differences in achievement for Explorers and Observers using either hypertext structure. She raises the following point: contrary to Observers, Explorers are expected to prefer experimentation to observation. She recommends additional studies that examine the time spent on different activities targeted towards these two different personality types. One such study was conducted by Reed, Oughton, Ayersman, Ervin, and Giessler (2000), who found no correlation between learning style as measured by the Kolb groupings and time on task. They also examined the relationship between learning style and nonlinear steps vs.
linear steps, and found no relationship based on the Kolb groupings; however, they did find a relationship between learning style as measured with the Group Embedded Figures Test (GEFT) and navigation behavior. However, their study was limited in that only 18 subjects were used, and instead of a WWW (World Wide Web)-based hypertext system, an antiquated HyperCard system from before 1995 was used as the navigation environment. In a related study, Chen and Macredie (2002) found that Field Dependent (FD) vs. Field Independent (FI) learners differ in their preference for linear vs. nonlinear pathways through hypermedia systems: FD learners preferred guided navigation, while FI learners preferred freedom of navigation. They speculate that FI learners might benefit from learning systems with learner control, while FD learners might prefer system-controlled linear navigation.

1.2. Research question

The objective of this research is to answer the question raised by Melara (1996) of whether the learning styles Explorer vs. Observer, as measured by Kolb's Learning Style Inventory, have an effect on the navigational habits of users of Web-based training modules
Table 1
Kolb's learning styles and learning preferences (Bostrom et al., 1990)

Learning style          Preferences
Concrete learners       Prefer analogical or conceptual models
Abstract learners       Prefer analogical or abstract models, but perform better with abstract models
Reflective observers    Require a guided form of instruction (= Observe)
Active experimenters    Benefit from discovery-mode learning (= Explore)


and also on the amount of learning that takes place. If this is indeed the case, developers of Web-based training systems should first determine the learning style of their users and then have two versions of their system ready: one that supports exploring and one that guides the user through the material. As a guide we follow the design methodology of the Reed et al. (2000) study, but use a larger number of subjects and a WWW-based hypertext system. Specifically, this experimental research project is designed to answer the following question: Do learners with different learning styles use Web-based training systems differently? We expect that users with the learning style Explorer follow a non-linear approach to learning in Web-based learning systems, while users with the learning style Observer follow a linear approach, and that, while doing so, they outperform users that do not follow their respective optimal learning approach. This paper is organized to first review the challenges for researchers in the areas of content and presentation mode adaptation. We then discuss the methodological design of our experiment, followed by two main hypotheses regarding the interaction between learning style and navigational behavior. We then present the results of the experiment, briefly discuss its limitations and future research opportunities, and conclude with a discussion of the impact of our findings.

2. Challenges for researchers

Two types of adaptation for hypermedia systems have been identified: adaptive presentation of the content and adaptive navigation support (Beaumont & Brusilovsky, 1995).

2.1. Adaptive presentation of the content

As discussed in the previous section, it has been argued that by adapting the content to the individual learner, the actual learning outcome can be improved. However, there are known methodology problems with content-related research.
Lieberman and Linn (1991) indicated that studies comparing classroom instruction to computer-assisted instruction (CAI) have many inherent problems. Preparation time and effort are difficult to control, content quality and instructional delivery are difficult to measure, and CAI systems suffer from a novelty effect. These factors make any comparison between human instructors and CAI systems complex. Another approach to content-related research is the preparation of two different lesson versions that are taught by the same CAI system. One such experiment by Wang and Beasley (2002a, 2002b) examined the relationship between learner control with or without advisement conditions, and found no correlation to performance. Having different lesson versions, however, means that any comparison of the learning outcomes would not be conclusive, since one of the major difficulties with research that compares the effectiveness of one method of presentation to another has been that the comparison is meaningless if the actual content of the lessons is different (McGrath, 1992). In order to test the effectiveness of adapting the content structure to the preferred learning style of a user, experiments should be conducted that vary the sequence of the content within a lesson. This would change the relative emphasis on the different lesson parts but still teach the same content to all learners. To address this issue, our study used consistent learning content for all students participating in our experiment.


2.2. Adaptive navigation support

Who should determine what to learn next: the learner or the system/author? The research on learner control is not conclusive. Schroder, Mobus, and Pitschke (1995) found that novice learners used a fairly passive strategy for moving through a hypermedia system, not utilizing their selection control and instead following a linear viewing pattern. On the other hand, Rieman, Young, and Howes (1996) showed in their experiments that learners did not follow a linear viewing pattern when they had full control; instead, no specific sequence was followed in what they call exploratory learning, which was confirmed by a study of Boechler and Dawson (2002). Similarly, Melara (1996) found no performance differences when students used a hierarchically organized system compared to a network structure. Goforth (1994) and Tennyson (1981) found that learner control is more effective than system control, but Young (1996) supported this finding only for learners with high self-regulated learning strategies, not for others. Allinson (1992) reported in her study that some subjects used a more linear navigation approach, while others preferred self-determined hypertext navigation. Tennyson (1989) warns against total learner control of the module, arguing that in order to make a module student-centered, learners must have sufficient information and advisement about their learning development. Cognitive theorists have examined the relationship between cognitive style and adaptive user interfaces. Dufrese and Turcotte (1997) found a correlation between Field Dependency and user navigation needs, but also found prior experience with the interface to be an influential factor. Reed et al. (2000) found a correlation between learning styles as measured by the GEFT and navigation behavior, but not learning styles as determined by Kolb. Based on this discussion, the prior research on the effectiveness of learner vs. system control seems to be inconclusive at best.
It might be the case that the two approaches, learner control and system control, work well for different students, but at this point the research regarding the optimal balance of learner vs. system control is inconclusive. It would appear that the best system would adapt itself to the learner by providing both the behaviorist and the constructivist approach based on the learner's preference and personality type (Tan, 1996). One should therefore examine the effect of the Explorer vs. Observer personality-type dimension on system- vs. learner-controlled navigation. Our study consequently tries to provide some insight into the relationship between navigational behavior and learning style, which can in turn be used to construct more effective training systems.

2.3. Kolb Learning Style Inventory

DeCiantis and Kirton (1996) reviewed Kolb's Experiential Learning Theory and found support for Kolb's dimensions (see Table 1) of comprehending (concrete experience vs. abstract conceptualization) and transforming (reflective observation vs. active experimentation). They disagree with Kolb's attempt to "contain within a single measure three unrelated aspects of cognition: style, level, and process" (DeCiantis & Kirton, 1996). Instead, their study showed that style, level, and process should be separated. Kolb used his model to classify people into one of four categories, but then continued and argued that people go through four phases that result from his model. The Kolb Learning Style Inventory is not without its critics. Continuing problems with test-retest reliability, and thus the stability of the KLSI, have been reported: Loo (1996) examined


the revised Learning Style Inventory and found that when the stability of the four learning styles was examined over a 10-week period, only between 42.8% and 50% of subjects remained in the same category, and concluded that the revised KLSI has not resolved the psychometric problems raised against the original version. The problem of test-retest reliability could be due to the fact that people change their cognitive style. Allinson et al. (1992) stated that though there is evidence that some people consistently demonstrate one form of thinking as opposed to the other, many individuals will change their cognitive style to suit the current task. However, this main critique of the KLSI might not apply to personalized instruction systems, because even if learning style varies across situations, it will remain constant within a particular context, and especially within one training session. Thus, an individual's preferred learning style is likely to remain the same during software training (Bostrom et al., 1990). One can therefore argue that for the duration of an online tutorial, the learning style is not going to change. The context within a training module will remain more or less constant, assuming only one subject area is taught. If another subject matter is taught at a later point in time, a retest might be appropriate. Other researchers (Ayersman & Reed, 1995; Romero, Tepper, & Tetrault, 1992; Veres, Sims, & Locklear, 1991; Willcoxson & Prosser, 1996) also support the KLSI. Willcoxson and Prosser (1996) examined the validity and reliability of the revised version of the KLSI. The reported results indicate high internal consistency and some evidence of validity. Veres et al. (1991) examined Kolb's Revised Learning Style Inventory (KLSI II) and found increased stability. They argue that the KLSI is a useful instrument for the study of learning styles. Ayersman and Reed (1995) claim that the KLSI has consistently been shown to possess both adequate reliability and validity.
Romero et al. (1992) developed and tested an improved version of Kolb's Learning Style Inventory. Their experiments with 516 undergraduate students from various college courses and 155 MBA students showed support for the reliability (internal consistency and six-week test-retest stability), factor structure, and validity of the new scales. Based on these favorable results, Romero et al.'s version of the KLSI questionnaire was deemed suitable for our experiment. Using Kolb's Learning Style Inventory, subjects receive scores on the dimensions concrete experience vs. abstract conceptualization and reflective observation vs. active experimentation. For our research, we concentrated on the latter two, which the following paragraphs discuss in greater detail (see Fig. 1). A high score in reflective observation indicates a tentative, impartial, and reflective approach to learning. Such individuals rely heavily on careful observation in making judgments (Kolb, 1976; Loo, 2002). They prefer learning situations such as lectures that allow them the role of impartial, objective observers. These individuals tend to be introverts. One could argue that such individuals, when taking an online tutorial, would benefit from a guided-tour style of navigation, where a predetermined next page is given to them. That way,
Fig. 1. Dimensions of Kolb's model (Kolb, 1976). [The figure plots two axes: Concrete vs. Abstract, and Explorer vs. Observer.]


they could rely on the system/instructor to make the decision for them on what to do next. Wang and Beasley (2002a, 2002b) explained this effect through the need of students to build a navigation strategy based on their learning style and their needs regarding hypermedia instruction, as the number of links and clicks can be very confusing to a student who is following linear thinking. On the other hand, a high score in active experimentation indicates an active, "doing" orientation to learning that relies heavily on experimentation (Kolb, 1976). Such individuals learn best when they can engage in such things as projects, homework, or group discussions. They dislike passive learning situations such as lectures. Here one could argue that such individuals, while disliking traditional lectures, could benefit from a user interface that allows them to actively select their next page themselves, thus feeling in control.

3. Experimental design

In order to determine whether the user's learning style has an effect on their navigational needs, this study used data from a laboratory experiment with student volunteers. The experiment had four basic phases, as shown in Fig. 2. Phase one involved the building of Web-based learning tutorials. Ten different Ph.D. graduate business students were recruited to develop Web-based tutorials based on a management theory of employee motivation. In phase two, a panel of experts from a graduate school of education was surveyed to determine whether learning concepts were employed in the design of the learning modules. The survey results indicate that the learning modules did incorporate five learning principles: an overview of learning objectives for the module, a list of pre-requisites, a variety of presentation styles, learner self-control through the lesson, and feedback and testing.
These principles were chosen as they repeatedly surface in the research as considerations when building effective instructional design (Dear, 1987; Gagne, Briggs, & Wager, 1988; Hannan & Peck, 1988; Janicki & Liegle, 2001; Merrill, 1997). The modules permitted the subject to learn new material based on three different learning styles: (a) list the information in a narrative format (lesson content); (b) show the new information through an example; and (c) have the learner do an exercise (see Fig. 3). Once the graduate students had developed the tutorials, a beta test of the system was conducted in phase three. This beta test demonstrated that the system had been programmed effectively and could support at least 16 simultaneous users without any difficulties or degradation of speed and performance. Finally, the experiment was ready for testing by actual learners in phase four, as shown in Fig. 2.

Phase One: Development of Web-Based Learning Modules
Phase Two: Evaluation of Modules by a Panel of Experts
Phase Three: Beta Test of the Tutorial and Navigation System
Phase Four: Completion of Tutorials by Volunteer Learners

Fig. 2. Experiment design.


Fig. 3. Screenshot of navigation options and different learning style presentations.

Undergraduate volunteers were recruited from three sections of the same class (Principles of Management) offered during the same semester. Three different instructors taught the sections, and all instructors agreed not to present any course material on the employee motivation theories (the tutorial topics chosen for development) prior to the actual experiment. The three sections had a total enrollment of 75 subjects, with 68 students in class on the day of the experiment. Sixty-three of the students agreed to participate in the experiment. Five students were used as control subjects and did not access the learning modules that were built; instead, they surfed the Web for topics related to the new course material. A computer laboratory was reserved for each of the experimental sessions to reduce outside noise and interference. Subjects were permitted to take notes during the reading and review of the Web-based lesson materials, but were not allowed any other outside materials such as textbooks or class notes. Fig. 4 expands the fourth phase shown in Fig. 2. Subjects were requested to complete all three steps of the experiment. The first step was completion of the survey that measures their preferred learning styles, using a version of Kolb's Learning Style Inventory (KLSI) created by Romero et al. (Kolb, 1976; Romero et al., 1992). Kolb's Learning Style Inventory has been used by numerous studies and is a

Step One: Complete modified Kolb Learning Style Survey
Step Two: Completion of the Learning Module on a Management Theory
Step Three: Quiz on the Learning Module Subject Material
(The system tracked learner movements through the tutorials: the "clickstream".)

Fig. 4. Subjects complete the learning modules (Phase 4 of experiment design).


well-accepted instrument for this type of research (Bostrom et al., 1990, 1993; Larkin-Hein & Budny, 2001; Loo, 2002; Reed et al., 2000). In step two, the learners navigated through the actual learning content. Each Web page offered the learner the option to click the "Next" button, or they could choose to "Jump" to a different link via a hyperlinked Table of Contents presented on the Web page (see Fig. 3). Finally, the subjects completed an on-line quiz (see Fig. 4). The navigational behavior (clickstream) of the 58 subjects was logged on a page-by-page basis. Clickstream analysis can be used to determine the navigational behavior of the learner (Mandese, 1995; Brodwin, O'Connell, & Valdmanis, 1995). This type of analysis identifies the actual paths that the learners used and classifies them according to their degree of linearity. This experimental research project was designed to answer the following questions: Do learners with different learning styles use Web-based training systems differently? We postulate:

H0: There is no difference in the approaches.
H1: Users classified with learning style Explorer follow a non-linear approach to learning in computer-based learning systems, while users with the learning style Observer follow a linear approach.

Should the previous hypotheses be supported, does this (non-linear or linear movement) have an effect on the amount of learning that takes place? We postulate:

H0: There is no difference in the amount of learning.
H1: (a) Users with the learning style Explorer that follow a non-linear approach to learning in computer-based learning systems learn more than Explorers that follow a linear learning approach, while (b) users with the learning style Observer learn more following a linear approach than a non-linear learning approach.
If either of the above H1 is indeed the case, developers of Web-based learning systems should adapt the sequencing of their training material to the learning style of the user to make their systems more effective. Should neither be the case, the study might still give some insight into whether a linear or a non-linear version of the modules should be used in general to achieve a higher learning outcome.

4. Results

A total of 58 complete datasets were analyzed. In order to test our first hypothesis, on whether the learning style had an impact on navigational behavior, we computed the average number of out-of-sequence jumps each user made while navigating the learning module. Since the user interface presented the users with a linear version of the document as well as a non-linear version, we define an out-of-sequence jump (or just "jump") as a click on a link that is different from the next suggested link, which would lead to the Web page following the current slide in the instructor's original sequence. In addition, a jump was not counted if a subject re-clicked on a link that would have taken them to the same page currently on the screen.
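This jump-counting rule can be sketched in a few lines of Python. The function name and the clickstream representation (a list of visited page IDs plus the instructor's suggested sequence, with every logged page assumed to appear in that sequence) are our own illustration of the rule, not code from the study.

```python
def count_jumps(clickstream, suggested_order):
    """Count out-of-sequence jumps in a learner's clickstream.

    A "jump" is a click leading to any page other than the next one
    in the instructor's original sequence; re-clicks that reload the
    page already on screen are not counted, per the paper's rule.
    """
    jumps = 0
    for prev, curr in zip(clickstream, clickstream[1:]):
        if curr == prev:  # re-click on the current page: ignored
            continue
        i = suggested_order.index(prev)
        is_last = i == len(suggested_order) - 1
        expected_next = None if is_last else suggested_order[i + 1]
        if curr != expected_next:  # anything but the suggested "Next" page
            jumps += 1
    return jumps

# A learner who always clicks "Next" makes zero jumps;
# skipping from page 1 straight to page 3 counts as one jump.
linear = ["p1", "p2", "p3", "p4"]
assert count_jumps(linear, linear) == 0
assert count_jumps(["p1", "p3", "p4"], linear) == 1
```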


Table 2
Two-sample t-test of average number of jumps by personality type

Personality type   N    Jumps as % of total pages   SD     SE Mean   95% CI
Observer           15   8.5 (4.8*)                  1.87   0.48      (0.62, 0.86)
Explorer           43   18.0                        3.24   0.49      (0.89, 2.45)

* Result after eliminating one outlier, as described later in the paper.

t-test for μ(0) = μ(1) (vs. <): T = -1.41, p = 0.083, DF = 43.

We further classified students as personality type Observer or Explorer based on the Kolb Learning Style Inventory questionnaire that we administered prior to the experiment. Table 2 shows the average number of jumps grouped by personality type. We find that Explorers had indeed a higher percentage of jumps relative to the total pages in the learning modules. As a percentage of the total pages available per module, Explorers jumped an average of 18% of the total pages, while the Observers jumped 8.5% (4.8% of the time if an outlier is dropped; see the later discussion). The standard deviation was 3.24 vs. 1.87 for Explorers vs. Observers. To test the hypothesis that the difference of the means is statistically significant, we conducted a two-sample t-test for independent samples. Since p = 0.083 < α = 0.1, we reject H0 and conclude H1: the difference of the means is significant, and therefore Explorers jumped more than Observers. For all statistics, an alpha of 0.1 was used, since this is exploratory research (Bostrom et al., 1990, 1993). To test the second hypothesis, whether the difference in jumping had an impact on the amount of learning for the two different groups, we compared the average test scores of users who jumped vs. users who did not jump. Since the average number of jumps is relatively small (Mean = 1.65, SD = 2.96), there were not enough data points to compare the test scores based on the actual number of jumps. We therefore assume that users who did not jump did so because they were consciously following the next-page approach, while users who jumped at least once were not, especially considering that the average number of pages available to the users was relatively small (8 per learning module). We therefore only compared users that did not jump at all to users that jumped once or more. We conducted independent-sample t-tests for both personality types. See Tables 3-5 for the results.
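As a sketch of how this style of comparison can be reproduced, the following runs a one-sided two-sample t-test with scipy. The score arrays are made-up placeholders (the study's raw data are not published); only the shape of the analysis mirrors the paper, and the Welch variant (`equal_var=False`) is our choice, which appears consistent with the reduced degrees of freedom the paper reports (DF = 36 rather than the pooled 41 for group sizes 23 and 20).

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for the two Explorer subgroups.
explorers_no_jump = np.array([55, 62, 48, 60, 58, 65, 52, 64])
explorers_jump = np.array([70, 61, 66, 58, 72, 63, 68, 59])

# One-sided Welch test of H1: Explorers who jump score higher,
# mirroring the paper's "vs. <" comparison at alpha = 0.1.
t, p = stats.ttest_ind(explorers_no_jump, explorers_jump,
                       equal_var=False, alternative="less")
print(f"T = {t:.2f}, one-sided p = {p:.3f}")
```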
From Table 3, we find that Explorers who did not jump had a lower mean score (58.0) than Explorers who jumped (64.3). Since p = 0.064 < α = 0.1, we reject H0, conclude H1, and find that difference to be statistically significant. Based on the results shown in Table 4, we find that Observers who did not jump had a higher mean score (61.8) than Observers who jumped (48.3), which also means that jumping had the exact opposite effect on Explorers than on Observers. However, since p = 0.16 > α = 0.1, we cannot reject H0 and must therefore conclude that the difference is not statistically significant. Still, a number of factors motivated us to take a closer look at these results. First, there were only four Observers who jumped, while 11 did not jump at all. Second, while the difference of the two mean scores is actually fairly high (13.5), the main reason for this difference not being significant is the much larger standard deviation among these four users. A closer examination of these four users revealed one potential outlier:

Table 3
Two-sample t-test for Explorers

Jump                          N    Mean test score   SD     SE Mean
Explorers who did not jump    23   58.0              11.8   2.5
Explorers who did jump        20   64.3              14.6   3.3

95% CI for μ(0) - μ(1): (-14.6, 1.9)
t-test for μ(0) = μ(1) (vs. <): T = -1.56, p = 0.064, DF = 36.

Table 4
Two-sample t-test for observers

Group                         N    Mean test score   SD     SE mean
Observers who did not jump    11   61.8              10.4   3.1
Observers who did jump         4   48.3              22.0   11

95% CI for μ(0) − μ(1): (−22.9, 50). t-test for μ(0) = μ(1) (vs. >): T = 1.18, p = 0.16, DF = 3.

Table 5
Two-sample t-test for observers (excluding one outlier)

Group                         N    Mean test score   SD     SE mean
Observers who did not jump    11   61.8              10.4   3.1
Observers who did jump         3   30.0              17.6   10

95% CI for μ(0) − μ(1): (−24.0, 68). t-test for μ(0) = μ(1) (vs. >): T = 2.05, p = 0.089, DF = 2.

This user jumped a lot (six times, well above the mean of 0.93 jumps for an observer). At the same time, this user's score on the personality-type scale only barely classified him/her as an observer (23 points, where any score above 24.5 would have been classified as explorer). When we dropped this user to see if it made a difference, the score difference indeed became significant at p = 0.089 < α = 0.1 (see Table 5). With only four (or three) observations in the last category, we do not feel comfortable simply dropping an observation, especially since our relatively small overall sample of 58 complete records did not produce an evenly distributed number of users across the different categories. However, we see the results as basically supporting our hypothesis, and we are confident that there is indeed a relationship between the navigation mode, the learning style, and the amount of learning that took place.

5. Limitations and future research

This study has a number of limitations. For one, while an alpha of 0.1 is sufficient for exploratory research (Bostrom et al., 1990, 1993), the differences were only significant in the p = 0.08 range. A possible reason is that the average number of pages in a training module was fairly small, at an average of eight per learning module. This gave students only few choices; results might differ when students have to learn from substantially larger modules. The small number of pages is, however, consistent with learning theory, which holds that a module should be completed within a session lasting no longer than 20 to 25 minutes, as the average adult attention span is 22 minutes (Ward & Lee, 1995). The same is true for the limited time that our subjects had, up to 45 minutes, to complete
the modules; however, the average time per subject was less than 20 minutes. Longer modules, more links, and more time might lead to different results.

Another limitation was that our experiment gave all users the option either to simply go to the next page or, at any time, to jump to an out-of-sequence page. We found that the amount of learning indeed depended on the learning style (observer vs. explorer) in combination with the mode of navigation. A next step would be to conduct experiments that classify users into the observer vs. explorer category and then give them customized navigation options in the form of simple next-button vs. hyperlink-enabled systems. We propose that explorers, when forced to constantly choose where to go next, will learn more than explorers who can only go to the next page. In contrast, we propose that observers who are forced to select their next page will learn less than observers who are guided through the learning material by sequential next-page navigation. In addition, further research needs to examine whether the potential increase in learning is offset by the increased cost of providing multiple versions of the training system.

6. Conclusions

This paper summarizes the results of an experiment to determine whether different learners (observers vs. explorers) have different navigational needs to assist them in learning. It was observed that explorers did not follow the suggested path; they expressed a desire to jump around the learning modules and learn in their own sequence. It was also observed that subjects who were observers followed the suggested path of learning by clicking the Next button. In addition, explorers who did jump scored higher on the quiz at the end of the learning module than explorers who did not jump. Conversely, observers who did not jump scored higher on the quiz at the end of the learning module than observers who did jump.
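The customized next-button vs. hyperlink systems proposed for a follow-up experiment can be sketched briefly. Only the 24.5 classification cutoff comes from this paper; the function names and page structure below are hypothetical illustrations, not the authors' system.

```python
# Hypothetical sketch: route a learner to system control (Next button only)
# or learner control (free hyperlinks) based on a Kolb-style score.
EXPLORER_CUTOFF = 24.5  # scores above this classify a learner as explorer

def navigation_mode(kolb_score):
    """Explorers get free hyperlink navigation; observers a guided Next button."""
    return "hyperlinks" if kolb_score > EXPLORER_CUTOFF else "next-button"

def available_links(current_index, pages, mode):
    """Pages offered from the current page under the given navigation mode."""
    if mode == "next-button":
        # System control: only the suggested next page is offered.
        return pages[current_index + 1:current_index + 2]
    # Learner control: every other page in the module is one click away.
    return [p for i, p in enumerate(pages) if i != current_index]

pages = ["intro", "theory", "example", "quiz"]
```

Under this routing the two experimental conditions differ only in `available_links`, which keeps the learning content itself identical across groups.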
Both of these findings are consistent with the theories presented by other researchers.

This research provides insights for the academic community as well as for IS managers. For the research community, the hypothesis proposed by Tan (1996) and Melara (1996), that the amount of learning will increase when the teaching style is adapted to the learner by providing both the behaviorist and the constructivist approach to learning based on the learner's personality type, was tested and principally found to be true. For practitioners, the results of this study can have significant implications due to the rapid increase in Web-based learning modules as more university and business courses are offered on-line. If, as our study shows, the amount of learning is indeed dependent on the amount of learner control and the learning style of the users, then more effective systems could be constructed that make use of this relationship. Further research is needed to examine whether the potential increase in learning is offset by the increased cost of providing multiple versions of the same learning materials.

References
Aleven, V., & Koedinger, K. R. (2001). Investigations into Help Seeking and Learning with a Cognitive Tutor. In Working Notes of the AIED 2001 Workshop on Help Provision and Help Seeking in Interactive Learning Environments. Available: http://cbl.leeds.ac.uk/ijaied/.
Allinson, L. (1992). Learning Styles and Computer-Based Learning Environments. In 4th International Conference ICCAL '92, Wolfville, Nova Scotia, Canada.
Ayersman, D. J., & Reed, M. W. (1995). Effects of Learning Styles, Programming, and Gender on Computer Anxiety. Journal of Research on Computing in Education, 28(2), 149–161.
Beaumont, I., & Brusilovsky, P. (1995). Adaptive Educational Hypermedia: From Ideas to Real Systems. In Maurer (Ed.), Proceedings of the Educational Multimedia and Hypermedia Conference, Graz, Austria, June 17–21, 1995 (pp. 93–98).
Bernstein, D. (1998). WBT: Are We Really Teaching? Inside Technology and Training, (2), 15–17.
Boechler, P. M., & Dawson, M. (2002). Effects of navigation tool information on hypertext navigation behavior: A configural analysis of page-transition data. Journal of Educational Multimedia and Hypermedia, 11(2), 95–105.
Bostrom, R. P., Olfman, L., & Sein, M. K. (1990). The Importance of Learning Style in End-User Training. MIS Quarterly, 14(2), 101–119.
Bostrom, R. P., Olfman, L., & Sein, M. K. (1993). Learning Styles and End-User Training: A First Step. MIS Quarterly, 17(1), 118–120.
Brandt, D. S. (1997). Constructivism: Teaching for Understanding of the Internet. Communications of the ACM, 40(10), 112–117.
Brodwin, D., O'Connell, D., & Valdmanis, M. (1995). Mining the Clickstream. Upside, 101–106.
Chalmers, P. A. (2003). The role of cognitive theory in human-computer interface. Computers in Human Behavior, 19, 593–607.
Chen, S. Y., & Macredie, R. D. (2002). Cognitive Styles and Hypermedia Navigation: Development of a Learning Model. Journal of the American Society for Information Science and Technology, 53(1), 3–15.
Cornwell, J., & Manfredo, P. (1994). Kolb's learning style theory revisited. Educational and Psychological Measurement, 54(2), 317–327.
Dear, B. (1987). AI and the Authoring Process. IEEE Expert, 2(4), 17–23.
DeCiantis, S. M., & Kirton, M. J. (1996). A Psychometric Reexamination of Kolb's Experiential Learning Cycle Construct: A Separation of Level, Style, and Process. Educational and Psychological Measurement, 56(5), 809–820.
Dufresne, A., & Turcotte, S. (1997). Cognitive Style and its Implications for Navigation Strategies. In du Boulay & Mizoguchi (Eds.), Artificial Intelligence in Education (pp. 287–293).
Gagne, R., Briggs, L., & Wager, W. (1988). Principles of Instructional Design (3rd ed.). New York: Holt, Rinehart & Winston.
Goforth, D. (1994). Learner Control = Decision Making + Information. Datamation, 43(9), 96–101.
Hannafin, M., & Peck, K. (1988). The Design, Development and Evaluation of Instructional Software. New York: Macmillan Publishing.
Janicki, T., & Liegle, J. O. (2001). Development and evaluation of a framework for creating web-based learning modules: A pedagogical and systems perspective. Journal of Asynchronous Learning Networks, 5(1).
Kolb, D. (1976). Learning Style Inventory: Self-Scoring Test and Interpretation Booklet. Boston, MA: McBer and Company.
Larkin-Hein, T., & Budny, D. (2001). Research on Learning Style: Applications in the Physics and Engineering Classrooms. IEEE Transactions on Education, 44(3), 276–281.
Lieberman, D. A., & Linn, M. C. (1991). Learning to Learn Revisited: Computers and the Development of Self-Directed Learning Skills. Journal of Research on Computing in Education, 23(3), 373–395.
Loo, R. (2002). A meta-analytic examination of Kolb's learning style preferences among business majors. Journal of Education for Business, 77(5), 252–257.
Mandese, J. (1995). Clickstreams in cyberspace. Advertising Age, 66(12), 18.
Martinson, M. G., & Schindler, F. R. (1995). Organizational Visions for Technology Assimilation. IEEE Transactions on Engineering Management, 42(1), 9–18.
McGrath, D. (1992). Hypertext, CAI, Paper, or Program Control: Do Learners Benefit from Choices? Journal of Research on Computing in Education, 29(3), 276–296.
Melara, G. E. (1996). Investigating Learning Styles on Different Hypertext Environments: Hierarchical-Like and Network-Like Structures. Journal of Educational Computing Research, 14(4), 313–328.
Merrill, M. D. (1997). On Instructional Strategies. Instructional Technology Magazine, 2(1).
Merrill, M. D., Li, Z., & Jones, M. K. (1992). Instructional Transaction Shells: Responsibilities, Methods, and Parameters. Educational Technology, 5–26.
Murray, T. (1998). Authoring Knowledge-Based Tutors. The Journal of the Learning Sciences, 7(1), 5–64.

Namlu, A. G. (2003). The effect of learning strategy on computer anxiety. Computers in Human Behavior, 19, 565–578.
Olfman, L., & Mandviwalla, M. (1994). Conceptual versus Procedural Software Training for Graphical User Interfaces: A Longitudinal Field Experiment. MIS Quarterly, 18(4), 405–426.
Reed, W., Oughton, J. M., Ayersman, D. J., Ervin, J. R., Jr., & Giessler, S. F. (2000). Computer experience, learning style, and hypermedia navigation. Computers in Human Behavior, 16, 609–628.
Rieman, J., Young, R., & Howes, A. (1996). A dual-space model of iteratively deepening exploratory learning. International Journal of Human-Computer Studies, 44(6), 743–755.
Romero, J. E., Tepper, B. J., & Tetrault, L. A. (1992). Development and Validation of New Scales to Measure Kolb's Learning Style Dimensions. Educational and Psychological Measurement, 52, 171–180.
Schnackenberg, H. K., Sullivan, H. J., Leader, L. F., & Jones, E. E. K. (1998). Learner Preferences and Achievement Under Different Amounts of Learner Practice. Educational Technology Research and Development, 42(2), 5–15.
Schroder, O., Mobus, C., & Pitschke, K. (1995). A Cognitive Model of Design Processes for Modeling Distributed Systems. In Conference on Artificial Intelligence in Education, Washington, DC, August 16–19.
Tan, S. T. (1996). Architecture of a Generic Instructional Planner. Journal of Network and Computer Applications, 19, 265–274.
Tennyson, R. D. (1981). Use of Adaptive Information for Advisement in Learning Concepts and Rules Using CAI. American Educational Research Journal, 18(4), 425–438.
Tennyson, R., & Christensen, D. (1988). Educational Research and Theory Perspectives on Intelligent Computer-Assisted Instruction. In Proceedings of the Association for Educational Communications & Technology. Dallas, TX: AECT.
Tennyson, R. (1989). Cognitive Science and Instructional Technology: Improvements in Higher Order Thinking Strategies. In Proceedings of the Association for Educational Communications & Technology. Dallas, TX: AECT.
Veres, J. G., III, Sims, R. R., & Locklear, T. S. (1991). Improving the Reliability of Kolb's Revised Learning Style Inventory. Educational and Psychological Measurement, 51, 143–150.
Wang, L. C., & Beasley, W. (2002a). Effects of learner control and hypermedia preference on cyber-students' performance in a Web-based learning environment. Journal of Educational Multimedia and Hypermedia, 11(1), 71–92.
Wang, L. C., & Beasley, W. (2002b). Effects of Learner Control and Hypermedia Preference on Cyber-students' Performance in a Web-based Learning Environment. Journal of Educational Multimedia and Hypermedia, 11(1), 71–91.
Ward, E., & Lee, J. (1995). An instructor's guide to distance learning. Training & Development, 49(11), 40–44.
Willcoxson, L., & Prosser, M. (1996). Kolb's Learning Style Inventory (1985): Review and Further Study of Validity and Reliability. British Journal of Educational Psychology, 66(1), 247–257.
Wilson, D. K. (1986). An investigation of the properties of Kolb's Learning Style Inventory. Leadership and Organizational Development Journal, 7(1), 3–15.
Young, J. D. (1996). The Effect of Self-Regulated Learning Strategies on Performance in Learner Controlled Computer-Assisted Instruction. Educational Technology Research and Development, 44(2), 17–27.

Jens Liegle is Assistant Professor in the Department of Computer Information Systems at the J. Mack Robinson College of Business at Georgia State University. He received his Ph.D. from Kent State University in Ohio. His primary research areas involve Web-based training, user modeling, Web development, and emerging technologies. Jens is currently on the board of directors of the Information Systems Educators Group (EDSIG) of the Association of Information Technology Professionals (AITP).

Thomas Janicki is Assistant Professor in the Information Systems Department in the Cameron School of Business at the University of North Carolina at Wilmington. He received his Ph.D. from Kent State University in Ohio. His primary research areas involve utilizing Web-based technologies to enhance the traditional classroom learning environment. Tom is also a past member of the board of directors of the Information Systems Educators Group (EDSIG) of the Association of Information Technology Professionals (AITP).
