
Hwang, G. H., Chen, B., & Huang, C. W. (2016). Development and Effectiveness Analysis of a Personalized Ubiquitous Multi-Device Certification Tutoring System Based on Bloom's Taxonomy of Educational Objectives. Educational Technology & Society, 19(1), 223-236.

Development and Effectiveness Analysis of a Personalized Ubiquitous Multi-Device Certification Tutoring System Based on Bloom's Taxonomy of Educational Objectives

Gwo-Haur Hwang1*, Beyin Chen2 and Cin-Wei Huang3
1Department of Information Networking and System Administration, Ling Tung University, Taiwan, R.O.C. // 2Department of Information Management, Ling Tung University, Taiwan, R.O.C. // 3Institute of Learning Sciences, National Tsing Hua University, Taiwan, R.O.C. // ghhwang@teamail.ltu.edu.tw // byc@teamail.ltu.edu.tw // aha76540@hotmail.com.tw
*Corresponding author

(Submitted October 8, 2014; Revised March 19, 2015; Accepted May 14, 2015)

ABSTRACT
In recent years, with the gradual increase in the importance of professional certificates, improving certification tutoring systems has become more important. In this study, we developed a personalized ubiquitous multi-device certification tutoring system (PUMDCTS) based on Bloom's Taxonomy of Educational Objectives, and applied it to help students obtain HTML certificates. The system helps students learn more effectively and acquire certificates more successfully through a personalized strengthening practice mechanism and a learning diagnostic light table. The experimental results show that, compared with the control group, the experimental group had significantly better cognitive test scores. Meanwhile, the interviews revealed that the students generally believed that the multi-device design makes it more convenient for them to use the system, and that the learning diagnostic light table makes them more aware of their learning status.

Keywords
Personalized learning, Ubiquitous learning, Multi-device learning system, Certification tutoring system, Bloom's Taxonomy of educational objectives

Introduction

With the trend of the information industry moving towards professionalization, enterprises are paying much more attention to their employees' individual professional competence than to their academic qualifications (Kerckhoff & Bell, 1998). Shanker (1996) argues that a certification mechanism can ensure the skills and knowledge required in a professional field. Xiao (1993) further pointed out that a professional certificate not only serves as basic proof of professional competence, but also affects enterprises' hiring decisions. In recent years, in order to enhance students' professional capacity and competitiveness in the job market, many universities have started offering certification tutoring courses and actively promoting students' attainment of certificates. Therefore, how to develop a more efficient way to enhance students' certification exam pass rates is a very important issue.

In recent years, e-learning (electronic learning) has become increasingly popular. In order to maximize the benefits of e-learning, the most important issue is to fully understand students' personal characteristics and learning styles, and then provide appropriate assessment designs. In the past, many researchers used assessment methods to help learners more clearly understand the deficiencies in their learning and to give them appropriate assistance (Perkowitz & Etzioni, 1997; Wang, 2008; Wang, 2011).

Bloom, Engelhart, Furst, Hill and Krathwohl (1956) proposed Bloom's Taxonomy of Educational Objectives, which is widely used by educators, and which was later expanded into a new version (Anderson & Krathwohl, 2001). There are many precedents for using it as the basis of adaptive assessment systems (e.g., Shen et al., 2005). However, most of these applications are still limited to the use of computers. In recent years, with the advances and popularity of information technologies, wireless networks and mobile devices, most students now own mobile devices such as smartphones and tablet PCs. These devices are portable and can be connected to the Internet, allowing students to learn outside of class time. Chen, Kao and Sheu (2003) pointed out that mobile devices have the advantages of immediacy and convenience. Therefore, this study used dynamic web technology to
construct a multi-device certification tutoring system. By connecting to the system through the Internet or a wireless network, students can use various mobile devices (smartphones, tablet PCs, notebooks) to implement ubiquitous learning.

In this study, we developed a personalized ubiquitous multi-device certification tutoring system (PUMDCTS) based on Bloom's Taxonomy of Educational Objectives and applied it to help students obtain the HTML certificate. After the students finish the online tests, the system provides a Bloom capability indicator so that they can know their learning status. In addition, according to the students' weaknesses, the system provides strengthening practice by adding weight when assigning test questions. We hope that students' certification exam pass rates can be improved through this system. Finally, we also designed an experiment to explore the effectiveness of using the system. Compared with a traditional computerized certification tutoring system, PUMDCTS enhanced learning effectiveness. In addition, we also interviewed students to elicit their views on PUMDCTS.

Literature review

Development and trends of certification tutoring systems

With the increasing importance of certification, many researchers have started researching and improving certification tutoring systems. For example, Hwang, Chen, and Wang (2012) incorporated interactive multimedia e-book technology into their certification tutoring system. Xie, Hwang, Bai, Lin and Tseng (2012) combined QR Codes (Quick Response Codes) with their certification tutoring system to implement mobile learning. They pasted QR Codes linking to detailed explanations of the answers to test questions into textbooks so that students could learn more easily. Hwang, Lee and Tseng (2012) added game elements to their certification tutoring system. The results indicated that their enjoyable game-based system may be more helpful than the traditional version for participants with lower prior knowledge who exerted a lower degree of effort. Hwang, Chuang, Chen and Tseng (2012) further combined a customized interface with their certification tutoring system. Their system allows students to learn on different technology devices and to meet their needs by setting customized menu buttons.

In the past, certification tutoring systems were only used on computers. They have gradually evolved to provide diversification and mobility. However, there is as yet very little research on personalized ubiquitous multi-device certification tutoring systems. The focus of our study is therefore on how to enable learners to use a certification tutoring system anytime and anywhere, while also helping enhance their understanding of their own learning status.

Bloom's taxonomy of educational objectives

The assessment results of a system must be able to clearly reflect the teaching objectives so that appropriate assistance can be provided based on the individual status of students. In the analysis of teaching objectives, the most appropriate reference is Bloom's Taxonomy of Educational Objectives (Chen & Wu, 2003).

Bloom et al. (1956) proposed Bloom's Taxonomy of Educational Objectives (herein referred to as the "old version"). The educational objectives were divided into two parts: "knowledge" and "intellectual abilities and skills." "Knowledge" constitutes a single category, while "intellectual abilities and skills" is divided into five categories. Thus, there are six categories in all. Sorted from simple to complex, they are knowledge, comprehension, application, analysis, synthesis and evaluation. This taxonomy was adopted by many scholars for a considerable period of time. However, over time, it was found that some parts needed improvement. Thus, Anderson and Krathwohl (2001) proposed a revised edition of Bloom's Taxonomy of Educational Objectives (herein referred to as the "new version"). They divided the original educational objectives into the knowledge dimension and the cognitive process dimension. The knowledge dimension includes "factual knowledge," "conceptual knowledge," "procedural knowledge" and "metacognitive knowledge." The cognitive process dimension is divided into six categories: "remember," "understand," "apply," "analyze," "evaluate" and "create."

Applying Bloom's Taxonomy of Educational Objectives to evaluate learning effectiveness can not only reflect the learners' learning effectiveness, but can also give educators clear guidance (Shen et al., 2005). For example, Wu, Hwang, and Tsai (2013) proposed an expert system which adopts several cognitive processes in Bloom's taxonomy of educational objectives. The results showed that the experimental group students had significantly better achievements than the control group students in the "remember," "apply," "analyze," and "evaluate" categories, while no significant difference was found in the "understand" category. In this study, we applied Bloom's Taxonomy of Educational Objectives to implement a Bloom light table to diagnose students' learning status.

Personalized learning

Chen and Macredie (2010) proposed that there are three human factors which may affect learners' learning effectiveness; that is, differences in individual prior knowledge, cognitive style and gender may result in different learning outcomes. Utilizing different teaching methods according to an individual's status is called personalized learning. In the past, many studies related to personalized learning were conducted. For example, Wang, Tsai, Lee, and Chiu (2007) proposed a personalized learning object recommendation model. From their experimental results, it was found that the model is efficient in terms of making adaptive personalized learning object recommendations. In addition, Nedungadi and Raman (2012) proposed a cloud-based adaptive learning system which incorporated mobile devices into a classroom setting. The system was used in school computer labs. It provided teachers with real-time feedback on individual and group learners, and pedagogical recommendations for content adaptation based on the users' knowledge levels and preferences. In general, the main emphasis of personalized learning is that appropriate teaching content is provided according to the learner's knowledge level, learning experience, learning needs, and other personal factors. Based on learners' individual differences, the learning materials should be adjusted (Cho, Kim, & Kim, 2002).

Egan (1999) pointed out that images can make a message more concrete. Along these lines, Chen, Chen and Chen (2011) proposed a color-oriented tool to trace students' learning processes. From the different colors, teachers can quickly notice those students with poor performance in the learning process.

Taking these developments into consideration, this study built a system to enable students to more clearly view their learning status. We designed a Bloom light table to diagnose students' weaknesses. Based on this table, a mechanism for strengthening the practice of exam questions can be implemented. The system also displays lights of different colors at the end of each exam so that the students can see their level for different chapters. In this way, the aim of personalized learning can be achieved.

Ubiquitous learning

In recent years, with the popularity of wireless network technologies and mobile devices, learning activities are no longer limited by time and space. No matter where learners go, digital learning activities can be conducted. This forms a ubiquitous learning environment (Hwang, Tsai, & Yang, 2008; Hwang, Wu, & Chen, 2007).

Hwang, Wu, Tseng and Huang (2011) built a context-aware ubiquitous learning platform on which students can get instant replies to problems they meet in the learning process by connecting their mobile devices to the platform. Their experimental results show that students' achievement and efficiency were significantly improved.

In this study, in order to enable students to absorb knowledge more freely and actively, the certification tutoring
system was built using a cloud database. Students can use their mobile devices or PCs (personal computers) to link to
the system anytime and anywhere. Thus, ubiquitous learning can be conducted.

The related applications of multi-device learning systems

In recent years, with the rapid development of mobile learning, learning modes have changed greatly. Learning can be performed not only on computers but also on mobile devices such as smartphones, tablet PCs, notebooks and personal digital assistants (PDAs), to conduct ubiquitous learning (Lin, Chu, Wang, & Guo, 2012).

However, the diversity of mobile learning devices results in compatibility issues. The specifications of different devices are not the same, which can limit the functionality of software whose design does not consider the individual characteristics of mobile learning devices. Hwang, Chuang, Chen and Tseng (2012) proposed a game-based learning system which integrated diverse mobile learning devices. Their system can detect the learner's device type and then adjust to a layout more suitable for that device. Users can also set menus based on personal needs. This mechanism implements the integration of multiple devices.

Multi-device designs not only facilitate learning but also reduce system development costs. We believe that this
concept will be of increasing interest. Therefore, in this study, to make the certification tutoring system more
convenient for students to learn, we provided a multi-device environment for them to practice certification exam
questions.

The architecture and functions of PUMDCTS


Students can use PCs and mobile devices, such as smartphones, tablet PCs and notebooks, to connect to the server through the Internet or wireless networks. Subsequently, they can log into the system by inputting their student ID and password to practice exam questions, take mock exams and make individual score inquiries. Teachers can use PCs and the Internet to manage all the system databases. The hardware architecture of the system is illustrated in Figure 1.

Figure 1. Hardware architecture diagram

The system development tools include ASP.NET for the web page design and SQL Server for the database management. Students can practice questions from different chapters through the "exam practice" module, take exams similar to the real certification exam through the "mock exam" module, and make individual score inquiries to see all their scores through the "score inquiry" module. Teachers can manage students' data, scores and portfolios by using the "student data management" module, the "student score management" module, and the "student portfolio management" module. Teachers can also manage exam questions through the "exam question management" module. The software architecture of the system is illustrated in Figure 2.

Figure 2. Software architecture diagram

In addition, we designed a ranking table of exam experience on the login screen. The table ranks students according to the total number of questions they have answered correctly. We expected that healthy competition would enhance the students' learning motivation.

The classification of certification exam questions

According to the new version of Bloom's Taxonomy of Educational Objectives, the cognitive process dimension has six categories: "remember," "understand," "apply," "analyze," "evaluate" and "create." The certification tutoring course is HTML and the certification exam is TQC HTML 4.01, for which the exam questions are multiple choice questions, so only the first four categories were used to evaluate the students' HTML capability. As regards the categories of "evaluate" and "create," deeper evaluation standards such as project implementation would need to be defined. However, because the course focuses on acquiring the certificate and building basic knowledge, PUMDCTS uses only "remember," "understand," "apply" and "analyze." In addition, because most of the exam questions of the TQC HTML 4.01 item bank are synthetic, it is more appropriate to simply divide the questions into two levels by combining "remember" and "understand" into a low level and "apply" and "analyze" into a high level. The exam questions consisted of 500 questions distributed across nine chapters. To judge which level each exam question belonged to, we invited two teachers who each had more than five years of experience teaching HTML courses. Based on their teaching experience, the teachers divided the 500 questions into the two levels. If a question belonged to "remember" or "understand," it was allocated to the low level, while questions belonging to "apply" or "analyze" were categorized as high level. After classification, the data were input into the exam question database to facilitate the implementation of the subsequent personalized strengthening practice mechanism and learning status diagnosis.
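To make this concrete, the following minimal sketch (in Python; the names are illustrative and not taken from the actual ASP.NET implementation) encodes the mapping from the four cognitive categories to the two levels stored in the exam question database:

# Illustrative two-level classification of the revised Bloom categories.
# The level of each of the 500 TQC HTML 4.01 questions was assigned
# manually by two experienced teachers; this only encodes the mapping rule.
BLOOM_TO_LEVEL = {
    "remember": "low",
    "understand": "low",
    "apply": "high",
    "analyze": "high",
}

def level_of(bloom_category: str) -> str:
    """Return the level ("low" or "high") stored for a question."""
    return BLOOM_TO_LEVEL[bloom_category]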

Learning diagnostic mechanism

After students finish the exam practice, the system calculates the rates of correct answers for the low-level and high-level questions respectively, and displays the Bloom light table. The light color depends on the rate of correct answers: if it is greater than or equal to 70%, the light is green; if it is less than 70% but greater than 40%, it is yellow; and if it is less than or equal to 40%, the light is red. An example is illustrated in Table 1. When a student has completed 10 questions of a certain chapter, the system judges whether the answers are correct or incorrect and calculates the rate of correct answers. In the example, because three of the five low-level questions are answered correctly, the rate is 60% and the light is yellow. For the high level, the rate of correct answers is 20% and the light is red. If a student finishes practicing the exam questions of every chapter, he/she will have a table showing 18 lights (nine chapters at two levels each).

Table 1. Light table judgment example

Level                                       Correct/Total   Correct rate   Light color
Low level ("remember" and "understand")     3/5             60%            Yellow
High level ("apply" and "analyze")          1/5             20%            Red
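As a minimal sketch (the thresholds come from the text above; the function name is hypothetical), the light assignment rule can be written as:

def light_color(correct: int, total: int) -> str:
    # Return the Bloom light color for one level of one chapter.
    rate = correct / total
    if rate >= 0.7:
        return "green"    # correct rate >= 70%
    elif rate > 0.4:      # greater than 40% but less than 70%
        return "yellow"
    else:                 # less than or equal to 40%
        return "red"

# The Table 1 example: 3/5 low-level questions -> yellow, 1/5 high-level -> red.
assert light_color(3, 5) == "yellow"
assert light_color(1, 5) == "red"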

Strengthening practice mechanism

Figure 3 shows the algorithm for picking exam questions so that the students' weaknesses can be improved through more frequent practice.

Figure 3. The algorithm to improve students weaknesses

In order to achieve this purpose, a random number R between 1 and 100 is generated for picking an exam question. If R is between 1 and 50, the pick enters the strengthening area; if R is between 51 and 100, the pick enters the general area. If the pick procedure enters the strengthening area, the system reads the exam questions whose Bloom light is red and randomly picks one of them. The red light area contains the exam questions with a high error rate, so the mechanism gives those questions a greater chance (50%) of being picked, allowing the student more practice with them.

If the pick procedure enters the general area, a second random number R2 between 1 and 100 is generated. If R2 is between 1 and 50, the pick enters the white light area. The white light area contains the exam questions which have not yet been picked, so those questions have a higher chance (50%) of being picked. If R2 is between 51 and 80, the pick enters the yellow light area. The yellow light area contains the exam questions with a middle error rate, so those questions have a middle chance (30%) of being picked. If R2 is between 81 and 100, the pick enters the green light area. The green light area contains the exam questions with a low error rate, which thus have a low chance (20%) of being selected.

If no questions exist in the strengthening area (red light area), then R is changed to between 51 and 100, forcing the pick into the general area. If no questions exist in the white light area, then R2 is changed to between 51 and 100, forcing the pick into the yellow light area or the green light area. If no questions exist in the yellow light area, then R2 is changed to between 81 and 100, forcing the pick into the green light area.
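A minimal sketch of this two-stage weighted pick, assuming the questions are grouped by their current Bloom light color (the data structure and function names are illustrative, not the authors' ASP.NET implementation):

import random

def pick_question(by_color: dict[str, list[int]]) -> int | None:
    # by_color maps "red", "white", "yellow" and "green" to the IDs of the
    # questions currently in each light area.
    # Stage 1: R in 1..100; 1..50 -> strengthening (red) area. If the red
    # area is empty, R is forced into 51..100, i.e. the general area.
    if by_color.get("red") and random.randint(1, 100) <= 50:
        return random.choice(by_color["red"])
    # Stage 2: R2 in 1..100; white 1..50 (50%), yellow 51..80 (30%),
    # green 81..100 (20%). An empty area forces R2 past its range.
    r2 = random.randint(1, 100)
    for color, upper in [("white", 50), ("yellow", 80), ("green", 100)]:
        if r2 <= upper:
            if by_color.get(color):
                return random.choice(by_color[color])
            r2 = upper + 1   # area empty: fall through to the next area
    return None              # every area is empty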

The main functions of each module for the student interface

To improve students' weaknesses, we implement the strengthening practice mechanism through the "exam practice" module, which allows students to practice the exam questions. Students are also free to choose a range for the exam, but each practice is limited to a maximum of three chapters. After the students have finished practicing, the system shows them the Bloom light table for that range.

The "mock exam" module simulates a real certification exam situation, and therefore does not invoke the strengthening practice algorithm. The questions are picked randomly from all nine chapters of the exam question base, and the exam time is limited. The system also shows the Bloom light table at the end of the mock exam.

The main differences between the two modules are as follows (summarized in the configuration sketch after this list):

• The exam practice module lets students choose the chapters to be tested, while the mock exam module does not.
• The exam practice module has no time limit, while the mock exam is limited to 40 minutes, the same as the real certification exam.
• The exam practice module shows only the lights of the chosen chapters, while the mock exam module shows comprehensive lights.
• The exam practice module picks those questions which should be strengthened according to the students' weaknesses, while the mock exam module picks the questions completely at random.
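As a hedged summary (the field names are hypothetical, not from the paper), the two modules can be viewed as two configurations of the same exam engine:

from dataclasses import dataclass

@dataclass
class ExamMode:
    choose_chapters: bool            # student selects the chapters (up to three)?
    time_limit_minutes: int | None   # None = no time limit
    lights_shown: str                # "chosen chapters" or "comprehensive"
    strengthening_pick: bool         # weighted pick vs. completely random

EXAM_PRACTICE = ExamMode(True, None, "chosen chapters", True)
MOCK_EXAM = ExamMode(False, 40, "comprehensive", False)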

At the end of every exam, the students can enter the "score inquiry" module to browse all their exam scores and Bloom light tables.

The main functions of each module for the teacher interface

Teachers can manage the basic data of each student, such as student ID, name, etc., by using the "student data management" module. They can manage the students' scores for each exam by using the "student score management" module. They can manage the students' portfolios, which record data on the processes of students taking exams and making score inquiries, by using the "student portfolio management" module. Finally, they can append, update, delete and inquire about exam questions through the "exam question management" module.

Snapshots of the system execution

Figure 4 is a snapshot of a mock exam on a smartphone. The snapshot shows that, after finishing the mock exam, the system gives the test score and the comprehensive Bloom light table. Because the screen size of a smartphone is small, the information which can be accommodated is limited, so a streamlined layout design is needed. Therefore, we integrated some functions into a menu button. The functions of the button are "test again," "navigation of the questions" and "back to home page." Figure 5 is a snapshot of a practice exam on a tablet PC. Students are free to choose up to three chapters to test, and the system gives the test score and the Bloom light table for the test range. Because the screen size of tablet PCs is smaller than that of PCs, the layout is adjusted accordingly. Figure 6 is a snapshot of a PC while a student is conducting a mock exam. Due to the large screen size of PCs, the layout is designed to accommodate more information.

Figure 4. Snapshot of a mock exam on a smartphone

Figure 5. Snapshot of a practice exam on a tablet PC

Figure 6. Snapshot of an exam process on a PC
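A minimal sketch of this kind of device-dependent layout selection (user-agent matching is our assumption; the paper does not describe how the device type is detected):

import re

def select_layout(user_agent: str) -> str:
    # Choose a layout template by (assumed) user-agent sniffing.
    ua = user_agent.lower()
    if re.search(r"iphone|android.+mobile", ua):
        return "smartphone"  # streamlined layout; functions folded into a menu button
    if re.search(r"ipad|tablet|android", ua):
        return "tablet"      # mid-size layout
    return "pc"              # full layout accommodating more information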

Research method
In this study, we planned an experiment to investigate the effectiveness of the system. The participants were 94 sophomore students at a university of science and technology in the central region of Taiwan. The experimental course was HTML. We randomly divided the students into an experimental group and a control group of 47 students each. Before the experiment, the first quiz was conducted and regarded as the pretest. In addition, the course teacher and the progress of the course were identical for the two groups.

The experimental group used PUMDCTS and the control group used a general certification tutoring system which picks exam questions at random. The experimental process matched the progress of the HTML course. Of the three lessons each week, the last was set aside for practicing questions with the two systems. The systems were open around the clock, and the students were free to use them whenever they wanted. The experiment lasted one semester, a total of four months.

The experimental process of this study was divided into two stages. The first stage was the learning effectiveness analysis. This stage analyzed the test scores of the midterm exam, the second quiz (between the midterm exam and the final exam), and the final exam within one semester. Each test was divided into two parts: the cognitive test and the technical test. The cognitive test consisted of multiple choice questions. The number of exam questions and the time limit were consistent with the formal certification exam: a total of 50 questions were picked from the chapters of a certain range, and the exam time was 40 minutes. The technical test was conducted on a PC by implementing the functions requested in the exam questions. There were 10 questions in the technical test. The total score was split between the two parts, with the cognitive test worth 50 points and the technical test worth 50 points. This stage analyzed the differences in the achievements of the two groups in the cognitive test and the technical test when using the systems. The flowchart of the first stage is shown in Figure 7.

Figure 7. Flowchart of the first stage

The second stage involved in-depth interviews after the end of the experiment. A questionnaire of open-ended questions was designed to collect the participants' more in-depth views on PUMDCTS. In addition, we chose 23 students for in-depth interviews, mainly to understand their views on the usefulness of the system, its ease of use, and the help it provided for their learning. We expected that a wide range of data could be collected and analyzed to make the results more complete.

Experimental results and discussions
The results of the first stage - the learning effectiveness of the cognitive tests

This section analyzes the cognitive test scores of the pre-test and the three post-tests (i.e., midterm exam, second quiz and final exam) to show the effectiveness of PUMDCTS and the continuity of the effect. To more precisely evaluate the two groups' learning performances, ANCOVA was used to compare their cognitive test scores in the midterm exam, second quiz and final exam, excluding the impact of the pre-test scores.

Before employing ANCOVA, the homogeneity of the regression coefficient was tested and confirmed with F = 2.56
(p > .05), implying that ANCOVA can be applied to the analysis of the three post-test scores of the two groups.

Table 2 shows the ANCOVA results. It was found that the students who learned with PUMDCTS showed
significantly better cognitive test scores than those who learned with the traditional certification tutoring system in
the three post-tests with F = 11.473 (p < 0.01), F = 5.435 (p < 0.05) and F = 11.106 (p < 0.01), respectively. That is,
the effectiveness of learning with PUMDCTS lasted until the final exam.
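For readers who wish to reproduce this kind of analysis, here is a minimal sketch using Python's statsmodels (the column names and data file are hypothetical; the paper does not state which statistics package was used):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per student: pre-test score, one post-test score (e.g. the midterm
# cognitive score) and group membership ("experimental" or "control").
df = pd.read_csv("cognitive_scores.csv")  # hypothetical data file

# Homogeneity of regression slopes: the group x pretest interaction should
# be non-significant (as reported above, F = 2.56, p > .05) before ANCOVA.
hom = ols("posttest ~ C(group) * pretest", data=df).fit()
print(sm.stats.anova_lm(hom, typ=2))

# ANCOVA as a linear model: post-test ~ group, with the pre-test score as a
# covariate to exclude its impact.
model = ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p for the group effect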

Table 2. ANCOVA results of the two groups' cognitive test scores in the midterm exam, second quiz and final exam

Variables      Group                N    Mean    SD       Adjusted mean   F
Midterm exam   Experimental group   47   31.64   11.711   32.176          11.473**
               Control group        47   26.79   9.420    26.250
Second quiz    Experimental group   47   28.49   11.521   29.098          5.435*
               Control group        47   25.49   11.273   24.880
Final exam     Experimental group   47   33.60   11.721   34.156          11.106**
               Control group        47   28.26   11.318   27.695
Note. **p < .01. *p < .05.

Table 3. Results of ANCOVA with repeated measures of the two groups' cognitive test scores in the midterm exam, second quiz and final exam

Variables          N    Mean    SD      Adjusted mean   F         Post hoc (Bonferroni)
(1) Midterm exam   94   29.21   10.85   29.33           12.47**   (1) > (2); (3) > (2)
(2) Second quiz    94   26.99   11.44   27.02
(3) Final exam     94   30.93   11.77   30.99
Note. **p < .01.

Figure 8. The number of questions practiced between exams:

Period                       Experimental group   Control group
Pretest - Midterm exam       71,270               44,737
Midterm exam - Second quiz   36,431               29,576
Second quiz - Final exam     58,416               35,692

In addition, ANCOVA with repeated measures was employed to compare the differences in the two groups' cognitive test scores between the midterm exam, second quiz and final exam. Mauchly's test of sphericity showed that χ² = 1.70 (df = 2, p = 0.428 > 0.05), implying that ANCOVA with repeated measures could be employed. As shown in Table 3, the adjusted means of the three tests were 29.33, 27.02 and 30.99, respectively. A Bonferroni test revealed that the test scores of the midterm exam and final exam were significantly higher than those of the second quiz, with F = 12.47 and p < .01. That is, PUMDCTS benefited the students more in the midterm and final exams than in the second quiz.

To further investigate the contributing factors, we accessed the students' practice records, as shown in Figure 8. We found that before the second quiz (the number of practice questions was 36,431 for the experimental group and 29,576 for the control group), the number of times the students used the systems was significantly reduced compared with before the midterm and final exams. One possible reason is that, when facing different exams, the students' attitudes towards preparing and their learning desires differ. For the midterm and final exams, the students may have taken more time and put more effort into practicing on the system than for the second quiz, so the difference between the two groups was more significant in the midterm and final exams, while it was less so in the second quiz.

The results of the first stage - the learning effectiveness of the technical tests

In addition to analyzing the learning effectiveness of the cognitive tests, this study also explored the impact of the use of PUMDCTS on the students' web design abilities, which were evaluated by the scores of the technical tests. The results are shown in Figure 9. For the pretest, the average scores of the experimental and control groups were 41.17 points and 41.28 points, respectively. They were very close, with no significant difference, indicating that the prior knowledge of the two groups was about the same. For the midterm exam, the average scores of the experimental and control groups were 36.60 points and 36.70 points, respectively, again with no significant difference between the two groups. For the second quiz, the average scores of the experimental and control groups were 39.15 points and 37.55 points, respectively. Although the difference did not reach significance, the average gap between the two groups in this exam was 1.6 points. Finally, for the final exam, the average scores of the experimental and control groups were 33.11 points and 30.32 points, respectively. Although the difference still did not reach significance, the gap between the two groups increased to an average of 2.79 points.

Figure 9. The average technical test scores during one semester:

Exam           Experimental group   Control group
Pretest        41.17                41.28
Midterm exam   36.60                36.70
Second quiz    39.15                37.55
Final exam     33.11                30.32

To sum up, although the differences between the two groups in the technical tests did not reach statistical significance, the use of PUMDCTS may have played a positive role in promoting the students' web design capabilities, and a positive transfer of learning from the cognitive tests to the technical tests may have taken place. As indicated by Huang, Huang, Wang, and Hwang (2009), the transfer of learning from one subject to another can be stimulated in electronically mediated courses. Therefore, in a future study, it is expected that more conclusive results can be obtained by extending the experimental time to one year or more.

The results of the second stage - interviews regarding the system's usefulness

This section discusses whether PUMDCTS was helpful for acquiring the knowledge of the certification exam and the
web design course. In addition, whether the Bloom diagnostic light table was helpful to the learners is also explored.

233
After the interviews, the respondents generally felt that the use of PUMDCTS helped them learn the content of the certification exam and the web design course. In terms of acquiring knowledge, the respondents expressed comments the same as or similar to the following: "Using PUMDCTS really helped the learning for the certification exam" (STA01, STA02, STA06, STA15) and "Using PUMDCTS can contribute to learning web design and other related knowledge" (STA02, STA03, STA11, STA13, STA14).

Regarding the personalized strengthening practice mechanism and the Bloom diagnostic light table, the respondents expressed comments such as: "The lights can give me a better understanding of my weaknesses in learning HTML, can give me a deeper impression, and can enhance my memory" (STA05, STA10, STA13, STA15) and "I can find out the direction of the problems and it is easier to understand my own learning status" (STA01, STA08, STA09, STA13, STA14).

However, some respondents gave negative comments, including: "The question repetition rate of the personalized strengthening practice mechanism is too high" (STA02, STA03, STA08, STA14, STA15). This indicates that the strengthening practice mechanism needs to be improved.

The results of the second stage - interviews regarding the system's ease of use

According to their responses in the interviews, the respondents generally believed that the multi-device certification tutoring system made it more convenient for them to learn, and they were willing to continue using it. They gave positive responses regarding the multi-device design and expressed comments such as: "I can easily use the system anytime and anywhere" (STA03, STA15, STB01, STB08).

However, the respondents also expressed some negative comments, mostly related to the layout design. Two representative comments are: "The page layouts of navigation for the questions and score inquiries were not so good" (STA09, STA11, STB08) and "You can't go back to the previous question" (STA11, STA12, STB04, STB08).

The results of the second stage - interviews regarding learning motivation

In this study, to get students to learn in a healthy competitive way and thus enhance their learning motivation, we designed the experience rankings. In the interviews, some learners expressed that the competitions gave them a greater willingness to use the system. They made comments such as: "The system allows me to interact with classmates and have a good competition, which makes me have a strong wish to outperform my classmates" (STA06, STA10, STA12).

Discussions, contributions and suggestions

From Figure 8, it can be seen that the total number of practice questions for the experimental group was 166,117, compared with 110,005 for the control group. During the experiment, the experimental group practiced with PUMDCTS more frequently than the control group practiced with the traditional tutoring system. We infer that this is one reason why the experimental group showed better learning achievements than the control group in the cognitive tests.

To summarize the above, our contributions are as follows. In the past, very little research has proposed tutoring systems which combine these functions. In addition, the strengthening practice mechanism, implemented by probability according to whether the light is red, yellow, green or white, is proposed here for the first time. The system evaluation results showed that there was a positive influence of PUMDCTS on the cognitive tests, while its influence on the technical tests was not significant.

The system implementation techniques and the experimental findings can serve as a reference for educators and/or educational system developers. Based on the findings of this study, it is suggested that teachers or researchers who intend to improve students' learning performance in programming courses pay attention to providing personalized guidance to individual students based on their learning status and knowledge levels, as indicated by Wang et al. (2007) and Cho et al. (2002). This can be done by considering the following procedure: (1) classifying the exam questions based on Bloom's six categories or other test-item categorizing schemes, such as the concept-effect model suggested by Hwang, Panjaburee, Triampo and Shih (2013); (2) adopting a learning diagnostic mechanism based on the exam question classification scheme; (3) analyzing students' learning status and needs based on the learning diagnostic mechanism; and (4) providing personalized guidance to individual students based on the diagnostic results.
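A hedged sketch of how steps (1) through (4) might fit together, reusing the illustrative helpers sketched earlier (BLOOM_TO_LEVEL, light_color, pick_question); this illustrates the suggested procedure rather than any specific system:

def next_practice_question(question_ids, history):
    # (1) Each question was classified offline into a Bloom level
    #     (see BLOOM_TO_LEVEL above).
    # (2)+(3) Diagnose learning status: turn each question's answer history
    #     into a light color. (In PUMDCTS the light is computed per chapter
    #     and level; we simplify to per-question lights here.)
    by_color = {"red": [], "white": [], "yellow": [], "green": []}
    for qid in question_ids:
        if qid not in history:              # never answered yet
            by_color["white"].append(qid)
        else:
            correct, total = history[qid]   # e.g. (3, 5)
            by_color[light_color(correct, total)].append(qid)
    # (4) Personalized guidance: bias the next pick towards weaknesses.
    return pick_question(by_color)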

Conclusions and future work


In this study, by combining Bloom's Taxonomy of Educational Objectives, we developed a personalized ubiquitous multi-device certification tutoring system. We also analyzed the students' scores on the cognitive tests and technical tests over one semester. In addition, interviews were conducted after the experiment. According to the analysis of the students' scores and the interview contents, we summarize the findings as follows:
• By incorporating Bloom's theory, the proposed system can enhance the students' effectiveness in acquiring certificates and related knowledge.
• The personalized strengthening practice mechanism and the Bloom diagnostic light table can help students better understand their learning status.
• The multi-device design approach can help students use the system more conveniently.

Building on the experimental results, in the future a longer-term observation can be made to explore whether the use of PUMDCTS has a significant impact on web design capabilities. Whether the use of PUMDCTS has different impacts on students with different learning styles and cognitive styles can also be explored. We hope that the research can be made more comprehensive through the collection of a wider range of data.

Acknowledgements
This study is supported in part by the National Science Council of the Republic of China under Contract No. NSC
99-2511-S-275-001-MY3.

References
Anderson, L. W., & Krathwohl, D. R. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York, NY: Addison Wesley Longman.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives, Handbook 1: Cognitive domain. New York, NY: David McKay.
Chen, S. Y., & Macredie, R. (2010). Web-based interaction: A Review of three important human factors. International Journal of
Information Management, 30(5), 379-387.
Chen, W. N., Chen, C. H., & Chen, D. W. (2011, January). Color-coded guide students in the course of game-based learning tool
for tracking applications. Paper presented at the 2011 International Conference on Joyful E-Learning (JEL2011), Pingtung,
Taiwan.

Chen, Y. H., & Wu, Y. Y. (2003). [Testing and assessment]. Kaohsiung, Taiwan: FuWen.

Chen, Y. S., Kao, T. C., & Sheu, J. P. (2003). A Mobile learning system for scaffolding bird watching learning. Journal of
Computer Assisted Learning, 19, 347-359.
Cho, Y. H., Kim, J. K., & Kim, S. H. (2002). A Personalized recommender system based on web usage mining and decision tree
induction. Expert Systems with Applications, 23(3), 329-342.
Egan, M. (1999). Reflections on effective use of graphic organizers. Journal of Adolescent & Adult Literacy, 42(8), 641-645.
Huang, Y. M., Huang, T. C., Wang, K. T., & Hwang, W. Y. (2009). A Markov-based recommendation model for exploring the transfer of learning on the web. Educational Technology & Society, 12(2), 144-162.

Hwang, G. H., Chen, B., & Wang, J. M. (2012). A Joyful formative assessment license tutoring system combined with e-books.
Ling Tung Journal, 32, 113-132.
Hwang, G. H., Chuang, C. Y., Chen, S. Y., & Tseng, W. F. (2012, October). Mobility aids integration of diversity game-based
learning systems: Planning and implementation. Paper presented at the 2012 Taiwan E-Learning Forum (TWELF 2012), Tainan,
Taiwan.
Hwang, G. H., Lee, C. Y., & Tseng, W. F. (2012). Development and evaluation of an educational computer game for a
certification examination. Journal of Educational Technology Development and Exchange, 5(2), 27-40.
Hwang, G. J., Tsai, C. C., & Yang, S. J. H. (2008). Criteria, strategies and research issues of context-aware ubiquitous learning.
Educational Technology & Society, 11(2), 81-91.
Hwang, G. J., Wu, C. H., Tseng, J. C. R., & Huang, I. (2011). Development of a ubiquitous learning platform based on a real-time
help-seeking mechanism. British Journal of Educational Technology, 42(6), 992-1002.
Hwang, G. J., Wu, T. T., & Chen, Y. J. (2007). Ubiquitous computing technologies in education. Journal of Distance Education
Technology, 5(4), 1-4.
Hwang, G. J., Panjaburee, P., Triampo, W., & Shih, B. Y. (2013). A Group decision approach to developing concept-effect models
for diagnosing student learning problems in mathematics. British Journal of Educational Technology, 44(3), 453-468.
Kerckhoff, A. C., & Bell, L. (1998). Hidden capital: Vocational credentials and attainment in the United States. Sociology of
Education, 71, 152-174.
Lin, C. W., Chu, H. C., Wang, Z. Y., & Guo, Y. T. (2012, May). Concept mapping based mobile augmented reality learning
system. Paper presented at the 2012 Global Chinese Conference on Computers in Education (GCCCE 2012), Pingtung, Taiwan.
Nedungadi, P., & Raman, R. (2012). A New approach to personalization: Integrating e-learning and m-learning. Educational
Technology Research & Development, 60(4), 659-678.
Perkowitz, M., & Etzioni, O. (1997). Adaptive web sites: An AI challenge. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (pp. 16-23). San Francisco, CA: Morgan Kaufmann Publishers Inc.
Shanker, A. (1996). Quality assurance. Phi Delta Kappan, 78(3), 220-225.
Shen, Y. S., Hwang, G. H., Lin, H. Y., Chen, W. S., Ke, C. F., & Liao, W. J. (2005, May). Using Bloom's classification theory to develop an online test assessment system. Paper presented at the 2005 Taiwan E-Learning Forum (TWELF 2005), Taipei, Taiwan.
Wang, T. H. (2008). Web-based quiz-game-like formative assessment: Development and evaluation. Computers & Education,
51(3), 1247-1263.
Wang, T. H. (2011). Developing web-based assessment strategies for facilitating junior high school students to perform self-regulated learning in an e-learning environment. Computers & Education, 57(2), 1801-1812.
Wang, T. I., Tsai, K. H., Lee, M. C., & Chiu, T. K. (2007). Personalized learning objects recommendation based on the semantic-
aware discovery and the learner preference pattern. Educational Technology & Society, 10(3), 84-105.
Wu, P. H., Hwang, G. J., & Tsai, W. H. (2013). An Expert system-based context-aware ubiquitous learning approach for conducting science learning activities. Educational Technology & Society, 16(4), 217-230.
Xiao, S. C. (1993). [Talk about what should be the concept of vocational education from the nature of technicians' certificates]. Technical and Vocational Education, (17), 24-27.
Xie, S. L., Hwang, G. H., Bai, C. H., Lin, T. C., & Tseng, C. K. (2012, December). Application QR code action research on
learning - A Case study in OCJP counseling certification. Paper presented at the 2012 International Conference on Digital
Convergence (ICDC 2012), Tainan, Taiwan.
