
DictionAR: An Augmented Reality Game to Improve Child Literacy Development through Word Construction

OMITTED FOR BLIND REVIEW

Abstract
This paper describes the concept, software design and prototype implementation of an Augmented Reality Game (ARG) that addresses Child Literacy Development through a word construction approach. The goal is to enhance children's reading and writing abilities by using multimedia and Augmented Reality to build an exciting software experience that favors development. This work also discusses the use of games and Augmented Reality in Education and presents early results of the research.

1. Introduction
Educators are always searching for new tools and methods to enhance students' performance in regular school skills such as mathematics and reading. Although these two skills are still of great importance, new approaches are needed in what we call the Information Age. This is partly due to the increasing importance of a more general group of abilities known as the 21st century skills, which include critical thinking, digital literacy, problem solving and communication, among others [6]. These skills are common to many aspects of life, as illustrated in Figure 1. As the world experiences an increasing use of technological devices in all sorts of fields and Information Technology becomes part of our daily routines, there are great expectations regarding advances in Education. Teachers have already realized the advantages and some of the limitations of many different types of educational software and media and, by understanding these tools better, they use them more effectively and look into new ways of keeping students interested and motivated. The new generation of educators also values skills other than those traditionally considered standard in regular school. The appealing possibilities of Augmented Reality (AR) in advancing spatial visualization make this new class of interface a very interesting option in Education [5].

This has proven to be true according to data collected in a brief online survey that took place as we studied the viability and relevance of the project. Twelve subjects filled out the online form, which contained 9 questions and 2 short texts: one explaining the basic concepts and showing consolidated examples of Augmented Reality use, the other a brief introduction to our project. There are a few findings to be explored and studied further. First, those who have less contact with children seem to find the use of AR less interesting for educational purposes. Second, those subjects also think children are not very interested in VR or AR environments. The last observation is that those with more contact with children believe that DictionAR and other games can effectively stimulate children to exercise reading and perhaps even create in them a desire to read more frequently.

Figure 1. Relation between skills and aspects of life

2. Related Work
There is some research addressing the use of Augmented Reality in Education, but we have found very little information about projects directly related to literacy, reading or writing development. AR Interactive Tutorial is a project by Human Interface Technology Laboratory New Zealand (HITLabNZ) students that presents a tangible interactive interface for language learning. Users play with 3-D tangible cubes and an animated agent to learn the construction of English words [2].

This was the work that inspired our intention to use an animated agent in future versions, and the early survey later confirmed it, as nearly all responses were positive to the question about how interesting it would be for kids to have an animated agent to interact with. Besides, this is perhaps the only even slightly similar project we have found. Other related works are [4], [5] and [1].

3. Software Design, Tools and Implementation Details


The data model for the prototype is quite simple. In the model, the Lesson entity represents the main subject to be discussed. An occurrence of a Lesson must have at least one Topic, but it can also have multiple Topics. A Topic is basically the description of a section within a Lesson. An occurrence of a Topic must have one or more Steps. The Step entity is like a slide in common presentation programs. A Step occurrence may contain text, an image and audio, but only a single occurrence of each type. The prototype's conceptual data model is shown in Figure 2.
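As a rough illustration of this model (a sketch only, not the project's actual code), the Lesson, Topic and Step entities could be expressed in C as nested structures; the field names and the use of dynamic arrays are assumptions made for the example.

/* Hypothetical C sketch of the Lesson/Topic/Step model described above.
   Field names and layout are illustrative assumptions. */
#include <stdlib.h>

typedef struct {
    char *text;        /* optional text for this step (NULL if absent) */
    char *imagePath;   /* optional image file, at most one per step    */
    char *audioPath;   /* optional audio file, at most one per step    */
} Step;

typedef struct {
    char *description; /* description of a section within a Lesson     */
    Step *steps;       /* one or more Steps                             */
    int   stepCount;
} Topic;

typedef struct {
    char  *subject;    /* main subject discussed in the Lesson          */
    Topic *topics;     /* at least one Topic                            */
    int    topicCount;
} Lesson;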

The Integrated Development Environment (IDE) currently used to code the game prototype is Microsoft Visual Studio 2008, and the programming languages are C and C++. The editor was developed in C# on .NET 3.5, while its Graphical User Interface (GUI) uses the flexible Windows Presentation Foundation (WPF), a programming model for user interface rendering. For the game we chose the OpenGL graphics library, mainly because of its performance and because it is one of the industry's standards. Another reason for this choice was the team's background in basic OpenGL programming, including essential tasks such as using graphics primitives, model loading, texture mapping and positioning objects according to their coordinate system.

To implement the Augmented Reality features we chose the ARToolKit software library. It provides facilities to deal with 3-D objects, multiple camera positions and orientations, marker pattern tracking, rendering and interaction [3]. The ARToolKit SDK provides a struct named ARMultiMarkerInfoT that holds useful information about the list of markers in use. Its attributes are: a pointer to an element of the marker list, an integer that holds the number of markers registered for use, an array of 12 double values that holds the current camera position, a visibility flag named prevF and, finally, another array of doubles that keeps the last position of the camera. Each element of the marker list is of type ARMultiEachMarkerInfoT and has a number of attributes with information on each marker; among them are the visibility flags, called visible and visibleR, that can be used to determine occlusion.

For now, the letter selection mechanism is based on marker occlusion. A basic set of routines was coded in nine modules to provide functions such as identifying whether, at a given moment, one or more markers are being hidden by the user. To achieve an acceptable usage level in the letter selection mechanism, a threshold of 1 second was implemented between the first detection of a marker occlusion and the system's recognition of it as a selection. If the current frame rate is 20 fps, a marker must remain occluded for 20 consecutive frames to be considered selected, which allows the user to pick the letter cube. This also avoids the common side effect of miscalculating the letter boxes' positions, which happens when the application cannot find one or more markers and is led to act as if the player were causing occlusion intentionally. Figure 3 shows part of the code behind this feature.
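As an illustration of the 1-second (20-frame) selection threshold just described (a sketch under assumptions, not the project's code), the routine below counts consecutive occluded frames per marker, reusing the markerStates convention of the OcclusionManager sample in Figure 3.

/* Hypothetical per-frame check for the letter selection threshold.
   Assumes markerStates[i] == 0 means marker i is occluded. */
#define NUM_MARKERS 6
#define FRAME_RATE 20            /* assumed frames per second  */
#define SELECT_FRAMES FRAME_RATE /* one second worth of frames */

extern int markerStates[NUM_MARKERS];
static int occludedFrames[NUM_MARKERS]; /* consecutive occluded frames per marker */

/* Returns the index of a marker occluded for a full second (a selection),
   or -1 if no marker qualifies yet. Call once per rendered frame. */
int pollSelectedMarker(void) {
    int i;
    for (i = 0; i < NUM_MARKERS; i++) {
        if (markerStates[i] == 0) {
            occludedFrames[i]++;
            if (occludedFrames[i] >= SELECT_FRAMES)
                return i;
        } else {
            occludedFrames[i] = 0; /* marker visible again: reset its counter */
        }
    }
    return -1;
}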

4. DictionAR: Game Description


Figure 2. Summarized Conceptual Model

DictionAR is a word construction game in which the main goal is to quickly complete a thematic dictionary. Since the game is still in early development, we prioritized a few use cases to demonstrate the concept, test basic user interaction and check the viability of the project. Our game intends to make children more comfortable with the alphabet, but it relies on the assumption that the player already knows it to some extent, so that the word construction dynamics become fun. Educators can build their activities through the Word Entry Editor (Figure 4), which makes the game adaptive to each kind of learner and lets the teacher try different approaches.


/*----------------------------------------------
   Module: OcclusionManager
   Action: returns a pointer to an array of integers
           representing the markers that are currently hidden
  ----------------------------------------------*/
#include <stdlib.h>                        /* malloc */

/* assumed declarations for symbols defined elsewhere in the module */
extern int markerStates[6];                /* marker visibility states        */
extern int numberOccMarkers(int *states);  /* counts the occluded markers     */

int *arrayOccMarkers() {
    /* Get the number of occluded markers */
    int n = numberOccMarkers(markerStates);
    int *retOccMarkers;
    int i = 0;

    /* Allocate memory for the array according to the number of occluded markers */
    retOccMarkers = (int *)malloc(n * sizeof(int));

    /* Reset the variable to use it as an index into the new array */
    n = 0;

    /* Search the array that holds the marker states */
    for (i = 0; i < 6; i++) {
        /* if the state is zero, the current marker is hidden */
        if (markerStates[i] == 0) {
            retOccMarkers[n] = i;
            n++;
        }
    }
    return retOccMarkers;
}
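A minimal usage sketch (an assumption for illustration, not taken from the paper) of how a caller might consume the returned array: the element count is obtained separately through numberOccMarkers, and the malloc'd memory is freed by the caller.

/* Hypothetical caller for arrayOccMarkers(). */
#include <stdio.h>
#include <stdlib.h>

void printOccludedMarkers(void) {
    int count = numberOccMarkers(markerStates); /* how many entries to expect */
    int *occ = arrayOccMarkers();
    int i;
    for (i = 0; i < count; i++)
        printf("marker %d is occluded\n", occ[i]);
    free(occ); /* the caller owns the malloc'd array */
}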

Figure 4. DictionAR's editor: new Word Entry screen

Teachers can set the list of words in different ways to exercise distinct aspects of the language, such as:

Hiding the initial letter, to focus on the beginning of the word, since most languages are read from left to right;

Hiding specific letter combinations at the beginning of a word, such as the pair tr in trick, when the initial letter by itself is not challenging;

Hiding vowels, which are usually the first letters learned, such as the a in cat;

Hiding custom patterns to exercise phonetic similarity, like on in the words phone and tone, or to teach specific phonetics or grammar, like the f sound of ph in the words phone and elephant.
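To make this configuration concrete, the following is a hypothetical sketch of how a word entry with hidden positions could be represented; the structure, field names and helper function are assumptions for illustration and not the editor's actual format. The asterisk convention mirrors the one used on the game's information board.

/* Hypothetical representation of a word entry with hidden letters.
   A '*' in the masked form marks a position the player must complete. */
#include <string.h>

typedef struct {
    char word[32];   /* the full word, e.g. "trick"  */
    char masked[32]; /* displayed form, e.g. "**ick" */
} WordEntry;

/* Hide 'count' letters starting at position 'start' (the teacher's choice). */
void hideLetters(WordEntry *e, const char *word, int start, int count) {
    int i;
    strncpy(e->word, word, sizeof(e->word) - 1);
    e->word[sizeof(e->word) - 1] = '\0';
    strcpy(e->masked, e->word);
    for (i = start; i < start + count && e->masked[i] != '\0'; i++)
        e->masked[i] = '*';
}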

Figure 3. OcclusionManager module sample function that returns a list of occluded markers.

The introductory screen displays a few instructions, such as how to choose a letter, what the feedback colors for right or wrong letter choices are, and how to request tips. Tips are of four different types:

Image Tip: a 256 x 256 pixel image that is displayed as a texture on a plane and can be enlarged by applying a zoom operation;

Audio Tip: a short audio clip containing a sentence that uses the current word;

3-D Model Tip: a word can have a three-dimensional model attached, in the OBJ file format;

Video Tip: due to time restrictions and a project decision, the Video Tip has not been implemented yet.

The Word Completing screen consists of six virtual 3-D cubes displayed on top of the multiple marker set and a 2-D information board. Each cube holds a pseudo-random letter at its center. The pseudo-random routine ensures that the hidden letters (chosen by the teacher) are available for selection; otherwise the player would most likely reach a situation in which completing the word would be impossible. A player gets points for each correct letter and for word completion, plus a bonus score for speed, and loses points when choosing letters that do not belong to the current word. When a player chooses a letter that is present in the current word, the feedback is a score increase and the cube holding the letter turns green for a few moments, as shown in Figure 5(A). All asterisk (*) symbols in the word that hide the chosen letter are then updated to show it. When a player chooses a letter that is not present in the current word, the score is decreased by 5 points and the cube holding the letter turns red.
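A hypothetical sketch of such a letter-assignment routine follows (an illustration under assumptions, not the game's actual code): place the required hidden letters first, fill the remaining cubes with random letters, then shuffle the set.

/* Hypothetical sketch: assign letters to the six cubes so that every
   hidden letter of the current word is guaranteed to be present. */
#include <stdlib.h>

#define NUM_CUBES 6

void assignCubeLetters(char cubes[NUM_CUBES], const char *hidden, int hiddenCount) {
    int i, j;
    /* 1. Place the letters the player must find. */
    for (i = 0; i < hiddenCount && i < NUM_CUBES; i++)
        cubes[i] = hidden[i];
    /* 2. Fill the remaining cubes with random letters. */
    for (; i < NUM_CUBES; i++)
        cubes[i] = 'A' + rand() % 26;
    /* 3. Shuffle so the required letters do not always occupy the same cubes. */
    for (i = NUM_CUBES - 1; i > 0; i--) {
        char tmp;
        j = rand() % (i + 1);
        tmp = cubes[i];
        cubes[i] = cubes[j];
        cubes[j] = tmp;
    }
}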

Figure 5. Player choosing the correct letter O(A), requesting an Image Tip(B) and viewing a 3-D model tip(C).

To request a Tip, the player must select two cubes simultaneously. On the first occurrence of this event for the current word an Image Tip is shown; on the second call an Audio Tip is played; and on the third call the 3-D model is displayed. A player requesting an Image Tip is shown in Figure 5(B). To avoid situations in which the player's intention is to request a tip but the system interprets it as a letter choice, a time threshold of 1 second was implemented: if the player occludes one marker and another marker is occluded in less than one second, the system reads it as a tip request and the selected cubes turn yellow temporarily. The player can also apply a zoom operation to an Image Tip by selecting a third marker; once the player releases the third cube, the image tip shrinks back to its original size. The latest implemented type of tip is the automatically rotating 3-D model, shown in Figure 5(C).
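As a sketch of how a tip request could be told apart from a letter selection using this 1-second window (an illustration under assumptions, not the project's code), the routine below reuses the markerStates convention of Figure 3 and a fixed frame rate of 20 fps.

/* Hypothetical per-frame check: two cubes occluded within one second of
   each other count as a tip request. Assumes markerStates[i] == 0 means
   marker i is occluded, as in the OcclusionManager sample. */
#define NUM_MARKERS 6
#define FRAME_RATE 20            /* assumed frames per second  */
#define WINDOW_FRAMES FRAME_RATE /* one second worth of frames */

extern int markerStates[NUM_MARKERS];

int isTipRequest(void) {
    static int framesSinceFirst = -1; /* -1: no occlusion in progress */
    int i, occluded = 0;

    for (i = 0; i < NUM_MARKERS; i++)
        if (markerStates[i] == 0)
            occluded++;

    if (occluded >= 2 && framesSinceFirst < WINDOW_FRAMES) {
        framesSinceFirst = -1; /* second cube arrived within the window */
        return 1;              /* interpret as a tip request            */
    }
    if (occluded == 1)
        framesSinceFirst = (framesSinceFirst < 0) ? 0 : framesSinceFirst + 1;
    else if (occluded == 0)
        framesSinceFirst = -1; /* all cubes visible again: reset */
    return 0;
}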

5. Conclusions and Future Work

In this work we presented information on Child Development and Literacy, Augmented Reality Games, and the relation between these and other important factors that form the theoretical base for our work. DictionAR's first version is still under development, and some of the near-future goals are to improve the general user interface, improve the game art, run tests with a Head Mounted Display, run tests with children in the 6-10 age range, and implement better user interaction.

An indispensable acquisition is the Head Mounted Display device, as it will give users a more immersive experience. In this sense, a study to choose the best equipment is under way, since it is an important project decision and importing this kind of hardware takes a while. The surveys and interviews generated a valuable amount of suggestions and data. One suggestion worth mentioning was to add a new feature called New Set of Letters, an option for the player to discard the current cubes and their letters and get in return a whole new set of cubes with random values. Another was to give an extra bonus to players who do not use tips, but we believe that would inhibit the player's curiosity, which is one of the greatest motivations; it would also go against our intention of making the player investigate the tips before guessing, since no points are lost for requesting tips. In addition, the movements and the visualization of tips seemed to be very exciting. Some of the contributions of this work are: the information gathered in the online survey and in interviews with parents and educators, a functional game demonstration version and, finally, the game's dynamic gameplay. This last one is perhaps the most valuable contribution, since it presents a creative new style of player interaction addressing Child Literacy through word construction games. The initial research results have proven stimulating, generating interest in students and teachers. Parents and people who have frequent contact with kids have shown enthusiasm and also found the proposal promising. Nevertheless, more study is needed, particularly on how effective DictionAR can be and how it influences the learning process of reading and writing skills.

References
[1] HITLab Projects Homepage, 2008. Available: http://www.hitl.washington.edu.
[2] M. Baird, B. Reeves, and P. Buchanan. AR Interactive Tutorial, 2008. Available: http://www.hitlabnz.org/wiki/COSC426.
[3] ARToolKit Homepage on HITLab, December 2009. Available: http://www.hitl.washington.edu/artoolkit.
[4] B. E. Shelton. Augmented reality and education: Current projects and the potential for classroom learning. Available: http://www.newhorizons.org/strategies/technology.
[5] B. E. Shelton and N. R. Hedley. Exploring a cognitive basis for learning spatial relationships with augmented reality. Available: http://www.newhorizons.org/strategies/technology.
[6] Partnership for 21st Century Skills. 21st century skills standards, 2004. Available: http://www.21stcenturyskills.org.

