Using Virtual Forest Environment on Collaborative Forest Management

Mingyao Qi, Tianhe Chi, Xin Zhang
Institute of Geographic Sciences and Natural Resources Research, CAS, Beijing, China, 100101
Institute of Remote Sensing Applications, CAS, Beijing, China, 100101
e-mail: qimingyao@vip.sina.com

Jingxiong Huang
ATR Lab, Shenzhen University, Shenzhen, China, 518060
Abstract: In this paper, we introduce our methods for using a Virtual Forest Environment (VFE) for collaborative forest management. We first discuss the construction issues of a virtual forest environment and present a method to automatically generate forest stand models from 2D GIS data, forest inventory data and remote sensing. Then, to meet the needs of opinion representation, we realize selecting and labeling directly on the scene with VRML and Java. Next, to support collaborative group work, we integrate methods from Computer Supported Cooperative Work (CSCW) and Collaborative Virtual Environments (CVE) and design three models: a collaborative perception model, a behavior model and a collaboration model. Finally, we introduce our prototype system: a virtual forest planning environment for people from all backgrounds, including the public community, domain experts and government managers. The work shows that the combination of VR, CSCW and RS technologies is a feasible and innovative way to support forest management.

Keywords: RS, VR, CSCW, CVE, GIS, forest management

I. INTRODUCTION

Since the Our National Landscape conference on applied techniques for analysis and management of the visual resource, held in Nevada, US, in 1979, the past two decades have seen much progress in constructing Virtual Forest Environments (VFE) as visual management tools for sustainable development, such as AMAP [1], IMAGIS [2] and Smart Forest [3][4]. These tools can be used for forest visualization and simulation; however, they neither let users immerse themselves in the virtual environment nor support multiple users attending the same scene and communicating with each other in an intuitive way, say, by drawing a blueprint directly on the 3D scene. In this paper, we introduce our methods for using a Virtual Forest Environment for collaborative forest management. A Collaborative Virtual Forest Environment system is a Collaborative Virtual Geographical Environment [5] that enables a wide range of people, including government officers, residents and domain experts, to participate from different places at the same time or at different times, by which they can join in, navigate, query, select, label and communicate with others, so as to formulate a cutting and planting plan based on the sustainable development policy.

In this paper, we first discuss the construction issues of a virtual forest environment and present a method to automatically generate forest stand models from 2D GIS data, forest inventory data and remote sensing. Then, to meet the needs of opinion representation, we realize selecting and labeling directly on the scene with VRML and Java. Next, to support collaborative group work, we integrate methods from Computer Supported Cooperative Work (CSCW) and Collaborative Virtual Environments (CVE) and design three models: a collaborative perception model, a behavior model and a collaboration model. Finally, we introduce our prototype system: a virtual forest planning environment for people from all backgrounds, including the public community, domain experts and government managers. The combination of VR, CSCW and RS technologies proves to be a feasible and innovative way to support forest management.

II. BACKGROUND OF OUR PROJECT
In 2002, we started the project "Study on Distributed Virtual Geographic Environments and Forestry Remote Sensing Modeling", funded by the Ministry of Science and Technology of China (2002CCC01900). This project is an interdisciplinary study on how to integrate VR, GIS, Remote Sensing (RS), Artificial Intelligence (AI), CSCW and computer network technology to build a distributed virtual forest environment system. With this system we can not only visualize the forest landscape from GIS and RS data and simulate the development of a forest stand, but also communicate with each other inside the virtual landscape through avatar embodiments. The forest area we selected is located in Zhangpu District of Fujian Province, China. With a forest cover rate of 60.52%, Fujian Province has the richest forest resources in southeast China. Local residents have long taken advantage of the forest in both economic and ecological respects, and how to maintain sustainable development and put forward a reasonable management proposal is undoubtedly a concern of the government, domain experts and residents alike. This project aims at providing those people with an on-the-spot


virtual communication platform. The target area is 8.75 kilometers wide and 11.425 kilometers long, as Figure 1 shows. The GIS and RS data we gathered are as follows:

DEM data, resolution 25 m (ArcInfo DEM format);
1:50000 digital vector map (ArcInfo Shapefile format);
1:10000 forest stand map (ArcInfo Shapefile format);
forest investigation data (Oracle 9i table format);
RS image data (TIFF).

Figure 1. The target area in Zhangpu District, Fujian Province, China.

III. CONSTRUCTION OF THE VIRTUAL FOREST ENVIRONMENT

The main functions of our collaborative virtual forest environment system and their solutions are listed in Table I.

TABLE I. COLLABORATIVE VFE FUNCTIONS AND THEIR SOLUTIONS

Function                   Solution
3D modeling                Multigen Creator, VRML, ArcGIS
Rendering                  Cortona VRML Client
Select and label           TouchSensor, VRML EAI
Collaborative perception   Declare and subscribe, Agent
Behavior                   Collision detection, route planning
Collaboration              3D Electronic Whiteboard, MAS, FIPA, ACL

A. 3D modeling and real-time rendering

A landscape can usually be understood as composed of six essential elements: landform, vegetation, water, structures, animals (including humans) and atmosphere [6]. The first two are discussed here. In the field of forest landscape modeling, many popular tools are available, such as Multigen Creator, 3ds Max, CosmoWorlds, VRMLPad and Cult3D; however, different tools often use different file formats that are not fully convertible to one another, so a more universal modeling language is preferred. VRML is a high-performance language for 3D visualization on the World Wide Web, and most 3D modeling packages support exporting VRML files. VRML 1.0 was introduced in 1994, and VRML 2.0, a version with more dynamic and interactive functions, was released in 1996; it defines more nodes such as Transform, Light, Viewpoint, Texture, Sensor, LOD, Fog and Sound. The next generation of VRML is X3D, an XML-compliant language that is expected to be much more widely accepted. Some forest landscape visualization and simulation applications have adopted VRML as their modeling language [7]. Because it is widely used, especially in web-based applications, we chose VRML as the modeling language for the VFE.

We use Multigen Creator (version 3.5.1), a powerful 3D modeling package, to build the landform model. First, we convert the ArcInfo DEM data to USGS DEM format by editing some fields in the file directly, since both are text formats with very similar structures. Second, we convert the USGS DEM file to Multigen DED (Digital Elevation Data) format using the readusgs module of Creator. Third, we choose an algorithm to rebuild the terrain surface from the DED, such as Polymesh, Delaunay, CAT or TCT, which yields a standard Multigen Creator model file (OpenFlight). Fourth, we take advantage of Creator to map the terrain with the RS orthophoto texture, create LODs and divide the whole terrain into several AOIs (Areas of Interest). The last step is to export to a VRML file.

Vegetation (mainly tree) modeling is somewhat different, because the distribution of trees develops dynamically and is not as fixed as the terrain. GIS data can be updated from time to time through RS, on-the-spot investigation and other means, so many well-known forest visualization packages, such as IMAGIS [2], support building vegetation models from land-use maps. In this project we consider two cases: offline construction and online construction. In most cases we use offline construction to pre-build the forest model and alleviate the real-time rendering burden, while online construction is usually used for simulation under user-given hypotheses, such as predicting what a stand will look like five years later according to a stand growth model. Two Parallel Graphics products, the Cortona Software Development Kit (SDK) and the Cortona VRML Client with its External Authoring Interface (EAI), were used to develop the offline construction program in VB and the online construction program in Java, respectively. The process of constructing a forest stand from GIS vector data can be divided into two steps: first construct the individual tree model, and second plant the model in the stand region. A very popular method of individual tree modeling is mapping a real tree's transparent image onto two planes [7][8]. We created an olive model by this method, as Figure 2 shows. To distribute trees over a stand region, we intersect horizontal parallel lines at a specific interval with the polygon of the forest stand boundary and then interpolate the inner points (see the sketch after Figure 2).

Figure 2. Mapping a real olive tree's transparent image onto two planes.
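The scan-line placement just described can be illustrated with a short sketch. This is a minimal example of the technique rather than the project's actual code; the boundary ring is assumed to be a simple polygon given as (x, y) vertices in stand coordinates.

```java
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Sketch of scan-line tree placement inside a stand polygon. */
public class StandPlanter {

    /** Returns candidate tree positions spaced 'interval' apart inside the boundary ring. */
    public static List<Point2D.Double> plant(List<Point2D.Double> ring, double interval) {
        double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        for (Point2D.Double p : ring) { minY = Math.min(minY, p.y); maxY = Math.max(maxY, p.y); }

        List<Point2D.Double> trees = new ArrayList<>();
        for (double y = minY + interval / 2; y < maxY; y += interval) {
            // Intersect the horizontal scan line at height y with every polygon edge.
            List<Double> xs = new ArrayList<>();
            for (int i = 0; i < ring.size(); i++) {
                Point2D.Double a = ring.get(i);
                Point2D.Double b = ring.get((i + 1) % ring.size());
                if ((a.y <= y && b.y > y) || (b.y <= y && a.y > y)) {
                    double t = (y - a.y) / (b.y - a.y);
                    xs.add(a.x + t * (b.x - a.x));
                }
            }
            Collections.sort(xs);
            // Interpolate inner points between each entry/exit pair of intersections.
            for (int i = 0; i + 1 < xs.size(); i += 2) {
                for (double x = xs.get(i) + interval / 2; x < xs.get(i + 1); x += interval) {
                    trees.add(new Point2D.Double(x, y));
                }
            }
        }
        return trees;
    }
}
```

A tree model such as the olive in Figure 2 can then be instanced at each returned position.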


Real-time rendering is essential in a collaborative VFE; in this project we leave it to the Cortona VRML Client, an ActiveX plug-in running in Microsoft Internet Explorer. What we can do to improve the rendering speed is to design efficient models, for example by using LODs, as mentioned above.

B. Select and label

Selection is an important means of interacting with the environment. Considering the various kinds of users, we only deal with the 2D mouse selection mode. Unlike in a 2D UI, selection in a 3D VFE is a little more complex. The mouse pointer moving on the screen of a 3D environment can be considered in a camera coordinate system whose origin is at the viewpoint and whose z axis points toward the mouse pointer; the mouse pointer then has a 3D coordinate (x, y, d), where d is the minimum depth of all intersections along the z axis. Suppose the picked point's coordinates in the world coordinate system are (X, Y, Z) and the translation vector is (X0, Y0, Z0); then (X, Y, Z) can be calculated by:

(X, Y, Z)^T = M (x, y, d)^T + (X0, Y0, Z0)^T    (1)

Here M is the rotation matrix between the two coordinate systems. In VRML 2.0, several pointing sensors are defined to support interaction with the model, such as TouchSensor, SphereSensor, PlaneSensor and CylinderSensor. When they are activated, they perform the above calculation and send out an event with the event description and the (X, Y, Z) coordinates of the target point. We put TouchSensors on objects and customize a Java class based on the VRML EAI to capture and handle those events.

Labeling is necessary in a CVGE, as illustrated in Section III, since it is an intuitive way to express one's opinion. So far, we support three kinds of labels: text, polyline and symbol. Text is used for comments and annotations, a polyline is used to sketch out a region, and a symbol is usually a tree model that can be placed at the clicked point. Figure 3 shows labeling tree models and text directly on the terrain.

Figure 3. Labeling tree models and text on the terrain.
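Equation (1) is just a rotation followed by a translation. The sketch below illustrates it with plain arrays; in the actual system the pointing sensors already report the world-space hit point, so this is only meant to make the math concrete (the identity rotation in main is an arbitrary example, not a real viewpoint).

```java
/** Sketch of equation (1): converting a camera-space pick point to world coordinates. */
public class PickTransform {

    /** M is the 3x3 rotation between camera and world frames; offset is (X0, Y0, Z0). */
    public static double[] toWorld(double[][] M, double[] offset, double x, double y, double d) {
        double[] camera = {x, y, d};
        double[] world = new double[3];
        for (int row = 0; row < 3; row++) {
            double sum = 0;
            for (int col = 0; col < 3; col++) {
                sum += M[row][col] * camera[col];
            }
            world[row] = sum + offset[row];  // (X, Y, Z)^T = M (x, y, d)^T + (X0, Y0, Z0)^T
        }
        return world;
    }

    public static void main(String[] args) {
        // Identity rotation and a simple offset, purely for illustration.
        double[][] M = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] hit = toWorld(M, new double[]{100, 0, 200}, 0.4, 0.2, 35.0);
        System.out.printf("label anchored at (%.1f, %.1f, %.1f)%n", hit[0], hit[1], hit[2]);
    }
}
```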

C. Collaborative perception model

Collaborative perception in a CVGE involves three Ws, Where, Who and What, that is, under what conditions one can detect others' existence as well as their status. The most famous model of awareness in a multi-user environment is Benford's spatial model [9]. This spatial model defines four key concepts, aura, focus, nimbus and adapter, for allowing objects to establish and control interactions. An agent is a hardware or (more usually) software-based computer system that enjoys the following properties: autonomy, social ability, reactivity and pro-activeness [10]. In this project, we use an Agent-Avatar pair to partly implement Benford's spatial model, namely aura and nimbus, through a declare-and-subscribe mechanism. Furthermore, we extend the aura and nimbus concepts from the spatial dimension to an organizational dimension in order to define whom a user is interested in. We define an Agent object for each avatar, so before a user logs into the environment, the agent prompts him to define his aura (the potential area within which others may detect him, for example by specifying a radius, and the potential people who may detect him) and his nimbus (the field of view and the people he is interested in). After login, the agent declares his aura and subscribes his nimbus to the server. When an aura and a nimbus intersect in both the spatial and the organizational dimension, a perception event takes place.

D. Behavior model

In most cases, behaviors of avatars in a CVGE, such as walking, flying and sending messages, are controlled manually by the users; however, some behaviors require the computer to perform tasks automatically or to guarantee their rationality, and the behavior model is designed to fulfill that. In this project we implement a walk behavior model in the Agent. There are two modes of walking on the terrain: one is fully navigated by mouse or keyboard, and the other is given a destination point or a few key points. In the former mode, to avoid colliding with the terrain or floating in the air, the Agent calculates the elevation of each point along the route and adjusts the elevation of the avatar to this value in real time. Since we have divided the whole terrain into 182 IndexedFaceSet nodes (a node type in VRML 2.0) and indexed them by recording their boundaries, the elevation calculation is fast. In the second mode, the Agent needs to plan a route from the current position to the destination. Route planning in a forest environment is very complex, sometimes even impossible, since roads are very limited in a forest. So far we simply interpolate intermediate points along the direct line at a fixed interval, as sketched below. This may not be accurate, but it is very simple and easy to understand.
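The two walk modes can be sketched as follows. The TerrainIndex interface is a hypothetical stand-in for the index over the 182 IndexedFaceSet tiles, and the route planner simply interpolates waypoints along the straight line at a fixed step, as described above.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the walk behavior: terrain following plus straight-line waypoints. */
public class WalkBehavior {

    /** Hypothetical stand-in for the boundary index over the terrain tiles. */
    public interface TerrainIndex {
        double elevationAt(double x, double z);
    }

    /** Mode 1: clamp the avatar to the ground while the user steers it (VRML is y-up). */
    public static double[] follow(TerrainIndex terrain, double x, double z, double eyeHeight) {
        return new double[]{x, terrain.elevationAt(x, z) + eyeHeight, z};
    }

    /** Mode 2: interpolate waypoints at a fixed interval along the straight line to the goal. */
    public static List<double[]> planRoute(TerrainIndex terrain, double[] start, double[] goal,
                                           double step, double eyeHeight) {
        double dx = goal[0] - start[0], dz = goal[1] - start[1];
        double length = Math.hypot(dx, dz);
        int n = Math.max(1, (int) Math.ceil(length / step));
        List<double[]> route = new ArrayList<>();
        for (int i = 0; i <= n; i++) {
            double x = start[0] + dx * i / n;
            double z = start[1] + dz * i / n;
            route.add(follow(terrain, x, z, eyeHeight));  // each waypoint is clamped to the ground
        }
        return route;
    }
}
```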

E. Collaboration model

In CSCW, collaboration is divided into four modes: same time and same place, same time and different place, different time and same place, and different time and different place.


Among these, the different-time-and-same-place mode is an efficient online collaboration method. In CSCW, the electronic whiteboard is a widely used collaboration model, so in this project we extend it to a 3D Electronic Whiteboard model. Essentially, the 3D Electronic Whiteboard is the part of the whole VFE that can be shared by the relevant people in the same region. Communication control is implemented by the mechanism of declaration and subscription: normally only a user's working status is declared and published to others, and labeling processes or labeling results are shared only if they are subscribed to; thus, to avoid conflicts between labels from different people, a user usually subscribes to one labeling process at a time. Communication between distributed users in a collaborative environment has been realized by many means, such as the DIS infrastructure [11], the HLA infrastructure and direct TCP/IP. Multi-Agent Systems (MAS) have their own standards to facilitate agent communication, one of the most important being FIPA (Foundation for Intelligent Physical Agents). Unlike the above solutions, FIPA uses an Agent Communication Language (ACL) to express collaboration messages [12]. The FIPA ACL specifies communication messages between agents and has an associated formal semantics. A FIPA-compliant MAS platform frees us from bottom-level communication design and lets us concentrate on semantic-level design. In this project, we use an open agent-building environment, JADE (Java Agent DEvelopment Framework), which implements most FIPA specifications, such as MTS (Message Transport Service), AMS (Agent Management System) and DF (Directory Facilitator); all that remains is to construct the ACL message content (Figure 4).

Figure 4. The architecture of the Agent communication system, which implements most FIPA specifications such as MTS, AMS and DF.
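As a concrete illustration, the sketch below shows how a user agent could publish a labeling action through JADE's standard ACL classes. The receiver name, ontology and message content are invented for the example; only the JADE API calls themselves (ACLMessage, AID, Agent.send) are standard.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

/** Sketch of a user agent declaring its labeling activity via a FIPA ACL message. */
public class LabelAgent extends Agent {

    @Override
    protected void setup() {
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        // Hypothetical peer agent that subscribed to this user's labeling process.
        msg.addReceiver(new AID("planner-avatar", AID.ISLOCALNAME));
        msg.setLanguage("fipa-sl");
        msg.setOntology("forest-labeling");  // made-up ontology name for the sketch
        msg.setContent("(label (type symbol) (species olive) (position 1204.5 36.2 873.0))");
        send(msg);
    }
}
```

Such an agent runs inside a JADE container together with the AMS and DF services mentioned above, so no socket-level communication code is needed on our side.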

IV. DISCUSSIONS

Virtual Reality is a computer-graphics world that looks real, sounds real and feels real, so realism is an important criterion for the usability of a collaborative VFE. Texture mapping is believed to be an efficient way to express the detail of objects and make models more lifelike, so we took photos of many tree species, such as olive, pine, litchi and mango, which makes them easy to recognize and fast to render. However, photos can only represent the current status of trees and cannot help simulate what the forest will look like a few years later. So in many cases we need to seek a balance between "looks like" and "acts like" [6]. In fact, we have developed a stand-alone program in Visual C++ to generate tree models from geometry parameters of the trunk, branches and leaves, and we have also established a tree growth model base which records such parameters for different tree species at different ages. Thus, given a tree species and its age, the 3D tree model can be calculated quickly. Compared with a model like that in Figure 2, this kind of model consists of many more polygons (a few thousand), so it is more suitable for constructing a local stand than a large forest area, and it may not look as lifelike, but it has the virtue of being flexible to change. The next step is to integrate this stand-alone function into the web-based collaborative VFE application.
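The parameter-driven generator described above can be sketched as a lookup from (species, age) into the growth model base followed by geometry construction. The parameter values and the crude trunk-plus-crown VRML fragment below are invented for illustration; the real program models the trunk, branches and leaves in Visual C++.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of building a simple parametric tree model from a (species, age) lookup. */
public class ParametricTree {

    /** Hypothetical geometry parameters recorded in the growth model base. */
    static class TreeParams {
        final double trunkHeight, trunkRadius, crownRadius;
        TreeParams(double trunkHeight, double trunkRadius, double crownRadius) {
            this.trunkHeight = trunkHeight;
            this.trunkRadius = trunkRadius;
            this.crownRadius = crownRadius;
        }
    }

    private static final Map<String, TreeParams> GROWTH_BASE = new HashMap<>();
    static {
        // Invented sample entries keyed by "species:age".
        GROWTH_BASE.put("olive:5", new TreeParams(2.5, 0.10, 1.2));
        GROWTH_BASE.put("olive:10", new TreeParams(4.0, 0.18, 2.1));
    }

    /** Emits a crude VRML fragment: a cylinder trunk with a sphere crown roughly on top. */
    public static String toVrml(String species, int age) {
        TreeParams p = GROWTH_BASE.get(species + ":" + age);
        if (p == null) {
            throw new IllegalArgumentException("no growth record for " + species + ", age " + age);
        }
        double crownY = p.trunkHeight / 2 + p.crownRadius / 2;
        return "Transform { children [\n"
             + "  Shape { geometry Cylinder { height " + p.trunkHeight + " radius " + p.trunkRadius + " } }\n"
             + "  Transform { translation 0 " + crownY + " 0\n"
             + "    children Shape { geometry Sphere { radius " + p.crownRadius + " } } }\n"
             + "] }";
    }
}
```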

REFERENCES

[1] De Reffye, P., Edelin, C., Francon, J., Jaeger, M., Puech, C., 1988, Plant models faithful to botanical structure and development. Computer Graphics, 22, pp. 151-158.
[2] Perrin, L., Beauvais, N., Puppo, M., 2001, Procedural landscape modeling with geographic information: the IMAGIS approach. Landscape and Urban Planning, 54, pp. 33-47.
[3] Orland, B., Radja, P., Su, W., 1994, Smart Forest: an interactive forest data modeling and visualization tool. In Proceedings of the Fifth Forest Service Remote Sensing Applications Conference, Salt Lake City, Utah, US, pp. 283-292.
[4] Orland, B., 1997, Smart Forest-II: Forest Visual Modeling for Forest Pest Management and Planning. http://www.imlab.psu.edu/smartforest
[5] Mingyao Qi, Tianhe Chi, et al., Public Participation Virtual Geographic Environment: A Study on Virtual Forest Environment. In Proceedings of the International Conference on Virtual Geographic Environment and Geocollaboration, Hong Kong, 15-16 Dec. 2003.
[6] Ervin, S.M., Digital landscape modeling and visualization: a research agenda. Landscape and Urban Planning, 54, pp. 49-62.
[7] Lim, En-Mi, Honjo, T., 2003, Three-dimensional visualization of forest landscapes by VRML. Landscape and Urban Planning, 63, pp. 175-186.
[8] Muhar, A., 2001, Three-dimensional modeling and visualization of vegetation for landscape simulation. Landscape and Urban Planning, 54, pp. 5-17.
[9] S. Benford, L. Fahlen, A Spatial Model of Interaction in Large Virtual Environments. In Proceedings of ECSCW'93, Milan, Italy, September 1993, pp. 13-17.
[10] M. Wooldridge, N. R. Jennings, Intelligent Agents: theory and practice. Knowledge Engineering Review, 10(2), pp. 115-152.
[11] M. R. Stytz, S. B. Banks, W. D. Wells, Towards realizing cooperative distributed workspaces for distributed virtual environments. In Proceedings of Intelligent Information Systems, pp. 545-549.
[12] Mingyao Qi, Tianhe Chi, Guang Shu, Agent-based multi-user interaction in Geo-DVE. In Proceedings of the International Conference on Active Media Technology, 2003, pp. 259-264.

