Database Requirements of CIM Applications
G. Kappel‡, S. Vieweg†
† Institute of Applied Computer Science and Information Systems, Department of Information Engineering, University of Vienna, Austria
‡ Institute of Computer Science, Department of Information Systems, University of Linz, Austria
1 Introduction
Computers have been widely used in business and manufacturing, but their use has
been limited almost exclusively to purely algorithmic solutions within the subtasks
design, engineering, production planning, and manufacturing. The data was organized
in file systems, application-specific databases, and local databases. This led to production, design, and business islands. The same data was stored in different representations
and locations. The consequences of this approach are obvious. Database changes have
only local effects and the independent organization of various databases implies a high
and non-controllable degree of replication. Thus, the global consistency of the data can
hardly be managed and the interdependent support of the enterprise tasks is compli-
cated.
Recent efforts aim at integrating all of the technologies described above to form an integrated and integrative model of business and manufacturing concepts. The
integration refers to the technical and administrative processes and the required data.
This concept is known as Computer Integrated Manufacturing (CIM) [Harrington73].
The major components of a CIM system range from sales and marketing through production to storage and distribution. Computerization in sales, marketing, accounting,
stores, and distribution mainly focuses on administrative tasks. This includes the anal-
ysis and presentation of sales data and the administration of customers and marketing
data. In our analysis we will not cover these traditional issues explicitly. However, they
are of great concern when discussing integration aspects (see Section ‘3.3 Integration
Requirements’). Our investigation mainly focuses on engineering design (computer-
aided design), manufacturing engineering, production planning and control, manufac-
turing, and quality assurance.
The realization of CIM requires very complex algorithms, suitable organizational
structures, and integrated, distributed database systems. The decision-making pro-
cesses cover the spectrum from high-level decisions at the strategic planning level
down to the specific technical decisions at the machine scheduling level. Consequently, the database system's services and performance must be comprehensive in order to satisfy the information requirements of these various tasks.
Database systems were introduced in the sixties and seventies to support the con-
current and reliable use of shared data, mainly in the area of business applications.
With the advance of so-called non-standard applications, like office information sys-
tems, multi-media systems, and computer integrated manufacturing, requirements for
various database features have changed; for example, from simple-structured data to
complexly structured data, from short single-user transactions to long multi-user trans-
actions, and from stable long-lived data descriptions to evolving schema management,
to mention just a few. This shift in requirements has also been reported in literature
[Ahmed92, Encarnaçao90, Kemper93]. The various articles focus on engineering
applications and their database demands. A comprehensive discussion of database
requirements, mainly integration requirements, of all aspects of a CIM system, how-
ever, is still missing. The objective of this chapter is to fill this gap.
Based on a brief introduction to database systems we will discuss CIM’s database
requirements. Thereby, we distinguish between requirements concerning data model-
ing issues, querying and manipulation issues, and integration issues. Based on the dis-
cussed requirements we focus on the state of the art of database technology and review to what extent CIM's database requirements can be fulfilled today and in the near future, respectively. The chapter concludes with an outlook on future database technology.
The work reported in this chapter is part of the ESPRIT project KBL (ESPRIT No.
5161), whose goal is the design and development of a Knowledge-Based Leitstand1.
The authors are responsible for incorporating object-oriented database technology as
an underlying information store, and as an integrative component between the Leit-
stand and various other CIM components.
1 The support by FFF (Austrian Foundation for Research Applied to Industry) under grant No.
2/279 is gratefully acknowledged.
2 Basic Features of Database Systems
In this section we will give a brief overview of the basic functionality of database
systems. We will start with the discussion of the functions of a database system, give a
definition of database systems, and review some basic data models. Due to the distrib-
uted nature of manufacturing we will also address issues of distributed data manage-
ment.
A database management system (DBMS) is a collection of mechanisms and tools
for the definition, manipulation, and control of databases (DB). A database system
(DBS) consists of a DBMS and one or more databases. In the following, we will use
the terms DBMS, DBS, and DB interchangeably if the meaning is unambiguous from
the context. In general, a DBMS has to provide the following functional capabilities
[Cattell91, Elmasri89, Ullman88]:
• management of persistent data (persistence)
• ability to manipulate the database, either with an interactive query language or
from applications (querying and manipulation)
• independence between application programs and databases (data independence)
• consistent and concurrent access of multiple users to the database (transaction
management)
• recovery from system failures and restoring of the database to a consistent state
(recovery management)
• authorization of user groups and their view of the database, archiving and tun-
ing of the database (database administration)
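Several of the capabilities listed above can be observed in miniature with a relational DBMS. The following sketch (not part of the chapter; table and column names are invented) uses Python's built-in sqlite3 module to show persistence, querying, transaction management, and recovery to a consistent state:

```python
# Illustrative sketch: persistence, querying, transaction management, and
# recovery with sqlite3. All names are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")          # a file path would give persistence
con.execute("CREATE TABLE part (pid TEXT PRIMARY KEY, name TEXT)")

# Transaction management: either both inserts become visible, or neither.
try:
    with con:                              # commits on success, rolls back on error
        con.execute("INSERT INTO part VALUES ('P1', 'gear')")
        con.execute("INSERT INTO part VALUES ('P1', 'shaft')")  # violates the key
except sqlite3.IntegrityError:
    pass                                   # recovery: the database stays consistent

rows = con.execute("SELECT pid, name FROM part").fetchall()
print(rows)                                # the failed transaction left no trace
```

Because the second insert violates the primary key, the whole transaction is rolled back and the table remains empty.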
In order to support an independent view of the services provided by a DBMS, a
three level architecture (ANSI/SPARC architecture [Tsichritzis78]) was introduced
and is widely accepted. It consists of the physical (internal) database level, the con-
ceptual database level, and the view (external database) level. The physical database
(level) consists of a collection of files and access structures (the so-called internal
schema). The conceptual database (level) consists of the conceptual schema of the uni-
verse of discourse, i.e., of the problem domain which has to be stored and manipulated
in the database. The concepts and mechanisms used to specify the conceptual schema
are part of the data model which comprises the logical foundation of each database
system (see also below). The data definition language (DDL) is used to describe the
conceptual schema. The view level (= external database) consists of subsets of the con-
ceptual database. As the users’ work with the database is mostly restricted to pre-
defined functions and responsibilities, they do not need to know all the information
stored at the conceptual level of the database. Views provide a mechanism to supply
each user or each user group with his or her own database. The operations performed
within the database are expressed by means of a data manipulation language (DML).
The DML allows the user to query and manipulate the data and the data definitions stored in the database.
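The three levels can be sketched in SQL terms: the CREATE TABLE statement is DDL defining the conceptual schema, the CREATE VIEW statement defines an external schema, and SELECT is DML. The schema below is invented for illustration, not taken from the chapter:

```python
# Sketch of the three-level architecture with sqlite3. All names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
# Conceptual level: the full schema of the universe of discourse (DDL).
con.execute("CREATE TABLE employee (eid INTEGER, name TEXT, salary REAL)")
con.execute("INSERT INTO employee VALUES (1, 'Huber', 3200.0)")

# External level: a view that hides the salary from most user groups.
con.execute("CREATE VIEW staff AS SELECT eid, name FROM employee")

# DML against the view: the user need not know the full conceptual schema.
print(con.execute("SELECT * FROM staff").fetchall())   # [(1, 'Huber')]
```

The internal level (files and access structures) is managed entirely by the DBMS and never appears in these statements, which is exactly the point of the architecture.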
DBMSs may be distinguished in terms of the data model they support. The first
DBMSs followed the hierarchical and network model [Elmasri89]. Out of the deficiencies of those early data models, which were mainly due to the interdependence between the logical data structure and the physical record and file structure, the relational model [Codd70, Ullman88] emerged. The relational data model is based on the
mathematical notion of a relation. The querying and manipulation of the data is set-ori-
ented and follows the relational algebra. The de facto standard data definition and
manipulation language for relational databases is SQL [Melton93]. SQL is based on
the relational algebra. It fully supports the three level database architecture.
Relational databases have gained wide acceptance, especially in the area of business and administration applications, since they meet the demand for managing large amounts of simply and uniformly structured data while providing an intuitive querying and manipulation language. However, it was soon realized that the modeling power of the relational model is strong and weak at the same time. It is strong in the sense that most problems may be expressed by a set of relations and by applying the relational operators. However, it is weak insofar as it lacks expressive power and extensibility with respect to attribute domains. To cope with these insufficiencies various extended
relational data models and systems have been proposed such as Postgres
[Stonebraker91] and Starburst [Lohman91] (see also Section ‘4 Extended Relational
and Object-Oriented Database Systems’).
In spite of the higher expressiveness of extended relational data models some fun-
damental problems in working with (extended) relational database systems remain
which are known as impedance mismatch [Cattell91, Shasha92]. The term stems from
electrical engineering. When two attached wires have different impedances a signal
traveling from one to the other will be reflected at the interface, which may lead to
problems. This metaphor describes the problems caused by two different paradigms,
the programming language paradigm and the database paradigm. Whereas traditional
programming languages may be characterized as being procedural, supporting record-
at-a-time processing, and complexly structured data types, traditional (relational) data-
base systems can be characterized as being declarative, supporting set-at-a-time pro-
cessing, and simple data types. To overcome the disadvantages of working with two
different worlds object-oriented database systems were introduced.
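The mismatch can be made concrete by expressing comparable logic in both styles. The sketch below (schema and data invented for illustration) contrasts a declarative, set-at-a-time SQL update with the procedural, record-at-a-time cursor loop of the host language:

```python
# Sketch of the impedance mismatch: set-at-a-time SQL versus record-at-a-time
# processing in the host language. Schema and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE part (pid TEXT, weight REAL)")
con.executemany("INSERT INTO part VALUES (?, ?)",
                [("P1", 2.0), ("P2", 5.0), ("P3", 1.5)])

# Declarative, set-at-a-time: one statement over the whole set.
con.execute("UPDATE part SET weight = weight * 1.1 WHERE weight > 1.8")

# Procedural, record-at-a-time: the host program iterates over a cursor,
# converting each row between database tuples and language values.
heavy = []
for pid, weight in con.execute("SELECT pid, weight FROM part"):
    if weight > 2.0:            # logic lives in the program, not the query
        heavy.append(pid)
print(heavy)
```

The loop duplicates work the query processor could do itself, and each row must cross the boundary between the two type systems, which is the practical cost of the mismatch.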
An object-oriented database system follows an object-oriented data model
[Dittrich90]. At present, there exist several different object-oriented data models and
thus, different DDLs and DMLs. They are either based on existing object-oriented lan-
guages, like C++ and Smalltalk or they have been newly developed. Although there
exists no single object-oriented data model there is consensus that a data model to be
called object-oriented has to exhibit the following core features [Atkinson89]: com-
plex object modeling, object identity, encapsulation, types, inheritance, overriding
with late binding, extensible types, and computational completeness. Complex object
modeling deals with the specification of objects out of other objects which may also be
complex by nature. Object identity provides a system-defined unique key and thus,
allows the sharing of objects. The principle of encapsulation defines data (= structural
information) and accessing operations (= behavioral information) as one unit called
object type (or object class). It makes it possible to distinguish the interface of the object type, i.e., the set of operation signatures, from its implementation, i.e., the implementation of the operations and the underlying data structure. An operation signature consists of the name of
the operation, the types and names of its input parameters, and the type of its return
value. An object type defines structural and behavioral information of a set of objects
called instances of the object type. Inheritance supports incremental type specification
by specializing one or more existing object types to define a new object type. Overrid-
ing means the redefinition of the implementation of inherited operations, and late bind-
ing allows the binding of operation names to implementations at run time. The
extensible type requirement permits new object types consisting of structural and
behavioral information to be defined by the user and, furthermore, to be treated like any
other predefined type in the system. Finally, computational completeness requires that
any computable algorithm can be expressed in the DDL and/or DML of the database
system.
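Several of these core features can be illustrated in a few lines of an object-oriented language. The sketch below (class names invented, loosely inspired by the geometric domain of Figure 1) shows encapsulation, inheritance, overriding with late binding, complex object modeling, and object sharing via identity:

```python
# Sketch of core object-oriented features. All class names are invented.
class Curve:                       # object type: interface + implementation
    def length(self):              # operation signature in the interface
        raise NotImplementedError

class Line(Curve):                 # inheritance: incremental type specification
    def __init__(self, a, b):
        self._a, self._b = a, b    # encapsulated structural information
    def length(self):              # overriding the inherited operation
        (x1, y1), (x2, y2) = self._a, self._b
        return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

class Polyline(Curve):             # complex object built from component objects
    def __init__(self, segments):
        self._segments = segments  # references to shared, identifiable objects
    def length(self):
        # late binding: each segment's own length() is selected at run time
        return sum(s.length() for s in self._segments)

seg = Line((0, 0), (3, 4))
path = Polyline([seg, seg])        # the same object shared twice via identity
print(path.length())               # 10.0
```

Note how the composite `Polyline` never inspects the concrete type of its components; late binding dispatches to the right `length()` implementation.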
So far, we have not mentioned the architecture of a DBS. A database system archi-
tecture reflects the functionality described above. The core of a database system con-
sists of the data itself (physical database). The physical database is managed by the file
manager which handles requests from the database manager. The database manager
operates at the conceptual level and coordinates concurrent access of multiple users to
the database and it also manages authorization. The access to the database from the
user’s point of view is managed by the query processor. The query processor handles
access to the database, either via application programs or interactively stated user que-
ries. The resources involved (data and control) may be organized centrally or distrib-
uted. This leads us to both an important requirement of CIM (see Section ‘3.3
Integration Requirements’) and a vital area of research in database systems, namely
distribution.
Distributed database systems (DDBS) extend the notion of distribution of
resources to the concept of transparency [Traiger82]. In a networking system the user
has to explicitly access different nodes (data or services) in a distributed environment.
In contrast to that, in a distributed database system the access control to subunits
located at different nodes is kept completely invisible to the users of such a system.
The user is not aware of the distribution of the resources such as data, machines, and
services (location transparency). The same is true in case of replicated data and ser-
vices (replication transparency). Database access and query submission to a distrib-
uted database can be performed from any node in the network (performance
transparency). Last but not least, data and schema changes should not affect distribu-
tion (copy and schema change transparency).
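Location transparency in particular can be sketched with a toy catalog: a (much simplified, invented) distributed-DBMS facade records which node stores which relation, so users query by relation name alone and never address a node:

```python
# Toy sketch of location transparency. Node and relation names are invented;
# a real DDBS would add query decomposition, replication, and commit protocols.
class DistributedDB:
    def __init__(self):
        self._nodes = {}      # node name -> {relation name -> rows}
        self._catalog = {}    # relation name -> node name

    def add_node(self, node):
        self._nodes[node] = {}

    def store(self, node, relation, rows):
        self._nodes[node][relation] = rows
        self._catalog[relation] = node     # location is recorded internally

    def query(self, relation):
        # The user never names a node: the catalog resolves the location.
        node = self._catalog[relation]
        return self._nodes[node][relation]

db = DistributedDB()
db.add_node("vienna")
db.add_node("linz")
db.store("vienna", "orders", [("O1", "gear")])
db.store("linz", "parts", [("P1", "gear")])
print(db.query("parts"))   # the user is unaware the data lives on node "linz"
```

Replication transparency would extend the catalog to map a relation to several nodes and pick one at query time, again without involving the user.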
Distributed database systems have recently been extended to so-called multidata-
base systems (= MDBS = federated database systems) which incorporate issues of
autonomy of local systems, their distribution, and their heterogeneity [Öszu91,
Sheth90]. The autonomy of the database systems refers to the distribution of control
over the manipulated data. This corresponds to independently operating database sys-
tems which cooperate within a multidatabase system. A distributed database system
consists of a single distributed DBMS managing multiple databases. Distribution spec-
ifies the degree of distribution of data. The data either is physically distributed and
occasionally replicated within the network, or it is located at a single site. Heterogene-
ity ranges from differences in the DBMS to differences in the operating system and
hardware environment. The heterogeneity in the DBMS is due to distinct data models
(structures, constraints, DDL, and DML), distinct management features (concurrency
control, commit strategies, recovery) and semantic heterogeneity (= distinct schemas).
The latter is probably the most difficult to cope with. Semantic heterogeneity occurs
whenever various meanings or interpretations of the data have to be managed. Con-
sider a local production database where the productivity of a worker is stored in terms
of processed units per day. The management department may also store the worker’s
productivity, but computed as some ratio of working hour costs and cost of a bought-in
part. Attempts to compare or analyze data from the two databases are misleading
because they are semantically heterogeneous. To sum up, a multidatabase system sup-
ports access to multiple DBS, each with its own DBMS. Questions of implementing
distributed database systems and multidatabase systems in terms of communication
networks are beyond the scope of this chapter. The interested reader is referred to
[Tanenbaum89].
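The productivity example above suggests what a mediator between semantically heterogeneous databases must do: convert both interpretations to one common unit before any comparison. The sketch below is invented for illustration (the figures, field names, and the conversion assumption are not from the chapter):

```python
# Sketch of a mediator resolving the semantic heterogeneity described above.
# All figures and the conversion rule are invented assumptions.
def productivity_from_production(units_per_day):
    # Local production DB: processed units per day -- already the common unit.
    return units_per_day

def productivity_from_management(hour_cost, part_cost, hours_per_day=8):
    # Management DB: a ratio of working-hour cost to the cost of a bought-in
    # part. Under the (assumed) rule that one part corresponds to
    # part_cost / hour_cost hours of work, convert back to units per day.
    return hours_per_day * hour_cost / part_cost

a = productivity_from_production(16)
b = productivity_from_management(hour_cost=40.0, part_cost=20.0)
print(a, b)   # comparable only after conversion to a common unit
```

The hard part in practice is not the arithmetic but discovering that the two attributes mean different things at all, which is why semantic heterogeneity is called the most difficult kind to cope with.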
Product data contain the description of the products involved in the manufacturing
process. The data may consist of graphic, textual, and numeric information. The pro-
duction data describe how the individual parts are to be manufactured. An example of
production data are process plans with detailed instructions of how to make a part
based on its design. This includes operations, machines, tools, feeds, stock removal,
and inspection procedures. Operational data consist of all information about the pro-
duction process itself. This includes production schedules, production planning data,
and machine/part related data. Resource data describe the resources involved in pro-
duction: the kinds and number of machines, the tools, and the personnel. Besides these production-related data, financial and marketing data have to be managed as well.
Requirements for CIM applications differ considerably from those of pure business applications. Table 1 gives an overview of the differences between business and CIM databases with respect to data modeling issues. Data types describe the conceptual entities
stored in a database. Business applications usually operate on basic data types such as
integer, floating point numbers, and character strings. CIM applications require more
complex, predefined as well as user-defined data types because of the complexity of
the objects that have to be designed and manufactured. These objects are far more
interrelated than data in conventional applications. The relationships between objects
of business applications are simple compared to those in CIM applications where they
are manifold and may imply dependencies between objects. There are lots of objects
involved in business applications as well as in CIM applications; most probably there
are more objects involved in the latter.
a) Surface S is bounded by the four edges E1 to E4, which meet in the points P1 to P4 (drawing not reproduced).

b) Data definition based on the relational model:

   Surface-Edge     Edge           Edge-Point     Point
   SId  EId         EId  Type      EId  PId       PId  Type
   S    E1          E1   line      E1   P1        P1   pointOfCorner
   S    E2          E2   arc       E1   P2        P2   pointOfCorner
   S    E3          E3   line      E2   P2        P3   pointOfCorner
   S    E4          E4   line      E2   P3        P4   pointOfCorner
                                   E3   P3
                                   E3   P4
                                   E4   P1
                                   E4   P4

c) Retrieval statements in SQL:

   select * from Surface-Model where SId = 'S'

   select * from Edge
   where EId in (select EId from Surface-Edge where SId = 'S')

   select * from Point
   where PId in (select PId from Edge-Point
                 where EId in (select EId from Surface-Edge where SId = 'S'))

Figure 1. 2D model a) with data definition based on the relational model b) and retrieval statements in SQL c); a) and b) are taken from [Scheer90, p. 269]
Example: Consider a 2D model as it is shown in Figure 1.a) and its specification using
the relational data model shown in Figure 1.b). The information concerning
just one graphical object is scattered over five relations. The situation gets
worse if one wants to retrieve the complex object, i.e., the surface object
with all its direct and indirect component objects. The appropriate SQL
statements are given in Figure 1.c). We will not go into details of SQL.
However, the example already demonstrates the user’s responsibility for
consistent querying and manipulation. In our example, the user has to write
three select-from-where statements to retrieve all information concerning
just one complex object.
Object-oriented models capture these requirements and allow the modeling and
processing of complexly structured design objects as logical units [Kim89]. This issue
is also related to the extension of attribute domains since the definition of complexly
structured objects equals the introduction of new object types.
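The contrast with Figure 1 can be made concrete: in an object-oriented model the surface is one nested structure traversed directly, instead of five relations joined by three nested SELECTs. The sketch below follows the figure's data; the class layout itself is an illustrative assumption:

```python
# Sketch of the Figure 1 surface as one logical unit of nested objects.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    pid: str

@dataclass
class Edge:
    eid: str
    kind: str          # 'line' or 'arc'
    points: tuple      # component objects, possibly shared between edges

@dataclass
class Surface:
    sid: str
    edges: list

p1, p2, p3, p4 = (Point(p) for p in ("P1", "P2", "P3", "P4"))
s = Surface("S", [Edge("E1", "line", (p1, p2)),
                  Edge("E2", "arc",  (p2, p3)),
                  Edge("E3", "line", (p3, p4)),
                  Edge("E4", "line", (p4, p1))])

# One traversal retrieves the whole complex object -- no joins required.
print([e.eid for e in s.edges])
print({pt.pid for e in s.edges for pt in e.points})
```

The shared `Point` objects also capture object identity: the corner P2 referenced by E1 and E2 is the same object, not two tuples that happen to agree on a key.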
Relationships and Dependencies
As these complex objects are highly interrelated to other objects, relationships and
dependencies between these objects must be considered. There are two main kinds of
relationships distinguished, generalization and aggregation. Generalization models the
isA-relationship between object types. An object type is a subtype of another object
type called supertype if it exhibits at least the same properties (= attributes and opera-
tions) as its supertype but possibly even more. Generalization may be used to classify
a set of objects according to some classification attribute. The objects in the subset (=
subtype) inherit all the properties from the superset (= supertype), thus, common prop-
erties are defined at a single place. Object types connected via an isA-relationship
build up a generalization hierarchy or a generalization graph; the latter occurs in case
of inheritance from more than one supertype.
Example: Parts may be produced inhouse or are delivered by external suppliers. Thus,
the generalization hierarchy consists of the object types PART, INHOUSE-PART,
and BOUGHT-IN-PART (see Figure 2). INHOUSE-PART and BOUGHT-IN-
PART inherit from PART. (Note: the notation is taken from the object-oriented
modeling technique OMT [Rumbaugh91].) In addition, parts may be classified
according to their production status as RAW-MATERIALs, INTERMEDIATE-
WORKPIECEs, and PRODUCTs. The hollow triangle represents the isA-relationship. As already noticed in the example, CIM applications comprise various generalization hierarchies, which should be expressible in the data model.
Aggregation models the partOf-relationship between objects. An object called
component object is a part of another object called composite (= complex) object if the
latter is made out of the former. Component objects may again consist of (other) com-
ponent objects. The resulting hierarchy is called aggregation hierarchy.
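Both relationship kinds map naturally onto object-oriented language constructs: generalization onto subclassing, aggregation onto component references. The sketch below follows the PART example of Figure 2; the attribute names are invented:

```python
# Sketch of generalization (isA) and aggregation (partOf) in Python,
# following the PART example. Attribute names are invented.
class Part:                        # supertype: common properties defined once
    def __init__(self, pid):
        self.pid = pid

class InhousePart(Part):           # isA PART
    pass

class BoughtInPart(Part):          # isA PART
    def __init__(self, pid, supplier):
        super().__init__(pid)
        self.supplier = supplier

class Assembly(Part):              # composite object in an aggregation hierarchy
    def __init__(self, pid, parts):
        super().__init__(pid)
        self.parts = parts         # partOf-relationship to component objects

a = Assembly("A1", [InhousePart("P1"), BoughtInPart("P2", "ACME")])
print(all(isinstance(p, Part) for p in a.parts))   # True: both are PARTs
```

Making `Assembly` itself a `Part` lets assemblies nest inside other assemblies, which is exactly how an aggregation hierarchy of arbitrary depth arises.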
Figure 2. Generalization hierarchies of the object type PART (drawing not reproduced)
Table 2. Size and time requirements of control tasks in CIM databases [Hammer91]
The management and control tasks in a manufacturing enterprise are built up hier-
archically. The requirements with respect to the amount of data, and response time dif-
fer from level to level. Table 2 gives an overview of this control hierarchy including a
sketch on the amount of data involved and the required response time.
At the engineering and planning level the acceptable response time is within the
range of minutes and the amount of processed data is relatively high. At the machine
control level the response time has to be very short; small amounts of data gathered
with sensors have to be transformed immediately. In our analysis we focus on the engi-
neering and manufacturing control levels.
Table 3. Querying and manipulation issues in business databases and CIM databases
In addition to size and time requirements, accessing a CIM database differs considerably from accessing a business database. Table 3 provides an overview. Access to the
highly interrelated objects in the database is more complex. The underlying query lan-
guage must support complex querying of data both in terms of query predicates sup-
porting associative access and in terms of path expressions supporting navigational
access. Usually the modification of data takes longer than in traditional transaction
processing applications. Consider the process of designing a new product as opposed
to the transaction processing system of an automatic teller machine. The design of the
geometric object is a long and highly interactive process whereas bank transactions are
mostly predetermined in the amount and the way they access the database.
The processing of engineering problems covers qualitative and quantitative analy-
sis of design data. If applied separately, neither technique can address all problems in manufacturing successfully; thus, qualitative and quantitative analysis must be applied in combination [Rao91]. Qualitative analysis is based on symbolic and graphical
information while quantitative analysis mostly relies on the processing of numerical
data. Database systems in the CIM environment must be capable of supporting the pro-
cessing and retrieval of both types of data, numerical and symbolic.
The functions of a manufacturing enterprise that have to be supported during the
life cycle of a product are manifold. The assemblies have to be designed and a toler-
ance analysis has to be performed for those assemblies. The engineering drawings of
assemblies, individual parts, fixtures, and manufacturing facilities have to be prepared.
Analytical models of the parts have to be developed for structural and thermal analysis. The weights, volumes, and manufacturing costs have to be calculated. The
parts have to be classified in order to be reused or to find similar parts already existing
in stock. Before the manufacturing may start the bill of materials has to be computed,
usually a very resource-intensive task. Production plans for the parts and assemblies have to be designed, and NC programs must be implemented and assigned to the production of the individual parts. In case of robot use in work cells, the movements and
tools must be programmed. Furthermore, changes in the design of assemblies and their
effects on manufacturing have to be controlled.
All phases in manufacturing require administrative support. Thus, business com-
puting is not only important per se but also as a supporting task of the manufacturing
process. As an example consider cost accounting. Product costs have to be computed
at every step of the production process. Cost planning, resource management, and marketing strategies heavily rely on these data. Although the requirements for these applications are covered by traditional transaction processing, they are worth mentioning, if only to stress the integrative character of CIM applications once more. In our requirements analysis we put emphasis on advanced transaction processing issues.
CIM requires all the tasks during the product life-cycle to be supported by database
services. The major features for data querying and manipulation by multiple users are
presented in the following. These include:
• Advanced Transaction Management
• Flexible Database Access
• Change Management
• Interfaces
Change Management
Let us further elaborate on the example of the production plan. It is not only desir-
able to compute the production plan but also to handle several versions for comparison
and optimization. For example, the production planning and control system optimizes
the timely production of products. The optimization process heavily relies on con-
straints based on the availability of resources and based on administrative and market-
ing needs. Different versions of the production plan may exist due to partly conflicting
goals which have to be satisfied. In addition to version management evolving applica-
tions such as CIM applications must cope with changing requirements which imply
changes in the description of the data in the database, known as schema evolution and
schema versioning. Both version management and schema evolution/versioning have
to cope with changes either of data values or of data descriptions. These tasks are com-
monly referred to as change management [Joseph91]. In the following, the require-
ments of both aspects of change management will be discussed in turn.
Version management makes it possible to keep track of evolving data values, i.e., the values
of the attributes, in a controlled way. Versioning of data is very important in the course
of the design process of product configurations. The design process is highly interac-
tive and consists of several abstraction levels. Each level introduces new constraints to
be satisfied, thus causing different versions of the design object. A version represents a
meaningful snapshot of a design object at a point in time [Katz90]. Versions of the
same design object may exist either in sequence, as revisions, or in parallel, as alternatives. These are also referred to as linear and branching versions. Revisions represent improvements of a specific version. Alternatives describe different ways of manipulating objects that achieve the same result. Revisions and alternatives may be combined in any order. Together these representations form a version tree where a node is linked to
its parent node via a derivedFrom-relationship (= versionOf-relationship). The manip-
ulation operations of the version tree include the merge of branching versions and the
way the version tree is referenced by other objects. Version merging is imperative dur-
ing the design phase when the versions of subunits have reached a stable state and
have to be put together to form a consistent design object. Thus, a version tree
becomes a directed acyclic version graph. Referencing versioned objects addresses the
problem of version states. Let us again consider the design process. It can be divided
into several phases. During the initial phase the designers tend to experiment with
design options; the versions are in a transient state where they may be updated and
deleted. Later, the versions of some design object may turn into a more stable or work-
ing state where they can only be deleted but not updated any longer. Finally, when the
design is finished, a stable or released version of the design object which can neither
be deleted nor updated is available for further processing.
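The transient/working/released progression amounts to a small state machine governing which operations a version still admits. A minimal sketch, with the class and its rules invented to match the states described above:

```python
# Sketch of version states: 'transient' (update and delete allowed),
# 'working' (delete only), 'released' (immutable). The class is invented.
class Version:
    def __init__(self, data, parent=None):
        self.data = data
        self.parent = parent       # derivedFrom-relationship in the version tree
        self.state = "transient"

    def update(self, data):
        if self.state != "transient":
            raise PermissionError("only transient versions may be updated")
        self.data = data

    def promote(self):             # transient -> working -> released
        order = ["transient", "working", "released"]
        self.state = order[min(order.index(self.state) + 1, 2)]

v1 = Version({"teeth": 20})
v2 = Version({"teeth": 24}, parent=v1)   # revision derived from v1
v2.update({"teeth": 22})                 # allowed: still transient
v2.promote()                             # now a working version
try:
    v2.update({"teeth": 25})
except PermissionError as e:
    print(e)                             # updates are no longer allowed
```

A full implementation would add a delete check (forbidden once released) and the check-out/check-in moves between local and group databases described next.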
Consider the development of an engine. Different designers may develop distinct
approaches for the transmission of the engine. Several designer groups develop their
own solutions to that problem. Each designer starts off with a requirement specification of the transmission element, and an initial design is checked out from the
group database. During the design process the versions evolve from a transient state to
a more stable state. Versions reaching a state that could be advantageous for other
design groups are promoted to working versions and are added to the version graph as
alternative versions. This is performed with a check-in of the design object from the
local database to the group database. Finally, a design alternative from all alternative
versions is selected as released version. The need for version management is not only
limited to engineering design tasks. Manufacturing tasks and group work in office
information systems may also require version management. An example of the former
application are versions of production plans, an example of the latter is cooperative
editing of documents.
Let us now turn to changes in the description of the data in the database, to schema
evolution and schema versioning. The database schema describes the structure and
behavior of the entities in a database, also referred to as the description of a set of
object types. Changes to the schema description may occur at any time. Consider an
engineering environment; usually, the schema descriptions are likely to be modified as
soon as designers arrive at a better understanding of their problem. Attribute names
and attribute domains may change, the structure of composite objects may change,
new attributes may be added, and existing attributes may become obsolete. An impor-
tant issue is that the DBS must be able to check and resolve inconsistencies due to
schema changes dynamically, i.e., without database shutdown. The database services
must be available during the reorganization of the database [Banerjee87, Nguyen89,
Zicari91]. The reorganization of the database, i.e., of the stored data, to conform to the
new schema can be achieved in three ways: (a) conversion, i.e., the objects are imme-
diately changed to follow the new schema, (b) screening, i.e., the changes are deferred
to the time when the data is accessed, and (c) schema versioning. In the first approach the content of the database is converted immediately after a schema change has occurred. This implies that the database is not available while the conversion takes place, which might not be desirable. The second approach starts the conversion
only when the data is accessed and thus, does not require a database shutdown.
Schema versioning [Skarra90] is the most flexible way to cope with schema changes.
In addition to the changed schema the old schema is available. Existing entities which
were created within the old schema need not be converted but may be processed as
usual. Schema versioning might also build up a version tree or a version graph.
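The screening strategy, where objects created under an old schema are converted lazily on access, can be sketched as follows. This is an illustrative sketch only; all names (SchemaVersion, Store, the upgrade functions) are hypothetical and not taken from any of the systems cited above.

```python
# Sketch of the "screening" strategy: objects created under an old schema
# version are converted only when they are accessed, so no database
# shutdown is needed after a schema change. All names are illustrative.

class SchemaVersion:
    def __init__(self, number, upgrade=None):
        self.number = number    # version identifier (1, 2, ...)
        self.upgrade = upgrade  # converts object data from version
                                # number-1 to this version

class Store:
    def __init__(self):
        self.versions = []      # schema history, oldest first
        self.objects = {}       # oid -> (schema version number, data)

    def add_version(self, upgrade=None):
        self.versions.append(SchemaVersion(len(self.versions) + 1, upgrade))

    def put(self, oid, data):
        # New objects are always created under the current schema.
        self.objects[oid] = (self.versions[-1].number, data)

    def get(self, oid):
        # Screening: bring the object up to the current schema on access,
        # applying all pending upgrade steps in order.
        version, data = self.objects[oid]
        for v in self.versions[version:]:
            data = v.upgrade(data)
            version = v.number
        self.objects[oid] = (version, data)
        return data

store = Store()
store.add_version()                             # schema version 1
store.put("p1", {"name": "axle", "weight": 4})
# Schema change: attribute "weight" is renamed to "mass" (version 2).
store.add_version(lambda d: {"name": d["name"], "mass": d.pop("weight")})
print(store.get("p1"))                          # converted only now
```

Schema versioning would instead keep version 1 alongside version 2, so that objects stored under the old schema could still be processed without any conversion.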
Dynamic schema evolution and schema versioning must be supported in CIM
applications. Consider the example of the Leitstand toolkit with an underlying database system, as developed within KBL [Adelsberger92]. The Leitstand toolkit consists of a set of generic classes implementing an electronic planning board. It is operational, but not yet customized to specific user needs. When working with the Leitstand toolkit at the customer’s site it may become necessary to change and extend the existing schema to make it more suitable. It should be possible to carry out such changes not only at the customer’s site but even during normal operation. Alternative approaches are too time-consuming and may be unacceptable for the production process.
Interfaces
So far, we have discussed querying and manipulation of data which has been defined using the DDL of the underlying database system. However, it is of equal importance for CIM applications that information be transferred automatically between the particular format of the underlying database system and other data formats. On the one hand, this requirement is due to the various software products used within the very same manufacturing enterprise, which employ syntactically and semantically different data formats for the same information. On the other hand, it is due to information coming from outside the system, for example from suppliers of raw material and intermediate workpieces. Up to now, such information has had to be processed manually before being stored in the database system.
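The benefit of routing all exchange through one neutral representation can be sketched as follows: with n tools, pairwise exchange requires on the order of n*(n-1) translators, whereas a neutral format requires only 2n (one importer and one exporter per tool). The format names and field names below are purely hypothetical.

```python
# Sketch of exchange via a neutral format: each tool needs only one
# translator to and one from the neutral representation, instead of a
# pairwise translator for every other tool. All names are illustrative.

NEUTRAL_FIELDS = ("part_id", "material", "mass_kg")

def cad_to_neutral(record):
    # Hypothetical CAD export record: {"id": ..., "mat": ..., "weight": ...}
    return {"part_id": record["id"],
            "material": record["mat"],
            "mass_kg": record["weight"]}

def neutral_to_db(neutral):
    # Hypothetical database import expecting exactly the neutral fields.
    assert set(neutral) == set(NEUTRAL_FIELDS)
    return tuple(neutral[f] for f in NEUTRAL_FIELDS)

row = neutral_to_db(cad_to_neutral({"id": "p7", "mat": "steel", "weight": 1.5}))
print(row)  # prints ('p7', 'steel', 1.5)
```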
Considering the above-mentioned problems of translation from and to various data formats, it would be highly desirable to work with one standard neutral format used by every software product to communicate its data. One of the mainstream ISO standardization efforts in this area is STEP, the STandard for the Exchange of Product model data [Trapp93]. Within STEP, the specification language Express was proposed for the formal specification of product data [Schenk90]. Express already provides features of an object-oriented specification language in terms of user-defined types, single and multiple inheritance, and local and global rules. Local rules are defined for the objects of a single object type (called entity type in Express), whereas global rules may incorporate objects of several types. Extensions to Express, for example to incorporate operations as part of the object type definition, are currently under investigation and have also been proposed by the recently finished ESPRIT project IMPPACT (Integrated Modelling of Products and Processes using Advanced Computer Technologies [Bjørke92]).
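The distinction between local and global rules can be illustrated with a small sketch. Since Express itself is a specification language, the sketch below emulates the two kinds of rules in Python; the entity type Workpiece, its attributes, and both rules are hypothetical examples, not taken from any STEP schema.

```python
# Sketch of Express-style local and global rules. A local rule (an Express
# WHERE clause) constrains a single instance of one entity type; a global
# rule (Express RULE ... FOR ...) ranges over the instances of one or more
# entity types. All names here are illustrative.

class Workpiece:
    def __init__(self, ident, weight):
        self.ident = ident
        self.weight = weight

    def local_rules_hold(self):
        # Local rule, checked per instance,
        # like "WHERE positive_weight : weight > 0" in Express.
        return self.weight > 0

def unique_idents(workpieces):
    # Global rule, checked over the whole extent of the entity type,
    # like "RULE unique_ids FOR (workpiece); ..." in Express.
    idents = [w.ident for w in workpieces]
    return len(idents) == len(set(idents))

parts = [Workpiece("w1", 4.2), Workpiece("w2", 0.7)]
assert all(w.local_rules_hold() for w in parts)   # each instance valid
assert unique_idents(parts)                       # extent-wide constraint
```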
Database systems for CIM applications should support at least the import and export of STEP/Express data descriptions to justify their role as a common and integrating information store. The question of interfaces is closely related to that of integration, which will be discussed in the next section.
Acknowledgment
The authors are grateful to Michael Schrefl for developing the example depicted in
Figure 1.
References
[Adelsberger92] H. Adelsberger et al.; The Concept of Knowledge-Based Leit-
stand - Summary of First Results; Proc. of the 8th CIM-
Europe Annual Conference; C. O’Brien, P. MacConaill, and
W. Van Puymbroeck (eds.); Springer Verlag; Birmingham,
UK; 1992
[Ahad92] R. Ahad, D. Dedo; OpenODB from Hewlett-Packard: a Com-
mercial Object-Oriented Database Management System; Jour-
nal of Object-Oriented Programming (JOOP); Feb. 1992
[Ahmed92] S. Ahmed, A. Wong, D. Sriram, R. Logcher; Object-oriented
database management systems for engineering: A Compari-
son; Journal of Object-Oriented Programming (JOOP); Jun.
1992
[Andrews91] T. Andrews, C. Harris, K. Sinkel; Ontos: A Persistent Data-
base for C++; Object-Oriented Databases with Applications to
CASE, Networks, and VLSI CAD; Prentice-Hall; in:
[Gupta91]; 1991
[Atkinson89] M. Atkinson, F. Bancilhon, D. DeWitt, K. Dittrich, D. Maier,
S. Zdonik; The Object-Oriented Database System Manifesto;
Proc. First Int. Conf. on Deductive and Object-Oriented Data-
bases; Dec. 1989
[Bancilhon89] F. Bancilhon; Query Languages for Object-Oriented Database
Systems: Analysis and a Proposal; Proc. of the GI Conf. on
Database Systems for Office, Engineering, and Scientific
Applications; Springer IFB 204; Mar. 1989
[Banerjee87] J. Banerjee, W. Kim, H. Kim, H. Korth; Semantics and imple-
mentation of schema evolution in object-oriented databases;
SIGMOD Record, Vol. 16, No. 3; Dec. 1987
[Barghouti91] N. Barghouti, G. Kaiser; Concurrency Control in Advanced
Database Applications; ACM Computing Surveys, Vol. 23,
No. 3; Sept. 1991
[Batini92] C. Batini, S. Ceri, S. Navathe; Conceptual Database Design: An Entity-Relationship Approach; Benjamin/Cummings Publ.; 1992
[Bedworth91] D. Bedworth, M. Henderson, P. Wolfe; Computer-Integrated
Design and Manufacturing; Mechanical Engineering Series,
McGraw-Hill NY; 1991
[Bernstein87] P. Bernstein, V. Hadzilacos, N. Goodman; Concurrency Con-
trol and Recovery in Database Systems; Addison-Wesley,
Reading MA; 1987
[Bjørke92] Ø. Bjørke, O. Myklebust (eds.); IMPPACT - Integrated Mod-
elling of Products and Processes using Advanced Computer
Technologies; TAPIR Publishers; 1992
[Butterworth91] P. Butterworth, A. Otis, J. Stein; The GemStone Object Data-
base Management System; Communications of the ACM; Vol.
34, No. 10; Oct. 1991
[Cattell91] R. Cattell; Object data management: object-oriented and
extended database systems; Addison-Wesley, Reading MA;
1991
[Cattell92] R. Cattell, J. Skeen; Object Operations Benchmark; ACM
Transactions on Database Systems (TODS); Vol. 17, No. 1;
1992
[Codd70] E. Codd; A Relational Model of Data for Large Shared Data Banks; Communications of the ACM; Vol. 13, No. 6; 1970
[Dayal88] U. Dayal et al.; The HiPAC Project: Combining Active Databases and Timing Constraints; SIGMOD Record, Vol. 17, No. 1; 1988
[Deux91] O. Deux; The O2 System; Communications of the ACM; Vol.
34, No. 10; Oct. 1991
[Dittrich90] K. Dittrich; Object-Oriented Database Systems: The Next
Miles of the Marathon; Information Systems; Vol. 15, No. 1;
1990
[Elmasri89] R. Elmasri, S. Navathe; Fundamentals of Database Systems;
Benjamin/Cummings; Redwood City CA; 1989
[Encarnaçao90] J. Encarnaçao, P. Lockemann; Engineering Databases, Con-
necting Islands of Automation Through Databases; Springer
Verlag; 1990
[Fishman89] D. Fishman et al.; Overview of the Iris DBMS; in: W. Kim, F.
Lochovsky (eds.); Object-Oriented Concepts, Databases, and
Applications; ACM Press; 1989
[Garcia83] H. Garcia-Molina; Using Semantic Knowledge for Transac-
tion Processing in a Distributed Database; ACM Transactions
on Database Systems (TODS), Vol. 8, No. 2; Jun. 1983
[Garza88] J. Garza, W. Kim; Transaction Management in an Object-Ori-
ented Database System; Proc. of the ACM SIGMOD Conf.;
1988
[Gupta91] R. Gupta, E. Horowitz (eds.); Object-Oriented Databases with
Applications to CASE, Networks, and VLSI CAD; Prentice-
Hall; 1991
[Gray93] J. Gray, A. Reuter; Transaction Processing: Concepts and
Techniques; Morgan Kaufmann; 1993
[Hammer91] H. Hammer; CAM-Konzepte - am Beispiel flexibler Ferti-
gungssysteme; in: CIM Handbuch; U. Geitner (ed.); Vieweg
Verlag; 1991
[Harrington73] J. Harrington; Computer Integrated Manufacturing; Krieger
Publishing, Malabar Fla., 1973
[Hars92] A. Hars et al.; Reference Models for Data Engineering in
CIM; Proc. of the 8th CIM-Europe Annual Conference; C.
O’Brien, P. MacConaill, and W. Van Puymbroeck (eds.);
Springer Verlag; Birmingham, UK; 1992
[Härder83] T. Härder, A. Reuter; Principles of Transaction-Oriented Database Recovery; ACM Computing Surveys, Vol. 15, No. 4; Dec. 1983
[Itasca92] ITASCA Systems, Inc.; Technical Summary for Release 2.1;
1992
[Joseph91] J. Joseph, S. Thatte, C. Thompson, D. Wells; Object-Oriented
Databases: Design and Implementation; Proc. of the IEEE,
Vol. 79, No. 1; Jan. 1991
[Kappel93] G. Kappel, S. Rausch-Schott, W. Retschitzegger, S. Vieweg;
TriGS - Making a Passive Object-Oriented Database System
Active; accepted for publication in: Journal of Object-Ori-
ented Programming (JOOP); 1993
[Katz90] R. Katz; Toward a Unified Framework for Version Modeling
in Engineering Databases; ACM Computing Surveys, Vol. 22,
No. 4; Dec. 1990
[Kemper93] A. Kemper, G. Moerkotte; Object-Oriented Information Man-
agement in Engineering Applications; Prentice-Hall; 1993
[Ketabchi87] M. Ketabchi, V. Berzins; Modeling and Managing CAD Data-
bases; IEEE Computer, Vol. 20, No. 2; Feb. 1987
[Kim89] W. Kim, E. Bertino, J.F. Garza; Composite Objects Revisited;
Proc. of the ACM SIGMOD Conf.; 1989
[Kim90] W. Kim, N. Ballou, H. Chou, J. Garza, D. Woelk; Architecture
of the ORION Next-Generation Database System; IEEE
Transactions on Knowledge and Data Engineering; Vol. 2, No.
1; Mar. 1990
[Lamb91] C. Lamb, G. Landis, J. Orenstein, D. Weinreb; The ObjectStore Database System; Communications of the ACM; Vol. 34, No. 10; Oct. 1991
[Lohman91] G. Lohman et al.; Extensions to Starburst: Objects, Types, Functions, and Rules; Communications of the ACM, Vol. 34, No. 10; Oct. 1991
[Melton93] J. Melton, A.R. Simon; Understanding the new SQL, Morgan
Kaufmann; 1993
[Nguyen89] G. Nguyen, D. Rieu; Schema evolution in object-oriented
database systems; Data & Knowledge Engineering, Vol. 4,
No. 1; Jul. 1989
[Objectivity91] Objectivity, Inc.; Objectivity/DB System Overview; Version
1.2; Jan. 1991
[Ovum91] J. Jeffcoate, C. Guilfoyle; Databases for Objects: The Market
Opportunity; Ovum Ltd.; 1991
[Özsu91] T. Özsu, P. Valduriez; Principles of Distributed Database Systems; Prentice-Hall International, Inc.; 1991
[Rao91] M. Rao, J. Luxhoj; Integration framework for intelligent man-
ufacturing processes; Journal of Intelligent Manufacturing;
No. 3; 1991
[Rumbaugh91] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, W. Lorensen;
Object-Oriented Modelling and Design; Prentice-Hall; 1991
[Scheer90] A. Scheer; Wirtschaftsinformatik; Springer Verlag; 1990
[Scheer92] A. Scheer, A. Hars, Extending Data Modeling to Cover the
Whole Enterprise; Communications of the ACM, Vol. 35, No.
9; 1992
[Schenk90] D. Schenk: EXPRESS Language Reference Manual, ISO
TC184/SC4/WG1 N466; 1990
[Shasha92] D. Shasha; Database Tuning, A principled approach; Prentice-
Hall; 1992
[Sheth90] A. Sheth, J. Larson; Federated Database Systems for Manag-
ing Distributed, Heterogeneous, and Autonomous Databases;
ACM Computing Surveys; Vol. 22, No. 3; Sept. 1990
[Sheth92] A. Sheth, H. Marcus; Schema Analysis and Integration: Meth-
odology, Techniques, and Prototype Toolkit; Technical Memo-
randum TM-STS-019981/1, Bellcore; Mar. 1992
[Skarra90] A. Skarra, S. Zdonik; Type Evolution in an Object-Oriented
Database; in: A. Cardenas, D. McLeod (eds.); Research Foun-
dations in object-oriented and semantic database systems;
Data and Knowledge Base Systems, Prentice Hall; 1990
[Soloviev92] V. Soloviev; An Overview of Three Commercial Object-Oriented Database Management Systems: ONTOS, ObjectStore, and O2; SIGMOD Record, Vol. 21, No. 1; Mar. 1992
[Stankovic88] J. Stankovic; Real-Time Computing Systems: The Next Gen-
eration; in: J. Stankovic, K. Ramamritham (eds.); Hard Real-
Time Systems; IEEE Computer Society Press; 1988
[Stonebraker91] M. Stonebraker, G. Kemnitz; The Postgres Next-Generation
Database Management System; Communications of the ACM,
Vol. 34, No. 10; Oct. 1991
[Tanenbaum89] A. Tanenbaum; Computer Networks; 2nd edition; Prentice-
Hall; 1989
[Thomas90] G. Thomas, G. Thompson, C. Chung, F. Carter, M. Templeton,
S. Fox, B. Hartman; Heterogeneous Distributed Database Sys-
tems for Production Use; ACM Computing Surveys, Vol. 22,
No. 3; Sept. 1990
[Traiger82] I. Traiger, J. Gray, C. Galtieri, B. Lindsay; Transactions and
Consistency in Distributed Database Systems; ACM Transac-
tions on Database Systems (TODS), Vol. 7, No. 3; Sept. 1982
[Trapp93] G. Trapp; The emerging STEP standard for production-model
data exchange; IEEE Computer; Vol. 26, No. 2; Feb. 1993
[Tsichritzis78] D. Tsichritzis, A. Klug; The ANSI/X3/SPARC DBMS Framework: Report of the Study Group on Database Management Systems; Information Systems, Vol. 3; 1978
[Ullman88] J. Ullman; Principles of Database and Knowledge-Base Sys-
tems; Vol. I; Computer Society Press, Inc.; 1988
[Versant92] Versant Object Technology; System Reference Manual; Ver-
sant Version 1.7.3; Jan. 1992
[Winslett92] M. Winslett, I. Chu; Database Management Systems for
ECAD Applications: Architecture and Performance; NSF
Design and Manufacturing Conf., Atlanta; 1992
[Zicari91] R. Zicari; A Framework for Schema Updates in An Object-
Oriented Database System; Proc. of the IEEE Data Engineer-
ing Conf.; 1991