
System Databases

SQL Server includes the following system databases:

master Database: Records all the system-level information for an instance of SQL Server.

msdb Database: Is used by SQL Server Agent for scheduling alerts and jobs.

model Database: Is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, collation, recovery model, and other database options, are applied to any databases created afterward.

Resource Database: Is a read-only database that contains system objects that are included with SQL Server. System objects are physically persisted in the Resource database, but they logically appear in the sys schema of every database.

tempdb Database: Is a workspace for holding temporary objects or intermediate result sets.

Modifying System Databases: SQL Server does not support users directly updating the information in system objects such as system tables, system stored procedures, and catalog views. Instead, SQL Server provides a complete set of administrative tools that let users fully administer their system and manage all users and objects in a database. These include the following:

- Administration utilities, such as SQL Server Management Studio.
- The SQL-SMO API, which lets programmers include complete functionality for administering SQL Server in their applications.
- Transact-SQL scripts and stored procedures, which can use system stored procedures and Transact-SQL DDL statements.

These tools shield applications from changes in the system objects. For example, SQL Server sometimes has to change the system tables in new versions to support new functionality being added in that version. Applications issuing SELECT statements that directly reference system tables are frequently dependent on the old format of the system tables, so sites may not be able to upgrade to a new version of SQL Server until they have rewritten applications that select from system tables. SQL Server considers the system stored procedures, DDL, and SQL-SMO to be published interfaces, and works to maintain the backward compatibility of these interfaces. SQL Server does not support triggers defined on the system tables, because they might modify the operation of the system.

Viewing System Database Data: You should not code Transact-SQL statements that directly query the system tables, unless that is the only way to obtain the information required by the application. Instead, applications should obtain catalog and system information by using the following:

- System catalog views
- SQL-SMO
- The Windows Management Instrumentation (WMI) interface
- Catalog functions, methods, attributes, or properties of the data API used in the application, such as ADO, OLE DB, or ODBC
- Transact-SQL system stored procedures and built-in functions
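For instance, rather than selecting from system tables directly, an application can read the documented catalog views. A minimal sketch follows; the column lists shown are only a sample of what the views expose:

SELECT name, database_id, create_date   -- databases on the instance, via a documented catalog view
FROM sys.databases;

SELECT name, type_desc                   -- user tables in the current database
FROM sys.objects
WHERE type = 'U';                        -- 'U' = user table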

Creating the Database


Our first step is to create the database itself. Many database management systems offer a series of options to customize database parameters at this step, but our database only permits the simple creation of a database. As with all of our commands, you may wish to consult the documentation for your DBMS to determine whether any advanced parameters supported by your specific system meet your needs. Let's use the CREATE DATABASE command to set up our database:

CREATE DATABASE personnel;

Take special note of the capitalization used in the example above. It's common practice among SQL programmers to use all capital letters for SQL keywords such as "CREATE" and "DATABASE" while using all lowercase letters for user-defined names like the "personnel" database name. These conventions provide for easy readability.
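As an illustration of the advanced parameters some systems support, SQL Server lets you specify data and log files when creating a database. This is only a sketch; the logical names, file paths, and sizes below are assumptions chosen for illustration:

CREATE DATABASE personnel
ON PRIMARY
    (NAME = personnel_data,                    -- logical data file name (assumed)
     FILENAME = 'C:\SQLData\personnel.mdf',    -- physical path (illustrative)
     SIZE = 10MB,
     FILEGROWTH = 5MB)
LOG ON
    (NAME = personnel_log,
     FILENAME = 'C:\SQLData\personnel_log.ldf',
     SIZE = 5MB,
     FILEGROWTH = 5MB);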

Creating Tables
Our first table consists of the personal data for each employee of our company. We need to include each employee's name, ID, and manager. It's good design practice to separate the last and first names into separate fields to simplify data searching and sorting in the future. Also, we'll keep track of each employee's manager by inserting a reference to the manager's employee ID in each employee record.

Let's first take a look at the desired employee table. The ReportsTo attribute stores the manager ID for each employee. From the sample records shown, we can determine that Sue Scampi is the manager of both Tom Kendall and John Smith. However, there is no information in the database on Sue's manager, as indicated by the NULL entry in her row.

Now we can use SQL to create the table in our personnel database. Before we do so, let's ensure that we are in the correct database by issuing a USE command:

USE personnel;

(Alternatively, many systems let you qualify the table name with the database name, such as personnel.employees, rather than switching databases first.) Now we can take a look at the SQL command used to create our employees table:

CREATE TABLE employees
    (employeeid INTEGER NOT NULL,
     lastname VARCHAR(25) NOT NULL,
     firstname VARCHAR(25) NOT NULL,
     reportsto INTEGER NULL);

As with our previous example, note that programming convention dictates that we use all capital letters for SQL keywords and lowercase letters for user-named columns and tables. The command above may seem confusing at first, but there's actually a simple structure behind it. Here's a generalized view that might clear things up a bit:

CREATE TABLE table_name
    (attribute_name datatype options,
     ...,
     attribute_name datatype options);

Attributes and Data Types

In the previous example, the table name is employees and we include four attributes: employeeid, lastname, firstname, and reportsto. The datatype indicates the type of information we wish to store in each field. The employee ID is a simple integer, so we'll use the INTEGER datatype for both the employeeid field and the reportsto field. The employee names will be character strings of variable length, and we don't expect any employee to have a first or last name longer than 25 characters, so we'll use the VARCHAR(25) type for these fields.

NULL Values

We can also specify either NULL or NOT NULL in the options field of the CREATE statement. This simply tells the database whether NULL (empty) values are allowed for that attribute when adding rows to the database. In our example, the HR department requires that an employee ID and complete name be stored for each employee. However, not every employee has a manager -- the CEO reports to nobody! -- so we allow NULL entries in that field. Note that NULL is the default value; omitting this option will implicitly allow NULL values for an attribute.
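To see the NULL behavior in practice, here is a sketch of how the sample records might be inserted; the employee ID numbers are assumed purely for illustration:

INSERT INTO employees (employeeid, lastname, firstname, reportsto)
    VALUES (1, 'Scampi', 'Sue', NULL);        -- the manager; reportsto is NULL
INSERT INTO employees (employeeid, lastname, firstname, reportsto)
    VALUES (2, 'Kendall', 'Tom', 1);          -- reports to Sue (employeeid 1)
INSERT INTO employees (employeeid, lastname, firstname, reportsto)
    VALUES (3, 'Smith', 'John', 1);           -- reports to Sue (employeeid 1)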

Building the Remaining Tables

Now let's take a look at the territories table. From a quick look at this data, it appears that we need to store an integer and two variable-length strings. As with our previous example, we don't expect the region ID to consume more than 25 characters. However, some of our territories have longer names, so we'll expand the allowable length of that attribute to 40 characters. Let's look at the corresponding SQL:

CREATE TABLE territories
    (territoryid INTEGER NOT NULL,
     territorydescription VARCHAR(40) NOT NULL,
     regionid VARCHAR(25) NOT NULL);

Finally, we'll use the employeeterritories table to store the relationships between employees and territories. Detailed information on each employee and territory is stored in our previous two tables, so we only need to store the two integer identification numbers in this table. If we need to expand this information, we can use a JOIN in our data selection commands to obtain information from multiple tables. This method of storing data reduces redundancy in our database and ensures optimal use of space on our storage drives. We'll cover the JOIN command in depth in a future tutorial. Here's the SQL code to implement our final table:

CREATE TABLE employeeterritories
    (employeeid INTEGER NOT NULL,
     territoryid INTEGER NOT NULL);

Online Analytical Processing (OLAP)

In computing, online analytical processing, or OLAP, is an approach to swiftly answer multi-dimensional analytical (MDA) queries.[1] OLAP is part of the broader category of business intelligence, which also encompasses relational reporting and data mining.[2] Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM),[3] budgeting and forecasting, financial reporting, and similar areas, with new applications coming up, such as agriculture.[4] The term OLAP was created as a slight modification of the traditional database term OLTP (Online Transaction Processing).

OLAP tools enable users to interactively analyze multidimensional data from multiple perspectives. OLAP consists of three basic analytical operations: consolidation (roll-up), drill-down, and slicing and dicing.[6] Consolidation involves the aggregation of data that can be accumulated and computed in one or more dimensions. For example, all sales offices are rolled up to the sales department or sales division to anticipate sales trends. In contrast, drill-down is a technique that allows users to navigate through the details. For instance, users can access the sales of the individual products that make up a region's sales. Slicing and dicing is a feature whereby users can take out (slice) a specific set of data from the cube and view (dice) the slices from different viewpoints.

Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad hoc queries with a rapid execution time.[7] They borrow aspects of navigational databases, hierarchical databases, and relational databases.

The core of any OLAP system is an OLAP cube (also called a 'multidimensional cube' or a hypercube). It consists of numeric facts called measures which are categorized by dimensions. The cube metadata is typically created from a star schema or snowflake schema of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables.
Each measure can be thought of as having a set of labels, or meta-data associated with it. A dimension is what describes these labels; it provides information about the measure. A simple example would be a cube that contains a store's sales as a measure, and Date/Time as a dimension. Each Sale has a Date/Time label that describes more about that sale. Any number of dimensions can be added to the structure such as Store, Cashier, or Customer by adding a foreign key column to the fact table. This allows an analyst to view the measures along any combination of the dimensions.
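As a concrete sketch of the star-schema layout described above (all table and column names here are hypothetical, chosen only to illustrate the fact/dimension split):

-- Dimension tables describe the labels attached to each measure.
CREATE TABLE dim_date
    (date_id INTEGER NOT NULL PRIMARY KEY,
     calendar_date DATE NOT NULL,
     calendar_year INTEGER NOT NULL);

CREATE TABLE dim_store
    (store_id INTEGER NOT NULL PRIMARY KEY,
     store_name VARCHAR(40) NOT NULL,
     region VARCHAR(25) NOT NULL);

-- The fact table holds the numeric measures, with one foreign key per dimension.
CREATE TABLE fact_sales
    (date_id INTEGER NOT NULL REFERENCES dim_date (date_id),
     store_id INTEGER NOT NULL REFERENCES dim_store (store_id),
     sales_amount DECIMAL(10,2) NOT NULL);

Adding another dimension, such as Cashier or Customer, amounts to creating another dimension table and adding the corresponding foreign key column to fact_sales.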

Types
OLAP systems have traditionally been categorized using the following taxonomy.[14]

Multidimensional (MOLAP): MOLAP is the 'classic' form of OLAP and is sometimes referred to as just OLAP. MOLAP stores data in optimized multi-dimensional array storage, rather than in a relational database. It therefore requires the precomputation and storage of information in the cube - the operation known as processing.

Relational (ROLAP): ROLAP works directly with relational databases. The base data and the dimension tables are stored as relational tables, and new tables are created to hold the aggregated information. It depends on a specialized schema design. This methodology relies on manipulating the data stored in the relational database to give the appearance of traditional OLAP's slicing and dicing functionality. In essence, each action of slicing and dicing is equivalent to adding a WHERE clause to the SQL statement, as sketched below.

Hybrid (HOLAP): There is no clear agreement across the industry as to what constitutes "Hybrid OLAP", except that a database will divide data between relational and specialized storage. For example, for some vendors, a HOLAP database will use relational tables to hold the larger quantities of detailed data, and use specialized storage for at least some aspects of the smaller quantities of more-aggregate or less-detailed data.
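Using the hypothetical fact_sales star schema sketched earlier, a ROLAP engine could express a slice (a single store) combined with a roll-up to yearly totals as an ordinary aggregate query; the table and column names remain illustrative assumptions:

-- Slice: restrict the cube to one store (the WHERE clause).
-- Roll-up: consolidate individual facts into yearly totals (the GROUP BY).
SELECT d.calendar_year,
       SUM(f.sales_amount) AS total_sales
FROM fact_sales f
JOIN dim_date d ON d.date_id = f.date_id
WHERE f.store_id = 42
GROUP BY d.calendar_year;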

Comparison
Each type has certain benefits, although there is disagreement about the specifics of the benefits between providers. Some MOLAP implementations are prone to database explosion, a phenomenon causing vast amounts of storage space to be used by MOLAP databases when certain common conditions are met: high number of dimensions, pre-calculated results and sparse multidimensional data.

MOLAP generally delivers better performance due to specialized indexing and storage optimizations. MOLAP also needs less storage space compared to ROLAP because the specialized storage typically includes compression techniques.

ROLAP is generally more scalable. However, large volume pre-processing is difficult to implement efficiently so it is frequently skipped. ROLAP query performance can therefore suffer tremendously.

Since ROLAP relies more on the database to perform calculations, it has more limitations in the specialized functions it can use.

HOLAP encompasses a range of solutions that attempt to mix the best of ROLAP and MOLAP. It can generally pre-process swiftly, scale well, and offer good function support.

Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions.[1] OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.
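A minimal sketch of such a transaction in SQL, assuming a hypothetical accounts table and withdrawals log, shows the all-or-nothing character of OLTP work:

-- Withdraw 100 from account 1001 as a single atomic unit of work.
BEGIN TRANSACTION;

UPDATE accounts
SET balance = balance - 100
WHERE account_id = 1001
  AND balance >= 100;          -- refuse to overdraw the account

INSERT INTO withdrawals (account_id, amount, withdrawn_at)
VALUES (1001, 100, CURRENT_TIMESTAMP);

COMMIT;                         -- a real application would check the rows affected
                                -- by the UPDATE and ROLLBACK if the withdrawal failed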

Requirement

OLTP is a methodology to provide end users with access to large amounts of data in an intuitive and rapid manner to assist with deductions based on investigative reasoning. Online transaction processing increasingly requires support for transactions that span a network and may include more than one company. For this reason, new online transaction processing software uses client/server processing and brokering software that allows transactions to run on different computer platforms in a network. In large applications, efficient OLTP may depend on sophisticated transaction management software (such as CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database. For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services.

Benefits

Online transaction processing has two key benefits: simplicity and efficiency. Reduced paper trails and faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses.

Disadvantages

As with any information processing system, security and reliability are important considerations. When organizations choose to rely on OLTP, operations can be severely impacted if the transaction system or database is unavailable due to data corruption, systems failure, or network availability issues. Additionally, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis.

Knowledge Assets

Knowledge assets are intellectual capital assets, such as a copyright or a patent, that do or can generate income. Unlike information, knowledge is less tangible and depends on human cognition and awareness. There are several types of knowledge: 'knowing' a fact is little different from 'information', but 'knowing' a skill, or 'knowing' that something might affect market conditions, is something that, despite the attempts of knowledge engineers to codify such knowledge, has an important human dimension. It is some combination of context sensing, personal memory, and cognitive processes. Measuring the knowledge asset, therefore, means putting a value on people, both as individuals and, more importantly, on their collective capability, and on other factors such as the embedded intelligence in an organization's computer systems.

KNOWLEDGE GENERATION

- Information, knowledge, and actions based on experiences, values, and rules
- Conscious and intentional knowledge generation
- Five modes of knowledge generation: Acquisition, Dedicated Resources, Fusion, Adaptation, Knowledge Networking

Process or thing acquisition:

- NIH ("not invented here") syndrome at one extreme; acquiring other firms, practices, and individuals at the other
- Are minds more valuable than their creations?
- Knowledge and talent are not related to degrees
- Valuation of companies is difficult
- Organic connection of knowledge to particular people and environment ("stickiness")
- Ecology of knowledge

KM Technologies
Early KM technologies included online corporate yellow pages as expertise locators and document management systems. Combined with the early development of collaborative technologies (in particular Lotus Notes), KM technologies expanded in the mid-1990s. Subsequent KM efforts leveraged semantic technologies for search and retrieval and the development of e-learning tools for communities of practice. Knowledge management systems can thus be categorized as falling into one or more of the following groups: groupware, document management systems, expert systems, semantic networks, relational and object-oriented databases, simulation tools, and artificial intelligence.[16] (Gupta & Sharma 2004)

More recently, the development of social computing tools (such as bookmarks, blogs, and wikis) has allowed more unstructured, self-governing, or ecosystem approaches to the transfer, capture, and creation of knowledge, including the development of new forms of communities, networks, or matrixed organizations. However, such tools are for the most part still based on text and code, and thus represent explicit knowledge transfer. These tools face challenges in distilling meaningful reusable knowledge and ensuring that their content is transmissible through diverse channels.

Software tools in knowledge management are a collection of technologies and are not necessarily acquired as a single software solution. Furthermore, these knowledge management software tools have the advantage of using the organization's existing information technology infrastructure. Organizations and business decision makers spend a great deal of resources and make significant investments in the latest technology, systems, and infrastructure to support knowledge management. It is imperative that these investments are validated properly, made wisely, and that the most appropriate technologies and software tools are selected or combined to facilitate knowledge management. Knowledge management has also become a cornerstone in emerging business strategies such as Service Lifecycle Management (SLM), with companies increasingly turning to software vendors to enhance their efficiency in industries including, but not limited to, the aviation industry.

Knowledge Utilization

Introduction

While the concept of Knowledge Utilization is well known, it is a Knowledge Management concept that is more difficult to specifically define.

Knowledge Utilization as an Activity

Knowledge Utilization is one of four types of activities integral to Knowledge Management (KM), the other three being Knowledge Creation, Knowledge Retention, and Knowledge Transfer. Successful Knowledge Utilization is necessary because knowledge gained must be applied in order for the organization to successfully close its Knowledge Gaps and to meet organizational goals and objectives.

Knowledge Utilization Cycle Viewpoint

The knowledge cycle consists of three defined phases: knowledge creation, diffusion, and utilization. Knowledge Utilization in this cycle is viewed as something different from Knowledge Diffusion. Knowledge Utilization tends to be viewed as more of a linear process, while Knowledge Diffusion is viewed as non-linear. Diffusion refers more to the pushing of knowledge throughout an organization, while the focus of the utilization cycle involves interventions and decisions related to what knowledge should be utilized and how best to do that.

One additional impact upon Knowledge Utilization can be found in understanding the stages of Knowledge Utilization as defined by Knott and Wildavsky (1980): Reception and Cognition. Reception is important to ensure that those in the organization are able to receive (through transfer, creation, or diffusion) knowledge critical to closing Knowledge Gaps. Cognition is important to ensure that those in the organization are able to understand and utilize the knowledge, without which innovation would not be possible.

KNOWLEDGE STORAGE

Methods of storing and sharing this intellectual capital include searchable knowledge bases, e-learning tools, other types of databases, enterprise portals, groupware tools, and email.
