
CBIP

IS Core module


Contents

Articles

Information systems
Chief information officer
Information technology management
Information technology audit
Corporate governance of information technology
Systems development life-cycle
End-user computing
Middleware
Enterprise content management
Knowledge management
Expert system
Reference data
Master data
Conceptual schema
Entity–relationship model
Object-oriented modeling
Logical data model
RDF query language
Web Ontology Language
Enterprise architecture
Segment architecture
Solution architecture
Service-oriented architecture
Zachman Framework
The Open Group Architecture Framework
Federal enterprise architecture
Operating system
OSI model
Virtual private network
Semantic Web
COBIT
Information Technology Infrastructure Library
Project management
System testing
Unit testing
Regression testing
Acceptance testing
Software testing
Business process modeling
Joint application design
Software development process
Agile software development

References

Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses

License

Information systems

Information systems (IS) is the study of complementary networks of hardware and software that people and organizations use to collect, filter, process, create, and distribute data.[1][2][3][4] The study bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline.[5][6][7][8][9][10][11][12][13] Computer Information Systems (CIS) is a field that studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society,[14][15][16] whereas IS emphasizes functionality over design.[17]

The history of information systems coincides with the history of computer science, which began long before the modern discipline of computer science emerged in the twentieth century.[18] Regarding the circulation of information and ideas, numerous legacy information systems still exist today and are continuously updated to promote ethnographic approaches, to ensure data integrity, and to improve the social effectiveness and efficiency of the whole process.[19] In general, information systems focus on processing information within organizations, especially within business enterprises, and on sharing the benefits with modern society.[20]

Overview
Silver et al. (1995) provided two views on IS: an IS-centered view that includes software, hardware, data, people, and procedures, and a second, managerial view that includes people, business processes and information systems. There are various types of information systems, for example: transaction processing systems, office systems, decision support systems, knowledge management systems, database management systems, and office information systems. Critical to most information systems are information technologies, which are typically designed to enable humans to perform tasks for which the human brain is not well suited, such as handling large amounts of information, performing complex calculations, and controlling many simultaneous processes. Information technologies are a very important and malleable resource available to executives.[21] Many companies have created the position of Chief Information Officer (CIO), who sits on the executive board with the Chief Executive Officer (CEO), Chief Financial Officer (CFO), Chief Operating Officer (COO) and Chief Technical Officer (CTO). The CTO may also serve as CIO, and vice versa. The Chief Information Security Officer (CISO) focuses on information security management.

The Discipline of Information Systems


Several IS scholars have debated the nature and foundations of Information Systems, which has its roots in other reference disciplines such as Computer Science, Engineering, Mathematics, Management Science, Cybernetics, and others.[22][23][24][25] Information systems can also be defined as a collection of hardware, software, data, people and procedures that work together to produce quality information.

The Impact on Economic Models


Microeconomic theory model
Transaction cost theory
Agency theory


Differentiating IS from Related Disciplines


Similar to computer science, other disciplines can be seen as both related disciplines and foundation disciplines of IS. The domain of study of IS involves the study of theories and practices related to the social and technological phenomena which determine the development, use and effects of information systems in organizations and society.[26] But while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose and orientation of their activities.[27]
In a broad scope, the term information systems (IS) is a scientific field of study that addresses the range of strategic, managerial and operational activities involved in the gathering, processing, storing, distributing and use of information, and its associated technologies, in society and organizations.[28] The term information systems is also used to describe an organizational function that applies IS knowledge in industry, government agencies and not-for-profit organizations.[29] Information systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is not only the technology an organization uses, but also the way in which the organization interacts with the technology and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the processes component.

[Figure: Information Systems relationship to Information Technology, Computer Science, Information Science, and Business.]

Types of information systems


The 'classic' view of information systems found in the textbooks[30] of the 1980s was of a pyramid of systems that reflected the hierarchy of the organization, usually transaction processing systems at the bottom of the pyramid, followed by management information systems, decision support systems, and ending with executive information systems at the top. Although the pyramid model remains useful, since it was first formulated a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model. Some examples of such systems are:

data warehouses
enterprise resource planning
enterprise systems
expert systems
geographic information systems
global information systems
office automation.

[Figure: A four level pyramid model of different types of information systems based on the different levels of hierarchy in an organization.]

A computer-based information system is essentially a system that uses computer technology to carry out some or all of its planned tasks. The basic components of a computer-based information system are listed below; the first four are known as information technology components:

Hardware: the devices, such as the monitor, processor, printer and keyboard, that work together to accept, process and display data and information.
Software: the programs that allow the hardware to process the data.
Database: the collection of associated files or tables containing related data.
Network: a connecting system that allows diverse computers to distribute resources.
Procedures: the commands for combining the components above to process information and produce the preferred output.

In the end, it is people who use this hardware and software to interface with the system and make use of its output. The first four components (hardware, software, database and network) make up what is known as the information technology platform. Information technology workers can then use these components to create information systems that watch over safety measures, risk and the management of data. These activities are known as information technology services.[31]
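The component view above can be made concrete in code. The following Python sketch is purely illustrative and not part of the original article; every class, field and procedure name in it is hypothetical, chosen only to show how the five components and the people-defined procedures might fit together.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ComputerBasedIS:
    hardware: List[str]        # devices that accept, process and display data
    software: List[str]        # programs that let the hardware process the data
    database: Dict[str, list]  # associated files/tables of related data
    network: List[str]         # connections that let computers share resources
    # procedures combine the components above to produce the preferred output
    procedures: List[Callable[["ComputerBasedIS"], None]] = field(default_factory=list)

    def run_procedures(self) -> None:
        for step in self.procedures:
            step(self)

def report_orders(system: "ComputerBasedIS") -> None:
    # a people-defined procedure: read the database and produce output
    print("orders on file:", len(system.database.get("orders", [])))

cbis = ComputerBasedIS(
    hardware=["monitor", "processor", "printer", "keyboard"],
    software=["order-entry program"],
    database={"orders": [{"id": 1, "item": "widget"}]},
    network=["office LAN"],
    procedures=[report_orders],
)
cbis.run_procedures()  # prints: orders on file: 1

The first four fields correspond to the information technology platform described above; the procedures list stands in for the human-defined steps that turn the platform into a working information system.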

Information systems career pathways


There are a number of different areas of work in information systems:

Information systems strategy
Information systems management
Information systems development
Information systems iteration
Information systems organization

There are a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue."[32]

Information systems development


Information technology departments in larger organizations tend to strongly influence the development, use, and application of information technology in the organization, which may be a business or corporation. A series of methodologies and processes can be used to develop and use an information system. Many developers have turned to a more engineering-oriented approach such as the systems development life cycle (SDLC), a systematic procedure of developing an information system through stages that occur in sequence. An information system can be developed in house (within the organization) or outsourced; this can be accomplished by outsourcing certain components or the entire system.[33] A specific case is the geographical distribution of the development team (offshoring, global information system).

A computer-based information system, following a definition of Langefors,[34] is "a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions", which can be formulated as a generalized information systems design mathematical program.

Geographic information systems, land information systems and disaster information systems are also some of the emerging information systems, but they can be broadly considered as spatial information systems. System development is done in stages (a schematic sketch of the sequence follows the list):

Problem recognition and specification
Information gathering
Requirements specification for the new system
System design
System construction
System implementation
Review and maintenance.[35]
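As a rough illustration of the strictly sequential character of the SDLC, the stages can be modeled as an ordered enumeration. This is a minimal sketch, assuming the stages are exactly the seven listed above; the stage names come from the list, everything else is invented for illustration.

from enum import Enum, auto

class SDLCStage(Enum):
    PROBLEM_RECOGNITION = auto()
    INFORMATION_GATHERING = auto()
    REQUIREMENTS_SPECIFICATION = auto()
    SYSTEM_DESIGN = auto()
    SYSTEM_CONSTRUCTION = auto()
    SYSTEM_IMPLEMENTATION = auto()
    REVIEW_AND_MAINTENANCE = auto()

def next_stage(current: SDLCStage) -> SDLCStage:
    # Stages occur strictly in sequence; review and maintenance feeds back
    # into problem recognition, closing the cycle.
    members = list(SDLCStage)
    position = members.index(current)
    return members[(position + 1) % len(members)]

stage = SDLCStage.PROBLEM_RECOGNITION
for _ in range(3):
    print(stage.name)
    stage = next_stage(stage)
# prints: PROBLEM_RECOGNITION, INFORMATION_GATHERING, REQUIREMENTS_SPECIFICATION

The point of the sketch is only that each stage has exactly one successor; real projects layer reviews and iteration on top of this skeleton.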

Information systems research


Information systems research is generally interdisciplinary, concerned with the study of the effects of information systems on the behavior of individuals, groups, and organizations.[36][37] Hevner et al. (2004)[38] categorized research in IS into two scientific paradigms: behavioral science, which seeks to develop and verify theories that explain or predict human or organizational behavior, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts. Salvatore March and Gerald Smith[39] proposed a framework for researching different aspects of information technology, including outputs of the research (research outputs) and activities to carry out this research (research activities). They identified research outputs as follows:

1. Constructs: concepts that form the vocabulary of a domain. They constitute a conceptualization used to describe problems within the domain and to specify their solutions.
2. A model: a set of propositions or statements expressing relationships among constructs.
3. A method: a set of steps (an algorithm or guideline) used to perform a task. Methods are based on a set of underlying constructs and a representation (model) of the solution space.
4. An instantiation: the realization of an artifact in its environment.

They also identified research activities:

1. Build an artifact to perform a specific task.
2. Evaluate the artifact to determine if any progress has been achieved.
3. Given an artifact whose performance has been evaluated, determine why and how the artifact worked or did not work within its environment, and therefore theorize about and justify theories of IT artifacts.

Although information systems as a discipline has been evolving for over 30 years,[40] the core focus or identity of IS research is still subject to debate among scholars.[41][42][43] There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT embedded in a dynamic, evolving context.[44] A third view[45] calls on IS scholars to pay balanced attention to both the IT artifact and its context.

Since information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. However, that is not always the case. Often information systems researchers explore behavioral issues in much more depth than practitioners would expect, which may render information systems research results difficult to understand and has led to criticism.[46] To study an information system itself, rather than its effects, information systems models are used, such as EATPUT.
The international body of Information Systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (23 April 2007), proposed a 'basket' of journals that the AIS deems as 'excellent', and nominated: Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), Journal of the Association for Information Systems (JAIS), Journal of Management Information Systems (JMIS), European Journal of Information Systems (EJIS), and Information Systems Journal (ISJ).[47] A number of annual information systems conferences are run in various parts of the world, the majority of which are peer reviewed. The AIS directly runs the International Conference on Information Systems (ICIS) and the Americas Conference on Information Systems (AMCIS), while AIS affiliated conferences [48] include the Pacific Asia Conference on Information Systems (PACIS), European Conference on Information Systems (ECIS), the

Mediterranean Conference on Information Systems (MCIS), the International Conference on Information Resources Management (Conf-IRM) and the Wuhan International Conference on E-Business (WHICEB). AIS chapter conferences [49] include Australasian Conference on Information Systems (ACIS), Information Systems Research Conference in Scandinavia (IRIS), Conference of the Italian Chapter of AIS (itAIS), Annual Mid-Western AIS Conference (MWAIS) and Annual Conference of the Southern AIS (SAIS).

References
[1] Archibald, J.A. (May 1975). "Computer Science education for majors of other disciplines". AFIPS Joint Computer Conferences: 903–906. "Computer science spreads out over several related disciplines, and shares with these disciplines certain sub-disciplines that traditionally have been located exclusively in the more conventional disciplines"
[2] Denning, Peter (July 1999). "Computer Science: The Discipline". Encyclopaedia of Computer Science (2000 Edition). "The Domain of Computer Science: Even though computer science addresses both human-made and natural information processes, the main effort in the discipline has been directed toward human-made processes, especially information processing systems and machines"
[3] Coy, Wolfgang (June 2004). "Between the disciplines". ACM SIGCSE Bulletin 36 (2): 7–10. ISSN 0097-8418. "Computer science may be in the core of these processes. The actual question is not to ignore disciplinary boundaries with its methodological differences but to open the disciplines for collaborative work. We must learn to build bridges, not to start in the gap between disciplines"
[4] Jessup, Leonard M.; Valacich, Joseph S. (2008). Information Systems Today (3rd ed.). Pearson Publishing. Pages ??? & Glossary p. 416
[5] Hoganson, Ken (December 2001). "Alternative curriculum models for integrating computer science and information systems analysis, recommendations, pitfalls, opportunities, accreditations, and trends". Journal of Computing Sciences in Colleges 17 (2): 313–325. ISSN 1937-4771. "... Information Systems grew out of the need to bridge the gap between business management and computer science ..."
[6] Davis, Timothy; Geist, Robert; Matzko, Sarah; Westall, James (March 2004). ": A First Step". Technical Symposium on Computer Science Education: 125–129. ISBN 1-58113-798-2. "In 1999, Clemson University established a (graduate) degree program that bridges the arts and the sciences... All students in the program are required to complete graduate level work in both the arts and computer science"
[7] Hoganson, Ken (December 2001). "Alternative curriculum models for integrating computer science and information systems analysis, recommendations, pitfalls, opportunities, accreditations, and trends". Journal of Computing Sciences in Colleges 17 (2): 313–325. ISSN 1937-4771. "The field of information systems as a separate discipline is relatively new and is undergoing continuous change as technology evolves and the field matures"
[8] Khazanchi, Deepak; Munkvold, Bjorn Erik (Summer 2000). "Is information system a science? An inquiry into the nature of the information systems discipline". ACM SIGMIS Database 31 (3): 24–42. doi:10.1145/381823.381834. ISSN 0095-0033. "From this we have concluded that IS is a science, i.e., a scientific discipline in contrast to purportedly non-scientific fields"
[9] Denning, Peter (June 2007). Ubiquity: a new interview with Peter Denning on the great principles of computing. 2007. pp. 11. "People from other fields are saying they have discovered information processes in their deepest structures and that collaboration with computing is essential to them."
[10] "Computer science is the study of computation." Computer Science Department, College of Saint Benedict (http://www.csbsju.edu/computerscience/curriculum), Saint John's University
[11] "Computer Science is the study of all aspects of computer systems, from the theoretical foundations to the very practical aspects of managing large software projects." Massey University (http://study.massey.ac.nz/major.asp?major_code=2010&prog_code=93068)
[12] Kelly, Sue; Gibson, Nicola; Holland, Christopher; Light, Ben (July 1999). "Focus Issue on Legacy Information Systems and Business Process Engineering: a Business Perspective of Legacy Information Systems". Communications of the AIS 2 (7): 1–27.
[13] Pearson Custom Publishing & West Chester University, Custom Program for Computer Information Systems (CSC 110), (Pearson Custom Publishing, 2009) Glossary p. 694
[14] Polack, Jennifer (December 2009). "Planning a CIS Education Within a CS Framework". Journal of Computing Sciences in Colleges 25 (2): 100–106. ISSN 1937-4771.
[15] Hayes, Helen; Sharma, Onkar (February 2003). "A decade of experience with a common first year program for computer science, information systems and information technology majors". Journal of Computing Sciences in Colleges 18 (3): 217–227. ISSN 1937-4771. "In 1988, a degree program in Computer Information Systems (CIS) was launched with the objective of providing an option for students who were less inclined to become programmers and were more interested in learning to design, develop, and implement Information Systems, and solve business problems using the systems approach"
[16] CSTA Committee, Allen Tucker, et alia, A Model Curriculum for K-12 Computer Science (Final Report), (Association for Computing Machinery, Inc., 2006) Abstraction & p. 2
[17] Freeman, Peter; Hart, David (August 2004). "A Science of Design for Software-Intensive Systems". Communications of the ACM 47 (8): 19–21. ISSN 0001-0782. "Though the other components' connections to the software and their role in the overall design of the system are critical, the core consideration for a software-intensive system is the software itself, and other approaches to systematizing design have yet to solve the 'software problem', which won't be solved until software design is understood scientifically"
[18] History of Computer Science (http://www.cs.uwaterloo.ca/~shallit/Courses/134/history.html)

Information systems
[19] Kelly, Sue; Gibson, Nicola; Holland, Christopher; Light, Ben (July 1999). "Focus Issue on Legacy Information Systems and Business Process Engineering: a Business Perspective of Legacy Information Systems". Communications of the AIS 2 (7): 1–27.
[20] "Scoping the Discipline of Information Systems" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.7159&rep=rep1&type=pdf)
[21] Rockart et al. (1996). "Eight imperatives for the new IT organization". Sloan Management Review.
[22] Culnan, M. J. "Mapping the Intellectual Structure of MIS, 1980-1985: A Co-Citation Analysis". MIS Quarterly, 1987, pp. 341–353.
[23] Keen, P. G. W. "MIS Research: Reference Disciplines and A Cumulative Tradition", in Proceedings of the First International Conference on Information Systems, E. McLean (ed.), Philadelphia, PA, 1980, pp. 9–18.
[24] Lee, A. S. "Architecture as A Reference Discipline for MIS", in Information Systems Research: Contemporary Approaches and Emergent Traditions, H.-E. Nissen, H. K. Klein, and R. A. Hirschheim (eds.), North-Holland, Amsterdam, 1991, pp. 573–592.
[25] Mingers, J., and Stowell, F. (eds.). Information Systems: An Emerging Discipline?, McGraw-Hill, London, 1997.
[26] John, W., and Joe, P. (2002). Strategic Planning for Information System, 3rd ed. West Sussex: John Wiley & Sons Ltd.
[27] "Scoping the Discipline of Information Systems" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.7159&rep=rep1&type=pdf)
[28] "Scoping the Discipline of Information Systems" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.7159&rep=rep1&type=pdf)
[29] "Scoping the Discipline of Information Systems" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.7159&rep=rep1&type=pdf)
[30] Laudon, K.C. and Laudon, J.P. Management Information Systems (2nd edition), Macmillan, 1988.
[31] Rainer, R. Kelly Jr and Cegielski, Casey G. Introduction to Information Systems: Supporting and Transforming Business, Fourth Edition. New Jersey: John Wiley and Sons, Inc., 2012. Print.
[32] Sloan Career Cornerstone Center (2008). Information Systems (http://www.careercornerstone.org/infosys/infosys.htm). Alfred P. Sloan Foundation. Access date June 2, 2008.
[33] Kroenke, David (2009). Using MIS. ISBN 0-13-713029-5.
[34] Börje Langefors (1973). Theoretical Analysis of Information Systems. Auerbach. ISBN 0-87769-151-7.
[35] Nyawaya, Frederick (2008). Computer Studies. ISBN 9966-781-24-2.
[36] Galliers, R.D., Markus, M.L., & Newell, S. (Eds) (2006). Exploring Information Systems Research Approaches (http://books.google.com/books?id=brOkAAAACAAJ&dq=Exploring+information+systems+research+approaches:+readings+and+reflections). New York, NY: Routledge.
[37] Ciborra, C. (2002). The Labyrinths of Information: Challenging the Wisdom of Systems (http://books.google.com/books?id=jb-vrAHmG0wC&printsec=frontcover&dq=Labyrinths+of+Information). Oxford, UK: Oxford University Press.
[38] Hevner, March, Park & Ram (2004). "Design Science in Information Systems Research". MIS Quarterly, 28(1), 75–105.
[39] March, S., Smith, G. (1995). "Design and natural science in Information Technology (IT)". Decision Support Systems, 15, 251–266.
[40] Avgerou, C. (2000). "Information systems: what sort of science is it?" Omega, 28, 567–579.
[41] Benbasat, I., Zmud, R. (2003). "The identity crisis within the IS discipline: defining and communicating the discipline's core properties". MIS Quarterly, 27(2), 183–194.
[42] Agarwal, R., Lucas, H. (2005). "The information systems identity crisis: focusing on high-visibility and high-impact research". MIS Quarterly, 29(3), 381–398.
[43] El Sawy, O. (2003). "The IS core IX: The 3 faces of IS identity: connection, immersion, and fusion". Communications of AIS, 12, 588–598.
[44] Mansour, O., Ghazawneh, A. (2009). "Research in Information Systems: Implications of the constant changing nature of IT capabilities in the social computing era", in Molka-Danielsen, J. (Ed.): Proceedings of the 32nd Information Systems Research Seminar in Scandinavia, IRIS 32, Inclusive Design, Molde University College, Molde, Norway, August 9–12, 2009. ISBN 978-82-7962-120-1.
[45] Orlikowski, W., Iacono, C. (2001). "Research commentary: desperately seeking the IT in IT research – a call to theorizing about the IT artifact". Information Systems Research, 12(2), 121–134.
[46] Kock, N., Gray, P., Hoving, R., Klein, H., Myers, M., & Rockart, J. (2002). "Information Systems Research Relevance Revisited: Subtle Accomplishment, Unfulfilled Promise, or Serial Hypocrisy?" (http://aisel.aisnet.org/cais/vol8/iss1/23/). Communications of the Association for Information Systems, 8(23), 330–346.
[47] Senior Scholars (2007). AIS Senior Scholars Forum Subcommittee on Journals: A basket of six (or eight) A* journals in Information Systems. Archived at http://home.aisnet.org/associations/7499/files/Senior%20Scholars%20Letter.pdf
[48] http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=34
[49] http://ais.affiniscape.com/displaycommon.cfm?an=1&subarticlenbr=478


Further reading
Rainer, R. Kelly and Cegielski, Casey G. (2009). Introduction to Information Systems: Enabling and Transforming Business, 3rd Edition (http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP000323.html)
Kroenke, David (2008). Using MIS, 2nd Edition (http://www.pearsonhighered.com/kroenke/)
Lindsay, John (2000). Information Systems Fundamentals and Issues (http://www.oturn.net/isfi/index.html). Kingston University, School of Information Systems
Dostal, J. (2007). School Information Systems (Skolni informacni systemy) (http://mict.upol.cz/skolni_informacni_systemy.pdf). In Infotech 2007 - Modern Information and Communication Technology in Education. Olomouc, EU: Votobia, pp. 540–546. ISBN 978-80-7220-301-7.
O'Leary, Timothy and Linda (2008). Computing Essentials Introductory 2008. McGraw-Hill on Computing2008.com (http://www.computing2008.com)
Imperial College London - Information Systems Engineering degree (http://www3.imperial.ac.uk/electricalengineering/courses/undergraduate/informationengineering)

External links
Association for Information Systems (AIS) (http://aisnet.org/)
Center for Information Systems Research - Massachusetts Institute of Technology (http://mitsloan.mit.edu/cisr/)
European Research Center for Information Systems (http://www.ercis.org/)
Index of Information Systems Journals (http://lamp.infosys.deakin.edu.au/journals/)

Chief information officer


Chief information officer (CIO), or information technology (IT) director, is a job title commonly given to the most senior executive in an enterprise responsible for the information technology and computer systems that support enterprise goals. In higher education, the chief information officer may be the highest-ranking technology executive, although, depending on the institution, alternative titles are used to represent this position. Generally, the CIO reports to the chief executive officer, chief operations officer or chief financial officer. In military organizations, the CIO reports to the commanding officer.

CIO
Information technology and its systems have become so important that the CIO has come to be viewed in many organizations as a key contributor in formulating strategic goals. The CIO manages the implementation of useful technology to increase information accessibility and integrated systems management. By comparison, where the CIO adapts systems through the use of existing technologies, the chief technology officer develops new technologies to expand corporate technological capabilities. When both positions are present in an organization, the CIO is generally responsible for processes and practices supporting the flow of information, whereas the CTO is generally responsible for technology infrastructure. CIO magazine's "State of the CIO 2008" survey asked 558 IT leaders whom they report to. The results were: CEO (41%), CFO (23%), COO (16%), corporate CIO (7%) and other (13%).[1]


Information technology
The prominence of the CIO position has risen greatly as information, and the information technology that drives it, has become an increasingly important part of the modern organization. The CIO may be a member of the executive committee of an organization, and may often be required to engage at board level, depending on the nature of the organization, its operating structure and its governance environment.

No specific qualifications are intrinsic to the CIO position, though the typical candidate may have expertise in a number of technological fields, such as computer science, software engineering, or information systems. Many candidates have Master of Business Administration or Master of Science in Management degrees.[2] More recently, CIOs' leadership capabilities, business acumen and strategic perspectives have taken precedence over technical skills. It is now quite common for CIOs to be appointed from the business side of the organization, especially if they have project management skills.

In 2007 a survey amongst CIOs by CIO magazine in the UK found that their top 10 concerns were: people leadership, managing budgets, business alignment, infrastructure refresh, security, compliance, resource management, managing customers, managing change and board politics.[3] In 2010, Gartner Executive Programs conducted a global CIO survey and received responses from 2,014 CIOs from 50 countries and 38 industries.[4] Gartner reported that the survey results indicated that the top technology priorities for CIOs for 2011 were cloud computing, virtualization, mobile technologies, IT management, business intelligence, networking, voice and data communications, enterprise applications, collaboration technologies, infrastructure, and Web 2.0.

Typically, a CIO is involved with driving the analysis and re-engineering of existing business processes, identifying and developing the capability to use new tools, reshaping the enterprise's physical infrastructure and network access, and identifying and exploiting the enterprise's knowledge resources. Many CIOs head the enterprise's efforts to integrate the Internet into both its long-term strategy and its immediate business plans. CIOs are often tasked with driving or heading up crucial IT projects that are essential to the strategic and operational objectives of an organisation. A good example is the implementation of an Enterprise Resource Planning (ERP) system, which typically has wide-ranging implications for most organizations.

The CIO is evolving into a role of creating and monitoring business value from IT assets, to the point where corporate strategist Chris Potts suggests in the novel FruITion that the Chief Information Officer (CIO) be replaced with a Chief Internal Investments Officer (CIIO).[5] Another way the CIO role is changing is an increased focus on service management.[6] As SaaS, IaaS, BPO and other more flexible value delivery techniques are brought into organizations, the CIO usually functions as a third-party manager for the organization. In essence, a CIO in the modern organization is required to possess business skills and the ability to relate to the organization as a whole, as opposed to being a technological expert with limited functional business expertise.
The CIO position is as much about anticipating trends in the market place with regards to technology as it is about ensuring that the business navigates these trends through expert guidance and proper strategic IT planning that is aligned to the corporate strategy of the organization.


References
[1] "State of the CIO 2008 Data Shows CIO Salaries, Influence Rising" (http:/ / www. cio. com/ article/ 147950/ _State_of_the_CIO_2008_Data_Shows_CIO_Salaries_Influence_Rising). CIO. . Retrieved 27 February 2010. [2] Meridith Levinson (2007-07-05). "Should You Get an MBA? - CIO.com - Business Technology Leadership" (http:/ / www. cio. com/ article/ 122507/ Should_You_Get_an_MBA_). CIO.com. . Retrieved 2012-03-28. [3] "Granger: The final word - CIO UK Magazine" (http:/ / www. cio. co. uk/ concern/ budgets/ features/ index. cfm?articleid=351). Cio.co.uk. 2012-03-14. . Retrieved 2012-03-28. [4] "Gartner Executive Programs Worldwide Survey of More Than 2,000 CIOs Identifies Cloud Computing as Top Technology Priority for CIOs in 2011" (http:/ / www. gartner. com/ it/ page. jsp?id=1526414). Gartner. . Retrieved 23 March 2011. [5] fruITion: Creating the Ultimate Corporate Strategy for Information Technology, Chris Potts, Technics Publications, LLC 2008 (http:/ / www. technicspub. com/ product. sc?productId=7& categoryId=1) [6] "CIO Magazine: Recession Shifts IT Service Management into Fast Lane" (http:/ / www. cio. com/ article/ 558564/ Recession_Shifts_IT_Service_Management_Into_Fast_Lane). Cio.com. 2010-02-26. . Retrieved 2012-03-28.

External links

US Federal CIO Council (http://www.cio.gov)
UK Government - CIO Council (http://www.cabinetoffice.gov.uk/cio.aspx) (dead link as of 4 Nov 2010; no new official site found)
The Chief Information Officer Concept in E-government: Lessons for Developing Countries (http://esaconf.un.org/WB/default.asp?action=9&boardid=10&read=3538&fid=97), by D.C. Misra, on the United Nations Department of Economic and Social Affairs WebBoard.

Information technology management


IT management is the discipline whereby all of the technology resources of a firm are managed in accordance with its needs and priorities. These resources may include tangible investments like computer hardware, software, data, networks and data centre facilities, as well as the staff hired to maintain them. Managing this responsibility within a company entails many of the basic management functions, like budgeting, staffing, organizing and controlling, along with aspects that are unique to technology, like change management, software design, network planning and tech support.[1]

Overview
IT management is a different subject from management information systems. The latter refers to management information methods tied to the automation or support of human decision making.[2] IT management, as stated in the above definition, refers to the IT-related management activities in organizations. MIS is focused mainly on the business aspect, with strong input into the technology phase of the business/organization.

A primary focus of IT management is the value creation made possible by technology. This requires the alignment of technology and business strategies. While value creation for an organization involves a network of relationships between internal and external environments, technology plays an important role in improving the overall value chain of an organization. However, this increase requires business and technology management to work as a creative, synergistic, and collaborative team instead of a purely mechanistic span of control, according to Bird.[3]

Historically, one set of resources was dedicated to one particular computing technology, business application or line of business, and managed in this silo-like fashion.[4] These resources supported a single set of requirements and processes, and couldn't easily be optimized or reconfigured to support actual demand.[5] This has led the leading technology providers to build out and complement their product-centric infrastructure and management offerings with converged infrastructure environments that converge servers, storage, networking, security, management and facilities.[6][7] The efficiencies of this type of integrated and automated management environment allow enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enable IT to more rapidly adjust IT resources (such as servers, storage and networking) to meet fluctuating and

unpredictable business demand.[8][9]


IT infrastructure
The term IT infrastructure is defined in ITIL v3 as the combined set of hardware, software, networks, facilities, etc. (including all of the information technology) used to develop, test, deliver, monitor, control or support IT services. Associated people, processes and documentation are not part of IT infrastructure.[10]

List of IT management disciplines


The concepts below are commonly listed or investigated under the broad term IT management:[11][12][13][14]

Business/IT alignment
IT governance
IT financial management
IT service management
Sourcing
IT configuration management

IT managers
IT managers have a lot in common with project managers, but their main difference is one of focus: an IT manager is responsible and accountable for an ongoing program of IT services, while the project manager's responsibility and accountability are both limited to a project with a clear start and end date.[15]

Most IT management programs are designed to educate and develop managers who can effectively manage the planning, design, selection, implementation, use, and administration of emerging and converging information and communications technologies. The program curriculum provides students with the technical knowledge and the management knowledge and skills needed to effectively integrate people, information and communication technologies, and business processes in support of organizational strategic goals. Graduates should be able:

1. to explain the important terminology, facts, concepts, principles, analytic techniques, and theories used in IT management;
2. to apply important terminology, facts, concepts, principles, analytic techniques, and theories in IT management when analyzing complex factual situations;
3. to integrate (or synthesize) important facts, concepts, principles, and theories in IT management when developing solutions to multifaceted IT management problems in complex situations.

References
[1] McNurlin, Barbara, et al. (2009). Information Systems Management in Practice (8th ed.). Prentice Hall.
[2] O'Brien, J. (1999). Management Information Systems: Managing Information Technology in the Internetworked Enterprise. Boston: Irwin McGraw-Hill. ISBN 0-07-112373-3.
[3] Bird, M. (2010). Modern Management Guide to Information Technology. CreateSpace. (http://harvardbookstore.biz)
[4] Talbot, Chris, "HP Adds to Converged Infrastructure Lineup", ChannelInsider, June 7, 2011. (http://www.channelinsider.com/c/a/Hewlett-Packard/HP-Adds-to-Converged-Infrastructure-Lineup-636059/)
[5] Gardner, Dana, "Converged Infrastructure Approach Paves Way for Improved Data Center Productivity, Private Clouds", February 9, 2010, IT Business Edge (http://www.itbusinessedge.com/cm/community/features/guestopinions/blog/converged-infrastructure-approach-paves-way-for-improved-data-center-productivity-private-clouds/?cs=39310)
[6] Huff, Lisa, "The Battle for the Converged Data Center Network", Data Center Knowledge, August 18, 2011. (http://www.datacenterknowledge.com/archives/2011/08/18/the-battle-for-the-converged-data-center-network/)
[7] Harris, Derrick, "Can Open Converged Infrastructure Compete?", GigaOM, October 10, 2010. (http://gigaom.com/cloud/can-open-converged-infrastructure-compete-2/)
[8] Oestreich, Ken, "Converged Infrastructure", CTO Forum, November 15, 2010. (http://www.thectoforum.com/content/converged-infrastructure-0)
[9] Golden, Bernard, "Cloud Computing: Two Kinds of Agility", CIO, July 16, 2010. (http://www.cio.com/article/599626/Cloud_Computing_Two_Kinds_of_Agility)
[10] Veen, Annelies van der; Jan van Bon (2007). Foundations of ITIL V3. Van Haren Publishing. ISBN 978-90-8753-057-0.
[11] Gartner research topics (http://www.gartner.com/it/products/research/topics/topics.jsp), retrieved 28 Nov. 2008.
[12] Gartner research services (http://www.gartner.com/it/products/research/research_services.jsp), retrieved 28 Nov. 2008.
[13] McKeen, James D., and Smith, Heather A., Making IT Happen: Critical Issues in IT Management, Wiley Series in Information Systems, 2003.
[14] CIO Wisdom: Best Practices from Silicon Valley's Leading IT Experts, Lane, D. (ed.), Prentice Hall, 2004.
[15] Thomas, Rhan (June 15, 2009). "IT Managers and Project Management" (http://www.pmhut.com/it-managers-and-project-management). PM Hut. Retrieved December 13, 2009.


Information technology audit


An information technology audit, or information systems audit, is an examination of the management controls within an information technology (IT) infrastructure. The evaluation of obtained evidence determines whether the information systems are safeguarding assets, maintaining data integrity, and operating effectively to achieve the organization's goals or objectives. These reviews may be performed in conjunction with a financial statement audit, internal audit, or other form of attestation engagement. IT audits are also known as "automated data processing (ADP) audits" and "computer audits". They were formerly called "electronic data processing (EDP) audits".

Purpose
An IT audit is different from a financial statement audit. While a financial audit's purpose is to evaluate whether an organization is adhering to standard accounting practices, the purposes of an IT audit are to evaluate the system's internal control design and effectiveness. This includes, but is not limited to, efficiency and security protocols, development processes, and IT governance or oversight. Installing controls is necessary but not sufficient to provide adequate security. People responsible for security must consider whether the controls are installed as intended, whether they are effective, whether any breach in security has occurred and, if so, what actions can be taken to prevent future breaches. These inquiries must be answered by independent and unbiased observers, who are performing the task of information systems auditing. In an information systems (IS) environment, an audit is an examination of information systems, their inputs, outputs, and processing.[1]

Types of IT audits
Various authorities have created differing taxonomies to distinguish the various types of IT audits. Goodman & Lawless state that there are three specific systematic approaches to carry out an IT audit:[2]

Technological innovation process audit. This audit constructs a risk profile for existing and new projects. It assesses the length and depth of the company's experience in its chosen technologies, as well as its presence in relevant markets, the organization of each project, and the structure of the portion of the industry that deals with this project or product, its organization and industry structure.

Innovative comparison audit. This audit is an analysis of the innovative abilities of the company being audited, in comparison to its competitors. This requires examination of the company's research and development facilities, as well as its track record in actually producing new products.

Technological position audit. This audit reviews the technologies that the business currently has and that it needs to add. Technologies are characterized as being either "base", "key", "pacing" or "emerging".

Others describe the spectrum of IT audits with five categories of audits:

Systems and Applications: an audit to verify that systems and applications are appropriate, efficient, and adequately controlled to ensure valid, reliable, timely, and secure input, processing, and output at all levels of a system's activity.

Information Processing Facilities: an audit to verify that the processing facility is controlled to ensure timely, accurate, and efficient processing of applications under normal and potentially disruptive conditions.

Systems Development: an audit to verify that the systems under development meet the objectives of the organization, and to ensure that the systems are developed in accordance with generally accepted standards for systems development.

Management of IT and Enterprise Architecture: an audit to verify that IT management has developed an organizational structure and procedures to ensure a controlled and efficient environment for information processing.

Client/Server, Telecommunications, Intranets, and Extranets: an audit to verify that telecommunications controls are in place on the client (the computer receiving services), the server, and the network connecting the clients and servers.

And some lump all IT audits as being one of only two types: "general control review" audits or "application control review" audits.

A number of IT audit professionals from the information assurance realm consider there to be three fundamental types of controls, regardless of the type of audit to be performed, especially in the IT realm. Many frameworks and standards try to break controls into different disciplines or arenas, terming them security controls, access controls, IA controls, and so on, in an effort to define the types of controls involved. At a more fundamental level, these controls can be shown to consist of three types: protective/preventive controls, detective controls and reactive/corrective controls (a hypothetical sketch of the three types appears at the end of this section).

In an IS environment, there are two types of auditors and audits: internal and external. IS auditing is usually a part of accounting internal auditing, and is frequently performed by corporate internal auditors. An external auditor reviews the findings of the internal audit as well as the inputs, processing and outputs of information systems. The external audit of information systems is frequently a part of the overall external auditing performed by a certified public accounting (CPA) firm.[3]

IS auditing considers all the potential hazards and controls in information systems. It focuses on issues like operations, data integrity, software applications, security, privacy, budgets and expenditures, cost control, and productivity. Guidelines are available to assist auditors in their jobs, such as those from the Information Systems Audit and Control Association (www.isaca.org [4]).[5]
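To make the three fundamental control types concrete, here is a small, hypothetical Python sketch. The password rule, log format and lockout threshold are invented for illustration and are not drawn from any standard or from the text above.

import re

def preventive_control(password: str) -> bool:
    # Protective/preventive: reject weak input before it enters the system.
    return len(password) >= 12 and re.search(r"\d", password) is not None

def detective_control(auth_log: list) -> list:
    # Detective: examine records after the fact for signs of trouble.
    return [line for line in auth_log if "FAILED LOGIN" in line]

def corrective_control(user: str, locked_accounts: set) -> None:
    # Reactive/corrective: respond to a detected problem, e.g. lock the account.
    locked_accounts.add(user)

log = ["09:01 FAILED LOGIN alice", "09:02 LOGIN bob", "09:03 FAILED LOGIN alice"]
locked = set()
failures = detective_control(log)
if len(failures) >= 2:  # invented lockout threshold
    corrective_control("alice", locked)
print(locked)  # prints: {'alice'}

An auditor reviewing such a system would ask whether each control exists, whether it is installed as intended, and whether evidence shows it operating effectively.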


IT Audit process
The following are basic steps in performing the information technology audit process:[6]

1. Planning
2. Studying and Evaluating Controls
3. Testing and Evaluating Controls
4. Reporting
5. Follow-up



Security
Auditing information security is a vital part of any IT audit and is often understood to be the primary purpose of an IT audit. The broad scope of auditing information security includes such topics as data centers (the physical security of data centers and the logical security of databases, servers and network infrastructure components),[7] networks and application security. Like most technical realms, these topics are always evolving; IT auditors must constantly continue to expand their knowledge and understanding of the systems and environment. Several training and certification organizations have evolved. Currently, the major certifying bodies in the field are the Institute of Internal Auditors (IIA),[8] the SANS Institute (specifically, the audit-specific branch of SANS and GIAC)[9] and ISACA.[10] While CPAs and other traditional auditors can be engaged for IT audits, organizations are well advised to require that individuals with some type of IT-specific audit certification are employed when validating the controls surrounding IT systems.

History of IT Auditing
The concept of IT auditing was formed in the mid-1960s. Since that time, IT auditing has gone through numerous changes, largely due to advances in technology and the incorporation of technology into business. Currently, there are many IT-dependent companies that rely on information technology in order to operate their business, e.g. telecommunications or banking companies. For other types of business, IT plays a large part in the company, including applying workflow instead of paper request forms, using application controls instead of manual controls (which are more reliable), or implementing ERP applications to serve the organization with a single application. Accordingly, the importance of IT auditing has constantly increased. One of the most important roles of the IT audit is to audit critical systems in order to support the financial audit or to support specific regulations, e.g. SOX.

Audit personnel
Qualifications
The CISM and CAP credentials are the two newest security auditing credentials, offered by ISACA and (ISC)², respectively. Strictly speaking, only the CISA or GSNA title would sufficiently demonstrate competences regarding both information technology and audit aspects, with the CISA being more audit-focused and the GSNA being more information technology-focused.[11]

Outside of the US, various credentials exist. For example, the Netherlands has the RE credential (as granted by the NOREA [12] [Dutch site] IT-auditors' association), which among others requires a post-graduate IT-audit education from an accredited university, subscription to a Code of Ethics, and adherence to strict continuous education requirements.

Professional certifications
Certified Information Systems Auditor (CISA)
Certified in Risk and Information Systems Control (CRISC)
Certified Internal Auditor (CIA)
Certification and Accreditation Professional (CAP)
Certified Computer Professional (CCP)
Certified Information Privacy Professional (CIPP)
Certified Information Systems Security Professional (CISSP)
Certified Information Security Manager (CISM)
Certified Public Accountant (CPA)
Certified Internal Controls Auditor (CICA)
Forensics Certified Public Accountant (FCPA)
Certified Fraud Examiner (CFE)
Chartered Accountant (CA)
Chartered Certified Accountant (CCA)
GIAC Certified System & Network Auditor (GSNA)[13]
Certified Information Technology Professional (CITP); to certify, auditors should have three years of experience.


Emerging Issues
There are also new audits being imposed by various standards boards, which must be performed depending upon the audited organization, and which affect IT by requiring that IT departments perform certain functions and controls appropriately to be considered compliant. An example of such an audit is the newly minted SSAE 16.[14]

References
[1] Rainer, R. Kelly, and Casey G. Cegielski. Introduction to Information Systems. 3rd ed. Hoboken, N.J.: Wiley, 2011. Print.
[2] Richard A. Goodman; Richard Arthur Goodman; Michael W. Lawless (1994). Technology and Strategy: Conceptual Models and Diagnostics (http://books.google.com/books?id=GIRdX9hIL1EC). Oxford University Press US. ISBN 978-0-19-507949-4. Retrieved May 9, 2010.
[3] Rainer, R. Kelly, and Casey G. Cegielski. Introduction to Information Systems. 3rd ed. Hoboken, N.J.: Wiley, 2011. Print.
[4] https://www.isaca.org/Pages/default.aspx
[5] Rainer, R. Kelly, and Casey G. Cegielski. Introduction to Information Systems. 3rd ed. Hoboken, N.J.: Wiley, 2011. Print.
[6] Davis, Robert E. (2005). IT Auditing: An Adaptive Process (http://www.theiia.org/bookstore/product/it-auditing-an-adaptive-process-1263.cfm). Mission Viejo: Pleier Corporation. ISBN 978-0974302997.
[7] "Advanced System, Network and Perimeter Auditing" (http://www.sans.org/security-training/auditing-networks-perimeters-and-systems-6-mid).
[8] "Institute of Internal Auditors" (http://www.theiia.org).
[9] "The SANS Technology Institute" (http://www.sans.org).
[10] "ISACA" (http://www.isaca.org).
[11] Hoelzer, David (1999-2009). Audit Principles, Risk Assessment & Effective Reporting. SANS Press. p. 32.
[12] http://www.norea.nl
[13] "GIAC GSNA Information" (http://www.giac.org/certifications/audit/gsna.php).
[14] http://www.ssae-16.com

External links
A career as Information Systems Auditor (http://www.networkmagazineindia.com/200312/securedview01.shtml), by Avinash Kadam (Network Magazine)
IT Audit Careers guide (http://www.isrisk.net/information-technology-it-audit-computer-audit-careers-guide/)
Federal Financial Institutions Examination Council (FFIEC) (http://www.ffiec.gov/ffiecinfobase/booklets/audit/audit.pdf)
Information Systems Audit & Control Association (ISACA) (http://www.isaca.org/)
Open Security Architecture - Controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)
American Institute of Certified Public Accountants (AICPA) (http://www.aicpa.org/)
IT Services Library (ITIL) (http://www.itil-officialsite.com/home/home.asp)


Corporate governance of information technology


Information Technology Governance is a subset discipline of Corporate Governance focused on information technology (IT) systems and their performance and risk management. The rising interest in IT governance is partly due to compliance initiatives, for instance Sarbanes-Oxley in the USA and Basel II in Europe, but more so because of the need for greater accountability for decision-making around the use of IT in the best interest of all stakeholders. IT capability is directly related to the long term consequences of decisions made by top management. Traditionally, board-level executives deferred key IT decisions to the company's IT professionals. This cannot ensure the best interests of all stakeholders unless deliberate action involves all stakeholders. IT governance systematically involves everyone: board members, executive management, staff and customers. It establishes the framework (see below) used by the organization to establish transparent accountability of individual decisions, and ensures the traceability of decisions to assigned responsibilities.

Definitions
There are narrower and broader definitions of IT governance. Weill and Ross focus on "Specifying the decision rights and accountability framework to encourage desirable behavior in the use of IT."[1] In contrast, the IT Governance Institute expands the definition to include foundational mechanisms: "... the leadership and organisational structures and processes that ensure that the organisation's IT sustains and extends the organisation's strategies and objectives."[2] Van Grembergen and De Haes (2009) focus on enterprise governance of IT and define this as "an integral part of corporate governance and addresses the definition and implementation of processes, structures and relational mechanisms in the organization that enable both business and IT people to execute their responsibilities in support of business/IT alignment and the creation of business value from IT enabled investments". Meanwhile, AS8015, the Australian Standard for Corporate Governance of Information and Communication Technology (ICT), defines corporate governance of ICT as "the system by which the current and future use of ICT is directed and controlled. It involves evaluating and directing the plans for the use of ICT to support the organisation and monitoring this use to achieve plans. It includes the strategy and policies for using ICT within an organisation."

Background
The discipline of information technology governance first emerged in 1993 as a derivative of corporate governance and deals primarily with the connection between an organization's strategic objectives and its IT management. It highlights the importance of IT-related matters in contemporary organizations and states that strategic IT decisions should be owned by the corporate board, rather than by the chief information officer or other IT managers. The primary goals of information technology governance are to (1) assure that investments in IT generate business value, and (2) mitigate the risks that are associated with IT. This can be done by implementing an organizational structure with well-defined roles for the responsibility of information, business processes, applications, and ICT infrastructure. Accountability is the key concern of IT governance. After the widely reported collapse of Enron in 2001 and the alleged problems within Arthur Andersen and WorldCom, the duties and responsibilities of auditors and of the boards of directors of public and privately held corporations were questioned. As a response, and to attempt to prevent similar problems from happening again, the US Sarbanes-Oxley Act was written to stress the importance of business control and auditing. Although not directly related to IT governance, Sarbanes-Oxley and Basel II in Europe have influenced the development of information technology governance since the early 2000s.

Following corporate collapses in Australia around the same time, working groups were established to develop standards for corporate governance. A series of Australian Standards for Corporate Governance were published in 2003:
Good Governance Principles (AS8000)
Fraud and Corruption Control (AS8001)
Organisational Codes of Conduct (AS8002)
Corporate Social Responsibility (AS8003)
Whistle Blower Protection Programs (AS8004)


AS8015 Corporate Governance of ICT was published in January 2005. It was fast-track adopted as ISO/IEC 38500 in May 2008.[3]

Problems with IT governance


Is IT governance different from IT management and IT controls? The problem with IT governance is that it is often confused with good management practices and IT control frameworks. ISO 38500 has helped clarify IT governance by describing it as the management system used by directors. In other words, IT governance is about the stewardship of IT resources on behalf of the stakeholders who expect a return from their investment. The directors responsible for this stewardship will look to the management to implement the necessary systems and IT controls. Whilst managing risk and ensuring compliance are essential components of good governance, it is more important to be focused on delivering value and measuring performance. Less than a quarter of all enterprises have adopted any major IT governance standard despite the potential benefits to performance and profitability.[4] While different companies have different reasons, the failure often reflects the belief that IT governance standards are too expensive to implement, that they don't reflect reality, or that they are unnecessary once compliance with Sarbanes-Oxley (SOX) and other standards has been reached. However, the benefits that can be achieved by following the best practices should outweigh these perceived issues.

Frameworks
There are quite a few supporting references that may be useful guides to the implementation of information technology governance. Some of them are:
AS8015-2005 - Australian Standard for Corporate Governance of Information and Communication Technology. AS8015 was adopted as ISO/IEC 38500 in May 2008.
ISO/IEC 38500:2008 - Corporate governance of information technology[5] (very closely based on AS8015-2005) provides a framework for effective governance of IT to assist those at the highest level of organizations to understand and fulfill their legal, regulatory, and ethical obligations in respect of their organization's use of IT. ISO/IEC 38500 is applicable to organizations of all sizes, including public and private companies, government entities, and not-for-profit organizations. This standard provides guiding principles for directors of organizations on the effective, efficient, and acceptable use of information technology (IT) within their organizations.
COBIT (Control Objectives for Information and related Technology) is regarded as the world's leading IT governance and control framework. COBIT provides a reference model of 34 IT processes typically found in an organization. Each process is defined together with process inputs and outputs, key process activities, process objectives, performance measures and an elementary maturity model. Originally created by ISACA, COBIT is now the responsibility of the ITGI[6] (IT Governance Institute).
ITIL (IT Infrastructure Library)[7] is a high-level framework with information on how to achieve successful operational service management of IT, developed and maintained by the United Kingdom's Office of Government Commerce in partnership with the IT Service Management Forum. While not specifically focused on IT governance, its process-related information is a useful reference source for tackling the improvement of the service management function.
Others include:
ISO 27001 - focus on information security
CMM - the Capability Maturity Model: focus on software engineering
TickIT - a quality-management certification program for software development
CARE[8] - Comprehensive Architecture Rationalization and Engineering: a prescriptive method to perform a systematic assessment of information systems applications in an application/project portfolio


Non-IT specific frameworks of use include:
The Balanced Scorecard (BSC) - a method to assess an organization's performance in many different areas.
Six Sigma - focus on quality assurance.
TOGAF (The Open Group Architecture Framework) - a methodology to align business and IT, resulting in useful projects and effective governance.

Professional certification
Certified in the Governance of Enterprise Information Technology (CGEIT) is an advanced certification created in 2007 by the Information Systems Audit and Control Association (ISACA). It is designed for experienced professionals who can demonstrate five or more years' experience serving in a managing or advisory role focused on the governance and control of IT at an enterprise level. It also requires passing a four-hour test designed to evaluate an applicant's understanding of enterprise IT management. The first examination was held in December 2008.

Further reading
Lutchen, M. (2004). Managing IT as a Business: A Survival Guide for CEOs. Hoboken, N.J.: J. Wiley. ISBN 0-471-47104-6.
Van Grembergen, W., Strategies for Information Technology Governance, IDEA Group Publishing, 2004, ISBN 1-59140-284-0.
Van Grembergen, W., and S. De Haes, Enterprise Governance of IT: Achieving Strategic Alignment and Value, Springer, 2009.
W. Van Grembergen, and S. De Haes, "A Research Journey into Enterprise Governance of IT, Business/IT Alignment and Value Creation", International Journal of IT/Business Alignment and Governance, Vol. 1, No. 1, 2010, pp. 1-13.
S. De Haes, and W. Van Grembergen, "An Exploratory Study into the Design of an IT Governance Minimum Baseline through Delphi Research", Communications of AIS, No. 22, 2008, pp. 443-458.
S. De Haes, and W. Van Grembergen, "An Exploratory Study into IT Governance Implementations and its Impact on Business/IT Alignment", Information Systems Management, Vol. 26, 2009, pp. 123-137.
S. De Haes, and W. Van Grembergen, "Exploring the relationship between IT governance practices and business/IT alignment through extreme case analysis in Belgian mid-to-large size financial enterprises", Journal of Enterprise Information Management, Vol. 22, No. 5, 2009, pp. 615-637.
Georgel, F., IT Gouvernance: Maitrise d'un systeme d'information, Dunod, 2004 (Ed. 1), 2006 (Ed. 2), 2009 (Ed. 3), ISBN 2-10-052574-3.
Gouvernance, audit et securite des TI, CCH, 2008 (Ed. 1), ISBN 978-2-89366-577-1.
See also the bibliography sections of IT Portfolio Management and IT Service Management.
Renz, Patrick S. (2007). Project Governance. Heidelberg: Physica-Verl. (Contributions to Economics). ISBN 978-3-7908-1926-7.
Weill, P. and Ross, J. W. (2004). IT Governance: How Top Performers Manage IT Decision Rights for Superior Results. Boston, MA: Harvard Business School Publishing. ISBN 1-59139-253-5.

Wood, David J. (2011). "Assessing IT Governance Maturity: The Case of San Marcos, Texas". Applied Research Projects, Texas State University-San Marcos. (This paper applies a modified COBIT framework to a medium-sized city.)[9]
Blitstein, Ron (2012). "IT Governance: Bureaucratic Logjam or Business Enabler"[10], Cutter Consortium.


References
[1] Weill, P. & Ross, J. W. (2004). IT Governance: How Top Performers Manage IT Decision Rights for Superior Results. Harvard Business School Press, Boston.
[2] "Board Briefing on IT Governance, 2nd Edition" (http://www.isaca.org/Knowledge-Center/Research/Documents/BoardBriefing/26904_Board_Briefing_final.pdf). IT Governance Institute. 2003. Retrieved January 18, 2006.
[3] Introduction to ISO 38500 (http://www.itsmf.nl/imagesfile/PRESENTATIES JC 08 tbv publicatie/Christophe Feltus Introduction to ISO 38500 v1_0.pdf)
[4] "IT Governance Standards: Myths & Reality" (http://content.dell.com/us/en/enterprise/d/large-business/it-governance-myth-reality.aspx). Dell.com. Retrieved June 19, 2012.
[5] http://www.iso.org/iso/pressrelease.htm?refid=Ref1135
[6] "itgi.org" (http://www.itgi.org). itgi.org. 2010-06-22. Retrieved 2012-09-19.
[7] "itil.co.uk" (http://www.itil.co.uk/). itil.co.uk. Retrieved 2012-09-19.
[8] Tony C. Shan, Winnie W. Hua (2009). "Chapter VI: Comprehensive Architecture Rationalization and Engineering" (http://www.igi-global.com/chapter/comprehensive-architecture-rationalization-engineering/23687). In Aileen Cater-Steel, Information Technology Governance and Service Management: Frameworks and Adaptations. IGI Global. pp. 125-144. ISBN 9781605660080. Retrieved 2012-07-26.
[9] http://ecommons.txstate.edu/arp/345
[10] http://www.cutter.com/promotions/bitu1210.html

External links
Institutes and associations:
The IT Governance Institute (http://www.itgi.org)
Information Systems Audit and Control Association (http://www.isaca.org)
International Association of Information Technology Asset Managers, Inc. - IAITAM (http://www.iaitam.org/Corp_Bios.htm)
Australian Computer Society Governance of ICT Committee (http://www.acs.org.au/governance)
IT Governance Network (http://www.itgovernance.com)
TOGAF (http://www.opengroup.org/togaf/)
IT Governance Portal (http://www.cioindex.com/channels/it_governance.aspx)

Systems development life-cycle


The Systems development life cycle (SDLC), or Software development process in systems engineering, information systems and software engineering, is a process of creating or altering information systems, and the models and methodologies that people use to develop these systems. In software engineering, the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system:[1] the software development process.

Overview

Model of the Systems Development Life Cycle

The SDLC is a process used by a systems analyst to develop an information system, including training and user (stakeholder) ownership. The SDLC aims to produce a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2] Computer systems are complex and often (especially with the recent rise of service-oriented architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as "waterfall", "spiral", "agile software development", "rapid prototyping", "incremental", and "synchronize and stabilize".[3] SDLC models can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as the Rational Unified Process and the dynamic systems development method, focus on limited project scope and on expanding or improving products through multiple iterations. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide large projects and manage risks toward successful and predictable results. Other models, such as anamorphic development, tend to focus on a form of development that is guided by project scope and adaptive iterations of feature development. In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".[4] The SDLC is used during the development of an IT project; it describes the different stages involved in the project, from the drawing board through the completion of the project.


History
The systems life cycle (SLC) is a methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960's, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".[5] Several systems development frameworks have been partly based on SDLC, such as the structured systems analysis and design method (SSADM) produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".[5]

Systems development phases


The System Development Life Cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one. A systems development life cycle adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:[6]
Preliminary analysis: The objective of phase 1 is to conduct a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations. To conduct the preliminary analysis, you need to find out the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization, you need to find out what the objectives of the organization itself are, and then see how the problem being studied fits in with them. In proposing alternative solutions, you may have already covered some solutions while digging into the organization's objectives and specific problems. Alternative proposals may come from interviewing employees, clients, suppliers, and/or consultants; you can also study what competitors are doing. With this data, you will have three choices: leave the system as is, improve it, or develop a new system. Finally, describe the costs and benefits.
Systems analysis, requirements definition: Defines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
Development: The real code is written here.
Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

Maintenance: What happens during the rest of the software's life: changes, corrections, additions, and moves to a different computing platform. This is often the longest of the stages. In the following example (see picture) these stages of the systems development life cycle are divided into ten steps, from definition to creation and modification of IT work products:


The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters.[7]

Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.[7]
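As an aside that is not found in the original article, the following minimal Python sketch illustrates the waterfall property described above: each phase is a function whose output becomes the next phase's input. All function names and artifacts are hypothetical placeholders, not part of any SDLC standard.

# Hypothetical sketch: each SDLC phase consumes the previous phase's output.
def preliminary_analysis(problem):
    return {"proposal": "improve " + problem}

def systems_analysis(proposal):
    return {"requirements": ["requirement derived from " + proposal["proposal"]]}

def systems_design(analysis):
    return {"design": ["module for " + r for r in analysis["requirements"]]}

def development(design):
    return {"code": ["implementation of " + m for m in design["design"]]}

artifact = "order processing"                      # the initial problem statement
for phase in (preliminary_analysis, systems_analysis,
              systems_design, development):
    artifact = phase(artifact)                     # output feeds the next phase
print(artifact)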

Investigation
The first stage of the SDLC is the investigation phase. During this stage, business opportunities and problems are identified, and information technology solutions are discussed. Multiple alternative projects may be suggested and their feasibility analyzed. Operational feasibility is assessed, and it is determined whether or not the project fits with the current business environment, and to what degree it addresses business objectives. In addition, an economic feasibility investigation is conducted to judge the costs and benefits of the project. Technical feasibility must also be analyzed to determine if the available hardware and software resources are sufficient to meet expected specifications. A legal feasibility study is important to discover any potential legal ramifications. The results of the feasibility study can then be compiled into a report, along with preliminary specifications. When the investigation stage ends, a decision whether or not to move forward with the project should be made. If it is decided to move ahead, a proposal should have been produced that outlines the general specifications of the project.[8]


System analysis
The goal of system analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking down the system into different pieces to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined.

Design
In systems design the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems. The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudo-code, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.

Testing
The code is tested at various levels in software testing. Unit, system, and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much iteration, if any, occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. During this phase the parts of the whole system are tested one by one. The following are types of testing (a minimal unit-test sketch follows this list):
Defect testing (testing the failed scenarios, including defect tracking)
Path testing
Data set testing
Unit testing
System testing
Integration testing
Black-box testing
White-box testing
Regression testing
Automation testing
User acceptance testing
Software performance testing
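As an illustration that is not part of the original text, here is a minimal sketch of the unit-testing level named in the list above, using Python's standard unittest module. The function under test and its business rule are hypothetical.

import unittest

def apply_discount(price, rate):
    # Hypothetical business rule under test: reduce a price by a fractional rate.
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_rate(self):
        self.assertEqual(apply_discount(100.0, 0.15), 85.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()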


Operations and maintenance


The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

Systems analysis and design


Systems analysis and design (SAD) is the process of developing information systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.

Object-oriented analysis
Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain) to develop a conceptual model that can then be used to complete the task. A typical OOA model would describe computer software that could be used to satisfy a set of customer-defined requirements. During the analysis phase of problem-solving, a programmer might consider a written requirements statement, a formal vision document, or interviews with stakeholders or other interested parties. The task to be addressed might be divided into several subtasks (or domains), each representing a different business, technological, or other area of interest. Each subtask would be analyzed separately. Implementation constraints (e.g., concurrency, distribution, persistence, or how the system is to be built) are not considered during the analysis phase; rather, they are addressed during object-oriented design (OOD). The conceptual model that results from OOA will typically consist of a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up.
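By way of illustration (this fragment is not from the original article), a piece of an OOA conceptual model could later be sketched as plain problem-domain classes; the class and attribute names below are invented, and implementation concerns such as persistence are deliberately absent, as the text above requires.

class Customer:
    # Problem-domain concept: a customer who places orders.
    def __init__(self, name):
        self.name = name
        self.orders = []            # association: a Customer has many Orders

class Order:
    # Problem-domain concept: an order placed by a customer.
    def __init__(self, customer, items):
        self.customer = customer    # association back to the Customer
        self.items = items

alice = Customer("Alice")
alice.orders.append(Order(alice, ["widget"]))
print(len(alice.orders))            # -> 1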

Input (sources) for object-oriented design


The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input of object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Some typical input artifacts for object-oriented design are:
Conceptual model: The result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
Use case: A description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams, which are used to identify the actors (users or other systems) and the processes they perform.
System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
User interface documentation (if applicable): A document that shows and describes the look and feel of the end product's user interface. It is not mandatory to have this, but it helps to visualize the end product and therefore helps the designer.
Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of an artifact can stimulate the refinement of other artifacts.


Systems development life cycle


Management and control
The SDLC phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each SDLC phase objective is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects.[9] Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.[9]
SPIU phases related to management controls.
To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the project description section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager.[9]

Work breakdown structured organization


The upper section of the work breakdown structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval.
Work breakdown structure.[9]
The middle section of the WBS is based on the seven systems development life cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and tasks, as opposed to activities, and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g., a document, decision, or analysis). A WBS task may rely on one or more activities (e.g., software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a statement of work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.[9] A minimal sketch of a WBS entry follows.
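This sketch, which is not part of the original text, shows one hypothetical way to record WBS tasks so that every task carries the measurable output required above; all field and class names are invented.

from dataclasses import dataclass, field

@dataclass
class WbsTask:
    name: str
    output: str               # the measurable output: a document, decision, or analysis
    duration_weeks: int = 2   # definitive period, usually two weeks or more

@dataclass
class WbsPhase:
    name: str
    tasks: list = field(default_factory=list)

design = WbsPhase("Design", [WbsTask("Draft screen layouts", "layout document")])
print(design.tasks[0].output)   # -> layout document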


Baselines in the SDLC


Baselines are an important part of the systems development life cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model.[10] Each baseline is considered a milestone in the SDLC:
Functional baseline: established after the conceptual design phase.
Allocated baseline: established after the preliminary design phase.
Product baseline: established after the detail design and development phase.
Updated product baseline: established after the production construction phase.

Complementary to SDLC
Complementary software development methods to the systems development life cycle (SDLC) are:
Software prototyping
Joint applications development (JAD)
Rapid application development (RAD)
Extreme programming (XP); an extension of earlier work in prototyping and RAD
Open-source development
End-user development
Object-oriented programming

Comparison of Methodology Approaches (Post & Anderson 2006)[11]

                             SDLC         RAD       Open source  Objects     JAD      Prototyping  End User
Control                      Formal       MIS       Weak         Standards   Joint    User         User
Time frame                   Long         Short     Medium       Any         Medium   Short        Short
Users                        Many         Few       Few          Varies      Few      One or two   One
MIS staff                    Many         Few       Hundreds     Split       Few      One or two   None
Transaction/DSS              Transaction  Both      Both         Both        DSS      DSS          DSS
Interface                    Minimal      Minimal   Weak         Windows     Crucial  Crucial      Crucial
Documentation and training   Vital        Limited   Internal     In Objects  Limited  Weak         None
Integrity and security       Vital        Vital     Unknown      In Objects  Limited  Weak         Weak
Reusability                  Limited      Some      Maybe        Vital       Limited  Weak         None


Strengths and weaknesses


Few people in the modern computing world would use a strict waterfall model for their systems development life cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like agile computing, but it is still a term widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. The disadvantages of using the SDLC methodology appear when there is a need for iterative development (e.g., web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing SDLC from a strength or weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed. A comparison of the strengths and weaknesses of SDLC:

Strengths and Weaknesses of SDLC

Strengths                                   Weaknesses
Control.                                    Increased development time.
Monitor large projects.                     Increased development cost.
Detailed steps.                             Systems must be defined up front.
Evaluate costs and completion targets.      Rigidity.
Documentation.                              Hard to estimate costs, project overruns.
Well defined user input.                    User input is sometimes limited.
Ease of maintenance.
Development and design standards.
Tolerates changes in MIS staffing.

An alternative to the SDLC is rapid application development, which combines prototyping, joint application development and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.

References

[1] Selecting a Development Approach (http://www.cms.hhs.gov/SystemLifecycleFramework/Downloads/SelectingDevelopmentApproach.pdf). Retrieved 27 October 2008.
[2] "Systems Development Life Cycle" (http://foldoc.org/foldoc.cgi?Systems+Development+Life+Cycle). In: Foldoc (2000-12-24).
[3] Software Development Life Cycle (SDLC) (http://docs.google.com/viewer?a=v&q=cache:bfhOl8jp1S8J:condor.depaul.edu/~jpetlick/extra/394/Session2.ppt+&hl=en&pid=bl&srcid=ADGEEShCfW0_MLC4wRbczfUxrndHTkbwguF9fZuaUCe0RDyOCWyO2PTmaPhHnZ4jRhZZ75maVO_7gVAD2ex5-QIhrj1683hMefBNkak7FkQJCAw&sig=AHIEtbRhMlZ-TUyioKEhLQQxXk1WoSJXWA), PowerPoint, powered by Google Docs.
[4] James Taylor (2004). Managing Information Technology Projects. p. 39.
[5] Geoffrey Elliott & Josh Strachan (2004). Global Business Information Technology. p. 87.
[6] QuickStudy: System Development Life Cycle (http://www.computerworld.com/s/article/71151/System_Development_Life_Cycle), by Russell Kay, May 14, 2002.
[7] US Department of Justice (2003). Information Resources Management (http://www.usdoj.gov/jmd/irm/lifecycle/ch1.htm). Chapter 1. Introduction.
[8] Marakas & O'Brien (2011). Management Information Systems. New York, NY: McGraw-Hill/Irwin. pp. 485-489. ISBN 978-0-07-337681-3.
[9] U.S. House of Representatives (1999). Systems Development Life-Cycle Policy (http://www.house.gov/cao-opp/PDFSolicitations/SDLCPOL.pdf). p. 13.
[10] Blanchard, B. S., & Fabrycky, W. J. (2006). Systems Engineering and Analysis (4th ed.). New Jersey: Prentice Hall. p. 31.
[11] Post, G., & Anderson, D. (2006). Management Information Systems: Solving Business Problems with Information Technology (4th ed.). New York: McGraw-Hill Irwin.


Further reading
Blanchard, B. S., & Fabrycky, W. J. (2006). Systems Engineering and Analysis (4th ed.). New Jersey: Prentice Hall.
Cummings, Haag (2006). Management Information Systems for the Information Age. Toronto: McGraw-Hill Ryerson.
Beynon-Davies, P. (2009). Business Information Systems. Palgrave, Basingstoke. ISBN 978-0-230-20368-6.
Computer World, 2002 (http://www.computerworld.com/developmenttopics/development/story/0,10801,71151,00.html), retrieved on June 22, 2006.
Management Information Systems, 2005 (http://www.cbe.wwu.edu/misclasses/MIS320_Spring06_Bajwa/Chap006.ppt), retrieved on June 22, 2006.
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.

External links
The Agile System Development Lifecycle (http://www.ambysoft.com/essays/agileLifecycle.html)
Pension Benefit Guaranty Corporation Information Technology Solutions Lifecycle Methodology (http://www.pbgc.gov/docs/ITSLCM V2007.1.pdf)
FSA Life Cycle Framework (http://federalstudentaid.ed.gov/static/gw/docs/lcm/FSALCMFrameworkOverview.pdf)
HHS Enterprise Performance Life Cycle Framework (http://www.hhs.gov/ocio/eplc/eplc_framework_v1point2.pdf)
The Open Systems Development Life Cycle (http://OpenSDLC.org)
System Development Life Cycle Evolution Modeling (http://www.scribd.com/doc/103966748/SDLC-Evolution-Model)

End-user computing
End-user computing (EUC) refers to systems in which non-programmers can create working applications.[1] EUC is a group of approaches to computing that aim at better integrating end users into the computing environment. These approaches attempt to realize the potential for high-end computing to perform in a trustworthy manner in problem-solving.[2][3][4] End-user computing can range in complexity from users simply clicking a series of buttons, to writing scripts in a controlled scripting language, to being able to modify and execute code directly. Examples of end-user computing are systems built using fourth-generation programming languages, such as MAPPER or SQL, or one of the fifth-generation programming languages, such as ICAD.
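As a purely illustrative sketch that is not in the original article, the following Python fragment shows one way a host application might offer a controlled scripting layer: end users supply short formulas, and only a small whitelist of arithmetic constructs is evaluated against named fields. The formula and field names are hypothetical.

import ast
import operator as op

ALLOWED_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def eval_formula(node, fields):
    # Walk the parsed formula, permitting only whitelisted constructs.
    if isinstance(node, ast.Expression):
        return eval_formula(node.body, fields)
    if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED_OPS:
        return ALLOWED_OPS[type(node.op)](eval_formula(node.left, fields),
                                          eval_formula(node.right, fields))
    if isinstance(node, ast.Name):          # a field reference such as "price"
        return fields[node.id]
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("construct not allowed in end-user formulas")

# An end user defines a pricing rule without programming the host system:
tree = ast.parse("price * quantity * (1 - discount)", mode="eval")
print(eval_formula(tree, {"price": 9.5, "quantity": 4, "discount": 0.1}))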

Factors
Of late, this discipline has moved to a co-joint architecture that features advanced interactive domain visualization coupled with an API. Some of the factors contributing to the need for further EUC research are knowledge processing, pervasive computing, issues of ontology, interactive visualization, and the like. Some of the issues related to end-user computing concern architecture (iconic versus language interface, open versus closed, and others). Other issues relate to IP, configuration and maintenance. End-user computing allows more user input into system affairs that can range from personalization to full-fledged ownership of the system.

References
[1] End-user computing (http://portal.acm.org/citation.cfm?id=1120304).
[2] McBride, Neil, "Towards User-Oriented Control of End-User Computing in Large Organizations" (http://www.cse.dmu.ac.uk/~nkm/EUC1.html).
[3] Mahmood, Adam, Advances in End User Computing Series (http://www.idea-group.com/bookseries/details.asp?id=3), University of Texas, USA. ISSN 1537-9310.
[4] End User Computing (http://www.ieuc.org/end-user-computing.html), The Institute for End User Computing, Inc.

External links
EUSES Consortium, a collaboration that researches end-user computing. (http://eusesconsortium.org/)

Middleware
In its most general sense, middleware is computer software that provides services to software applications beyond those available from the operating system. Middleware can be described as "software glue".[1] Thus middleware is not obviously part of an operating system, not a database management system, and neither is it part of one software application. Middleware makes it easier for software developers to perform communication and input/output, so they can focus on the specific purpose of their application.

Middleware in distributed applications


The term is most commonly used for software that enables communication and management of data in distributed applications. In this more specific sense middleware can be described as the dash in 'client-server'. ObjectWeb defines middleware as: "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network."[2] Services that can be regarded as middleware include enterprise application integration, data integration, message-oriented middleware (MOM), object request brokers (ORBs), and the enterprise service bus (ESB). Distributed computing system middleware can loosely be divided into two categories: those that provide human-time services (such as web request servicing) and those that perform in machine-time. This latter middleware is somewhat standardized through the Service Availability Forum and is commonly used in complex embedded systems within the telecom, defense and aerospace industries.
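To make the message-oriented middleware idea concrete, here is a minimal, hypothetical sketch (not from the original article) in which an in-process queue stands in for the middleware layer, decoupling a producer service from a consumer service. Real MOM products add persistence, routing, and network transport.

import queue
import threading

broker = queue.Queue()              # stands in for the middleware message broker

def order_service():
    # Producer: publishes a message without knowing who will consume it.
    broker.put({"order_id": 42, "amount": 99.0})

def billing_service():
    # Consumer: receives the message whenever it is ready to process it.
    message = broker.get()
    print("billing order", message["order_id"])

consumer = threading.Thread(target=billing_service)
consumer.start()
order_service()
consumer.join()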

Other examples of middleware


The term middleware is used in other contexts as well. Middleware is sometimes used in a similar sense to a software driver, an abstraction layer that hides detail about hardware devices or other software from an application.
The Android operating system uses the Linux kernel at its core, and also provides an application framework that developers incorporate into their applications. In addition, Android provides a middleware layer including libraries that provide services such as data storage, screen display, multimedia, and web browsing. Because the middleware libraries are compiled to machine language, services execute quickly. Middleware libraries also implement device-specific functions, so applications and the application framework need not concern themselves with variations between various Android devices. Android's middleware layer also contains the Dalvik virtual machine and its core Java application libraries.[3]
Game engine software such as Gamebryo and RenderWare is sometimes described as middleware, because it provides many services to simplify game development.
In simulation technology, middleware is generally used in the context of the high level architecture (HLA) that applies to many distributed simulations. It is a layer of software that lies between the application code and the run-time infrastructure. Middleware generally consists of a library of functions, and enables a number of applications (simulations or federates, in HLA terminology) to page these functions from the common library rather than re-create them for each application.
Wireless networking developers can use middleware to meet the challenges associated with wireless sensor networks (WSN). Implementing a middleware application allows WSN developers to integrate operating systems and hardware with the wide variety of applications that are currently available.[4]
The QNX operating system offers middleware for providing multimedia services for use in automobiles, aircraft and other environments.
Multimedia Home Platform (DVB-MHP) is an open middleware system standard designed by the DVB project for interactive digital television. The MHP enables the reception and execution of interactive, Java-based applications on a television set.
Universal Home API, or UHAPI, is an application programming interface (API) for consumer electronics appliances, created by the UHAPI Forum. The objective of UHAPI is to enable standard middleware to run on audio/video streaming platforms via a hardware-independent industry standard API.
The Miles Sound System provided a middleware software driver allowing developers to build software that worked with a range of different sound cards, without concerning themselves with the details of each card.
Radio-frequency identification software toolkits provide middleware to filter noisy and redundant raw data.
ILAND is a service-based middleware dedicated to real-time applications. It offers deterministic reconfiguration support in bounded time.


Boundaries
The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system.

Origins
Middleware is a relatively new addition to the computing landscape. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968.[5] It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network.

References
[1] What is Middleware? (http://www.middleware.org/whatis.html)
[2] Krakowiak, Sacha. "What's middleware?" (http://middleware.objectweb.org/). ObjectWeb.org. Retrieved 2005-05-06.
[3] Charlie Collins, Michael Galpin and Matthias Kaeppler, Android in Practice, Manning Publications, 2011.
[4] Hadim, S. and Mohamed, N. (2006). Middleware challenges and approaches for wireless sensor networks. IEEE Distributed Systems Online, vol. 7, issue 3. Retrieved March 4, 2009 from IEEE Distributed Systems Online (http://dsonline.computer.org/portal/site/dsonline/menuitem.3a529f3832e8f1e13587e0606bcd45f3/index.jsp).
[5] Gall, Nick (July 30, 2005). "Origin of the term middleware" (http://ironick.typepad.com/ironick/2005/07/update_on_the_o.html).

External links
Service Availability Forum http://www.saforum.org

Enterprise content management


Enterprise content management (ECM) is a formalized means of organizing and storing an organization's documents, and other content, that relate to the organization's processes. The term encompasses strategies, methods, and tools used throughout the lifecycle of the content.[1]

Definition
The Association for Information and Image Management (AIIM) International, the worldwide association for enterprise content management, defined the term Enterprise Content Management in 2000. AIIM has refined the abbreviation ECM several times to reflect the expanding scope and importance of information management:
Late 2005: Enterprise content management is the technologies used to capture, manage, store, preserve, and deliver content and documents related to organizational processes.
Early 2006: Enterprise content management is the technologies used to capture, manage, store, preserve, and deliver content and documents related to organizational processes. ECM tools and strategies allow the management of an organization's unstructured information, wherever that information exists.
Early 2008: Enterprise content management (ECM) is the strategies, methods and tools used to capture, manage, store, preserve, and deliver content and documents related to organizational processes. ECM tools and strategies allow the management of an organization's unstructured information, wherever that information exists.[1]
Early 2010: Enterprise content management (ECM) is the strategies, methods and tools used to capture, manage, store, preserve, and deliver content and documents related to organizational processes. ECM covers the management of information within the entire scope of an enterprise, whether that information is in the form of a paper document, an electronic file, a database print stream, or even an email.[1]
The latest definition encompasses areas that have traditionally been addressed by records management and document management systems. It also includes the conversion of data between various digital and traditional forms, including paper and microfilm.
ECM is an umbrella term covering document management, web content management, search, collaboration, records management, digital asset management (DAM), work-flow management, capture and scanning. ECM is primarily aimed at managing the life-cycle of information from initial publication or creation all the way through archival and eventual disposal. ECM applications are delivered in three ways: on-premise software (installed on the organization's own network), software as a service (SaaS) (web access to information that is stored on the software manufacturer's system), or a hybrid solution composed of both on-premise and SaaS components.
ECM aims to make the management of corporate information easier through simplifying storage, security, version control, process routing, and retention. The benefits to an organization include improved efficiency, better control, and reduced costs. For example, many banks have converted to storing copies of old checks within ECM systems versus the older method of keeping physical checks in massive paper warehouses. Under the old system, a customer request for a copy of a check might take weeks, as the bank employees had to contact the warehouse to have someone locate the right box, file and check, pull the check, make a copy and then mail it to the bank, which would eventually mail it to the customer. With an ECM system in place, the bank employee simply searches the system for the customer's account number and the number of the requested check. When the image of the check appears on screen, they are able to immediately mail it to the customer, usually while the customer is still on the phone. A minimal sketch of this kind of metadata lookup follows.
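Purely as an illustration of the retrieval scenario above (this sketch is not in the original article), an ECM index can be thought of as a lookup keyed by metadata; the in-memory structure and names below are hypothetical, standing in for a real database and imaging repository.

# Hypothetical index mapping (account number, check number) to a stored image.
ecm_index = {
    ("ACC-1001", "CHK-0042"): "images/acc1001_chk0042.tif",
}

def find_check(account_no, check_no):
    # Return the stored check image path, if it was captured and indexed.
    return ecm_index.get((account_no, check_no), "not found")

print(find_check("ACC-1001", "CHK-0042"))   # -> images/acc1001_chk0042.tif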


History
Enterprise content management, as a form of content management, combines the capture, search and networking of documents with digital archiving, document management and workflow. It specifically includes the special challenges involved in using and preserving a company's internal, often unstructured information, in all of its forms. Therefore, most ECM solutions focus on business-to-employee (B2E) systems.
As ECM solutions have evolved, new components have emerged. For example, as content is checked in and out, each use generates new metadata about the content, to some extent automatically; information about how and when the content was used can allow the system to gradually acquire new filtering, routing and search pathways, corporate taxonomies and semantic networks, and retention-rule decisions. Email and instant messaging are increasingly employed in decision-making processes; ECM can provide access to data about these communications, which can be used in business decisions.
Solutions can provide intranet services to employees (B2E), and can also include enterprise portals for business-to-business (B2B), business-to-government (B2G), government-to-business (G2B), or other business relationships. This category includes most former document-management groupware and workflow solutions that have not yet fully converted their architecture to ECM, but provide a web interface. Digital asset management is a form of ECM concerned with content stored using digital technology.
The technologies that comprise ECM today are the descendants of the late 1980s and early 1990s electronic document management systems (EDMS). The original EDMS products were stand-alone products, providing functionality in one of four areas: imaging, workflow, document management, or COLD/ERM (see components below). The typical early EDMS adopter deployed a small-scale imaging and workflow system, possibly to just a single department, in order to improve a paper-intensive process and migrate towards the mythical paperless office.
The first stand-alone EDMS technologies were designed to save time and/or improve information access by reducing paper handling and paper storage, thereby reducing document loss and providing faster access to information. EDMS could provide online access to information formerly available only on paper, microfilm, or microfiche. By improving control over documents and document-oriented processes, EDMS streamlined time-consuming business practices. The audit trail generated by EDMS enhanced document security, and provided metrics to help measure productivity and identify efficiency.
Through the late 1990s, the EDMS industry continued to grow steadily. The technologies appealed to organizations that needed targeted, tactical solutions to address clearly defined problems. As time passed, and more organizations achieved "pockets" of productivity with these technologies, it became clear that the various EDMS product categories were complementary. Organizations increasingly wanted to leverage multiple EDMS products. Consider, for example, a customer service department, where imaging, document management, and workflow could be combined to allow agents to better resolve customer inquiries. Likewise, an accounting department might access supplier invoices from a COLD/ERM system, purchase orders from an imaging system, and contracts from a document management system as part of an approval workflow.
As organizations established an Internet presence, they wanted to present information via the web, which required managing web content. Organizations that had automated individual departments now began to envision wider benefits from broader deployment. Many documents cross multiple departments and affect multiple processes. The movement toward integrated EDMS solutions merely reflected a common trend in the software industry: the ongoing integration of point solutions into more comprehensive solutions. For example, until the early 1990s, word processing, spreadsheet, and presentation software products were standalone products. Thereafter, the market shifted toward integration.

Early leaders already offered multiple stand-alone EDMS technologies. The first phase was to offer multiple systems as a single, packaged "suite", with little or no functional integration. Throughout the 1990s, integration increased. Beginning in approximately 2001, the industry began to use the term enterprise content management to refer to these integrated solutions. In 2006, Microsoft (with its SharePoint product family) and Oracle Corporation (with Oracle Content Management) joined established leaders such as EMC Documentum, Laserfiche and OpenText and entered the entry-level "value" market segment of ECM.[2][3] Open source ECM products are also available, including Alfresco, LogicalDOC, Sense/Net, eZ Publish, KnowledgeTree, Jumper 2.0, Nuxeo, and Plone.
Government standards, including HIPAA, SAS 70, BS 7799 and ISO/IEC 27001, are factors in developing and deploying ECM. Standards compliance may make outsourcing to certified service providers a viable alternative to an internal ECM deployment. Today, organizations can deploy a single, flexible ECM system to manage information in all functional departments, including customer service, accounting, human resources, etc.


Adoption drivers
There are numerous factors driving businesses to adopt an ECM solution, such as the need to increase efficiency, improve control of information, and reduce the overall cost of information management for the enterprise. ECM applications streamline access to records through keyword and full-text search, allowing employees to get the information they need directly from their desktops in seconds rather than searching multiple applications or digging through paper records.
These management systems can enhance record control to help businesses comply with government and industry regulations such as HIPAA, Sarbanes-Oxley, PCI DSS, and the Federal Rules of Civil Procedure. Security functions, including user-level, function-level and even record-specific security options, protect an organization's most sensitive data. Even information contained on a specific document can be masked using redaction features, so the rest of the document can be shared without compromising individual identity or key data. Every action taken within the system is tracked and reportable for auditing purposes for a wide variety of regulations.
ECM systems can reduce storage, paper and mailing needs, make employees more efficient, and result in better, more informed decisions across the enterprise, all of which reduce the overhead costs of managing information. SaaS ECM services can convert expensive capital outlay for servers and network equipment into a monthly operating expense, while also reducing the IT resources required to manage enterprise records.

Characteristics
Content management includes ECM, web content management (WCM), content syndication, and media asset management. Enterprise content management is not a closed-system solution or a distinct product category. Therefore, along with Document Related Technologies or Document Lifecycle Management, ECM is just one possible catch-all term for a wide range of technologies and vendors. The content and structure of today's outward-directed web portal will be the platform for tomorrow's internal information system. In his article in ComputerWoche,[4] Ulrich Kampffmeyer distilled ECM to three key ideas that distinguish such solutions from web content management:
Enterprise content management as integrative middleware. ECM is used to overcome the restrictions of former vertical applications and island architectures. The user is basically unaware of using an ECM solution. ECM offers the requisite infrastructure for the new world of web-based IT, which is establishing itself as a kind of third platform alongside conventional host and client/server systems. Therefore, EAI (enterprise application integration) and SOA (service-oriented architecture) will play an important role in the implementation and use of ECM.
Enterprise content management components as independent services. ECM is used to manage information without regard to the source or the required use. The functionality is provided as a service that can be used from all kinds of applications. The advantage of a service concept is that for any given functionality only one general service is available, thus avoiding redundant, expensive and difficult-to-maintain parallel functions. Therefore, standards for interfaces connecting different services will play an important role in the implementation of ECM.
Enterprise content management as a uniform repository for all types of information. ECM is used as a content warehouse (both data warehouse and document warehouse) that combines company information in a repository with a uniform structure. Expensive redundancies and associated problems with information consistency are eliminated. All applications deliver their content to a single repository, which in turn provides needed information to all applications. Therefore, content integration and ILM (information lifecycle management) will play an important role in the implementation and use of ECM.
Enterprise content management is working properly when it is effectively "invisible" to users. ECM technologies are infrastructures that support specialized applications as subordinate services. ECM is thus a collection of infrastructure components that fit into a multi-layer model and include all document-related technologies (DRT) for handling, delivering, and managing structured data and unstructured information jointly. As such, enterprise content management is one of the necessary basic components of the overarching e-business application area. ECM also sets out to manage all the information of a WCM and covers archiving needs as a universal repository.[5]


Components
ECM combines components which can also be used as stand-alone systems without being incorporated into an enterprise-wide system.[5] The five ECM components and technologies were first defined by AIIM as capture, manage, store, preserve, and deliver.

Capture
Capture involves converting information from paper documents into an electronic format through scanning. Capture is also used to collect electronic files and information into a consistent structure for management. Capture technologies also encompass the creation of metadata (index values) that describe characteristics of a document for easy location through search technology. For example, a medical chart might include the patient ID, patient name, date of visit, and procedure as index values to make it easy for medical personnel to locate the chart. Earlier document automation systems photographed documents for storage on microfilm or microfiche. Optical scanners now make digital copies of paper documents. Documents already in digital form can be copied, or linked to if they are already available online. Automatic or semi-automatic capture can use EDI or XML documents, business and ERP applications, or existing specialist application systems as sources.
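The medical-chart example can be made concrete with a short sketch. The following Python fragment shows a captured image paired with its index values and a simple search over them; the field names and values are invented for illustration and are not drawn from any particular ECM product.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CapturedDocument:
    """A scanned image plus the index values (metadata) created at capture time."""
    image_path: str                              # location of the scanned image
    index: dict = field(default_factory=dict)    # searchable index values

# Index values like these let search technology locate the chart later.
chart = CapturedDocument(
    image_path="scans/chart_000123.tif",
    index={
        "patient_id": "000123",                  # hypothetical field names
        "patient_name": "Jane Doe",
        "visit_date": date(2012, 3, 14).isoformat(),
        "procedure": "annual physical",
    },
)

def matches(doc: CapturedDocument, **criteria) -> bool:
    """True if every given criterion equals the document's index value."""
    return all(doc.index.get(k) == v for k, v in criteria.items())

print(matches(chart, patient_id="000123"))       # True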

Recognition technologies
Various recognition technologies can be used to extract information from scanned documents and digital faxes, including:
Optical character recognition (OCR): converts images of typeset text into alphanumeric characters.
Handprint character recognition (HCR): converts images of handwritten text into alphanumerics. Gives better results for short text in fixed locations than for freeform text.
Intelligent character recognition (ICR): extends OCR and HCR to use comparison, logical connections, and checks against reference lists and existing master data to improve recognition. For example, on a form where a column of numbers is added up, the accuracy of the recognition can be checked by adding the recognized numbers and comparing them to the sum written on the original form (a short sketch of this check appears at the end of this subsection).
Optical mark recognition (OMR): reads special markings, such as checkmarks or dots, in predefined fields.
Barcode recognition: decodes industry-standard encodings of product and other commercial data.
Image cleanup
Image cleanup features include rotation, straightening, color adjustment, transposition, zoom, aligning, page separation, annotations and despeckling.
Forms processing
In forms capture, there are two groups of technologies, although the information content and character of the documents may be identical. Forms processing is the capture of printed forms via scanning; recognition technologies are often used here, since well-designed forms enable largely automatic processing. Automatic processing can be used to capture electronic forms, such as those submitted via web pages, as long as the layout, structure, logic, and contents are known to the capture system.
COLD
Computer Output to Laser Disc (COLD) records reports and other documents on optical disks, or any other form of digital storage, for ongoing management by ECM systems. Another term for this is enterprise report management (ERM). Originally, the technology only worked with laserdiscs; the name was not changed after other technologies supplanted the laserdisc.
Aggregation
Aggregation combines documents from different applications. The goal is to unify data from different sources, forwarding them to storage and processing systems in a uniform structure and format.
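The ICR plausibility check described above, adding a recognized column of numbers and comparing the result to the recognized total, reduces to a few lines of code. A minimal sketch, with invented amounts:

def plausibility_check(recognized_values, recognized_total):
    """ICR-style logical check: do the recognized line items add up to the
    recognized total? A mismatch flags the field for manual review."""
    return sum(recognized_values) == recognized_total

# Amounts (in cents) that OCR read from a form's column and its total line.
items = [1250, 830, 415]    # illustrative values, not real form data
total = 2495
if plausibility_check(items, total):
    print("Recognition confirmed by checksum")
else:
    print("Recognition suspect: route document to manual indexing")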


Indexing components
Indexing improves searches, and provides alternative ways to organize the information. Manual indexing assigns index database attributes to content by hand; these attributes are typically used by the database of a "manage" component for administration and access. Manual indexing may make use of input designs to limit the information that can be entered; for example, entry masks may use program logic to restrict inputs based on other information known about the document. Both automatic and manual attribute indexing can be made easier and better with preset input-design profiles; these can describe document classes that limit the number of possible index values, or automatically assign certain criteria. Automatic classification programs can extract index, category, and transfer data autonomously. Automatic classification or categorizing, based on the information contained in electronic information objects, can evaluate information based on predefined criteria or in a self-learning process. This technique can be used with OCR-converted faxes, office files, or output files.
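Classification against predefined criteria can be as simple as keyword rules. The sketch below is purely illustrative; real products use far richer features and often self-learning classifiers rather than fixed keyword lists.

# Each document class is defined by invented example keywords; first match wins.
CLASS_RULES = {
    "invoice":  ["invoice", "amount due", "payment terms"],
    "contract": ["agreement", "party", "hereinafter"],
    "resume":   ["curriculum vitae", "work experience"],
}

def classify(text: str, default: str = "unclassified") -> str:
    """Assign a document class when any predefined criterion appears in the text."""
    lowered = text.lower()
    for doc_class, keywords in CLASS_RULES.items():
        if any(k in lowered for k in keywords):
            return doc_class
    return default

print(classify("Invoice no. 42 - amount due: $1,200"))   # -> "invoice"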


Manage
The Manage category includes five traditional application areas:
Document management (DM)
Collaboration (or collaborative software, a.k.a. groupware)
Web content management (including web portals)
Records management
Workflow and business process management (BPM)

The Manage category connects the other components, which can be used in combination or separately. Document management, web content management, collaboration, workflow and business process management address the dynamic part of the information's lifecycle. Records management focuses on managing finalized documents in accordance with the organization's document retention policy, which in turn must comply with government mandates and industry practices.[6] All Manage components incorporate databases and access authorization systems. Manage components are offered individually or integrated as suites. In many cases they already include the "store" components.
Document management
Document management, in this context, refers to document management systems in the narrow sense of controlling documents from creation to archiving. Document management includes functions like the following (the first two are illustrated in the sketch below):
Check in/check out: for checking stored information for consistency.
Version management: to keep track of different versions of the same information, with revisions and renditions (the same information in a different format).
Search and navigation: for finding information and its associated contexts.
Organizing documents: in structures like files, folders, and overviews.
However, document management increasingly overlaps with other "Manage" components, office applications like Microsoft Outlook and Exchange or Lotus Notes and Domino, and "library services" for administering information storage.
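Check-in/check-out and version management can be sketched in a few lines. This is a deliberately minimal illustration under invented names, not the API of any real document management system.

class DocumentRepository:
    """Minimal check-in/check-out with version tracking (illustrative only)."""
    def __init__(self):
        self._versions = {}     # doc_id -> list of content versions
        self._locked_by = {}    # doc_id -> user holding the checkout lock

    def check_out(self, doc_id, user):
        """Lock the document for one user and return its latest version."""
        if self._locked_by.get(doc_id):
            raise RuntimeError(f"{doc_id} already checked out by {self._locked_by[doc_id]}")
        self._locked_by[doc_id] = user
        return self._versions.get(doc_id, [""])[-1]

    def check_in(self, doc_id, user, content):
        """Store a new version and release the lock; requires a matching check-out."""
        if self._locked_by.get(doc_id) != user:
            raise RuntimeError("check-in requires a matching check-out")
        self._versions.setdefault(doc_id, []).append(content)
        del self._locked_by[doc_id]

repo = DocumentRepository()
repo.check_out("policy-7", "alice")
repo.check_in("policy-7", "alice", "Policy text, revision 1")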

Collaboration
Collaboration components in an ECM system help users work with each other to develop and process content. Many of these components were developed from collaborative software, or groupware, packages; ECM collaborative systems go much further, and include elements of knowledge management. ECM systems facilitate collaboration by using information databases and processing methods that are designed to be used simultaneously by multiple users, even when those users are working on the same content item. They make use of knowledge based on skills, resources and background data for joint information processing. Administration components, such as virtual whiteboards for brainstorming, appointment scheduling and project management systems, and communications applications such as video conferencing, may be included. Collaborative ECM may also integrate information from other applications, permitting joint information processing.
Web content management
Enterprise content management integrates web content management systems within its scope. WCM as an ECM component is used to present information that already exists and is managed in the ECM repository.[7] However, information presented via web technologies (on the Internet, an extranet, or on a portal) uses the workflow, access control, versioning, delivery and authorization modules of the WCM instead of ECM's own integrated functionality. There are only a few examples of successful implementations in which a shared repository for documents and web content is managed jointly.
Records management (file and archive management)
Unlike traditional electronic archival systems, records management refers to the pure administration of records: important information and data that companies are required to archive. Records management is independent of storage media; managed information does not necessarily need to be stored electronically, but can be on traditional physical media as well. Some of the functions of records management are:
Visualisation of file plans and other structured indexes for the orderly storage of information
Unambiguous indexing of information, supported by thesauri or controlled wordlists
Management of record retention schedules and deletion schedules
Protection of information in accordance with its characteristics, sometimes down to individual content components in documents
Use of international, industry-specific or company-wide standardized metadata for the unambiguous identification and description of stored information
Workflow/business process management
Workflow and business process management differ substantially.
Workflow
There are different types of workflow: production workflow uses predefined sequences to guide and control processes, whereas in an ad-hoc workflow the user determines the process sequence on the fly (a minimal engine for the production case is sketched after the business process management discussion below). Workflow can be implemented as workflow solutions with which users interact, or as workflow engines, which act as a background service controlling the information and data flow. Workflow management includes the following functions:
Visualisation of process and organization structures
Capture, administration, visualization, and delivery of grouped information with its associated documents or data
Incorporation of data processing tools (such as specific applications) and documents (such as office products)
Parallel and sequential processing of procedures, including simultaneous saving
Reminders, deadlines, delegation and other administration functionalities


Monitoring and documentation of process status, routing, and outcomes
Tools for designing and displaying processes
The objective is to automate processes as much as possible by incorporating all necessary resources.
Business process management
Business process management (BPM) goes a step further than workflow. Although the words are often used interchangeably, BPM aims to completely integrate all of the affected applications within an enterprise, monitoring processes and assembling all required information. Among BPM's functions are:
complete workflow functionality, with process and data monitoring at the server level;
enterprise application integration, used to link different applications;
business intelligence, with rule structures, integrating information warehouses and providing utilities that assist users in their work.
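The production-workflow idea, a step sequence fixed in advance with the engine only advancing work items along it, can be illustrated with a toy engine. The step names and the sequence are invented for the example; an ad-hoc workflow would instead let the user supply the next step at runtime.

# A production workflow: the sequence is predefined, not chosen by the user.
INVOICE_APPROVAL = ["capture", "index", "approve", "post", "archive"]

class WorkflowEngine:
    def __init__(self, sequence):
        self.sequence = sequence
        self.position = {}            # item_id -> index into the sequence

    def start(self, item_id):
        """Route a new work item to the first step."""
        self.position[item_id] = 0

    def complete_step(self, item_id):
        """Mark the current step done and route the item to the next one."""
        self.position[item_id] += 1
        if self.position[item_id] >= len(self.sequence):
            return "done"
        return self.sequence[self.position[item_id]]

engine = WorkflowEngine(INVOICE_APPROVAL)
engine.start("inv-001")
print(engine.complete_step("inv-001"))    # -> "index"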


Store
Store components temporarily store information that isn't required, desired, or ready for long-term storage or preservation. Even if the Store component uses media that are suitable for long-term archiving, "Store" is still separate from "Preserve." The Store components can be divided into three categories: repositories as storage locations, library services as administration components for repositories, and storage technologies. These infrastructure components are sometimes held at the operating system level (like the file system), and also include security technologies that work in tandem with the "Deliver" components. However, security technologies, including access control, are superordinate components of an ECM solution.
Repositories
Different kinds of ECM repositories can be used in combination. Among the possible kinds are:
File systems: used primarily for temporary storage, as input and output caches. ECM's goal is to reduce the data burden on the file system, and to make the information generally available through Manage, Store, and Preserve technologies.
Content management systems: the actual storage and repository system for content, which can be a database or a specialized storage system.
Databases: administer access information, but can also be used for the direct storage of documents, content, or media assets.
Data warehouses: complex storage systems based on databases, which reference or provide information from all kinds of sources. They can also be designed with global functions, such as document or information warehouses.
Library services
Library services are the administrative components of the ECM system that handle access to information. The library service is responsible for taking in and storing information from the Capture and Manage components. It also manages the storage locations in dynamic storage, the actual "Store," and in the long-term Preserve archive. The storage location is determined only by the characteristics and classification of the information. The library service works in concert with the Manage components' database to provide the necessary functions of search and retrieval.

While the database does not "know" the physical location of a stored object, the library service manages online storage (direct access to data and documents), nearline storage (data and documents on a medium that can be accessed quickly, but not immediately, such as data on an optical disc that is present in a storage system's racks but not currently inserted in a drive that can read it), and offline storage (data and documents on a medium that is not quickly available, such as data stored offsite); a minimal tiering rule is sketched after the list below. If the document management system does not provide the functionality, the library service must have version management to control the status of information, and check-in/check-out for controlled information provision. The library service generates logs of information usage and editing, called an "audit trail."
Storage technologies
A wide variety of technologies can be used to store information, depending on the application and system environment:
Magnetic online media: hard drives, typically configured as RAID systems, may be locally attached, part of a storage area network (SAN), or mounted from another server (network-attached storage).
Magnetic tape: magnetic tape data storage, in the form of automated storage units called tape libraries, uses robotics to provide nearline storage. Standalone tape drives may be used for backup, but not online access.
Digital optical media: besides the common Compact Disc and DVD optical media in write-once or rewritable forms, storage systems may use other specialized optical formats like magneto-optical drives for storage and distribution of data. Optical jukeboxes can be used for nearline storage; optical media in jukeboxes can be removed, transitioning them from nearline to offline storage.
Cloud computing: data can be stored on offsite cloud computing servers, accessed via the Internet.
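The online/nearline/offline distinction managed by the library service can be reduced to a simple tiering rule. The age thresholds below are invented for illustration; real library services classify by the characteristics and classification of the information, not just by access date.

from datetime import date, timedelta

def storage_tier(last_accessed, today=None):
    """Illustrative tiering rule: recently used objects stay online;
    aging ones move to nearline, then offline media."""
    today = today or date.today()
    age = today - last_accessed
    if age < timedelta(days=90):
        return "online"      # e.g., RAID disk, immediate access
    if age < timedelta(days=365 * 2):
        return "nearline"    # e.g., optical jukebox or tape library
    return "offline"         # e.g., media stored offsite

print(storage_tier(date.today() - timedelta(days=400)))   # -> "nearline"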


Preserve
Preserve involves the long-term, safe storage and backup of static, unchanging information. Preservation is typically accomplished by the records management features of an ECM system, many of which are designed to help companies comply with government and industry regulations. Eventually, content ceases to change and becomes static. The Preserve components of ECM handle the long-term, safe storage and backup of static information, as well as the temporary storage of information that does not need to be archived. Electronic archiving, a related concept, has substantially broader functionality than ECM Preserve components. Electronic archiving systems generally consist of a combination of administration software like records management, imaging or document management, library services or information retrieval systems, and storage subsystems. Other forms of media are also suitable for long-term archiving. If the desire is merely to ensure information is available in the future, microfilm is still viable; unlike many digital records, microfilm is readable without access to the specialized software that created it. Hybrid systems combine microfilm with electronic media and database-supported access. Long-term storage systems require the timely planning and regular performance of data migrations in order to keep information available in the changing technical landscape. As storage technologies fall into disuse, information must be moved to newer forms of storage, so that the stored information remains accessible using contemporary systems. For example, data stored on floppy disks becomes essentially unusable if floppy disk drives are no longer readily available; migrating the data stored on floppy disks to Compact Discs preserves not only the data, but the ability to access it. This ongoing process is called continuous migration. The Preserve components contain special viewers, conversion and migration tools, and long-term storage media:
Long-term storage media
WORM optical disc: write once, read many (WORM) rotating digital optical storage media, including the 5.25-inch or 3.5-inch WORM disc in a protective sleeve, as well as CD-R and DVD-R. Recording methods vary for these media, which are held in jukeboxes for online and automated nearline access.
WORM tape: magnetic tapes used in special drives, which can be as secure as optical write-once, read-many media if used properly with specially secured tapes.
WORM hard disk drive: magnetic disk storage with special software protection against overwriting, erasure, and editing; delivers security similar to optical write-once, read-many media. This category includes content-addressable storage.
Storage networks: storage networks, such as network-attached storage and storage area networks, can be used if they meet the requirements of edit-proof auditing with unchangeable storage and protection against manipulation and erasure.
Microform: microforms like microfilm, microfiche, and aperture cards can be used to back up information that is no longer in use and does not require machine processing. They are typically used only to double-secure originally electronic information.
Paper: paper still has use as a long-term storage medium, since it does not require migration and can be read without any technical aids. In ECM systems, however, it is used only to double-secure originally electronic information.
Long-term preservation strategies
To secure the long-term availability of information, different strategies are used for electronic archives. The continuous migration of applications, index data, metadata and objects from older systems to new ones generates a lot of work, but secures the accessibility and usability of information. During this process, information that is no longer relevant can be deleted. Conversion technologies are used to update the format of the stored information, where needed. Emulation of older software allows users to run and access the original data and objects. Special viewer software can identify the format of the preserved objects and can display the objects in the new software environment. Standards for interfaces, metadata, data structures and object formats are important to secure the availability of information.
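Continuous migration can be pictured as a periodic sweep over the archive. The format names and converter below are placeholders; PDF/A is shown as one commonly cited long-term target format, not as a requirement of any particular system.

# Continuous migration: periodically move content off formats that are
# falling into disuse. All names here are illustrative.
OBSOLETE_FORMATS = {"wordperfect", "floppy-image"}

def migrate(archive, convert):
    """archive: dict of object_id -> format; convert: callable doing the work."""
    for object_id, fmt in list(archive.items()):
        if fmt in OBSOLETE_FORMATS:
            convert(object_id, fmt, target="pdf-a")   # PDF/A as long-term target
            archive[object_id] = "pdf-a"

archive = {"doc-1": "pdf-a", "doc-2": "wordperfect"}
migrate(archive, lambda oid, src, target: print(f"converting {oid}: {src} -> {target}"))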




Deliver
The Deliver components of ECM present information from the Manage, Store, and Preserve components. The AIIM component model for ECM is function-based and doesn't impose a strict hierarchy; the Deliver components may contain functions used to enter information into other systems (such as transferring information to portable media, or generating formatted output files), or for readying information, such as by converting its format or compressing it, for the "Store" and "Preserve" components. The Deliver category's functionality is also known as "output"; technologies in this category are often termed output management. The Deliver components break down into three groups: transformation technologies, security technologies, and distribution. Transformation and security, as services, are middleware and should be equally available to all ECM components. For output, two functions are of primary importance: layout and design, with tools for laying out and formatting output, and publishing, with applications for presenting information for distribution and publication. In short, ECM delivery provides information to users. Secure distribution, collaboration, and version control take the forefront. In some cases, these components are still deployed as stand-alone systems without being incorporated into an enterprise-wide ECM system.
Methods
On-premise
ECM was developed as a traditional software application that companies implemented on their own corporate networks. In this scenario, each individual company manages and maintains both the ECM application and the network storage devices that hold the data. Many on-premise ECM systems are highly customized for individual organizational needs. A note about capture: since paper document capture requires the use of physical scanning devices, like scanners or multi-function devices, it is typically performed on-premise. However, it can be outsourced to businesses that provide scanning services. Known as service bureaus, these companies complete high-volume scanning and indexing and return the electronic files to organizations via web transfer or on CDs, DVDs, or other external storage devices.
Software as a service (SaaS)
SaaS ECM means that rather than deploying software on an in-house network, users access the application and their data online. It is also known as cloud computing, hosted, and on-demand ECM. As SaaS distribution technologies mature, businesses can count on receiving the same features and customization capabilities they have come to expect from on-premise ECM applications. SaaS delivery allows companies to begin using ECM more quickly, since they do not have to purchase hardware or configure the applications, databases, or servers. In addition, organizations trade the capital costs associated with a hardware and software purchase for a monthly operating expense, and gain storage capacity that grows automatically to accommodate company growth.
Hybrid
In some scenarios, companies find that a hybrid composed of both SaaS and on-premise software works best for their situation. For example, hybrid ECM systems are being used to bridge the gap during company moves or to simplify information exchange following an acquisition. Hybrid is also used when companies want to manage their own ECM on-premise, but also provide easy web access to certain information for business partners or customers using a SaaS model. Hybrid makes the most sense when the two technologies are provided by the same manufacturer, so that features and interfaces are an exact match.

Transformation technologies
Transformations should always be controlled and trackable. This is done by background services which the end user generally does not see. Among the transformation technologies are:
Computer output to laser disc (COLD): unlike its use in the Capture stage, when used for delivery COLD prepares output data for distribution and transfer to the archive. Typical applications are lists and formatted output (for example, individualized customer letters). These technologies also include journals and logs generated by the ECM components. Unlike most imaging media, COLD records are indexed not in a database table, but by absolute positions within the document itself (i.e., page 1, line 82, position 12); a positional-lookup sketch follows at the end of this section. As a result, COLD index fields are not available for editing after submission unless they are converted into a standard database.
Personalization: functions and output can be customized to a particular user's needs.
XML (Extensible Markup Language): a markup language that allows the description of interfaces, structures, metadata, and documents in a standardized, cross-platform manner.
PDF (Portable Document Format): a cross-platform print and distribution format. Unlike image formats such as TIFF, PDFs permit content searches, the addition of metadata, and the embedding of electronic signatures. When generated from electronic data, PDFs are resolution-independent, allowing crisp reproduction at any scale.
XPS (XML Paper Specification): an XML specification developed by Microsoft, describing the formats and rules for distributing, archiving, rendering, and processing XPS documents.
Converters and viewers: serve to reformat information to generate uniform formats, and also to display and output information from different formats.
Compression: used to reduce the storage space needed for pictorial information.
Syndication: used for presenting content in different formats, selections, and forms in the context of content management. Syndication allows the same content to be used multiple times in different forms for different purposes.
Security technologies
Security technologies are available to all ECM components. For example, electronic signatures are used not only when documents are sent, but also in data capture via scanning, in order to document the completeness of the capture. Public key infrastructure is a basic technology for electronic signatures; it manages keys and certificates, and checks the authenticity of signatures. Other electronic signatures confirm the identity of the sender and the integrity of the sent data, i.e., that it is complete and unchanged. In Europe, there are three forms of electronic signatures of differing quality and security: simple, advanced, and qualified. In most European states the qualified electronic signature is legally admissible in legal documents and contracts. Digital rights management and watermarking are used in content syndication and media asset management, to manage and secure intellectual property rights and copyrights. Digital rights management works with techniques like electronic watermarks that are integrated directly into the file, and seeks to protect usage rights and content that is published on the Internet.
Distribution
All of the above technologies serve to provide an ECM's contents to users by various routes, in a controlled and user-oriented manner. These can be active components, such as e-mail, data media, and memos, or passive publication on websites and portals where users can get the information themselves. Possible output and distribution media include:
The Internet
Extranets
Intranets
E-business portals
Employee portals
E-mail
Fax
Data transfer by EDI, XML or other formats
Mobile devices, like mobile phones, PDAs, and others
Data media like CDs and DVDs
Digital TV and other multimedia services
Paper
The various Deliver components provide information to users in the best way for the given application, while controlling its use as far as possible.
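The COLD indexing scheme described above, addressing a value by absolute position rather than a database field, can be shown with a toy formatted report. The report content and coordinates are invented for the example.

# COLD output is addressed by absolute position in the formatted report:
# (page, line, position) locates a value instead of a database column.
report_pages = [
    ["ACME LTD CUSTOMER STATEMENT",     # page 1, line 1
     "BALANCE: 120.50"],                # page 1, line 2
]

def read_field(pages, page, line, start, length):
    """Return the characters at an absolute (page, line, position) address."""
    text = pages[page - 1][line - 1]    # 1-based addressing, as in COLD indexes
    return text[start - 1:start - 1 + length].strip()

print(read_field(report_pages, page=1, line=2, start=10, length=6))  # -> "120.50"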

ECM market development


Prior to 2003, the ECM market was dominated by a number of medium-sized independent vendors that fell into two categories: those who had originated as document management companies (Advanced Processing & Imaging, Documentum, Laserfiche, FileNet, OpenText, Db technology) and had begun adding management of other enterprise content, and those who had started as web content management providers (Interwoven, Vignette, Stellent) and had begun trying to branch out into managing other types of content, such as business documents and rich media. Larger vendors, such as IBM and Oracle, also had offerings in this space, and the market remained largely fragmented. In 2002, Documentum added collaboration capabilities with its acquisition of eRoom, while Interwoven and Vignette countered with their respective acquisitions of iManage and Intraspect. Similarly, Documentum purchased Bulldog for its digital asset management (DAM) capabilities, while Interwoven and OpenText countered with acquisitions of MediaBin and Artesia. OpenText also acquired the European companies IXOS and RedDot to shore up its software portfolio. In October 2003, EMC Corporation acquired Documentum. Soon EMC's primary competitors in the database space responded, as IBM purchased FileNet and Oracle purchased Stellent in 2006. OpenText also purchased Hummingbird in 2006. Hewlett-Packard (HP) entered the ECM space with its acquisition of the Australian company Tower Software in 2008. In March 2009, Autonomy purchased Interwoven; in July 2009, OpenText acquired Vignette; and in February 2011, OpenText acquired Metastorm. Most recently, OpenText acquired Global 360 in July 2011,[8] and HP made an agreement to purchase Autonomy in August 2011.[9] In April 2007, the independent analyst firm CMS Watch noted that "some of the biggest names in this business are undergoing substantial transformation that will lead to shifting road maps and product sets over the next few years".[10] In addition, 2007 saw the emergence of open-source options for ECM supplied by Nuxeo and Alfresco, along with a software-as-a-service offering from SpringCM. In 2008, Sense/Net released Sense/Net 6.0, an open-source ECM and EPS solution.[11]

There are a number of software companies that have sprung up to develop applications that complement ECM with specific functions and features. Some provide third-party document and image viewers, such as LEAD Technologies, Microsoft, and Accusoft.[12] Others provide workflow tools, such as Office Gemini, SpringCM, and docAssist. There are also several companies that provide plugins for ECM systems. The Web 2.0 wave brought new players to the market with strength in web-based delivery. Koral, Box.net, and EchoSign, all available on the Salesforce.com AppExchange platform, are representative of this trend.[13] Web 2.0 was also instrumental in bringing Cygnet ECM, an entirely web-based ECM product, to the market.[14] Enterprises are increasingly implementing analytics tools to help present targeted content to users in order to improve productivity, sales and user engagement. This has been referred to by some as "web engagement management".[15][16]


Gartner estimated that the ECM market was worth approximately $3.3 billion in 2008 and expected it to grow at a compound annual growth rate of 9.5 percent through 2013. After a wave of industry consolidation, only three or four major companies are left in this space, and the industry as a whole is undergoing a significant transformation as Microsoft commoditizes content-management components.[17] According to Gartner's 2009 report, 75 percent of Global 2000 companies were highly likely to have a desktop-focused, process-focused content management implementation by 2008, and ECM would continue to absorb other technologies, such as digital asset management and e-mail management. Gartner also predicted further market consolidation, acquisition, and separation of vendors into platform and solution providers.[17] Currently, enterprise information management (EIM) is gaining interest from organizations trying to approach information management (whether of structured or unstructured information) from an enterprise perspective. EIM combines ECM and business intelligence. Cloud content management is emerging as a web-based alternative, combining the content focus of ECM with the collaborative elements of social business software.
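As a check on the arithmetic implied by the Gartner figures above: compounding $3.3 billion at 9.5 percent for the five years from 2008 to 2013 gives roughly $5.2 billion.

# Implied market size from the cited figures: $3.3 billion in 2008
# growing at a 9.5% compound annual rate over five years (2008 -> 2013).
base, cagr, years = 3.3, 0.095, 5
projected = base * (1 + cagr) ** years
print(f"Implied 2013 ECM market: ${projected:.1f} billion")   # ~ $5.2 billion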

Footnotes
[1] "What is Enterprise Content Management (ECM)?" (http:/ / www. aiim. org/ What-is-ECM-Enterprise-Content-Management. aspx). AIIM. Association for Information and Image Management. . Retrieved September 20, 2010. [2] Microsoft launched its ECM strategy with MOSS 2007; Oracle, with Oracle 10g and the acquisition of Stellent, both in late 2006. [3] Evolving Electronic Document Management Solutions: The Doculabs Report, Third Edition. Chicago: Doculabs, 2002. [4] Ulrich Kampffmeyer, "ECM Herrscher ber Informationen". ComputerWoche, CW-exktraKT, Munich, September 24th, 2001. [5] Trends in Records, Document and Enterprise Content Management. Whitepaper. S.E.R. conference, Visegrd, September 28th, 2004 PDF (http:/ / www. project-consult. net/ Files/ ECM_Handout_english_SER. pdf) original source of this Wikipedia article by the German consulting company Project Consult Unternehmensberatung [6] Kampffmeyer, Ulrich (2006). "ECM: Enterprise Content Management" (http:/ / www. project-consult. net/ Files/ ECM_White Paper_kff_2006. pdf) (in English, French, and German). DMS EXPO 2006, Kln. Hamburg: PROJECT CONSULT. ISBNISBN 978-3-936534-09-2. . Retrieved September 20, 2010. [7] Ulrich Kampffmeyer, Enterprise Content Management, 2006 [8] "OpenText profiting from acquisitions as it extends reach" (http:/ / www. therecord. com/ news/ business/ article/ 562424--opentext-profiting-from-acquisitions-as-it-extends-reach). July 13, 2011. . Retrieved August 25, 2011. [9] "HP to Acquire Leading Enterprise Information Management Software Company Autonomy Corporation plc" (http:/ / www. hp. com/ hpinfo/ newsroom/ press/ 2011/ 110818xc. html). August 18, 2011. . Retrieved August 25, 2011. [10] Manoj Jasra (April 17, 2007). "CMS Watch Releases Enterprise CMS Comparison Report" (http:/ / www. webanalyticsworld. net/ 2007/ 04/ cms-watch-releases-enterprise-cms. html). . Retrieved September 21, 2010. [11] Open Source ECM continues to grow (http:/ / www. cmswatch. com/ Trends/ 996-Open-Source-ECM-continues-to-grow) [12] Tessa Magee (October 27, 2011). "Accusoft Pegasus Celebrates 20th Anniversary" (http:/ / www. accusoft. com/ news_accusoft-pegasus-celebrates-20th-anniversary. htm). . Retrieved November 29, 2011. [13] Ismael Ghalimi (2007-04-10). "First Koral, then ThinkFree and EchoSign" (http:/ / itredux. com/ blog/ 2007/ 04/ 10/ first-koral-then-thinkfree-and-echosign/ ). ITRedux. . [14] DM, Rank and file: A case study. Retrieved 10 November 2010 from http:/ / www. document-manager. com/ articles/ reviews. asp?a_id=336 [15] Brice Dunwoodie, "What is Web Engagement Management (WEM)?" (http:/ / www. cmswire. com/ cms/ web-engagement/ what-is-web-engagement-management-wem-007400. php), CMSWire, 2010-05-05



[16] Elcom, "Web Engagement Management (WEM)" (http://www.elcom.com.au/Products/Web-Engagement-Management), Elcom, 2010-11-22.
[17] Bell, Toby; Shegda, Karen M.; Gilbert, Mark R.; Chin, Kenneth (November 16, 2010). "Magic Quadrant for Enterprise Content Management" (http://www.gartner.com/technology/media-products/reprints/microsoft/vol14/article8/article8.html). Gartner.com. Gartner. Retrieved August 25, 2011.


Bibliography
Kampffmeyer, Ulrich (2006). "ECM: Enterprise Content Management" (http://www.project-consult.net/Files/ECM_White Paper_kff_2006.pdf) (in English, French, and German). DMS EXPO 2006, Köln. Hamburg: PROJECT CONSULT. ISBN 978-3-936534-09-2. Retrieved September 20, 2010.
Fray, Michael (2008). "ECM - Enterprise Content Management" (http://www.globe.dk/alle/314-ecm---enterprise-content-management-9788779008311-9788779008311.html) (in Danish). Denmark: Forlaget Globe. ISBN 978-87-7900-831-1. Retrieved May 23, 2011.

References

Knowledge management
Knowledge management (KM) comprises a range of strategies and practices used in an organization to identify, create, represent, distribute, and enable adoption of insights and experiences. Such insights and experiences comprise knowledge, either embodied in individuals or embedded in organizations as processes or practices. An established discipline since 1991 (see Nonaka 1991), KM includes courses taught in the fields of business administration, information systems, management, and library and information sciences (Alavi & Leidner 1999). More recently, other fields have started contributing to KM research; these include information and media, computer science, public health, and public policy. Many large companies and non-profit organizations have resources dedicated to internal KM efforts, often as a part of their business strategy, information technology, or human resource management departments (Addicott, McGivern & Ferlie 2006). Several consulting companies also exist that provide strategy and advice regarding KM to these organizations. Knowledge management efforts typically focus on organizational objectives such as improved performance, competitive advantage, innovation, the sharing of lessons learned, integration, and continuous improvement of the organization. KM efforts overlap with organizational learning, and may be distinguished from it by a greater focus on the management of knowledge as a strategic asset and on encouraging the sharing of knowledge. KM is seen as an enabler of organisational learning[1] and as a more concrete mechanism than earlier, more abstract research.

History
KM efforts have a long history, including on-the-job discussions, formal apprenticeship, discussion forums, corporate libraries, professional training, and mentoring programs. More recently, with the increased use of computers in the second half of the 20th century, specific adaptations of technologies such as knowledge bases, expert systems, knowledge repositories, group decision support systems, intranets, and computer-supported cooperative work have been introduced to further enhance such efforts.[2] In 1999, the term personal knowledge management was introduced; it refers to the management of knowledge at the individual level (Wright 2005). In terms of the enterprise, early collections of case studies recognized the importance of the knowledge management dimensions of strategy, process, and measurement (Morey, Maybury & Thuraisingham 2002). Key lessons learned included: people and the cultural norms which influence their behaviors are the most critical resources for successful knowledge creation, dissemination, and application; cognitive, social, and organizational learning processes are essential to the success of a knowledge management strategy; and measurement, benchmarking, and incentives are essential to accelerate the learning process and to drive cultural change. In short, knowledge management programs can yield impressive benefits to individuals and organizations if they are purposeful, concrete, and action-oriented. More recently, with the advent of Web 2.0, the concept of knowledge management has evolved towards a vision based more on people participation and emergence. This line of evolution is termed Enterprise 2.0 (McAfee 2006). However, there is ongoing debate and discussion (Lakhani & McAfee 2007) as to whether Enterprise 2.0 is just a fad that does not bring anything new or useful or whether it is, indeed, the future of knowledge management (Davenport 2008).


Research
KM emerged as a scientific discipline in the early 1990s. It was initially supported solely by practitioners, when Skandia hired Leif Edvinsson of Sweden as the world's first Chief Knowledge Officer (CKO). Hubert Saint-Onge (formerly of CIBC, Canada) had started investigating various sides of KM long before that. The objective of CKOs is to manage and maximize the intangible assets of their organizations. Gradually, CKOs became interested in not only practical but also theoretical aspects of KM, and the new research field was formed. KM ideas were taken up by academics such as Ikujiro Nonaka (Hitotsubashi University), Hirotaka Takeuchi (Hitotsubashi University), Thomas H. Davenport (Babson College) and Baruch Lev (New York University). In 2001, Thomas A. Stewart, former editor at FORTUNE magazine and subsequently the editor of Harvard Business Review, published a cover story highlighting the importance of intellectual capital in organizations. Since its establishment, the KM discipline has been gradually moving towards academic maturity. First, there is a trend towards higher cooperation among academics; particularly, there has been a drop in single-authored publications. Second, the role of practitioners has changed: their contribution to academic research declined dramatically, from 30% of overall contributions up to 2002 to only 10% by 2009 (Serenko et al. 2010). A broad range of thoughts on the KM discipline exist; approaches vary by author and school. As the discipline matures, academic debates have increased regarding both the theory and practice of KM, to include the following perspectives:
Techno-centric, with a focus on technology, ideally technologies that enhance knowledge sharing and creation.
Organizational, with a focus on how an organization can be designed to facilitate knowledge processes best.
Ecological, with a focus on the interaction of people, identity, knowledge, and environmental factors as a complex adaptive system akin to a natural ecosystem.
Regardless of the school of thought, core components of KM include people, processes, and technology (or culture, structure, and technology), depending on the specific perspective (Spender & Scherer 2007). Different KM schools of thought include various lenses through which KM can be viewed and explained, including:
community of practice (Wenger, McDermott & Synder 2001)[3]
social network analysis[4]
intellectual capital (Bontis & Choo 2002)[5]
information theory[6] (McInerney 2002)
complexity science[7][8]
constructivism[9] (Nanjappa & Grant 2003)

The practical relevance of academic research in KM has been questioned (Ferguson 2005), with action research suggested as having more relevance (Andriessen 2004), along with the need to translate the findings presented in academic journals into practice (Booker, Bontis & Serenko 2008).



Dimensions
Different frameworks for distinguishing between different 'types of' knowledge exist. One proposed framework for categorizing the dimensions of knowledge distinguishes between tacit knowledge and explicit knowledge. Tacit knowledge represents internalized knowledge that an individual may not be consciously aware of, such as how he or she accomplishes particular tasks. At the opposite end of the spectrum, explicit knowledge represents knowledge that the individual holds consciously in mental focus, in a form that can easily be communicated to others[10] (Alavi & Leidner 2001). Similarly, Hayes and Walsham (2003) describe content and relational perspectives of knowledge and knowledge management as two fundamentally different epistemological perspectives. The content perspective suggests that knowledge is easily stored because it may be codified, while the relational perspective recognizes the contextual and relational aspects of knowledge, which can make knowledge difficult to share outside of the specific location where it is developed.[11] Early research suggested that a successful KM effort needs to convert internalized tacit knowledge into explicit knowledge in order to share it, but the same effort must also permit individuals to internalize and make personally meaningful any codified knowledge retrieved from the KM effort. Subsequent research into KM suggested that a distinction between tacit knowledge and explicit knowledge represented an oversimplification and that the notion of explicit knowledge is self-contradictory. Specifically, for knowledge to be made explicit, it must be translated into information (i.e., symbols outside of our heads) (Serenko & Bontis 2004). Later on, Ikujiro Nonaka proposed a model (SECI, for Socialization, Externalization, Combination, Internalization) which considers a spiraling knowledge process of interaction between explicit knowledge and tacit knowledge (Nonaka & Takeuchi 1995). In this model, knowledge follows a cycle in which implicit knowledge is 'extracted' to become explicit knowledge, and explicit knowledge is 're-internalized' into implicit knowledge. [Figure: The Knowledge Spiral as described by Nonaka & Takeuchi.] More recently, together with Georg von Krogh, Nonaka returned to his earlier work in an attempt to move the debate about knowledge conversion forwards (Nonaka & von Krogh 2009). A second proposed framework for categorizing the dimensions of knowledge distinguishes between embedded knowledge of a system outside of a human individual (e.g., an information system may have knowledge embedded into its design) and embodied knowledge representing a learned capability of a human body's nervous and endocrine systems (Sensky 2002). A third proposed framework distinguishes between the exploratory creation of "new knowledge" (i.e., innovation) and the transfer or exploitation of "established knowledge" within a group, organization, or community. Collaborative environments such as communities of practice or the use of social computing tools can be used for both knowledge creation and transfer.[12]



Strategies
Knowledge may be accessed at three stages: before, during, or after KM-related activities. Different organizations have tried various knowledge capture incentives, including making content submission mandatory and incorporating rewards into performance measurement plans. Considerable controversy exists over whether such incentives work in this field, and no consensus has emerged. One strategy for KM involves actively managing knowledge (push strategy). In such an instance, individuals strive to explicitly encode their knowledge into a shared knowledge repository, such as a database, as well as retrieving knowledge they need that other individuals have provided to the repository.[13] This is commonly known as the codification approach to KM. Another strategy involves individuals making knowledge requests of experts associated with a particular subject on an ad hoc basis (pull strategy). In such an instance, expert individuals provide their insights to the particular person or people needing them (Snowden 2002). This is commonly known as the personalization approach to KM. Other knowledge management strategies and instruments for companies include:
rewards (as a means of motivating knowledge sharing)
storytelling (as a means of transferring tacit knowledge)
cross-project learning
after-action reviews
knowledge mapping (a map of knowledge repositories within a company, accessible by all)
communities of practice
expert directories (to enable a knowledge seeker to reach the experts)
best practice transfer
knowledge fairs
competence management (systematic evaluation and planning of the competences of individual organization members)
proximity and architecture (the physical situation of employees can be either conducive or obstructive to knowledge sharing)
master-apprentice relationships
collaborative technologies (groupware, etc.)
knowledge repositories (databases, bookmarking engines, etc.)
measuring and reporting intellectual capital (a way of making explicit knowledge for companies)
knowledge brokers (organizational members who take on responsibility for a specific "field" and act as the first reference on a specific subject)
social software (wikis, social bookmarking, blogs, etc.)
inter-project knowledge transfer

Motivations
A number of claims exist as to the motivations leading organizations to undertake a KM effort.[14] Typical considerations driving a KM effort include:
Making available increased knowledge content in the development and provision of products and services
Achieving shorter new product development cycles
Facilitating and managing innovation and organizational learning
Leveraging the expertise of people across the organization
Increasing network connectivity between internal and external individuals
Managing business environments and allowing employees to obtain relevant insights and ideas appropriate to their work
Solving intractable or wicked problems
Managing intellectual capital and intellectual assets in the workforce (such as the expertise and know-how possessed by key individuals)
Debate exists over whether KM is more than a passing fad, though the increasing amount of research in this field may help to answer this question, as well as create consensus on what elements of KM help determine the success or failure of such efforts (Wilson 2002).[15] Knowledge sharing remains a challenging issue for knowledge management, and while there is no clear agreement, barriers may include time pressures on knowledge workers, the level of trust, lack of effective support technologies, and culture (Jennex 2008).


Technologies
Early KM technologies included online corporate yellow pages (as expertise locators) and document management systems. Combined with the early development of collaborative technologies (in particular Lotus Notes), KM technologies expanded in the mid-1990s. Subsequent KM efforts leveraged semantic technologies for search and retrieval and the development of e-learning tools for communities of practice[16] (Capozzi 2007). Knowledge management systems can thus be categorized as falling into one or more of the following groups: groupware, document management systems, expert systems, semantic networks, relational and object-oriented databases, simulation tools, and artificial intelligence[17] (Gupta & Sharma 2004). More recently, the development of social computing tools (such as bookmarks, blogs, and wikis) has allowed more unstructured, self-governing or ecosystem approaches to the transfer, capture, and creation of knowledge, including the development of new forms of communities, networks, or matrixed organizations. However, such tools are for the most part still based on text and code, and thus represent explicit knowledge transfer. These tools face challenges in distilling meaningful re-usable knowledge and in ensuring that their content is transmissible through diverse channels[18] (Andrus 2005). Software tools in knowledge management are a collection of technologies and are not necessarily acquired as a single software solution. Furthermore, these knowledge management software tools have the advantage of using the organization's existing information technology infrastructure. Organizations and business decision makers spend a great deal of resources and make significant investments in the latest technology, systems, and infrastructure to support knowledge management. It is imperative that these investments are validated properly, made wisely, and that the most appropriate technologies and software tools are selected or combined to facilitate knowledge management. Knowledge management has also become a cornerstone in emerging business strategies such as Service Lifecycle Management (SLM), with companies increasingly turning to software vendors to enhance their efficiency in industries including, but not limited to, the aviation industry.[19]

Notes
[1] Sanchez, R. (1996). Strategic Learning and Knowledge Management. Wiley, Chichester.
[2] "Introduction to Knowledge Management" (http://www.unc.edu/~sunnyliu/inls258/Introduction_to_Knowledge_Management.html). Unc.edu. Retrieved 15 January 2010.
[3] (PDF). http://www.crito.uci.edu/noah/HOIT/HOIT%20Papers/TeacherBridge.pdf. Retrieved 15 January 2010.
[4] (PDF). http://www.ischool.washington.edu/mcdonald/ecscw03/papers/groth-ecscw03-ws.pdf. Retrieved 15 January 2010.
[5] Secretary of Defense Corporate Fellows Program; Observations in Knowledge Management: Leveraging the Intellectual Capital of a Large, Global Organization with Technology, Tools and Policies (http://www.ndu.edu/sdcfp/reports/2007Reports/IBM07.doc). IBM, Global Business Services. 2002. Retrieved 15 January 2010.
[6] "Information Architecture and Knowledge Management" (http://web.archive.org/web/20080629190725/http://iakm.kent.edu/programs/information-use/iu-curriculum.html). Iakm.kent.edu. Archived from the original (http://iakm.kent.edu/programs/information-use/iu-curriculum.html) on June 29, 2008. Retrieved 15 January 2010.
[7] Snowden, Dave (2002). "Complex Acts of Knowing: Paradox and Descriptive Self Awareness". Journal of Knowledge Management, Special Issue 6 (2): 100–111.
[8] "SSRN – Knowledge Ecosystems: A Theoretical Lens for Organizations Confronting Hyperturbulent Environments" by David Bray (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=984600). Papers.ssrn.com. Retrieved 15 January 2010.

[9] http://citeseer.ist.psu.edu/wyssusek02sociopragmatic.html
[10] "SSRN – Literature Review: Knowledge Management Research at the Organizational Level" by David Bray (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=991169). Papers.ssrn.com. Retrieved 15 January 2010.
[11] Hayes, M.; Walsham, G. (2003). "Knowledge sharing and ICTs: A relational perspective". In M. Easterby-Smith and M. A. Lyles (Eds.), The Blackwell Handbook of Organizational Learning and Knowledge Management. Malden, MA: Blackwell. pp. 54–77. ISBN 978-0-631-22672-7.
[12] "SSRN – Exploration, Exploitation, and Knowledge Management Strategies in Multi-Tier Hierarchical Organizations Experiencing Environmental Turbulence" by David Bray (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=961043). Papers.ssrn.com. Retrieved 15 January 2010.
[13] (PDF). http://www.cs.fiu.edu/~chens/PDF/IRI00_Rathau.pdf. Retrieved 15 January 2010.
[14] http://tecom.cox.smu.edu/abasu/itom6032/kmlect.pdf
[15] (PDF). http://myweb.whitman.syr.edu/yogesh/papers/WhyKMSFail.pdf. Retrieved 15 January 2010.
[16] "p217-ricardo.pdf" (http://elvis.slis.indiana.edu/irpub/HT/2001/pdf53.pdf) (PDF). Retrieved 15 January 2010.
[17] Gupta, Jatinder; Sharma, Sushil (2004). Creating Knowledge Based Organizations. Boston: Idea Group Publishing. ISBN 1-59140-163-1.
[18] "Knowledge Management" (http://www.systems-thinking.org/kmgmt/kmgmt.htm). www.systems-thinking.org. Retrieved 26 February 2009.
[19] Aviation Industry Group, "Service life-cycle management" (http://www.avioxi.com/downloads/ATEMv74_SLM_Reprint_Final.pdf), Aircraft Technology: Engineering & Maintenance, February–March 2005.


References
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Addicott, Rachael; McGivern, Gerry; Ferlie, Ewan (2006). "Networks, Organizational Learning and Knowledge Management: NHS Cancer Networks" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=889992). Public Money & Management 26 (2): 87–94. doi:10.1111/j.1467-9302.2006.00506.x.
Alavi, Maryam; Leidner, Dorothy E. (1999). "Knowledge management systems: issues, challenges, and benefits" (http://portal.acm.org/citation.cfm?id=374117). Communications of the AIS 1 (2).
Alavi, Maryam; Leidner, Dorothy E. (2001). "Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues" (http://web.njit.edu/~jerry/CIS-677/Articles/Alavi-MISQ-2001.pdf). MIS Quarterly 25 (1): 107–136. doi:10.2307/3250961. JSTOR 3250961.
Andriessen, Daniel (2004). "Reconciling the rigor-relevance dilemma in intellectual capital research". The Learning Organization 11 (4/5): 393–401. doi:10.1108/09696470410538288.
Andrus, D. Calvin (2005). "The Wiki and the Blog: Toward a Complex Adaptive Intelligence Community". Studies in Intelligence 49 (3). SSRN 755904.
Benbasat, Izak; Zmud, Robert (1999). "Empirical research in information systems: The practice of relevance". MIS Quarterly 23 (1): 3–16. doi:10.2307/249403. JSTOR 249403.
Bontis, Nick; Choo, Chun Wei (2002). The Strategic Management of Intellectual Capital and Organizational Knowledge (http://choo.fis.toronto.edu/OUP/). New York: Oxford University Press. ISBN 0-19-513866-X.
Booker, Lorne; Bontis, Nick; Serenko, Alexander (2008). "The relevance of knowledge management and intellectual capital research" (http://foba.lakeheadu.ca/serenko/papers/Booker_Bontis_Serenko_KM_relevance.pdf). Knowledge and Process Management 15 (4): 235–246. doi:10.1002/kpm.314.
Capozzi, Marla M. (2007). "Knowledge Management Architectures Beyond Technology" (http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1871/1754). First Monday 12 (6).
Davenport, Tom (2008). "Enterprise 2.0: The New, New Knowledge Management?" (http://discussionleader.hbsp.com/davenport/2008/02/enterprise_20_the_new_new_know_1.html). Harvard Business Online, Feb. 19, 2008.
Ferguson, J. (2005). "Bridging the gap between research and practice". Knowledge Management for Development Journal 1 (3): 46–54.
Gupta, Jatinder; Sharma, Sushil (2004). Creating Knowledge Based Organizations. Boston: Idea Group Publishing. ISBN 1-59140-163-1.

Lakhani, Karim R.; McAfee, Andrew P. (2007). "Case study on deleting 'Enterprise 2.0' article" (http://courseware.hbs.edu/public/cases/wikipedia/). Courseware #9-607-712, Harvard Business School.
Liebowitz, Jay (2006). What They Didn't Tell You About Knowledge Management. pp. 2–3.
McAdam, Rodney; McCreedy, Sandra (2000). "A Critique Of Knowledge Management: Using A Social Constructionist Model" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=239247). New Technology, Work and Employment 15 (2).
McAfee, Andrew P. (2006). "Enterprise 2.0: The Dawn of Emergent Collaboration" (http://sloanreview.mit.edu/the-magazine/articles/2006/spring/47306/enterprise-the-dawn-of-emergent-collaboration/). Sloan Management Review 47 (3): 21–28.
McInerney, Claire (2002). "Knowledge Management and the Dynamic Nature of Knowledge" (http://www.scils.rutgers.edu/~clairemc/KM_dynamic_nature.pdf). Journal of the American Society for Information Science and Technology 53 (12): 1009–1018. doi:10.1002/asi.10109.
Morey, Daryl; Maybury, Mark; Thuraisingham, Bhavani (2002). Knowledge Management: Classic and Contemporary Works (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8987). Cambridge: MIT Press. p. 451. ISBN 0-262-13384-9.
Nanjappa, Aloka; Grant, Michael M. (2003). "Constructing on constructivism: The role of technology" (http://ejite.isu.edu/Volume2No1/nanjappa.pdf). Electronic Journal for the Integration of Technology in Education 2 (1).
Nonaka, Ikujiro (1991). "The knowledge creating company" (http://hbr.harvardbusiness.org/2007/07/the-knowledge-creating-company/es). Harvard Business Review 69 (6, Nov–Dec): 96–104.
Nonaka, Ikujiro; Takeuchi, Hirotaka (1995). The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation (http://books.google.com/?id=B-qxrPaU1-MC). New York: Oxford University Press. p. 284. ISBN 978-0-19-509269-1.
Nonaka, Ikujiro; von Krogh, Georg (2009). "Tacit Knowledge and Knowledge Conversion: Controversy and Advancement in Organizational Knowledge Creation Theory" (http://zonecours.hec.ca/documents/H2010-1-2241390.S2-TacitKnowledgeandKnowledgeConversion-ControversyandAdvancementinOrganizationalKnowledgeCreation.pdf). Organization Science 20 (3): 635–652. doi:10.1287/orsc.1080.0412.
Sensky, Tom (2002). "Knowledge Management" (http://apt.rcpsych.org/cgi/content/full/8/5/387). Advances in Psychiatric Treatment 8 (5): 387–395. doi:10.1192/apt.8.5.387.
Snowden, Dave (2002). "Complex Acts of Knowing: Paradox and Descriptive Self Awareness" (http://www.cognitive-edge.com/articledetails.php?articleid=13). Journal of Knowledge Management, Special Issue 6 (2): 100–111. doi:10.1108/13673270210424639.
Spender, J.-C.; Scherer, Andreas Georg (2007). "The Philosophical Foundations of Knowledge Management: Editors' Introduction". Organization 14 (1): 5–28. doi:10.1177/1350508407071858. SSRN 958768.
Serenko, Alexander; Bontis, Nick (2004). "Meta-review of knowledge management and intellectual capital literature: citation impact and research productivity rankings" (http://www.business.mcmaster.ca/mktg/nbontis//ic/publications/KPMSerenkoBontis.pdf). Knowledge and Process Management 11 (3): 185–198. doi:10.1002/kpm.203.
Serenko, Alexander; Bontis, Nick; Booker, Lorne; Sadeddin, Khaled; Hardie, Timothy (2010). "A scientometric analysis of knowledge management and intellectual capital academic literature (1994–2008)" (http://foba.lakeheadu.ca/serenko/papers/Serenko_Bontis_JKM_MetaAnalysis_Published.pdf). Journal of Knowledge Management 14 (1): 13–23. doi:10.1108/13673271011015534.
Thompson, Mark P. A.; Walsham, Geoff (2004). "Placing Knowledge Management in Context" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=559300). Journal of Management Studies 41 (5): 725–747. doi:10.1111/j.1467-6486.2004.00451.x.

51

Knowledge management Wenger, Etienne; McDermott, Richard; Synder, Richard (2002). Cultivating Communities of Practice: A Guide to Managing Knowledge Seven Principles for Cultivating Communities of Practice (http://hbswk.hbs.edu/ archive/2855.html). Boston: Harvard Business School Press. pp.107136. ISBN1-57851-330-8. Wilson, T.D. (2002). "The nonsense of 'knowledge management'" (http://informationr.net/ir/8-1/paper144. html). Information Research 8 (1). Wright, Kirby (2005). "Personal knowledge management: supporting individual knowledge worker performance". Knowledge Management Research and Practice 3 (3): 156165. doi:10.1057/palgrave.kmrp.8500061. Akscyn, Robert M., Donald L. McCracken and Elise A. Yoder (1988). "KMS: A distributed hypermedia system for managing knowledge in organizations". Communications of the ACM 31 (7): 820835. Benbya, H (2008). Knowledge Management Systems Implementation: Lessons from the Silicon Valley. Oxford, Chandos Publishing. Langton, N & Robbins, S. (2006). Organizational Behaviour (Fourth Canadian Edition). Toronto, Ontario: Pearson Prentice Hall. Maier, R (2007): Knowledge Management Systems: Information And Communication Technologies for Knowledge Management. 3rd edition, Berlin: Springer. Rhetorical Structure Theory (assumed from the reference of RST Theory above) http://acl.ldc.upenn.edu/W/ W01/W01-1605.pdf Rosner, D.., Grote, B., Hartman, K, Hofling, B, Guericke, O. (1998) From natural language documents to sharable product knowledge: a knowledge engineering approach. in Borghoff Uwe M., and Pareschi, Remo (Eds.). Information technology for knowledge management. Springer Verlag, pp 3551. The RST site at http://www.sfu.ca/rst/run by Bill Mann Jennex, M. E. (2008). Knowledge Management: Concepts, Methodologies, Tools, and Applications (pp.13808).

52

Expert system
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.[1] Expert systems are designed to solve complex problems by reasoning about knowledge, like an expert, and not by following the procedure of a developer as is the case in conventional programming.[2][3][4] The first expert systems were created in the 1970s and then proliferated in the 1980s.[5] Expert systems were among the first truly successful forms of AI software.[6][7][8][9][10][11] An expert system has a unique structure, different from traditional programs. It is divided into two parts: one fixed and independent of the particular application, the inference engine; and one variable, the knowledge base. To run an expert system, the engine reasons about the knowledge base like a human.[12] In the 1980s a third part appeared: a dialog interface to communicate with users.[13] This ability to conduct a conversation with users was later called "conversational".[14][15]

History
Expert systems were introduced by researchers in the Stanford Heuristic Programming Project, including Edward Feigenbaum, the "father of expert systems", with the Dendral and Mycin systems. Principal contributors to the technology were Bruce Buchanan, Edward Shortliffe, Randall Davis, William van Melle, Carli Scott and others at Stanford. Expert systems were among the first truly successful forms of AI software.[6][7][8][9][10][11] Research was also very active in France, where researchers focused on the automation of reasoning and logic engines. The French Prolog computer language, designed in 1972, marks a real advance over expert systems like Dendral or Mycin: it is a shell,[16] that is to say a software structure ready to receive any expert system and to run it. It integrates an engine using first-order logic, with rules and facts. It is a tool for the mass production of expert systems and was the first operational declarative language,[17] later becoming the best-selling AI language in the world.[18] However, Prolog is not particularly user-friendly and is an order of logic away from human logic.[19][20][21]

In the 1980s, expert systems proliferated as they were recognized as a practical tool for solving real-world problems. Universities offered expert system courses, and two thirds of the Fortune 1000 companies applied the technology in daily business activities.[5][22] Interest was international, with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. Growth in the field continued into the 1990s. The development of expert systems was aided by the development of the symbolic processing languages Lisp and Prolog. To avoid re-inventing the wheel, expert system shells were created that had more specialized features for building large expert systems.[23]

In 1981 the first IBM PC was introduced, with the MS-DOS operating system. Its low price multiplied the number of users and opened a new market for computing and expert systems. In the 1980s the image of AI was very good and people believed it would succeed within a short time[15]. Many companies began to market expert system shells from universities, renamed "generators" because they added to the shell a tool for writing rules in plain language and thus, theoretically, allowed expert systems to be written without a programming language or any other software[16]. The best known were: Guru (USA), inspired by Mycin,[17][18] Personal Consultant Plus (USA),[19][20] Nexpert Object (developed by Neuron Data, a company founded in California by three French developers),[21][22] Genesia (developed by the French public company Électricité de France and marketed by Steria),[23] and VP Expert (USA).[24] But eventually the tools were used only in research projects; they did not penetrate the business market, showing that AI technology was not yet mature.

In 1986, a new expert system generator for PCs appeared on the market, derived from French academic research: Intelligence Service,[24][25] sold by the GSI-TECSI software company. This software showed a radical innovation: it used propositional logic ("zeroth-order logic") to execute expert systems, reasoning on a knowledge base written with everyday-language rules, producing explanations and detecting logical contradictions between the facts. It was the first tool showing the AI defined by Edward Feigenbaum in his book about the Japanese Fifth Generation, Artificial Intelligence and Japan's Computer Challenge to the World [26] (1983): "The machines will have reasoning power: they will automatically engineer vast amounts of knowledge to serve whatever purpose humans propose, from medical diagnosis to product design, from management decisions to education", "The reasoning animal has, perhaps inevitably, fashioned the reasoning machine", "the reasoning power of these machines matches or exceeds the reasoning power of the humans who instructed them and, in some cases, the reasoning power of any human performing such tasks". Intelligence Service was in fact "Pandora" (1985),[27] a software package developed for their thesis by two students of Jean-Louis Laurière,[28] one of the most famous and prolific French AI researchers.[29] Unfortunately, as this software was not developed by its own IT developers, GSI-TECSI was unable to make it evolve. Sales became scarce and marketing stopped after a few years.


Software architecture
The rule base or knowledge base
In expert system technology, the knowledge base is expressed with natural-language rules of the form IF ... THEN ... For example:

"IF it is living THEN it is mortal"
"IF his age = known THEN his year of birth = date of today - his age in years"
"IF the identity of the germ is not known with certainty AND the germ is gram-positive AND the morphology of the organism is "rod" AND the germ is aerobic THEN there is a strong probability (0.8) that the germ is of type enterobacteriaceae"[30]

This formulation has the advantage of speaking in everyday language, which is very rare in computer science (a classic program is coded). Rules express the knowledge to be exploited by the expert system. Other formulations of rules exist that are not in everyday language and are understandable only to computer scientists; each rule style is adapted to an engine style. The whole problem of expert systems is to collect this knowledge, which is usually tacit, from the experts. There are methods, but almost all of them are usable only by computer scientists.
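To make the rule-as-data idea concrete, here is a minimal sketch in Python (an illustration added to this text, not taken from any particular product): each rule is held as a list of condition facts plus one conclusion fact, so the knowledge base stays readable and stays separate from any engine.

```python
# A minimal sketch of a rule base held as plain data (assumed format,
# not any specific product's): conditions and conclusion are readable strings.
RULES = [
    {"if": ["it is living"],
     "then": "it is mortal"},
    {"if": ["the germ is gram-positive",
            "the morphology of the organism is rod",
            "the germ is aerobic"],
     "then": "the germ is probably of type enterobacteriaceae"},
]

# Because the rule base is data rather than code, it can be listed,
# edited or checked without touching the inference engine.
for rule in RULES:
    print("IF", " AND ".join(rule["if"]), "THEN", rule["then"])
```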

The inference engine


The inference engine is a computer program designed to produce reasoning on rules. In order to produce reasoning, it must be based on a logic. There are several kinds of logic: propositional logic, predicate logic of order 1 or higher, epistemic logic, modal logic, temporal logic, fuzzy logic, etc. Except for propositional logic, all are complex and can only be understood by mathematicians, logicians or computer scientists. Propositional logic is the basic human logic, which is expressed in syllogisms; an expert system using that logic is also called a zeroth-order expert system. With logic, the engine is able to generate new information from the knowledge contained in the rule base and the data to be processed.

The engine has two ways to run: batch or conversational. In batch mode, the expert system has all the necessary data to process from the beginning. For the user, the program works like a classical program: the user provides data and receives results immediately; the reasoning is invisible. The conversational method becomes necessary when the developer knows he cannot ask the user for all the necessary data at the start, the problem being too complex. The software must "invent" the way to solve the problem, request the missing data from the user, and gradually approach the goal as quickly as possible. The result gives the impression of a dialogue led by an expert. To guide such a dialogue, the engine may have several levels of sophistication: "forward chaining", "backward chaining" and "mixed chaining". Forward chaining is the questioning of an expert who has no idea of the solution and investigates progressively (e.g. fault diagnosis). In backward chaining, the engine has an idea of the target (e.g. is it okay or not? or: there is danger, but what is its level?); it starts from the goal in hopes of finding the solution as soon as possible. In mixed chaining, the engine has an idea of the goal but that is not enough: it deduces in forward chaining, from previous user responses, all that is possible before asking the next question. Quite often it thus deduces the answer to the next question before asking it.

A strong advantage of using logic is that this kind of software is able to give the user a clear explanation of what it is doing (the "Why?") and of what it has deduced (the "How?"). Better yet, thanks to logic, the most sophisticated expert systems are able to detect contradictions[31] in user information or in the knowledge base and can explain them clearly, revealing at the same time the expert's knowledge and way of thinking.
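The batch, forward-chaining behaviour described above fits in a few lines of Python. The sketch below (illustrative only, reusing the rule format from the earlier sketch) fires every rule whose conditions are all known, and repeats until a full pass deduces nothing new.

```python
def forward_chain(rules, facts):
    """Batch forward chaining: derive everything the rules support.

    `rules` use the {"if": [...], "then": ...} format sketched earlier;
    `facts` holds everything known at the start (batch mode: all data
    is available up front, so the engine never has to ask a question).
    """
    facts = set(facts)
    changed = True
    while changed:            # stop once a full pass adds no new fact
        changed = False
        for rule in rules:
            if rule["then"] not in facts and all(c in facts for c in rule["if"]):
                facts.add(rule["then"])   # the rule "fires"
                changed = True
    return facts

RULES = [
    {"if": ["socrates is a man"], "then": "socrates is living"},
    {"if": ["socrates is living"], "then": "socrates is mortal"},
]
print(forward_chain(RULES, {"socrates is a man"}))
# prints a set containing "socrates is living" and "socrates is mortal"
```

Backward and mixed chaining add goal management and question selection on top of the same rule format.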


Advantages
Conversational
Expert systems offer many advantages for users when compared to traditional programs because they operate like a human brain.[32][33]

Quick availability and opportunity to program itself


As the rule base is in everyday language (the engine is untouchable), an expert system can be written much faster than a conventional program, by users or experts, bypassing professional developers and avoiding the need to explain the subject to them.

Ability to exploit a considerable amount of knowledge


The expert system uses a rule base, unlike conventional programs, which means that the volume of knowledge to program is not a major concern. Whether the rule base has 10 rules or 10,000, the engine operates in the same way.

Reliability
The reliability of an expert system is the same as the reliability of a database, i.e. good, and higher than that of a classical program. It also depends on the size of the knowledge base.

Scalability
Evolving an expert system means adding, modifying or deleting rules. Since the rules are written in plain language, it is easy to identify those to be removed or modified.

Pedagogy
Engines that are run by a true logic are able to explain to the user in plain language why they ask a question and how they arrived at each deduction. In doing so, they reveal the expert's knowledge contained in the expert system, so the user can learn this knowledge in its context. Moreover, they can communicate their deductions step by step, so the user has information about the problem even before the expert system's final answer.
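A toy version of such an explanation facility can be built by making the engine remember, for each deduced fact, the rule that produced it. The sketch below is an assumption-laden illustration, not any real product's mechanism; it reuses the rule format from the earlier sketches to answer a simple "How?".

```python
def forward_chain_with_trace(rules, facts):
    # Map each fact to the rule that produced it (None = supplied by the
    # user), so the system can later justify its deductions in plain language.
    known = {f: None for f in facts}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["then"] not in known and all(c in known for c in rule["if"]):
                known[rule["then"]] = rule
                changed = True
    return known

def explain_how(known, fact):
    rule = known.get(fact)
    if rule is None:
        return fact + ": given by the user"
    return fact + ": deduced because " + " AND ".join(rule["if"])

known = forward_chain_with_trace(
    [{"if": ["it is living"], "then": "it is mortal"}], {"it is living"})
print(explain_how(known, "it is mortal"))
# -> "it is mortal: deduced because it is living"
```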

Preservation and improvement of knowledge


Valuable knowledge can disappear with the death, resignation or retirement of an expert; recorded in an expert system, it becomes permanent. Developing an expert system means interviewing an expert and making the system aware of their knowledge; in doing so, the system captures and enhances that knowledge.



New areas neglected by conventional computing


When automating a vast body of knowledge, the developer may meet a classic problem: "combinatorial explosion", which greatly complicates the work and results in a complex and time-consuming program. A reasoning expert system does not encounter that problem, since the engine automatically handles the combinatorics between rules. This ability can address areas where combinatorics are enormous: highly interactive or conversational applications, fault diagnosis, decision support in complex systems, educational software, logic simulation of machines or systems, and constantly changing software.

Disadvantages
The expert system has a major flaw, which explains its low success even though the principle has existed for decades: knowledge collection and its interpretation into rules, or knowledge engineering. Most developers have no automated method to perform this task; instead they work manually, increasing the likelihood of errors. Expert knowledge is generally not well understood; for example, rules may not exist, may be contradictory, or may be poorly written and unusable. Worse still, most expert systems use an engine incapable of reasoning. As a result, an expert system will often work poorly and the project is abandoned.[34] Correct development methodology can mitigate these problems. There exists software capable of interviewing a true expert on a subject and automatically writing the rule base, or knowledge base, from the answers. The expert system can then be run simultaneously before the true expert's eyes, performing a consistency check of the rules.[35][36][37] Experts and users can check the quality of the software before it is finished. Many expert systems are also penalized by the logic used. Most formal systems of logic operate on variable facts, i.e. facts whose value changes several times during one reasoning process. This is considered a property belonging to more powerful logics. This is the case for the Mycin and Dendral expert systems and, for example, for fuzzy logic, predicate logic (Prolog), symbolic logic and mathematical logic. Propositional logic uses only invariant facts.[38] In the human mind, the facts used must remain invariant as long as the brain reasons with them. This makes possible two ways of controlling the consistency of the knowledge: detection of contradictions and production of explanations.[39][40] That is why expert systems using variable facts, which are more understandable to the developers creating such systems and hence more common, are less easy to develop, less clear to users and less reliable, and why they do not produce explanations of their reasoning or detect contradictions.
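The contradiction detection mentioned above can also be sketched for the propositional, invariant-fact case: when facts are fixed strings, a clash is simply a fact and its negation both becoming derivable. This toy check is illustrative only (real generators analyse rule bases far more finely) and builds on the same batch forward chaining as before.

```python
def derive_all(rules, facts):
    # Same batch forward chaining as in the earlier sketch.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r["then"] not in facts and all(c in facts for c in r["if"]):
                facts.add(r["then"])
                changed = True
    return facts

def find_contradictions(rules, facts):
    """Report derivable facts X for which "not X" is also derivable."""
    derived = derive_all(rules, facts)
    return {f for f in derived if "not " + f in derived}

RULES = [
    {"if": ["the valve is open"], "then": "pressure is falling"},
    {"if": ["the pump is on"],    "then": "not pressure is falling"},
]
print(find_contradictions(RULES, {"the valve is open", "the pump is on"}))
# -> {'pressure is falling'}: the two rules clash on these inputs
```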

Application field
Expert systems address areas where combinatorics are enormous:
highly interactive or conversational applications, IVR, voice servers, chatterbots
fault diagnosis, medical diagnosis
decision support in complex systems, process control, interactive user guides
educational and tutorial software
logic simulation of machines or systems
knowledge management
constantly changing software

They can also be used in software engineering for rapid prototyping of applications (RAD). Indeed, an expert system quickly developed in front of the expert shows him whether the future application should be programmed: any program contains expert knowledge, and classic programming always begins with an expert interview. A program written in the form of an expert system receives all the specific benefits of expert systems, among them that it can be developed by anyone without computer training and without a programming language. But this solution has a defect: an expert system runs more slowly than a traditional program, because it constantly "thinks", whereas classic software simply follows the paths traced by the programmer.



Examples of applications
Expert systems are designed to facilitate tasks in fields such as accounting, law, medicine, process control, financial services, production and human resources, among others. Typically, the problem area is complex enough that a simpler traditional algorithm cannot provide a proper solution. The foundation of a successful expert system depends on a series of technical procedures and development that may be designed by technicians and related experts. As such, expert systems do not typically provide a definitive answer, but provide probabilistic recommendations. An example of the application of expert systems in the financial field is expert systems for mortgages. Loan departments are interested in expert systems for mortgages because of the growing cost of labour, which makes the handling and acceptance of relatively small loans less profitable. They also see a possibility for standardized, efficient handling of mortgage loans by applying expert systems, appreciating that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans (a toy version of such rules is sketched at the end of this section). Another common application area for expert systems in finance is trading recommendations in various marketplaces. These markets involve numerous variables and human emotions which may be impossible to characterize deterministically, so expert systems based on experts' rules of thumb and on simulation data are used. Expert systems of this type can range from ones providing regional retail recommendations, like Wishabi, to ones used to assist monetary decisions by financial institutions and governments. Another 1970s and 1980s application of expert systems, which we today would simply call AI, was in computer games. For example, the computer baseball games Earl Weaver Baseball and Tony La Russa Baseball each had highly detailed simulations of the game strategies of those two baseball managers. When a human played the game against the computer, the computer queried the Earl Weaver or Tony La Russa expert system for a decision on what strategy to follow. Even those choices where some randomness was part of the natural system (such as when to throw a surprise pitch-out to try to trick a runner trying to steal a base) were decided based on probabilities supplied by Weaver or La Russa. Today we would simply say that "the game's AI provided the opposing manager's strategy". A newer application for expert systems is automated computer program generation. Funded by a US Air Force grant, an expert system-based application (hprcARCHITECT) that generates computer programs for mixed processor technology (FPGA/GPU/Multicore) systems without a need for technical specialists has recently been commercially introduced. There is also a large body of contemporary research and development directed toward using expert systems for human behavior modeling and decision support systems. The former is especially important in the area of intercultural relations and the latter in improving management operations in small businesses.
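As an illustration of the mortgage case mentioned above, "hard and fast rules" might be encoded as below. All thresholds and field names are invented for the example; a real lender's rule base would be far larger and tuned by domain experts.

```python
# Hypothetical mortgage-screening rules (figures and names invented).
def screen_mortgage(applicant):
    facts = set()
    if applicant["monthly_payment"] <= 0.35 * applicant["monthly_income"]:
        facts.add("payment is affordable")
    if applicant["years_employed"] >= 2:
        facts.add("employment is stable")
    if {"payment is affordable", "employment is stable"} <= facts:
        facts.add("recommend acceptance")   # a recommendation, not a verdict
    return facts

print(screen_mortgage({"monthly_payment": 900,
                       "monthly_income": 4000,
                       "years_employed": 5}))
# -> includes "recommend acceptance"
```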

Knowledge engineering
The building, maintenance and development of expert systems is known as knowledge engineering.[41] Knowledge engineering is a "discipline that involves integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise".[42] Generally, three individuals interact with an expert system. Primary among these is the end-user, the individual who uses the system for its problem-solving assistance. In the construction and maintenance of the system there are two other roles: the problem domain expert, who builds the system and supplies the knowledge base, and a knowledge engineer, who assists the experts in determining the representation of their knowledge, enters this knowledge into an explanation module and defines the inference technique required to solve the problem. Usually the knowledge engineer will represent the problem-solving activity in the form of rules. When these rules are created from domain expertise, the knowledge base stores the rules of the expert system.



References
[1] Jackson, Peter (1998), Introduction To Expert Systems (3rd ed.), Addison Wesley, p. 2, ISBN 978-0-201-87686-4
[2] Nwigbo Stella and Agbo Okechuku Chuks (http://www.hrmars.com/admin/pics/261.pdf), School of Science Education, Expert system: a catalyst in educational development in Nigeria: "The ability of this system to explain the reasoning process through back-traces (...) provides an additional feature that conventional programming does not handle"
[3] Regina Barzilay, Daryl McCullough, Owen Rambow, Jonathan DeCristofaro, Tanya Korelsky, Benoit Lavoie (http://www.cogentex.com/papers/explanation-iwnlg98.pdf): "A new approach to expert system explanations"
[4] Conventional programming (http://www.pcmag.com/encyclopedia_term/0,2542,t=conventional+programming&i=40325,00.asp)
[5] Cornelius T. Leondes (2002). Expert systems: the technology of knowledge management and decision making for the 21st century. pp. 1-22. ISBN 978-0-12-443880-4.
[6] ACM 1998, I.2.1
[7] Russell & Norvig 2003, pp. 22-24
[8] Luger & Stubblefield 2004, pp. 227-331
[9] Nilsson 1998, chpt. 17.4
[10] McCorduck 2004, pp. 327-335, 434-435
[11] Crevier 1993, pp. 145-62, 197-203
[12] Nwigbo Stella and Agbo Okechuku Chuks (http://www.hrmars.com/admin/pics/261.pdf), School of Science Education, Expert system: a catalyst in educational development in Nigeria: "Knowledge-based systems collect the small fragments of human know-how into a knowledge-base which is used to reason through a problem, using the knowledge that is appropriated"
[13] Koch, C. G.; Isle, B. A.; Butler, A. W. (1988). "Intelligent user interface for expert systems applied to power plant maintenance and troubleshooting". IEEE Transactions on Energy Conversion 3: 71. doi:10.1109/60.4202.
[14] McTear, M. F. (2002). "Spoken dialogue technology: Enabling the conversational user interface". ACM Computing Surveys 34: 90. doi:10.1145/505282.505285.
[15] Lowgren, J. (1992). "The Ignatius environment: Supporting the design and development of expert-system user interfaces". IEEE Expert 7 (4): 49-57. doi:10.1109/64.153464.
[16] George F. Luger and William A. Stubblefield, Benjamin/Cummings Publishers, Rule Based Expert System Shell: example of code using the Prolog rule based expert system shell
[17] A. Michiels (http://promethee.philo.ulg.ac.be/engdep1/download/prolog/htm_docs/prolog.htm), Université de Liège, Belgium: "PROLOG, the first declarative language"
[18] Carnegie Mellon University's AI Web Site (http://www.prenhall.com/divisions/bp/app/turban/dss/html/chap16.html): "Prolog was the most popular AI language in Japan and probably in Europe"
[19] Ivana Berković, Biljana Radulović and Petar Hotomski (http://www.proceedings2007.imcsit.org/pliks/33.pdf), University of Novi Sad, 2007, Extensions of Deductive Concept in Logic Programing and Some Applications: "the defects of PROLOG-system: the expansion concerning Horn clauses, escaping negation treatment as definite failure"
[20] "Software developed in Prolog has been criticized for having a high performance penalty compared to conventional programming languages"
[21] Dr. Nikolai Bezroukov (http://www.softpanorama.org/Lang/prolog.shtml), Softpanorama: "I think that most people exposed to Prolog remember strongly the initial disappointment. Language was/is so hyped but all you can see initially are pretty trivial examples that are solved by complex, obscure notation that lacks real expressive power: some of simple examples can be expressed no less concisely in many other languages"
[22] Durkin, J. Expert Systems: Catalog of Applications. Intelligent Computer Systems, Inc., Akron, OH, 1993.
[23] Giarratano & Riley, 3rd ed., page 21.
[24] Flamant B. and Girard G. (http://cat.inist.fr/?aModele=afficheN&cpsidt=7001328), GSI-TECSI, Intelligence Service: build your own expert system: "Intelligence Service is a development environment for expert systems that requires no experience of classic programming that offers to everyone the opportunity to develop its own expert system"
[25] Bertrand Savatier (http://www.tree-logic.com/Articles/01, seul IS fait de l'IA, par un universitaire.jpg), Le Monde Informatique, November 23, 1987: "Expert systems accessible to all"
[26] http://www.atarimagazines.com/creative/v10n8/103_The_fifth_generation_Jap.php
[27] Jean-Philippe de Lespinay (http://www.tree-logic.com/scienceetvie.htm), Science et Vie, "From total zero to Zero Plus [logic]", May 1991
[28] Death of Jean-Louis Laurière (http://www.lip6.fr/actualite/information-fiche.php?RECORD_KEY(informations)=id&id(informations)=18)
[29] Journée "In honor of Jean-Louis Laurière" (http://www.lip6.fr/Laboratoire/2006-03-22/2006-03-22-Affiche.pdf), Université Pierre et Marie Curie, Paris (March 22, 2006)
[30] Mycin rule
[31] Nabil Arman (http://www.ccis2k.org/iajit/PDF/vol.4,no.1/9-Nabil.pdf), Polytechnic University of Palestine, January 2007, Fault Detection in Dynamic Rule Bases Using Spanning Trees and Disjoint Sets

[32] Olivier Rafal, Le Monde Informatique, July 2001: "Developing for all" (http://www.tree-logic.com/Articles/LMI 2001 cf Maieutique.jpg)
[33] Jean-Philippe de Lespinay, Automates Intelligents, December 2008: "Reasoning Artificial Intelligence: the end of intermediaries between users and computers" (http://www.automatesintelligents.com/echanges/2008/dec/ialespinay.html)
[34] Kenneth Laudon, Jane Laudon, Eric Fimbel, "Management Information Systems: Managing the Digital Firm", Business & Economics, 2010 edition, chapter 11-3.5: "The implementation of a large number of expert systems requires the deployment of considerable development efforts, lengthy and expensive. Hiring and training a larger number of experts may be less expensive than building an expert system. (...) Some expert systems, particularly the largest, are so complex that over the years, the costs of curative and adaptive maintenance become as high as the cost of development."
[35] Systèmes Experts, April 15, 1990, Miao, authentic expert system generator of fault diagnosis: "MIAO can explain, again in [plain] language, all of his logical approach: why he is asking such a question and how it came to such a conclusion. And that because he is constantly reasoning and not because an IT developer programmed in advance all the possible explanations."
[36] Olivier Rafal (http://www.tree-logic.com/Articles/LMI 2001 cf Maieutique.jpg), Le Monde Informatique, Programming for all (T.Rex generator): "This software allows to develop a conversational application (...) leading to a self-learning" (i.e. thanks to the automatic explanations)
[37] French Technology Survey (http://www.tree-logic.com/Articles/Maeutica par FTS (91).jpg), MAIEUTICA, An Expert System Generator which writes its own rules, July 1991: "checking the coherence of the knowledge", "it can detect contradictions", "it reacts appropriately to changes of mind"
[38] RGU: School of Computing (http://www.comp.rgu.ac.uk/docs/info/index.php), More Complex Inference: "propositional logic, where variables are not allowed".
[39] Ong K. and Lee R.M., Texas University-Austin, A logic model for maintaining consistency of bureaucratic policies, 1993: "Inconsistencies can be detected if any of the integrity constraints is proven false, and an explanation can be provided based on the proof tree. A more general inference mechanism is presented based on the theory of abduction for checking potential inconsistency of policies"
[40] Carl G. Hempel and Paul Oppenheim (http://people.cohums.ohio-state.edu/tennant9/hempel_oppenheim_PS1948.pdf), Philosophy of Science, Studies in the Logic of Explanation, 1948: "The sentences constituting the explanans must be true"
[41] Kendal, S.L.; Creen, M. (2007), An introduction to knowledge engineering, London: Springer, ISBN 978-1-84628-475-5, OCLC 70987401
[42] Feigenbaum, Edward A.; McCorduck, Pamela (1983), The fifth generation (1st ed.), Reading, MA: Addison-Wesley, ISBN 978-0-201-11519-2, OCLC 9324691


Bibliography
Textbooks
Darlington, Keith (2000). The Essence of Expert Systems. Pearson Education. ISBN 978-0-13-022774-4.
Ignizio, James (1991). Introduction to Expert Systems. McGraw-Hill Companies. ISBN 978-0-07-909785-9.
Giarratano, Joseph C.; Riley, Gary (2005). Expert Systems, Principles and Programming. Course Technology Ptr. ISBN 978-0-534-38447-0.
Jackson, Peter (1998). Introduction to Expert Systems. Addison Wesley. ISBN 978-0-201-87686-4.
Walker, Adrian et al. (1990). Knowledge Systems and Prolog. Addison-Wesley. ISBN 978-0-201-52424-6.
Naylor, Chris (1983). Build Your Own Expert System. Sigma Technical Press. ISBN 978-0-905104-41-6.

History of AI
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
McCorduck, Pamela (2004). Machines Who Think (http://www.pamelamc.com/html/machines_who_think.html) (2nd ed.). Natick, MA: A. K. Peters, Ltd. ISBN 1-56881-205-1.
Luger, George; Stubblefield, William A. (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (http://www.cs.unm.edu/~luger/ai-final/tocfull.html) (5th ed.). The Benjamin/Cummings Publishing Company, Inc. ISBN 978-0-8053-4780-7.
Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers. ISBN 978-1-55860-467-4.
Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/) (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.

Winston, Patrick Henry (1984). Artificial Intelligence. Addison-Wesley. ISBN 978-0-201-08259-3.


Other

"ACM Computing Classification System: Artificial intelligence" (http://www.acm.org/class/1998/I.2.html). ACM. 1998. Retrieved 2007-08-30.
Jean-Philippe de Lespinay, Admiroutes, December 2008: Reasoning AI (http://www.admiroutes.asso.fr/larevue/2008/93/lespinay.htm)
Automates Intelligents, 2009: Conversational and Call centers (http://www.automatesintelligents.com/interviews/2009/lespinay.html)
US patent 4763277 (http://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US4763277), Ashford, Thomas J. et al., "Method for obtaining information in an expert system", published 1988-08-09, issued 1988-08-09

External links
Artificial Intelligence (http://www.dmoz.org/Computers/Artificial_Intelligence//) at the Open Directory Project Expert System tutorial on Code Project (http://www.codeproject.com/KB/recipes/ArtificialAdvice-1.aspx)

Reference data
Reference data are data from outside the organisation (often from standards organisations) which are, apart from occasional revisions, static. Such non-dynamic data are sometimes also known as "standing data".[1] Examples are currency codes and country codes (the latter covered by the global standard ISO 3166-1). Reference data should be distinguished[2] from "master data", which is also relatively static but originates from within the organisation, e.g. products, departments, even customers.
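In practice, reference data usually enters a system as a small read-only lookup table that incoming records are validated against. A sketch follows (the entries are a tiny excerpt of ISO 3166-1; the function and names are illustrative):

```python
# Reference data: defined outside the organisation and static apart from
# occasional revisions. A tiny excerpt of ISO 3166-1 alpha-2 country codes.
ISO_3166_1_ALPHA2 = {
    "DE": "Germany",
    "FR": "France",
    "IN": "India",
}

def country_name(code: str) -> str:
    # Unknown codes are rejected rather than silently stored, since the
    # organisation does not own or extend this list.
    try:
        return ISO_3166_1_ALPHA2[code]
    except KeyError:
        raise ValueError(repr(code) + " is not a known ISO 3166-1 code")
```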

References
[1] "Standing Data" (http://www.websters-online-dictionary.org/definitions/standing+data?cx=partner-pub-0939450753529744:v0qd01-tdlq&cof=FORID:9&ie=UTF-8&q=standing+data&sa=Search#906).
[2] "Master Data versus Reference Data" (http://www.information-management.com/issues/20060401/1051002-1.html).



Master data
Master data is information that is key to the operation of a business. It is the primary focus of the Information Technology (IT) discipline of Master Data Management (MDM), and can include reference data. This key business information may include data about customers, products, employees, materials, suppliers, and the like. While it is often non-transactional in nature, it is not limited to non-transactional data, and often supports transactional processes and operations. For example, analysis and reporting depend greatly on an organization's master data. Because master data may not be stored and referenced centrally, but is often used by several functional groups and stored in different data systems across an organization, master data may be duplicated and inconsistent (and, if so, inaccurate). Thus master data is that persistent, non-transactional data that defines a business entity for which there is, or should be, an agreed-upon view across the organization. Care should be taken to properly version master data if the need arises to modify it; the versioning of master data can be an issue.
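The duplication problem can be made concrete with a small sketch. Below, two systems hold their own copy of the same customer, and a simplistic field-by-field survivorship rule builds one agreed-upon record; the names, fields and merge policy are all invented for illustration (real master data management adds matching, survivorship rules and stewardship workflows).

```python
# Two systems' copies of the same customer (hypothetical data).
records = {
    "crm":     {"customer_id": "C-1042", "name": "ACME Corp.",       "city": "Pune"},
    "billing": {"customer_id": "C-1042", "name": "Acme Corporation", "city": "Pune"},
}

def consolidate(records, preferred_sources):
    """Build one master record, trusting earlier sources field by field."""
    master = {}
    for source in preferred_sources:
        for field, value in records[source].items():
            master.setdefault(field, value)   # keep the first (most trusted) value
    return master

print(consolidate(records, ["billing", "crm"]))
# -> one version of the customer for the whole organization
```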

Master Data Defined


Reference Data is basic business data used in a single application, system, or process.
Master Data is a single source of basic business data used across multiple systems, applications, and/or processes.
Enterprise Master Data is the single source of basic business data used across all systems, applications, and processes for an entire enterprise (all departments, divisions, companies, and countries).
Market Master Data is the single source of basic business data for an entire marketplace. Market master data is used among enterprises within the value chain. An example of market master data is the UPC (Universal Product Code) found on consumer products. Market master data is compatible with enterprise-specific and domain-specific systems, compliant with or linked to industry standards, and incorporated within market research analytics. Market master data also facilitates integration of multiple data sources and literally puts everyone in the market on the same page.

Excerpted from Master Data Management for Media: A Call to Action for Business Leaders in Marketing, Advertising, and the Media, a Microsoft white paper by Scott Taylor and Robin Laylin, January 2010.

Master data and Master reference data


Master data is also called master reference data, to avoid confusion with the use of the term "master data" for original data, such as an original recording (see also: master tape). Master data contains only unique entries, i.e. there are no duplicate values. Material master data is a specific data set holding structured information about spare parts, raw materials and products within Enterprise Resource Planning (ERP) software; the data is held centrally and used across organisations. A vendor master refers to the centralised location of information pertinent to a vendor, often including the legal entity name, tax identification and contact information.



External links
Semarchy: What is Master Data? [1]

References
[1] http://www.semarchy.com/overview/what-is-master-data/

Conceptual schema
A conceptual schema or conceptual data model is a map of concepts and their relationships, used for databases. It describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, the characteristics of those things (attributes), and the associations between pairs of those things of significance (relationships).

Overview
Because a conceptual schema represents the semantics of an organization, and not a database design, it may exist on various levels of abstraction. The original ANSI four-schema architecture began with the set of external schemas that each represent one person's view of the world around him or her. These are consolidated into a single conceptual schema that is the superset of all of those external views. A data model can be as concrete as each person's perspective, but this tends to make it inflexible: if that person's world changes, the model must change. Conceptual data models take a more abstract perspective, identifying the fundamental things of which the things an individual deals with are just examples.

The model does allow for what is called inheritance in object-oriented terms. The set of instances of an entity class may be subdivided into entity classes in their own right. Thus, each instance of a sub-type entity class is also an instance of the entity class's super-type; each instance of the super-type entity class, then, may also be an instance of one of the sub-type entity classes. Super-type/sub-type relationships may be exclusive or not: a methodology may require that each instance of a super-type may only be an instance of one sub-type. Similarly, a super-type/sub-type relationship may be exhaustive or not; it is exhaustive if the methodology requires that each instance of a super-type must be an instance of a sub-type.
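In object-oriented terms, the super-type/sub-type relationship corresponds to inheritance, as the short Python sketch below shows. The class names follow the PARTY/PERSON example in the next section; the ORGANIZATION sub-type is added here purely for illustration.

```python
class Party:                      # super-type entity class
    def __init__(self, name: str):
        self.name = name

class Person(Party):              # sub-type: every Person is also a Party
    pass

class Organization(Party):        # a second sub-type, exclusive of Person
    pass

p = Person("Ada")
assert isinstance(p, Party)       # sub-type instances are super-type instances

# If the methodology declares the relationship exhaustive, every Party must
# be created as a Person or an Organization, never as a bare Party.
```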



Example relationships
Each PERSON may be the vendor in one or more ORDERS.
Each ORDER must be from one and only one PERSON.
PERSON is a sub-type of PARTY (meaning that every instance of PERSON is also an instance of PARTY).
Each EMPLOYEE may have a supervisor, who is also an EMPLOYEE (a recursive relationship).
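In a relational implementation, assertions like these become schema constraints. The sketch below uses Python's built-in sqlite3 module, with table and column names invented for the example: the mandatory, single-valued vendor becomes a NOT NULL foreign key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when asked

con.execute("CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT)")
# "Each ORDER must be from one and only one PERSON": vendor_id is mandatory
# (NOT NULL) and must reference an existing person (FOREIGN KEY).
con.execute("""
    CREATE TABLE "order" (
        order_id  INTEGER PRIMARY KEY,
        vendor_id INTEGER NOT NULL REFERENCES person(person_id)
    )
""")

con.execute("INSERT INTO person VALUES (1, 'Ada')")
con.execute('INSERT INTO "order" VALUES (10, 1)')     # fine: person 1 exists
# con.execute('INSERT INTO "order" VALUES (11, 99)')  # would raise: no person 99
```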

Data structure diagram


A data structure diagram (DSD) is a data model or diagram used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them.

Further reading
Data Structure Diagram.
Perez, Sandra K., & Anthony K. Sarris, eds. (1995). Technical Report for IRDS Conceptual Schema, Part 1: Conceptual Schema for IRDS, Part 2: Modeling Language Analysis, X3/TR-14:1995, American National Standards Institute, New York, NY.

External links
A different point of view [1], as described by the "agile" community

References
[1] http://www.agiledata.org/essays/dataModeling101.html



Entity–relationship model
In software engineering, an entity–relationship model (ER model for short) is an abstract way to describe a database. It usually starts with a relational database, which stores data in tables. Some of the data in these tables point to data in other tables - for instance, your entry in the database could point to several entries for each of the phone numbers that are yours. The ER model would say that you are an entity, and each phone number is an entity, and the relationship between you and the phone numbers is "has a phone number". Diagrams created to design these entities and relationships are called entity–relationship diagrams or ER diagrams.
A sample Entity Relationship diagram using Chen's notation

This article refers to the techniques proposed in Peter Chen's 1976 paper.[1] However, variants of the idea existed previously,[2] and have been devised subsequently, such as supertype and subtype data entities[3] and commonality relationships (an example with additional concepts is the enhanced entity–relationship model).

Overview
Using the three schema approach to software engineering, there are three levels of ER models that may be developed. The conceptual data model is the highest-level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization. A conceptual ER model may be used as the foundation for one or more logical data models. The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration. A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model is to develop a single disparate information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the entity relationships between these data entities are established. The logical ER model is, however, developed independently of the technology into which it will be implemented. One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database, and each physical ER model is technology dependent since each database management system is somewhat different.

The physical model is normally forward engineered to instantiate the structural metadata into a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database. The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model; this in turn is mapped to a physical model during physical design. Note that sometimes both of these phases are referred to as "physical design".


A UML metamodel of Extended Entity Relationship models

The building blocks: entities, relationships, and attributes


An entity may be defined as a thing which is recognized as being capable of an independent existence and which can be uniquely identified. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world which can be distinguished from other aspects of the real world.[4] An entity may be a physical object such as a house or a car, an event such as a house sale or a car service, or a concept such as a customer transaction or order. Although the term entity is the one most commonly used, following Chen we should really distinguish between an entity and an entity-type. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym for this term. Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical theorem. A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns.
Primary key

Two related entities

An entity with an attribute

A relationship with an attribute

Examples: an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, a proved relationship between a mathematician and a theorem. The model's linguistic aspect described above is utilized in the declarative database query language ERROL, which mimics natural language constructs. ERROL's semantics and implementation are based on Reshaped relational algebra (RRA), a relational algebra which is adapted to the entity–relationship model and captures its linguistic aspect. Entities and relationships can both have attributes. Examples: an employee entity might have a Social Security Number (SSN) attribute; the proved relationship may have a date attribute. Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying attributes, which is called the entity's primary key. Entity–relationship diagrams don't show single entities or single instances of relations. Rather, they show entity sets and relationship sets. Example: a particular song is an entity. The collection of all songs in a database is an entity set. The eaten relationship between a child and her lunch is a single relationship. The set of all such child-lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation. Certain cardinality constraints on relationship sets may be indicated as well.
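In relational terms, a relationship set with an attribute becomes its own table: one row per relationship, a composite primary key over the participating entities' keys, and a column for the attribute. A sketch with Python's sqlite3 (table and column names invented), using the "performs" example above with a date attribute:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

con.execute("CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE song (song_id INTEGER PRIMARY KEY, title TEXT)")
# The "performs" relationship set: one row per (artist, song) pair, with a
# date attribute attached to the relationship itself, not to either entity.
con.execute("""
    CREATE TABLE performs (
        artist_id     INTEGER REFERENCES artist(artist_id),
        song_id       INTEGER REFERENCES song(song_id),
        performed_on  TEXT,                 -- attribute of the relationship
        PRIMARY KEY (artist_id, song_id)    -- each pair appears at most once
    )
""")
```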


Relationships, roles and cardinalities


In Chen's original paper he gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles "husband" and "wife". A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns; that is no surprise, since naming things requires a noun. However, as is quite usual with new ideas, many eagerly appropriated the new terminology but then applied it to their own old ideas. Thus the lines, arrows and crow's feet of their diagrams owed more to the earlier Bachman diagrams than to Chen's relationship diamonds. And they similarly misunderstood other important concepts. In particular, it became fashionable (now almost to the point of exclusivity) to "name" relationships and roles as verbs or phrases.

Relationship names
A relationship expressed with a single verb implying direction makes it impossible to discuss the model using proper English. For example:
the song and the artist are related by a 'performs'
the husband and wife are related by an 'is-married-to'.
Expressing the relationships with a noun resolves this:
the song and the artist are related by a 'performance'
the husband and wife are related by a 'marriage'.
Traditionally, the relationships are expressed twice (using present continuous verb phrases), once in each direction. This gives two English statements per relationship. For example:
the song is performed by the artist
the artist performs the song



Role naming
It has also become prevalent to name roles with phrases, e.g. is-the-owner-of and is-owned-by. The correct nouns in this case are "owner" and "possession": thus "person plays the role of owner" and "car plays the role of possession", rather than "person plays the role of is-the-owner-of". The use of nouns has a direct benefit when generating physical implementations from semantic models. When a person has two relationships with car, it is possible to generate names such as "owner_person" and "driver_person", which are immediately meaningful.

Cardinalities
Modifications to the original specification can be beneficial. Chen described look-across cardinalities. As an aside, the Barker-Ellis notation, used in Oracle Designer, uses same-side for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crows foot). In Merise,[5] Elmasri & Navathe[6] and others[7] there is a preference for same-side for roles and both minimum and maximum cardinalities. Recent researchers (Feinerer,[8] Dullea et al.[9]) have shown that this is more coherent when applied to n-ary relationships of order > 2. In Dullea et al. one reads "A 'look across' notation such as used in the UML does not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary." In Feinerer it says "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann[10] investigates this situation and shows how and why different transformations fail." (Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same) and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties which prevent the extension of simple mechanisms from binary to n-ary associations."

Semantic modelling
The father of ER modelling said in his seminal paper: "The entity-relationship model adopts the more natural view that the real world consists of entities and relationships. It incorporates some of the important semantic information about the real world."[1] He is here in accord with philosophic and theoretical traditions from the time of the Ancient Greek philosophers: Socrates, Plato and Aristotle (428 BC) through to modern epistemology, semiotics and logic of Peirce, Frege and Russell. Plato himself associates knowledge with the apprehension of unchanging Forms (the Forms, according to Socrates, are roughly speaking archetypes or abstract representations of the many types of things, and properties) and their relationships to one another. In his original 1976 article Chen explicitly contrasts entity–relationship diagrams with record modelling techniques: "The data structure diagram is a representation of the organisation of records and is not an exact representation of entities and relationships." Several other authors also support his program: Kent in "Data and Reality":[11] "One thing we ought to have clear in our minds at the outset of a modelling endeavour is whether we are intent on describing a portion of "reality" (some human enterprise) or a data processing activity." Abrial in "Data Semantics": "... the so called "logical" definition and manipulation of data are still influenced (sometimes unconsciously) by the "physical" storage and retrieval mechanisms currently available on computer systems." Stamper: "They pretend to describe entity types, but the vocabulary is from data processing: fields, data items, values. Naming rules don't reflect the conventions we use for naming people and things; they reflect instead techniques for locating records in files." In Jackson's words: "The developer begins by creating a model of the reality with which the system is concerned, the reality which furnishes its [the system's] subject matter ..."

Elmasri, Navathe: "The ER model concepts are designed to be closer to the user's perception of data and are not meant to describe the way in which data will be stored in the computer." A semantic model is a model of concepts; it is sometimes called a "platform independent model". It is an intensional model. At the latest since Carnap, it is well known that:[12] "...the full meaning of a concept is constituted by two aspects, its intension and its extension. The first part comprises the embedding of a concept in the world of concepts as a whole, i.e. the totality of all relations to other concepts. The second part establishes the referential meaning of the concept, i.e. its counterpart in the real or in a possible world". An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". The UML specification explicitly states that associations in class models are extensional, and this is in fact self-evident by considering the extensive array of additional "adornments" provided by the specification over and above those provided by any of the prior candidate "semantic modelling languages" ("UML as a Data Modeling Notation, Part 2" [13]).


Diagramming conventions
Chen's notation for entity–relationship modeling uses rectangles to represent entities, and diamonds to represent relationships appropriate for first-class objects: they can have attributes and relationships of their own. Entity sets are drawn as rectangles, relationship sets as diamonds. If an entity set participates in a relationship set, they are connected with a line. Attributes are drawn as ovals and are connected with a line to exactly one entity or relationship set. Cardinality constraints are expressed as follows:
a double line indicates a participation constraint, totality or surjectivity: all entities in the entity set must participate in at least one relationship in the relationship set;
an arrow from entity set to relationship set indicates a key constraint, i.e. injectivity: each entity of the entity set can participate in at most one relationship in the relationship set;
a thick line indicates both, i.e. bijectivity: each entity in the entity set is involved in exactly one relationship.

Various methods of representing the same one to many relationship. In each case, the diagram shows the relationship between a person and a place of birth: each person must have been born at one, and only one, location, but each location may have had zero or more people born at it.

An underlined name of an attribute indicates that it is a key: two different entities or relationships with this attribute always have different values for this attribute. Attributes are often omitted as they can clutter up a diagram; other diagram techniques often list entity attributes within the rectangles drawn for entity sets. Related diagramming convention techniques:
Bachman notation
Barker's Notation
EXPRESS
IDEF1X[14]
Two related entities shown using Crow's Foot notation. In this example, an optional relationship is shown between Artist and Song; the symbols closest to the song entity represents "zero, one, or many", whereas a song has "one and only one" Artist. The former is therefore read as, an Artist (can) perform(s) "zero, one, or many" song(s).


Martin notation
(min, max)-notation of Jean-Raymond Abrial in 1974
UML class diagrams
Merise
Object-Role Modeling

Crow's Foot Notation


Crow's Foot notation is used in Barker's Notation, SSADM and Information Engineering. Crow's Foot diagrams represent entities as boxes, and relationships as lines between the boxes; different shapes at the ends of these lines represent the cardinality of the relationship. Crow's Foot notation was used in the 1980s by the consultancy practice CACI. Many of the consultants at CACI (including Richard Barker) subsequently moved to Oracle UK, where they developed the early versions of Oracle's CASE tools, introducing the notation to a wider audience. The following tools use Crow's Foot notation: ARIS, System Architect, Visio, PowerDesigner, Toad Data Modeler, DeZign for Databases, Devgems Data Modeler, OmniGraffle, MySQL Workbench and SQL Developer Data Modeler. CA's ICASE tool, CA Gen (also known as the Information Engineering Facility), also uses this notation.

ER diagramming tools
There are many ER diagramming tools. Free software ER diagramming tools that can interpret and generate ER models and SQL and do database analysis are MySQL Workbench (formerly DBDesigner), and Open ModelSphere (open-source). A freeware ER tool that can generate database and application layer code (webservices) is the RISE Editor. Proprietary ER diagramming tools are Avolution, dbForge Studio for MySQL, ER/Studio, ERwin, MagicDraw, MEGA International, ModelRight, Navicat Data Modeler, OmniGraffle, Oracle Designer, PowerDesigner, Rational Rose, Sparx Enterprise Architect, SQLyog, System Architect, Toad Data Modeler, and Visual Paradigm. Free software diagram tools just draw the shapes without having any knowledge of what they mean, nor do they generate SQL. These include Creately, yEd, LucidChart, Kivio, and Dia.



Limitations
ER models assume information content that can readily be represented in a relational database. They describe only a relational structure for this information. Hence, they are inadequate for systems in which the information cannot readily be represented in relational form, such as with semi-structured data. Furthermore, for many systems, the possible changes to the information contained are nontrivial and important enough to warrant explicit specification. Some authors have extended ER modeling with constructs to represent change, an approach supported by the original author;[15] an example is Anchor Modeling. An alternative is to model change separately, using a process modeling technique. Additional techniques can be used for other aspects of systems. For instance, ER models roughly correspond to just 1 of the 14 different modeling techniques offered by UML. Another limitation: ER modeling is aimed at specifying information from scratch. This suits the design of new, standalone information systems, but is of less help in integrating pre-existing information sources that already define their own data representations in detail. Even where it is suitable in principle, ER modeling is rarely used as a separate activity. One reason for this is today's abundance of tools to support diagramming and other design support directly on relational database management systems. These tools can readily extract database diagrams that are very close to ER diagrams from existing databases, and they provide alternative views on the information contained in such diagrams. In a survey, Brodie and Liu[16] could not find a single instance of entity–relationship modeling inside a sample of ten Fortune 100 companies. Badia and Lemire[17] blame this lack of use on the lack of guidance but also on the lack of benefits, such as lack of support for data integration. Also, the enhanced entity–relationship model (EER modeling) introduces several concepts which are not present in ER modeling.

References
[1] "The Entity Relationship Model: Toward a Unified View of Data" (http:/ / citeseerx. ist. psu. edu/ viewdoc/ summary?doi=10. 1. 1. 123. 1085) for entityrelationship modeling. [2] A.P.G. Brown, "Modelling a Real-World System and Designing a Schema to Represent It", in Douque and Nijssen (eds.), Data Base Description, North-Holland, 1975, ISBN 0-7204-2833-5. [3] Designing a Logical Database: Supertypes and Subtypes (http:/ / technet. microsoft. com/ en-us/ library/ cc505839. aspx) [4] Paul Beynon-Davies (2004). Database Systems. Houndmills, Basingstoke, UK: Palgrave [5] Hubert Tardieu, Arnold Rochfeld and Ren Colletti La methode MERISE: Principes et outils (Paperback - 1983) [6] Elmasri, Ramez, B. Shamkant, Navathe, Fundamentals of Database Systems, third ed., Addison-Wesley, Menlo Park, CA, USA, 2000. [7] ER 2004 : 23rd International Conference on Conceptual Modeling, Shanghai, China, November 8-12, 2004 (http:/ / books. google. com/ books?id=odZK99osY1EC& pg=PA52& img=1& pgis=1& dq=genova& sig=ACfU3U3tDC_q8WOMqUJW4EZCa5YQywoYLw& edge=0) [8] A Formal Treatment of UML Class Diagrams as an Efficient Method for Configuration Management 2007 (http:/ / publik. tuwien. ac. at/ files/ pub-inf_4582. pdf) [9] James Dullea, Il-Yeol Song, Ioanna Lamprou - An analysis of structural validity in entity-relationship modeling 2002 (http:/ / www. ischool. drexel. edu/ faculty/ song/ publications/ p_DKE_03_Validity. pdf) [10] "Reasoning about participation constraints and Chen's constraints" S Hartmann - 2003 (http:/ / www. acs. org. au/ documents/ public/ crpit/ CRPITV17Hartmann. pdf) [11] http:/ / www. bkent. net/ Doc/ darxrp. htm [12] http:/ / wenku. baidu. com/ view/ 8048e7bb1a37f111f1855b22. html [13] http:/ / www. tdan. com/ view-articles/ 8589 [14] IDEF1X (https:/ / idbms. navo. navy. mil/ DataModel/ IDEF1X. html) [15] P. Chen. Suggested research directions for a new frontier: Active conceptual modeling (http:/ / www. springerlink. com/ content/ 5160x2634402663r/ ). ER 2006, volume 4215 of Lecture Notes in Computer Science, pages 14. Springer Berlin / Heidelberg, 2006. [16] M. L. Brodie and J. T. Liu. The power and limits of relational technology in the age of information ecosystems (http:/ / www. michaelbrodie. com/ documents/ The Power and Limits of Relational Technology In the Age of Information Ecosystems V2. pdf). On The Move Federated Conferences, 2010.

[17] A. Badia and D. Lemire. A call to arms: revisiting database design (http://dl.acm.org/citation.cfm?id=2070750). SIGMOD Record 40, 3 (November 2011), 61-69.


Further reading
Richard Barker (1990). CASE Method: Tasks and Deliverables. Wokingham, England: Addison-Wesley.
Paul Beynon-Davies (2004). Database Systems. Houndmills, Basingstoke, UK: Palgrave.
Peter Chen (March 1976). "The Entity-Relationship Model - Toward a Unified View of Data" (http://csc.lsu.edu/news/erd.pdf). ACM Transactions on Database Systems 1 (1): 9-36. doi:10.1145/320434.320440. ISSN 0362-5915.

External links
Entity Relationship Modeling (http://www.devarticles.com/c/a/Development-Cycles/Entity-Relationship-Modeling/) - article from Development Cycles
Entity Relationship Modelling (http://www.databasedesign.co.uk/bookdatabasesafirstcourse/chap3/chap3.htm)
An Entity Relationship Diagram Example (http://rapidapplicationdevelopment.blogspot.com/2007/06/entity-relationship-diagram-example.html) - demonstrates the crow's feet notation by way of an example
"Entity-Relationship Modeling: Historical Events, Future Trends, and Lessons Learned" (http://bit.csc.lsu.edu/~chen/pdf/Chen_Pioneers.pdf) by Peter Chen
"English, Chinese and ER diagrams" (http://bit.csc.lsu.edu/~chen/pdf/ER_C.pdf) by Peter Chen
Case study: E-R diagram for Acme Fashion Supplies (http://www.cilco.co.uk/briefing-studies/acme-fashion-supplies-feasibility-study/slides/logical-data-structure.html) by Mark H. Ridley
Logical Data Structures (LDSs) - Getting started (http://www.cems.uwe.ac.uk/~tdrewry/lds.htm) by Tony Drewry
Introduction to Data Modeling (http://www.utexas.edu/its/archive/windows/database/datamodeling/index.html)
Lecture by Prof. Dr. Muhittin GÖKMEN (http://www3.itu.edu.tr/~gokmen/SE-lecture-5.pdf), Department of Computer Engineering, Istanbul Technical University
ER-Diagram Convention (http://www.scribd.com/doc/3053988/ER-Diagram-convention)
Crow's Foot Notation (http://www2.cs.uregina.ca/~bernatja/crowsfoot.html)
"Articulated Entity Relationship (AER) Diagram for Complete Automation of Relational Database Normalization" (http://airccse.org/journal/ijdms/papers/0510ijdms06.pdf) by P. S. Dhabe, Dr. M. S. Patwardhan and Asavari A. Deshpande


Object-oriented modeling
Object-oriented modeling (OOM) is a modeling paradigm, closely associated with object-oriented programming (OOP), that is mainly used in computer programming. Prior to the rise of OOM, the dominant paradigm was procedural programming, which emphasized the use of discrete reusable code blocks that could stand on their own, take variables, perform a function on them, and return values. The object-oriented paradigm helps the programmer address the complexity of a problem domain by considering the problem not as a set of functions that can be performed, but primarily as a set of related, interacting objects. The modeling task is then to specify, for a specific context, those objects (or the class the objects belong to) along with the set of properties and methods shared by all members of the class. For more discussion, see object-oriented analysis and design and object-oriented programming. The description of these objects is a schema.

As an example, in a model of a payroll system a Company is an object and an Employee is another object; Employment is a relationship or association. An Employee class (or object, for simplicity) has attributes such as Name and Birthdate, and methods such as Promote or Raise. The association itself may be considered an object, having attributes, or qualifiers, such as Position. A model description or schema may grow in complexity to require a notation. Many notations have been proposed, based on different paradigms; they diverged and then converged in the most popular one, known as UML. An informal description, or a schema notation, is translated by the programmer (or by a CASE tool, in the case of a schema notation created using a module specific to the CASE tool application) into a specific programming language that supports object-oriented programming (or a class type), a declarative language, or a database schema.
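A minimal Python sketch of the payroll example above. The class, attribute and method names (Company, Employee, Name, Birthdate, Promote) come from the prose; everything else, including folding the association's Position qualifier into Employee for brevity, is an assumption of this sketch rather than a prescribed design:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Employee:
        # Attributes of the Employee class
        name: str
        birthdate: date
        position: str = "Labourer"   # simplification: the association qualifier lives here

        # A method of the Employee class
        def promote(self, new_position: str) -> None:
            self.position = new_position

    @dataclass
    class Company:
        name: str
        # The Employment association, modeled here as a simple collection
        employees: list = field(default_factory=list)

        def employ(self, employee: Employee) -> None:
            self.employees.append(employee)

    acme = Company("Acme")
    alice = Employee("Alice", date(1980, 5, 1))
    acme.employ(alice)
    alice.promote("Manager")

In a fuller model, the Employment association itself would be promoted to a class so that qualifiers such as Position belong to the relationship rather than to either participant.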

Logical data model


A logical data model (LDM) in systems engineering is a representation of an organization's data, organized in terms of entities and relationships, that is independent of any particular data management technology.

Overview
Logical data models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. Once validated and approved, the logical data model can become the basis of a physical data model and inform the design of a database.

Logical data models should be based on the structures identified in a preceding conceptual data model, since this describes the semantics of the information context, which the logical model should also reflect. Even so, since the logical data model anticipates implementation on a specific computing system, its content is adjusted to achieve certain efficiencies.

The term 'logical data model' is sometimes used as a synonym of 'domain model', or as an alternative to it. While the two concepts are closely related and have overlapping goals, a domain model is more focused on capturing the concepts of the problem domain than on the structure of the data associated with that domain.


History
When ANSI first laid out the idea of a logical schema in 1975,[2] the choices were hierarchical and network. The relational model where data is described in terms of tables and columns had just been recognized as a data organization theory but no software existed to support that approach. Since that time, an object-oriented approach to data modelling where data is described in terms of classes, attributes, and associations has also been introduced.
[Figure: the ANSI/SPARC three-level architecture, which "shows that a data model can be an external model (or view), a conceptual model, or a physical model. This is not the only way to look at data models, but it is a useful way, particularly when comparing models".[1]]

Logical data model topics


Reasons for building a logical data model

- Helps common understanding of business data elements and requirements
- Provides a foundation for designing a database
- Facilitates avoidance of data redundancy and thus prevents data and business transaction inconsistency
- Facilitates data re-use and sharing
- Decreases development and maintenance time and cost
- Confirms a logical process model and helps impact analysis

Logical & Physical Data Model


A logical data model is sometimes incorrectly called a physical data model, which is not what the ANSI people had in mind. The physical design of a database involves deep use of particular database management technology; for example, a table/column design could be implemented on a collection of computers located in different parts of the world. That is the domain of the physical model. Logical and physical data models are very different in their objectives, goals and content. Key differences are noted below.
Logical data model:
- Includes entities (tables), attributes (columns/fields) and relationships (keys)
- Uses business names for entities and attributes
- Is independent of technology (platform, DBMS)
- Is normalized to fourth normal form (4NF)

Physical data model:
- Includes tables, columns, keys, data types, validation rules, database triggers, stored procedures, domains and access constraints
- Uses more specific, less generic names for tables and columns (such as abbreviated column names), limited by the database management system (DBMS) and any company-defined standards
- Includes primary keys and indices for fast data access
- May be de-normalized to meet performance requirements, based on the nature of the database; if the nature of the database is Online Transaction Processing (OLTP) or an Operational Data Store (ODS) it is usually not de-normalized, while de-normalization is common in data warehouses
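A minimal sketch of the logical/physical distinction using Python's built-in sqlite3 module. The entity Employee and its business-named attributes stand for the logical model; the DDL below commits to DBMS-specific, abbreviated names, concrete data types, keys and an index. All table and column names here are illustrative assumptions:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Logical model (conceptually): entity Employee with attributes
    # Employee Name and Birth Date, related to a Department entity.
    # Physical model: abbreviated names, concrete types, keys, an index.
    conn.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_nm TEXT)")
    conn.execute("""
        CREATE TABLE emp (
            emp_id   INTEGER PRIMARY KEY,
            emp_name TEXT NOT NULL,
            birth_dt TEXT,                            -- SQLite has no DATE type
            dept_id  INTEGER REFERENCES dept(dept_id) -- the relationship as a key
        )
    """)
    # Physical-only concern: an index for fast access by name
    conn.execute("CREATE INDEX idx_emp_name ON emp(emp_name)")

None of these physical choices (column abbreviations, the TEXT date workaround, the index) appear in the logical model, which is exactly the separation the comparison above describes.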


References
[1] Matthew West and Julian Fowler (1999). Developing High Quality Data Models (http://www.matthew-west.org.uk/documents/princ03.pdf). The European Process Industries STEP Technical Liaison Executive (EPISTLE).
[2] American National Standards Institute. 1975. ANSI/X3/SPARC Study Group on Data Base Management Systems; Interim Report. FDT (Bulletin of ACM SIGMOD) 7:2.

External links
Building a Logical Data Model (http://replay.web.archive.org/20080509063521/http://www.dbmsmag.com/9506d16.html) by George Tillmann, DBMS, June 1995.

RDF query language


An RDF query language is a computer language, specifically a query language for databases, able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. SPARQL has emerged as the de facto RDF query language and is a W3C Recommendation.[1] Released as a Candidate Recommendation in April 2006, it returned to Working Draft status in October 2006 due to open issues, and returned to Candidate Recommendation status in June 2007.[2] On 12 November 2007 SPARQL advanced to Proposed Recommendation,[3] and on 15 January 2008 it became a W3C Recommendation.[4]
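As a minimal sketch of what querying RDF with SPARQL looks like in practice, the snippet below uses the open-source Python library rdflib; the Turtle data and the example.org names are illustrative assumptions, not part of any cited specification:

    from rdflib import Graph

    # A few illustrative RDF statements in Turtle syntax
    data = """
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:alice ex:knows ex:carol .
    """

    g = Graph()
    g.parse(data=data, format="turtle")

    # SPARQL SELECT: whom does ex:alice know?
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?person WHERE { ex:alice ex:knows ?person . }
    """)
    for row in results:
        print(row.person)   # http://example.org/bob, http://example.org/carol

The same SELECT/WHERE pattern-matching style carries over to SPARQL endpoints over large triple stores; only the transport differs.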

Other RDF query languages


- DQL, XML-based, queries and results expressed in DAML+OIL
- N3QL, based on Notation 3
- R-DEVICE
- RDFQ, XML-based
- RDQ, SQL-like
- RDQL, SQL-like
- RQL/RVL, SQL-like
- SeRQL, SQL-like, similar to RQL/RVL
- Versa (query language), compact (non-SQL-like) syntax, solely implemented in 4Suite (Python)
- XUL, which has a template element [5] in which to declare rules for matching data in RDF; XUL uses RDF extensively for data binding
- Adenine, a programming language written in RDF

External links
RDF Query specification [6]
RDF query language survey [7]
A Comparison of (some) RDF Query Languages [8]
RDF query use cases, including query language samples [9]
SPARQL [10]


References
[1] Prud'hommeaux, Eric; Seaborne, Andy (15 January 2008). "SPARQL Query Language for RDF" (http://www.w3.org/TR/rdf-sparql-query/). W3C. World Wide Web Consortium.
[2] Herman, Ivan (15 June 2007). "SPARQL is a Candidate Recommendation" (http://www.w3.org/blog/SW/2007/06/15/sparql_is_a_candidate_recommendation). Semantic Web Activity News. World Wide Web Consortium.
[3] "Three SPARQL Proposed Recommendations: SPARQL Query Language for RDF; Query Results XML Format; Protocol for RDF" (http://www.w3.org/News/2007#item247). W3C News in 2007. World Wide Web Consortium. 13 November 2007.
[4] Herman, Ivan (15 January 2008). "SPARQL is a Recommendation" (http://www.w3.org/blog/SW/2008/01/15/sparql_is_a_recommendation). Semantic Web Activity News. World Wide Web Consortium.
[5] http://developer.mozilla.org/en/docs/XUL:Template_Guide:Introduction
[6] http://www.w3.org/TandS/QL/QL98/pp/rdfquery.html
[7] http://www.w3.org/2001/11/13-RDF-Query-Rules/
[8] http://web.archive.org/web/20080702143156/http://www.aifb.uni-karlsruhe.de/WBS/pha/rdf-query/
[9] http://rdfstore.sourceforge.net/2002/06/24/rdf-query/
[10] http://www.w3.org/TR/rdf-sparql-query/

Web Ontology Language


OWL Web Ontology Language
  Current status: Published
  Year started: 2002
  Editors: Mike Dean, Guus Schreiber
  Base standards: Resource Description Framework, RDFS
  Domain: Semantic Web
  Abbreviation: OWL
  Website: OWL Reference [1]

OWL 2 Web Ontology Language
  Current status: Published
  Year started: 2008
  Editors: W3C OWL Working Group
  Base standards: Resource Description Framework, RDFS
  Domain: Semantic Web
  Abbreviation: OWL 2
  Website: OWL 2 Overview [2]

The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. The languages are characterised by formal semantics and RDF/XML-based serializations for the Semantic Web. OWL is endorsed by the World Wide Web Consortium (W3C)[3] and has attracted academic, medical and commercial interest.

In October 2007, a new W3C working group[4] was started to extend OWL with several new features as proposed in the OWL 1.1 member submission.[5] W3C announced the new version of OWL on 27 October 2009.[6] This new version, called OWL 2, soon found its way into semantic editors such as Protégé and semantic reasoners such as Pellet,[7][8] RacerPro,[9] FaCT++[10][11] and HermiT.[12]

The OWL family contains many species, serializations, syntaxes and specifications with similar names. OWL and OWL2 are used here to refer to the 2004 and 2009 specifications, respectively. Full species names will be used, including the specification version (for example, OWL2 EL). When referring more generally, OWL Family will be used.

History
Early ontology languages
Further information: Knowledge_representation#History_of_knowledge_representation_and_reasoning There is a long history of ontological development in philosophy and computer science. Since the 1990s, a number of research efforts have explored how the idea of knowledge representation (KR) from artificial intelligence (AI) could be made useful on the World Wide Web. These included languages based on HTML (called SHOE), based on XML (called XOL, later OIL), and various frame-based KR languages and knowledge acquisition approaches.

Ontology languages for the web


In 2000 in the United States, DARPA started development of DAML led by James Hendler.[13] In March 2001, the Joint EU/US Committee on Agent Markup Languages decided that DAML should be merged with OIL.[13] The EU/US ad hoc Joint Working Group on Agent Markup Languages was convened to develop DAML+OIL as a web ontology language. This group was jointly funded by the DARPA (under the DAML program) and the European Union's Information Society Technologies (IST) funding project. DAML+OIL was intended to be a thin layer above RDFS,[13] with formal semantics based on a description logic (DL).[14] OWL started as a research-based[15] revision of DAML+OIL aimed at the semantic web.

Semantic web standards


"The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries." - World Wide Web Consortium, W3C Semantic Web Activity[16]

Further information: Semantic Web

"RDF schema: a declarative representation language influenced by ideas from knowledge representation" - World Wide Web Consortium, Metadata Activity[17]

In the late 1990s, the World Wide Web Consortium (W3C) Metadata Activity started work on RDF Schema (RDFS), a language for RDF vocabulary sharing. RDF became a W3C Recommendation in February 1999, and RDFS a Candidate Recommendation in March 2000.[17] In February 2001, the Semantic Web Activity replaced the Metadata Activity.[17] In 2004 (as part of a wider revision of RDF) RDFS became a W3C Recommendation.[18] Though RDFS provides some support for ontology specification, the need for a more expressive ontology language had become clear.[19]

Further information: RDFS

Web-Ontology Working Group

"As of Monday, the 31st of May, our working group will officially come to an end. We have achieved all that we were chartered to do, and I believe our work is being quite well appreciated." - James Hendler and Guus Schreiber, So Long and thanks for all the fish[20]

The World Wide Web Consortium (W3C) created the Web-Ontology Working Group as part of their Semantic Web Activity. It began work on November 1, 2001 with co-chairs James Hendler and Guus Schreiber.[20] The first working drafts of the abstract syntax, reference and synopsis were published in July 2002.[20] OWL became a formal W3C recommendation on February 10, 2004 and the working group was disbanded on May 31, 2004.[20]

OWL Working Group

In 2005, at the OWL Experiences And Directions Workshop, a consensus formed that recent advances in description logic would allow a more expressive revision to satisfy user requirements more comprehensively whilst retaining good computational properties. In December 2006, the OWL 1.1 Member Submission[21] was made to the W3C. The W3C chartered the OWL Working Group as part of the Semantic Web Activity in September 2007. In April 2008, this group decided to call the new language OWL2, indicating a substantial revision.[22] OWL 2 became a W3C recommendation in October 2009. OWL 2 introduces profiles to improve scalability in typical applications.[6]


Acronym
"Why not be inconsistent in at least one aspect of a language which is all about consistency?" - Guus Schreiber, Why OWL and not WOL?[23]

The natural acronym for Web Ontology Language would be WOL instead of OWL. Although the character Owl from Winnie the Pooh wrote his name WOL, the acronym OWL was proposed without reference to that character, as an easily pronounced acronym that would yield good logos, suggest wisdom, and honor William A. Martin's One World Language knowledge representation project from the 1970s.

Adoption
A survey (published in 2006) of ontologies available on the web collected 688 OWL ontologies. Of these, 199 were OWL Lite, 149 were OWL DL and 337 were OWL Full (by syntax). They found that 19 ontologies had in excess of 2,000 classes, and that 6 had more than 10,000. The same survey collected 587 RDFS vocabularies.[24]

Ontologies
Introduction
"An ontology is an explicit specification of a conceptualization." - Tom Gruber, A Translation Approach to Portable Ontology Specifications[25]

The data described by an ontology in the OWL family is interpreted as a set of "individuals" and a set of "property assertions" which relate these individuals to each other. An ontology consists of a set of axioms which place constraints on sets of individuals (called "classes") and the types of relationships permitted between them. These axioms provide semantics by allowing systems to infer additional information based on the data explicitly provided. A full introduction to the expressive power of OWL is provided in the W3C's OWL Guide.


Example
An ontology describing families might include axioms stating that a "hasMother" property is only present between two individuals when "hasParent" is also present, and individuals of class "HasTypeOBlood" are never related via "hasParent" to members of the "HasTypeABBlood" class. If it is stated that the individual Harriet is related via "hasMother" to the individual Sue, and that Harriet is a member of the "HasTypeOBlood" class, then it can be inferred that Sue is not a member of "HasTypeABBlood".
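The family axioms can be written down concretely. The sketch below uses the Python library rdflib to assert a simplified version: the subproperty axiom and a plain class disjointness (the stronger constraint in the prose, forbidding hasParent links between the two blood-type classes, would need an OWL property restriction). Drawing the inference about Sue requires a separate OWL reasoner, which is not shown. The names are taken from the prose; the namespace URI is an assumption:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    fam = Namespace("http://example.org/family#")
    g = Graph()
    g.bind("fam", fam)

    # Axiom: hasMother implies hasParent
    g.add((fam.hasMother, RDFS.subPropertyOf, fam.hasParent))
    # Simplification: the two blood-type classes share no members
    g.add((fam.HasTypeOBlood, OWL.disjointWith, fam.HasTypeABBlood))

    # The asserted facts about Harriet and Sue
    g.add((fam.Harriet, fam.hasMother, fam.Sue))
    g.add((fam.Harriet, RDF.type, fam.HasTypeOBlood))

    print(g.serialize(format="turtle"))

Given these axioms, a DL reasoner can infer that Harriet is related to Sue via hasParent; ruling Sue out of HasTypeABBlood additionally needs the stronger restriction described in the prose.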

Species
OWL sublanguages
The W3C-endorsed OWL specification includes the definition of three variants of OWL, with different levels of expressiveness. These are OWL Lite, OWL DL and OWL Full (ordered by increasing expressiveness). Each of these sublanguages is a syntactic extension of its simpler predecessor. The following set of relations hold; their inverses do not.

- Every legal OWL Lite ontology is a legal OWL DL ontology.
- Every legal OWL DL ontology is a legal OWL Full ontology.
- Every valid OWL Lite conclusion is a valid OWL DL conclusion.
- Every valid OWL DL conclusion is a valid OWL Full conclusion.

OWL Lite

OWL Lite was originally intended to support those users primarily needing a classification hierarchy and simple constraints. For example, while it supports cardinality constraints, it only permits cardinality values of 0 or 1. It was hoped that it would be simpler to provide tool support for OWL Lite than for its more expressive relatives, allowing a quick migration path for systems utilizing thesauri and other taxonomies. In practice, however, most of the expressiveness constraints placed on OWL Lite amount to little more than syntactic inconveniences: most of the constructs available in OWL DL can be built using complex combinations of OWL Lite features.[22] Development of OWL Lite tools has thus proven almost as difficult as development of tools for OWL DL, and OWL Lite is not widely used.[22]

OWL DL

OWL DL was designed to provide the maximum expressiveness possible while retaining computational completeness (either φ or ¬φ belongs), decidability (there is an effective procedure to determine whether φ is derivable or not), and the availability of practical reasoning algorithms. OWL DL includes all OWL language constructs, but they can be used only under certain restrictions (for example, number restrictions may not be placed upon properties which are declared to be transitive). OWL DL is so named due to its correspondence with description logic, a field of research that has studied the logics that form the formal foundation of OWL.

OWL Full

OWL Full is based on a different semantics from OWL Lite or OWL DL, and was designed to preserve some compatibility with RDF Schema. For example, in OWL Full a class can be treated simultaneously as a collection of individuals and as an individual in its own right; this is not permitted in OWL DL. OWL Full allows an ontology to augment the meaning of the pre-defined (RDF or OWL) vocabulary. OWL Full is undecidable, so no reasoning software is able to perform complete reasoning for it.


OWL2 profiles
OWL 2 defines three sublanguages, called profiles: OWL 2 EL is a fragment with polynomial-time reasoning complexity; OWL 2 QL is designed to enable easier access to, and querying of, data stored in databases; OWL 2 RL is a rule-based subset of OWL 2.

Syntax
The OWL family of languages supports a variety of syntaxes. It is useful to distinguish high level syntaxes aimed at specification from exchange syntaxes more suitable for general use.

High level
These are close to the ontology structure of languages in the OWL family.

OWL abstract syntax

This high level syntax is used to specify the OWL ontology structure and semantics.[26] The OWL abstract syntax presents an ontology as a sequence of annotations, axioms and facts. Annotations carry machine- and human-oriented meta-data. Information about the classes, properties and individuals that compose the ontology is contained in axioms and facts only. Each class, property and individual is either anonymous or identified by a URI reference. Facts state data either about an individual or about a pair of individual identifiers (that the objects identified are distinct or the same). Axioms specify the characteristics of classes and properties. This style is similar to frame languages, and quite dissimilar to well-known syntaxes for description logics (DLs) and Resource Description Framework (RDF).[26] Sean Bechhofer et al. argue that though this syntax is hard to parse, it is quite concrete; they conclude that the name abstract syntax may be somewhat misleading.[27]

OWL2 functional syntax

This syntax closely follows the structure of an OWL2 ontology. It is used by OWL2 to specify semantics, mappings to exchange syntaxes and profiles.[28]

Exchange syntaxes

OWL RDF/XML Serialization
  Filename extensions: .owx, .owl, .rdf
  Internet media types: application/owl+xml, application/rdf+xml[29]
  Developed by: World Wide Web Consortium
  Standards: OWL 2 XML Serialization [30] (October 27, 2009); OWL Reference [31] (February 10, 2004)
  Open format?: Yes

RDF syntaxes

Syntactic mappings into RDF are specified[26][32] for languages in the OWL family. Several RDF serialization formats have been devised; each leads to a syntax for languages in the OWL family through this mapping. RDF/XML is normative.[26][32]

OWL2 XML syntax

OWL2 specifies an XML serialization that closely models the structure of an OWL2 ontology.[33]

Manchester Syntax

The Manchester Syntax is a compact, human-readable syntax with a style close to frame languages. Variations are available for OWL and OWL2. Not all OWL and OWL2 ontologies can be expressed in this syntax.[34]


Examples
The W3C OWL 2 Web Ontology Language documents provide syntax examples.[35]

Tea ontology

Consider an ontology for tea based on a Tea class. Every OWL ontology must be identified by a URI (http://www.example.org/tea.owl, say). This is enough to get a flavour of the syntax. To save space below, preambles and prefix definitions have been skipped.

OWL2 Functional Syntax

    Ontology(<http://example.com/tea.owl>
      Declaration( Class( :Tea ) )
    )

OWL2 XML Syntax

    <Ontology ontologyIRI="http://example.com/tea.owl" ...>
      <Prefix name="owl" IRI="http://www.w3.org/2002/07/owl#"/>
      <Declaration>
        <Class IRI="Tea"/>
      </Declaration>
    </Ontology>

Manchester Syntax

    Ontology: <http://example.com/tea.owl>
    Class: Tea

RDF/XML syntax

    <rdf:RDF ...>
      <owl:Ontology rdf:about=""/>
      <owl:Class rdf:about="#Tea"/>
    </rdf:RDF>

RDF/Turtle

    <http://example.com/tea.owl> rdf:type owl:Ontology .
    :Tea rdf:type owl:Class .
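As a cross-check of the RDF/Turtle form, the sketch below loads it with the Python library rdflib and lists the declared classes. The @prefix lines, which the examples above skip to save space, are restored here as assumptions:

    from rdflib import Graph
    from rdflib.namespace import OWL, RDF

    data = """
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix :    <http://example.com/tea.owl#> .

    <http://example.com/tea.owl> rdf:type owl:Ontology .
    :Tea rdf:type owl:Class .
    """

    g = Graph()
    g.parse(data=data, format="turtle")
    for cls in g.subjects(RDF.type, OWL.Class):
        print(cls)   # http://example.com/tea.owl#Tea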


Semantics
Relation to description logic
"In the beginning, IS-A was quite simple. Today, however, there are almost as many meanings for this inheritance link as there are knowledge-representation systems." - Ronald J. Brachman, What ISA is and isn't[36]

Early attempts to build large ontologies were plagued by a lack of clear definitions. Members of the OWL family have model-theoretic formal semantics, and so have strong logical foundations. Description logics (DLs) are a family of logics that are decidable fragments of first-order logic with attractive and well-understood computational properties. OWL DL and OWL Lite semantics are based on DLs.[37] They combine a syntax for describing and exchanging ontologies, and formal semantics that gives them meaning. For example, OWL DL corresponds to the SHOIN(D) description logic, while OWL 2 corresponds to the SROIQ(D) logic.[38] Sound, complete, terminating reasoners (i.e. systems which are guaranteed to derive every consequence of the knowledge in an ontology) exist for these DLs.

Relation To RDFS
OWL Full is intended to be compatible with RDF Schema (RDFS), and to be capable of augmenting the meanings of existing Resource Description Framework (RDF) vocabulary.[39] A model theory describes the formal semantics for RDF.[40] This interpretation provides the meaning of RDF and RDFS vocabulary. So, the meaning of OWL Full ontologies is defined by extension of the RDFS meaning, and OWL Full is a semantic extension of RDF.[41]

Open world assumption


"[The closed] world assumption implies that everything we don't know is false, while the open world assumption states that everything we don't know is undefined." - Stefano Mazzocchi, Closed World vs. Open World: the First Semantic Web Battle[42]

The languages in the OWL family use the open world assumption. Under the open world assumption, if a statement cannot be proven to be true with current knowledge, we cannot draw the conclusion that the statement is false.

Contrast to other languages

A relational database consists of sets of tuples with the same attributes. SQL is a query and management language for relational databases. Prolog is a logical programming language. Both use the closed world assumption.
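A small illustration of the contrast, using Python's built-in sqlite3 module (the table and names are assumptions of this sketch). Under SQL's closed world assumption, the absence of a row is treated as falsity; under OWL's open world assumption, the same absence would merely mean the fact is unknown:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE knows (person TEXT, friend TEXT)")
    conn.execute("INSERT INTO knows VALUES ('alice', 'bob')")

    # Closed world: no row for ('alice', 'carol'), so SQL answers "no".
    row = conn.execute(
        "SELECT 1 FROM knows WHERE person = 'alice' AND friend = 'carol'"
    ).fetchone()
    print(row is None)   # True - SQL concludes alice does not know carol

    # An OWL reasoner given only the triple (alice knows bob) would treat
    # "alice knows carol" as unknown, not false.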

Terminology
Languages in the OWL family are capable of creating classes and properties, and of defining instances and their operations.

Instances
An instance is an object. It corresponds to a description logic individual.

Classes
A class is a collection of objects. It corresponds to a description logic (DL) concept. A class may contain individuals, instances of the class. A class may have any number of instances; an instance may belong to none, one or more classes. A class may be a subclass of another, inheriting characteristics from its parent superclass. This corresponds to logical subsumption and DL concept inclusion, notated ⊑.

All classes are subclasses of owl:Thing (DL top, notated ⊤), the root class. All classes are subclassed by owl:Nothing (DL bottom, notated ⊥), the empty class; no instances are members of owl:Nothing. Modelers use owl:Thing and owl:Nothing to assert facts about all or no instances.[43]

Example

For example, Employee could be a subclass of owl:Thing, while Dealer, Manager and Labourer could all be subclasses of Employee.
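A sketch of the Employee example as RDF triples, using the Python library rdflib; the namespace URI is an assumption:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDFS

    ex = Namespace("http://example.org/staff#")
    g = Graph()

    # Employee is a subclass of the root class owl:Thing;
    # Dealer, Manager and Labourer are subclasses of Employee.
    g.add((ex.Employee, RDFS.subClassOf, OWL.Thing))
    for cls in (ex.Dealer, ex.Manager, ex.Labourer):
        g.add((cls, RDFS.subClassOf, ex.Employee))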

Properties
A property is a directed binary relation that specifies class characteristics. It corresponds to a description logic role. Properties are attributes of instances and sometimes act as data values or links to other instances. Properties may possess logical capabilities, such as being transitive, symmetric, inverse or functional. Properties may also have domains and ranges.

Datatype properties

Datatype properties are relations between instances of classes and RDF literals or XML Schema datatypes. For example, modelName (a String datatype) could be a property of a Manufacturer class. They are formulated using the owl:DatatypeProperty type.

Object properties

Object properties are relations between instances of two classes. For example, ownedBy may be an object property of the Vehicle class whose range is the class Person. They are formulated using owl:ObjectProperty.
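The two property kinds can be declared directly as triples; a minimal rdflib sketch, with the vehicle namespace assumed:

    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS, XSD

    ex = Namespace("http://example.org/vehicles#")
    g = Graph()
    g.bind("ex", ex)

    # ownedBy: an object property from Vehicle to Person
    g.add((ex.ownedBy, RDF.type, OWL.ObjectProperty))
    g.add((ex.ownedBy, RDFS.domain, ex.Vehicle))
    g.add((ex.ownedBy, RDFS.range, ex.Person))

    # modelName: a datatype property with string values
    g.add((ex.modelName, RDF.type, OWL.DatatypeProperty))
    g.add((ex.modelName, RDFS.range, XSD.string))

    print(g.serialize(format="turtle"))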

Operators
Languages in the OWL family support various operations on classes such as union, intersection and complement. They also allow class enumeration, cardinality, and disjointness.

Public ontologies
Libraries
Biomedical
- OBO Foundry[44][45]
- NCBO BioPortal[46]
- NCI Enterprise Vocabulary Services [47]

Miscellaneous
- SchemaWeb [48]


Standards
- Suggested Upper Merged Ontology[49]
- TDWG[50]

Browsers
The following tools include public ontology browsers: Protégé OWL[51]

Search
Swoogle

Limitations
There is no direct language support for n-ary relationships. For example, modelers may wish to describe the qualities of a relation, to relate more than two individuals, or to relate an individual to a list. This cannot be done within OWL; instead, modelers may need to adopt a pattern which encodes the meaning outside the formal semantics.[52]

References
[1] http://www.w3.org/TR/owl-ref/
[2] http://www.w3.org/TR/owl2-overview/
[3] "OWL 2 Web Ontology Language Document Overview" (http://www.w3.org/TR/owl2-overview/). W3C. 2009-10-27.
[4] W3C working group (http://www.w3.org/2007/OWL)
[5] "Submission Request to W3C: OWL 1.1 Web Ontology Language" (http://www.w3.org/Submission/2006/10/). W3C. 2006-12-19.
[6] http://www.w3.org/2009/10/owl2-pr
[7] Sirin, Evren; Parsia, Bijan; Grau, Bernardo Cuenca; Kalyanpur, Aditya; Katz, Yarden (2007). "Pellet: A practical OWL-DL reasoner" (http://pellet.owldl.com/papers/sirin05pellet.pdf). Web Semantics: Science, Services and Agents on the World Wide Web 5 (2): 51-53. doi:10.1016/j.websem.2007.03.004.
[8] Pellet (http://pellet.owldl.org/)
[9] RacerPro (http://www.racer-systems.com/)
[10] Tsarkov, Dmitry; Horrocks, Ian (2006). "FaCT++ Description Logic Reasoner: System Description" (http://www.cs.ox.ac.uk/ian.horrocks/Publications/download/2006/TsHo06a.pdf). Automated Reasoning. Lecture Notes in Computer Science 4130. pp. 292-297. doi:10.1007/11814771_26. ISBN 978-3-540-37187-8.
[11] FaCT++ (http://code.google.com/p/factplusplus/)
[12] HermiT (http://hermit-reasoner.com/)
[13] Lacy, Lee W. (2005). "Chapter 10". OWL: Representing Information Using the Web Ontology Language. Victoria, BC: Trafford Publishing. ISBN 1-4120-3448-5.
[14] Baader, Franz; Horrocks, Ian; Sattler, Ulrike (2005). "Description Logics as Ontology Languages for the Semantic Web" (http://www.springerlink.com/content/axh20n8l34bc3ecb/). In Hutter, Dieter; Stephan, Werner. Mechanizing Mathematical Reasoning: Essays in Honor of Jörg H. Siekmann on the Occasion of His 60th Birthday (http://www.springerlink.com/content/mf848ceackyx). Heidelberg, DE: Springer Berlin. ISBN 978-3-540-25051-7.
[15] "Feature Synopsis for OWL Lite and OWL: W3C Working Draft 29 July 2002" (http://www.w3.org/TR/2002/WD-owl-features-20020729/). W3C. 2002-07-29.
[16] World Wide Web Consortium (2010-02-06). "W3C Semantic Web Activity" (http://www.w3.org/2001/sw/). Retrieved 18 April 2010.
[17] World Wide Web Consortium (2002-08-23). "Metadata Activity Statement" (http://www.w3.org/Metadata/Activity.html). World Wide Web Consortium. Retrieved 20 April 2010.
[18] World Wide Web Consortium (2002-08-23). "RDF Vocabulary Description Language 1.0: RDF Schema" (http://www.w3.org/Metadata/Activity.html). RDF Vocabulary Description Language 1.0. World Wide Web Consortium. Retrieved 20 April 2010.
[19] Lacy, Lee W. (2005). "Chapter 9 - RDFS". OWL: Representing Information Using the Web Ontology Language. Victoria, BC: Trafford Publishing. ISBN 1-4120-3448-5.



[20] "Web-Ontology (WebOnt) Working Group (Closed)" (http:/ / www. w3. org/ 2001/ sw/ WebOnt/ #L151). W3C. . [21] Patel-Schneider, Peter F.; Horrocks, Ian (2006-12-19). "OWL 1.1 Web Ontology Language" (http:/ / www. w3. org/ Submission/ 2006/ SUBM-owl11-overview-20061219/ ). World Wide Web Consortium. . Retrieved 26 April 2010. [22] Grau, Bernardo Cuenca; Horrocks, Ian; Motik, Boris; Parsia, Bijan; Patel-Schneider, Peter F.; Sattler, Ulrike (2008). "OWL 2: The next step for OWL" (http:/ / www. cs. ox. ac. uk/ boris. motik/ pubs/ ghmppss08next-steps. pdf). Web Semantics: Science, Services and Agents on the World Wide Web 6 (4): 309322. doi:10.1016/j.websem.2008.05.001. . [23] Herman, Ivan. "Why OWL and not WOL?" (http:/ / www. w3. org/ People/ Ivan/ CorePresentations/ RDFTutorial/ Slides. html#(114)). Tutorial on Semantic Web Technologies. World Wide Web Consortium. . Retrieved 18 April 2010. [24] Wang, Taowei David; Parsia, Bijan; Hendler, James (2006). "A Survey of the Web Ontology Landscape". The Semantic Web - ISWC 2006. Lecture Notes in Computer Science. 4273. pp.682. doi:10.1007/11926078_49. ISBN978-3-540-49029-6. [25] Gruber, Tom (1993); "A Translation Approach to Portable Ontology Specifications" (http:/ / tomgruber. org/ writing/ ontolingua-kaj-1993. pdf), in Knowledge Acquisition, 5: 199-199 [26] Patel-Schneider, Peter F.; Horrocks, Ian; Patrick J., Hayes (2004-02-10). "OWL Web Ontology Language Semantics and Abstract Syntax" (http:/ / www. w3. org/ TR/ 2004/ REC-owl-semantics-20040210/ syntax. html). World Wide Web Consortium. . Retrieved 18 April 2010. [27] Bechhofer, Sean; Patel-Schneider, Peter F.; Turi, Daniele (2003-12-10). "OWL Web Ontology Language Concrete Abstract Syntax" (http:/ / owl. man. ac. uk/ 2003/ concrete/ 20031210/ ). University of Manchester. . Retrieved 18 April 2010. [28] Motik, Boris; Patel-Schneider, Peter F.; Parsia, Bijan (2009-10-27). "OWL 2 Web Ontology Language Structural Specification and Functional-Style Syntax" (http:/ / www. w3. org/ TR/ 2009/ REC-owl2-syntax-20091027/ ). OWL 2 Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [29] "application/rdf+xml Media Type Registration" (http:/ / tools. ietf. org/ html/ rfc3870). IETF. 2004-09. pp.2. . Retrieved 2011-01-08. [30] http:/ / www. w3. org/ TR/ owl2-xml-serialization/ [31] http:/ / www. w3. org/ TR/ owl-ref/ #MIMEType [32] Patel-Schneider, Peter F.; Motik, Boris (2009-10-27). "OWL 2 Web Ontology Language Mapping to RDF Graphs" (http:/ / www. w3. org/ TR/ 2009/ REC-owl2-mapping-to-rdf-20091027/ ). OWL 2 Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [33] Motik, Boris; Parsia, Bijan; Patel-Schneider, Peter F. (2009-10-27). "OWL 2 Web Ontology Language XML Serialization" (http:/ / www. w3. org/ TR/ 2009/ REC-owl2-xml-serialization-20091027/ ). OWL 2 Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [34] Horridge, Matthew; Patel-Schneider, Peter F. (2009-10-27). "OWL 2 Web Ontology Language Manchester Syntax" (http:/ / www. w3. org/ TR/ owl2-manchester-syntax/ ). W3C OWL 2 Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [35] Hitzler, Pascal; Krtzsch, Markus; Parsia, Bijan; Patel-Schneider, Peter F.; Rudolph, Sebastian (2009-10-27). "OWL 2 Web Ontology Language Primer" (http:/ / www. w3. org/ TR/ 2009/ REC-owl2-primer-20091027/ ). OWL 2 Web Ontology Language. World Wide Wed Consortium. . Retrieved 2010-04-26. [36] Brachman, Ronald J. 
(1983); What ISA is and isn't: An analysis of taxonomic links in semantic networks, IEEE Computer, vol. 16, no. 10, pp. 30-36 [37] Horrocks, Ian; Patel-Schneider, Peter F.. "Reducing OWL Entailment to Description Logic Satisfiability" (http:/ / www. cs. man. ac. uk/ ~horrocks/ Publications/ download/ 2003/ HoPa03c. pdf) (PDF). . [38] Hitzler, Pascal; Krtzsch, Markus; Rudolph, Sebastian (2009-08-25). Foundations of Semantic Web Technologies (http:/ / www. semantic-web-book. org). CRCPress. ISBN1-4200-9050-X. . [39] McGuinness, Deborah; van Harmelen, Frank (2004-02-10). "OWL Web Ontology Language Overview" (http:/ / www. w3. org/ TR/ 2004/ REC-owl-features-20040210/ ). W3C Recommendation for OWL, the Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [40] Hayes, Patrick (2004-02-10). "RDF Semantics" (http:/ / www. w3. org/ TR/ 2004/ REC-rdf-mt-20040210/ ). Resource Description Framework. World Wide Web Consortium. . Retrieved 18 April 2010. [41] Patel-Schneider, Peter F.; Hayes, Patrick; Horrocks, Ian (2004-02-10). "OWL Web Ontology Language Semantics and Abstract Syntax Section 5. RDF-Compatible Model-Theoretic Semantics" (http:/ / www. w3. org/ TR/ owl-semantics/ rdfs. html). W3C Recommendation for OWL, the Web Ontology Language. World Wide Web Consortium. . Retrieved 18 April 2010. [42] Mazzocchi, Stefano (2005-06-16). "Closed World vs. Open World: the First Semantic Web Battle" (http:/ / www. betaversion. org/ ~stefano/ linotype/ news/ 91/ ). . Retrieved 27 April 2010. [43] Lacy, Lee W. (2005). "Chapter 12". OWL: Representing Information Using the Web Ontology Language. Victoria, BC: Trafford Publishing. ISBN1-4120-3448-5. [44] OBO Foundry (http:/ / obofoundry. org) [45] OBO Download Matrix (http:/ / www. berkeleybop. org/ ontologies/ ) [46] NCBO BioPortal (http:/ / www. bioontology. org/ ncbo/ faces/ pages/ ontology_list. xhtml) [47] http:/ / www. cancer. gov/ cancertopics/ terminologyresources [48] http:/ / www. schemaweb. info/ [49] SUMO download (http:/ / www. ontologyportal. org/ translations/ SUMO. owl. txt) [50] TDWG LSID Vocabularies (http:/ / rs. tdwg. org/ ontology/ voc/ ) [51] Protg web site (http:/ / protege. stanford. edu)




[52] Noy, Natasha; Rector, Alan (2006-04-12). "Defining N-ary Relations on the Semantic Web" (http://www.w3.org/TR/swbp-n-aryRelations/). World Wide Web Consortium. Retrieved 17 April 2010.


External links
Horrocks, Ian (http://www.cs.ox.ac.uk/people/ian.horrocks/Seminars/) (2010); SemTech 2010 (http://semtech2010.semanticuniverse.com/) tutorial part 1 (http://www.comlab.ox.ac.uk/people/ian.horrocks/Seminars/download/Horrocks_Ian_pt1.pdf) and part 2 (http://www.comlab.ox.ac.uk/people/ian.horrocks/Seminars/download/Horrocks_Ian_pt2.pdf) on Description Logics and OWL
ESWC09 Tutorial (http://www.semantic-web-book.org/page/ESWC09_Tutorial), including an introduction to OWL 2
Visual OWL (http://www.visualmodeling.com/VisualOWL.htm): Visual Modeling Forum page dedicated to graphic notations for OWL
Tutorial on OWL (http://www.cs.man.ac.uk/~horrocks/ISWC2003/Tutorial/) at the University of Manchester Computer Science Department (http://www.cs.man.ac.uk/)
Introduction to Description Logics DL course (http://www.inf.unibz.it/~franconi/dl/course/) by Enrico Franconi, Faculty of Computer Science, Free University of Bolzano, Italy
Cooperative Ontologies (CO-ODE) web site (http://www.co-ode.org/), which includes OWL tutorial materials and software
UML2OWL (http://diplom.ooyoo.de/): XSLT scripts to transform UML class diagrams into valid OWL DL ontologies / modelling OWL DL ontologies with UML
OWL API (http://owlapi.sourceforge.net/): API for using OWL 2, at SourceForge
ROWLEX Toolkit (http://rowlex.nc3a.nato.int/): NATO C3 Agency Semantic Interoperability Relaxed OWL Experience Toolkit for .NET
ViziQuer (http://viziquer.lumii.lv): a tool that allows browsing a SPARQL endpoint ontology and constructing SPARQL queries


Enterprise architecture
Enterprise architecture (EA) is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise's future state and enable its evolution.[1] Practitioners of EA call themselves enterprise architects. An enterprise architect is a person responsible for performing this complex analysis of business structure and processes and is often called upon to draw conclusions from the information collected. By producing this understanding, architects are attempting to address the goals of Enterprise Architecture: Effectiveness, Efficiency, Agility, and Durability.[2]

Definition
Enterprise architecture is an ongoing business function that helps an 'enterprise' figure out how best to execute the strategies that drive its development.

The MIT Center for Information Systems Research (MIT CISR) defines enterprise architecture in terms of the specific aspects of a business that are under examination: "Enterprise architecture is the organizing logic for business processes and IT infrastructure reflecting the integration and standardization requirements of the company's operating model. The operating model is the desired state of business process integration and business process standardization for delivering goods and services to customers."[3]

The United States Government classifies enterprise architecture as an Information Technology function, and defines the term not as the process of examining the enterprise, but rather as the documented results of that examination. Specifically, US Code Title 44, Chapter 36, defines it as a 'strategic information base' that defines the mission of an agency and describes the technology and information needed to perform that mission, along with descriptions of how the architecture of the organization should be changed in order to respond to changes in the mission.[4]

Scope
The term enterprise is used because it is generally applicable in many circumstances, including:
- public or private sector organizations
- an entire business or corporation
- a part of a larger enterprise (such as a business unit)
- a conglomerate of several organizations, such as a joint venture or partnership
- a multiply outsourced business operation
- many collaborating public and/or private organizations in multiple countries

The term enterprise includes the whole complex, socio-technical system,[5] including:
- people
- information
- technology
- business (e.g. operations)

Defining the boundary or scope of the enterprise to be described is an important first step in creating the enterprise architecture. Enterprise, as used in enterprise architecture, generally means more than the information systems employed by an organization.[6] A pragmatic enterprise architecture provides a context and a scope: the context encompasses the people, organizations, systems and technology out of scope that have relationships with the organizations, systems and technology in scope. In practice, the architect is responsible for the articulation of the scope within the context, while engineers are responsible for the details of the scope (just as in the building practice). The architect remains responsible for the work of the engineers, and of the implementing contractors thereafter.


Developing an Enterprise Level Architectural Description


Paramount to the enterprise architecture is the identification of the sponsor, his/her mission, vision and strategy, and the governance framework that defines all roles, responsibilities and relationships involved in the anticipated transition. As the purpose of architecture is "INSIGHT, TO DECIDE, FOR ALL STAKEHOLDERS", enterprise architects work very closely with the enterprise sponsor and key stakeholders, internal and external to the enterprise. The architect understands the enterprise mission, vision and strategy and the sponsor's ideas about the approach. The architect articulates the existing enterprise infrastructure value-chain: market, business, systems and technology.

Architects present and discuss the technology, systems, business and market options to fulfill the enterprise mission. Insight is improved by using the 'solution architecture', which is, relative to the decisions ahead, a specific blend of technology, systems, business and market options. Together with the sponsor and the main stakeholders, they make informed choices about the options. For large transitions, architectural decisions are supported by proofs-of-concept and/or business pilots.

Enterprise architects use various methods and tools to capture the structure and dynamics of an enterprise. In doing so, they produce taxonomies, diagrams, documents and models, together called artifacts. These artifacts describe the logical organization of business functions, business capabilities, business processes, people, information resources, business systems, software applications, computing capabilities, information exchange and communications infrastructure within the enterprise.

A collection of these artifacts, sufficiently complete to describe the enterprise in useful ways, is considered by EA practitioners an 'enterprise' level architectural description, or enterprise architecture, for short. The UK National Computing Centre EA best practice guidance[7] states that normally an EA takes the form of a comprehensive set of cohesive models that describe the structure and functions of an enterprise, and that the individual models in an EA are arranged in a logical manner that provides an ever-increasing level of detail about the enterprise: its objectives and goals; its processes and organization; its systems and data; the technology used; and any other relevant spheres of interest. This is the definition of enterprise architecture implicit in several EA frameworks, including the popular TOGAF architectural framework.

An enterprise architecture framework bundles tools, techniques, artifact descriptions, process models, reference models and guidance used by architects in the production of enterprise-specific architectural descriptions. Several enterprise architecture frameworks break down the practice of enterprise architecture into a number of practice areas or domains; see the related articles on enterprise architecture frameworks and domains for further information. In 1992, Steven Spewak described a process for creating an enterprise architecture that is widely used in educational courses.[8]


Using an enterprise architecture


Describing the architecture of an enterprise aims primarily to improve the effectiveness or efficiency of the business itself. This includes innovations in the structure of an organization, the centralization or federation of business processes, the quality and timeliness of business information, or ensuring that money spent on information technology (IT) can be justified.[2] One method of using this information to improve the functioning of a business, as described in the TOGAF architectural framework, involves developing an "architectural vision": a description of the business that represents a "target" or "future state" goal. Once this vision is well understood, a set of intermediate steps are created that illustrate the process of changing from the present situation to the target. These intermediate steps are called "transitional architectures" by TOGAF.[9] Similar methods have been described in other enterprise architecture frameworks.

Benefits of enterprise architecture


As new technologies arise and are implemented, the benefits of enterprise architecture continue to grow. Enterprise architecture defines what an organization does; who performs individual functions within the organization, and within the market value chain; how the organizational functions are performed; and how information is used and stored. IT costs are reduced and the responsiveness of IT systems is improved. To be successful, however, continual development and periodic maintenance of the enterprise architecture are essential. Building an enterprise architecture can take considerable time, so proper planning is essential, including phasing the project in gradually prior to implementation. If the enterprise architecture is not kept up to date, the aforementioned benefits erode.

The growing use of enterprise architecture


Documenting the architecture of enterprises is done within the U.S. Federal Government[10] in the context of the Capital Planning and Investment Control (CPIC) process. The Federal Enterprise Architecture (FEA) reference models guides federal agencies in the development of their architectures.[11] Companies such as Independence Blue Cross, Intel, Volkswagen AG[12] and InterContinental Hotels Group[13] also use enterprise architecture to improve their business architectures as well as to improve business performance and productivity.

Relationship to other disciplines


Enterprise architecture is a key component of the information technology governance process in many organizations, which have implemented a formal enterprise architecture process as part of their IT management strategy. While this may imply that enterprise architecture is closely tied to IT, it should be viewed in the broader context of business optimization, in that it addresses business architecture, performance management and process architecture as well as more technical subjects. Depending on the organization, enterprise architecture teams may also be responsible for some aspects of performance engineering, IT portfolio management and metadata management. Recently, proponents such as Gartner and Forrester have stressed the important relationship of enterprise architecture with emerging holistic design practices such as Design Thinking and User Experience Design.[14] Analyst firm Real Story Group went further, suggesting that enterprise architecture and the emerging concept of the Digital Workplace were "two sides to the same coin."[15] A figure in the 2006 FEA Practice Guidance of the US OMB illustrates the relationship between enterprise architecture and segment (BPR) or solution architectures.


Published examples
It is uncommon for a commercial organization to publish rich detail from its enterprise architecture descriptions. Doing so can provide competitors information on weaknesses and organizational flaws that could hinder the company's market position. However, many government agencies around the world have begun to publish the architectural descriptions that they have developed. Good examples include the US Department of the Interior and the US Department of Defense Business Enterprise Architecture [16], such as the 2008 BEAv5.0 version.

Academic qualifications
Enterprise architecture was included in the Association for Computing Machinery (ACM) and Association for Information Systems (AIS) Curriculum for Information Systems as one of the six core courses.[17] A new MSc in Enterprise Architecture was introduced at the University of East London [18], in collaboration with Iasa [19], to start in February 2013.

There are several universities that offer enterprise architecture as a fourth-year level course or as part of a master's syllabus. California State University offers a post-baccalaureate certificate in enterprise architecture, in conjunction with the FEAC Institute. National University offers a Master of Science in Engineering Management with a specialization in Enterprise Architecture, again in conjunction with the FEAC Institute. The Center for Enterprise Architecture [20] at Penn State University is another institution that offers EA courses. Enterprise architecture is also offered within the Masters program in Computer Science at The University of Chicago.

In 2010, researchers at the Meraka Institute, Council for Scientific and Industrial Research, in South Africa organized a workshop and invited staff from computing departments in South African higher education institutions. The purpose was to investigate the current status of EA offerings in South Africa. A report was compiled and is available for download at the Meraka Institute.[21]


References
[1] Definition of Enterprise Architecture, Gartner (http://www.gartner.com/technology/it-glossary/enterprise-architecture.jsp)
[2] Pragmatic Enterprise Architecture Foundation, PEAF Foundation - Vision (http://www.pragmaticea.com/display-doc.asp?DocName=peaf-foundation-vision)
[3] MIT Center for Information Systems Research, Peter Weill, Director, as presented at the Sixth e-Business Conference, Barcelona, Spain, 27 March 2007 (http://www.iese.edu/en/files/6_29338.pdf)
[4] U.S.C. Title 44, Chap. 36, 3601 (http://us-code.vlex.com/vid/sec-definitions-19256361)
[5] Giachetti, R.E., Design of Enterprise Systems: Theory, Architecture, and Methods, CRC Press, Boca Raton, FL, 2010.
[6] What is enterprise architecture? (http://enterprisearchitecture.nih.gov/About/What/)
[7] Jarvis, R., Enterprise Architecture: Understanding the Bigger Picture - A Best Practice Guide for Decision Makers in IT, The UK National Computing Centre, Manchester, UK.
[8] Spewak, Steven H. and Hill, Steven C. (1992). Enterprise Architecture Planning - Developing a Blueprint for Data, Applications and Technology. John Wiley.
[9] The Open Group, TOGAF standard (http://www.opengroup.org/togaf/)
[10] Federal Government agency success stories (2010), whitehouse.gov (http://www.whitehouse.gov/omb/E-Gov/ea_success.aspx)
[11] FEA Practice Guidance, Federal Enterprise Architecture Program Management Office, OMB (2007), whitehouse.gov (http://www.whitehouse.gov/sites/default/files/omb/assets/fea_docs/FEA_Practice_Guidance_Nov_2007.pdf)
[12] "Volkswagen of America: Managing IT Priorities", Harvard Business Review, October 5, 2005, Robert D. Austin, Warren Ritchie, Greggory Garrett
[13] ihg.com (http://www.ihg.com)
[14] Leslie Owens, Forrester Blogs - Who Owns Information Architecture? All Of Us. (2010), blogs.forrester.com (http://blogs.forrester.com/information_management/2010/02/who-owns-information-architecture-all-of-us.html)
[15] Tony Byrne, Real Story Group Blog - Digital workplace and enterprise architecture: two sides to same coin (2012) (http://www.realstorygroup.com/Blog/2311-Digital-workplace-and-enterprise-architecture-two-sides-to-same-coin)
[16] DoD BEA (http://dcmo.defense.gov/products-and-services/business-enterprise-architecture/)
[17] ACM and AIS Curriculum for Information Systems, acm.org (http://www.acm.org/education/curricula/IS 2010 ACM final.pdf)
[18] MSc in Enterprise Architecture at the University of East London (http://www.uel.ac.uk/postgraduate/specs/enterprise-arch/)
[19] Iasa Global (http://www.iasaglobal.org/iasa/default.asp)
[20] Center for Enterprise Architecture, Penn State University, ea.ist.psu.edu (http://ea.ist.psu.edu/)
[21] hufee.meraka.org.za (http://hufee.meraka.org.za/Hufeesite/mekes-projects)

External links
Professional Practice Guide for Enterprise Architects (http://caeap.org/IndustryArtifacts.aspx)

University and college programs


University of East London (http://www.uel.ac.uk/postgraduate/specs/enterprise-arch/)
Pennsylvania State University (http://ea.ist.psu.edu)
Carnegie Mellon (http://execed.isri.cmu.edu/elearning/enterprise-architecture/index.html)
National University (http://www.nu.edu/OurPrograms/SchoolOfEngineeringAndTechnology/AppliedEngineering/Programs/720-810.html)
Kent State University (http://www.kent.edu/dsci/enterprisearchitecture/index.cfm)
Griffith University (http://www17.griffith.edu.au/cis/p_cat/admission.asp?ProgCode=5493&Type=apply)
Royal Melbourne Institute of Technology (http://www.rmit.edu.au/browse/Our Organisation/Science Engineering and Health/Schools/Computer Science and IT/Programs and Courses/Postgraduate/MC152 Master of Technology (Enterprise Architecture))
Temple University, Fox School of Business (http://community.mis.temple.edu/mis2501sec001s12/)
University of Utrecht, Dept of Information and Computing Sciences (http://www.cs.uu.nl/education/vak.php?vak=INFOEAR)


Segment architecture
Segment architecture is a detailed, formal description of areas within an enterprise, used at the program or portfolio level to organize and align change activity.[1] It defines a simple roadmap for a core mission area, business service, or enterprise service. Segment architecture is driven by business management and delivers products that improve the delivery of services to citizens and agency staff. From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. The primary stakeholders for segment architecture are business owners and managers. Segment architecture is related to EA through three principles:
Structure: segment architecture inherits the framework used by the EA, although it may be extended and specialized to meet the specific needs of a core mission area or common or shared service.
Reuse: segment architecture reuses important assets defined at the enterprise level, including data; common business processes and investments; and applications and technologies.
Alignment: segment architecture aligns with elements defined at the enterprise level, such as business strategies, mandates, standards, and performance measures.[3]

References
[1] TOGAF 9, Section 3.62 (http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_62)

External links
Federal Segment Architecture Methodology (http://www.fsam.gov/)


Solution architecture
Solution architecture (within or outside enterprise architecture) is an architecture domain that aims to address specific problems and requirements, usually through the design of specific information systems or applications. Solution architecture is either:
Documentation describing the structure and behaviour of a solution to a problem, or
A process for describing a solution and the work to deliver it.
The documentation is typically divided into broad views, each known as an architecture domain. Where the solution architect starts and stops work depends on the funding model for the process of solution identification and delivery. For example, an enterprise may employ a solution architect on a feasibility study, or to prepare a solution vision or solution outline for an invitation to tender. A systems integrator may employ a solution architect at bid time, before any implementation project is costed and resourced. Both may employ a solution architect to govern an implementation project, or to play a leading role within it.

Typical outcomes of solution architecture
Solution architects typically produce solution outlines and migration paths that show the evolution of a system from a baseline state to a target state. A solution architect is often, but not always, responsible for design work to ensure that the target applications, in a technical architecture, will meet non-functional requirements. Solution architecture often, but not always, leads to software architecture work[1] and technical architecture work, and often contains elements of those. A solution architecture description (or solution outline) will typically be an abstraction of an end-to-end subsystem,[2] consisting of application software supported by middleware, which together provide:
An IT implementation of a specific business task or process necessary to support a business function, with appropriate non-functional requirements (e.g. integrity, performance, security, recoverability, etc.)
A synchronization mechanism between the subsystem consumers/providers and the associated business task or process, e.g. an end-to-end eCommerce subsystem which allows customers to place orders for goods and services, or an end-to-end supply replenishment subsystem which enables an enterprise to order new stock from its suppliers.

Relationship of solution architecture to enterprise architecture

Generally speaking, an enterprise architect's deliverables are more abstract than a solution architect's deliverables. That is not always the case, however, and the main distinction between enterprise architect and solution architect lies in their different motivations. The solution architect is primarily employed to help and support programme and project managers in the design, planning and direction of implementation projects. The enterprise architect is primarily employed to identify and direct strategic and cross-organisational solution delivery. A solution architect may also report to an enterprise architect, but the strength of that reporting line varies between organisations. The influence of the enterprise architect team on solution architects depends on an organisation's policies and management structure. So the extent to which a solution architect's work realises an enterprise architect's road maps will vary widely in different contexts.


References
[1] Patterns of Enterprise Application Architecture by Martin Fowler.
[2] End-to-end subsystem defined within Patterns for e-business at http://www.ibm.com/developerworks/patterns/library/definitions.html

External links
Solution Architecture Certification and Resources (http://avancier.co.uk/)
Solution Architecture open community (http://www.solutionarchitecture.org/)
Architecture Patterns (EA Reference Architecture)


Service-oriented architecture
In software engineering, a service-oriented architecture (SOA) is a set of principles and methodologies for designing and developing software in the form of interoperable services. These services are well-defined business functionalities that are built as software components (discrete pieces of code and/or data structures) that can be reused for different purposes. SOA design principles are used during the phases of systems development and integration. SOA generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. For example, several disparate departments within a company may develop and deploy SOA services in different implementation languages; their respective clients will benefit from a well-defined interface to access them. XML is often used for interfacing with SOA services, though this is not required. JSON is also becoming increasingly common. SOA defines how to integrate widely disparate applications for a Web-based environment and uses multiple implementation platforms. Rather than defining an API, SOA defines the interface in terms of protocols and functionality. An endpoint is the entry point for such a SOA implementation. Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications. SOA separates functions into distinct units, or services,[1] which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services.[2]
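To make the notion of a reusable, well-defined service concrete, the following minimal Java sketch separates a service's contract from its implementation; the interface, class, and conversion rate are invented for illustration and do not come from any particular SOA product.

// A hypothetical service contract: consumers depend only on this interface,
// never on the implementation class behind it.
public interface CurrencyConversionService {
    double convert(String fromCurrency, String toCurrency, double amount);
}

// One possible provider. It could be replaced by a remote web-service proxy
// without any change to consumers, which is the loose coupling SOA aims for.
class FixedRateConversionService implements CurrencyConversionService {
    public double convert(String fromCurrency, String toCurrency, double amount) {
        // Fixed demo rate; a real provider would obtain rates from a rate service.
        double rate = "USD".equals(fromCurrency) && "EUR".equals(toCurrency) ? 0.9 : 1.0;
        return amount * rate;
    }
}

Under this reading, service-orientation is the discipline of publishing such contracts in a platform-neutral form (WSDL documents, REST resources, and so on) rather than as language-specific interfaces.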

Layer interaction in service-oriented architecture

SOA can be seen in a continuum, from older concepts of distributed computing[1][3] and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).[4]

Description
Overview
Services are unassociated, loosely coupled units of functionality that have no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, viewing an online bank statement, or placing an online booking or airline ticket order. Rather than services embedding calls to each other in their source code, they use defined protocols that describe how services pass and parse messages using description metadata.

SOA developers associate individual SOA objects by using orchestration. In the process of orchestration the developer associates software functionality (the services) in a non-hierarchical arrangement using a software tool that contains a complete list of all available services, their characteristics, and the means to build an application utilizing these sources.

Underlying and enabling all of this requires metadata in sufficient detail to describe not only the characteristics of these services, but also the data that drives them. Programmers have made extensive use of XML in SOA to structure data that they wrap in a nearly exhaustive description container. Analogously, the Web Services Description Language (WSDL) typically describes the services themselves, while the SOAP protocol describes the communications protocols. Whether these description languages are the best possible for the job, and whether they will become or remain the favorites in the future, remain open questions. As of 2008 SOA depends on data and services that are described by metadata that should meet the following two criteria:
1. The metadata should come in a form that software systems can use to configure themselves dynamically by discovery and incorporation of defined services, and also to maintain coherence and integrity. For example, metadata could be used by other applications, like a catalogue, to perform autodiscovery of services without modifying the functional contract of a service.
2. The metadata should come in a form that system designers can understand and manage with a reasonable expenditure of cost and effort.

SOA aims to allow users to string together fairly large chunks of functionality to form ad hoc applications built almost entirely from existing software services. The larger the chunks, the fewer the interface points required to implement any given set of functionality; however, very large chunks of functionality may not prove sufficiently granular for easy reuse. Each interface brings with it some amount of processing overhead, so there is a performance consideration in choosing the granularity of services. The great promise of SOA suggests that the marginal cost of creating the nth application is low, as all of the software required already exists to satisfy the requirements of other applications. Ideally, one requires only orchestration to produce a new application.

For this to operate, no interactions must exist between the specified chunks or within the chunks themselves. Instead, humans specify the interaction of services (all of them unassociated peers) in a relatively ad hoc way with the intent driven by newly emergent requirements. Thus arises the need for services as much larger units of functionality than traditional functions or classes, lest the sheer complexity of thousands of such granular objects overwhelm the application designer. Programmers develop the services themselves using traditional languages like Java, C, C++, C#, Visual Basic, COBOL, or PHP. Services may also be wrappers for existing legacy systems, allowing re-facing of old systems.

SOA services feature loose coupling, in contrast to the functions that a linker binds together to form an executable, a dynamically linked library, or an assembly. SOA services also run in "safe" wrappers (such as Java or .NET) and in other programming languages that manage memory allocation and reclamation, allow ad hoc and late binding, and provide some degree of indeterminate data typing.

As of 2008, increasing numbers of third-party software companies offer software services for a fee. In the future, SOA systems may consist of such third-party services combined with others created in-house. This has the potential to spread costs over many customers and customer uses, and promotes standardization both in and across industries.
In particular, the travel industry now has a well-defined and documented set of both services and data, sufficient to allow any reasonably competent software engineer to create travel-agency software using entirely off-the-shelf software services.[5] Other industries, such as the finance industry, have also started making significant progress in this direction. SOA as an architecture relies on service-orientation as its fundamental design principle. If a service presents a simple interface that abstracts away its underlying complexity, users can access independent services without knowledge of the service's platform implementation.[6]
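As a hedged sketch of how a platform-neutral interface can be derived from code, the JAX-WS API that shipped with Java SE 6 through 10 (and is otherwise available as a separate dependency) can publish an annotated class and generate its WSDL contract automatically; the QuoteService class, its URL, and the returned value below are all invented for this example.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// The WSDL contract is generated from the annotations, so consumers on any
// platform see only the described interface, not this Java implementation.
@WebService
public class QuoteService {
    @WebMethod
    public double getQuote(String symbol) {
        return 42.0; // demo value; a real provider would consult a pricing system
    }

    public static void main(String[] args) {
        // Publishes the service; its WSDL becomes available to any consumer
        // platform at http://localhost:8080/quote?wsdl
        Endpoint.publish("http://localhost:8080/quote", new QuoteService());
    }
}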


Requirements
In order to efficiently use a SOA, the architecture must meet the following requirements:
Interoperability among different systems and programming languages that provides the basis for integration between applications on different platforms through a communication protocol. One example of such communication depends on the concept of messages. Using messages across defined message channels decreases the complexity of the end application, thereby allowing the developer of the application to focus on true application functionality instead of the intricate needs of a communication protocol (see the sketch after this list).
Desire to create a federation of resources. Establish and maintain data flow to a federated database system. This allows new functionality developed to reference a common business format for each data element.
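A minimal sketch of the message-channel idea, using a Java BlockingQueue as a stand-in for a real message channel (a production system would use JMS or a comparable messaging protocol); the names and message format are invented.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        // The channel decouples sender and receiver: neither needs to know
        // how the other is implemented, only the agreed message format.
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                channel.put("{\"order\":\"1234\",\"qty\":2}"); // agreed format
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println("received: " + channel.take());
        producer.join();
    }
}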

Principles
The following principles were proposed by Yvonne Balzer to guide development, maintenance, and usage of the SOA:[7]
Reuse, granularity, modularity, composability, componentization and interoperability.
Standards-compliance (both common and industry-specific).
Services identification and categorization, provisioning and delivery, and monitoring and tracking.
The Microsoft Windows Communication Foundation team proposed the following principles for service-oriented design:[8]
Boundaries are explicit.
Services are autonomous.
Services share schema and contract, not class.
Service compatibility is based on policy.

The first published research of service orientation from an industry perspective was provided by Thomas Erl of SOA Systems Inc., who defined eight specific service-orientation principles [9] common to all primary SOA platforms. These principles were published in Service-Oriented Architecture: Concepts, Technology, and Design, on the www.soaprinciples.com research site, and in the September 2005 edition of the Web Services Journal (see Service-orientation):
Standardized service contract: Services adhere to a communications agreement, as defined collectively by one or more service-description documents.
Service loose coupling: Services maintain a relationship that minimizes dependencies and only requires that they maintain an awareness of each other.
Service abstraction: Beyond descriptions in the service contract, services hide logic from the outside world.
Service reusability: Logic is divided into services with the intention of promoting reuse.
Service autonomy: Services have control over the logic they encapsulate.
Service statelessness: Services minimize resource consumption by deferring the management of state information when necessary.
Service discoverability: Services are supplemented with communicative metadata by which they can be effectively discovered and interpreted.
Service composability: Services are effective composition participants, regardless of the size and complexity of the composition.
Some authors also include the following principles:
Service granularity: A design consideration to provide optimal scope and the right granular level of the business functionality in a service operation.
Service normalization: Services are decomposed and/or consolidated to a level of normal form to minimize redundancy. In some cases, services are denormalized for specific purposes, such as performance optimization, access, and aggregation.[10]
Service optimization: All else being equal, high-quality services are generally preferable to low-quality ones.
Service relevance: Functionality is presented at a granularity recognized by the user as a meaningful service.
Service encapsulation: Many services are consolidated for use under the SOA. Often such services were not planned to be under SOA.
Service location transparency: This refers to the ability of a service consumer to invoke a service regardless of its actual location in the network. This also recognizes the discoverability property (one of the core principles of SOA) and the right of a consumer to access the service. Often, the idea of service virtualization also relates to location transparency: the consumer simply calls a logical service while a suitable SOA-enabling runtime infrastructure component, commonly a service bus, maps this logical service call to a physical service.


The following references provide additional considerations for defining a SOA implementation:
SOA reference architecture [11] provides a working design of an enterprise-wide SOA implementation with detailed architecture diagrams, component descriptions, detailed requirements, design patterns, opinions about standards, patterns on regulation compliance, standards templates, etc.[12]
Life-cycle management: SOA Practitioners Guide Part 3: Introduction to Services Lifecycle [13] introduces the services lifecycle and provides a detailed process for services management through the service lifecycle, from inception to retirement or repurposing of the services. It also contains an appendix that includes organization and governance best practices, templates, comments on key SOA standards, and recommended links for more information.
SOA design principles [14] provides more information about SOA realization using service design principles.
In addition, one might take the following factors into account when defining a SOA implementation:
Efficient use of system resources
Service maturity and performance
EAI (enterprise application integration)

Types
Four common SOA types have emerged in order to improve physical design.[15] Documenting architecture types encourages services that are more standardized, interoperable and composable. This also assists in understanding interdependencies among services.

Service architecture
This is the physical design of an individual service that encompasses all the resources used by a service. This would normally include databases, software components, legacy systems, identity stores,[16] XML schemas and any backing stores, e.g. shared directories. It is also beneficial to include any service agents[17] employed by the service, as any change in these service agents would affect the message processing capabilities of the service. The standardized service contract design principle keeps service contracts independent from their implementation. The service contract needs to be documented to formalize the processing resources required by the individual service capabilities. Although it is beneficial to document details about the service architecture, the service abstraction design principle dictates that any internal details about the service are invisible to its consumers so that they do not develop any unstated couplings. The service architecture serves as a point of reference for evolving the service or gauging the impact of any change in the service.

Service composition architecture
One of the core characteristics of services developed using the service-orientation design paradigm is that they are composition-centric. Services with this characteristic can potentially address novel requirements by recomposing the same services in different configurations. Service composition architecture is itself a composition of the individual architectures of the participating services. In light of the service abstraction principle, this type of architecture only documents the service contract and any published service-level agreement (SLA); internal details of each service are not included. If a service composition is part of another (parent) composition, the parent composition can also be referenced in the child service composition. The design of service composition also includes any alternate paths, such as error conditions, which may introduce new services into the current service composition.

Service inventory architecture
A service inventory is composed of services that automate business processes. It is important to account for the combined processing requirements of all services within the service inventory. Documenting the requirements of services, independently from the business processes that they automate, helps identify processing bottlenecks. The service inventory architecture is documented from the service inventory blueprint, so that service candidates[18] can be redesigned before their implementation.

Service-oriented enterprise architecture
This umbrella architecture incorporates service, composition and inventory architectures, plus any enterprise-wide technological resources accessed by these architectures, e.g. an ERP system. This can further be supplemented by including enterprise-wide standards that apply to the aforementioned architecture types. Any segments of the enterprise that are not service-oriented can also be documented in order to consider transformation requirements if a service needs to communicate with the business processes automated by such segments.


Web services approach


Web services can implement a service-oriented architecture. Web services make functional building blocks accessible over standard Internet protocols independent of platforms and programming languages. These services can represent either new applications or just wrappers around existing legacy systems to make them network-enabled. Each SOA building block can play one or both of two roles:
1. Service provider: The service provider creates a web service and possibly publishes its interface and access information to the service registry. Each provider must decide which services to expose, how to make trade-offs between security and easy availability, how to price the services, or (if no charges apply) how/whether to exploit them for other value. The provider also has to decide what category the service should be listed in for a given broker service and what sort of trading-partner agreements are required to use the service. The broker registers what services are available within it, and lists all the potential service recipients. The implementer of the broker then decides the scope of the broker. Public brokers are available through the Internet, while private brokers are only accessible to a limited audience, for example, users of a company intranet. Furthermore, the amount of information on offer has to be decided. Some brokers specialize in many listings. Others offer high levels of trust in the listed services. Some cover a broad landscape of services and others focus within an industry. Some brokers catalog other brokers. Depending on the business model, brokers can attempt to maximize look-up requests, number of listings or accuracy of the listings. The Universal Description Discovery and Integration (UDDI) specification defines a way to publish and discover information about web services. Other service broker technologies include (for example) ebXML (Electronic Business using eXtensible Markup Language) and those based on the ISO/IEC 11179 Metadata Registry (MDR) standard.

2. Service consumer: The service consumer or web service client locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its web services. Whichever services the consumers need, they look them up in the broker's registry, bind to the corresponding service, and then use them. They can access multiple services if the provider offers multiple services.
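The publish/find/bind cycle described above can be illustrated with a toy in-memory broker; a production registry would be UDDI or similar, and every name in this Java sketch is invented for the illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy service broker: providers publish under a name, consumers look up and bind.
public class ToyBroker {
    private final Map<String, Function<String, String>> registry = new HashMap<>();

    public void publish(String name, Function<String, String> service) {
        registry.put(name, service); // provider registers its service
    }

    public Function<String, String> find(String name) {
        return registry.get(name);   // consumer discovers the service
    }

    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        // Provider side: publish a greeting service.
        broker.publish("greeting", who -> "Hello, " + who);
        // Consumer side: find and invoke without knowing the implementation.
        System.out.println(broker.find("greeting").apply("world"));
    }
}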


Web-service protocols
Implementors commonly build SOAs using web services standards (for example, SOAP) that have gained broad industry acceptance after recommendation of Version 1.2 from the W3C[19] (World Wide Web Consortium) in 2003. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, implement SOA using any service-based technology, such as Jini, CORBA or REST.
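As one hedged example of the REST option, a consumer can invoke a RESTful service with nothing more than Java 11's standard java.net.http client; the endpoint URL and resource below are hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestConsumer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint; a REST-style SOA exposes resources as URIs.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/orders/1234"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The consumer depends only on the agreed resource format,
        // not on how the provider is implemented.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}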

Other SOA concepts


Architectures can operate independently of specific technologies.[3] Designers can implement SOA using a wide range of technologies, including:
SOAP, RPC
REST
DCOM
CORBA
Web services
DDS
Java RMI
WCF (Microsoft's implementation of web services now forms a part of WCF)

Implementations can use one or more of these protocols and, for example, might use a file-system mechanism to communicate data conforming to a defined interface specification between processes conforming to the SOA concept. The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without a service having foreknowledge of the calling application, and without the application having or needing knowledge of how the service actually performs its tasks. Many implementers of SOA have begun to adopt an evolution of SOA concepts into a more advanced architecture called SOA 2.0.

Elements of SOA, by Dirk Krafzig, Karl Banke, and Dirk Slama[20]

SOA enables the development of applications that are built by combining loosely coupled and interoperable services.[21] These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is independent of the underlying platform and programming language. The interface definition hides the implementation of the language-specific service. SOA-based systems can therefore function independently of development technologies and platforms (such as Java, .NET, etc.). Services written in C# running on .NET platforms and services written in Java running on Java EE platforms, for example, can both be consumed by a common composite application (or client). Applications running on either platform can also consume services running on the other as web services, which facilitates reuse. Managed environments can also wrap COBOL legacy systems and present them as software services. This has extended the useful life of many core legacy systems indefinitely, no matter what language they originally used. SOA can support integration and consolidation activities within complex enterprise systems, but SOA does not specify or provide a methodology or framework for documenting capabilities or services. High-level languages such as BPEL and specifications such as WS-CDL and WS-Coordination extend the service concept by providing a method of defining and supporting orchestration of fine-grained services into more coarse-grained business services, which architects can in turn incorporate into workflows and business processes implemented in composite applications or portals.
SOA meta-model, The Linthicum Group, 2007

Service-Oriented Modeling Framework (SOMF) Version 2.0

As of 2008 researchers have started investigating the use of service component architecture (SCA) to implement SOA. Service-oriented modeling [1] is a SOA framework that identifies the various disciplines that guide SOA practitioners to conceptualize, analyze, design, and architect their service-oriented assets. The Service-Oriented Modeling Framework (SOMF) offers a modeling language and a work structure or "map" depicting the various components that contribute to a successful service-oriented modeling approach. It illustrates the major elements that identify the "what to do" aspects of a service development scheme. The model enables practitioners to craft a project plan and to identify the milestones of a service-oriented initiative. SOMF also provides a common modeling notation to address alignment between business and IT organizations.


Definitions
Commentators have provided multiple definitions of SOA. The OASIS group[22] and the Open Group[23] have both created formal definitions. OASIS defines SOA as the following: A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations. According to Thomas Erl: SOA represents an open, agile, extensible, federated, composable architecture comprised of autonomous, QoS-capable, vendor diverse, interoperable, discoverable, and potentially reusable services, implemented as Web services. SOA can establish an abstraction of business logic and technology, resulting in a loose coupling between these domains. SOA is an evolution of past platforms, preserving successful characteristics of traditional architectures, and bringing with it distinct principles that foster service-orientation in support of a service-oriented enterprise. SOA is ideally standardized throughout an enterprise, but achieving this state requires a planned transition and the support of a still evolving technology set.[3]

Programmatic service contract


A service contract may have the following components:[24]

Header
Name: Name of the service. This should indicate in general terms what the service does, not just its definition.
Version: The version of this service contract.
Owner: The person/team in charge of the service.
Responsibility assignment (RACI):
Responsible: The role/person/team responsible for the deliverables of this contract/service. All versions of the contract.
Accountable: Ultimate decision-maker in terms of this contract/service.
Consulted: Whom one must consult before action is taken on this contract/service. This is two-way communication. These people have an impact on the decision or the execution of that decision.
Informed: Who must be informed that a decision or action is being taken. This is one-way communication. These people are impacted by the decision or execution of that decision, but have no control over the action.
Type: The type of the service, to help distinguish the layer in which it resides. Different implementations will have different service types. Examples of service types include: Presentation, Process, Business, Data, Integration.

Functional
Functional requirement (from requirements document): Indicates the functionality in specific bulleted items: what exactly this service accomplishes. The language should encourage test cases to prove the functionality is accomplished.
Service operations: Methods, actions, etc. Each must be defined in terms of what part of the functionality it provides.
Invocation: Indicates how to invoke the service. This includes the URL, interface, etc. There may be multiple invocation paths for the same service. One may have the same functionality for an internal client and some external clients, each with different invocation means and interfaces. Examples: SOAP, REST, event triggers.

Non-functional
Security constraints: Defines who can execute this service in terms of roles or individual partners etc. and which invocation mechanism they can invoke.
Quality of service: Determines the allowable failure rate.
Transactional: Whether the service is capable of acting as part of a larger transaction and, if so, how that is controlled.
Service level agreement: Determines the amount of latency the service is allowed to have to perform its actions.
Semantics: Dictates or defines the meaning of terms used in the description and interfaces of the service.
Process: Describes the process, if any, of the contracted service.
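Purely as an illustration, the header portion of such a contract can be captured as a machine-readable structure; the Java class and example values below mirror the fields listed above and are not part of any standard.

// A minimal sketch of service-contract header metadata as plain Java.
public class ServiceContractHeader {
    public final String name;    // what the service does, in general terms
    public final String version; // version of this service contract
    public final String owner;   // person/team in charge of the service
    public final String type;    // e.g. Presentation, Process, Business, Data, Integration

    public ServiceContractHeader(String name, String version, String owner, String type) {
        this.name = name;
        this.version = version;
        this.owner = owner;
        this.type = type;
    }

    public static void main(String[] args) {
        ServiceContractHeader header = new ServiceContractHeader(
                "CustomerLookup", "1.2", "CRM Platform Team", "Business");
        System.out.println(header.name + " v" + header.version);
    }
}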


Network management architecture


As of 2008 the principles of SOA are being applied by network managers in their field. Examples of service-oriented network management architectures include TS 188 001 NGN Management OSS Architecture from ETSI, and the M.3060 Principles for the Management of Next Generation Networks recommendation from the ITU-T. Tools for managing SOA infrastructure include:
HP Software & Solutions
HyPerformix IPS Performance Optimizer
IBM Tivoli Framework
Red Hat JBoss Operations Network
Oracle SOA Management Pack Enterprise Edition (official product page) [25]

Discussion
Benefits
Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions.[26] This style of architecture promotes reuse at the macro (service) level rather than the micro (class) level. It can also simplify interconnection to, and usage of, existing IT (legacy) assets. With SOA, the idea is that an organization can look at a problem holistically. A business has more overall control. Theoretically there would not be a mass of developers using whatever tool sets might please them; rather, they would code to a standard set within the business. They can also develop enterprise-wide SOA that encapsulates a business-oriented infrastructure. SOA has also been likened to a highway system providing efficiency for car drivers: if everyone had a car but there were no highways anywhere, any attempt to get anywhere quickly or efficiently would be limited and disorganized. IBM Vice President of Web Services Michael Liebow says that SOA "builds highways".[27] In some respects, one can regard SOA as an architectural evolution rather than as a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By formally embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly interoperable interfaces.[28]

Some have questioned whether SOA simply revives concepts like modular programming (1970s), event-oriented design (1980s) or interface/component-based design (1990s). SOA promotes the goal of separating users (consumers) from the service implementations. Services can therefore be run on various distributed platforms and be accessed across networks. This can also maximize reuse of services.

SOA realizes its business and IT benefits by utilizing an analysis and design methodology when creating services. This methodology ensures that services remain consistent with the architectural vision and roadmap and that they adhere to principles of service-orientation. Arguments supporting the business and management aspects of SOA are outlined in various publications.[29]

A service comprises a stand-alone unit of functionality available only via a formally defined interface. Services can be some kind of "nano-enterprises" that are easy to produce and improve. Services can also be "mega-corporations" constructed as the coordinated work of subordinate services. Services generally adhere to the following principles of service-orientation:[30]
Abstraction
Autonomy
Composability
Discoverability
Formal contract
Loose coupling
Reusability
Statelessness

A mature rollout of SOA effectively defines the API of an organization. Reasons for treating the implementation of services as separate projects from larger projects include:
1. Separation promotes the concept to the business that services can be delivered quickly and independently from the larger and slower-moving projects common in the organization. The business starts understanding systems and simplified user interfaces calling on services. This advocates agility; that is to say, it fosters business innovations and speeds up time-to-market.[31]
2. Separation promotes the decoupling of services from consuming projects. This encourages good design insofar as the service is designed without knowing who its consumers are.
3. Documentation and test artifacts of the service are not embedded within the detail of the larger project. This is important when the service needs to be reused later.

An indirect benefit of SOA involves dramatically simplified testing. Services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. If an organization possesses appropriately defined test data, then a corresponding stub is built that reacts to the test data when a service is being built. A full set of regression tests, scripts, data, and responses is also captured for the service. The service can be tested as a 'black box' using existing stubs corresponding to the services it calls. Test environments can be constructed where the primitive and out-of-scope services are stubs, while the remainder of the mesh is test deployments of full services. As each interface is fully documented with its own full set of regression test documentation, it becomes simple to identify problems in test services. Testing evolves to merely validate that the test service operates according to its documentation, and finds gaps in documentation and test cases of all services within the environment. Managing the data state of idempotent services is the only complexity.

Examples may prove useful to aid in documenting a service to the level where it becomes useful. The documentation of some APIs within the Java Community Process provides good examples. As these are exhaustive, staff would typically use only important subsets. The 'ossjsa.pdf' file within JSR-89 exemplifies such a file.[32]
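The stub-based, black-box testing described above can be sketched as follows; the OrderService, InventoryService, and test data are all hypothetical, and a real project would drive this from its recorded regression test data (run with java -ea to enable assertions).

// Black-box testing a service against a stub of the service it calls.
interface InventoryService {
    int stockLevel(String sku);
}

class OrderService {
    private final InventoryService inventory;
    OrderService(InventoryService inventory) { this.inventory = inventory; }

    boolean canFulfil(String sku, int quantity) {
        return inventory.stockLevel(sku) >= quantity;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Stub reacting to known test data, standing in for the real service.
        InventoryService stub = sku -> "TEST-SKU".equals(sku) ? 10 : 0;
        OrderService service = new OrderService(stub);
        // Validate the service against its documented behaviour.
        assert service.canFulfil("TEST-SKU", 5);
        assert !service.canFulfil("TEST-SKU", 50);
        System.out.println("OrderService behaves as documented");
    }
}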


Challenges
One obvious and common challenge is managing service metadata. SOA-based environments can include many services that exchange messages to perform tasks. Depending on the design, a single application may generate millions of messages. Managing and providing information on how services interact can become complex. This becomes even more complicated when these services are delivered by different organizations within the company or even different companies (partners, suppliers, etc.). This creates huge trust issues across teams; hence the need for SOA governance. Another challenge involves the lack of testing in the SOA space. There are no sophisticated tools that provide testability of all headless services (including message and database services along with web services) in a typical architecture. Lack of horizontal trust requires that both producers and consumers test services on a continuous basis. SOA's main goal is to deliver agility to businesses, so it is important to invest in a testing framework (built or bought) that provides the visibility required to find the culprit in the architecture. Business agility requires SOA services to be controlled by the business goals and directives as defined in the business motivation model (BMM).[33] Another challenge relates to providing appropriate levels of security. Security models built into an application may no longer suffice when an application exposes its capabilities as services that can be used by other applications. That is, application-managed security is not the right model for securing services. A number of new technologies and standards have started to emerge and provide more appropriate models for security in SOA. Finally, the impact of changing a service that touches multiple business domains will require a higher level of change-management governance.[34] As practitioners of SOA and the WS-* specifications expand, update and refine their output, they encounter a shortage of skilled people to work on SOA-based systems, including the integration of services and construction of services infrastructure. Interoperability is an important aspect of SOA implementations. The WS-I organization has developed the basic profile (BP) and basic security profile (BSP) to enforce compatibility.[35] WS-I has designed testing tools to help assess whether web services conform to WS-I profile guidelines. Additionally, another charter has been established to work on the Reliable Secure Profile. Significant vendor hype surrounds SOA, which can create exaggerated expectations. Product stacks continue to evolve as early adopters test the development and runtime products with real-world problems. SOA does not guarantee reduced IT costs, improved systems agility or shorter time to market. Successful SOA implementations may realize some or all of these benefits depending on the quality and relevance of the system architecture and design.[36][37] Internal IT delivery organizations routinely initiate SOA efforts, and some do a poor job of introducing SOA concepts to a business, with the result that SOA remains misunderstood within that business. The adoption of SOA then starts to meet IT delivery needs instead of those of the business, resulting in an organization with, for example, superlative laptop-provisioning services instead of one that can quickly respond to market opportunities. Business leadership also frequently becomes convinced that the organization is executing well on SOA.
One of the most important benefits of SOA is its ease of reuse. Accountability and funding models must therefore evolve within the organization. A business unit needs to be encouraged to create services that other units will use. Conversely, units must be encouraged to reuse services. This requires a few new governance components:
Each business unit creating services must have an appropriate support structure in place to deliver on its service-level obligations, and to support enhancing existing services strictly for the benefit of others. This is typically quite foreign to business leaders.
Each business unit consuming services accepts the apparent risk of reusing services outside their own control, with the attendant external project dependencies, etc.
An innovative funding model is needed as an incentive to drive these behaviors. Business units normally pay the IT organization to assist during projects and then to operate the environment. Corporate incentives should discount these costs to service providers and create internal revenue streams from consuming business units to the service provider. These streams should be less than the costs of a consumer simply building it the old-fashioned way. This is where SOA deployments can benefit from the SaaS monetization architecture.[38]


Criticisms
Some criticisms of SOA depend on conflating SOA with web services.[39] For example, some critics claim SOA results in the addition of XML layers, introducing XML parsing and composition. In the absence of native or binary forms of remote procedure call (RPC), applications could run slower and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI), Windows Communication Foundation (WCF) and Data Distribution Service (DDS)) that do not depend on remote procedure calls or translation through XML. At the same time, emerging open-source XML parsing technologies (such as VTD-XML) and various XML-compatible binary formats promise to significantly improve SOA performance.[40][41][42]

Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by messages exchanged between the provider and the consumer. This constraint has the drawback that it could reduce the overall scalability of the service provider if the provider needs to retain the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult.[43] Ultimately, some critics feel that SOA services are still too constrained by the applications they represent.[44]

Another concern relates to the ongoing evolution of WS-* standards and products (e.g., transaction, security): SOA can thus introduce new risks unless properly managed and estimated with additional budget and contingency for additional proof-of-concept work. There has even been an attempt to parody the complexity and sometimes-oversold benefits of SOA, in the form of a 'SOA Facts' [45] site that mimics the 'Chuck Norris Facts' meme.

Some critics regard SOA as merely an obvious evolution of currently well-deployed architectures (open interfaces, etc.). IT system designs sometimes overlook the desirability of modifying systems readily. Many systems, including SOA-based systems, hard-code the operations, goods and services of the organization, thus restricting their online service and business agility in the global marketplace. The next step in the design process covers the definition of a service delivery platform (SDP) and its implementation. In the SDP design phase one defines the business information models, identity management, products, content, devices, and the end-user service characteristics, as well as how agile the system is so that it can deal with the evolution of the business and its customers.

SOA Manifesto
In October 2009, at the 2nd International SOA Symposium, a mixed group of 17 independent SOA practitioners and vendors, the "SOA Manifesto Working Group", announced the publication of the SOA Manifesto.[46] The SOA Manifesto is a set of objectives and guiding principles that aim to provide a clear understanding and vision of SOA and service-orientation. Its purpose is to rescue the SOA concept from an excessive use of the term by the vendor community and "a seemingly endless proliferation of misinformation and confusion".[47] The manifesto provides a broad definition of SOA, the values it represents for the signatories and some guiding principles. The manifesto prioritizes:
Business value over technical strategy
Strategic goals over project-specific benefits
Intrinsic interoperability over custom integration
Shared services over specific-purpose implementations
Flexibility over optimization
Evolutionary refinement over pursuit of initial perfection


As of September 2010, the SOA Manifesto had been signed by more than 700 signatories and had been translated into nine languages.

Extensions
SOA, Web 2.0, services over the messenger, and mashups
Web 2.0, a perceived "second generation" of web activity, primarily features the ability of visitors to contribute information for collaboration and sharing. Web 2.0 applications often use REST-ful web services and commonly feature AJAX based user interfaces, utilizing web syndication, blogs, and wikis. While there are no set standards for Web 2.0, it is characterized by building on the existing Web server architecture and using services. Web 2.0 can therefore be regarded as displaying some SOA characteristics.[48][49][50] Some commentators also regard mashups as Web 2.0 applications. The term " business mashups" describes web applications that combine content from more than one source into an integrated user experience that shares many of the characteristics of service-oriented business applications (SOBAs). SOBAs are applications composed of services in a declarative manner. There is ongoing debate about "the collision of Web 2.0, mashups, and SOA," with some stating that Web 2.0 applications are a realization of SOA composite and business applications.[51]

Web 2.0
Tim O'Reilly coined the term "Web 2.0" to describe a perceived, quickly growing set of web-based applications.[52] A topic that has experienced extensive coverage is the relationship between Web 2.0 and service-oriented architectures. SOA is considered the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notion of complexity-hiding and reuse, but also the concept of loosely coupling services, has inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus cannot be regarded as parallel philosophies, whereas others consider the two concepts complementary and regard Web 2.0 as the global SOA.[49] The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, as of 2008, use cases demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.[49] In an "Internet of Services", all people, machines, and goods will have access via the network infrastructure of tomorrow. The Internet will thus offer services for all areas of life and business, such as virtual insurance, online banking and music, and so on. Those services will require a complex services infrastructure, including service-delivery platforms bringing together demand and supply. Building blocks for the Internet of Services include SOA, Web 2.0 and semantics on the technology side, as well as novel business models and approaches to systematic and community-based innovation.[53] Even though Oracle indicates that Gartner is coining a new term, Gartner analysts call this advanced SOA and refer to it as "SOA 2.0".[54] Most of the major middleware vendors (e.g., Red Hat, webMethods, TIBCO Software, IBM, Sun Microsystems, and Oracle) have had some form of SOA 2.0 attributes for years.


Digital nervous system


SOA implementations have been described as representing a piece of the larger vision known as the digital nervous system[55][56] or the Zero Latency Enterprise.[57]

References
[1] Bell, Michael (2008). "Introduction to Service-Oriented Modeling". Service-Oriented Modeling: Service Analysis, Design, and Architecture. Wiley & Sons. p. 3. ISBN 978-0-470-14111-3.
[2] Bell, Michael (2010). SOA Modeling Patterns for Service-Oriented Discovery and Analysis. Wiley & Sons. p. 390. ISBN 978-0-470-48197-4.
[3] Erl, Thomas. About the Principles. Serviceorientation.org, 2005-06 (http://www.serviceorientation.org/)
[4] "Application Platform Strategies Blog: SOA is Dead; Long Live Services" (http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html). Apsblog.burtongroup.com. 2009-01-05. Retrieved 2012-08-13.
[5] "OpenTravel" (http://www.opentravel.org/). OpenTravel. Retrieved 2012-08-13.
[6] Channabasavaiah, Holley and Tuggle, Migrating to a service-oriented architecture (http://www-128.ibm.com/developerworks/library/ws-migratesoa/), IBM DeveloperWorks, 16 December 2003.
[7] Yvonne Balzer, Improve your SOA project plans (http://www-128.ibm.com/developerworks/webservices/library/ws-improvesoa/), IBM, 16 July 2004.
[8] Microsoft Windows Communication Foundation team. "Principles of Service Oriented Design" (http://msdn.microsoft.com/en-us/library/bb972954.aspx). msdn.microsoft.com. Retrieved September 3, 2012.
[9] http://soaprinciples.com
[10] Tony Shan, "Building a Service-Oriented eBanking Platform" (http://doi.ieeecomputersociety.org/10.1109/SCC.2004.1358011), First IEEE International Conference on Services Computing (SCC'04), pp. 237-244, 2004.
[11] http://www.ibm.com/developerworks/webservices/library/ws-soa-design1/
[12] SOA Practitioners Guide Part 2: SOA Reference Architecture (http://www.soablueprint.com/whitepapers/SOAPGPart2.pdf)
[13] http://www.soablueprint.com/whitepapers/SOAPGPart3.pdf
[14] http://www.ibm.com/developerworks/webservices/library/ws-soa-design/
[15] Thomas Erl. "Introducing SOA Design Patterns" (http://soa.sys-con.com/node/645271?page=0,1). SOA World Magazine. Retrieved August 17, 2012.
[16] A repository that keeps records of user identities, e.g. usernames.
[17] Lightweight, event-based software programs that automatically intercept messages to and from services and execute custom logic.
[18] Services whose design has not been finalized.
[19] "SOAP Version 1.2 (W3C)" (http://www.w3.org/2003/06/soap12-pressrelease). W3.org. Retrieved 2012-08-13.
[20] Enterprise SOA. Prentice Hall, 2005.
[21] Cardoso, Jorge; Sheth, Amit P. (2006). "Foreword". Semantic Web Services, Processes and Applications. Semantic Web and Beyond: Computing for Human Experience. Foreword by Frank Leymann. Springer. p. xxi. ISBN 978-0-387-30239-3. "The corresponding architectural style is called "service-oriented architecture": fundamentally, it describes how service consumers and service providers can be decoupled via discovery mechanisms resulting in loosely coupled systems. Implementing a service-oriented architecture means to deal with heterogeneity and interoperability concerns."
[22] SOA Reference Model definition (http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=soa-rm)
[23] http://opengroup.org/projects/soa/doc.tpl?gdid=10632
[24] "What Belongs in a Service Contract?" (http://www.zapthink.com/2005/08/24/what-belongs-in-a-service-contract/). ZapThink. Retrieved 2012-08-13.
[25] http://www.oracle.com/us/technologies/soa/management-pack-soa-066457.html
[26] Christopher Koch, A New Blueprint For The Enterprise (http://www.cio.com.au/index.php/id;1350140708), CIO Magazine, March 1, 2005.
[27] Elizabeth Millard. "Building a Better Process". Computer User. January 2005. Page 20.
[28] Bieberstein et al., Service-Oriented Architecture (SOA) Compass: Business Value, Planning, and Enterprise Roadmap (The developerWorks Series), IBM Press, 2005, ISBN 978-0131870024.
[29] Martin van den Berg et al., SOA for Profit, A Manager's Guide to Success with Service-Oriented Architecture, ISBN 978-9075414141.
[30] M. Hadi Valipour, Bavar AmirZafari, Kh. Niki Maleki, Negin Daneshpour, A Brief Survey of Software Architecture Concepts and Service Oriented Architecture (http://dx.doi.org/10.1109/ICCSIT.2009.5235004), in Proceedings of the 2nd IEEE International Conference on Computer Science and Information Technology, ICCSIT'09, pp. 34-38, Aug 2009, China.
[31] Brayan Zimmerli, Business Benefits of SOA (http://www.brayan.com/projects/BenefitsOfSOA/default.htm), University of Applied Science of Northwestern Switzerland, School of Business, 11 November 2009.
[32] https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=7854-oss_service_activation-1.0-fr-spec-oth-JSpec@CDS-CDS_Developer
[33] http://www.jot.fm/issues/issue_2008_11/column6/index.html
[34] Philip Wik, Confronting SOA's Four Horsemen of the Apocalypse, Service Technology Magazine, June 17, 2011 (http://www.servicetechmag.com/I51/0611-4)
[35] WS-I Basic Profile (http://www.ws-i.org/Profiles/BasicProfile-1.0-2004-04-16.html)
[36] Is There Real Business Value Behind the Hype of SOA? (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9001155&source=NLT_ROI&nlid=44), Computerworld, June 19, 2006.
[37] See also: WS-MetadataExchange, OWL-S.
[38] The Overlapping Worlds of SaaS and SOA (http://cloudcomputing.sys-con.com/?q=node/1047073)
[39] http://blogs.zdnet.com/service-oriented/?p=597
[40] Index XML documents with VTD-XML (http://xml.sys-con.com/read/453082.htm)
[41] The Performance Woe of Binary XML (http://soa.sys-con.com/read/250512.htm)
[42] Manipulate XML Content the Ximple Way (http://www.devx.com/xml/Article/36379)
[43] "The Reason SOA Isn't Delivering Sustainable Software" (http://www.jpmorgenthal.com/morgenthal/?p=31). jpmorgenthal.com. 2009-06-19. Retrieved 2009-06-27.
[44] "SOA services still too constrained by applications they represent" (http://blogs.zdnet.com/service-oriented/?p=2306). zdnet.com. 2009-06-27. Retrieved 2009-06-27.
[45] http://soafacts.com
[46] SOA Manifesto official website (http://www.soa-manifesto.org). Date accessed: 02 October 2010.
[47] http://www.soa-manifesto.org/aboutmanifesto.html
[48] Dion Hinchcliffe, Is Web 2.0 The Global SOA? (http://web2.wsj2.com/is_web_20_the_global_soa.htm), SOA Web Services Journal, 28 October 2005.
[49] Schroth, Christoph; Janner, Till (2007). Web 2.0 and SOA: Converging Concepts Enabling the Internet of Services (http://www.alexandria.unisg.ch/Publikationen/37270). IT Professional 9 (2007), Nr. 3, pp. 36-41, IEEE Computer Society. Retrieved 2008-02-23.
[50] Hoyer, Volker; Stanoesvka-Slabeva, Katarina; Janner, Till; Schroth, Christoph (2008). Enterprise Mashups: Design Principles towards the Long Tail of User Need (http://www.alexandria.unisg.ch/publications/44891). Proceedings of the 2008 IEEE International Conference on Services Computing (SCC 2008). Retrieved 2008-07-08.
[51] Jason Bloomberg, Mashups and SOBAs: Which is the Tail and Which is the Dog? (http://www.zapthink.com/report.html?id=ZAPFLASH-2006320), ZapThink.
[52] "What Is Web 2.0" (http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html). Tim O'Reilly. 2005-09-30. Retrieved 2008-06-10.
[53] Ruggaber, Rainer (2007). Internet of Services - A SAP Research Vision (http://csdl2.computer.org/comp/proceedings/wetice/2007/2879/00/28790003.pdf). IEEE Computer Society. Retrieved 2008-02-23.
[54] Yefim Natis & Roy Schulte, Advanced SOA for Advanced Enterprise Projects (http://www.gartner.com/DisplayDocument?ref=g_search&id=493863), Gartner, July 13, 2006.
[55] "From Web to Boarding Area: Delta's SOA is Ready" (http://www.ebizq.net/blogs/soainaction/2008/02/from_web_to_boarding_area_delt.php). Retrieved 2009-05-02.
[56] "The Value of An Enterprise Architecture" (http://www.dabcc.com/article.aspx?id=10100). Retrieved 2009-05-02.
[57] "Moving Toward the Zero Latency Enterprise" (http://soa.sys-con.com/node/39849). Retrieved 2009-05-02.

108

External links
3-minute video (http://www.infoworld.com/d/architecture/infoclipz-service-oriented-architecture-soa-899) from InfoWorld magazine explaining SOA
Recommendations for beginning an SOA initiative (http://www.infoq.com/presentations/Beginning-an-SOA-Initiative) by Ian Robinson (video)
The pros and cons of using SOAP/WSDL/WS-* and REST for SOA (http://www.infoq.com/presentations/mark-little-soa-rest) by Mark Little (video)
Technology choices and business ramifications around implementing an SOA (http://www.infoq.com/interviews/robinson-rest-ws-soa-implementation), an interview with Ian Robinson (video)
Service-oriented architecture's 6 burning questions (http://www.networkworld.com/news/2007/071907-burning-question.html)
A comparison of SOA standards carried out for the Ministry of Defence (United Kingdom) in 2010 (http://www.modelfutures.com/file_download/17/MOD+CIO+-+Service+Analysis+Report+-+v1.3.pdf)


Zachman Framework
The Zachman Framework is an enterprise architecture framework which provides a formal and highly structured way of viewing and defining an enterprise. It consists of a two-dimensional classification matrix based on the intersection of six communication questions (What, Where, When, Why, Who and How) with six rows representing reification transformations.[1] The Zachman Framework is not a methodology, in that it does not imply any specific method or process for collecting, managing, or using the information that it describes.[2] The Framework is named after its creator John Zachman, who first developed the concept in the 1980s at IBM. It has been updated several times since.[3]

The Zachman Framework of Enterprise Architecture

The Zachman "Framework" is a schema for organizing architectural artifacts (in other words, design documents, specifications, and models) that takes into account both whom the artifact targets (for example, business owner and builder) and what particular issue (for example, data and functionality) is being addressed.[4]

Overview
The term "Zachman Framework" has multiple meanings. It can refer to any of the frameworks proposed by John Zachman: The initial framework, named A Framework for Information Systems Architecture, by John Zachman published in an 1987 article in the IBM Systems journal.[5] The Zachman Framework for Enterprise Architecture, an update of the 1987 original in the 1990s extended and renamed .[6] One of the later versions of the Zachman Framework, offered by Zachman International as industry standard. In other sources the Zachman Framework is introduced as a framework, originated by and named after John Zachman, represented in numerous ways, see image. This framework is explained as, for example: a framework to organize and analyze data,[7] a framework for enterprise architecture.[8] a classification system, or classification scheme[9] a matrix, often in a 6x6 matrix format a two-dimensional model[10] or an analytic model. a two-dimensional schema, used to organize the detailed representations of the enterprise.[11]

Collage of Zachman Frameworks as presented in several books on Enterprise Architecture from 1997 to 2005.

Besides the frameworks developed by John Zachman, numerous extensions and applications have also been developed, which are sometimes also called Zachman Frameworks. The Zachman Framework summarizes a collection of perspectives involved in enterprise architecture. These perspectives are represented in a two-dimensional matrix that defines the types of stakeholders along the rows and the aspects of the architecture along the columns. The framework does not define a methodology for an architecture. Rather, the matrix is a template that must be filled in by the goals/rules, processes, material, roles, locations, and events specifically required by the organization. Further modeling, by mapping between columns in the framework, identifies gaps in the documented state of the organization.[12]
The framework is a simple and logical structure for classifying and organizing the descriptive representations of an enterprise. It is significant both to the management of the enterprise and to the actors involved in the development of enterprise systems.[13] While there is no order of priority for the columns of the Framework, the top-down order of the rows is significant to the alignment of business concepts and the actual physical enterprise. The level of detail in the Framework is a function of each cell (and not of the rows). When used by an IT department, the lower rows focus on information technology; however, the framework can apply equally to physical materials (ball valves, piping, transformers, fuse boxes, for example) and the associated physical processes, roles, and locations related to those items.


History
In the 1980s John Zachman had been involved at IBM in the development of Business System Planning (BSP), a method for analyzing, defining and designing an information architecture of organizations. In 1982 Zachman had already concluded that these analyses could reach far beyond automating systems design and managing data, into the realms of strategic business planning and management science in general. They could be employed in the (at that time considered more esoteric) areas of enterprise architecture, data-driven systems design, data classification criteria, and more.[14]

Information Systems Architecture Framework


In the 1987 article "A Framework for Information Systems Architecture"[15] Zachman noted that the term "architecture" was used loosely by information systems professionals, and meant different things to planners, designers, programmers, communication specialists, and others.[16] In searching for an objective, independent basis upon which to develop a framework for information systems architecture, Zachman looked at the field of classical architecture and at a variety of complex engineering projects in industry. He saw a similar approach in each and concluded that architectures exist on many levels and involve at least three perspectives: raw material or data, function of processes, and location or networks.[16]

Simple example of the 1992 Framework.

The Information Systems Architecture is designed to be a classification schema for organizing architecture models. It provides a synoptic view of the models needed for enterprise architecture. The Information Systems Architecture does not define in detail what the models should contain, does not enforce the modeling language used for each model, and does not propose a method for creating these models.[17]

Extension and formalization


In the 1992 article "Extending and Formalizing the Framework for Information Systems Architecture", John F. Sowa and John Zachman present the framework and its recent extensions and show how it can be formalized in the notation of conceptual graphs.[18] Also in 1992, Zachman's co-author John Sowa proposed the additions of the Scope perspective of the planner (bounding lists common to the enterprise and its environment) and the Detailed Representation perspective of the sub-contractor (the out-of-context vendor solution components). The Who, When and Why columns were brought into public view, and the notion of the four levels of meta-frameworks and a depiction of integration associations across the perspectives were also outlined in the paper. Keri Anderson Healey assisted by creating a model of the models (the framework meta-model), which was also included in the article.[19]
Later during the 1990s, methodologists like Clive Finkelstein refocused on the top two framework rows, which he labeled "Enterprise Engineering", and developed one of the most successful methods for converging business needs with information engineering implementation and for determining a logical build sequence of the pieces.[19]

Framework for enterprise architecture


In the 1997 paper "Concepts of the Framework for Enterprise Architecture" Zachman said that the framework should be referred to as a "Framework for Enterprise Architecture", and should have been from the beginning. In the early 1980s, however, according to Zachman, there was "little interest in the idea of Enterprise Reengineering or Enterprise Modeling and the use of formalisms and models was generally limited to some aspects of application development within the Information Systems community".[20]
In 2008 Zachman Enterprise introduced "The Zachman Framework: The Official Concise Definition" as a new Zachman Framework standard.

Extended and modified frameworks


Since the 1990s several extended frameworks have been proposed, such as:
Matthews & McGee (1990)[21] extended the three initial perspectives "what", "how" and "where" with event (the "when"), reason (the "why") and organization (the "who").[16]
Evernden (1996) presented an alternative Information FrameWork.
The Integrated Architecture Framework, developed by Capgemini since 1996.[22]
Vladan Jovanovic et al. (2006) present a Zachman Cube, an extension of the Zachman Framework into a multidimensional Zachman's Cube.[23]

Zachman Framework topics


Concept
The basic idea behind the Zachman Framework is that the same complex thing or item can be described for different purposes in different ways using different types of descriptions (e.g., textual, graphical). The Zachman Framework provides the thirty-six necessary categories for completely describing anything, especially complex things like manufactured goods (e.g., appliances), constructed structures (e.g., buildings), and enterprises (e.g., the organization and all of its goals, people, and technologies). The framework provides six different transformations of an abstract idea (not increasing in detail, but transforming) from six different perspectives.[24]

Zachman Framework It allows different people to look at the same thing from different perspectives. This creates a holistic view of the environment, an important capability illustrated in the figure.[25]


Views of Rows
Each row represents a total view of the solution from a particular perspective. An upper row or perspective does not necessarily have a more comprehensive understanding of the whole than a lower perspective. Each row represents a distinct, unique perspective; however, the deliverables from each perspective must provide sufficient detail to define the solution at that level of perspective and must translate to the next lower row explicitly.[26] Each perspective must take into account the requirements of the other perspectives and the restraints those perspectives impose. The constraints of each perspective are additive; for example, the constraints of higher rows affect the rows below, while the constraints of lower rows can, but do not necessarily, affect the higher rows. Understanding the requirements and constraints necessitates communication of knowledge and understanding from perspective to perspective. The Framework points the vertical direction for that communication between perspectives.[26]

The Veterans Affairs Zachman Framework with an explanation of its rows.

In the 1997 Zachman Framework the rows are described as follows:[26]
Planner's View (Scope) - The first architectural sketch is a "bubble chart" or Venn diagram, which depicts in gross terms the size, shape, partial relationships, and basic purpose of the final structure. It corresponds to an executive summary for a planner or investor who wants an overview or estimate of the scope of the system, what it would cost, and how it would relate to the general environment in which it will operate.[27][28]
Owner's View (Enterprise or Business Model) - Next are the architect's drawings that depict the final building from the perspective of the owner, who will have to live with it in the daily routines of business. They correspond to the enterprise (business) models, which constitute the designs of the business and show the business entities and processes and how they relate.
Designer's View (Information Systems Model) - The architect's plans are the translation of the drawings into detail requirements representations from the designer's perspective. They correspond to the system model designed by a systems analyst, who must determine the data elements, logical process flows, and functions that represent business entities and processes.
Builder's View (Technology Model) - The contractor must redraw the architect's plans to represent the builder's perspective, with sufficient detail to understand the constraints of tools, technology, and materials. The builder's plans correspond to the technology models, which must adapt the information systems model to the details of the programming languages, input/output (I/O) devices, or other required supporting technology.
Subcontractor View (Detailed Specifications) - Subcontractors work from shop plans that specify the details of parts or subsections. These correspond to the detailed specifications that are given to programmers who code individual modules without being concerned with the overall context or structure of the system. Alternatively, they could represent the detailed requirements for various commercial-off-the-shelf (COTS), government off-the-shelf (GOTS), or modular systems software components being procured and implemented rather than built.
Actual System View or the Functioning Enterprise


Focus of Columns
In summary, each perspective focuses attention on the same fundamental questions, then answers those questions from that viewpoint, creating different descriptive representations (i.e., models), which translate from higher to lower perspectives. The basic model for the focus (or product abstraction) remains constant. The basic model of each column is uniquely defined, yet related across and down the matrix.[26] In addition, the six categories of enterprise architecture components, and the underlying interrogatives that they answer, form the columns of the Zachman Framework:[24]
1. The data description - What
2. The function description - How
3. The network description - Where
4. The people description - Who
5. The time description - When
6. The motivation description - Why

In Zachman's opinion, the single factor that makes his framework unique is that each element on either axis of the matrix is explicitly distinguishable from all the other elements on that axis. The representations in each cell of the matrix are not merely successive levels of increasing detail, but actually are different representations, differing in context, meaning, motivation, and use. Because each of the elements on either axis is explicitly different from the others, it is possible to define precisely what belongs in each cell.[24]
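This classification scheme is easy to make concrete. The following sketch is illustrative only and is not part of Zachman's specification; the row and column labels follow the descriptions above, and the sample artifacts are invented. It represents the grid as a mapping from (perspective, interrogative) pairs to the artifacts classified in each cell.

    # Illustrative sketch of the Zachman grid: 6 perspectives (rows) x
    # 6 interrogatives (columns) = 36 unique cells, each collecting the
    # artifacts produced for that perspective/focus intersection.
    PERSPECTIVES = ["Planner", "Owner", "Designer", "Builder",
                    "Subcontractor", "Functioning Enterprise"]
    INTERROGATIVES = ["What", "How", "Where", "Who", "When", "Why"]

    grid = {(p, q): [] for p in PERSPECTIVES for q in INTERROGATIVES}

    # Each artifact is classified by exactly one cell (sample entries).
    grid[("Owner", "How")].append("business process model")
    grid[("Designer", "What")].append("logical data model")

    print(len(grid))               # 36 categories
    print(grid[("Owner", "How")])  # ['business process model']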

Models of Cells
The kinds of models or architectural descriptive representations are made explicit at the intersections of the rows and columns. An intersection is referred to as a cell. Because a cell is created by the intersection of a perspective and a focus, each is distinctive and unique. Since each cell is distinctive and unique, the contents of the cell are normalized and explicit per the perspective's focus.[26] The cell descriptions in the table itself use general language for a specific set of targets. Below, the focus of each cell in this particular Zachman Framework is explained:

Current view of the Zachman Framework.

Contextual
1. (Why) Goal List - primary high-level organization goals
2. (How) Process List - list of all known processes
3. (What) Material List - list of all known organizational entities
4. (Who) Organizational Unit & Role List - list of all organization units, sub-units, and identified roles
5. (Where) Geographical Locations List - locations important to the organization; can be large and small
6. (When) Event List - list of triggers and cycles important to the organization
Conceptual
1. (Why) Goal Relationship Model - identifies the hierarchy of goals that support the primary goals
2. (How) Process Model - provides process descriptions, input processes, output processes
3. (What) Entity Relationship Model - identifies and describes the organizational materials and their relationships
4. (Who) Organizational Unit & Role Relationship Model - identifies enterprise roles and units and the relationships between them
5. (Where) Locations Model - identifies enterprise locations and the relationships between them
6. (When) Event Model - identifies and describes events and cycles related by time
Logical
1. (Why) Rules Diagram - identifies and describes rules that apply constraints to processes and entities without regard to physical or technical implementation
2. (How) Process Diagram - identifies and describes process transitions expressed as verb-noun phrases without regard to physical or technical implementation
3. (What) Data Model Diagram - identifies and describes entities and their relationships without regard to physical or technical implementation
4. (Who) Role Relationship Diagram - identifies and describes roles and their relations to other roles by types of deliverables without regard to physical or technical implementation
5. (Where) Locations Diagram - identifies and describes locations used to access, manipulate, and transfer entities and processes without regard to physical or technical implementation
6. (When) Event Diagram - identifies and describes events related to each other in sequence, and cycles that occur within and between events, without regard to physical or technical implementation
Physical
1. (Why) Rules Specification - expressed in a formal language; consists of rule name and structured logic to specify and test rule state
2. (How) Process Function Specification - expressed in a technology-specific language; hierarchical process elements are related by process calls
3. (What) Data Entity Specification - expressed in a technology-specific format; each entity is defined by name, description, and attributes; shows relationships
4. (Who) Role Specification - expresses roles performing work and workflow components at the work-product detailed specification level
5. (Where) Location Specification - expresses the physical infrastructure components and their connections
6. (When) Event Specification - expresses transformations of event states of interest to the enterprise
Detailed Representation
Eventually the cells with the detailed representation give Rules detail for (Why); Process detail for (How); Data detail for (What); Role detail for (Who); Location detail for (Where); and Event detail for (When). There is a sixth row in the current Zachman Framework, but it is not used for enterprise architecture: while the enterprise itself is described by rows one to six, enterprise architecture uses only rows one to five, so only five rows are shown here.[3]
Since the product development (i.e., architectural artifact) in each cell, or the problem solution embodied by the cell, is the answer to a question from a perspective, typically the models or descriptions are higher-level depictions or the surface answers of the cell. The refined models or designs supporting that answer are the detailed descriptions within the cell. Decomposition (i.e., drill-down to greater levels of detail) takes place within each cell. If a cell is not made explicit (defined), it is implicit (undefined). If it is implicit, the risk of making assumptions about these cells exists. If the assumptions are valid, then time and money are saved. If, however, the assumptions are invalid, it is likely to increase costs and exceed the schedule for implementation.[26]


Framework set of rules


The framework comes with a set of rules:[29]

Example of Zachman Framework Rules.

Rule 1: The columns have no order - the columns are interchangeable, but cannot be reduced or created.
Rule 2: Each column has a simple generic model - every column can have its own meta-model.
Rule 3: The basic model of each column must be unique - the basic model of each column, its relationship objects and its structure, is unique. Each relationship object is interdependent, but the representation objective is unique.
Rule 4: Each row describes a distinct, unique perspective - each row describes the view of a particular business group and is unique to it. All rows are usually present in most hierarchical organizations.
Rule 5: Each cell is unique - the combination of rules 2, 3 and 4 must produce unique cells, where each cell represents a particular case. Example: A2 represents business outputs, as they represent what is eventually to be constructed.
Rule 6: The composite or integration of all cell models in one row constitutes a complete model from the perspective of that row - for the same reason as for not adding rows and columns, changing the names may change the fundamental logical structure of the Framework.
Rule 7: The logic is recursive - the logic is relational between two instances of the same entity.
The framework is generic in that it can be used to classify the descriptive representations of any physical object as well as conceptual objects such as enterprises. It is also recursive in that it can be used to analyze the architectural composition of itself. Although the framework will carry the relation from one column to the other, it is still a fundamentally structural representation of the enterprise and not a flow representation.
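Some of these rules can be read operationally against a grid like the sketch given earlier. The check below is again only an illustration (it reuses the PERSPECTIVES, INTERROGATIVES and grid names from that sketch): it verifies rule 5, that every cell is addressed by exactly one perspective/interrogative pair, and expresses rule 6, that a row's complete model is the composite of its six cells.

    def cells_are_unique(grid):
        # Rule 5: dictionary keys cannot repeat, so uniqueness reduces to
        # checking that all 36 intersections are present exactly once.
        expected = {(p, q) for p in PERSPECTIVES for q in INTERROGATIVES}
        return set(grid) == expected

    def row_model(grid, perspective):
        # Rule 6: the composite of all cell models in one row constitutes
        # the complete model from that row's perspective.
        return {q: grid[(perspective, q)] for q in INTERROGATIVES}

    print(cells_are_unique(grid))   # True for the grid built earlier
    print(row_model(grid, "Owner")) # the owner's complete model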

Flexibility in level of detail


One of the strengths of the Zachman Framework is that it explicitly shows a comprehensive set of views that can be addressed by enterprise architecture.[12] Some feel that following this model completely can lead to too much emphasis on documentation, as artifacts would be needed for every one of the thirty-six cells in the framework. John Zachman clearly states in his documentation, presentations, and seminars that, as a framework, there is flexibility in what depth and breadth of detail is required for each cell of the matrix, based upon its importance to a given organization. An automaker, whose business goals may necessitate an inventory- and process-driven focus, could find it beneficial to focus its documentation efforts on the What and How columns. A travel agency, whose business is more concerned with people and event-timing, could find it beneficial to focus its documentation efforts on the Who and When columns. However, there is no escaping the Why column's importance, as it provides the business drivers for all the other columns.

Applications and influences


Since the 1990s the Zachman Framework has been widely used as a means of providing structure for Information Engineering-style enterprise modeling.[30] The Zachman Framework can be applied both in commercial companies and in government agencies. Within a government organization the framework can be applied to an entire agency at an abstract level, or it can be applied to various departments, offices, programs, subunits and even to basic operational entities.[31]


Customization
The Zachman Framework is also applied in customized frameworks such as the TEAF, which is built around a similar matrix, the TEAF matrix.

TEAF Matrix of Views and Perspectives.

Framework for EA Direction, Description, and Accomplishment Overview.

TEAF Products.

TEAF Work Products for EA Direction, Description, and Accomplishment.

Other sources describe the TEAF matrix as a customization sample (see [32], p. 22).

Standards based on the Zachman Framework


The Zachman Framework is also used as a framework to describe standards, for example standards for healthcare and healthcare information systems. Each cell of the framework then contains such a series of standards for healthcare and healthcare information systems.[33]

Mapping other frameworks


Another application of the Zachman Framework is as a reference model for other enterprise architectures; see for example these four:

EAP mapped to the Zachman Framework, 1999

Mapping the C4ISR, 1999

DoD Products Map to the Zachman Framework Cells, 2003.

Mapping a part of the DoDAF, 2007.

Other examples:
Analysis of the Rational Unified Process as a process.[34]
How the Model-driven architecture (MDA) models used in software development map to the Zachman Framework.[35]
Mapping the IEC 62264 models onto the Zachman Framework for analysing product information traceability.[36]
Mapping the TOGAF Architecture Development Method (i.e. the methodology) to the Zachman Framework.[6]


Base for other enterprise architecture frameworks


Less obvious are the ways in which the original Zachman Framework has stimulated the development of other enterprise architecture frameworks, such as the NIST Enterprise Architecture Model, the C4ISR AE, the DOE AE, and the DoDAF:

NIST Enterprise Architecture Model.[26]

C4ISR AE, 1997.

DOE AE, 1998.

DODAF, 2003.

The Federal Enterprise Architecture Framework (FEAF) is based on the Zachman Framework, but only addresses the first three columns of Zachman, using slightly different names, and focuses on the top three rows.[37] (see here [38])
Example: One-VA Enterprise Architecture
The Zachman Framework methodology has, for example, been used by the United States Department of Veterans Affairs (VA) to develop and maintain its One-VA Enterprise Architecture in 2001. This methodology required defining all aspects of the VA enterprise from a business process, data, technical, location, personnel, and requirements perspective. The next step in implementing the methodology has been to define all functions related to each business process and identify associated data elements. Once identified, duplication of function and inconsistency in data definition can be identified and resolved.[39]

Integrated Process Flow for VA IT Projects (2001)

VA Zachman Framework Portal

VA EA Repository Introduction (2008)

A Tutorial on the Zachman Architecture Framework

At the beginning of the 21st century the Department of Veterans Affairs planned to implement an enterprise architecture fully based on the Zachman Framework, which was used as a reference model to initiate enterprise architecture planning in 2001. In the same period the VA Zachman Framework Portal was constructed; it is still in use as a reference model, for example in the determination of EA information collected from various business and project source documents. A tutorial on the Zachman Architecture Framework was also published. Eventually an enterprise architecture repository was created, structured at the macro level by the Zachman Framework and at cell level by the meta-model outlined below.[40]


VA EA Meta-Model Cell Details Enlarged.

This diagram[41] has been incorporated within the VA EA to provide a symbolic representation of the meta-model it used to describe the One-VA Enterprise Architecture and to build an EA repository without the use of commercial EA repository software. It was developed using an object-oriented database within the Caliber-RM software product. Caliber-RM is intended to be used as a software configuration management tool, not as an EA repository. However, this tool permitted defining entities and relationships, and defining properties on both entities and relationships, which made it sufficient for building an EA repository, considering the technology available in early 2003. The personal motivation in selecting this tool was that none of the commercial repository tools then available provided a true Zachman Framework representation, and they were highly proprietary, making it difficult to incorporate components from other vendors or from open source.
This diagram emphasizes several important interpretations of the Zachman Framework and its adaptation to information technology investment management:
1. Progressing through the rows from top to bottom, one can trace out the Systems Development Life Cycle (SDLC), which is a de facto standard across the information industry.
2. The diagram emphasizes the importance of the often-neglected Zachman row six (the integrated, operational enterprise view). Representations in Mr. Zuech's interpretation of Zachman row six consist largely of measurable service improvements and cost savings/avoidance that result from the business process and technology innovations developed across rows two through five. Row six provides measured return on investment for individual projects and, potentially, for the entire investment portfolio. Without row six the Framework only identifies sunk cost, but the row-six ROI permits it to measure benefits and to be used in a continuous improvement process, capturing best practices and applying them back through row two.


References
[1] "John Zachmans Concise Definition of the The Zachman Framework" (http:/ / zachman. com/ about-the-zachman-framework). Zachman International. 2008. . [2] "The Zachman Framework: The Official Concise Definition" (http:/ / zachman. com/ about-the-zachman-framework). Zachman International. 2008. . [3] "The Zachman Framework Evolution" (http:/ / zachman. com/ ea-articles-reference/ 54-the-zachman-framework-evolution). Zachman International. April, 2009. . [4] A Comparison of the Top Four Enterprise Architecture Methodologies (http:/ / msdn2. microsoft. com/ en-us/ library/ bb466232. aspx), Roger Sessions, Microsoft Developer Network Architecture Center, [5] "A framework for information systems architecture" (http:/ / zachman. com/ images/ ZI_PIcs/ ibmsj2603e. pdf). IBM SYSTEMS JOURNAL, VOL 26. NO 3,. 1987. . [6] The Open Group (19992006). "ADM and the Zachman Framework" (http:/ / www. theopengroup. org/ architecture/ togaf8-doc/ arch/ chap39. html) in: TOGAF 8.1.1 Online. Accessed 25 Jan 2009. [7] William H. Inmon, John A. Zachman, Jonathan G. Geiger (1997). Data Stores, Data Warehousing, and the Zachman Framework: Managing Enterprise Knowledge. McGraw-Hill, 1997. ISBN 0-07-031429-2. [8] Pete Sawyer, Barbara Paech, Patrick Heymans (2007). Requirements Engineering: Foundation for Software Quality. page 191. [9] Kathleen B. Hass (2007). The Business Analyst as Strategist: Translating Business Strategies Into Valuable Solutions. page 58. [10] Harold F. Tipton, Micki Krause (2008). Information Security Management Handbook, Sixth Edition, Volume 2. page 263. [11] O'Rourke, Fishman, Selkow (2003). Enterprise Architecture Using the Zachman Framework. page 9. [12] James McGovern et al. (2003). A Practical Guide to Enterprise Architecture. p. 127-129. [13] Marc Lankhorst et al. (2005). Enterprise Architecture at Work. p. 24. [14] "Business Systems Planning and Business Information Control Study: A comparisment (http:/ / www. research. ibm. com/ journal/ sj/ 211/ ibmsj2101D. pdf). In: IBM Systems Journal, vol 21, no 3, 1982. p. 31-53. [15] John A. Zachman (1987). " A Framework for Information Systems Architecture" (http:/ / www. research. ibm. com/ journal/ 50th/ applications/ zachman. html). In: IBM Systems Journal, vol 26, no 3. IBM Publication G321-5298. [16] Durward P. Jackson (1992). "Process-Based Planning in Information Resource Management". In: Emerging Information Technologies for Competitive Advantage and Economic Development. Proceedings of 1992 Information Resources Management Association International Conference. Mehdi Khosrowpour (ed). ISBN 1-878289-17-9. [17] Alain Wegmann et al. (2008). "Augmenting the Zachman Enterprise Architecture Framework with a Systemic Conceptualization" (http:/ / infoscience. epfl. ch/ record/ 129325/ files/ Wegmann_et_al-SEAM_& _Zachman-EDOC2008. pdf). Presented at the 12th IEEE International EDOC Conference (EDOC 2008), Mnchen, Germany, September 1519, 2008. [18] John F. Sowa and John Zachman (1992). "Extending and Formalizing the Framework for Information Systems Architecture" (http:/ / www. research. ibm. com/ journal/ sj/ 313/ sowa. pdf) In: IBM Systems Journal, Vol 31, no.3, 1992. p. 590-616. [19] Stan Locke (2008). "Enterprise Convergence in Our Lifetime" (http:/ / www. ies. aust. com/ ten/ TEN42. htm#Enterprise_Convergence) In: THE ENTERPRISE NEWSLETTER, TEN42 September 16, 2008 [20] John A. Zachman (1997). " Concepts of the Framework for Enterprise Architecture: Background, Description and Utility (http:/ / www. ies. aust. com/ PDF-papers/ zachman3. 
pdf)". Zachman International. Accessed 19 Jan 2009. [21] R.W. Matthews. &. W.C. McGee (1990). "Data Modeling for Software Development" (http:/ / www. research. ibm. com/ journal/ sj/ 292/ ibmsj2902F. pdf). in: IBM Systems Journal" 29(2). pp. 228234 [22] Jaap Schekkerman (2003). How to Survive in the Jungle of Enterprise Architecture Frameworks. page 139-144. [23] Vladan Jovanovic, Stevan Mrdalj & Adrian Gardiner (2006). A Zachman Cube (http:/ / www. iacis. org/ iis/ 2006_iis/ PDFs/ Jovanovic_Mrdalj_Gardiner. pdf). In: Issues in Information Systems. Vol VII, No. 2, 2006 p. 257-262. [24] VA Enterprise Architecture Innovation Team (2001). Enterprise Architecture: Strategy, Governance, & Implementation (http:/ / www. va. gov/ oirm/ architecture/ ea/ 2002/ VAEAVersion-10-01. pdf) report Department of Veterans Affairs, August, 2001. [25] The government information factory and the Zachman Framework (http:/ / www. inmongif. com/ _fileCabinet/ gifzach. pdf) by W. H. Inmon, 2003. p. 4. Accessed July 14, 2009. [26] The Chief Information Officers Council (1999). Federal Enterprise Architecture Framework Version 1.1 (http:/ / www. cio. gov/ documents/ fedarch1. pdf). September 1999 [27] US Department of Veterans Affairs (2002) A Tutorial on the Zachman Architecture Framework (http:/ / www. va. gov/ oirm/ architecture/ EA/ theory/ tutorial. ppt). Accessed 06 Dec 2008. [28] Bill Inmon called this image "A simple example of The Zachman Framework" in the article John Zachman - One of the Best Architects I Know (http:/ / www. b-eye-network. in/ print/ 1962) Originally published 17 November 2005. [29] Adapted from: Sowa, J.F. & J.A. Zachman, 1992, and Inmon, W.H, J.A. Zachman, & J.G. Geiger, 1997. University of Omaha (http:/ / www. isqa. unomaha. edu/ vanvliet/ arch/ ISA/ isa. htm) [30] Ian Graham (1995). Migrating to Object Technology: the semantic object modelling approach. Addison-Wesley, ISBN 0-201-59389-0. p. 322. [31] Jay D. White (2007). Managing Information in the Public Sector. p. 254. [32] http:/ / www. mega. com/ wp/ active/ document/ company/ wp_mega_zachman_en. pdf

[33] Zachman ISA Framework for Healthcare Informatics Standards (http://apps.adcom.uci.edu/EnterpriseArch/Zachman/Resources/ExampleHealthCareZachman.pdf), 1997.
[34] DJ de Villiers (2001). "Using the Zachman Framework to Assess the Rational Unified Process" (http://www.ibm.com/developerworks/rational/library/content/RationalEdge/mar01/UsingtheZachmanFrameworktoAssesstheRUPMar01.pdf). In: The Rational Edge, Rational Software, 2001.
[35] David S. Frankel et al. (2003). The Zachman Framework and the OMG's Model Driven Architecture (http://www.bptrends.com/publicationfiles/09-03 WP Mapping MDA to Zachman Framework.pdf). White paper, Business Process Trends.
[36] Hervé Panetto, Salah Baïna, Gérard Morel (2007). Mapping the IEC 62264 models onto the Zachman framework for analysing products information traceability: A case study (http://hal.archives-ouvertes.fr/docs/00/11/91/96/PDF/Panetto_et_al_JIM.pdf).
[37] Roland Traunmüller (2004). Electronic Government. p. 51.
[38] http://books.google.nl/books?id=QjB5c_v-uMwC&pg=PA51&dq=%22Zachman+Framework%22+updated&lr=lang_en&as_brr=0&as_pt=ALLTYPES
[39] Statement of Dr. John A. Gauss, Assistant Secretary for Information and Technology, Department of Veterans Affairs (http://www.va.gov/oca/testimony/hvac/soi/13mr02it.asp), before the Subcommittee on Oversight and Investigations, Committee on Veterans' Affairs, U.S. House of Representatives, March 13, 2002.
[40] Meta-Model Cell Details (http://www.va.gov/oit/ea/4_3/process/modeling/metamodel.html). Accessed 25 Dec 2009.
[41] This diagram is the exclusive work of Albin Martin Zuech of Annapolis, Maryland, who placed it in the public domain in 2001. Al Zuech maintains the original Visio diagram in numerous stages of its development between 2000 and the present. Al Zuech was the Director, Enterprise Architecture Service at the Department of Veterans Affairs from 2001 until 2007.


External links
The Zachman Framework: The Official Concise Definition (http://zachman.com/about-the-zachman-framework) by John A. Zachman at Zachman International, 2009.
The Zachman Framework Evolution (http://zachman.com/ea-articles-reference/54-the-zachman-framework-evolution): overview of the evolution of the Zachman Framework by John P. Zachman at Zachman International, April 2009.
UML, RUP, and the Zachman Framework: Better together (http://www.ibm.com/developerworks/rational/library/nov06/temnenco/), by Vitalie Temnenco, IBM, 15 Nov 2006.


The Open Group Architecture Framework


The Open Group Architecture Framework (TOGAF) is a framework for enterprise architecture which provides a comprehensive approach for designing, planning, implementing, and governing an enterprise information architecture. TOGAF is a registered trademark of The Open Group in the United States and other countries.[2] TOGAF is a high-level and holistic approach to design, which is typically modeled at four levels: Business, Application, Data, and Technology. It tries to give a well-tested overall starting model to information architects, which can then be built upon. It relies heavily on modularization, standardization, and already existing, proven technologies and products.

Overview
An architecture framework is a set of tools which can be used for developing a broad range of different architectures.[3] It should:
describe a method for defining an information system in terms of a set of building blocks
show how the building blocks fit together
contain a set of tools
provide a common vocabulary
include a list of recommended standards
include a list of compliant products that can be used to implement the building blocks
TOGAF is such an architecture framework. The ANSI/IEEE Standard 1471-2000 specification of architecture (of software-intensive systems) may be stated as: "the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution." However, TOGAF has its own view, which may be specified as either a "formal description of a system, or a detailed plan of the system at component level to guide its implementation", or as "the structure of components, their interrelationships, and the principles and guidelines governing their design and evolution over time."
TOGAF 8.1.1 ADM.[1]


History
TOGAF is developed by The Open Group Architecture Forum and has been continuously evolving since the mid-1990s. In 1995, the first version of TOGAF was presented. It was "...based on the Technical Architecture Framework for Information Management (TAFIM). The US Department of Defense gave The Open Group explicit permission and encouragement to create TOGAF by building on the TAFIM, which itself was the result of many years of development effort and many millions of dollars of US Government investment."[5]

DoD Standards-Based Architecture Planning Process in TAFIM.[4]

TOGAF 7 ("Technical Edition") was published in December 2001. TOGAF 8 ("Enterprise Edition") was first published in December 2002 and republished in updated form as TOGAF 8.1 in December 2003, which was updated in November 2006 as TOGAF 8.1.1. According to The Open Group, as of February 2011, over 15,000 individuals were TOGAF certified.[6][7] As of September 2012 the official register has over 20,000 certified individuals.
The latest version is TOGAF 9.1, launched on 1 December 2011. An evolutionary development from TOGAF 8, TOGAF 9[8][9] includes many new features, including:
Increased rigor, including a formal Content Metamodel that links the artifacts of TOGAF together
Elimination of unnecessary differences
Many more examples and templates
Additional guidelines and techniques, including:
A formal business-driven approach to architecture
Business capability-based planning
Guidance on how to use TOGAF to develop Security Architectures and SOAs
The Open Group provides TOGAF free of charge to organizations for their own internal noncommercial purposes.[10]

TOGAF topics
Enterprise architecture domains
TOGAF is based on four interrelated architecture domains: Business, Application, Data, and Technology.

Alternative enterprise architecture frameworks


AGATE - the French Délégation Générale pour l'Armement's Atelier de Gestion de l'ArchiTEcture des systèmes d'information et de communication.
ArchiMate - an open and independent modelling language for enterprise architecture.
ARCON - A Reference Architecture for Collaborative Networks - not focused on a single enterprise but rather on networks of enterprises.[11][12]
DoDAF - the United States Department of Defense Architecture Framework.
CSC Catalyst[13]
DYA framework - Sogeti framework.
EIF - European Interoperability Framework - enterprise architecture at the level of EU Member States.
IDABC - Interoperable Delivery (of European eGovernment services to public) Administrations, Business and Citizens.
Integrated Architecture Framework (IAF) - created by Capgemini.
FEA - United States Office of Management and Budget Federal Enterprise Architecture.
MIKE2.0 (Method for an Integrated Knowledge Environment) - includes an enterprise architecture framework called SAFE (Strategic Architecture for the Federated Enterprise).
MODAF - United Kingdom Ministry of Defence Architecture Framework.
Model-driven architecture (MDA) - the Object Management Group's Model Driven Architecture.
OBASHI - the OBASHI Business & IT methodology and framework.
The Operations Systems Computing Architecture (OSCAR) - initially developed in 1986 by Bell Communications Research (the forerunner of the current Telcordia Technologies) to guide development of enterprise systems for the Regional Bell Operating Companies (RBOCs).
PROMIS Framework[14] - the PROMIS Enterprise Architecture Framework, integrated into the EA tool EVA Netmodeler.
SABSA - a comprehensive framework for Enterprise Security Architecture and Service Management.
SAP Enterprise Architecture Framework - an extension of TOGAF to better support commercial off-the-shelf products and service-oriented architecture.
IBM Enterprise Architecture Method - the IBM method for enterprise architecture used in IBM enterprise architecture engagements.
Zachman Framework - IBM framework from the 1980s.

References
[1] Stephen Marley (2003). Architectural Framework (http://aiwg.gsfc.nasa.gov/esappdocs/RPC/RPC_Workshop_Architecture_Framework.ppt). NASA/SCI. Retrieved 10 Dec 2008.
[2] TOGAF Trademark (http://blog.opengroup.org/2011/02/08/togaf-trademark-success/)
[3] TOGAF Introduction (http://www.opengroup.org/architecture/togaf8-doc/arch/). The Open Group Architecture Framework. Accessed 22 Jan 2009.
[4] Department of Defense (1996). Technical Architecture Framework for Information Management. Vol. 4. April 1996.
[5] The Open Group (2009). Welcome to TOGAF Version 9 - The Open Group Architecture Framework. Retrieved on 2009-02-03 from http://www.opengroup.org/architecture/togaf9-doc/arch/.
[6] http://www.opengroup.org/togaf9/cert/cert_archlist-short.tpl
[7] 15,000 certifications (http://blog.opengroup.org/2011/02/08/togaf-trademark-success/)
[8] http://www.opengroup.org/togaf/
[9] TOGAF 9.1 White Paper: An Introduction to TOGAF Version 9.1. http://www.opengroup.org/togaf/
[10] The Open Group (2011). TOGAF Version 9 - Download. Architecture Forum. Retrieved on 2011-11-17 from http://www.opengroup.org/architecture/togaf9/downloads.htm.
[11] L.M. Camarinha-Matos, H. Afsarmanesh, Collaborative Networks: Reference Modeling, Springer, 2008.
[12] L.M. Camarinha-Matos, H. Afsarmanesh, On reference models for collaborative networked organizations, International Journal of Production Research, Vol 46, No 9, May 2008, pp. 2453-2469.
[13] http://www.csc.com/delivery_excellence/ds/11388-csc_catalyst
[14] http://pro-mis.com/framework.html


External links
Official website (http://www.togaf.info/)
TOGAF 9.1 Online (http://pubs.opengroup.org/architecture/togaf9-doc/arch/)
TOGAF 8.1.1 Online (http://pubs.opengroup.org/architecture/togaf8-doc/arch/)
The TOGAF information site (http://www.togaf.info/)
IBM developerWorks: Understand The Open Group Architecture Framework (TOGAF) and IT architecture in today's world (http://www-128.ibm.com/developerworks/ibm/library/ar-togaf1/) (February 2006)
Developer.com: TOGAF: Establishing Itself As the Definitive Method for Building Enterprise Architectures in the Commercial World (http://www.developer.com/design/article.php/3374171) (June 2004)
TOGAF or not TOGAF: Extending Enterprise Architecture beyond RUP (http://www-128.ibm.com/developerworks/rational/library/jan07/temnenco/index.html) (January 2007)
Practical advice: How to bring TOGAF to life (http://togaforblunder.blogspot.com/) (October 2007)
TOGAF Modeling using UML and BPMN (http://www.togaf-modeling.org/) (May 2010)

Federal enterprise architecture


A federal enterprise architecture (FEA) is the enterprise architecture of a federal government. It provides a common methodology for information technology (IT) acquisition, use, and disposal in the federal government.[1] Enterprise architecture (EA) is a management practice for aligning resources to improve business performance and help government agencies better execute their core missions. An EA describes the current and future state of the agency, and lays out a plan for transitioning from the current state to the desired future state. A federal enterprise architecture is a work in progress to achieve these goals.[3]

Structure of the U.S. "Federal Enterprise Architecture Framework" (FEAF) components, presented in 2001.[2]

The U.S. federal enterprise architecture (FEA) is an initiative of the U.S. Office of Management and Budget that aims to comply with the Clinger-Cohen Act and provide a common methodology for IT acquisition in the United States federal government. It is designed to ease sharing of information and resources across federal agencies, reduce costs, and improve citizen services.[4]

History
In September 1999, the Federal CIO Council published the "Federal Enterprise Architecture Framework" (FEAF) Version 1.1 for developing an enterprise architecture (EA) within any federal agency for a system that transcends multiple inter-agency boundaries. It builds on common business practices and designs that cross organizational boundaries, among others the NIST Enterprise Architecture Model. The FEAF provides an enduring standard for developing and documenting architecture descriptions of high-priority areas. It provides guidance in describing architectures for multi-organizational functional segments of the federal government.[2] These federal architectural segments collectively constitute the federal enterprise architecture.
In 2001, the Federal Architecture Working Group (FAWG) was sponsoring the development of enterprise architecture products for the trade and grant federal architecture segments. As shown in the figure, the FEAF partitions a given architecture into business, data, applications, and technology architectures. The overall FEAF framework created at that time (see image) incorporates the first three columns of the Zachman Framework and Spewak's Enterprise Architecture Planning methodology.[2]

Reference models
The FEA is built using an assortment of reference models that develop a common taxonomy and ontology for describing IT resources. These include (see image): the performance reference model, the business reference model, the service component reference model, the data reference model, and the technical reference model.

Federal Enterprise Architecture.[1]


Performance Reference Model (PRM)


The PRM is a standardized framework to measure the performance of major IT investments and their contribution to program performance.[1] The PRM has three main purposes:
1. Help produce enhanced performance information to improve strategic and daily decision-making;
2. Improve the alignment, and better articulate the contribution, of inputs to outputs and outcomes, thereby creating a clear line of sight to desired results; and
3. Identify performance improvement opportunities that span traditional organizational structures and boundaries.

Performance reference model, 2005.[1]

The PRM uses a number of existing approaches to performance measurement, including the Balanced Scorecard, the Baldrige Criteria [5], value measuring methodology, program logic models, the value chain, and the Theory of Constraints. In addition, the PRM was informed by what agencies are currently measuring through PART assessments, GPRA, enterprise architecture, and Capital Planning and Investment Control. The PRM is currently composed of four measurement areas:
Mission and Business Results
Customer Results
Processes and Activities
Technology


Business Reference Model (BRM)


The "FEA business reference model" is a function-driven framework for describing the business operations of the Federal Government independent of the agencies that perform them. This business reference model provides an organized, hierarchical construct for describing the day-to-day business operations of the Federal government using a functionally driven approach. The BRM is the first layer of the Federal Enterprise Architecture and it is the main viewpoint for the analysis of data, service components and [1] technology. The BRM is broken down into four areas: Services For Citizens Mode of Delivery Support Delivery of Services Management of Government Resources

Business Reference Model overview.

[1]

The Business Reference Model provides a framework that facilitates a functional (as opposed to organizational) view of the federal government's lines of business (LoBs), including its internal operations and its services for citizens, independent of the agencies, bureaus and offices that perform them. By describing the federal government around common business areas instead of through a stovepiped, agency-by-agency view, the BRM promotes agency collaboration and serves as the underlying foundation for the FEA and E-Gov strategies.[1] While the BRM does provide an improved way of thinking about government operations, it is only a model; its true utility can only be realized when it is effectively used. The functional approach promoted by the BRM will do little to help accomplish the goals of E-Government if it is not incorporated into EA business architectures and the management processes of all federal agencies and OMB.[1]


Service Component Reference Model (SRM)


The Service Component Reference Model (SRM) is a business- and performance-driven functional framework that classifies service components with respect to how they support business and/or performance objectives.[1] The SRM is intended to support the discovery of government-wide business and application service components in IT investments and assets. The SRM is structured across horizontal and vertical service domains that, independent of the business functions, can provide a leverageable foundation to support the reuse of applications, application capabilities, components, and business services. The SRM establishes the following domains:
Customer Services
Process Automation Services
Business Management Services
Digital Asset Services
Business Analytical Services
Back Office Services
Support Services

Service Component Reference Model.[6]

Each Service Domain is decomposed into Service Types. For example, the three Service Types associated with the Customer Services Domain are: Customer Preferences; Customer Relationship Management; and Customer Initiated Assistance. And each Service Type is decomposed further into components. For example, the four components within the Customer Preferences Service Type include: Personalization; Subscriptions; Alerts and Notifications; and Profile Management.[6]
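Because the SRM is a plain three-level taxonomy (domain, service type, component), it can be sketched as nested mappings. The fragment below is illustrative only: it fills in just the Customer Services branch named above, and the lookup loop simply shows how a named component is located within the taxonomy.

    # Three-level SRM taxonomy: Service Domain -> Service Type -> Components.
    # Only the Customer Services branch described above is populated here.
    srm = {
        "Customer Services": {
            "Customer Preferences": [
                "Personalization", "Subscriptions",
                "Alerts and Notifications", "Profile Management",
            ],
            "Customer Relationship Management": [],
            "Customer Initiated Assistance": [],
        },
        # ... the six remaining domains would follow the same shape.
    }

    # Classifying a capability of an IT investment is then a simple lookup:
    for domain, service_types in srm.items():
        for service_type, components in service_types.items():
            if "Subscriptions" in components:
                print(domain, "->", service_type, "-> Subscriptions")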

Data Reference Model (DRM)


The Data Reference Model (DRM) describes, at an aggregate level, the data and information that support government program and business line operations. This model enables agencies to describe the types of interaction and exchanges that occur between the federal government and citizens.[1] The DRM categorizes government information into greater levels of detail. It also establishes a classification for federal data and identifies duplicative data resources. A common data model will streamline information exchange processes within the federal government and between government and external stakeholders.

The DRM Collaboration Process.[1]

Volume One of the DRM provides a high-level overview of the structure, usage, and data-identification constructs. This document:
provides an introduction and high-level overview of the contents that will be detailed in Volumes 2-4 of the model;
encourages community-of-interest development of the remaining volumes; and
provides the basic concepts, strategy, and structure to be used in future development.
The DRM is the starting point from which data architects should develop modeling standards and concepts. The combined volumes of the DRM support data classification and enable horizontal and vertical information sharing.

Technical Reference Model (TRM)


The TRM is a component-driven, technical framework categorizing the standards and technologies that support and enable the delivery of service components and capabilities. It also unifies existing agency TRMs and E-Gov guidance by providing a foundation to advance the reuse and standardization of technology and service components from a government-wide perspective.[1] The TRM consists of:
Service Areas: represent a technical tier supporting the secure construction, exchange, and delivery of service components. Each Service Area aggregates the standards and technologies into lower-level functional areas. Each Service Area consists of multiple Service Categories and Service Standards. This hierarchy provides the framework to group standards and technologies that directly support the Service Area. (Purple headings)
Service Categories: classify lower levels of technologies and standards with respect to the business or technology function they serve. In turn, each Service Category comprises one or more Service Standards. (Bold-face groupings)
Service Standards: define the standards and technologies that support a Service Category. To support agency mapping into the TRM, many of the Service Standards provide illustrative specifications or technologies as examples. (Plain text)

Technical Reference Model.[1]

The figure on the right provides a high-level depiction of the TRM. Aligning agency capital investments to the TRM leverages a common, standardized vocabulary, allowing interagency discovery, collaboration, and interoperability. Agencies and the federal government will benefit from economies of scale by identifying and reusing the best solutions and technologies to support their business functions, mission, and target architecture. Organized in a hierarchy, the TRM categorizes the standards and technologies that collectively support the secure delivery, exchange, and construction of business and application service components that may be used and leveraged in a component-based or service-oriented architecture.[1]
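The area/category/standard hierarchy supports the same kind of treatment as the SRM sketch earlier; a reverse index makes the "agency mapping into the TRM" described above a dictionary lookup. The entries below are illustrative placeholders, not the official TRM contents.

    # TRM hierarchy: Service Area -> Service Category -> Service Standards.
    # The area, category, and standard entries are placeholders only.
    trm = {
        "Service Access and Delivery": {
            "Access Channels": ["Web Browser", "Wireless / PDA"],
        },
        "Component Framework": {
            "Data Interchange": ["XML"],
        },
    }

    # Build a reverse index so a named standard can be located in the model.
    slot = {standard: (area, category)
            for area, categories in trm.items()
            for category, standards in categories.items()
            for standard in standards}

    print(slot["XML"])  # ('Component Framework', 'Data Interchange')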


FEA Architecture levels


In the FEA, enterprise, segment, and solution architecture provide different business perspectives by varying the level of detail and addressing related but distinct concerns. Just as enterprises are themselves hierarchically organized, so are the different views provided by each type of architecture. The Federal Enterprise Architecture Practice Guidance (2006) has defined three types of architecture:[3] enterprise architecture, segment architecture, and solution architecture.

Federal Enterprise Architecture levels and attributes.[3]

By definition, enterprise architecture (EA) is fundamentally concerned with identifying common or shared assets, whether they are strategies, business processes, investments, data, systems, or technologies. EA is driven by strategy; it helps an agency identify whether its resources are properly aligned to the agency mission and strategic goals and objectives. From an investment perspective, EA is used to drive decisions about the IT investment portfolio as a whole. Consequently, the primary stakeholders of the EA are the senior managers and executives tasked with ensuring the agency fulfills its mission as effectively and efficiently as possible.[3]
By contrast, "segment architecture" defines a simple roadmap for a core mission area, business service, or enterprise service. Segment architecture is driven by business management and delivers products that improve the delivery of services to citizens and agency staff. From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. The primary stakeholders for segment architecture are business owners and managers. Segment architecture is related to EA through three principles:
structure: segment architecture inherits the framework used by the EA, although it may be extended and specialized to meet the specific needs of a core mission area or common or shared service.
reuse: segment architecture reuses important assets defined at the enterprise level, including data, common business processes and investments, and applications and technologies.
alignment: segment architecture aligns with elements defined at the enterprise level, such as business strategies, mandates, standards, and performance measures.[3]
"Solution architecture" defines agency IT assets such as applications or components used to automate and improve individual agency business functions. The scope of a solution architecture is typically limited to a single project and is used to implement all or part of a system or business solution. The primary stakeholders for solution architecture are system users and developers. Solution architecture is commonly related to segment architecture and enterprise architecture through definitions and constraints. For example, segment architecture provides definitions of data or service interfaces used within a core mission area or service, which are accessed by individual solutions. Equally, a solution may be constrained to specific technologies and standards that are defined at the enterprise level.[3]


FEA tools
A number of modeling tools can capture the Federal Enterprise Architecture reference models and align an enterprise architecture against them:
Adaptive Inc.[7]
Future Tech Systems, Inc. Envision VIP[8]
IBM (formerly Telelogic) System Architect
alfabet planningIT[9]
Troux Technologies Architect
iteraplan, an open-source EA tool
OpenText Metastorm ProVision[10]
MEGA International MEGA Suite for Federal Enterprise Architecture[11]

The CIO Council's ET.gov site [12] can be used to identify technical specifications (standards) that are not yet included in the TRM but should be. Those that have been identified thus far can be discovered using the advanced ET.gov search service [13] hosted by IntelligenX [14].

References
[1] FEA Consolidated Reference Model Document, at whitehouse.gov, May 2005. Revised as FEA Consolidated Reference Model Document Version 2.3 (http://www.whitehouse.gov/omb/assets/fea_docs/FEA_CRM_v23_Final_Oct_2007_Revised.pdf), October 2007. Accessed 28 April 2009.
[2] Chief Information Officer Council (2001). A Practical Guide to Federal Enterprise Architecture (http://www.enterprise-architecture.info/Images/Documents/Federal Enterprise Architecture Guide v1a.pdf). Feb 2001.
[3] Federal Enterprise Architecture Program Management Office (2007). FEA Practice Guidance (http://www.whitehouse.gov/sites/default/files/omb/assets/fea_docs/FEA_Practice_Guidance_Nov_2007.pdf).
[4] Overall, the FEA is mandated by a series of federal laws and mandates: GPRA 1993, the Government Performance and Results Act; PRA 1995, the Paperwork Reduction Act; CCA 1996, the Clinger-Cohen Act; GPEA 1998, the Government Paperwork Elimination Act; FISMA 2002, the Federal Information Security Management Act; and E-Gov 2002, the Electronic Government Act. Supplementary OMB circulars have been: A-11, Preparation, Submission and Execution of the Budget; A-130, Management of Federal Information Resources; and A-76, Performance of Commercial Activities.
[5] http://www.nist.gov/baldrige/publications/criteria.cfm
[6] FEA (2005). FEA Records Management Profile, Version 1.0 (http://www.archives.gov/records-mgmt/pdf/rm-profile.pdf). December 15, 2005.
[7] http://www.adaptive.com
[8] Envision VIP (http://www.future-tech.com)
[9] http://www.alfabet.com/en/offering/approach/
[10] Metastorm (http://www.metastorm.com/)
[11] http://www.mega.com/
[12] http://et.gov
[13] http://etgov.i411.com/etgov/websearchservlet?toplevel=true
[14] http://www.intelligenx.com/


External links
e-gov FEA Program Office webpage (http://www.whitehouse.gov/omb/e-gov/fea/)
Federal Enterprise Architecture Institute website (http://www.feacinstitute.org)
Federal Chief Information Officers Council website (http://www.cio.gov)
DoD CIO Enterprise Architecture & Standards (http://cio-nii.defense.gov/policy/eas.shtml)

Operating system
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. Application programs require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows,[3] Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.

Types of operating system


Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.

Multi-user
A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.

Multi-tasking vs. single-tasking
A multi-tasking operating system allows more than one program to be running at a time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions, both Windows NT and Win9x, used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking. (A minimal sketch of pre-emptive scheduling in action appears after this list of types.)

Distributed
Further information: Distributed system
A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system.

Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.
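The pre-emptive style can be observed from user space. The following is a minimal sketch in C with POSIX threads (the thread names and iteration counts are arbitrary choices for illustration): neither thread ever yields explicitly, yet both make progress because the kernel's scheduler time-slices them.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Each worker loops without ever calling a yield primitive; on a
   pre-emptive OS the scheduler interleaves them anyway. */
static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        printf("%s: iteration %d\n", name, i);
        sleep(1); /* block briefly so the interleaving is easy to observe */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread-A");
    pthread_create(&b, NULL, worker, "thread-B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Compiled with cc -pthread, the output from thread-A and thread-B interleaves without either thread cooperating, which is exactly what a purely cooperative system could not guarantee.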


History
Early computers were built to perform a series of single tasks, like a calculator. Operating systems did not exist in their modern and more complex forms until the early 1960s.[4] Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general-purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981). In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period of time and would arrive at a scheduled time with program and data on punched paper cards and/or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.[4] Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and generating computer code from human-readable symbolic code. This was the genesis of the modern-day computer system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job priority.

OS/360 was used on most IBM mainframe computers beginning in 1966, including the computers that helped NASA put a man on the moon.

Mainframes

Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupts, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704, and later the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094. During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system, and applications written for OS/360 can still be run on modern machines. OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When a process is terminated for any reason, all of these resources are re-claimed by the operating system. The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System). Control Data Corporation developed the SCOPE operating system in the 1960s for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language, ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM made an approach to Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers. UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems.
Like all early mainframe systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system. General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed General Comprehensive Operating System (GCOS). Digital Equipment Corporation developed many operating systems for its various computer lines, including the TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities and in the early ARPANET community. In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility were proven to be more significant. The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. The notable supported mainframe operating systems include:
Burroughs MCP: B5000, 1961, to Unisys ClearPath/MCP, present
IBM OS/360: IBM System/360, 1966, to IBM z/OS, present
IBM CP-67: IBM System/360, 1967, to IBM z/VM, present
UNIVAC EXEC 8: UNIVAC 1108, 1967, to OS 2200 on Unisys ClearPath Dorado, present


Microcomputers
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS).

PC DOS was an early personal computer OS that featured a command-line interface.

In the '80s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS. The introduction of the Intel 80386 CPU chip, with 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NEXTSTEP operating system. NEXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X. The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement for the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.



Examples of operating systems


UNIX and UNIX-like operating systems
Ken Thompson wrote B, mainly based on BCPL, which he used to write Unix, based on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system (see History). The UNIX-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group, which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.

Evolution of Unix systems

Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas. Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris Operating System can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's Mac OS X, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD. Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.

BSD and its descendants
A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The world wide web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP. BSD has its roots in Unix. In 1974, the University of California, Berkeley installed its first Unix system.
The first server for the World Wide Web ran on NeXTSTEP, based on BSD.

Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T. Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web. Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD.

OS X
Mac OS X is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997. The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0 "Cheetah") following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being OS X 10.8 "Mountain Lion", which was first made available on February 16, 2012 for developers, and was then released to the public on July 25, 2012. Releases of Mac OS X are named after big cats. The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. In Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version.[5]

Linux and GNU
Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from supercomputers to wristwatches. The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest that Linux is used on 1.82% of all personal computers,[6][7] it has been widely adopted for use in servers[8] and embedded systems[9] (such as cell phones). Linux has superseded Unix in most places, and is used on the 10 most powerful supercomputers in the world.[10] The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android.

Ubuntu, a desktop Linux distribution



The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that is similar to Unix but with completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support and volunteers who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.

Google Chrome OS
Chrome OS is an operating system based on the Linux kernel and designed by Google. Since Chrome OS targets computer users who spend most of their time on the Internet, it is mainly a web browser with no ability to run native applications. It relies on Internet applications (or Web apps) used in the web browser to accomplish tasks such as word processing and media viewing, as well as online storage for storing most files.

Android, a popular mobile operating system using the Linux kernel

Microsoft Windows
Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers.[7][11][12][13] The newest version is Windows 8 for workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP as the most used OS.[14][15][16]

Bootable Windows To Go USB flash drive

Microsoft Windows originated in 1985 as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, which used MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS[17][18] and 16-bit Windows 3.x[19] drivers. Windows Me, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 microprocessors, although Windows 8 will support ARM architecture. In the past, Windows NT supported non-Intel architectures. Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers, as Windows competes against Linux and BSD for server market share.[20][21]



Other
There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMint. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research. Other operating systems have failed to win significant market share, but have introduced innovations that have influenced mainstream operating systems, not least Bell Labs' Plan 9.

Components
The components of an operating system all exist in order to make the different parts of a computer work together. All user software needs to go through the operating system in order to use any of the hardware, whether it be as simple as a mouse or keyboard or as complex as an Internet connection.

Kernel
With the aid of the firmware and device drivers, the kernel provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in the RAM, it determines which programs get access to which hardware resources, it sets up or resets the CPU's operating states for optimal operation at all times, and it organizes the data for long-term non-volatile storage with file systems on such media as disks, tapes, flash memory, etc.

Program execution
A kernel connects the application software to the hardware of a computer.

The operating system provides an interface between an application program and the computer hardware, so that an application program can interact with the hardware only by obeying rules and procedures programmed into the operating system. The operating system is also a set of services which simplify development and execution of application programs. Executing an application program involves the creation of a process by the operating system kernel, which assigns memory space and other resources, establishes a priority for the process in multi-tasking systems, loads program binary code into memory, and initiates execution of the application program, which then interacts with the user and with hardware devices.
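On POSIX systems these kernel services are exposed directly as system calls, and a minimal sketch in C can make the sequence concrete (assuming a Unix-like OS; the echoed message is an arbitrary illustration): fork() asks the kernel to create a new process, execlp() replaces the child's memory image with a freshly loaded program, and waitpid() suspends the parent until the child terminates.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* kernel creates a new process */
    if (pid == 0) {
        /* child: replace this process image with the "echo" program */
        execlp("echo", "echo", "hello from a new process", (char *)NULL);
        perror("execlp");              /* reached only if exec fails */
        _exit(1);
    }
    int status;
    waitpid(pid, &status, 0);          /* parent waits for the child */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}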

Interrupts
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, having the operating system "watch" the various sources of input for events that require action (polling), can be found in older systems with very small stacks (50 or 60 bytes) but is unusual in modern systems with large stacks. Interrupt-based programming is directly supported by most modern CPUs. Interrupts provide a computer with a way of automatically saving local register contexts, and running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when that event takes place. When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program. When a hardware device triggers an interrupt, the operating system's kernel decides how to deal with this event, generally by running some processing code. The amount of code being run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called a device driver, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means. A program may also trigger an interrupt to the operating system. If a program wishes to access hardware, for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.

Modes
Modern CPUs support multiple modes of operation. CPUs with this capability use at least two modes: protected mode and supervisor mode. The supervisor mode is used by the operating system's kernel for low level tasks that need unrestricted access to hardware, such as controlling how memory is written and erased, and communication with devices like graphics cards. Protected mode, in contrast, is used for almost everything else. Applications operate within protected mode, and can only use hardware by communicating with the kernel, which controls everything in supervisor mode. CPUs might have other modes similar to protected mode as well, such as the virtual modes used to emulate older processor types, such as 16-bit processors on a 32-bit one, or 32-bit processors on a 64-bit one.

Privilege rings for the x86, available in protected mode. Operating systems determine which processes run in each mode.

When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer, being the BIOS or EFI, bootloader, and the operating system, have unlimited access to hardware, and this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode. In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory. The term "protected mode resource" generally refers to one or more CPU registers, which contain information that the running program isn't allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
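The transition from protected (user) mode to supervisor mode is ordinarily hidden inside library wrappers such as printf, but it can be made explicit. A minimal sketch, assuming Linux with glibc (the syscall() wrapper and the SYS_write number are Linux-specific):

#include <sys/syscall.h>   /* SYS_write: Linux system call number */
#include <unistd.h>

int main(void) {
    const char msg[] = "entering the kernel via a system call\n";
    /* syscall() executes the trap instruction: the CPU switches to
       supervisor mode, the kernel performs the write on the process's
       behalf, then execution returns to user (protected) mode. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}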


Memory management
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time share, each program must have independent access to memory. Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager, and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system. Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers. In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation, or Seg-V for short; since it is both difficult to assign a meaningful result to such an operation and because it is usually a sign of a misbehaving program, the kernel will generally resort to terminating the offending program, and will report the error. Windows versions 3.1 through Me had some level of memory protection, but programs could easily circumvent the need to use it. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
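This protection can be provoked deliberately. The sketch below, assuming a POSIX system with mmap and mprotect (exact behavior may vary by platform), maps a page, revokes write permission, and then writes to it; the MMU blocks the access and the kernel delivers SIGSEGV to the process:

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* report via an async-signal-safe call, then exit */
    const char msg[] = "caught SIGSEGV: write to read-only page blocked\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    signal(SIGSEGV, on_segv);
    mprotect(p, (size_t)page, PROT_READ);  /* page becomes read-only */
    *(volatile char *)p = 'x';             /* hardware traps; kernel raises SIGSEGV */
    puts("unreachable");
    return 0;
}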


Virtual memory
Further information: Page fault
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks. If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault. When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet. In modern operating systems, memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand. "Virtual memory" provides the programmer or the user with the perception that there is a much larger amount of RAM in the computer than is really there.[22]

Many operating systems can "trick" programs into using memory scattered around the hard disk and RAM as if it is one continuous chunk of memory, called virtual memory.

Multitasking
Further information: Context switch, Preemptive multitasking, and Cooperative multitasking
Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time-sharing, which means that each program uses a share of the computer's time to execute. An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. Later, control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch. An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
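The cooperative model can be imitated in user space. A minimal sketch using the ucontext API found on Unix-like systems (deprecated in newer POSIX editions but still widely available): the task runs only while it holds the CPU, and control returns to main only when the task explicitly yields, just as a cooperative kernel depends on every program's goodwill.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    for (int i = 0; i < 3; i++) {
        printf("task: step %d, yielding voluntarily\n", i);
        swapcontext(&task_ctx, &main_ctx);  /* the cooperative "yield" */
    }
}

int main(void) {
    static char stack[64 * 1024];           /* stack for the second task */
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link = &main_ctx;           /* where to go if task returns */
    makecontext(&task_ctx, task, 0);
    for (int i = 0; i < 3; i++) {
        printf("main: passing control to task\n");
        swapcontext(&main_ctx, &task_ctx);
    }
    puts("main: done");
    return 0;
}

If task() looped forever without calling swapcontext, main would never run again, which is exactly the hang described above.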



Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.) On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well tested programs. The AmigaOS is an exception, having pre-emptive multitasking from its very first version. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).

Disk access and file systems
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.

Filesystems allow users and programs to organize and sort files on a computer, often through the use of directories (or "folders").

Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system. While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX supports a wide array of storage devices, regardless of their design or file systems, allowing them to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers. A connected storage device, such as a hard drive, is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX, this is the language of block devices. When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems.
A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
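The practical payoff of this layering is that one program works on any mounted file system. A minimal sketch, assuming a POSIX system (the directory path is taken from the command line, defaulting to the current directory): names and sizes are listed through the standard opendir/readdir/stat interface, regardless of whether the underlying volume is ext3, NTFS, or a FAT flash drive.

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv) {
    const char *path = argc > 1 ? argv[1] : ".";
    DIR *dir = opendir(path);
    if (!dir) { perror("opendir"); return 1; }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        char full[4096];
        struct stat st;
        snprintf(full, sizeof full, "%s/%s", path, entry->d_name);
        if (stat(full, &st) == 0)  /* same call for every file system */
            printf("%10lld  %s\n", (long long)st.st_size, entry->d_name);
    }
    closedir(dir);
    return 0;
}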


Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend using (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third-party drivers are usually available to give support for the most widely used file systems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through third-party software). Support for file systems is highly varied among modern operating systems, although there are several common file systems which almost all operating systems include support and drivers for. Operating systems vary on file system support and on the disk formats they may be installed on. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and as of Windows Vista, NTFS is the only file system which the operating system can be installed on. It is possible to install Linux onto many types of file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the media it is stored on, whether it is a hard drive, a disc (CD, DVD, ...), a USB flash drive, or even contained within a file located on another file system.

Device drivers
A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and on the other end, the requisite interfaces to the operating system and software applications. It is a specialized hardware-dependent computer program, which is also operating system specific, that enables another program, typically an operating system or applications software package or computer program running under the operating system kernel, to interact transparently with a hardware device, and usually provides the requisite interrupt handling necessary for any asynchronous time-dependent hardware interfacing needs. The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Newer models also are released by manufacturers that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these operating-system-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver will ensure that the device appears to operate as usual from the operating system's point of view.
Under versions of Windows before Vista and versions of Linux before 2.6, all driver execution was co-operative, meaning that if a driver entered an infinite loop it would freeze the system. More recent revisions of these operating systems incorporate kernel preemption, where the kernel interrupts the driver to give it tasks, and then separates itself from the process until it receives a response from the device driver, or gives it more tasks to do.
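Drivers are typically packaged as loadable kernel code rather than ordinary programs. As a hedged illustration only, a do-nothing Linux loadable module has the following shape (it must be built out-of-tree against matching kernel headers, and a real driver would additionally register itself with a device subsystem):

/* hello.c: skeleton of a Linux loadable kernel module */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal module skeleton");

static int __init hello_init(void)   /* runs when the module is loaded */
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)  /* runs when the module is removed */
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);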


Networking
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH, which allows networked users direct access to a computer's command line interface. Client/server networking allows a program on a computer, called a client, to connect via a network to another computer, called a server (a minimal client sketch appears below). Servers offer (or host) various services to other network computers and users. These services are usually provided through ports, or numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel. Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported, such as NFS for file access. Protocols like ESound, or esd, can be easily extended over the network to provide sound from local applications on a remote system's sound hardware.
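At the program level, the client side of this model reduces to a handful of kernel services. A minimal sketch assuming the POSIX sockets API (the loopback address and port 7, the classic echo service, are illustrative; connect will fail unless such a server is actually listening):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);  /* ask the kernel for a TCP endpoint */
    if (fd < 0) { perror("socket"); return 1; }
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                  /* the "echo" service port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    const char msg[] = "hello\n";
    write(fd, msg, sizeof msg - 1);            /* data travels via the kernel's network stack */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}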


Security
A computer being secure depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel. The operating system must be capable of distinguishing between requests which should be allowed to be processed, and others which should not be processed. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication. Often a username must be quoted, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization; the particular services and resources accessible by the requester once logged into a system are tied to either the requester's user account or to the variously configured groups of users to which the requester belongs. (A small sketch after this section shows how a program can query the identity the kernel has attached to it.) In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?"). Internal security, or security from an already running program, is only possible if all possibly harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured. External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select trusted operating systems being considered for the processing, storage and retrieval of sensitive or classified information. Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and be running an insecure service, such as Telnet or FTP, and not have to be threatened by a security breach because the firewall would deny all traffic trying to connect to the service on that port. An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead either emulate a processor or provide a host for a p-code based system such as Java. Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.
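The requester identity described above is attached to every process by the kernel and can be queried directly. A small sketch, assuming a POSIX system:

#include <pwd.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    uid_t uid = getuid();               /* identity established at login */
    struct passwd *pw = getpwuid(uid);  /* resolve it to a user name */
    printf("real uid: %d (%s)\n", (int)uid, pw ? pw->pw_name : "unknown");
    /* the effective uid is what authorization checks actually consult */
    printf("effective uid: %d\n", (int)geteuid());
    return 0;
}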


User interface
Every computer that is to be operated by an individual requires a user interface. The user interface is not actually a part of the operating system; it generally runs in a separate program usually referred to as a shell, but is essential if human interaction is to be supported. The user interface requests services from the operating system that will acquire data from input hardware devices, such as a keyboard, mouse or credit card reader, and requests operating system services to display prompts, status messages and such on output hardware devices, such as a video monitor or printer. The two most common forms of a user interface have historically been the command-line interface, where computer commands are typed out line-by-line, and the graphical user interface, where a visual environment (most commonly a WIMP) is present.

A screenshot of the Bourne Again Shell command line. Each command is typed out after the 'prompt', and then its output appears below, working its way down the screen. The current command prompt is at the bottom.

Graphical user interfaces
Most of the modern computer systems support graphical user interfaces (GUI), and often include them. In some computer systems, such as the original implementation of Mac OS, the GUI is integrated into the kernel. While technically a graphical user interface is not an operating system service, incorporating support for one into the operating system kernel can allow the GUI to be more responsive by reducing the number of context switches required for the GUI to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the operating system. In the 1980s UNIX, VMS and many others had operating systems that were built this way. Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows such as Windows Vista implement a graphics subsystem that is mostly in user-space; however, the graphics drawing routines of versions between Windows NT 4.0 and Windows Server 2003 exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.

A screenshot of the KDE Plasma Desktop graphical user interface. Programs take the form of images on the screen, and the files, folders (directories), and applications take the form of icons and symbols. A mouse is used to navigate the computer.

Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE Plasma Desktop is a commonly found setup on most Unix and Unix-like (BSD, Linux, Solaris) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows. Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, though an effort to standardize in the 1990s on COSE and CDE failed for various reasons, and they were eventually eclipsed by the widespread adoption of GNOME and K Desktop Environment. Prior to free software-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed). Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 1999.[23]


Real-time operating systems


A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems. An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System. Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, PikeOS, eCos, QNX, MontaVista Linux and RTLinux. Windows CE is a real-time operating system that shares similar APIs to desktop Windows but shares none of desktop Windows' codebase. Symbian OS also has an RTOS kernel (EKA2) starting with version 8.0b. Some embedded systems use operating systems such as Palm OS, BSD, and Linux, although such operating systems do not support real-time computing.

Operating system development as a hobby


Operating system development is one of the most complicated activities in which a computing hobbyist may engage. A hobby operating system may be classified as one whose code has not been directly derived from an existing operating system and that has few users and active developers.[24] In some cases, hobby development is in support of a "homebrew" computing device, for example, a simple single-board computer powered by a 6502 microprocessor. Alternatively, development may be for an architecture already in widespread use. Operating system development may come from entirely new concepts, or may commence by modeling an existing operating system. In either case, the hobbyist is his or her own developer, or may interact with a small and sometimes unstructured group of individuals who have like interests. Examples of hobby operating systems include ReactOS and Syllable.



Diversity of operating systems and portability


Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, the meaning of arguments, etc.), requiring the application to be adapted, changed, or otherwise maintained. This cost of supporting operating-system diversity can be avoided by instead writing applications against software platforms such as Java or Qt. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries. Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
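As a minimal illustration of the platform-abstraction idea, the hypothetical Python sketch below resolves a per-user configuration path through the standard library, so the same code runs unchanged on Windows and Unix-like systems; the application name and layout conventions are invented for the example.

    # Hypothetical sketch: the standard library absorbs OS differences so the
    # application itself needs no porting.
    import os
    import platform
    from pathlib import Path

    def config_path(app_name: str) -> Path:
        """Return a per-user configuration path without hard-coding OS conventions."""
        home = Path.home()  # resolved differently on Windows, macOS, and Unix
        if platform.system() == "Windows":
            base = Path(os.environ.get("APPDATA", home))
        else:
            base = home / ".config"  # a common Unix convention
        return base / app_name / "settings.ini"

    print(config_path("demo-app"))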

References
[1] Stallings (2005). Operating Systems, Internals and Design Principles. Pearson: Prentice Hall. p. 6.
[2] Dhotre, I.A. (2009). Operating Systems. Technical Publications. p. 1.
[3] "Operating System Market Share" (http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=10). Net Applications.
[4] Hansen, Per Brinch, ed. (2001). Classic Operating Systems (http://books.google.com/?id=-PDPBvIPYBkC&lpg=PP1&pg=PP1#v=onepage&q). Springer. pp. 4–7. ISBN 0-387-95113-X.
[5] "OS X Mountain Lion - Move your Mac even further ahead" (http://www.apple.com/macosx/lion/). Apple. Retrieved 2012-08-07.
[6] Usage share of operating systems
[7] "Top 5 Operating Systems from January to April 2011" (http://gs.statcounter.com/#os-ww-monthly-201101-201104-bar). StatCounter. October 2009. Retrieved November 5, 2009.
[8] "IDC report into Server market share" (http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22360110&sectionId=null&elementId=null&pageType=SYNOPSIS). Idc.com. Retrieved 2012-08-07.
[9] Linux still top embedded OS (http://www.linuxdevices.com/news/NS4920597981.html)
[10] Tom Jermoluk (2012-08-03). "TOP500 List November 2010 (1-100) | TOP500 Supercomputing Sites" (http://www.top500.org/list/2010/11/100). Top500.org. Retrieved 2012-08-07.
[11] "Global Web Stats" (http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=8). Net Market Share, Net Applications. May 2011. Retrieved 2011-05-07.
[12] "Global Web Stats" (http://www.w3counter.com/globalstats.php). W3Counter, Awio Web Services. September 2009. Retrieved 2009-10-24.
[13] "Operating System Market Share" (http://marketshare.hitslink.com/operating-system-market-share.aspx?qprid=8). Net Applications. October 2009. Retrieved November 5, 2009.
[14] "w3schools.com OS Platform Statistics" (http://www.w3schools.com/browsers/browsers_os.asp). Retrieved October 30, 2011.
[15] "Stats Count Global Stats Top Five Operating Systems" (http://gs.statcounter.com/#os-ww-monthly-201010-201110). Retrieved October 30, 2011.
[16] "Global statistics at w3counter.com" (http://www.w3counter.com/globalstats.php). Retrieved 23 January 2012.
[17] "Troubleshooting MS-DOS Compatibility Mode on Hard Disks" (http://support.microsoft.com/kb/130179/EN-US). Support.microsoft.com. Retrieved 2012-08-07.
[18] "Using NDIS 2 PCMCIA Network Card Drivers in Windows 95" (http://support.microsoft.com/kb/134748/en). Support.microsoft.com. Retrieved 2012-08-07.
[19] "INFO: Windows 95 Multimedia Wave Device Drivers Must be 16 bit" (http://support.microsoft.com/kb/163354/en). Support.microsoft.com. Retrieved 2012-08-07.
[20] "Operating System Share by Groups for Sites in All Locations January 2009" (http://news.netcraft.com/SSL-Survey/CMatch/osdv_all).
[21] "Behind the IDC data: Windows still No. 1 in server operating systems" (http://blogs.zdnet.com/microsoft/?p=5408). ZDNet. 2010-02-26.
[22] Stallings, William (2008). Computer Organization & Architecture. New Delhi: Prentice-Hall of India Private Limited. p. 267. ISBN 978-81-203-2962-1.
[23] Poisson, Ken. "Chronology of Personal Computer Software" (http://www.islandnet.com/~kpolsson/compsoft/soft1998.htm). Retrieved on 2008-05-07. Last checked on 2009-03-30.
[24] "My OS is less hobby than yours" (http://www.osnews.com/story/22638/My_OS_Is_Less_Hobby_than_Yours). OSNews. December 21, 2009. Retrieved December 21, 2009.

Windows to surpass Android by 2015 (http://www.greatphone.co.uk/594/windows-to-surpass-android-by-2015/)



Further reading
Auslander, Marc A.; Larkin, David C.; Scherr, Allan L. (1981). "The evolution of the MVS Operating System" (http://www.research.ibm.com/journal/rd/255/auslander.pdf). IBM J. Research & Development.
Deitel, Harvey M.; Deitel, Paul; Choffnes, David. Operating Systems. Pearson/Prentice Hall. ISBN 978-0-13-092641-8.
Bic, Lubomur F.; Shaw, Alan C. (2003). Operating Systems. Pearson: Prentice Hall.
Silberschatz, Avi; Galvin, Peter; Gagne, Greg (2008). Operating Systems Concepts. John Wiley & Sons. ISBN 0-470-12872-0.

External links
Operating Systems (http://www.dmoz.org/Computers/Software/Operating_Systems/) at the Open Directory Project
Multics History (http://www.cbi.umn.edu/iterations/haigh.html) and the history of operating systems
How Stuff Works - Operating Systems (http://computer.howstuffworks.com/operating-system.htm)
Help finding your Operating System type and version (http://whatsmyos.com)

OSI model
The Open Systems Interconnection (OSI) model (ISO/IEC 7498-1) is a product of the Open Systems Interconnection effort at the International Organization for Standardization. It prescribes a way of characterizing and standardizing the functions of a communications system in terms of abstraction layers. Similar communication functions are grouped into logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.

History
Communication in the OSI-Model (example with layers 3 to 5)

Work on a layered model of network architecture started in the late 1970s, and the International Organization for Standardization (ISO) began to develop its OSI framework architecture. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The concept of a seven-layer model was provided by the work of Charles Bachman, Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, the fledgling Internet, NPLNET, EIN, the CYCLADES network and the work in IFIP WG6.1. The new design was documented in ISO 7498 and its various addenda. In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it. Protocols enabled an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions abstractly described the functionality provided to an (N)-layer by an (N-1) layer, where N was one of the seven layers of protocols operating in the local host.
The OSI standards documents are available from the ITU-T as the X.200-series of recommendations.[1] Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO, but only some of them without fees.[2]


Description of OSI layers


According to recommendation X.200, there are seven layers, labeled 1 to 7, with layer 1 at the bottom. Each layer is generically known as an N layer. An "N+1 entity" (at layer N+1) requests services from an "N entity" (at layer N). At each level, two entities (N-entity peers) interact by means of the N protocol by transmitting protocol data units (PDU). A Service Data Unit (SDU) is a specific unit of data that has been passed down from an OSI layer to a lower layer, and which the lower layer has not yet encapsulated into a protocol data unit (PDU). An SDU is a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user. The PDU at layer N is the SDU of layer N-1. In effect, the SDU is the 'payload' of a given PDU. That is, the process of changing an SDU into a PDU consists of an encapsulation process, performed by the lower layer. All the data contained in the SDU becomes encapsulated within the PDU. Layer N-1 adds headers or footers, or both, to the SDU, transforming it into a PDU of layer N-1. The added headers or footers are part of the process used to make it possible to get data from a source to a destination.
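To make the encapsulation step concrete, here is a small illustrative Python sketch; the layer names and header format are invented for the example and are not taken from any OSI specification. Each layer treats the PDU handed down from above as its SDU and prepends its own header, yielding its own PDU.

    # Illustrative sketch of SDU -> PDU encapsulation across layers.
    def encapsulate(sdu: bytes, layer_name: str) -> bytes:
        header = f"[{layer_name}-hdr]".encode()
        return header + sdu  # header + SDU = this layer's PDU

    payload = b"application data"  # the layer 7 PDU
    pdu = payload
    for layer in ["presentation", "session", "transport", "network", "data-link"]:
        pdu = encapsulate(pdu, layer)  # the PDU of layer N becomes the SDU of layer N-1
    print(pdu)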
OSI Model

              Data unit        Layer            Function
Host layers   Data             7. Application   Network process to application
              Data             6. Presentation  Data representation, encryption and decryption; convert machine-dependent data to machine-independent data
              Data             5. Session       Interhost communication, managing sessions between applications
              Segments         4. Transport     End-to-end connections, reliability and flow control
Media layers  Packet/Datagram  3. Network       Path determination and logical addressing
              Frame            2. Data link     Physical addressing
              Bit              1. Physical      Media, signal and binary transmission

Some orthogonal aspects, such as management and security, involve every layer. Security services are not tied to a specific layer: they can be provided by a number of layers, as defined by the ITU-T X.800 Recommendation.[3] These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of transmitted data. In practice, the availability of a communication service is determined by network design and/or network management protocols. Appropriate choices for these are needed to protect against denial of service.



Layer 1: physical layer


The physical layer defines electrical and physical specifications for devices. In particular, it defines the relationship between a device and a transmission medium, such as a copper or fiber-optic cable. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing, hubs, repeaters, network adapters, host bus adapters (HBAs, used in storage area networks) and more. The major functions and services performed by the physical layer are:
1. Establishment and termination of a connection to a communications medium.
2. Participation in the process whereby the communication resources are effectively shared among multiple users, for example, contention resolution and flow control.
3. Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.
Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI protocol is a transport-layer protocol that runs over this bus. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The same applies to other local-area networks, such as token ring, FDDI, ITU-T G.hn and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Layer 2: data link layer


The data link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multi-access media, was developed independently of the ISO work, in IEEE Project 802. IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not flow control using a sliding window, is present in data link protocols such as Point-to-Point Protocol (PPP). On local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment is instead used at the transport layer by protocols such as TCP, but is still used at the data link layer in niches where X.25 offers performance advantages.
The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer which provides both error correction and flow control by means of a selective-repeat sliding-window protocol.
Both WAN and LAN services arrange bits from the physical layer into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the layer.
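The error-detection role described above can be sketched in a few lines of Python. This is a hedged illustration only: real data link protocols such as Ethernet compute CRC-32 over precisely defined fields, whereas the frame layout here is invented for the example.

    # Sketch of frame-level error detection: a frame carries a frame check
    # sequence (CRC-32, via the standard zlib module) that the receiver
    # recomputes; a mismatch means the frame was corrupted and is discarded.
    import zlib

    def make_frame(payload: bytes) -> bytes:
        fcs = zlib.crc32(payload).to_bytes(4, "big")
        return payload + fcs

    def check_frame(frame: bytes) -> bool:
        payload, fcs = frame[:-4], frame[-4:]
        return zlib.crc32(payload).to_bytes(4, "big") == fcs

    frame = make_frame(b"hello")
    assert check_frame(frame)             # intact frame passes
    corrupted = b"jello" + frame[5:]      # first byte altered in transit
    assert not check_frame(corrupted)     # error detected; frame would be dropped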

WAN protocol architecture
Connection-oriented WAN data link protocols, in addition to framing, detect and may correct errors. They are also capable of controlling the rate of transmission. A WAN data link layer might implement a sliding window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and HDLC, and derivatives of HDLC such as LAPB and LAPD.

IEEE 802 LAN architecture
Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium, which is the function of a media access control (MAC) sublayer. Above this MAC sublayer is the media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with addressing and multiplexing on multi-access media. While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN protocol, obsolete MAC layers include Token Ring and FDDI. The MAC sublayer detects but does not correct errors.


Layer 3: network layer


The network layer provides the functional and procedural means of transferring variable-length data sequences from a source host on one network to a destination host on a different network (in contrast to the data link layer, which connects hosts within the same network), while maintaining the quality of service requested by the transport layer. The network layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme; values are chosen by the network engineer. The addressing scheme is not hierarchical.
The network layer may be divided into three sublayers:
1. Subnetwork access, which considers protocols that deal with the interface to networks, such as X.25;
2. Subnetwork-dependent convergence, when it is necessary to bring the level of a transit network up to the level of networks on either side;
3. Subnetwork-independent convergence, which handles transfer across multiple networks.
An example of this latter case is CLNP (ISO/IEC 8473). It manages the connectionless transfer of data one hop at a time, from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of erroneous packets so they may be discarded. In this scheme, IPv4 and IPv6 would have to be classed with X.25 as subnet access protocols because they carry interface addresses rather than node addresses.
A number of layer-management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.
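Path determination at this layer can be illustrated with a minimal longest-prefix-match lookup, sketched below in Python with an invented routing table; production routers use far more elaborate data structures and protocols.

    # Sketch of layer 3 path determination: pick the most specific
    # (longest-prefix) route that matches the destination address.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "eth0",
        ipaddress.ip_network("10.1.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "uplink",  # default route
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routes if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return routes[best]

    print(next_hop("10.1.2.3"))   # eth1 (more specific than 10.0.0.0/8)
    print(next_hop("192.0.2.1"))  # uplink (only the default route matches)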

Layer 4: transport layer


The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred. OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the least features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0

contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0-TP4 classes are shown in the following table:[4]
Feature                                                                     TP0  TP1  TP2  TP3  TP4
Connection-oriented network                                                 Yes  Yes  Yes  Yes  Yes
Connectionless network                                                      No   No   No   No   Yes
Concatenation and separation                                                No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                                 Yes  Yes  Yes  Yes  Yes
Error recovery                                                              No   Yes  Yes  Yes  Yes
Reinitiate connection (if an excessive number of PDUs are unacknowledged)   No   Yes  No   Yes  No
Multiplexing and demultiplexing over a single virtual circuit               No   No   Yes  Yes  Yes
Explicit flow control                                                       No   No   Yes  Yes  Yes
Retransmission on timeout                                                   No   No   No   No   Yes
Reliable transport service                                                  No   Yes  No   Yes  Yes
An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Remember, however, that a post office manages only the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.
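The segmentation and reassembly performed at this layer can be sketched as a Python toy; this is an illustration only, omitting the acknowledgments, retransmission timers and flow control that a real transport protocol adds on top.

    # Sketch of transport-layer segmentation and reassembly: the sender splits a
    # byte stream into numbered segments; the receiver reorders by sequence
    # number and reassembles the original stream.
    import random

    def segment(data: bytes, mss: int):
        return [(seq, data[i:i + mss])
                for seq, i in enumerate(range(0, len(data), mss))]

    def reassemble(segments):
        return b"".join(part for _, part in sorted(segments))

    segs = segment(b"transport layer payload", mss=8)
    random.shuffle(segs)  # the network may deliver packets out of order
    assert reassemble(segs) == b"transport layer payload"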

Layer 5: session layer


The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. At this level, inter-process communication happens (SIGHUP, SIGKILL, End Process, etc.).



Layer 6: presentation layer


The presentation layer establishes context between application-layer entities, in which the higher-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units, and passed down the stack. This layer provides independence from data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats and encrypts data to be sent across a network. It is sometimes called the syntax layer.[5] The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.
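As a small illustration of a presentation-layer style conversion, the following Python sketch uses the standard codec machinery (cp500 is one common EBCDIC code page): the byte representation changes while the text it encodes is preserved. This is an analogy for the layer's job, not how OSI presentation protocols are actually implemented.

    # The same text in two wire representations; meaning is preserved.
    text = "Acme Gizmo"
    ebcdic = text.encode("cp500")  # EBCDIC encoding of the string
    ascii_ = text.encode("ascii")  # ASCII encoding of the same string
    assert ebcdic != ascii_        # different byte sequences...
    assert ebcdic.decode("cp500") == ascii_.decode("ascii")  # ...same meaning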

Layer 7: application layer


The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application-layer implementations include:
On the OSI stack: FTAM (File Transfer, Access and Management), X.400 Mail, Common Management Information Protocol (CMIP).
On the TCP/IP stack: Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Simple Network Management Protocol (SNMP).

Cross-layer functions
There are some functions or services that are not tied to a given layer but can affect more than one layer. Examples include the following:
1. Security service (telecommunication),[3] as defined by the ITU-T X.800 Recommendation.
2. Management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities. There is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); they need to interact with every layer in order to deal with their instances.
3. Multiprotocol Label Switching (MPLS), which operates at an OSI-model layer that is generally considered to lie between traditional definitions of layer 2 (data link layer) and layer 3 (network layer), and thus is often referred to as a "layer-2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.

4. ARP, which is used to translate IPv4 addresses (OSI layer 3) into Ethernet MAC addresses (OSI layer 2).


Interfaces
Neither the OSI Reference Model nor the OSI protocols specify any programming interfaces, other than deliberately abstract service specifications. Protocol specifications precisely define the interfaces between different computers, but the software interfaces inside computers, known as network sockets, are implementation-specific. For example, Microsoft Windows' Winsock, and Unix's Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layer 5 and above) and the transport layer (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3). Interface standards, except for the physical layer to media, are approximate implementations of OSI service specifications.
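The following minimal Python sketch illustrates the socket interface between an application and the transport layer in the Berkeley-sockets style mentioned above; the host, port and request are placeholders.

    # The application hands bytes to TCP through the socket interface; every
    # layer below the transport layer is invisible to it.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(1024)  # bytes delivered back up through the stack
    print(reply.splitlines()[0])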

Examples
Protocol examples by layer and protocol suite:

Layer 7 Application
  OSI protocols: FTAM, X.400, X.500, DAP, ROSE, RTSE, ACSE,[7] CMIP[8]
  TCP/IP protocols: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, DHCP, SMPP, SMTP, SNMP, Telnet, RIP, BGP
  Signaling System 7:[6] INAP, MAP, TCAP, ISUP, TUP
  AppleTalk: AFP, ZIP, RTMP, NBP
  IPX: RIP, SAP
  SNA: APPC
  Misc. examples: HL7, Modbus

Layer 6 Presentation
  OSI protocols: ISO/IEC 8823, X.226, ISO/IEC 9576-1, X.236
  TCP/IP protocols: MIME, SSL, TLS, XDR
  AppleTalk: AFP
  Misc. examples: TDI, ASCII, EBCDIC, MIDI, MPEG

Layer 5 Session
  OSI protocols: ISO/IEC 8327, X.225, ISO/IEC 9548-1, X.235
  TCP/IP protocols: Sockets; session establishment in TCP, RTP
  AppleTalk: ASP, ADSP, PAP
  IPX: NWLink
  SNA: DLC?
  Misc. examples: Named pipes, NetBIOS, SAP, half duplex, full duplex, simplex, RPC, SOCKS

Layer 4 Transport
  OSI protocols: ISO/IEC 8073, TP0, TP1, TP2, TP3, TP4 (X.224), ISO/IEC 8602, X.234
  TCP/IP protocols: TCP, UDP, SCTP, DCCP
  AppleTalk: DDP
  IPX: SPX
  Misc. examples: NBF

Layer 3 Network
  OSI protocols: ISO/IEC 8208, X.25 (PLP), ISO/IEC 8878, X.223, ISO/IEC 8473-1, CLNP, X.233
  TCP/IP protocols: IP, IPsec, ICMP, IGMP, OSPF
  Signaling System 7: SCCP, MTP
  AppleTalk: ATP (TokenTalk or EtherTalk)
  IPX: IPX
  UMTS: RRC (Radio Resource Control) and BMC (Broadcast/Multicast Control)
  Misc. examples: NBF, Q.931, NDP, ARP (maps layer 3 to layer 2 address), IS-IS

Layer 2 Data Link
  OSI protocols: ISO/IEC 7666, X.25 (LAPB), Token Bus, X.222, ISO/IEC 8802-2 LLC Type 1 and 2[9]
  TCP/IP protocols: PPP, SBTV, SLIP, PPTP
  Signaling System 7: MTP, Q.710
  AppleTalk: LocalTalk, AppleTalk Remote Access, PPP
  IPX: IEEE 802.3 framing, Ethernet II framing
  SNA: SDLC
  UMTS: Packet Data Convergence Protocol (PDCP),[10] LLC (Logical Link Control), MAC (Media Access Control)
  Misc. examples: 802.3 (Ethernet), 802.11a/b/g/n MAC/LLC, 802.1Q (VLAN), ATM, HDP, FDDI, Fibre Channel, Frame Relay, HDLC, ISL, PPP, Q.921, Token Ring, CDP, ITU-T G.hn DLL, CRC, bit stuffing, ARQ, Data Over Cable Service Interface Specification (DOCSIS), interface bonding

Layer 1 Physical
  OSI protocols: X.25 (X.21bis, EIA/TIA-232, EIA/TIA-449, EIA-530, G.703)[9]
  Signaling System 7: MTP, Q.710
  AppleTalk: RS-232, RS-422, STP, PhoneNet
  SNA: Twinax
  UMTS: UMTS physical layer (L1)
  Misc. examples: RS-232, full duplex, RJ45, V.35, V.34, I.430, I.431, T1, E1, 10BASE-T, 100BASE-TX, 1000BASE-T, POTS, SONET, SDH, DSL, 802.11a/b/g/n PHY, ITU-T G.hn PHY, Controller Area Network, Data Over Cable Service Interface Specification (DOCSIS)
Comparison with TCP/IP model


In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model; RFC 3439[11] even contains a section entitled "Layering considered harmful".[12] However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols, namely the scope of the software application, the end-to-end transport connection, the internetworking range, and the scope of the direct links to other nodes on the local network.
Even though the concept is different from the OSI model, these layers are nevertheless often compared with the OSI layering scheme in the following way: The Internet application layer includes the OSI application layer, presentation layer, and most of the session layer. Its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer. The internetworking layer (Internet layer) is a subset of the OSI network layer (see above), while the link layer includes the OSI data link and physical layers, as well as parts of OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document.
The presumably strict peer layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol may well be a transport or even an application-layer protocol in its own right.



References
[1] ITU-T X-Series Recommendations (http://www.itu.int/rec/T-REC-X/en)
[2] "Publicly Available Standards" (http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html). Standards.iso.org. 2010-07-30. Retrieved 2010-09-11.
[3] X.800: Security architecture for Open Systems Interconnection for CCITT applications (http://www.itu.int/rec/T-REC-X.800-199103-I/e)
[4] "ITU-T Recommendation X.224 (11/1995) ISO/IEC 8073" (http://www.itu.int/rec/T-REC-X.224-199511-I/en/).
[5] Grigonis, Richard (2000). Computer telephony encyclopedia (http://books.google.com/books?id=cUYk0ZhOxpEC&printsec=frontcover&dq=computer+telephony+encyclopedia&ct=result#v=onepage&q&f=false). CMP. p. 331.
[6] ITU-T Recommendation Q.1400 (03/1993) (http://www.itu.int/rec/T-REC-Q.1400/en/), Architecture framework for the development of signaling and OA&M protocols using OSI concepts, pp. 4, 7.
[7] ITU Rec. X.227 (ISO 8650), X.217 (ISO 8649)
[8] X.700 series of recommendations from the ITU-T (in particular X.711), and ISO 9596
[9] Cisco Systems, Inc., Internetworking Technology Handbook: OSI Model Physical layer (http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Intro-to-Internet.html#wp1020669)
[10] 3GPP TS 36.300: E-UTRA and E-UTRAN Overall Description, Stage 2, Release 11 (http://www.3gpp.org/ftp/Specs/html-info/36300.htm)
[11] RFC 3439
[12] http://tools.ietf.org/html/rfc3439#section-3

External links
ISO/IEC standard 7498-1:1994 (http://standards.iso.org/ittf/PubliclyAvailableStandards/s020269_ISO_IEC_7498-1_1994(E).zip) (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement)
ITU-T X.200 (the same contents as from ISO) (http://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-X.200-199407-I!!PDF-E&type=items)
The ISO OSI Reference Model, Beluga graph of data units and groups of layers (http://infchg.appspot.com/usr?at=1263939371)
Zimmermann, Hubert (April 1980). "OSI Reference Model: The ISO Model of Architecture for Open Systems Interconnection". IEEE Transactions on Communications 28 (4): 425-432. CiteSeerX: 10.1.1.136.9497 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.136.9497).
Cisco Systems Internetworking Technology Handbook (http://docwiki.cisco.com/wiki/Internetworking_Technology_Handbook)
Collection of animations and videos concerning computer networks (http://www.khurramtanvir.com/cs460demos.php)



Virtual private network


A virtual private network (VPN) is a technology for using the Internet or another intermediate network to connect computers to isolated remote computer networks that would otherwise be inaccessible. A VPN provides varying levels of security so that traffic sent through the VPN connection stays isolated from other computers on the intermediate network, either through the use of a dedicated connection from one "end" of the VPN to the other, or through encryption. VPNs can connect individual users to a remote network or connect multiple networks together. For example, users may use a VPN to connect to their work computer terminal from home and access their email, files, images, etc.
VPN Connectivity overview

Through VPNs, users are able to access resources on remote networks, such as files, printers, databases, or internal websites. VPN remote users get the impression of being directly connected to the central network via a point-to-point link.[1]

History and status


Early networks allowed VPN-style remote connectivity through dial-up modems or through leased lines.[2] Virtual private networks existed for many years in the form of private networks using frame relay. IP-VPNs have become more prevalent due to significant cost-reductions, increased bandwidth, convenience and security.

Types of VPN
VPNs can be either remote-access (connecting an individual computer to a network) or site-to-site (connecting two networks together). In a corporate setting, remote-access VPNs allow employees to access their company's intranet from home or while traveling outside the office, and site-to-site VPNs allow employees in geographically separated offices to share one cohesive virtual network. A VPN can also be used to interconnect two similar networks over a dissimilar middle network; for example, two IPv6 networks over an IPv4 network.[3]
VPN systems can be classified by:
1. the protocols used to tunnel the traffic
2. the tunnel's termination point, i.e., customer edge or network-provider edge
3. whether they offer site-to-site or remote-access connectivity
4. the levels of security provided
5. the OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity



Security mechanisms
VPNs typically require remote access to be authenticated and make use of encryption techniques to prevent disclosure of private information. VPNs provide security through tunneling protocols and security procedures[4] such as encryption. Their security model provides:
1. confidentiality, such that even if traffic is sniffed, an attacker would see only encrypted data that he or she cannot understand;
2. sender authentication, to prevent unauthorized users from accessing the VPN;
3. message integrity, to detect any instances of transmitted messages having been tampered with.
Secure VPN protocols include the following:
IPsec (Internet Protocol Security) was developed by the Internet Engineering Task Force (IETF), initially for IPv6, which requires it. This standards-based security protocol is also widely used with IPv4, and Layer 2 Tunneling Protocol frequently runs over it. Its design meets most security goals: authentication, integrity, and confidentiality. IPsec functions by encrypting and encapsulating an IP packet inside an IPsec packet. De-encapsulation happens at the end of the tunnel, where the original IP packet is decrypted and forwarded to its intended destination.
Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic, as it does in the OpenVPN project, or secure an individual connection. A number of vendors provide remote-access VPN capabilities through SSL. An SSL VPN can connect from locations where IPsec runs into trouble with Network Address Translation and firewall rules.
Datagram Transport Layer Security (DTLS) is used in Cisco AnyConnect VPN to solve the issues SSL/TLS has with tunneling over UDP.
Microsoft Point-to-Point Encryption (MPPE) works with the Point-to-Point Tunneling Protocol and in several compatible implementations on other platforms.
Microsoft's Secure Socket Tunneling Protocol (SSTP), introduced in Windows Server 2008 and in Windows Vista Service Pack 1, tunnels Point-to-Point Protocol (PPP) or Layer 2 Tunneling Protocol traffic through an SSL 3.0 channel.
MPVPN (Multi Path Virtual Private Network). Ragula Systems Development Company owns the registered trademark "MPVPN".[5]
Secure Shell (SSH) VPN: OpenSSH offers VPN tunneling (distinct from port forwarding) to secure remote connections to a network or inter-network links. The OpenSSH server provides a limited number of concurrent tunnels, and the VPN feature itself does not support personal authentication.[6][7][8]
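To illustrate the TLS case in the list above, the sketch below wraps an ordinary TCP socket in an encrypted channel using Python's standard ssl module; the host name is a placeholder, and a real SSL VPN tunnels whole networks rather than a single connection.

    # Securing a single connection with TLS using the standard library.
    import socket
    import ssl

    context = ssl.create_default_context()  # validates server certificates by default
    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'; traffic on this socket is now encrypted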

Authentication
Tunnel endpoints must authenticate before secure VPN tunnels can be established. User-created remote access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods. Network-to-network tunnels often use passwords or digital certificates, as they permanently store the key to allow the tunnel to establish automatically and without intervention from the user.
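A pre-shared-key challenge-response exchange, one of the options mentioned above, might be sketched as follows; this is an illustrative outline (key, challenge length and hash chosen arbitrarily), not a description of any particular VPN product's handshake.

    # One endpoint proves knowledge of the shared key by returning an HMAC over
    # a random challenge, without ever sending the key itself.
    import hashlib
    import hmac
    import secrets

    psk = b"pre-shared-key-example"  # stored at both tunnel endpoints

    challenge = secrets.token_bytes(16)  # server -> client
    response = hmac.new(psk, challenge, hashlib.sha256).digest()  # client -> server
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)  # endpoint authenticated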



Example use of a VPN Tunnel


The following steps[9] illustrate the principles of a VPN client-server interaction in simple terms. Assume a remote host with public IP address 1.2.3.4 wishes to connect to a server found inside a company network. The server has internal address 192.168.1.10 and is not reachable publicly. Before the client can reach this server, it needs to go through a VPN server / firewall device that has public IP address 5.6.7.8 and an internal address of 192.168.1.1. All data between the client and the server will need to be kept confidential, hence a secure VPN is used.
1. The VPN client connects to a VPN server via an external network interface.
2. The VPN server assigns an IP address to the VPN client from the VPN server's subnet. The client gets internal IP address 192.168.1.50, for example, and creates a virtual network interface through which it will send encrypted packets to the other tunnel endpoint (the device at the other end of the tunnel).[10] (This interface also gets the address 192.168.1.50.)
3. When the VPN client wishes to communicate with the company server, it prepares a packet addressed to 192.168.1.10, encrypts it and encapsulates it in an outer VPN packet, say an IPsec packet. This packet is then sent to the VPN server at IP address 5.6.7.8 over the public Internet. The inner packet is encrypted so that even if someone intercepts the packet over the Internet, they cannot get any information from it. They can see that the remote host is communicating with a server/firewall, but none of the contents of the communication. The inner encrypted packet has source address 192.168.1.50 and destination address 192.168.1.10. The outer packet has source address 1.2.3.4 and destination address 5.6.7.8.
4. When the packet reaches the VPN server from the Internet, the VPN server decapsulates the inner packet, decrypts it, finds the destination address to be 192.168.1.10, and forwards it to the intended server at 192.168.1.10.
5. After some time, the VPN server receives a reply packet from 192.168.1.10, intended for 192.168.1.50. The VPN server consults its routing table, and sees this packet is intended for a remote host that must go through VPN.
6. The VPN server encrypts this reply packet, encapsulates it in a VPN packet and sends it out over the Internet. The inner encrypted packet has source address 192.168.1.10 and destination address 192.168.1.50. The outer VPN packet has source address 5.6.7.8 and destination address 1.2.3.4.
7. The remote host receives the packet. The VPN client decapsulates the inner packet, decrypts it, and passes it to the appropriate software at upper layers.
Overall, it is as if the remote computer and company server are on the same 192.168.1.0/24 network.
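The encapsulation in steps 3 and 4 can be mimicked with a toy Python model; the addresses come from the example above, but the packet format and the XOR "cipher" are stand-ins invented for illustration (a real VPN would use IPsec or TLS).

    # Toy model of VPN encapsulation: private addresses inside, public tunnel
    # endpoints outside.
    def xor_cipher(data: bytes, key: int = 0x5A) -> bytes:
        return bytes(b ^ key for b in data)  # placeholder for real encryption

    inner = b"src=192.168.1.50;dst=192.168.1.10;payload=hello"
    outer = b"src=1.2.3.4;dst=5.6.7.8;" + xor_cipher(inner)  # sent over the Internet

    # At the VPN server: strip the outer header, decrypt, then forward to
    # 192.168.1.10 on the internal network.
    received = outer.split(b";", 2)[2]
    assert xor_cipher(received) == inner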

Routing
Tunneling protocols can operate in a point-to-point network topology that would theoretically not be considered a VPN, because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.

PPVPN building-blocks
Depending on whether the PPVPN (Provider Provisioned VPN) runs in layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or a combination of both. Multiprotocol Label Switching (MPLS) functionality blurs the L2-L3 identity. RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced in RFC 2547.[11] More information on the devices below can also be found in Lewis, Cisco Press.[12]

Customer (C) devices
A device that is within a customer's network and not directly connected to the service provider's network. C devices are not aware of the VPN.

Customer Edge device (CE)
A device at the edge of the customer's network which provides access to the PPVPN. Sometimes it is just a demarcation point between provider and customer responsibility. Other providers allow customers to configure it.

Provider edge device (PE)
A PE is a device, or set of devices, at the edge of the provider network which connects to customer networks through CE devices and presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.

Provider device (P)
A P device operates inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example, by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major locations of providers.


User-visible PPVPN services


This section deals with the types of VPN considered in the IETF; some historical names were replaced by these terms.

OSI Layer 1 services


Virtual private wire and private line services (VPWS and VPLS)
In both of these services, the service provider does not offer a full routed or bridged network, but provides components to build customer-administered networks. VPWS are point-to-point while VPLS can be point-to-multipoint. They can be Layer 1 emulated circuits with no data link structure. The customer determines the overall customer VPN service, which also can involve routing, bridging, or host network elements.
An unfortunate acronym confusion can occur between Virtual Private Line Service and Virtual Private LAN Service; the context should make it clear whether "VPLS" means the layer 1 virtual private line service or the layer 2 virtual private LAN service.

OSI Layer 2 services


Virtual LAN
A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains, interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).

Virtual private LAN service (VPLS)
Developed by IEEE, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. Whereas VPLS as described in the above section (OSI Layer 1 services) supports emulation of both point-to-point and point-to-multipoint topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet. As used in this context, a VPLS is a Layer 2 PPVPN, rather than a private line, emulating the full functionality of a traditional local area network (LAN). From a user standpoint, a VPLS makes it possible to interconnect several LAN segments over a packet-switched or optical provider core, a core transparent to the user, making the remote LAN segments behave as one single LAN.[13]

In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.

Pseudo wire (PW)
PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.

IP-only LAN-like service (IPLS)
A subset of VPLS, in which the CE devices must have L3 capabilities; the IPLS presents packets rather than frames. It may support IPv4 or IPv6.


OSI Layer 3 PPVPN architectures


This section discusses the main architectures for PPVPNs: one where the PE disambiguates duplicate addresses in a single routing instance, and the other, the virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.
One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space.[14] The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.

BGP/MPLS PPVPN
In the method defined by RFC 2547, BGP extensions advertise routes in the IPv4 VPN address family, which take the form of 12-byte strings, beginning with an 8-byte Route Distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE. PEs understand the topology of each VPN, which are interconnected with MPLS tunnels, either directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without awareness of VPNs.

Virtual router PPVPN
The virtual router architecture,[15][16] as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By the provisioning of logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label, but do not need routing distinguishers.
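The 12-byte VPN-IPv4 construction described above lends itself to a short sketch; the Python below builds a type-0 Route Distinguisher ahead of an IPv4 address (the ASN and assigned numbers are invented) to show how overlapping customer addresses stay distinct inside the provider network.

    # Sketch: 8-byte Route Distinguisher + 4-byte IPv4 address = 12 bytes.
    import ipaddress
    import struct

    def vpn_ipv4(rd_admin: int, rd_assigned: int, addr: str) -> bytes:
        rd = struct.pack("!HHI", 0, rd_admin, rd_assigned)  # type-0 RD: ASN + number
        return rd + ipaddress.IPv4Address(addr).packed

    a = vpn_ipv4(64512, 1, "10.0.0.1")  # customer A
    b = vpn_ipv4(64512, 2, "10.0.0.1")  # customer B, same private address
    assert len(a) == 12 and a != b      # overlapping addresses disambiguated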

Plaintext tunnels
Some virtual networks may not use encryption to protect the data contents. While VPNs often provide security, an unencrypted overlay network does not neatly fit within the secure or trusted categorization. For example, a tunnel set up between two hosts using Generic Routing Encapsulation (GRE) would in fact be a virtual private network, but neither secure nor trusted. Besides the GRE example above, native plaintext tunneling protocols include Layer 2 Tunneling Protocol (L2TP) when it is set up without IPsec, and Point-to-Point Tunneling Protocol (PPTP) or Microsoft Point-to-Point Encryption (MPPE).



Trusted delivery networks


Trusted VPNs do not use cryptographic tunneling; instead they rely on the security of a single provider's network to protect the traffic. Multi-Protocol Label Switching (MPLS) is often used to overlay VPNs, often with quality-of-service control over a trusted delivery network. Layer 2 Tunneling Protocol (L2TP)[17] is a standards-based replacement, and a compromise taking the good features from each, for two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F)[18] (obsolete as of 2009) and Microsoft's Point-to-Point Tunneling Protocol (PPTP).[19]
From the security standpoint, VPNs either trust the underlying delivery network, or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.

VPNs in mobile environments


Mobile VPNs are used in a setting where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks such as data networks from cellular carriers or between multiple Wi-Fi access points.[20] Mobile VPNs have been widely used in public safety, where they give law enforcement officers access to mission-critical applications, such as computer-assisted dispatch and criminal databases, while they travel between different subnets of a mobile network.[21] They are also used in field service management and by healthcare organizations,[22] among other industries. Increasingly, mobile VPNs are being adopted by mobile professionals and white-collar workers who need reliable connections.[22] They are used for roaming seamlessly across networks and in and out of wireless-coverage areas without losing application sessions or dropping the secure VPN session. A conventional VPN cannot survive such events because the network tunnel is disrupted, causing applications to disconnect, time out,[20] or fail, or even cause the computing device itself to crash.[22] Instead of logically tying the endpoint of the network tunnel to the physical IP address, each tunnel is bound to a permanently associated IP address at the device. The mobile VPN software handles the necessary network authentication and maintains the network sessions in a manner transparent to the application and the user.[20] The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force, is designed to support mobility of hosts by separating the role of IP addresses for host identification from their locator functionality in an IP network. With HIP a mobile host maintains its logical connections established via the host identity identifier while associating with different IP addresses when roaming between access networks.

References
[1] Microsoft Technet. "Virtual Private Networking: An Overview" (http://technet.microsoft.com/en-us/library/bb742566.aspx).
[2] Metz, C. (Jan/Feb 2003). "The latest in virtual private networks: part I". IEEE Internet Computing (IEEE Computer Society) 7 (1): 87-91. doi:10.1109/MIC.2003.1167346. ISSN 1089-7801. "The VPN notion has been around for as long as we've had data networks. Initially, VPNs consisted of privately operated network devices interconnected over a carrier's dial-up or dedicated leased lines."
[3] Technet Lab. "IPv6 traffic over VPN connections" (http://lab.technet.microsoft.com/en-us/magazine/cc138002).
[4] VPN Consortium. "VPN Technologies" (http://www.vpnc.org/vpn-technologies.html).
[5] Trademark Applications and Registrations Retrieval (TARR) (http://tarr.uspto.gov/servlet/tarr?regser=serial&entry=78063238&action=Request+Status)
[6] OpenBSD ssh manual page, VPN section (http://www.openbsd.org/cgi-bin/man.cgi?query=ssh#SSH-BASED+VIRTUAL)
[7] Unix Toolbox section on SSH VPN (http://cb.vu/unixtoolbox.xhtml#vpn)
[8] Ubuntu SSH VPN how-to (https://help.ubuntu.com/community/SSH_VPN)
[9] "VPN - Virtual Private Network and OpenVPN" (http://linuxconfig.org/VPN_-_Virtual_Private_Network_and_OpenVPN).
[10] "TunTap Project" (http://tuntaposx.sourceforge.net/).
[11] E. Rosen & Y. Rekhter (March 1999). "RFC 2547 BGP/MPLS VPNs" (http://www.ietf.org/rfc/rfc2547.txt). Internet Engineering Task Force (IETF).



[12] Lewis, Mark (2006). Comparing, designing, and deploying VPNs (1st print. ed.). Indianapolis, Ind.: Cisco Press. pp. 5-6. ISBN 1587051796.
[13] Ethernet Bridging (OpenVPN) (http://openvpn.net/index.php/access-server/howto-openvpn-as/214-how-to-setup-layer-2-ethernet-bridging.html)
[14] Address Allocation for Private Internets (http://www.ietf.org/rfc/rfc1918.txt), RFC 1918, Y. Rekhter et al., February 1996
[15] RFC 2917, A Core MPLS IP VPN Architecture
[16] RFC 2918, E. Chen (September 2000)
[17] Layer Two Tunneling Protocol "L2TP" (http://www.ietf.org/rfc/rfc2661.txt), RFC 2661, W. Townsley et al., August 1999
[18] IP Based Virtual Private Networks (http://www.ietf.org/rfc/rfc2341.txt), RFC 2341, A. Valencia et al., May 1998
[19] Point-to-Point Tunneling Protocol (PPTP) (http://www.ietf.org/rfc/rfc2637.txt), RFC 2637, K. Hamzeh et al., July 1999
[20] Phifer, Lisa. "Mobile VPN: Closing the Gap" (http://searchmobilecomputing.techtarget.com/tip/0,289483,sid40_gci1210989_mem1,00.html), SearchMobileComputing.com, July 16, 2006.
[21] Willett, Andy. "Solving the Computing Challenges of Mobile Officers" (http://www.officer.com/print/Law-Enforcement-Technology/Solving-the-Computing-Challenges-of-Mobile-Officers/1$30992), www.officer.com, May, 2006.
[22] Cheng, Roger. "Lost Connections" (http://online.wsj.com/article/SB119717610996418467.html), The Wall Street Journal, December 11, 2007.


Further reading
Kelly, Sean (August 2001). "Necessity is the mother of VPN invention" (http://web.archive.org/web/20011217153420/http://www.comnews.com/cgi-bin/arttop.asp?Page=c0801necessity.htm). Communication News: 26-28. ISSN 0010-3632.
"VPN Buyers Guide". Communication News: 34-38. August 2001. ISSN 0010-3632.

External links
JANET UK "Different Flavours of VPN: Technology and Applications" (https://www.ja.net/sites/default/files/Different Flavours of VPN Technology and Applications.pdf)
Virtual Private Network Consortium - a trade association for VPN vendors (http://www.vpnc.org/)
CShip VPN-Wiki/List (http://en.cship.org/wiki/Virtual_Private_Network)
Virtual Private Networks (http://www.microsoft.com/vpn) on Microsoft TechNet
Creating VPNs with IPsec and SSL/TLS (http://www.linuxjournal.com/article/9916) - Linux Journal article by Rami Rosen
curvetun (http://netsniff-ng.org) - a lightweight curve25519-based multiuser IP tunnel / VPN
Using VPN to bypass internet censorship (http://en.flossmanuals.net/bypassing-censorship/ch025_what-is-vpn/) in How to Bypass Internet Censorship (http://www.howtobypassinternetcensorship.org/), a FLOSS Manual, 10 March 2011, 240 pp



Semantic Web
The Semantic Web is a collaborative movement led by the international standards body, the World Wide Web Consortium (W3C).[1] The standard promotes common data formats on the World Wide Web. By encouraging the inclusion of semantic content in web pages, the Semantic Web aims at converting the current web dominated by unstructured and semi-structured documents into a "web of data". The Semantic Web stack builds on the W3C's Resource Description Framework (RDF).[2] According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries."[2] The term was coined by Tim Berners-Lee,[3] the inventor of the World Wide Web and director of the World Wide Web Consortium ("W3C"), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines." While its critics have questioned its feasibility, proponents argue that applications in industry, biology and human sciences research have already proven the validity of the original concept. Scholars have explored the social potential of the semantic web in the business and health sectors, and for social networking.[4] The original 2001 Scientific American article by Berners-Lee described an expected evolution of the existing Web to a Semantic Web,[5] but this has yet to happen. In 2006, Berners-Lee and colleagues stated that: "This simple idea... remains largely unrealized."[6]

History
The concept of the Semantic Network Model was coined in the early sixties by the cognitive scientist Allan M. Collins, linguist M. Ross Quillian and psychologist Elizabeth F. Loftus in various publications,[7][8][9][10][11] as a form for representing semantically structured knowledge. The Semantic Web extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other, enabling automated agents to access the Web more intelligently and perform tasks on behalf of users.
Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. These are used in various contexts, particularly those dealing with information that encompasses a limited and defined domain, and where sharing data is a common necessity, such as scientific research or data exchange among businesses. In addition, other technologies with similar goals have emerged, such as microformats.

Purpose
The main purpose of the Semantic Web is driving the evolution of the current Web by enabling users to find, share, and combine information more easily. Humans are capable of using the Web to carry out tasks such as finding the Irish word for "folder", reserving a library book, and searching for the lowest price for a DVD. However, machines cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The semantic web is a vision of information that can be readily interpreted by machines, so machines can perform more of the tedious work involved in finding, combining, and acting upon information on the web. The Semantic Web, as originally envisioned, is a system that enables machines to "understand" and respond to complex human requests based on their meaning. Such an "understanding" requires that the relevant information sources be semantically structured. Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:[12]

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web: the content, links, and transactions between people and computers. A "Semantic Web", which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.

The Semantic Web is regarded as an integrator across different content, information applications and systems. It has applications in publishing, blogging, and many other areas.

Often the terms "semantics", "metadata", "ontologies" and "Semantic Web" are used inconsistently. In particular, these terms are used as everyday terminology by researchers and practitioners, spanning a vast landscape of different fields, technologies, concepts and application areas. Furthermore, there is confusion with regard to the current status of the enabling technologies envisioned to realize the Semantic Web. In a paper presented by Gerber, Barnard and Van der Merwe,[13] the Semantic Web landscape is charted and a brief summary of related terms and enabling technologies is presented. The architectural model proposed by Tim Berners-Lee is used as a basis to present a status model that reflects current and emerging technologies.[14]


Limitations of HTML
Many files on a typical computer can be loosely divided into human-readable documents and machine-readable data. Documents like mail messages, reports, and brochures are read by humans. Data, like calendars, address books, playlists, and spreadsheets, are presented using an application program which lets them be viewed, searched and combined in different ways.

Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags provide a method by which computers can categorise the content of web pages, for example:

<meta name="keywords" content="computing, computer studies, computer" />
<meta name="description" content="Cheap widgets for sale" />
<meta name="author" content="John Doe" />

With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as "this document's title is 'Widget Superstore'", but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog" or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.

Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of <em> denoting "emphasis" rather than <i>, which specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices. Microformats represent unofficial attempts to extend HTML syntax to create machine-readable semantic markup about objects such as retail stores and items for sale.


Semantic Web solutions


The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts.

These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases,[15] or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.

An example of a tag that would be used in a non-semantic web page:

<item>blog</item>

Encoding similar information in a semantic web page might look like this:
<item rdf:about="http://techmites.com/semantic-web-a-meaningful-search-engine/">blog</item>
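As a hedged sketch of how such machine-readable statements can be produced programmatically (assuming the Python rdflib library; the example.org identifiers and the ex: vocabulary are hypothetical, not a published ontology), the fragment below asserts the Acme Gizmo facts from the catalog example above as RDF triples:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary for the catalog example; not a published ontology.
EX = Namespace("http://example.org/catalog#")

g = Graph()
item = URIRef("http://example.org/items/X586172")

# State the facts that plain HTML cannot express unambiguously:
g.add((item, RDF.type, EX.ConsumerProduct))                       # "this is a consumer product"
g.add((item, EX.title, Literal("Acme Gizmo")))                    # "Acme Gizmo is its title"
g.add((item, EX.priceEUR, Literal("199", datatype=XSD.decimal)))  # "199 is its price"

# Serialize the triples as Turtle so other agents can consume them.
print(g.serialize(format="turtle"))

Serialized this way, the same three facts can be merged with data from any other source that uses compatible identifiers, which is the point of the "web of data".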

Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web. Berners-Lee posits that if the past was document sharing, the future is data sharing. His answer to the question of "how" provides three points of instruction. One, a URL should point to the data. Two, anyone accessing the URL should get data back. Three, relationships in the data should point to additional URLs with data.

Web 3.0
Tim Berners-Lee has described the semantic web as a component of "Web 3.0":[16]

People keep asking what Web 3.0 is. I think maybe when you've got an overlay of scalable vector graphics - everything rippling and folding and looking misty - on Web 2.0 and access to a semantic Web integrated across a huge space of data, you'll have access to an unbelievable data resource ...
Tim Berners-Lee, 2006

"Semantic Web" is sometimes used as a synonym for "Web 3.0", though each term's definition varies.

Challenges
Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency, and deceit. Automated reasoning systems will have to deal with all of these issues in order to deliver on the promise of the Semantic Web.

Vastness: The World Wide Web contains many billions of pages.[17] The SNOMED CT medical terminology ontology alone contains 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms. Any automated reasoning system will have to deal with truly huge inputs.

Vagueness: These are imprecise concepts like "young" or "tall". This arises from the vagueness of user queries, of concepts represented by content providers, of matching query terms to provider terms, and of trying to combine different knowledge bases with overlapping but subtly different concepts. Fuzzy logic is the most common technique for dealing with vagueness; a minimal sketch appears at the end of this section.

Uncertainty: These are precise concepts with uncertain values. For example, a patient might present a set of symptoms which correspond to a number of different distinct diagnoses, each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty.

Inconsistency: These are logical contradictions which will inevitably arise during the development of large ontologies, and when ontologies from separate sources are combined. Deductive reasoning fails catastrophically when faced with inconsistency, because "anything follows from a contradiction". Defeasible reasoning and paraconsistent reasoning are two techniques which can be employed to deal with inconsistency.

Deceit: This is when the producer of the information is intentionally misleading the consumer of the information. Cryptography techniques are currently utilized to alleviate this threat.

This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web (URW3-XG) final report[18] lumps these problems together under the single heading of "uncertainty". Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL), for example to annotate conditional probabilities. This is an area of active research.[19]
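As a minimal sketch of the fuzzy-logic approach to vagueness mentioned above (the thresholds are invented for illustration, not drawn from any Semantic Web standard), a membership function assigns the concept "tall" a degree between 0 and 1 instead of a crisp true/false value:

def tall_membership(height_cm: float) -> float:
    """Degree (0.0 to 1.0) to which a height counts as 'tall'.

    Assumed thresholds: below 160 cm is not tall at all, above 190 cm
    is fully tall, with a linear ramp in between.
    """
    if height_cm <= 160.0:
        return 0.0
    if height_cm >= 190.0:
        return 1.0
    return (height_cm - 160.0) / 30.0

# A fuzzy query for "tall people" can then rank candidates by degree
# rather than applying a crisp cut-off.
for h in (155, 170, 185, 195):
    print(h, round(tall_membership(h), 2))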


Standards
Standardization of Semantic Web technologies in the context of Web 3.0 is under the care of the W3C.[20]

Components
The term "Semantic Web" is often used more specifically to refer to the formats and technologies that enable it.[2] The collection, structuring and recovery of linked data are enabled by technologies that provide a formal description of concepts, terms, and relationships within a given knowledge domain. These technologies are specified as W3C standards and include: Resource Description Framework (RDF), a general method for describing information RDF Schema (RDFS) Simple Knowledge Organization System (SKOS) SPARQL, an RDF query language Notation3 (N3), designed with human-readability in mind N-Triples, a format for storing and transmitting data Turtle (Terse RDF Triple Language) Web Ontology Language (OWL), a family of knowledge representation languages

The Semantic Web Stack illustrates the architecture of the Semantic Web. The functions and relationships of the components can be summarized as follows:[21] XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within. XML is not at present a necessary component of Semantic Web technologies in most cases, as alternative syntaxes exists, such as Turtle. Turtle is a de facto standard, but has not been through a formal standardization process. XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
The Semantic Web Stack. RDF is a simple language for expressing data models, which refer to objects ("resources") and their relationships. An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa.[22] RDF is a fundamental standard of the Semantic Web.[23][24][25]

Semantic Web RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes. OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes. SPARQL is a protocol and query language for semantic web data sources.
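As a hedged sketch of SPARQL in use (again assuming rdflib; the data and the ex: vocabulary are invented for illustration), the following fragment queries a tiny in-memory graph for inexpensive catalog items:

from rdflib import Graph

# A tiny graph in Turtle syntax; the example.org vocabulary is hypothetical.
data = """
@prefix ex: <http://example.org/catalog#> .
<http://example.org/items/X586172> ex:title "Acme Gizmo" ; ex:priceEUR 199 .
<http://example.org/items/X586173> ex:title "Acme Widget" ; ex:priceEUR 49 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# SPARQL SELECT: titles of items priced under 100.
query = """
PREFIX ex: <http://example.org/catalog#>
SELECT ?title WHERE {
    ?item ex:title ?title ;
          ex:priceEUR ?price .
    FILTER (?price < 100)
}
"""
for row in g.query(query):
    print(row.title)  # prints: Acme Widget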


Current state of standardization


Well established standardization:
Unicode
Uniform Resource Identifier
XML
RDF
RDFS
SPARQL
OWL

Ongoing standardization:
Rule Interchange Format (RIF) as the Rule Layer of the Semantic Web Stack

Not yet fully realized:
Unifying Logic and Proof layers

The intent is to enhance the usability and usefulness of the Web and its interconnected resources through:

Servers which expose existing data systems using the RDF and SPARQL standards. Many converters to RDF[26] exist for different applications. Relational databases are an important source. The semantic web server attaches to the existing system without affecting its operation.

Documents "marked up" with semantic information (an extension of the HTML <meta> tags used in today's Web pages to supply information for Web search engines using web crawlers). This could be machine-understandable information about the human-understandable content of the document (such as the creator, title, description, etc.) or it could be purely metadata representing a set of facts (such as resources and services elsewhere on the site). Note that anything that can be identified with a Uniform Resource Identifier (URI) can be described, so the semantic web can reason about animals, people, places, ideas, etc. Semantic markup is often generated automatically, rather than manually.

Common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of 'the Author of the page' won't be confused with Author in the sense of a book that is the subject of a book review).

Automated agents to perform tasks for users of the semantic web using this data.

Web-based services (often with agents of their own) to supply information specifically to agents, for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming.


Skeptical reactions
Practical feasibility
Critics (e.g., Which Semantic Web?[27]) question the basic feasibility of a complete or even partial fulfillment of the semantic web. Cory Doctorow's critique ("metacrap") is from the perspective of human behavior and personal preferences. For example, people may include spurious metadata in Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata's veracity. This phenomenon was well-known with metatags that fooled the AltaVista ranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation. Peter Gärdenfors and Timo Honkela point out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.[28][29]

Where semantic web technologies have found a greater degree of practical adoption, it has tended to be among core specialized communities and organizations for intra-company projects.[30] The practical constraints toward adoption have appeared less challenging where domain and scope is more limited than that of the general public and the World-Wide Web.[30]

Censorship and privacy


Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analysis techniques can currently be easily defeated by using other words (metaphors, for instance) or images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use of FOAF files and geolocation metadata, there would be very little anonymity associated with the authorship of articles on things such as a personal blog. Some of these concerns were addressed in the "Policy Aware Web" project,[31] and this remains an active research and development topic.

Doubling output formats


Another criticism of the semantic web is that it would be much more time-consuming to create and publish content, because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or upon the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Another argument in defense of the feasibility of the semantic web is the likely falling price of human intelligence tasks in digital labor markets, such as Amazon Mechanical Turk.

Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Languages) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.

Projects
This section lists some of the many projects and tools that exist to create Semantic Web solutions.[32]

DBpedia
DBpedia is an effort to publish structured data extracted from Wikipedia: the data is published in RDF and made available on the Web for use under the GNU Free Documentation License, thus allowing Semantic Web agents to provide inferencing and advanced querying over the Wikipedia-derived dataset and facilitating interlinking, re-use and extension in other data-sources.
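As a hedged sketch of consuming DBpedia as Linked Data (the per-resource URL pattern is an assumption about DBpedia's publishing conventions and may change; the Python rdflib library is again assumed):

from rdflib import Graph

g = Graph()
# DBpedia publishes an RDF document per resource; this URL form is assumed.
g.parse("http://dbpedia.org/data/Semantic_Web.ttl", format="turtle")

# Every triple links the resource to literals or to other URIs,
# which is what enables the interlinking and re-use described above.
print(len(g), "triples describing dbpedia:Semantic_Web")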


FOAF
A popular vocabulary on the semantic web is Friend of a Friend (or FOAF), which uses RDF to describe the relationships people have to other people and the "things" around them. FOAF permits intelligent agents to make sense of the thousands of connections people have with each other, their jobs and the items important to their lives; connections that may or may not be enumerated in searches using traditional web search engines. Because the connections are so vast in number, human interpretation of the information may not be the best way of analyzing them. FOAF is an example of how the Semantic Web attempts to make use of the relationships within a social context.
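A minimal sketch of a FOAF description built with rdflib (an assumed toolkit; the people and example.org identifiers are invented) shows how the foaf:knows relationship expresses the social connections described above:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice = URIRef("http://example.org/people/alice#me")  # hypothetical identifiers
bob = URIRef("http://example.org/people/bob#me")

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((alice, FOAF.knows, bob))  # the social link FOAF is built around

print(g.serialize(format="turtle"))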

SIOC
The Semantically-Interlinked Online Communities project (SIOC, pronounced "shock") provides a vocabulary of terms and relationships that model web data spaces. Examples of such data spaces include, among others: discussion forums, blogs, blogrolls / feed subscriptions, mailing lists, shared bookmarks and image galleries.

GoPubMed
GoPubMed[33] is a knowledge-based search engine for biomedical texts. The Gene Ontology (GO) and Medical Subject Headings (MeSH) serve as a "table of contents" to structure the millions of articles in the MEDLINE database. The search engine allows its users to find relevant search results significantly faster than PubMed.

NextBio
NextBio is a database consolidating high-throughput life sciences experimental data, tagged and connected via biomedical ontologies. NextBio is accessible via a search engine interface. Researchers can contribute their findings for incorporation into the database. The database currently supports gene expression data, protein expression data, and sequence-centric data, and is steadily expanding to support other biological data types.

References
[1] "XML and Semantic Web W3C Standards Timeline" (http:/ / www. dblab. ntua. gr/ ~bikakis/ XML and Semantic Web W3C Standards Timeline-History. pdf). 2012-02-04. . [2] "W3C Semantic Web Activity" (http:/ / www. w3. org/ 2001/ sw/ ). World Wide Web Consortium (W3C). November 7, 2011. . Retrieved November 26, 2011. [3] Berners-Lee, Tim; James Hendler and Ora Lassila (May 17, 2001). "The Semantic Web" (http:/ / www. sciam. com/ article. cfm?id=the-semantic-web& print=true). Scientific American Magazine. . Retrieved March 26, 2008. [4] Lee Feigenbaum (May 1, 2007). "The Semantic Web in Action" (http:/ / www. thefigtrees. net/ lee/ sw/ sciam/ semantic-web-in-action). Scientific American. . Retrieved February 24, 2010. [5] Berners-Lee, Tim (May 1, 2001). "The Semantic Web" (http:/ / www. sciam. com/ article. cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21). Scientific American. . Retrieved March 13, 2008. [6] Nigel Shadbolt, Wendy Hall, Tim Berners-Lee (2006). "The Semantic Web Revisited" (http:/ / eprints. ecs. soton. ac. uk/ 12614/ 1/ Semantic_Web_Revisted. pdf). IEEE Intelligent Systems. . Retrieved April 13, 2007. [7] Allan M. Collins, A; M.R. Quillian (1969). "Retrieval time from semantic memory". Journal of verbal learning and verbal behavior 8 (2): 240247. doi:10.1016/S0022-5371(69)80069-1. PMID615603750. [8] Allan M. Collins; M. Ross Quillian (1970). "Does category size affect categorization time?". Journal of verbal learning and verbal behavior 9 (4): 432438. doi:10.1016/S0022-5371(70)80084-6. [9] Allan M. Collins, Allan M.; Elizabeth F. Loftus (1975). "A spreading-activation theory of semantic processing". Psychological Review 82 (6): 407428. doi:10.1037/0033-295X.82.6.407. [10] Quillian, MR (1967). "Word concepts. A theory and simulation of some basic semantic capabilities". Behavioral Science 12 (5): 410430. doi:10.1002/bs.3830120511. PMID6059773. [11] Semantic memory |book:Marvin Minsky (editor): Semantic information processing, MIT Press, Cambridge, Mass. 1988. [12] Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco. chapter 12. ISBN978-0-06-251587-2. [13] Gerber, AJ, Barnard, A & Van der Merwe, Alta (2006), "A Semantic Web Status Model, Integrated Design & Process Technology", Special Issue: IDPT 2006

[14] Gerber, Aurona; Van der Merwe, Alta; Barnard, Andries (2008), "A Functional Semantic Web Architecture", European Semantic Web Conference 2008, ESWC'08, Tenerife, June 2008.
[15] Artem Chebotko and Shiyong Lu, "Querying the Semantic Web: An Efficient Approach Using Relational Databases", LAP Lambert Academic Publishing, ISBN 978-3-8383-0264-5, 2009.
[16] Victoria Shannon (June 26, 2006). "A 'more revolutionary' Web" (http://www.nytimes.com/2006/05/23/technology/23iht-web.html). International Herald Tribune. Retrieved May 24, 2006.
[17] http://www.worldwidewebsize.com/
[18] http://www.w3.org/2005/Incubator/urw3/XGR-urw3-20080331/
[19] Lukasiewicz, Thomas; Umberto Straccia. "Managing uncertainty and vagueness in description logics for the Semantic Web" (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B758F-4SPSPKW-1&_user=147018&_coverDate=11/30/2008&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000012179&_version=1&_urlVersion=0&_userid=147018&md5=8123c273189b1148cadb12f95b87a5ef).
[20] Semantic Web Standards published by the W3C (http://www.w3.org/2001/sw/wiki/Main_Page)
[21] "OWL Web Ontology Language Overview" (http://www.w3.org/TR/owl-features/). World Wide Web Consortium (W3C). February 10, 2004. Retrieved November 26, 2011.
[22] "RDF tutorial" (http://www.lesliesikos.com/tutorials/rdf/). Dr. Leslie Sikos. Retrieved 2011-07-05.
[23] "Resource Description Framework (RDF)" (http://www.w3.org/RDF/). World Wide Web Consortium.
[24] "Standard websites" (http://www.lesliesikos.com/). Dr. Leslie Sikos. Retrieved 2011-07-05.
[25] Allemang, D.; Hendler, J. (2011). "RDF - The basis of the Semantic Web". In: Semantic Web for the Working Ontologist (2nd ed.). Morgan Kaufmann. doi:10.1016/B978-0-12-385965-5.10003-2.
[26] http://esw.w3.org/topic/ConverterToRdf
[27] http://portal.acm.org/citation.cfm?id=900051.900063&coll=ACM&dl=ACM&CFID=29933182&CFTOKEN=24611642
[28] Gärdenfors, Peter (2004). How to make the Semantic Web more semantic. IOS Press. pp. 17–34.
[29] Timo Honkela, Ville Könönen, Tiina Lindh-Knuutila and Mari-Sanna Paukkeri (2008). "Simulating processes of concept formation and communication" (http://www.informaworld.com/smpp/content~content=a903999101). Journal of Economic Methodology.
[30] Ivan Herman (2007). "State of the Semantic Web" (http://www.w3.org/2007/Talks/0424-Stavanger-IH/Slides.pdf). Semantic Days 2007. Retrieved July 26, 2007.
[31] http://policyawareweb.org/
[32] See, for instance: Bergman, Michael K. "Sweet Tools" (http://www.mkbergman.com/?page_id=325). AI3; Adaptive Information, Adaptive Innovation, Adaptive Infrastructure. Retrieved January 5, 2009.
[33] http://www.GoPubMed.com




Further reading
Liyang Yu (January 6, 2011). A Developer's Guide to the Semantic Web (http://www.amazon.com/Developers-Guide-Semantic-Web/dp/3642159699/ref=sr_1_1?ie=UTF8&qid=1321027111&sr=8-1). Springer. ISBN 978-3-642-15969-5.
Grigoris Antoniou, Frank van Harmelen (March 31, 2008). A Semantic Web Primer, 2nd Edition (http://www.amazon.com/Semantic-Primer-Cooperative-Information-Systems/dp/0262012421/). The MIT Press. ISBN 0-262-01242-1.
Dean Allemang, James Hendler (May 9, 2008). Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL (http://www.amazon.com/Semantic-Web-Working-Ontologist-Effective/dp/0123735564/). Morgan Kaufmann. ISBN 978-0-12-373556-0.
John Davies (July 11, 2006). Semantic Web Technologies: Trends and Research in Ontology-based Systems (http://www.amazon.com/Semantic-Web-Technologies-Research-Ontology-based/dp/0470025964/). Wiley. ISBN 0-470-02596-4.
Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph (August 25, 2009). Foundations of Semantic Web Technologies (http://www.semantic-web-book.org). CRC Press. ISBN 1-4200-9050-X.
Thomas B. Passin (March 1, 2004). Explorer's Guide to the Semantic Web (http://www.amazon.com/Explorers-Guide-Semantic-Thomas-Passin/dp/1932394206/). Manning Publications. ISBN 1-932394-20-6.
Liyang Yu (June 14, 2007). Introduction to Semantic Web and Semantic Web Services (http://www.amazon.com/Introduction-Semantic-Web-Services/dp/1584889330/). CRC Press. ISBN 1-58488-933-0.
Jeffrey T. Pollock (March 23, 2009). Semantic Web For Dummies (http://www.amazon.com/gp/product/0470396792). For Dummies. ISBN 0-470-39679-2.
Martin Hilbert (April 2009). The Maturing Concept of E-Democracy: From E-Voting and Online Consultations to Democratic Value Out of Jumbled Online Chatter (http://www.informaworld.com/smpp/content~db=all~content=a911066517). Journal of Information Technology & Politics.
"Tim Berners-Lee Gives the Web a New Definition" (http://computemagazine.com/man-who-invented-world-wide-web-gives-new-definition/)
Folmer, Erwin; Oude Luttighuis, Paul; Hillegersberg, Jos (April 2011). "Do semantic standards lack quality? A survey among 34 semantic standards" (http://www.springerlink.com/content/h03q2454x7330574/). Electronic Markets 21 (2): 99–111. doi:10.1007/s12525-011-0058-y. Retrieved 2012-05-19.

External links
Official website (http://www.w3.org/standards/semanticweb/)
Links collection (http://www.semanticoverflow.com/questions/1/where-can-i-learn-about-the-semantic-web) on Semantic Overflow (http://semanticoverflow.com)
Semantic Technology and the Enterprise (http://www.semanticarts.com)
SSWAP: Simple Semantic Web Architecture and Protocol (http://sswap.info)
How Stuff Works: The Semantic Web (http://www.howstuffworks.com/semantic-web.htm)
The Semantic Web Journal (http://www.semantic-web-journal.net)
Digital Flicks Semantic Web Blog (http://shivkumarganesh.in)


COBIT
Control Objectives for Information and Related Technology (COBIT) is a framework created by ISACA for information technology (IT) management and IT governance. It is a supporting toolset that allows managers to bridge the gap between control requirements, technical issues and business risks.

Overview
COBIT was first released in 1996; the current version, COBIT 5, was published in 2012. Its mission is "to research, develop, publish and promote an authoritative, up-to-date, international set of generally accepted information technology control objectives for day-to-day use by business managers, IT professionals and assurance professionals".[1]

COBIT, initially an acronym for "Control Objectives for Information and Related Technology", defines 34 generic processes to manage IT. Each process is defined together with process inputs and outputs, key process activities, process objectives, performance measures and an elementary maturity model. The framework supports governance of IT by defining and aligning business goals with IT goals and IT processes.

The COBIT framework


The framework provides good practices across a domain and process framework. The business orientation of COBIT consists of linking business goals to IT goals, providing metrics and maturity models to measure their achievement, and identifying the associated responsibilities of business and IT process owners.

The process focus of COBIT is illustrated by a process model that subdivides IT into four domains (Plan and Organize, Acquire and Implement, Deliver and Support, and Monitor and Evaluate) and 34 processes in line with the responsibility areas of plan, build, run and monitor. It is positioned at a high level and has been aligned and harmonized with other, more detailed IT standards and good practices such as COSO, ITIL, ISO 27000, CMMI, TOGAF and PMBOK. COBIT acts as an integrator of these different guidance materials, summarizing key objectives under one umbrella framework that links the good practice models with governance and business requirements.

The COBIT 4.1 framework specification can be obtained as a complimentary PDF from the ISACA download website.[2] (Free self-registration may be required.)

COBIT 5 was released in June 2012.[3] COBIT 5 consolidates and integrates the COBIT 4.1, Val IT 2.0 and Risk IT frameworks, and draws from ISACA's IT Assurance Framework (ITAF) and the Business Model for Information Security (BMIS). It aligns with frameworks and standards such as the Information Technology Infrastructure Library (ITIL), International Organization for Standardization (ISO) standards, the Project Management Body of Knowledge (PMBOK), PRINCE2 and The Open Group Architecture Framework (TOGAF).


Releases
COBIT has had five major releases:
In 1996, the first edition of COBIT was released.
In 1998, the second edition added "Management Guidelines".
In 2000, the third edition was released; in 2003, an on-line version became available.
In December 2005, the fourth edition was initially released; in May 2007, the 4.1 revision followed.
COBIT 5 was released in June 2012. It consolidates and integrates the COBIT 4.1, Val IT 2.0 and Risk IT frameworks, and also draws significantly from the Business Model for Information Security (BMIS) and ITAF.

Components
The COBIT components include:
Framework: organizes IT governance objectives and good practices by IT domains and processes, and links them to business requirements
Process descriptions: a reference process model and common language for everyone in an organization; the processes map to responsibility areas of plan, build, run and monitor
Control objectives: provide a complete set of high-level requirements to be considered by management for effective control of each IT process
Management guidelines: help assign responsibility, agree on objectives, measure performance, and illustrate interrelationships with other processes
Maturity models: assess maturity and capability per process and help to address gaps

Other ISACA publications[4] based on the COBIT framework include:
Board Briefing for IT Governance, 2nd Edition
COBIT and Application Controls
COBIT Control Practices, 2nd Edition
IT Assurance Guide: Using COBIT
Implementing and Continually Improving IT Governance
COBIT Quickstart, 2nd Edition
COBIT Security Baseline, 2nd Edition
IT Control Objectives for Sarbanes-Oxley, 2nd Edition
IT Control Objectives for Basel II
COBIT User Guide for Service Managers
COBIT Mappings (to ISO/IEC 27002, CMMI, ITIL, TOGAF, PMBOK, etc.)
COBIT Online


COBIT and Sarbanes-Oxley


Companies that are publicly traded in the US are subject to the Sarbanes-Oxley Act of 2002. According to the IIA, COBIT is one of the most commonly used frameworks for complying with Sarbanes-Oxley.[5]

References
ISACA [6] - Custodians of COBIT
COBITCampus [7] - COBIT education provided by ISACA
ISO/IEC 20000 - international standard for IT Service Management
ISO/IEC 27000 - Information Security Management Systems standards
Wood, David J. 2010. "Assessing IT Governance Maturity: The Case of San Marcos, Texas". Applied Research Projects, Texas State University-San Marcos. http://ecommons.txstate.edu/arp/345 (This paper applies a modified COBIT framework to a medium-sized city.)
The Institute of Internal Auditors' list of most commonly used internal control frameworks [8]

Notes
[1] ITGI. "COBIT 4.1 Executive Summary" (http://www.isaca.org/Knowledge-Center/cobit/Documents/COBIT4.pdf). ITGI.
[2] http://www.isaca.org/Knowledge-Center/cobit/Pages/Downloads.aspx
[3] ISACA. "ISACA Issues COBIT 5 for Information Security". ISACA.
[4] http://www.isaca.org/Knowledge-Center/cobit/Pages/Products.aspx
[5] IIA. "Common internal control frameworks" (http://www.theiia.org/intAuditor/media/images/Burch_dec'08_artok_cx.pdf). IIA.
[6] http://www.isaca.org/
[7] http://www.isaca.org/cobitcampus
[8] http://www.theiia.org/intAuditor/media/images/Burch_dec'08_artok_cx.pdf


Information Technology Infrastructure Library


The Information Technology Infrastructure Library (ITIL) is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business. In its current form (known as ITIL v3 and the ITIL 2011 edition), ITIL is published in a series of five core publications, each of which covers an ITSM lifecycle stage. ITIL v3 underpins ISO/IEC 20000 (previously BS 15000), the international standard for IT service management, although differences between the two frameworks do exist.

ITIL describes procedures, tasks and checklists that are not organization-specific, which an organization can use to establish a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement.

The names ITIL and IT Infrastructure Library are registered trademarks of the United Kingdom's Office of Government Commerce (OGC), now part of the Cabinet Office. Following this move, the ownership is now listed as being with HM Government rather than OGC.

History
Responding to growing dependence on IT, the UK Government's Central Computer and Telecommunications Agency (CCTA) in the 1980s developed a set of recommendations. It recognised that, without standard practices, government agencies and private sector contractors had started independently creating their own IT management practices.

The IT Infrastructure Library originated as a collection of books, each covering a specific practice within IT service management. ITIL was built around a process-model-based view of controlling and managing operations, often credited to W. Edwards Deming and his plan-do-check-act (PDCA) cycle.[1]

After the initial publication in 1989–96, the number of books quickly grew within ITIL v1 to more than 30 volumes. In 2000/2001, to make ITIL more accessible (and affordable), ITIL v2 consolidated the publications into 8 logical "sets" that grouped related process guidelines to match different aspects of IT management, applications, and services. The Service Management sets (Service Support and Service Delivery) were by far the most widely used, circulated, and understood of the ITIL v2 publications. In April 2001 the CCTA was merged into the Office of Government Commerce (OGC), an office of the UK Treasury.[2]

In 2006, the ITIL v2 glossary was published. In May 2007, this organisation issued version 3 of ITIL (also known as the ITIL Refresh Project), consisting of 26 processes and functions, now grouped into only 5 volumes, arranged around the concept of the service lifecycle structure. In 2009, the OGC officially announced that ITIL v2 certification would be withdrawn and launched a major consultation on how to proceed.[3]

In July 2011, the 2011 edition of ITIL was published, providing an update to the version published in 2007. The OGC is no longer listed as the owner of ITIL, following the move of OGC into the Cabinet Office. The 2011 edition is owned by HM Government.

Overview of ITIL v3
ITIL v3 is an extension of ITIL v2 and fully replaced it following the completion of the withdrawal period on 30 June 2011.[4] ITIL v3 provides a more holistic perspective on the full life cycle of services, covering the entire IT organisation and all supporting components needed to deliver services to the customer, whereas v2 focused on specific activities directly related to service delivery and support. Most of the v2 activities remained untouched in v3, but some significant changes in terminology were introduced in order to facilitate the expansion.


Changes and characteristics of the 2011 edition of ITIL


A summary of changes has been published by HM Government.[5] In line with the 2007 edition, the 2011 edition consists of five core publications: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL 2011 is a major update to the ITIL framework that addresses errors and inconsistencies. There are 26 processes in the ITIL 2011 edition; the descriptions below indicate which core publication provides the main content for each process.

ITIL v3 has five volumes, published in May 2007 (2007 edition) and updated in July 2011 (2011 edition) for consistency:
1. ITIL Service Strategy[6]
2. ITIL Service Design[7]
3. ITIL Service Transition[8]
4. ITIL Service Operation[9]
5. ITIL Continual Service Improvement[10]

Service strategy
As the centre and origin point of the ITIL Service Lifecycle, the ITIL Service Strategy (SS) volume[6] provides guidance on clarification and prioritisation of service-provider investments in services. More generally, Service Strategy focuses on helping IT organizations improve and develop over the long term. In both cases, Service Strategy relies largely upon a market-driven approach. Key topics covered include service value definition, business-case development, service assets, market analysis, and service provider types.

List of covered processes:
1. Strategy management
2. Service portfolio management
3. Financial management for IT services
4. Demand management
5. Business relationship management

For candidates in the ITIL Intermediate Capability stream, the Service Offerings and Agreements (SOA) Qualification course and exam are most closely aligned to the Service Strategy (SS) Qualification course and exam in the Lifecycle stream.

Financial management for IT services


IT financial management comprises the discipline of ensuring that the IT infrastructure is obtained at the most effective price (which does not necessarily mean cheapest) and of calculating the cost of providing IT services, so that an organization can understand the costs of its IT services. These costs may then be recovered from the customer of the service. This is the second component of the service delivery process.

Service design
The Service Design (SD) volume[7] provides good-practice guidance on the design of IT services, processes, and other aspects of the service management effort. Significantly, design within ITIL is understood to encompass all elements relevant to technology service delivery, rather than focusing solely on design of the technology itself. As such, service design addresses how a planned service solution interacts with the larger business and technical environments, service management systems required to support the service, processes which interact with the service, technology, and architecture required to support the service, and the supply chain required to support the planned service. Within ITIL, design work for an IT service is aggregated into a single service design package (SDP). Service design packages, along with other information about services, are managed within the service catalogues.

List of covered processes:
1. Design coordination (introduced in ITIL 2011 edition)
2. Service Catalogue
3. Service level Management
4. Availability Management
5. Capacity Management
6. IT Service Continuity Management (ITSCM)
7. Information Security Management System
8. Supplier Management

Service level management


Service-level management provides for continual identification, monitoring and review of the levels of IT services specified in the service-level agreements (SLAs). Service-level management ensures that arrangements are in place with internal IT support-providers and external suppliers in the form of Operational Level Agreements (OLAs) and Underpinning Contracts (UCs), respectively. The process involves assessing the impact of change upon service quality and SLAs. The service level management process is in close relation with the operational processes to control their activities. The central role of service-level management makes it the natural place for metrics to be established and monitored against a benchmark.

Service level management is the primary interface with the customer (as opposed to the user, who is serviced by the service desk). Service-level management is responsible for:
ensuring that the agreed IT services are delivered when and where they are supposed to be
liaising with availability management, capacity management, incident management and problem management to ensure that the required levels and quality of service are achieved within the resources agreed with financial management
producing and maintaining a service catalog (a list of standard IT service options and agreements made available to customers)
ensuring that appropriate IT service continuity plans exist to support the business and its continuity requirements.

The service-level manager relies on the other areas of the service delivery process to provide the necessary support which ensures the agreed services are provided in a cost-effective, secure and efficient manner.

Availability management
Availability management targets allowing organisations to sustain IT service availability, in order to support the business at a justifiable cost. The high-level activities are to realise availability requirements, compile the availability plan, monitor availability, and monitor maintenance obligations.

Availability management addresses the ability of an IT component to perform at an agreed level over a period of time; a worked sketch of the usual calculation follows below.
Reliability: the ability of an IT component to perform at an agreed level under described conditions.
Maintainability: the ability of an IT component to remain in, or be restored to, an operational state.
Serviceability: the ability of an external supplier to maintain the availability of a component or function under a third-party contract.
Resilience: a measure of freedom from operational failure and a method of keeping services reliable. One popular method of resilience is redundancy.
Security: a service may have associated data. Security refers to the confidentiality, integrity, and availability of that data.

Availability gives a clear overview of the end-to-end availability of the system.
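As a hedged worked example (the percentage formula below is the commonly cited availability calculation; the figures are invented), availability over a period can be computed from the agreed service time and the downtime:

def availability_percent(agreed_service_time_h: float, downtime_h: float) -> float:
    """Availability = (AST - downtime) / AST * 100, with AST the agreed service time."""
    return (agreed_service_time_h - downtime_h) / agreed_service_time_h * 100.0

# Example: a service agreed for 24x7 operation over a 30-day month (720 h)
# that suffered 4 h of unplanned downtime.
print(round(availability_percent(720.0, 4.0), 2))  # prints: 99.44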


Capacity management
Capacity management supports the optimum and cost-effective provision of IT services by helping organisations match their IT resources to business demands. The high-level activities include:
application sizing
workload management
demand management
modelling
capacity planning
resource management
performance management

Capacity management is focused on strategic capacity, including capacity of personnel (e.g., human resources, staffing and training), system capacity, and component (or tactical) capacity.

IT service continuity management


IT service continuity management (ITSCM) covers the processes by which plans are put in place and managed to ensure that IT services can recover and continue even after a serious incident occurs. It is not just about reactive measures, but also about proactive measures reducing the risk of a disaster in the first instance. ITSCM is regarded by the application owners as the recovery of the IT infrastructure used to deliver IT services, but as of 2009 many businesses practice the much further-reaching process of business continuity planning (BCP), to ensure that the whole end-to-end business process can continue should a serious incident occur (at primary support level).

ITSCM involves the following basic steps:
prioritising the activities to be recovered by conducting a business impact analysis (BIA)
performing a risk assessment (also known as risk analysis) for each of the IT services to identify the assets, threats, vulnerabilities and countermeasures for each service
evaluating the options for recovery
producing the contingency plan
testing, reviewing, and revising the plan on a regular basis.

Information security management system


The ITIL security management process[11] describes the structured fitting of information security into the management organisation. ITIL security management is based on the code of practice for information security management systems (ISMS) now known as ISO/IEC 27002.

A basic goal of security management is to ensure adequate information security. The primary goal of information security, in turn, is to protect information assets against risks, and thus to maintain their value to the organization. This is commonly expressed in terms of ensuring their confidentiality, integrity and availability, along with related properties or goals such as authenticity, accountability, non-repudiation and reliability.

Mounting pressure on many organisations to structure their information security management systems in accordance with ISO/IEC 27001 requires revision of the ITIL v2 security management volume, and indeed a v3 release is in the works.


Service transition
Service transition, as described by the ITIL service transition volume,[8] relates to the delivery of services required by a business into live/operational use, and often encompasses the "project" side of IT rather than "BAU" (business as usual). This area also covers topics such as managing changes to the "BAU" environment.

List of ITIL processes in Service Transition (ST):
1. Transition planning and support
2. Change management
3. Service asset and configuration management
4. Release and deployment management
5. Service validation and testing
6. Change evaluation
7. Knowledge management

Change management
Change management aims to ensure that standardised methods and procedures are used for efficient handling of all changes. A change is an event that results in a new status of one or more configuration items (CIs), and which is approved by management, is cost-effective, enhances business processes (through changes or fixes), and carries minimum risk to the IT infrastructure.

The main aims of change management include:
Minimal disruption of services
Reduction in back-out activities
Economic use of resources involved in the change

Common change management terminology includes:
Change: the addition, modification or removal of CIs
Request for Change (RFC) or, in older terminology, Change Request (CR): a form used to record details of a request for a change, sent as an input to change management by the change requestor
ITIL v2 - Forward Schedule of Changes (FSC): a schedule that contains details of all forthcoming changes
ITIL v3 - Change Schedule (CS): a schedule that contains details of all forthcoming changes, and references historical data. Many people will still refer to the well-known term FSC.

Service asset and configuration management


Service asset and configuration management is primarily focused on maintaining information (i.e., configurations) about Configuration Items (i.e., assets) required to deliver an IT service, including their relationships. Configuration management is the management and traceability of every aspect of a configuration from beginning to end and it includes the following key process areas under its umbrella: Identification, Planning, Change Control, Change Management, Release Management, and Maintenance.


Release and deployment management


Release and deployment management is used by the software migration team for platform-independent and automated distribution of software and hardware, including license controls, across the entire IT infrastructure. Proper software and hardware control ensures the availability of licensed, tested, and version-certified software and hardware, which functions as intended when introduced into existing infrastructure. Quality control during the development and implementation of new hardware and software is also the responsibility of release management. This guarantees that all software meets the demands of the business processes.

The goals of release management include:
Planning the rollout of software
Designing and implementing procedures for the distribution and installation of changes to IT systems
Effectively communicating and managing expectations of the customer during the planning and rollout of new releases
Controlling the distribution and installation of changes to IT systems

Release management focuses on the protection of the live environment and its services through the use of formal procedures and checks. A release consists of the new or changed software and/or hardware required to implement approved changes. Release categories include:
Major software releases and major hardware upgrades, normally containing large amounts of new functionality, some of which may make intervening fixes to problems redundant. A major upgrade or release usually supersedes all preceding minor upgrades, releases and emergency fixes.
Minor software releases and hardware upgrades, normally containing small enhancements and fixes, some of which may have already been issued as emergency fixes. A minor upgrade or release usually supersedes all preceding emergency fixes.
Emergency software and hardware fixes, normally containing the corrections to a small number of known problems.

Releases can be divided based on the release unit into:
Delta release: a release of only that part of the software which has been changed. For example, security patches.
Full release: the entire software program is deployed - for example, a new version of an existing application.
Packaged release: a combination of many changes - for example, an operating system image which also contains specific applications.

Service operation
Service Operation (SO) aims to provide best practice for achieving the delivery of agreed levels of services both to end-users and to customers (where "customers" refers to those individuals who pay for the service and negotiate the SLAs). Service operation, as described in the ITIL Service Operation volume,[9] is the part of the lifecycle where the services and value are actually directly delivered. The monitoring of problems and the balance between service reliability and cost are also considered. The functions include technical management, application management, operations management and service desk, as well as responsibilities for staff engaging in service operation.

List of processes:
1. Event management
2. Incident management
3. Request fulfillment
4. Problem management
5. Access management


ITIL functions
Service desk

The service desk is one of four ITIL functions and is primarily associated with the Service Operation lifecycle stage. Tasks include handling incidents and requests, and providing an interface for other ITSM processes. Features include:
single point of contact (SPOC) and not necessarily the first point of contact (FPOC)
single point of entry
single point of exit
easier for customers
data integrity
streamlined communication channel

Primary purposes of a service desk include:
incident control: life-cycle management of all service requests
communication: keeping a customer informed of progress and advising on workarounds

The service desk function can have various names, such as:
Call center: main emphasis on professionally handling large call volumes of telephone-based transactions
Help desk: manage, co-ordinate and resolve incidents as quickly as possible at primary support level
Service desk: not only handles incidents, problems and questions but also provides an interface for other activities such as change requests, maintenance contracts, software licenses, service-level management, configuration management, availability management, financial management and IT services continuity management

The three types of structure for consideration:
Local service desk: to meet local business needs; practical only until multiple locations requiring support services are involved
Central service desk: for organisations having multiple locations; reduces operational costs and improves usage of available resources
Virtual service desk: for organisations having multi-country locations; can be situated and accessed from anywhere in the world due to advances in network performance and telecommunications, reducing operational costs and improving usage of available resources

Application management

ITIL application management[12] encompasses a set of best practices proposed to improve the overall quality of IT software development and support through the life-cycle of software development projects, with particular attention to gathering and defining requirements that meet business objectives.

Software asset management (SAM) is a primary topic of ITIL v2 and is closely associated with the ITIL v3 application management function. SAM is the practice of integrating people, processes, and technology to allow software licenses and usage to be systematically tracked, evaluated, and managed. The goal of SAM is to reduce IT expenditures, human resource overhead and risks inherent in owning and managing software assets.

SAM practices include:
maintaining software license compliance
tracking inventory and software asset use
maintaining standard policies and procedures surrounding definition, deployment, configuration, use, and retirement of software assets and the definitive software library.

SAM represents the software component of IT asset management. This includes hardware asset management, because effective hardware inventory controls are critical to efforts to control software. This means overseeing the software and hardware that comprise an organization's computers and network.

IT operations management

Refer to ICT infrastructure management for more details.

Technical management

Refer to ICT infrastructure management for more details.


Incident management
Incident management aims to restore normal service operation as quickly as possible and minimise the adverse effect on business operations, thus ensuring that the best possible levels of service quality and availability are maintained. 'Normal service operation' is defined here as service operation within service-level agreement (SLA) limits.

An incident is defined as:
V3: An unplanned interruption to an IT service or a reduction in the quality of an IT service. Failure of a configuration item that has not yet impacted service is also an incident; for example, failure of one disk from a mirror set.
V2: An event which is not part of the standard operation of a service and which causes, or may cause, disruption to or a reduction in the quality of services and customer productivity.

The objective of incident management is to restore normal operations as quickly as possible with the least possible impact on either the business or the user, at a cost-effective price. The transformation from event to incident is the critical junction where application performance management (APM) and ITIL come together to provide tangible value back to the business.[13]

Request fulfillment

Request fulfillment (or request management) focuses on fulfilling service requests, which are often minor (standard) changes (e.g., requests to change a password) or requests for information.

Problem management
Problem management aims to resolve the root causes of incidents, and thus to minimise the adverse impact of incidents and problems on the business caused by errors within the IT infrastructure, and to prevent the recurrence of incidents related to these errors. A 'problem' is an unknown underlying cause of one or more incidents, and a 'known error' is a problem that is successfully diagnosed and for which either a work-around or a permanent resolution has been identified. The CCTA (Central Computer and Telecommunications Agency) defines problems and known errors as follows:
A problem is a condition often identified as a result of multiple incidents that exhibit common symptoms. Problems can also be identified from a single significant incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant.
A known error is a condition identified by successful diagnosis of the root cause of a problem, and the subsequent development of a work-around.
Problem management differs from incident management. The principal purpose of problem management is to find and resolve the root cause of a problem and thus prevent further incidents; the purpose of incident management is to return the service to normal level as soon as possible, with the smallest possible business impact. The problem-management process is intended to reduce the number and severity of incidents and problems on the business, and to report them in documentation made available to the first and second lines of the help desk. The proactive process identifies and resolves problems before incidents occur. Such processes include:
Trend analysis
Targeting support action
Providing information to the organisation
The error control process iteratively diagnoses known errors until they are eliminated by the successful implementation of a change under the control of the Change Management process. The problem control process aims to handle problems in an efficient way. Problem control identifies the root cause of incidents and reports it to the service desk. Other activities are:
Problem identification and recording
Problem classification
Problem investigation and diagnosis
A technique for identifying the root cause of a problem is to use an Ishikawa diagram, also referred to as a cause-and-effect diagram, tree diagram, or fishbone diagram. Alternatively, a formal root cause analysis method such as Apollo Root Cause Analysis can be implemented and used to identify causes and solutions. An effective root cause analysis method and/or tool will provide the most effective and efficient solutions to address problems in the Problem Management process.
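A minimal sketch of the problem-control records described above: incidents with common symptoms are grouped under a problem, which becomes a known error once a root cause and work-around exist. The class and field names are invented, not an ITIL-mandated schema.

```python
# Sketch of the problem -> known-error flow; the schema is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Problem:
    symptom: str
    incident_ids: list[int] = field(default_factory=list)
    root_cause: str | None = None
    workaround: str | None = None

    @property
    def is_known_error(self) -> bool:
        # Known error = diagnosed root cause plus an identified work-around.
        return self.root_cause is not None and self.workaround is not None

p = Problem("intermittent login failures")
p.incident_ids += [101, 114, 123]   # incidents exhibiting common symptoms
p.root_cause = "expired directory-service certificate"
p.workaround = "restart the auth service until the certificate is renewed"
print("known error" if p.is_known_error else "still a problem")
```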


Identity management/access and identity management


Identity management (IdM), less commonly called Access and Identity Management (AIM), is a process that focuses on granting authorised users the right to use a service, while preventing access by non-authorised users. Certain identity management processes execute policies defined in the Information Security Management System.
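A minimal sketch of the access decision IdM makes: grant a service to authorised users and deny everyone else. The policy and group tables below are invented stand-ins for rules an Information Security Management System would actually define.

```python
# Toy access decision: a user may use a service only if one of their groups
# is authorised for it. All names and group memberships are invented.
POLICY = {"payroll-app": {"hr"}, "wiki": {"hr", "engineering"}}
USER_GROUPS = {"alice": {"hr"}, "bob": {"engineering"}}

def may_access(user: str, service: str) -> bool:
    return bool(USER_GROUPS.get(user, set()) & POLICY.get(service, set()))

print(may_access("alice", "payroll-app"))  # True
print(may_access("bob", "payroll-app"))    # False
```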

Continual service improvement (CSI)


Continual service improvement, defined in the ITIL continual service improvement volume,[10] aims to align and realign IT services to changing business needs by identifying and implementing improvements to the IT services that support the business processes. It incorporates many of the same concepts articulated in the Deming Cycle of Plan-Do-Check-Act. The perspective of CSI on improvement is the business perspective of service quality, even though CSI aims to improve process effectiveness, efficiency and cost-effectiveness of the IT processes through the whole lifecycle. To manage improvement, CSI should clearly define what should be controlled and measured. CSI needs to be treated just like any other service practice: there needs to be upfront planning, training and awareness, ongoing scheduling, roles created, ownership assigned, and activities identified in order to be successful. CSI must be planned and scheduled as a process with defined activities, inputs, outputs, roles and reporting. Continual Service Improvement and Application Performance Management (APM) are two sides of the same coin. They both focus on improvement, with APM tying together service design, service transition, and service operation, which in turn helps raise the bar of operational excellence for IT.[14] Improvement initiatives typically follow a seven-step process (a toy rendering as code follows the list):
1. Identify the strategy for improvement
2. Define what you will measure
3. Gather the data
4. Process the data
5. Analyse the information and data
6. Present and use the information
7. Implement improvement
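The seven steps can be read as a literal pipeline. The toy sketch below renders each step as a function over an invented metric; real initiatives would plug in their own measurements and actions.

```python
# The seven-step improvement process as a pipeline. Data are invented.
from statistics import mean

def identify_strategy():  return "cut incident resolution time"
def define_measure():     return "resolution hours per incident"
def gather(_metric):      return [5.0, 9.5, 4.0, 12.0, 6.5]      # raw data
def process(data):        return sorted(data)                    # clean/normalise
def analyse(data):        return {"avg": mean(data), "worst": max(data)}
def present(findings):    print("findings:", findings); return findings
def implement(findings):  print("action: review incidents near", findings["worst"])

strategy = identify_strategy()              # step 1
metric = define_measure()                   # step 2
implement(present(analyse(process(gather(metric)))))   # steps 3-7
```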


Overview of ITIL v2
The eight ITIL version 2 books and their disciplines are:
The IT service management sets
1. Service Support
2. Service Delivery
Other operational guidance
3. ICT infrastructure management
4. Security management
5. Application management
6. Software asset management
To assist with the implementation of ITIL practices, a further book was published (April 9, 2002) providing guidance on implementation (mainly of Service Management):
7. Planning to implement service management
This has more recently (January 26, 2006) been supplemented with guidelines for smaller IT units, not included in the original eight publications:
8. ITIL Small-scale implementation

Service support
The Service Support[15] ITIL discipline focuses on the User of the IT services and is primarily concerned with ensuring that they have access to the appropriate services to support the business functions. To a business, customers and users are the entry point to the process model. They get involved in service support by:
Asking for changes
Needing communication, updates
Having difficulties, queries
Real process delivery

The service desk functions as the single contact point for end-users' incidents. Its first function is always to document ("create") an incident. If there is a direct solution, it attempts to resolve the incident at the first level. If the service desk cannot solve the incident, it is passed to a 2nd/3rd level group within the incident management system. Incidents can initiate a chain of processes: incident management, problem management, change management, release management and configuration management. This chain of processes is tracked using the configuration management database (CMDB) - ITIL v3 refers to a configuration management system (CMS) - which records each process and creates output documents for traceability (quality management). Note that the CMDB/CMS does not have to be a single database; the solution can be federated.
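The escalation chain just described can be sketched as follows. The known-fix table and the level names are illustrative assumptions; the trail list merely stands in for the records a CMDB/CMS would keep.

```python
# Sketch of service-desk handling: always document the incident first, try a
# first-level fix, otherwise escalate. All table contents are invented.
FIRST_LEVEL_FIXES = {"password reset": "reset via self-service portal"}

def handle(incident: str, trail: list[str]) -> str:
    trail.append(f"created incident record: {incident}")   # document first
    if incident in FIRST_LEVEL_FIXES:
        trail.append(f"resolved at level 1: {FIRST_LEVEL_FIXES[incident]}")
        return "closed"
    trail.append("escalated to 2nd/3rd level within incident management")
    return "escalated"

trail: list[str] = []   # stand-in for CMDB/CMS traceability records
print(handle("password reset", trail))
print(handle("database outage", trail))
print(*trail, sep="\n")
```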


Service delivery
The service delivery[16] discipline concentrates on the proactive services that ICT must deliver to provide adequate support to business users. It focuses on the business as the customer of the ICT services (compare with: service support). The discipline consisted of the following processes:
Service level management
Capacity management
IT service continuity management
Availability management
Financial management

ICT infrastructure management


Information and Communication Technology (ICT) management[17] processes recommend best practice for requirements analysis, planning, design, deployment and ongoing operations management and technical support of an ICT infrastructure. The infrastructure management processes describe those processes within ITIL that directly relate to the ICT equipment and software involved in providing ICT services to customers:
ICT design and planning
ICT deployment
ICT operations
ICT technical support

These disciplines are less well understood than those of service management, and therefore some of their content is often believed to be covered 'by implication' in service management disciplines.

ICT design and planning
ICT design and planning provides a framework and approach for the strategic and technical design and planning of ICT infrastructures. It includes the necessary combination of business (and overall IS) strategy with technical design and architecture. ICT design and planning drives both the procurement of new ICT solutions, through the production of statements of requirement ("SOR") and invitations to tender ("ITT"), and is responsible for the initiation and management of ICT Programmes for strategic business change. Key outputs from design and planning are:
ICT strategies, policies and plans
the ICT overall architecture & management architecture
feasibility studies, ITTs and SORs
business cases

ICT deployment management
ICT deployment provides a framework for the successful management of design, build, test and roll-out (deploy) projects within an overall ICT programme. It includes many project management disciplines in common with PRINCE2, but has a broader focus to include the necessary integration of release management and both functional and non-functional testing.

ICT operations management
ICT operations management provides the day-to-day technical supervision of the ICT infrastructure. Often confused with the role of incident management from service support, operations has a more technical bias and is concerned not solely with incidents reported by users, but with events generated by or recorded by the infrastructure. ICT operations may often work closely alongside incident management and the service desk, which are not necessarily technical, to provide an 'operations bridge'. Operations, however, should primarily work from documented processes and procedures, and should be concerned with a number of specific sub-processes, such as: output management, job scheduling, backup and restore, network monitoring/management, system monitoring/management, database monitoring/management and storage monitoring/management. Operations are responsible for the following:
a stable, secure ICT infrastructure
a current, up-to-date operational documentation library ("ODL")
a log of all operational events
maintenance of operational monitoring and management tools
operational scripts
operational procedures


ICT technical support
ICT technical support is the specialist technical function for infrastructure within ICT. Primarily a support to other processes, both in infrastructure management and service management, technical support provides a number of specialist functions: research and evaluation, market intelligence (particularly for design and planning and capacity management), proof-of-concept and pilot engineering, specialist technical expertise (particularly to operations and problem management), and creation of documentation (perhaps for the operational documentation library or known error database). There are different levels of support under the ITIL structure: primary, secondary and tertiary support levels, with higher-level administrators being responsible for support at the primary level.

Planning to implement service management


The ITIL discipline planning to implement service management[18] attempts to provide practitioners with a framework for the alignment of business needs and IT provision requirements. The processes and approaches incorporated within the guidelines suggest the development of a continuous service improvement program (CSIP) as the basis for implementing other ITIL disciplines as projects within a controlled program of work. Planning to implement service management focuses mainly on the service management processes, but also applies generically to other ITIL disciplines. Components include:
creating vision
analysing organisation
setting goals
implementing IT service management

Small-scale implementation
ITIL Small-scale implementation[19] provides an approach to ITIL framework implementation for smaller IT units or departments. It is primarily an auxiliary work that covers many of the same best practice guidelines as planning to implement service management, service support, and service delivery but provides additional guidance on the combination of roles and responsibilities, and avoiding conflict between ITIL priorities.

Criticisms of ITIL
ITIL has been criticised on several fronts, including:
the books are not affordable for non-commercial users
implementation and accreditation requires specific training
debate over ITIL falling under BSM or ITSM frameworks
the ITIL details are not aligned with other frameworks like ITSM

Rob England (also known as "IT Skeptic") has criticised the protected and proprietary nature of ITIL.[20] He urges the publisher, OGC, to release ITIL under the Open Government Licence (OGL).[21] CIO Magazine columnist Dean Meyer has also presented some cautionary views of ITIL,[22] including five pitfalls such as "becoming a slave to outdated definitions" and "letting ITIL become religion". As he notes, "...it doesn't describe the complete range of processes needed to be world class. It's focused on ... managing ongoing services."
In a 2004 survey designed by Noel Bruton (author of "How to Manage the IT Helpdesk" and "Managing the IT Services Process"), organisations adopting ITIL were asked to relate their actual experiences in having implemented ITIL. Seventy-seven percent of survey respondents either agreed or strongly agreed that "ITIL does not have all the answers". ITIL exponents accept this, citing ITIL's stated intention to be non-prescriptive and its expectation that organisations engage ITIL processes with existing process models. Bruton notes that the claim to non-prescriptiveness must be, at best, one of scale rather than absolute intention, for the very description of a certain set of processes is in itself a form of prescription.[23]
While ITIL addresses in depth the various aspects of service management, it does not address enterprise architecture in such depth. Many of the shortcomings in the implementation of ITIL do not necessarily come about because of flaws in the design or implementation of the service management aspects of the business, but rather from the wider architectural framework in which the business is situated. Because of its primary focus on service management, ITIL has limited utility in managing poorly designed enterprise architectures, or in feeding back into the design of the enterprise architecture.
Closely related to the architectural criticism, ITIL does not directly address the business applications which run on the IT infrastructure, nor does it facilitate a more collaborative working relationship between development and operations teams. The trend toward a closer working relationship between development and operations is termed DevOps. This trend is related to increased application release rates and the adoption of agile software development methodologies. Traditional service management processes have struggled to support increased application release rates due to lack of automation and/or highly complex enterprise architecture.
Some researchers group ITIL with lean, Six Sigma and Agile software development operations management. Applying Six Sigma techniques to ITIL brings an engineering approach to ITIL's framework. Applying Lean techniques promotes continuous improvement of ITIL's best practices. However, ITIL itself is not a transformation method, nor does it offer one; readers are required to find and associate such a method. Some vendors have also included the term Lean when discussing ITIL implementations, for example "Lean-ITIL". The initial consequences of an ITIL initiative tend to add cost, with benefits promised as a future deliverable. ITIL does not provide usable methods "out of the box" to identify and target waste, document the customer value stream as required by Lean, or measure customer satisfaction.


Frameworks related to ITIL


A number of frameworks exist in the field of IT Service Management alongside ITIL.

ITIL descendants
The Microsoft Operations Framework (MOF) is based on ITIL v2. While ITIL deliberately aims to be platform-agnostic, MOF is designed by Microsoft to provide a common management framework for its products. Microsoft has mapped MOF to ITIL as part of their documentation of the framework.[24]
The British Educational Communications and Technology Agency (BECTA) used ITIL as the basis for their development of the Framework for ICT Technical Support [25] (FITS). Their aim was to develop a framework appropriate for British schools, which often have very small IT departments. FITS became independent from BECTA in 2009 and is now maintained and supported by The FITS Foundation. FITS is now used in more than a thousand schools in the UK, Australia and Norway as the standard for ICT Service Management in the Education sector.


Other frameworks
ITIL is generally equivalent in scope to the ISO/IEC 20000 standard (previously BS 15000).[26] While it is not possible for an organization to be certified as being ITIL compliant, certification of an organisation is available for ISO/IEC 20000.[27]
COBIT is an IT governance framework and supporting toolset developed by ISACA. ISACA views ITIL as being complementary to COBIT: COBIT provides a governance and assurance role, while ITIL provides guidance for service management.[28]
The enhanced Telecom Operations Map (eTOM), published by the TeleManagement Forum, offers a framework aimed at telecommunications service providers. In a joint effort, TM Forum and itSMF developed an Application Note to eTOM (GB921) that shows how the two frameworks can be mapped to each other. It addresses how eTOM process elements and flows can be used to support the processes identified in ITIL.[29][30]
IBM Tivoli Unified Process (ITUP) is aligned with ITIL, but is presented as a complete, integrated process model compatible with IBM's products.

Certification
Individuals
The certification scheme differs between ITIL v2 and ITIL v3, and bridge examinations allow owners of v2 certificates to transfer to the new program. ITIL v2 offers three certification levels: Foundation, Practitioner and Manager. These have been progressively discontinued in favor of the new ITIL v3 scheme. ITIL v3 certification levels are: Foundation, Intermediate, Expert and Master.
The ITIL v3 certification scheme offers a modular approach. Each qualification is assigned a credit value, so that upon successful completion of the module the candidate is rewarded with both a certification and a number of credits. At the lowest level, Foundation, candidates are awarded a certification and 2 credits. At the Intermediate level, a total of 15 credits must be earned. These credits may be accumulated in either a "Lifecycle" stream or a "Capability" stream, or a combination thereof. Each Lifecycle module and exam is 3 credits; each Capability module and corresponding exam is 4 credits.
A candidate wanting to achieve the Expert level will have, among other requirements, to gain the required number of credits (22). That is accomplished with 2 credits from Foundation, then 15 from Intermediate, and finally 5 credits from the "Managing Across the Lifecycle" exam. Together, the total of 22 earned credits designates one as ITIL v3 Expert (a minimal credit tally is sketched at the end of this section).[31]
The ITIL Certification Management Board (ICMB) manages ITIL certification. The Board includes representatives from interested parties within the community around the world. Members of the Board include (though are not limited to) representatives from the UK Office of Government Commerce (OGC), APM Group (APMG), The Stationery Office (TSO), the V3 Examination Panel, Examination Institutes (EIs) and the IT Service Management Forum International (itSMF) as the recognised user group.[32]
Since the early 1990s, EXIN and ISEB have been setting up the ITIL-based certification program, developing and providing ITIL exams at three different levels: Foundation, Practitioner and Manager. EXIN[33] and BCS/ISEB[34] (the British Computer Society) have from that time onwards been the only two examination providers in the world to develop formally acknowledged ITIL certifications, provide ITIL exams and accredit ITIL training providers worldwide. These rights were obtained from OGC, the British government institution and owner of the ITIL trademark. OGC signed over the management of the ITIL trademark and the accreditation of examination providers to APMG in 2006. Now, after signing contracts with EXIN,[33] BCS/ISEB, Loyalist Certification Services,[35] PEOPLECERT Group[36] and other certification bodies, APMG is accrediting them as official examination bodies to offer ITIL exams and accredit ITIL training providers.
On July 20, 2006, the OGC signed a contract with the APM Group[37] to become its commercial partner for ITIL accreditation from January 1, 2007.[38] APMG manages the ITIL Version 3 exams. APMG maintains a voluntary register of ITIL v2 and v3 certified practitioners at their Successful Candidate Register.[39]
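As promised above, a minimal tally of the ITIL v3 credit arithmetic: Foundation (2) plus at least 15 Intermediate credits plus Managing Across the Lifecycle (5) reaches the 22 required for Expert. The particular mix of modules below is only one possibility.

```python
# Tallying the ITIL v3 credit figures quoted above. The five-Lifecycle mix is
# just one way to reach the 15 Intermediate credits; any Lifecycle/Capability
# combination totalling at least 15 would do.
FOUNDATION, MALC, REQUIRED = 2, 5, 22
LIFECYCLE, CAPABILITY = 3, 4          # credits per Intermediate module

intermediate = 5 * LIFECYCLE          # e.g. all five Lifecycle modules = 15
total = FOUNDATION + intermediate + MALC
assert intermediate >= 15 and total >= REQUIRED
print(f"{FOUNDATION} + {intermediate} + {MALC} = {total} credits -> Expert")
```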


ITIL pins
Following the passing of an APMG/EXIN exam in IT service management (based on ITIL), some people will wear a metal pin on their shirt or jacket. This badge, with basic gold color, is set in the form of the ITIL logo. The ITIL pins consist of a small, diamond-like structure. The meaning and the shape of the diamond is meant to depict coherence in the IT industry (infrastructure as well). The four corners of the pin symbolise service support, service delivery, infrastructure management and IT management.
There are five colors of ITIL v3 pins - each corresponds to the color of the associated core publication:
ITILv3 Foundation Badge (Pastel Green). This ITIL lapel pin takes its color from the ITIL Service Strategy book and is awarded on successful completion of the ITIL v3 Foundation exam.
ITILv3 Intermediate Capability Badge (Burgundy). There are four ITIL v3 Capability courses (RCV, OSA, SOA, PPO). You are able to apply for this lapel pin once you have passed each exam. This badge shares its color with the ITIL Service Transition book.
ITILv3 Intermediate Lifecycle Badge (Teal). For each of the five ITIL v3 Lifecycle courses (SS, SD, ST, SO, CSI), candidates receive this lapel pin after passing the exam. The color for this pin is based on the ITIL Service Operation book.
ITILv3 Expert Badge (Lilac). This is currently the highest qualification available with ITIL v3. The lapel pin is awarded when a candidate attains 22 credits through a combination of ITIL training courses. The pin takes its color from the ITIL Continual Service Improvement book.
ITILv3 Master Badge (Purple). Currently in pilot phase, this qualification has no training course or exam associated with it. To gain qualification as an ITIL Master, candidates have to have their work assessed by a panel of experts. Once an ITIL Expert has achieved this status, the ITIL Master can wear a lapel pin based on the color of the ITIL Service Design book.
There are three colors of ITIL v2 pins:
ITILv2 Foundation Badge (green)
ITILv2 Practitioner Badge (blue)
ITILv2 Manager Badge (red)
Exam candidates who have successfully passed the examinations for ITIL will receive their appropriate pin from APMG, EXIN or their certification provider's regional office or agent.


Organizations
Organizations and management systems cannot claim certification as "ITIL-compliant". An organization that has implemented ITIL guidance in IT Service Management (ITSM) may, however, be able to achieve compliance with and seek certification under ISO/IEC 20000. Note that there are some significant differences between ISO/IEC 20000 and ITIL Version 3:[40]
ISO/IEC 20000 only recognises the management of financial assets, not assets which include "management, organization, process, knowledge, people, information, applications, infrastructure and financial capital", nor the concept of a "service asset". So ISO/IEC 20000 certification does not address the management of 'assets' in an ITIL sense.
ISO/IEC 20000 does not recognise the Configuration Management System (CMS) or the Service Knowledge Management System (SKMS), and so does not certify anything beyond the Configuration Management Database (CMDB).
An organization can obtain ISO/IEC 20000 certification without recognising or implementing the ITIL concept of Known Error, which is usually considered essential to ITIL.

References
[1] David Clifford, Jan van Bon (2008). Implementing ISO/IEC 20000 Certification: The Roadmap. ITSM Library. Van Haren Publishing. ISBN 90-8753-082-X.
[2] Office of Government Commerce (UK) CCTA and OGC (http://www.ogc.gov.uk/index.asp?id=1878). Retrieved May 5, 2005.
[3] Office of Government Commerce (UK) (http://www.ogc.gov.uk/guidance_itil.asp). Retrieved August 19, 2009.
[4] http://www.ogc.gov.uk/itil_ogc_withdrawal_of_itil_version2.asp
[5] http://www.itil-officialsite.com/nmsruntime/saveasdialog.aspx?lID=1193&sID=58
[6] David Cannon (2011). ITIL Service Strategy 2011 Edition. The Stationery Office. ISBN 978-0113313044.
[7] Lou Hunnebeck (2011). ITIL Service Design. The Stationery Office. ISBN 978-0113313051.
[8] Stuart Rance (2011). ITIL Service Transition. The Stationery Office. ISBN 978-0113313068.
[9] Randy A. Steinberg (2011). ITIL Service Operation. The Stationery Office. ISBN 978-0113313075.
[10] Vernon Lloyd (2011). ITIL Continual Service Improvement. The Stationery Office. ISBN 978-0113313082.
[11] Cazemier, Jacques A.; Overbeek, Paul L.; Peters, Louk M. (2000). Security Management. The Stationery Office. ISBN 0-11-330014-X.
[12] Office of Government Commerce (2002). Application Management. The Stationery Office. ISBN 0-11-330866-3.
[13] "The DNA of APM - Event to Incident Flow" (http://www.apmdigest.com/dna-application-performance-management). APM Digest. 4 June 2012.
[14] "Prioritizing Gartner's APM Model: The APM Conceptual Framework" (http://www.apmdigest.com/prioritizing-gartners-apm-model). APM Digest. 15 March 2012.
[15] Office of Government Commerce (2000). Service Support. The Stationery Office. ISBN 0-11-330015-8.
[16] Office of Government Commerce (2001). Service Delivery. IT Infrastructure Library. The Stationery Office. ISBN 0-11-330017-4.
[17] Office of Government Commerce (2002). ICT Infrastructure Management. The Stationery Office. ISBN 0-11-330865-5.
[18] Office of Government Commerce (2002). Planning to Implement Service Management. The Stationery Office. ISBN 0-11-330877-9.
[19] Office of Government Commerce (2005). ITIL Small Scale Implementation. The Stationery Office. ISBN 0-11-330980-5.
[20] http://www.itskeptic.org/free-itil
[21] http://www.nationalarchives.gov.uk/doc/open-government-licence/open-government-licence.htm
[22] Meyer, Dean, 2005. "Beneath the Buzz: ITIL" (http://web.archive.org/web/20050404165524/http://www.cio.com/leadership/buzz/column.html?ID=4186), CIO Magazine, March 31, 2005.
[23] Survey: "The ITIL Experience - Has It Been Worth It", author Bruton Consultancy 2004, published by Helpdesk Institute Europe, The Helpdesk and IT Support Show, and Hornbill Software.
[24] Microsoft Operations Framework; Cross Reference ITIL v3 and MOF 4.0 (http://go.microsoft.com/fwlink/?LinkId=151991). Microsoft Corporation. May 2009.
[25] http://www.thefitsfoundation.org
[26] Van Bon, Jan; Verheijen, Tieneke (2006), Frameworks for IT Management (http://books.google.com/books?id=RV3jQ16F1_cC), Van Haren Publishing, ISBN 978-90-77212-90-5.
[27] http://www.itsmsolutions.com/newsletters/DITYvol2iss3.htm
[28] ISACA (2008), COBIT Mapping: Mapping of ITIL V3 With COBIT 4.1 (http://www.isaca.org/Knowledge-Center/Research/ResearchDeliverables/Pages/COBIT-Mapping-Mapping-of-ITIL-V3-With-COBIT-4-1.aspx), ITGI, ISBN 978-1-60420-035-5.
[29] Brooks, Peter (2006), Metrics for IT Service Management (http://books.google.com/books?id=UeWDivqKcm0C), Van Haren Publishing, pp. 76-77, ISBN 978-90-77212-69-1.
[30] Morreale, Patricia A.; Terplan, Kornel (2009), "3.6.10.2 Matching ITIL to eTOM" (http://books.google.com/books?id=VEp0aMmH3iQC), CRC Handbook of Modern Telecommunications, Second Edition (2 ed.), CRC Press, ISBN 978-1-4200-7800-8.
[31] ITIL V3 Qualification Scheme (http://www.itil-officialsite.com/Qualifications/ITILV3QualificationScheme.aspx). OGC Official Site. Retrieved 2011-05-02.
[32] APMG (2008). "ITIL Service Management Practices: V3 Qualifications Scheme" (http://www.itil-officialsite.com/nmsruntime/saveasdialog.asp?lID=572&sID=86). Retrieved 24 February 2009.
[33] "EXIN Exams" (http://www.exin-exams.com/). EXIN Exams. Retrieved 2010-01-14.
[34] "ISEB Professionals Qualifications, Training, Careers - BCS - The Chartered Institute for IT" (http://www.bcs.org/server.php?show=nav.5732). BCS. Retrieved 2010-01-14.
[35] http://www.loyalistexams.com
[36] http://www.peoplecert.org
[37] http://www.apmgroupltd.com/
[38] Office of Government Commerce (2006). "Best Practice portfolio: new contracts awarded for publishing and accreditation services" (http://www.ogc.gov.uk/About_OGC_news_4906.asp). Retrieved 19 September 2006.
[39] http://www.apmg-international.com/ITILSCRquery.asp
[40] Office of Government Commerce (2008). "Best Management Practice: ITIL V3 and ISO/IEC 20000" (http://www.best-management-practice.com/gempdf/ITIL_and_ISO_20000_March08.pdf). Retrieved 24 February 2009.


External links
Official ITIL Website (http://www.itil-officialsite.com/home/home.asp)
The Cabinet Office Best Management Practice (BMP) Portfolio web site (http://www.cabinetoffice.gov.uk/resource-library/best-management-practice-bmp-portfolio/)

Project management
Project management is the discipline of planning, organizing, securing, managing, leading, and controlling resources to achieve specific goals. A project is a temporary endeavor with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables),[1] undertaken to meet unique goals and objectives,[2] typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations),[3] which are repetitive, permanent, or semi-permanent functional activities to produce products or services. In practice, the management of these two systems is often quite different, and as such requires the development of distinct technical skills and management strategies. The primary challenge of project management is to achieve all of the project goals[4] and objectives while honoring the preconceived constraints.[5] The primary constraints are scope, time, quality and budget.[6] The secondary and more ambitious challenge is to optimize the allocation of necessary inputs and integrate them to meet pre-defined objectives.


History
Until 1900, civil engineering projects were generally managed by creative architects, engineers, and master builders themselves, for example Vitruvius (first century BC), Christopher Wren (1632–1723), Thomas Telford (1757–1834) and Isambard Kingdom Brunel (1806–1859).[7] It was in the 1950s that organizations started to systematically apply project management tools and techniques to complex engineering projects.[8]

Roman soldiers building a fortress, Trajan's Column 113 AD

As a discipline, project management developed from several fields of application including civil construction, engineering, and heavy defense activity.[9] Two forefathers of project management are Henry Gantt (1861–1919), called the father of planning and control techniques,[10] who is famous for his use of the Gantt chart as a project management tool (an alternative is the Harmonogram, first proposed by Karol Adamiecki[11]); and Henri Fayol, for his creation of the five management functions that form the foundation of the body of knowledge associated with project and program management.[12] Both Gantt and Fayol were students of Frederick Winslow Taylor's theories of scientific management. His work is the forerunner to modern project management tools including the work breakdown structure (WBS) and resource allocation.
The 1950s marked the beginning of the modern project management era, where core engineering fields came together to work as one. Project management became recognized as a distinct discipline arising from the management discipline with the engineering model.[13] In the United States, prior to the 1950s, projects were managed on an ad-hoc basis, using mostly Gantt charts and informal techniques and tools. At that time, two mathematical project-scheduling models were developed. The "Critical Path Method" (CPM) was developed as a joint venture between DuPont Corporation and Remington Rand Corporation for managing plant maintenance projects. The "Program Evaluation and Review Technique" (PERT) was developed by Booz Allen Hamilton as part of the United States Navy's Polaris missile submarine program (in conjunction with the Lockheed Corporation).[14] These mathematical techniques quickly spread into many private enterprises.


At the same time, as project-scheduling models were being developed, technology for project cost estimating, cost management, and engineering economics was evolving, with pioneering work by Hans Lang and others. In 1956, the American Association of Cost Engineers (now AACE International; the Association for the Advancement of Cost Engineering) was formed by early practitioners of project management and the associated specialties of planning and scheduling, cost estimating, and cost/schedule control (project control). AACE continued its pioneering work and in 2006 released the first integrated process for portfolio, program and project management (Total Cost Management Framework).

PERT network chart for a seven-month project with five milestones

The International Project Management Association (IPMA) was founded in Europe in 1967,[15] as a federation of several national project management associations. IPMA maintains its federal structure today and now includes member associations on every continent except Antarctica. IPMA offers a Four Level Certification program based on the IPMA Competence Baseline (ICB).[16] The ICB covers technical, contextual, and behavioral competencies. In 1969, the Project Management Institute (PMI) was formed in the USA.[17] PMI publishes A Guide to the Project Management Body of Knowledge (PMBOK Guide), which describes project management practices that are common to "most projects, most of the time." PMI also offers multiple certifications.

Approaches
There are a number of approaches to managing project activities including lean, iterative, incremental, and phased approaches. Regardless of the methodology employed, careful consideration must be given to the overall project objectives, timeline, and cost, as well as the roles and responsibilities of all participants and stakeholders.

The traditional approach


A traditional phased approach identifies a sequence of steps to be completed. In the "traditional approach", five developmental components of a project can be distinguished (four stages plus control):
1. initiation
2. planning and design
3. execution and construction
4. monitoring and controlling systems
5. completion
Typical development phases of an engineering project
Not all projects will have every stage, as projects can be terminated before they reach completion. Some projects do not follow a structured planning and/or monitoring process. And some projects will go through steps 2, 3 and 4 multiple times.
Many industries use variations of these project stages. For example, when working on a brick-and-mortar design and construction, projects will typically progress through stages like pre-planning, conceptual design, schematic design, design development, construction drawings (or contract documents), and construction administration. In software development, this approach is often known as the waterfall model,[18] i.e., one series of tasks after another in linear sequence. In software development many organizations have adapted the Rational Unified Process (RUP) to fit this methodology, although RUP does not require or explicitly recommend this practice. Waterfall development works well for small, well-defined projects, but often fails in larger projects of undefined and ambiguous nature. The Cone of Uncertainty explains some of this, as the planning made in the initial phase of the project suffers from a high degree of uncertainty. This becomes especially true as software development is often the realization of a new or novel product. In projects where requirements have not been finalized and can change, requirements management is used to develop an accurate and complete definition of the behavior of software that can serve as the basis for software development.[19] While the terms may differ from industry to industry, the actual stages typically follow common steps to problem solving: "defining the problem, weighing options, choosing a path, implementation and evaluation."


PRINCE2
PRINCE2 is a structured approach to project management, released in 1996 as a generic project management method.[20] It combined the original PROMPT methodology (which evolved into the PRINCE methodology) with IBM's MITP (managing the implementation of the total project) methodology. PRINCE2 provides a method for managing projects within a clearly defined framework. PRINCE2 describes procedures to coordinate people and activities in a project, how to design and supervise the project, and what to do if the project has to be adjusted if it does not develop as planned.

The PRINCE2 process model

In the method, each process is specified with its key inputs and outputs and with specific goals and activities to be carried out. This allows for automatic control of any deviations from the plan. Divided into manageable stages, the method enables an efficient control of resources. On the basis of close monitoring, the project can be carried out in a controlled and organized way. PRINCE2 provides a common language for all participants in the project. The various management roles and responsibilities involved in a project are fully described and are adaptable to suit the complexity of the project and skills of the organization.

PRiSM (Projects integrating Sustainable Methods)


PRiSM[21] is a structured project management method developed to align organizational sustainability initiatives with project delivery. By design, PRiSM is a repeatable, practical and proactive methodology that ensures project success while decreasing an organization's negative environmental impact. The methodology encompasses the management, control and organization of a project with consideration and emphasis beyond the project life-cycle and on the five aspects of sustainability. PRiSM is also used to refer to the training and accreditation of authorized practitioners of the methodology who must undertake accredited qualifications based on competency to obtain the GPM certification.[22]


Critical chain project management


Critical chain project management (CCPM) is a method of planning and managing project execution designed to deal with uncertainties inherent in managing projects, while taking into consideration the limited availability of resources (physical and human skills, as well as management and support capacity) needed to execute projects. CCPM is an application of the Theory of Constraints (TOC) to projects. The goal is to increase the flow of projects in an organization (throughput). Applying the first three of the five focusing steps of TOC, the system constraint for all projects is identified, as are the resources. To exploit the constraint, tasks on the critical chain are given priority over all other activities. Finally, projects are planned and managed to ensure that the resources are ready when the critical chain tasks must start, subordinating all other resources to the critical chain. The project plan should typically undergo resource leveling, and the longest sequence of resource-constrained tasks should be identified as the critical chain (a toy scheduler illustrating this appears below). In some cases, such as managing contracted sub-projects, it is advisable to use a simplified approach without resource leveling. In multi-project environments, resource leveling should be performed across projects. However, it is often enough to identify (or simply select) a single "drum". The drum can be a resource that acts as a constraint across projects, which are staggered based on the availability of that single resource. One can also use a "virtual drum" by selecting a task or group of tasks (typically integration points) and limiting the number of projects in execution at that stage.
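A toy serial scheduler in the spirit of CCPM's resource-constrained chain: a task starts only when its predecessors are done and its (single) resource is free, so the chain that sets the finish date can include resource links as well as logic links. Task data and the ordering rule are invented for illustration and ignore buffers and multi-resource tasks.

```python
# Minimal resource-aware scheduling sketch; data and tie-breaking are invented.
tasks = {  # name: (duration, resource, predecessors)
    "A": (3, "dev", []), "B": (2, "dev", []), "C": (4, "test", ["A", "B"]),
}
finish: dict[str, int] = {}
resource_free: dict[str, int] = {}

for name in sorted(tasks, key=lambda t: len(tasks[t][2])):  # simple ordering
    dur, res, preds = tasks[name]
    # Earliest start: all predecessors finished AND the resource is free.
    start = max([finish[p] for p in preds] + [resource_free.get(res, 0)])
    finish[name] = start + dur
    resource_free[res] = finish[name]

print(finish)  # A and B share 'dev', so B waits for A: a resource link,
               # not a logic link, extends the chain that ends at C.
```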

Event chain methodology


Event chain methodology is another method that complements the critical path method and critical chain project management methodologies. Event chain methodology is an uncertainty modeling and schedule network analysis technique that is focused on identifying and managing events and event chains that affect project schedules. Event chain methodology helps to mitigate the negative impact of psychological heuristics and biases, and allows for easy modeling of uncertainties in the project schedules. Event chain methodology is based on the following principles:
Probabilistic moment of risk: An activity (task) in most real-life processes is not a continuous uniform process. Tasks are affected by external events, which can occur at some point in the middle of the task.
Event chains: Events can cause other events, which will create event chains. These event chains can significantly affect the course of the project. Quantitative analysis is used to determine a cumulative effect of these event chains on the project schedule.
Critical events or event chains: The single events or the event chains that have the most potential to affect the projects are the critical events or critical chains of events. They can be determined by the analysis.
Project tracking with events: Even if a project is partially completed and data about the project duration, cost, and events occurred is available, it is still possible to refine information about future potential events and help to forecast future project performance.
Event chain visualization: Events and event chains can be visualized using event chain diagrams on a Gantt chart.
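A Monte Carlo sketch of an event chain under the principles above: a risk event can strike one task and trigger a follow-on event on the next, and repeated simulation yields a distribution of schedule outcomes. All probabilities and durations below are invented.

```python
# Monte Carlo sketch of a two-task event chain; all figures are invented.
import random

def simulate() -> float:
    d1, d2 = 10.0, 8.0                    # planned task durations
    if random.random() < 0.3:             # a risk event strikes task 1...
        d1 += 4.0
        if random.random() < 0.5:         # ...and may chain into task 2
            d2 += 6.0
    return d1 + d2

random.seed(1)
runs = sorted(simulate() for _ in range(10_000))
print("mean:", sum(runs) / len(runs))     # cumulative effect of the chain
print("p80:", runs[int(0.8 * len(runs))]) # schedule with 80% confidence
```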


Process-based management
Also furthering the concept of project control is the incorporation of process-based management. This area has been driven by the use of maturity models such as the CMMI (Capability Maturity Model Integration) and ISO/IEC 15504 (SPICE - Software Process Improvement and Capability Determination).

Agile project management


Agile project management approaches based on the principles of human interaction management are founded on a process view of human collaboration. It is "most typically used in software, website, technology, creative and marketing industries."[23] This contrasts sharply with the traditional approach. In the agile software development or flexible product development approach, the project is seen as a series of relatively small tasks conceived and executed as the situation demands in an adaptive manner, rather than as a completely pre-planned process.

The iteration cycle in agile project management

Lean project management


Lean project management combines principles from lean manufacturing with agile project management, to focus on delivering more value with less waste.

Extreme project management


In critical studies of project management, it has been noted that several PERT-based models are not well suited for the multi-project company environment of today. Most of them are aimed at very large-scale, one-time, non-routine projects, yet currently all kinds of management are expressed in terms of projects. Using complex models for "projects" (or rather "tasks") spanning a few weeks has been proven to cause unnecessary costs and low maneuverability in several cases. Instead, project management experts try to identify different "lightweight" models, such as Extreme Programming and Scrum. The generalization of Extreme Programming to other kinds of projects is extreme project management, which may be used in combination with the process modeling and management principles of human interaction management.
Planning and feedback loops in Extreme programming (XP) with the time frames of the multiple loops.

Benefits realisation management


Benefits realization management (BRM) enhances normal project management techniques through a focus on agreeing what outcomes should change (the benefits) during the project, and then measuring whether that is happening, to help keep a project on track. This can help to reduce the risk of a completed project being a failure: instead of attempting to deliver agreed requirements, the aim is to deliver the benefit of those requirements. An example of delivering a project to requirements could be agreeing to deliver a computer system to process staff data, with the requirement to manage payroll, holiday and staff personnel records. Under BRM, the agreement would be to use the supplier's suggested staff data system to see an agreed reduction in staff hours spent processing and maintaining staff data (benefit: reduced HR headcount).


Processes
Traditionally, project management includes a number of elements: four to five process groups, and a control system. Regardless of the methodology or terminology used, the same basic project management processes will be used. Major process groups generally include:[6]
initiation
planning or development
production or execution
monitoring and controlling
closing

In project environments with a significant exploratory element (e.g., research and development), these stages may be supplemented with decision points (go/no go decisions) at which the project's continuation is debated and decided. An example is the phase-gate model.

The project development stages[24]

Initiating
The initiating processes determine the nature and scope of the project.[25] If this stage is not performed well, it is unlikely that the project will be successful in meeting the business needs. The key project controls needed here are an understanding of the business environment and making sure that all necessary controls are incorporated into the project. Any deficiencies should be reported and a recommendation should be made to fix them. The initiating stage should include a plan that encompasses the following areas:
analyzing the business needs/requirements in measurable goals
reviewing the current operations
financial analysis of the costs and benefits, including a budget
stakeholder analysis, including users and support personnel for the project
project charter including costs, tasks, deliverables, and schedule


Planning and design


After the initiation stage, the project is planned to an appropriate level of detail (see example of a flow-chart).[24] The main purpose is to plan time, cost and resources adequately to estimate the work needed and to effectively manage risk during project execution. As with the initiation process group, a failure to adequately plan greatly reduces the project's chances of successfully accomplishing its goals. Project planning generally consists of:[26]
determining how to plan (e.g. by level of detail or rolling wave);
developing the scope statement;
selecting the planning team;
identifying deliverables and creating the work breakdown structure;
identifying the activities needed to complete those deliverables and networking the activities in their logical sequence (see the scheduling sketch at the end of this subsection);
estimating the resource requirements for the activities;
estimating time and cost for activities;
developing the schedule;
developing the budget;
risk planning;
gaining formal approval to begin work.
Additional processes, such as planning for communications and for scope management, identifying roles and responsibilities, determining what to purchase for the project and holding a kick-off meeting, are also generally advisable.
For new product development projects, conceptual design of the operation of the final product may be performed concurrent with the project planning activities, and may help to inform the planning team when identifying deliverables and planning activities.
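As referenced in the planning list, "networking the activities in their logical sequence" together with time estimates yields a schedule. The sketch below does a simple forward pass over an invented activity network to compute the earliest project finish.

```python
# Forward-pass scheduling sketch over an invented activity network.
activities = {  # name: (duration_days, predecessors)
    "scope": (2, []), "design": (5, ["scope"]),
    "build": (10, ["design"]), "docs": (4, ["design"]),
    "test": (3, ["build", "docs"]),
}

earliest_finish: dict[str, int] = {}
def ef(name: str) -> int:
    # Earliest finish = duration + latest earliest-finish of predecessors.
    if name not in earliest_finish:
        dur, preds = activities[name]
        earliest_finish[name] = dur + max((ef(p) for p in preds), default=0)
    return earliest_finish[name]

makespan = max(ef(a) for a in activities)
print(f"project duration: {makespan} days")   # 2 + 5 + 10 + 3 = 20
```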

Executing
Executing consists of the processes used to complete the work defined in the project plan to accomplish the project's requirements. The execution process involves coordinating people and resources, as well as integrating and performing the activities of the project in accordance with the project management plan. The deliverables are produced as outputs from the processes performed as defined in the project management plan and other frameworks that might be applicable to the type of project at hand.

Executing process group processes[24]


Monitoring and controlling


Monitoring and controlling consists of those processes performed to observe project execution so that potential problems can be identified in a timely manner and corrective action can be taken, when necessary, to control the execution of the project. The key benefit is that project performance is observed and measured regularly to identify variances from the project management plan. Monitoring and controlling includes:[27]

Monitoring and controlling process group processes[24]

Measuring the ongoing project activities ('where we are');
Monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline ('where we should be');
Identifying corrective actions to address issues and risks properly ('how can we get on track again');
Influencing the factors that could circumvent integrated change control, so only approved changes are implemented.
In multi-phase projects, the monitoring and control process also provides feedback between project phases, in order to implement corrective or preventive actions to bring the project into compliance with the project management plan. (A variance sketch follows at the end of this subsection.)
Project maintenance is an ongoing process, and it includes:[6]
Continuing support of end-users
Correction of errors
Updates of the software over time
In this stage, auditors should pay attention to how effectively and quickly user problems are resolved.
Over the course of any construction project, the work scope may change. Change is a normal and expected part of the construction process. Changes can be the result of necessary design modifications, differing site conditions, material availability, contractor-requested changes, value engineering and impacts from third parties, to name a few. Beyond executing the change in the field, the change normally needs to be documented to show what was actually constructed. This is referred to as change management. Hence, the owner usually requires a final record to show all changes or, more specifically, any change that modifies the tangible portions of the finished work. The record is made on the contract documents - usually, but not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as built". The requirement for providing them is a norm in construction contracts. When changes are introduced to the project, the viability of the project has to be re-assessed. It is important not to lose sight of the initial goals and targets of the projects. When the changes accumulate, the forecasted result may not justify the original proposed investment in the project.
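The variance sketch referenced above: compare actual cost and progress against the baseline and flag anything outside a tolerance. The figures and the 10% threshold are invented; a real project would take both from the project management plan.

```python
# Plan-vs-actual variance check; baseline, actuals and threshold are invented.
baseline = {"cost": 100_000, "percent_complete": 60}   # where we should be
actual   = {"cost": 115_000, "percent_complete": 48}   # where we are

for variable in baseline:
    variance = actual[variable] - baseline[variable]
    pct = 100 * variance / baseline[variable]
    flag = "  <- corrective action needed" if abs(pct) > 10 else ""
    print(f"{variable}: plan {baseline[variable]}, actual {actual[variable]}, "
          f"variance {pct:+.1f}%{flag}")
```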


Closing
Closing includes the formal acceptance of the project and the ending thereof. Administrative activities include the archiving of the files and documenting lessons learned. This phase consists of:[6]
Project close: finalize all activities across all of the process groups to formally close the project or a project phase
Contract closure: complete and settle each contract (including the resolution of any open items) and close each contract applicable to the project or project phase
Closing process group processes[24]

Project controlling and project control systems


Project controlling should be established as an independent function in project management. It implements a verification and controlling function during the processing of a project, in order to reinforce the defined performance and formal goals.[28] The tasks of project controlling also include:
the creation of infrastructure for the supply of the right information and its update
the establishment of a way to communicate disparities of project parameters
the development of project information technology based on an intranet, or the determination of a project key performance index (KPI) system
divergence analyses and generation of proposals for potential project regulations[29]
the establishment of methods to accomplish an appropriate project structure, project workflow organization, project control and governance
creation of transparency among the project parameters[30]
Fulfillment and implementation of these tasks can be achieved by applying specific methods and instruments of project controlling. The following methods of project controlling can be applied:
investment analysis
cost-benefit analyses
value benefit analysis
expert surveys
simulation calculations
risk-profile analyses
surcharge calculations
milestone trend analysis
cost trend analysis
target/actual comparison[31]

Project control is that element of a project that keeps it on-track, on-time and within budget.[27] Project control begins early in the project with planning and ends late in the project with post-implementation review, having a thorough involvement of each step in the process. Each project should be assessed for the appropriate level of control needed: too much control is too time-consuming, too little control is very risky. If project control is not implemented correctly, the cost to the business should be clarified in terms of errors, fixes, and additional audit fees.
Control systems are needed for cost, risk, quality, communication, time, change, procurement, and human resources. In addition, auditors should consider how important the projects are to the financial statements, how reliant the stakeholders are on controls, and how many controls exist. Auditors should review the development process and procedures for how they are implemented. The process of development and the quality of the final product may also be assessed if needed or requested. A business may want the auditing firm to be involved throughout the process to catch problems earlier on so that they can be fixed more easily. An auditor can serve as a controls consultant as part of the development team or as an independent auditor as part of an audit.
Businesses sometimes use formal systems development processes. These help assure that systems are developed successfully. A formal process is more effective in creating strong controls, and auditors should review this process to confirm that it is well designed and is followed in practice. A good formal systems development plan outlines:
A strategy to align development with the organization's broader objectives
Standards for new systems
Project management policies for timing and budgeting
Procedures describing the process
Evaluation of quality of change


Topics
Project managers
A project manager is a professional in the field of project management. Project managers can have the responsibility of the planning, execution, and closing of any project, typically relating to the construction industry, engineering, architecture, computing, and telecommunications. Many other fields, such as production engineering, design engineering, and heavy industry, have project managers.
A project manager is the person accountable for accomplishing the stated project objectives. Key project management responsibilities include creating clear and attainable project objectives, building the project requirements, and managing the triple constraint for projects, which is cost, time, and scope.
A project manager is often a client representative and has to determine and implement the exact needs of the client, based on knowledge of the firm they are representing. The ability to adapt to the various internal procedures of the contracting party, and to form close links with the nominated representatives, is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized.

Project management triangle


Like any human undertaking, projects need to be performed and delivered under certain constraints. Traditionally, these constraints have been listed as "scope," "time," and "cost".[1] These are also referred to as the "project management triangle", where each side represents a constraint. One side of the triangle cannot be changed without affecting the others. A further refinement of the constraints separates product "quality" or "performance" from scope, and turns quality into a fourth constraint. The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project's end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope.

The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.


Work breakdown structure


The work breakdown structure (WBS) is a tree structure that shows a subdivision of effort required to achieve an objective; for example a program, project, or contract. The WBS may be hardware-, product-, service-, or process-oriented (see an example in a NASA reporting structure (2001)).[32] A WBS can be developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks, sub-tasks, and work packages), which include all steps necessary to achieve the objective.[19] The work breakdown structure provides a common framework for the natural development of the overall planning and control of a contract and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established.[32]
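Since a WBS is essentially a tree, it can be represented directly as a recursive data structure. The following is a minimal sketch; the work-package names are invented purely for illustration and do not come from the cited sources.

import java.util.ArrayList;
import java.util.List;

public class WbsNode {
    private final String name;
    private final List<WbsNode> children = new ArrayList<>();

    public WbsNode(String name) { this.name = name; }

    // subdivide this element into a smaller, manageable component
    public WbsNode add(String childName) {
        WbsNode child = new WbsNode(childName);
        children.add(child);
        return child;
    }

    // print the subdivision of effort, one indent level per tree level
    public void print(String indent) {
        System.out.println(indent + name);
        for (WbsNode child : children) child.print(indent + "  ");
    }

    public static void main(String[] args) {
        WbsNode project = new WbsNode("1 Bicycle project");
        WbsNode frame = project.add("1.1 Frame subsystem");
        frame.add("1.1.1 Design frame");
        frame.add("1.1.2 Weld frame");
        project.add("1.2 Wheel subsystem");
        project.print("");
    }
}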

Project management framework


The Program (Investment) life cycle integrates the project management and system development life cycles with the activities directly associated with system deployment and operation. By design, system operation management and related activities occur after the project is complete and are not documented within this guide[24] (see an example of an IT project management framework). For example, in the United States Department of Veterans Affairs (VA) the program management life cycle is depicted and described in the overall VA IT Project Management Framework to address the integration of OMB Exhibit 300 project (investment) management activities and the overall project budgeting process. The VA IT Project Management Framework diagram illustrates Milestone 4, which occurs following the deployment of a system and the closing of the project. The project closing phase activities at the VA continue through system deployment and into system operation for the purpose of illustrating and describing the system activities the VA considers part of the project. The figure illustrates the actions and associated artifacts of the VA IT Project and Program Management process.[24]

International standards
There have been several attempts to develop project management standards, such as:
- Capability Maturity Model from the Software Engineering Institute.
- GAPPS, Global Alliance for Project Performance Standards: an open source standard describing competencies for project and program managers.
- A Guide to the Project Management Body of Knowledge from the Project Management Institute (PMI).
- HERMES method, Swiss general project management method, selected for use in Luxembourg and international organizations.
- The ISO standards ISO 9000, a family of standards for quality management systems, and ISO 10006:2003, for quality management systems and guidelines for quality management in projects.
- PRINCE2, PRojects IN Controlled Environments.
- Association for Project Management Body of Knowledge.[33]
- Team Software Process (TSP) from the Software Engineering Institute.
- Total Cost Management Framework, AACE International's methodology for integrated portfolio, program and project management.
- V-Model, an original systems development method.
- The logical framework approach, which is popular in international development organizations.
- IAPPM, The International Association of Project & Program Management, guide to project auditing and rescuing troubled projects.

Project portfolio management


An increasing number of organizations are using what is referred to as project portfolio management (PPM) as a means of selecting the right projects and then using project management techniques[34] as the means for delivering the outcomes, in the form of benefits, to the performing private or not-for-profit organization.

References
[1] Chatfield, Carl. "A short course in project management" (http://office.microsoft.com/en-us/project/HA102354821033.aspx). Microsoft.
[2] Nokes, Sebastian (2007). The Definitive Guide to Project Management (2nd ed.). London: Financial Times / Prentice Hall. ISBN 978-0-273-71097-4.
[3] Paul C. Dinsmore et al. (2005). The Right Projects Done Right! John Wiley and Sons. ISBN 0-7879-7113-8. p. 35 and further.
[4] Lewis R. Ireland (2006). Project Management. McGraw-Hill Professional. ISBN 0-07-147160-X. p. 110.
[5] Joseph Phillips (2003). PMP Project Management Professional Study Guide. McGraw-Hill Professional. ISBN 0-07-223062-2. p. 354.
[6] PMI (2010). A Guide to the Project Management Body of Knowledge. pp. 27-35.
[7] Dennis Lock (2007). Project Management (9th ed.). Gower Publishing, Ltd. ISBN 0-566-08772-3.
[8] Young-Hoon Kwak (2005). "A brief history of project management". In: The Story of Managing Projects. Elias G. Carayannis et al. (eds). Greenwood Publishing Group. ISBN 1-56720-506-2.
[9] David I. Cleland, Roland Gareis (2006). Global Project Management Handbook. "Chapter 1: The evolution of project management". McGraw-Hill Professional. ISBN 0-07-146045-4.
[10] Martin Stevens (2002). Project Management Pathways. Association for Project Management. APM Publishing Limited. ISBN 1-903494-01-X. p. xxii.
[11] Edward R. Marsh (1975). "The Harmonogram of Karol Adamiecki". In: The Academy of Management Journal, Vol. 18, No. 2 (June 1975), p. 358. (online: http://www.jstor.org/pss/255537)
[12] Morgen Witzel (2003). Fifty Key Figures in Management. Routledge. ISBN 0-415-36977-0. pp. 96-101.
[13] David I. Cleland, Roland Gareis (2006). Global Project Management Handbook. McGraw-Hill Professional. ISBN 0-07-146045-4. pp. 1-4 state: "It was in the 1950s when project management was formally recognized as a distinct contribution arising from the management discipline."
[14] Booz Allen Hamilton. History of Booz Allen, 1950s (http://www.boozallen.com/about/history/history_5)
[15] Bjarne Kousholt (2007). Project Management. Theory and Practice. Nyt Teknisk Forlag. ISBN 87-571-2603-8. p. 59.
[16] ipma.ch (http://www.ipma.ch/publication/Pages/ICB-IPMACompetenceBaseline.aspx)
[17] F. L. Harrison, Dennis Lock (2004). Advanced Project Management: A Structured Approach. Gower Publishing, Ltd. ISBN 0-566-07822-8. p. 34.
[18] Winston W. Royce (1970). "Managing the Development of Large Software Systems" (http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf). In: Technical Papers of Western Electronic Show and Convention (WesCon), August 25-28, 1970, Los Angeles, USA.
[19] Stellman, Andrew; Greene, Jennifer (2005). Applied Software Project Management (http://www.stellman-greene.com/aspm/). O'Reilly Media. ISBN 978-0-596-00948-9.
[20] OGC PRINCE2 Background (http://webarchive.nationalarchives.gov.uk/20110822131357/http://www.ogc.gov.uk/methods_prince_2__background.asp)
[21] http://greenprojectmanagement.org
[22] http://greenprojectmanagement.org/certification
[23] "What is Agile Project Management?" (http://www.planbox.com/resources/agile-project-management). Planbox.
[24] "Project Management Guide" (http://www.ppoe.oit.va.gov/docs/VA_IT_PM_Guide.pdf). VA Office of Information and Technology. March 3, 2005.
[25] Peter Nathan, Gerald Everett Jones (2003). PMP Certification for Dummies. p. 63.
[26] Harold Kerzner (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling (8th ed.). Wiley. ISBN 0-471-22577-0.
[27] James P. Lewis (2000). The Project Manager's Desk Reference: A Comprehensive Guide to Project Planning, Scheduling, Evaluation, and Systems. p. 185.
[28] Jörg Becker, Martin Kugeler, Michael Rosemann (2003). Process Management: A Guide for the Design of Business Processes. ISBN 978-3-540-43499-3. p. 27.
[29] Bernhard Schlagheck (2000). Objektorientierte Referenzmodelle für das Prozess- und Projektcontrolling. Grundlagen, Konstruktionen, Anwendungsmöglichkeiten. ISBN 978-3-8244-7162-1. p. 131.
[30] Josef E. Riedl (1990). Projekt-Controlling in Forschung und Entwicklung. ISBN 978-3-540-51963-8. p. 99.
[31] Steinle, Bruch, Lawa (1995). Projektmanagement. FAZ Verlagsbereich Wirtschaftsbücher. pp. 136-143.
[32] NASA NPR 9501.2D (http://nodis3.gsfc.nasa.gov/displayDir.cfm?Internal_ID=N_PR_9501_002D_&page_name=Chp2&format=PDF). May 23, 2001.
[33] Body of Knowledge (5th ed.). Association for Project Management, 2006. ISBN 1-903494-13-3.
[34] Albert Hamilton (2004). Handbook of Project Management Procedures. TTL Publishing, Ltd. ISBN 0-7277-3258-7.


External links
- Guidelines for Managing Projects (http://www.berr.gov.uk/files/file40647.pdf) from the UK Department for Business, Enterprise and Regulatory Reform (BERR)
- Max Wideman's "Open Source" Comparative Glossary of Project Management Terms (http://www.maxwideman.com/)
- Open Source Project Management manual (http://www.projectmanagement-training.net/book/)

System testing
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.[1] As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system


System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).

Types of tests to include in system testing


The following examples are different types of testing that should be considered during system testing:
- Graphical user interface testing
- Usability testing
- Software performance testing
- Compatibility testing
- Exception handling
- Load testing
- Volume testing
- Stress testing
- Security testing
- Scalability testing
- Sanity testing
- Smoke testing
- Exploratory testing
- Ad hoc testing
- Regression testing
- Installation testing
- Maintenance testing
- Recovery testing and failover testing
- Accessibility testing, including compliance with:
  - Americans with Disabilities Act of 1990
  - Section 508 Amendment to the Rehabilitation Act of 1973
  - Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Although different testing organizations may prescribe different tests as part of system testing, this list serves as a general framework or foundation to begin with.


References
[1] IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries; IEEE; New York, NY.; 1990.

Black, Rex (2002). Managing the Testing Process (2nd ed.). Wiley Publishing. ISBN 0-471-22398-0.

Unit testing
In computer programming, unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use.[1] Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming a unit could be an entire module but is more commonly an individual function or procedure. In object-oriented programming a unit is often an entire interface, such as a class, but could be an individual method. [2] Unit tests are created by programmers or occasionally by white box testers during the development process. Ideally, each test case is independent from the others: substitutes like method stubs, mock objects,[3] fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.

Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[1] A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.

Find problems early


Unit tests find problems early in the development cycle. In test-driven development (TDD), which is frequently used in both Extreme Programming and Scrum, unit tests are created before the code itself is written. When the tests pass, that code is considered complete. The same unit tests are run against that function frequently as the larger code base is developed either as the code is changed or via an automated process with the build. If the unit tests fail, it is considered to be a bug either in the changed code or the tests themselves. The unit tests then allow the location of the fault or failure to be easily traced. Since the unit tests alert the development team of the problem before handing the code off to testers or clients, it is still early in the development process.


Facilitates change
Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests will continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

Simplifies integration
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier. An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests. Integration testing typically still relies heavily on humans testing manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.

Documentation
Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit and how to use it can look at the unit tests to gain a basic understanding of the unit's API. Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development. By contrast, ordinary narrative documentation is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g., design changes, feature creep, relaxed practices in keeping documents up-to-date).

Design
When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java example will help illustrate this point. Here is a test class that specifies a number of elements of the implementation. First, that there must be an interface called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer parameters, which returns another integer. It also specifies the behaviour of this method for a small range of values.

public class TestAdder {
    public void testSum() {
        Adder adder = new AdderImpl();
        // can it add positive numbers?
        assert(adder.add(1, 1) == 2);
        assert(adder.add(1, 2) == 3);
        assert(adder.add(2, 2) == 4);
        // is zero neutral?
        assert(adder.add(0, 0) == 0);
        // can it add negative numbers?
        assert(adder.add(-1, -2) == -3);
        // can it add a positive and a negative?
        assert(adder.add(-1, 1) == 0);
        // how about larger numbers?
        assert(adder.add(1234, 988) == 2222);
    }
}

In this case the unit test, having been written first, acts as a design document specifying the form and behaviour of a desired solution, but not the implementation details, which are left for the programmer. Following the "do the simplest thing that could possibly work" practice, the easiest solution that will make the test pass is shown below.

interface Adder {
    int add(int a, int b);
}

class AdderImpl implements Adder {
    int add(int a, int b) {
        return a + b;
    }
}

Unlike other diagram-based design methods, using a unit-test as a design has one significant advantage. The design document (the unit-test itself) can be used to verify that the implementation adheres to the design. With the unit-test design method, the tests will never pass if the developer does not implement the solution according to the design. It is true that unit testing lacks some of the accessibility of a diagram, but UML diagrams are now easily generated for most modern languages by free tools (usually available as extensions to IDEs). Free tools, like those based on the xUnit framework, outsource to another system the graphical rendering of a view for human consumption.

208

Separation of interface from implementation


Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should usually not go outside of its own class boundary, and especially should not cross such process/network boundaries because this can introduce unacceptable performance problems to the unit test-suite. Crossing such unit boundaries turns unit tests into integration tests, and when test cases fail, makes it less clear which component is causing the failure (see also: fakes, mocks and integration tests). Instead, the software developer should create an abstract interface around the database queries, and then implement that interface with their own mock object. By abstracting this necessary attachment from the code (temporarily reducing the net effective coupling), the independent unit can be more thoroughly tested than may have been previously achieved. This results in a higher quality unit that is also more maintainable.
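A minimal sketch of this pattern follows. All names here (UserStore, InMemoryUserStore, GreetingService) are hypothetical; a production implementation of UserStore would query the real database, while the test substitutes an in-memory object behind the same interface.

import java.util.HashMap;
import java.util.Map;

// the abstract interface that hides the database from the unit under test
interface UserStore {
    String findName(int userId);
}

// test double: an in-memory substitute, so the test never touches a real database
class InMemoryUserStore implements UserStore {
    private final Map<Integer, String> rows = new HashMap<>();
    void put(int id, String name) { rows.put(id, name); }
    public String findName(int userId) { return rows.get(userId); }
}

// the unit under test depends only on the interface, not on the database
class GreetingService {
    private final UserStore store;
    GreetingService(UserStore store) { this.store = store; }
    String greet(int userId) { return "Hello, " + store.findName(userId); }
}

public class GreetingServiceTest {
    public static void main(String[] args) {
        InMemoryUserStore store = new InMemoryUserStore();
        store.put(1, "Ada");
        GreetingService service = new GreetingService(store);
        assert service.greet(1).equals("Hello, Ada") : "greeting should use the stored name";
        System.out.println("test passed (run with java -ea)");
    }
}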


Parameterized Unit Testing (PUT)


Parameterized Unit Tests (PUTs) are tests that take parameters. Unlike traditional unit tests, which are usually closed methods, PUTs take any set of parameters. PUTs have been supported by JUnit 4 and various .NET test frameworks. Suitable parameters for the unit tests may be supplied manually or in some cases are automatically generated by the test framework. Various industrial testing tools also exist to generate test inputs for PUTs.
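JUnit 4, mentioned above, supports this style through its Parameterized runner. The following sketch reuses the Adder and AdderImpl types from the Design section; the parameter rows are invented for illustration.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AdderParameterizedTest {
    private final int a;
    private final int b;
    private final int expectedSum;

    public AdderParameterizedTest(int a, int b, int expectedSum) {
        this.a = a;
        this.b = b;
        this.expectedSum = expectedSum;
    }

    // each row is one parameter set; the runner instantiates
    // the test class once per row and runs every @Test method
    @Parameters
    public static Collection<Object[]> sums() {
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 }, { 0, 0, 0 }, { -1, -2, -3 }, { -1, 1, 0 }
        });
    }

    @Test
    public void addReturnsExpectedSum() {
        assertEquals(expectedSum, new AdderImpl().add(a, b));
    }
}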

Unit testing limitations


Testing cannot be expected to catch every error in the program: it is impossible to evaluate every execution path in all but the most trivial programs. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing should be done in conjunction with other software testing activities. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors.

Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[4] This obviously takes time and its investment may not be worth the effort. There are also many problems that cannot easily be tested at all, for example those that are nondeterministic or involve multiple threads. In addition, code written for a unit test is likely to be at least as buggy as the code it is testing. Fred Brooks in The Mythical Man-Month quotes: "never take two chronometers to sea. Always take one or three." Meaning, if two chronometers contradict, how do you know which one is correct?

Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful tests. It is necessary to create relevant initial conditions so the part of the application being tested behaves like part of the complete system. If these initial conditions are not set correctly, the test will not be exercising the code in a realistic context, which diminishes the value and accuracy of unit test results.[5]

To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time. It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately.[6] If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.

Unit testing embedded system software presents a unique challenge: since the software is being developed on a different platform than the one it will eventually run on, you cannot readily run a test program in the actual deployment environment, as is possible with desktop programs.[7]


Applications
Extreme programming
Unit testing is the cornerstone of extreme programming, which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within the development group. Extreme programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass.

Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming mandates a "test everything that can possibly break" strategy, over the traditional "test every execution path" method. This leads developers to develop fewer tests than classical methods, but this isn't really a problem, more a restatement of fact, as classical methods have rarely ever been followed methodically enough for all execution paths to have been thoroughly tested. Extreme programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources.

Crucially, the test code is considered a first class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression test. Unit testing is also critical to the concept of emergent design. As emergent design is heavily dependent upon refactoring, unit tests are an integral component.[8]

Techniques
Unit testing is commonly automated, but may still be performed manually. The IEEE does not favor one over the other.[9] A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this, and enables the many benefits listed in this article. Conversely, if not planned carefully, a careless manual unit test case may execute as an integration test case that involves many software components, and thus preclude the achievement of most if not all of the goals established for unit testing. To fully realize the effect of isolation while using an automated approach, the unit or code body under test is executed within a framework outside of its natural environment. In other words, it is executed outside of the product or calling context for which it was originally created. Testing in such an isolated manner reveals unnecessary dependencies between the code being tested and other units or data spaces in the product. These dependencies can then be eliminated. Using an automation framework, the developer codes criteria into the test to verify the unit's correctness. During test case execution, the framework logs tests that fail any criterion. Many frameworks will also automatically flag these failed test cases and report them in a summary. Depending upon the severity of a failure, the framework may halt subsequent testing. As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies. This practice promotes healthy habits in software development. Design patterns, unit testing, and refactoring often work together so that the best solution may emerge.


Unit testing frameworks


Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite. They help simplify the process of unit testing, having been developed for a wide variety of languages. Examples of testing frameworks include open source solutions such as the various code-driven testing frameworks known collectively as xUnit, and proprietary/commercial solutions such as TBrun, JustMock, Isolator.NET, Isolator++, Parasoft Test (C/C++test, Jtest, dotTEST), Testwell CTA++ and VectorCAST/C++. It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertions, exception handling, or other control flow mechanisms to signal failure. Unit testing without a framework is valuable in that there is a barrier to entry for the adoption of unit testing; having scant unit tests is hardly better than having none at all, whereas once a framework is in place, adding unit tests becomes relatively easy.[10] In some frameworks many advanced unit test features are missing or must be hand-coded.
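As a sketch of the frameworkless approach, plain client code can exercise a unit and signal failure through ordinary control flow. The example below reuses the Adder types from the Design section; the check helper is a minimal stand-in for a framework's assertion facility.

public class AdderPlainTest {
    public static void main(String[] args) {
        Adder adder = new AdderImpl();
        check(adder.add(2, 2) == 4, "2 + 2 should be 4");
        check(adder.add(-1, 1) == 0, "-1 + 1 should be 0");
        System.out.println("all checks passed");
    }

    // signal failure without any framework support
    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}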

Language-level unit testing support


Some programming languages directly support unit testing. Their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Additionally, the boolean conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit test code, such as what is used for if and while statements. Languages that directly support unit testing include:
- C#
- Cobra
- D
- Java
- Obix

Notes
[1] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5.
[2] Xie, Tao. "Towards a Framework for Differential Unit Testing of Object-Oriented Programs" (http://people.engr.ncsu.edu/txie/publications/ast07-diffut.pdf). Retrieved 2012-07-23.
[3] Fowler, Martin (2007-01-02). "Mocks aren't Stubs" (http://martinfowler.com/articles/mocksArentStubs.html). Retrieved 2008-04-01.
[4] Cramblitt, Bob (2007-09-20). "Alberto Savoia sings the praises of software testing" (http://searchsoftwarequality.techtarget.com/originalContent/0,289142,sid92_gci1273161,00.html). Retrieved 2007-11-29.
[5] Kolawa, Adam (2009-07-01). "Unit Testing Best Practices" (http://www.parasoft.com/unit-testing-best-practices). Retrieved 2012-07-23.
[6] daVeiga, Nada (2008-02-06). "Change Code Without Fear: Utilize a regression safety net" (http://www.ddj.com/development-tools/206105233). Retrieved 2008-02-08.
[7] Kucharski, Marek (2011-11-23). "Making Unit Testing Practical for Embedded Development" (http://electronicdesign.com/article/embedded/Making-Unit-Testing-Practical-for-Embedded-Development). Retrieved 2012-05-08.
[8] "Agile Emergent Design" (http://www.agilesherpa.org/agile_coach/engineering_practices/emergent_design/). Agile Sherpa. 2010-08-03. Retrieved 2012-05-08.
[9] IEEE Standards Board, "IEEE Standard for Software Unit Testing: An American National Standard, ANSI/IEEE Std 1008-1987" (http://aulas.carlosserrao.net/lib/exe/fetch.php?media=0910:1008-1987_ieee_standard_for_software_unit_testing.pdf). In: IEEE Standards: Software Engineering, Volume Two: Process Standards; 1999 Edition; published by The Institute of Electrical and Electronics Engineers, Inc. Software Engineering Technical Committee of the IEEE Computer Society.
[10] Bullseye Testing Technology (2006-2008). "Intermediate Coverage Goals" (http://www.bullseye.com/coverage.html#intermediate). Retrieved 24 March 2009.


External links
- Unit Testing Guidelines from GeoSoft (http://geosoft.no/development/unittesting.html)
- Test Driven Development (Ward Cunningham's Wiki) (http://c2.com/cgi/wiki?TestDrivenDevelopment)
- Unit Testing 101 for the Non-Programmer (http://www.saravanansubramanian.com/Saravanan/Articles_On_Software/Entries/2010/1/19_Unit_Testing_101_For_Non-Programmers.html)
- Step-by-Step Guide to JPA-Enabled Unit Testing (Java EE) (http://www.sizovpoint.com/2010/01/step-by-step-guide-to-jpa-enabled-unit.html)

Regression testing
Regression testing is any type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them. The intent of regression testing is to ensure that a change such as those mentioned above has not introduced new faults.[1] One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.[2] Common methods of regression testing include rerunning previously-completed tests and checking whether program behavior has changed and whether previously-fixed faults have re-emerged. Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.

Background
Experience has shown that as software is fixed, emergence of new and/or reemergence of old faults is quite common. Sometimes reemergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one area inadvertently causes a software bug in another area. Finally, it may happen that, when some feature is redesigned, some of the same mistakes that were made in the original implementation of the feature are made in the redesign.

Therefore, in most software development situations, it is considered good coding practice, when a bug is located and fixed, to record a test that exposes the bug and re-run that test regularly after subsequent changes to the program.[3] Although this may be done through manual testing procedures using programming techniques, it is often done using automated testing tools.[4] Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to automatically re-run all regression tests at specified intervals and report any failures (which could imply a regression or an out-of-date test).[5] Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week. Those strategies can be automated by an external tool, such as BuildBot, Tinderbox, Hudson, Jenkins, TeamCity or Bamboo.

Regression testing is an integral part of the extreme programming software development method. In this method, design documents are replaced by extensive, repeatable, and automated testing of the entire software package throughout each stage of the software development cycle.

In the corporate world, regression testing has traditionally been performed by a software quality assurance team after the development team has completed work. However, defects found at this stage are the most costly to fix. This problem is being addressed by the rise of unit testing. Although developers have always written test cases as part of the development cycle, these test cases have generally been either functional tests or unit tests that verify only intended outcomes. Developer testing compels a developer to focus on unit testing and to include both positive and negative test cases.[6]
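As a sketch of the practice of recording a bug-exposing test, suppose a hypothetical AmountParser once dropped the sign of negative inputs. A JUnit-style test pinning the fix might look like the following; every name and the bug itself are invented for illustration.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AmountParserRegressionTest {

    // the unit under test (a hypothetical example)
    static class AmountParser {
        static int parse(String s) {
            return Integer.parseInt(s.trim());
        }
    }

    // regression test pinning a (hypothetical) past bug: parse(" -5 ")
    // once returned 5 instead of -5; re-running this test after every
    // change guards against the fault re-emerging
    @Test
    public void parsePreservesSignOfNegativeAmounts() {
        assertEquals(-5, AmountParser.parse(" -5 "));
    }
}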


Uses
Regression testing can be used not only for testing the correctness of a program, but often also for tracking the quality of its output.[7] For instance, in the design of a compiler, regression testing could track the code size, simulation time and compilation time of the test suite cases. "Also as a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement written than any other programming. Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way. In practice, such regression testing must indeed approximate this theoretical idea, and it is very costly." Fred Brooks, The Mythical Man Month, p. 122 Regression tests can be broadly categorized as functional tests or unit tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Both functional testing tools and unit testing tools tend to be third-party products that are not part of the compiler suite, and both tend to be automated. A functional test may be a scripted series of program inputs, possibly even involving an automated mechanism for controlling mouse movements and clicks. A unit test may be a set of separate functions within the code itself, or a driver layer that links to the code without altering the code being tested.

References
[1] Myers, Glenford (2004). The Art of Software Testing. Wiley. ISBN 978-0-471-46912-4.
[2] Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 386. ISBN 978-0-615-23372-7.
[3] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 73. ISBN 0-470-04212-5.
[4] "Automate Regression Tests When Feasible" (http://safari.oreilly.com/0201794292/ch08lev1sec4). In: Automated Testing: Selected Best Practices, Elfriede Dustin, Safari Books Online.
[5] daVeiga, Nada (February 2008). "Change Code Without Fear: Utilize a Regression Safety Net" (http://www.ddj.com/development-tools/206105233;jsessionid=2HN1TRYZ4JGVAQSNDLRSKH0CJUNN2JVN). Dr. Dobb's Journal.
[6] Dudney, Bill (2004-12-08). "Developer Testing Is 'In': An interview with Alberto Savoia and Kent Beck" (http://www.sys-con.com/read/47359.htm). Retrieved 2007-11-29.
[7] Kolawa, Adam. "Regression Testing, Programmer to Programmer" (http://www.wrox.com/WileyCDA/Section/id-291252.html). Wrox.

External links
- Microsoft regression testing recommendations (http://msdn.microsoft.com/en-us/library/aa292167(VS.71).aspx)
- Gauger performance regression visualization tool (https://gnunet.org/gauger/)


Acceptance testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. In systems engineering it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.[1] Software developers often distinguish acceptance testing by the system provider from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In the case of software, acceptance testing performed by the customer is known as user acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance) testing. A smoke test is used as an acceptance test prior to introducing a build to the main testing process.

Overview
Testing generally involves running a suite of tests on the completed system. Each individual test, known as a case, exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass or fail, or boolean, outcome. There is generally no degree of success or failure. The test environment is usually designed to be identical, or as close as possible, to the anticipated user's environment, including extremes of such. These test cases must each be accompanied by test case input data or a formal description of the operational activities (or both) to be performed, intended to thoroughly exercise the specific case, and a formal description of the expected results.

Acceptance tests/criteria (in agile software development) are usually created by business customers and expressed in a business domain language. These are high-level tests to test the completeness of a user story or stories 'played' during any sprint/iteration. These tests are created ideally through collaboration between business customers, business analysts, testers and developers; however, the business customers (product owners) are the primary owners of these tests. As the user stories pass their acceptance criteria, the business owners can be sure that the developers are progressing in the right direction about how the application was envisaged to work, and so it is essential that these tests include both business logic tests as well as UI validation elements (if need be).

Acceptance test cards are ideally created during sprint planning or iteration planning meetings, before development begins, so that the developers have a clear idea of what to develop. Sometimes (due to bad planning!) acceptance tests may span multiple stories (that are not implemented in the same sprint), and there are different ways to test them out during actual sprints. One popular technique is to mock external interfaces or data to mimic other stories which might not be played out during an iteration (as those stories may have been relatively lower business priority). A user story is not considered complete until the acceptance tests have passed.


Process
The acceptance test suite is run against the supplied input data or using an acceptance test script to direct the testers. Then the results obtained are compared with the expected results. If there is a correct match for every case, the test suite is said to pass. If not, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer. The objective is to provide confidence that the delivered system meets the business requirements of both sponsors and users. The acceptance phase may also act as the final quality gateway, where any quality defects not previously detected may be uncovered. A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final payment.
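A minimal sketch of this comparison of obtained results against expected results follows. Everything here is invented for illustration: the case data, the names, and the stand-in systemUnderTest method, which in a real suite would drive the actual delivered system.

import java.util.LinkedHashMap;
import java.util.Map;

public class AcceptanceSuite {
    public static void main(String[] args) {
        // agreed test cases: supplied input -> expected result
        Map<String, String> cases = new LinkedHashMap<>();
        cases.put("order:1xWidget", "total:$5.00");
        cases.put("order:3xWidget", "total:$15.00");

        boolean allPassed = true;
        for (Map.Entry<String, String> c : cases.entrySet()) {
            String actual = systemUnderTest(c.getKey());
            boolean passed = actual.equals(c.getValue());
            allPassed = allPassed && passed;
            System.out.println((passed ? "PASS  " : "FAIL  ")
                    + c.getKey() + " -> " + actual);
        }
        System.out.println(allPassed ? "Suite passed." : "Suite failed.");
    }

    // stand-in for invoking the delivered system, e.g. through its UI or API
    static String systemUnderTest(String input) {
        int quantity = Integer.parseInt(input.replaceAll("\\D+", ""));
        return "total:$" + (quantity * 5) + ".00";
    }
}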

User acceptance testing


User Acceptance Testing (UAT) is a process to obtain confirmation that a system meets mutually agreed-upon requirements. A Subject Matter Expert (SME), preferably the owner or client of the object under test, provides such confirmation after trial or review. In software development, UAT is one of the final stages of a project and often occurs before a client or customer accepts the new system. Users of the system perform these tests, which developers derive from the client's contract or the user requirements specification.

Test designers draw up formal tests and devise a range of severity levels. Ideally the designer of the user acceptance tests should not be the creator of the formal integration and system test cases for the same system. The UAT acts as a final verification of the required business function and proper functioning of the system, emulating real-world usage conditions on behalf of the paying client or a specific large customer. If the software works as intended and without issues during normal use, one can reasonably extrapolate the same level of stability in production.

User tests, which are usually performed by clients or end-users, do not normally focus on identifying simple problems such as spelling errors and cosmetic problems, nor showstopper defects, such as software crashes; testers and developers previously identify and fix these issues during earlier unit testing, integration testing, and system testing phases.

In the industrial sector, a common UAT is a "Factory Acceptance Test" (FAT). This test is performed before installation of the concerned equipment. Most of the time it is checked not only that the equipment meets the pre-set specification, but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test) and a final inspection.[2][3]

The results of these tests give confidence to the clients as to how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.

Acceptance testing in Extreme Programming


Acceptance testing is a term used in agile software development methodologies, particularly Extreme Programming, referring to the functional testing of a user story by the software development team during the implementation phase. The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration or the development team will report zero progress.[4]


Types of acceptance testing


Typical types of acceptance testing include the following:

User acceptance testing
This may include factory acceptance testing, i.e. the testing done by factory users before the factory is moved to its own site, after which site acceptance testing may be performed by the users at the site.

Operational Acceptance Testing (OAT)
Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures.

Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards.

Alpha and beta testing
Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called field testing.

List of development to production (testing) environments


- Development Environment
- Development Testing Environment
- Testing Environment
- Development Integration Testing
- Development System Testing
- System Integration Testing
- User Acceptance Testing
- Production Environment

List of acceptance-testing frameworks


- Cucumber, a BDD acceptance test framework
- Fabasoft app.test for automated acceptance tests
- FitNesse, a fork of Fit
- Framework for Integrated Test (Fit)
- iMacros
- ItsNat, a Java Ajax web framework with built-in, server-based functional web testing capabilities
- Ranorex
- Robot Framework
- Selenium
- Test Automation FX
- Watir

References
[1] Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 0-470-40415-9.
[2] "Factory Acceptance Test (FAT)" (http://www.tuv.com/en/corporate/business_customers/materials_testing_and_inspection/supply_chain_services/factory_acceptance_test/factory_acceptance_test.jsp). Tuv.com. Retrieved September 18, 2012.
[3] "Factory Acceptance Test" (http://www.inspection-for-industry.com/factory-acceptance-test.html). Inspection-for-industry.com. Retrieved September 18, 2012.
[4] Don Wells. "Acceptance Tests" (http://www.extremeprogramming.org/rules/functionaltests.html). Extremeprogramming.org. Retrieved September 20, 2011.

External links
- Acceptance Test Engineering Guide (http://testingguidance.codeplex.com) by Microsoft patterns & practices (http://msdn.com/practices)
- Article "Using Customer Tests to Drive Development" (http://www.methodsandtools.com/archive/archive.php?id=23) from Methods & Tools (http://www.methodsandtools.com/)
- Article "Acceptance TDD Explained" (http://www.methodsandtools.com/archive/archive.php?id=72) from Methods & Tools (http://www.methodsandtools.com/)

Software testing
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). Software testing can be stated as the process of validating and verifying that a computer program/application/product:
- meets the requirements that guided its design and development,
- works as expected,
- can be implemented with the same characteristics,
- and satisfies the needs of stakeholders.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. Traditionally most of the test effort occurs after the requirements have been defined and the coding process has been completed, but in the Agile approaches most of the test effort is on-going. As such, the methodology of the test is governed by the chosen software development methodology. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.


Overview
Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions.[4] The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, as well as examining the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[5]

Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.

Defects and failures


Not all software defects are caused by coding errors. One common source of expensive defects is caused by requirement gaps, e.g., unrecognized requirements, that result in errors of omission by the program designer.[6] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security. Software faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[7] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data or interacting with different software.[7] A single defect may result in a wide range of failure symptoms.

Input combinations and preconditions


A very fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[4][8] This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do)usability, scalability, performance, compatibility, reliabilitycan be highly subjective; something that constitutes sufficient value to one person may be intolerable to another. Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.[9]
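To make the combinatorial growth concrete, the sketch below enumerates the full input space for three assumed configuration parameters with three values each; the parameter names are invented for illustration. Combinatorial designs such as all-pairs testing aim to cover this space with far fewer tests.

public class InputCombinations {
    public static void main(String[] args) {
        // three assumed configuration parameters, three values each
        String[] browsers = { "Firefox", "Chrome", "IE" };
        String[] systems  = { "Windows", "Linux", "OS X" };
        String[] locales  = { "en", "de", "ja" };

        int exhaustive = 0;
        for (String b : browsers)
            for (String s : systems)
                for (String l : locales)
                    exhaustive++; // one test per full combination

        System.out.println("Exhaustive combinations: " + exhaustive); // prints 27
        // A pairwise (all-pairs) design covering every pair of parameter
        // values needs as few as 9 of these 27 combinations.
    }
}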


Economics
A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[10] It is commonly believed that the earlier a defect is found the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found.[11] For example, if a problem in the requirements is found only post-release, then it would cost 10100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
Cost to fix a defect, by the time it was introduced (rows) versus the time it was detected (columns):

Time introduced   Requirements   Architecture   Construction   System test   Post-release
Requirements      1x             3x             5-10x          10x           10-100x
Architecture      -              1x             10x            15x           25-100x
Construction      -              -              1x             10x           10-25x

Roles
Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing,[12] different roles have been established: manager, test lead, test designer, tester, automation developer, and test administrator.

History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[13] Although his attention was on breakage testing ("a successful test is one that finds a bug"[13][14]), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:[15]
- Until 1956 - Debugging oriented[16]
- 1957-1978 - Demonstration oriented[17]
- 1979-1982 - Destruction oriented[18]
- 1983-1987 - Evaluation oriented[19]
- 1988-2000 - Prevention oriented[20]

Testing methods
Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and in practice often is. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.


The box approach


Software testing methods are traditionally divided into white- and black-box testing. These two approaches describe the point of view that a test engineer takes when designing test cases.

White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
API testing (application programming interface): testing of the application using public and private APIs
Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods: intentionally introducing faults to gauge the efficacy of testing strategies
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[21] Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
100% statement coverage ensures that every statement is executed at least once, but it does not guarantee that every branch or path through the control flow is taken, and even full coverage is not sufficient on its own, since the same code may process different inputs correctly or incorrectly.

Black-box testing
Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The tester is only aware of what the software is supposed to do, not how it does it.[22] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements.[23] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases.
These tests can be functional or non-functional, though usually functional.
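As a minimal sketch of the two points of view, the Python example below tests one invented function both ways: the white-box tests are chosen by reading the code so that every branch executes, while the black-box tests use equivalence partitions and boundary values taken only from the stated specification.

# A minimal sketch contrasting white-box and black-box test design on one
# invented function. All names here are illustrative, not a standard API.
import unittest

def classify_grade(score):
    """Spec: 0-49 -> 'fail', 50-79 -> 'pass', 80-100 -> 'distinction';
    anything else raises ValueError."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 80:
        return "distinction"
    if score >= 50:
        return "pass"
    return "fail"

class WhiteBoxTests(unittest.TestCase):
    # One input per branch in the code above, chosen by reading the code.
    def test_each_branch(self):
        self.assertRaises(ValueError, classify_grade, -1)
        self.assertEqual(classify_grade(90), "distinction")
        self.assertEqual(classify_grade(60), "pass")
        self.assertEqual(classify_grade(10), "fail")

class BlackBoxTests(unittest.TestCase):
    # Boundary values taken from the specification, not from the code.
    def test_boundaries(self):
        for score, expected in [(0, "fail"), (49, "fail"), (50, "pass"),
                                (79, "pass"), (80, "distinction"),
                                (100, "distinction")]:
            self.assertEqual(classify_grade(score), expected)

if __name__ == "__main__":
    unittest.main()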

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[24]
One advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[25] Because testers do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested. This method of testing can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[26] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up their own testing environment, for instance by seeding a database, and can observe the state of the product being tested after performing certain actions. For instance, when testing a database product, the tester may fire an SQL query on the database and then observe the database to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, exception handling, and so on.[27]
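A minimal grey-box sketch along the lines of the database example above, using Python's built-in sqlite3 module; the schema and the register_user() entry point are invented for illustration. The tester seeds the data store and inspects it with SQL, but drives the system only through its public interface.

# Grey-box sketch: the tester knows the internal schema well enough to
# seed the data store and inspect it afterwards, but exercises the system
# only through its public register_user() entry point.
import sqlite3

def register_user(conn, name):
    # The "system under test": only this public interface is exercised.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('seeded')")  # seed the database

register_user(conn, "alice")  # act through the black-box interface

# Grey-box step: observe internal state directly with an SQL query.
rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
assert rows == [("seeded",), ("alice",)], rows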


Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information required, and the information is expressed clearly.[28][29] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process, capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer, as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence required of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.
Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.
Further information: Graphical user interface testing


Testing levels
Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test target without implying a specific process model.[30] Other test levels are classified by the testing objective.[30]

Unit testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[31] These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.
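A minimal unit-test sketch using Python's unittest module; the average() function is invented for illustration. Note the dedicated corner-case test for empty input, in the developer-written, white-box style described above.

# One unit, several tests, including a corner case for the empty input.
import unittest

def average(values):
    if not values:
        raise ValueError("average of empty sequence")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(average([2, 4, 6]), 4.0)

    def test_single_value(self):
        self.assertEqual(average([7]), 7.0)

    def test_empty_is_rejected(self):  # corner case
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()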

Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be localised more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[32]
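A minimal integration-test sketch with invented components: the unit tests of TaxCalculator and Invoice could each pass in isolation while the interface between them is wrong, which is precisely what the integration test below exercises.

# Integration test: wire the real components together instead of stubs,
# and verify the behaviour across their interface.
import unittest

class TaxCalculator:
    def tax(self, net):
        return round(net * 0.2, 2)

class Invoice:
    def __init__(self, calculator):
        self.calculator = calculator

    def total(self, net):
        return net + self.calculator.tax(net)

class InvoiceIntegrationTest(unittest.TestCase):
    def test_invoice_uses_real_calculator(self):
        self.assertEqual(Invoice(TaxCalculator()).total(100), 120.0)

if __name__ == "__main__":
    unittest.main()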

System testing
System testing tests a completely integrated system to verify that it meets its requirements.[33]

Acceptance testing
Finally, the system is delivered to the user for acceptance testing.

Testing approach
Top-down and bottom-up
Bottom-up testing is an approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower-level integrated modules, the next level of modules is formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.
Top-down testing is an approach to integration testing where the top integrated modules are tested first and the branches of each module are tested step by step until the end of the related module.


Objectives of testing
Installation testing
An installation test assures that the system is installed correctly and works on the customer's actual hardware.

Compatibility testing
A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Smoke and sanity testing


Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing is used to determine whether there are serious problems with a piece of software, for example as a build verification test.

Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features: it can range from complete, for changes added late in the release or deemed risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
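A minimal regression-test sketch: when a bug is fixed, a test reproducing it is added to the suite, and the whole suite is re-run after every change so the old bug cannot quietly return. The function and the ticket number are invented for illustration.

# Regression suite: "bug #123" (invented) once let whitespace survive;
# the test that reproduced it now guards against the bug coming back.
import unittest

def normalize_name(name):
    # The fix for bug #123: collapse and strip whitespace, then title-case.
    return " ".join(name.split()).title()

class RegressionTests(unittest.TestCase):
    def test_bug_123_whitespace_stays_fixed(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_existing_behaviour_still_works(self):
        self.assertEqual(normalize_name("grace hopper"), "Grace Hopper")

if __name__ == "__main__":
    unittest.main()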


Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, known as user acceptance testing (UAT).
Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[34]

Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Functional vs non-functional testing


Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer questions such as "can the user do this?" or "does this particular feature work?"
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
Further information: Exception handling and Recovery testing
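A minimal fuzzing sketch in this spirit: random inputs are thrown at an invented parser, which must either succeed or fail with its documented ValueError; any other exception fails the run.

# Fuzzing sketch: 10,000 random printable strings against one parser.
import random
import string

def parse_positive_int(text):
    value = int(text)          # may raise ValueError for malformed input
    if value <= 0:
        raise ValueError("not positive")
    return value

random.seed(0)  # reproducible fuzzing run
for _ in range(10_000):
    noise = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_positive_int(noise)
    except ValueError:
        pass                   # documented, controlled failure
    # Any other exception propagates and fails the fuzz run.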


Software performance testing


Performance testing is in general executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load-testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, reliability testing, and volume testing are often used interchangeably.
Further information: Scalability testing
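A minimal load-testing sketch: a fixed workload of invented concurrent "requests" is applied to one operation and per-request latency is measured. Real tools ramp the load up to probe scalability limits and track many more metrics; the threshold below is an assumption chosen purely for illustration.

# Apply 50 concurrent "requests" through a thread pool and measure
# responsiveness; the handler is a stand-in for real work.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    time.sleep(0.01)           # stand-in for real work
    return n * n

def timed(n):
    start = time.perf_counter()
    handle_request(n)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed, range(50)))

print(f"max latency {max(latencies):.4f}s, "
      f"mean {sum(latencies) / len(latencies):.4f}s")
assert max(latencies) < 0.1   # illustrative responsiveness threshold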

Usability testing
Usability testing is needed to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Accessibility
Accessibility testing may include compliance with standards such as:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

Internationalization and localization


The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization (a minimal sketch follows the list below). It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[35]
Actual translation to human languages must be tested, too. Possible localization failures include:
Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
Untranslated messages in the original language may be left hard-coded in the source code.
Some messages may be created automatically at run time, and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
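The pseudolocalization sketch promised above: strings are padded, bracketed and given accented characters without any real translation, so truncation, encoding problems and hard-coded text become visible before localization begins. The expansion factor is an assumption chosen for illustration.

# Pseudolocalization: no real translation, but the transformed strings
# expose layout, encoding and hard-coded-text problems early.
ACCENTS = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(message):
    padded = message.translate(ACCENTS)
    # ~30% expansion approximates the growth of many real translations.
    padding = "~" * max(1, len(message) * 3 // 10)
    return f"[{padded}{padding}]"

print(pseudolocalize("Save file"))   # -> [Sàvé fîlé~~]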


Development testing
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.

The testing process


Traditional CMMI or waterfall development model
A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer.[36] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[37] Another practice is to start software testing at the same moment the project starts, as a continuous process until the project finishes.[38]

Agile or Extreme development model


In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). These tests are expected to fail initially; as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration, where software updates can be published to the public frequently.[39][40]
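A minimal test-first sketch of this cycle: the first test is written before the function exists (so the suite fails), the simplest passing implementation is then added, and further tests are grown as new cases are discovered. fizzbuzz() is an invented example, not any project's real code.

# Step 1: the first test below was written before fizzbuzz() existed.
# Step 2: the simplest implementation that passes was then added.
# Step 3: the suite grew as new cases and corner cases were discovered.
import unittest

def fizzbuzz(n):               # written *after* the tests, to make them pass
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    def test_first_requirements(self):    # this test came first
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_added_later(self):           # grown as understanding grew
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()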


A sample testing cycle


Although variations exist between organizations, there is a typical cycle for testing.[41] The sample below is common among organizations employing the Waterfall development model.
Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and with what parameters those tests will work.
Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software.
Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Also called defect analysis; done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team (also known as resolution testing).
Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system. While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.

Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code, including:
  Instruction set simulator, permitting complete instruction-level monitoring and trace facilities
  Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  Code coverage reports
Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
Automated functional GUI testing tools, used to repeat system-level tests through the GUI
Benchmarks, allowing run-time performance comparisons to be made
Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage

Some of these features may be incorporated into an Integrated Development Environment (IDE).
A regression testing technique is to have a standard set of tests, which cover existing functionality that results in persistent tabular data, and to compare pre-change data to post-change data, where there should not be differences, using a tool like diffkit. Differences detected indicate unexpected functionality changes, or "regression".
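A minimal sketch of that tabular-diff technique, assuming an invented report() function whose pre-change output was saved as a baseline; any difference between the baseline and the post-change output flags a possible regression. A dedicated tool such as diffkit would add row-level reporting.

# Compare a saved pre-change data snapshot with post-change output.
import csv

def report():
    # System under test: produces tabular data that should stay stable.
    return [["id", "total"], ["1", "120.0"], ["2", "240.0"]]

def snapshot(path, rows):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def load(path):
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)]

snapshot("baseline.csv", report())   # captured once, before the change

# ... code under test is modified here ...

assert load("baseline.csv") == report(), "unexpected change: regression?"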


Measurement in software testing


Usually, quality is constrained to topics such as correctness, completeness and security, but can also include more technical requirements as described in the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts
The software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.

Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to update tests when related source documents are changed, and to select test cases for execution when planning regression tests by considering requirement coverage.

Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[42] This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) with one expected result or expected outcome. The optional fields are a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.

Test script
A test script is a procedure, or programming code, that replicates user actions. Initially the term was derived from the product of work created by automated regression test tools. A test case serves as a baseline to create test scripts using a tool or a program.

Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It typically contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client together with the product or project.

Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
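As a minimal sketch, the fields listed above for a test case map naturally onto a small record type; in practice these records usually live in a test-management tool or database rather than in code. The field names below are illustrative, not a standard.

# An illustrative test-case record holding the fields described above.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    requirement_refs: list[str]
    preconditions: str
    steps: list[str]
    input_data: str
    expected_result: str
    actual_result: str = ""
    automated: bool = False
    tags: list[str] = field(default_factory=list)

tc = TestCase(
    identifier="TC-042",
    requirement_refs=["REQ-7"],
    preconditions="user is logged in",
    steps=["open settings", "change password", "save"],
    input_data="new password 'hunter2'",
    expected_result="password updated, confirmation shown",
)
print(tc.identifier, "->", tc.expected_result)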


Certifications
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software, and no certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.[43] Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee competence or professionalism as a tester.[44]

Software testing certification types:
Exam-based: formalized exams which must be passed; can also be learned by self-study (e.g., for ISTQB or QAI)[45]
Education-based: instructor-led sessions, where each course has to be passed (e.g., International Institute for Software Testing (IIST))

Testing certifications:
Certified Associate in Software Testing (CAST) offered by the QAI[46]
CATe offered by the International Institute for Software Testing[47]
Certified Manager in Software Testing (CMST) offered by the QAI[46]
Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)[46]
Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing[47]
CSTP (TM) (Australian Version) offered by K. J. Ross & Associates[48]
ISEB offered by the Information Systems Examinations Board
ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board[49][50]
ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board[49][50]
TMPF TMap Next Foundation offered by the Examination Institute for Information Science[51]
TMPA TMap Next Advanced offered by the Examination Institute for Information Science[51]

Quality assurance certifications:
CMSQ offered by the Quality Assurance Institute (QAI)[46]
CSQA offered by the Quality Assurance Institute (QAI)[46]
CSQE offered by the American Society for Quality (ASQ)[52]
CQIA offered by the American Society for Quality (ASQ)[52]


Controversy
Some of the major software testing controversies include:

What constitutes responsible software testing?
Members of the "context-driven" school of testing[53] believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.[54]

Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006, mainly in commercial circles,[55][56] whereas government and military[57] software providers use this methodology but also the traditional test-last models (e.g. in the Waterfall model).

Exploratory test vs. scripted
Should tests be designed at the same time as they are executed, or should they be designed beforehand?[58]

Manual testing vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[59] More particularly, test-driven development states that developers should write unit tests of the xUnit type before coding the functionality. The tests then can be considered as a way to capture and implement the requirements.

Software design vs. software implementation
Should testing be carried out only at the end or throughout the whole process?[60]

Who watches the watchmen?
The idea is that any form of observation is also an interaction: the act of testing can also affect that which is being tested.[61]

Related processes
Software verification and validation
Software testing is used in association with verification and validation:[62]
Verification: Have we built the software right? (i.e., does it implement the requirements?)
Validation: Have we built the right software? (i.e., do the requirements satisfy the customer?)
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
According to the ISO 9000 standard:
Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.


Software quality assurance (SQA)


Software testing is a part of the software quality assurance (SQA) process.[4] In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

References
[1] Exploratory Testing (http://www.kaner.com/pdfs/ETatQAI.pdf), Cem Kaner, Florida Institute of Technology, Quality Assurance Institute Worldwide Annual Software Testing Conference, Orlando, FL, November 2006
[2] Software Testing (http://www.ece.cmu.edu/~koopman/des_s99/sw_testing/) by Jiantao Pan, Carnegie Mellon University
[3] Leitner, A.; Ciupa, I.; Oriol, M.; Meyer, B.; Fiva, A., "Contract Driven Development = Test Driven Development - Writing Test Cases" (http://se.inf.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf), Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007 (Dubrovnik, Croatia), September 2007
[4] Kaner, Cem; Falk, Jack; Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd Ed. New York, et al: John Wiley and Sons, Inc. 480 pages. ISBN 0-471-35846-0.
[5] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. pp. 41–43. ISBN 0-470-04212-5.
[6] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470042125.html). Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5.
[7] Section 1.1.2, Certified Tester Foundation Level Syllabus (http://www.istqb.org/downloads/syllabi/SyllabusFoundation.pdf), International Software Testing Qualifications Board
[8] Principle 2, Section 1.3, Certified Tester Foundation Level Syllabus (http://www.bcs.org/upload/pdf/istqbsyll.pdf), International Software Testing Qualifications Board
[9] Proceedings from the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg. "Test Design: Lessons Learned and Practical Implications." (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4578383)
[10] Software errors cost U.S. economy $59.5 billion annually (http://www.abeacha.com/NIST_press_release_bugs_cost.htm), NIST report
[11] McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-7356-1967-0.
[12] See D. Gelperin and W.C. Hetzel.
[13] Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN 0-471-04328-1.
[14] Company, People's Computer (1987). "Dr. Dobb's journal of software tools for the professional programmer" (http://books.google.com/?id=7RoIAAAAIAAJ). Dr. Dobb's journal of software tools for the professional programmer (M&T Pub) 12 (16): 116.
[15] Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[16] Until 1956 it was the debugging-oriented period, when testing was often associated with debugging: there was no clear difference between testing and debugging. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[17] From 1957 to 1978 there was the demonstration-oriented period, in which debugging and testing were now distinguished; in this period it was shown that software satisfies the requirements. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[18] The time between 1979 and 1982 is known as the destruction-oriented period, when the goal was to find errors. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[19] 1983–1987 is classified as the evaluation-oriented period: the intention here is that during the software lifecycle a product evaluation is provided and quality is measured. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[20] From 1988 on it was seen as the prevention-oriented period, where tests were to demonstrate that software satisfies its specification, to detect faults and to prevent faults. Gelperin, D.; Hetzel, B. (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782.
[21] Introduction (http://www.bullseye.com/coverage.html#intro), Code Coverage Analysis, Steve Cornett
[22] Patton, Ron. Software Testing.
[23] Laycock, G. T. (1993). The Theory and Practice of Specification Based Software Testing (http://www.mcs.le.ac.uk/people/gtl1/thesis.ps.gz) (PostScript). Dept of Computer Science, Sheffield University, UK. Retrieved 2008-02-13.

[24] Bach, James (June 1999). "Risk and Requirements-Based Testing" (http://www.satisfice.com/articles/requirements_based_testing.pdf) (PDF). Computer 32 (6): 113–114. Retrieved 2008-08-19.
[25] Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7.
[26] Patton, Ron. Software Testing.
[27] "www.crosschecknet.com" (http://www.crosschecknet.com/soa_testing_black_white_gray_box.php).
[28] "Visual testing of software - Helsinki University of Technology" (http://www.cs.hut.fi/~jlonnber/VisualTesting.pdf) (PDF). Retrieved 2012-01-13.
[29] "Article on visual testing in Test Magazine" (http://www.testmagazine.co.uk/2011/04/visual-testing). Testmagazine.co.uk. Retrieved 2012-01-13.
[30] "SWEBOK Guide - Chapter 5" (http://www.computer.org/portal/web/swebok/html/ch5#Ref2.1). Computer.org. Retrieved 2012-01-13.
[31] Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional. p. 45. ISBN 0-201-80938-9.
[32] Beizer, Boris (1990). Software Testing Techniques (Second ed.). New York: Van Nostrand Reinhold. pp. 21, 430. ISBN 0-442-20672-0.
[33] IEEE (1990). IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York: IEEE. ISBN 1-55937-079-3.
[34] van Veenendaal, Erik. "Standard glossary of terms used in Software Testing" (http://www.astqb.org/educational-resources/glossary.php#A). Retrieved 17 June 2010.
[35] "Globalization Step-by-Step: The World-Ready Approach to Testing. Microsoft Developer Network" (http://msdn.microsoft.com/en-us/goglobal/bb688148). Msdn.microsoft.com. Retrieved 2012-01-13.
[36] EtestingHub-Online Free Software Testing Tutorial. "e)Testing Phase in Software Testing:" (http://www.etestinghub.com/testing_lifecycles.php#2). Etestinghub.com. Retrieved 2012-01-13.
[37] Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. pp. 145–146. ISBN 0-471-04328-1.
[38] Dustin, Elfriede (2002). Effective Software Testing. Addison Wesley. p. 3. ISBN 0-201-79429-2.
[39] Marchenko, Artem (November 16, 2007). "XP Practice: Continuous Integration" (http://agilesoftwaredevelopment.com/xp/practices/continuous-integration). Retrieved 2009-11-16.
[40] Gurses, Levent (February 19, 2007). "Agile 101: What is Continuous Integration?" (http://www.jacoozi.com/blog/?p=18). Retrieved 2009-11-16.
[41] Pan, Jiantao (Spring 1999). "Software Testing (18-849b Dependable Embedded Systems)" (http://www.ece.cmu.edu/~koopman/des_s99/sw_testing/). Topics in Dependable Embedded Systems. Electrical and Computer Engineering Department, Carnegie Mellon University.
[42] IEEE (1998). IEEE standard for software test documentation. New York: IEEE. ISBN 0-7381-1443-X.
[43] Kaner, Cem (2001). "NSF grant proposal to 'lay a foundation for significant improvements in the quality of academic and commercial courses in software testing'" (http://www.testingeducation.org/general/nsf_grant.pdf) (PDF).
[44] Kaner, Cem (2003). "Measuring the Effectiveness of Software Testers" (http://www.testingeducation.org/a/mest.pdf) (PDF).
[45] Black, Rex (December 2008). Advanced Software Testing - Vol. 2: Guide to the ISTQB Advanced Certification as an Advanced Test Manager. Santa Barbara: Rocky Nook Publisher. ISBN 1-933952-36-9.
[46] "Quality Assurance Institute" (http://www.qaiglobalinstitute.com/). Qaiglobalinstitute.com. Retrieved 2012-01-13.
[47] "International Institute for Software Testing" (http://www.testinginstitute.com/). Testinginstitute.com. Retrieved 2012-01-13.
[48] K. J. Ross & Associates (http://www.kjross.com.au/cstp/)
[49] "ISTQB" (http://www.istqb.org/).
[50] "ISTQB in the U.S." (http://www.astqb.org/).
[51] "EXIN: Examination Institute for Information Science" (http://www.exin-exams.com). Exin-exams.com. Retrieved 2012-01-13.
[52] "American Society for Quality" (http://www.asq.org/). Asq.org. Retrieved 2012-01-13.
[53] "context-driven-testing.com" (http://www.context-driven-testing.com). Retrieved 2012-01-13.
[54] "Article on taking agile traits without the agile method" (http://www.technicat.com/writing/process.html). Technicat.com. Retrieved 2012-01-13.
[55] "We're all part of the story" (http://stpcollaborative.com/knowledge/272-were-all-part-of-the-story) by David Strom, July 1, 2009
[56] IEEE article about differences in adoption of agile trends between experienced managers vs. young students of the Project Management Institute (http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/10705/33795/01609838.pdf?temp=x). See also Agile adoption study from 2007 (http://www.ambysoft.com/downloads/surveys/AgileAdoption2007.ppt)
[57] Willison, John S. (April 2004). "Agile Software Development for an Agile Force" (http://web.archive.org/web/20051029135922/http://www.stsc.hill.af.mil/crosstalk/2004/04/0404willison.html). CrossTalk (STSC) (April 2004). Archived from the original (http://www.stsc.hill.af.mil/crosstalk/2004/04/0404willison.htm).
[58] "IEEE article on Exploratory vs. Non Exploratory testing" (http://ieeexplore.ieee.org/iel5/10351/32923/01541817.pdf?arnumber=1541817). Ieeexplore.ieee.org. Retrieved 2012-01-13.
[59] An example is Mark Fewster, Dorothy Graham: Software Test Automation. Addison Wesley, 1999, ISBN 0-201-33140-3.
[60] "Article referring to other links questioning the necessity of unit testing" (http://java.dzone.com/news/why-evangelising-unit-testing-). Java.dzone.com. Retrieved 2012-01-13.


[61] Microsoft Development Network discussion on exactly this topic (http://channel9.msdn.com/forums/Coffeehouse/402611-Are-you-a-Test-Driven-Developer/)
[62] Tran, Eushiuan (1999). "Verification/Validation/Certification" (http://www.ece.cmu.edu/~koopman/des_s99/verification/index.html). In Koopman, P. Topics in Dependable Embedded Systems. USA: Carnegie Mellon University. Retrieved 2008-01-13.


Further reading
Bertrand Meyer, "Seven Principles of Software Testing," Computer, vol. 41, no. 8, pp. 99–101, Aug. 2008, doi:10.1109/MC.2008.306; available online (http://se.ethz.ch/~meyer/publications/testing/principles.pdf).

External links
Software testing tools and products (http://www.dmoz.org/Computers/Programming/Software_Testing/Products_and_Tools/) at the Open Directory Project
"Software that makes Software better" at Economist.com (http://www.economist.com/science/tq/displaystory.cfm?story_id=10789417)
Automated software testing metrics, including manual testing metrics (http://idtus.com/img/UsefulAutomatedTestingMetrics.pdf)

Business process modeling


Business process modeling (BPM) in systems engineering is the activity of representing the processes of an enterprise, so that the current process may be analyzed and improved. BPM is typically performed by business analysts and managers who are seeking to improve process efficiency and quality. The process improvements identified by BPM may or may not require information technology involvement, although that is a common driver for modeling a business process. Change management programs are typically involved in putting the improved business processes into practice. With advances in technology from large platform vendors, the vision of BPM models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality.

History
Techniques to model business processes, such as the flow chart, functional flow block diagram, control flow diagram, Gantt chart, PERT diagram, and IDEF, have emerged since the beginning of the 20th century. Gantt charts were among the first to arrive, around 1899; flow charts followed in the 1920s, Functional Flow Block Diagrams and PERT in the 1950s, and Data Flow Diagrams and IDEF in the 1970s. Among the modern methods are Unified Modeling Language and Business Process Modeling Notation. Still, these represent just a fraction of the methodologies used over the years to document business processes.[1]
The term "business process modeling" itself was coined in the 1960s in the field of systems engineering by S. Williams in his 1967 article "Business Process Modeling Improves Administrative Control".[2] His idea was that techniques for obtaining a better understanding of physical control systems could be used in a similar way for business processes. It was not until the 1990s that the term became popular.
In the 1990s the term "process" became a new productivity paradigm.[3] Companies were encouraged to think in processes instead of functions and procedures. Process thinking looks at the chain of events in the company from purchase to supply, from order retrieval to sales, etc. The traditional modeling tools were developed to picture time and costs, while modern methods focus on cross-functional activities. These cross-functional activities have increased sharply in number and importance, due to the growth of complexity and dependencies. New methodologies, such as business process redesign, business process innovation, business process management, and integrated business planning, among others, all aim "at improving processes across the traditional functions that comprise a company".[3]

In the field of software engineering, the term "business process modeling" was set against the more common software process modeling, aiming to focus more on the state of the practice during software development.[4] At that time, the early 1990s, all existing and new modeling techniques to picture business processes were considered and called "business process modeling languages". In the object-oriented approach, business process modeling was considered an essential step in the specification of business application systems. It became the base of new methodologies that, for example, also supported data collection, data flow analysis, process flow diagrams and reporting facilities. Around 1995 the first visually oriented tools for business process modeling and implementation were presented.


BPM topics
Business model
A business model is a framework for creating economic, social, and/or other forms of value. The term "business model" is thus used for a broad range of informal and formal descriptions to represent core aspects of a business, including purpose, offerings, strategies, infrastructure, organizational structures, trading practices, and operational processes and policies.
In the most basic sense, a business model is the method of doing business by which a company can sustain itself, that is, generate revenue. The business model spells out how a company makes money by specifying where it is positioned in the value chain.

Business process
A business process is a collection of related, structured activities or tasks that produce a specific service or product (serve a particular goal) for a particular customer or customers. There are three main types of business processes:
1. Management processes, the processes that govern the operation of a system. Typical management processes include "Corporate Governance" and "Strategic Management".
2. Operational processes, processes that constitute the core business and create the primary value stream. Typical operational processes are Purchasing, Manufacturing, Marketing, and Sales.
3. Supporting processes, which support the core processes. Examples include Accounting, Recruitment, and Technical support.
A business process can be decomposed into several sub-processes, which have their own attributes but also contribute to achieving the goal of the super-process. The analysis of business processes typically includes the mapping of processes and sub-processes down to activity level.
A business process model is a model of one or more business processes, and defines the ways in which operations are carried out to accomplish the intended objectives of an organization. Such a model remains an abstraction and depends on the intended use of the model. It can describe the workflow or the integration between business processes, and it can be constructed at multiple levels.
A workflow is a depiction of a sequence of operations, declared as work of a person, work of a simple or complex mechanism, work of a group of persons,[5] work of an organization of staff, or machines. Workflow may be seen as any abstraction of real work, segregated into workshare, work split or whatever types of ordering. For control purposes, workflow may be a view on real work under a chosen aspect.

Artifact-centric Business process


The Artifact-centric business process model has emerged as a new promising approach for modeling business processes, as it provides a highly flexible solution to capture operational specifications of business processes. It particularly focuses on describing the data of business processes, known as artifacts, by characterizing business-relevant data objects, their lifecycles, and related services. The artifact-centric process modelling approach fosters the automation of the business operations and supports the flexibility of the workflow enactment and evolution.


Business process modeling tools


Business process modeling tools provide business users with the ability to model their business processes, implement and execute those models, and refine the models based on as-executed data. As a result, business process modeling tools can provide transparency into business processes, as well as the centralization of corporate business process models and execution metrics.[6]

Modeling and simulation


Modeling and simulation functionality allows for pre-execution "what-if" modeling and simulation. Post-execution optimization is available based on the analysis of actual as-performed metrics.[6]
Some business process modeling techniques are:
Business Process Model and Notation (BPMN)
Cognition enhanced Natural language Information Analysis Method (CogNIAM)
Extended Business Modeling Language (xBML)
Event-driven process chain (EPC)
ICAM DEFinition (IDEF0)
Unified Modeling Language (UML), with extensions for business processes such as Eriksson-Penker's
[Figures: a use case diagram, created by Ivar Jacobson in 1992 and currently integrated in UML, and an activity diagram, also currently adopted by UML.]

Programming language tools for BPM


BPM suite software provides programming interfaces (web services, application program interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine.[6] This component is often referenced as the engine of the BPM suite. Programming languages that are being introduced for BPM include:[7] BPMN Business Process Execution Language (BPEL), Web Services Choreography Description Language (WS-CDL). XML Process Definition Language (XPDL),

Some vendor-specific languages:
Architecture of Integrated Information Systems (ARIS) supports EPC
Java Process Definition Language (JBPM)

Other technologies related to business process modeling include model-driven architecture and service-oriented architecture.
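To illustrate how an enterprise application might leverage a BPM engine through such an interface, here is a minimal sketch; the REST endpoint, the payload shape, and the process key are hypothetical assumptions, not the API of any particular BPM suite.

    # Hedged sketch: start a process instance via a hypothetical BPM
    # engine REST API. The URL and JSON fields are illustrative only.
    import json
    import urllib.request

    def start_process_instance(base_url, process_key, variables):
        payload = json.dumps({"processKey": process_key,
                              "variables": variables}).encode()
        req = urllib.request.Request(base_url + "/process-instances",
                                     data=payload,
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # e.g. {"instanceId": "..."} in this sketch

    # Hypothetical usage against an in-house engine:
    # start_process_instance("http://bpm.example.local/api",
    #                        "invoice-approval",
    #                        {"amount": 1200, "requester": "jdoe"})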


Related topics
Business reference model
A business reference model is a reference model, concentrating on the functional and organizational aspects of an enterprise, service organization or government agency. In general, a reference model is a model of something that embodies the basic goal or idea of that thing, and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference model can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance.

Example of the US Federal Government Business Reference Model.[8]

The most familiar business reference model is the Business Reference Model of the US Federal Government. That model is a function-driven framework for describing the business operations of the Federal Government independent of the agencies that perform them. The Business Reference Model provides an organized, hierarchical construct for describing the day-to-day business operations of the Federal government. While many models exist for describing organizations - organizational charts, location maps, etc. - this model presents the business using a functionally driven approach.[9]

Business process integration


A business model, which may be considered an elaboration of a business process model, typically shows business data and business organizations as well as business processes. By showing business processes and their information flows, a business model allows business stakeholders to define, understand, and validate their business enterprise. The data model part of the business model shows how business information is stored, which is useful for developing software code. See the figure below for an example of the interaction between business process models and data models.[10]
Example of the interaction between business process and data models.[10]

Usually a business model is created after conducting an interview, which is part of the business analysis process. The interview consists of a facilitator asking a series of questions designed to extract information about the subject business process. The interviewer is referred to as a facilitator to emphasize that it is the participants, not the facilitator, who provide the business process information. Although the facilitator should have some knowledge of the subject business process, this is not as important as mastery of a pragmatic and rigorous method for interviewing business experts. The method is important because for most enterprises a team of facilitators is needed to collect information across the enterprise, and the findings of all the interviewers must be compiled and integrated once the interviews are completed.[10]

Business models are developed as defining either the current state of the process, in which case the final product is called the "as is" snapshot model, or a concept of what the process should become, resulting in a "to be" model. By comparing and contrasting "as is" and "to be" models the business analysts can determine whether the existing business processes and information systems are sound and only need minor modifications, or whether reengineering is required to correct problems or improve efficiency. Consequently, business process modeling and subsequent analysis can be used to fundamentally reshape the way an enterprise conducts its operations.[10]
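As a toy illustration of the "as is" versus "to be" comparison, the sketch below reduces two process models to sets of activities and reports the differences; the activity names are invented.

    # Contrast "as is" and "to be" process models as activity sets to
    # surface what must be added or removed (invented activity names).
    as_is = {"receive order", "manual credit check", "re-enter order", "ship"}
    to_be = {"receive order", "automated credit check", "ship"}

    print("activities to add:   ", sorted(to_be - as_is))
    print("activities to remove:", sorted(as_is - to_be))
    # A small difference may mean minor modifications suffice; a large
    # one may signal that reengineering is required.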

Business process reengineering


Business process reengineering (BPR) is an approach aiming at improvements by means of elevating efficiency and effectiveness of the processes that exist within and across organizations. The key to business process reengineering is for organizations to look at their business processes from a "clean slate" perspective and determine how they can best construct these processes to improve how they conduct business.

Business process reengineering began as a private sector technique to help organizations fundamentally rethink how they do their work in order to dramatically improve customer service, cut operational costs, and become world-class competitors. A key stimulus for reengineering has been the continuing development and deployment of sophisticated information systems and networks. Leading organizations are becoming bolder in using this technology to support innovative business processes, rather than refining current ways of doing work.[11]

Business process management


Business process management is a field of management focused on aligning organizations with the wants and needs of clients. It is a holistic management approach that promotes business effectiveness and efficiency while striving for innovation, flexibility and integration with technology. As organizations strive to attain their objectives, business process management attempts to improve processes continuously; it can therefore be described as a "process optimization" process.

References
[1] Thomas Dufresne & James Martin (2003). "Process Modeling for E-Business" (http://web.archive.org/web/20061220024049/http://mason.gmu.edu/~tdufresn/paper.doc). INFS 770 Methods for Information Systems Engineering: Knowledge Management and E-Business. Spring 2003.
[2] Williams, S. (1967). "Business Process Modeling Improves Administrative Control". In: Automation, December 1967, pp. 44-50.
[3] Asbjørn Rolstadås (1995). "Business process modeling and reengineering". In: Performance Management: A Business Process Benchmarking Approach. pp. 148-150.
[4] Brian C. Warboys (1994). Software Process Technology: Third European Workshop EWSPT'94, Villard de Lans, France, February 7-9, 1994: Proceedings. p. 252.
[5] See e.g., ISO 12052:2006 (http://www.iso.org).
[6] Workflow/Business Process Management (BPM) Service Pattern (http://enterprisearchitecture.nih.gov/ArchLib/AT/TA/WorkflowServicePattern.htm). June 27, 2007. Accessed 29 November 2008.
[7] "Business Process Modeling FAQ" (http://www.BPModeling.com/faq/). Retrieved 2008-11-02.
[8] FEA (2005). FEA Records Management Profile, Version 1.0 (http://www.archives.gov/records-mgmt/pdf/rm-profile.pdf). December 15, 2005.
[9] FEA Consolidated Reference Model Document (http://www.whitehouse.gov/sites/default/files/omb/assets/fea_docs/FEA_CRM_v23_Final_Oct_2007_Revised.pdf). October 2007.
[10] Paul R. Smith & Richard Sarfaty (1993). Creating a strategic plan for configuration management using Computer Aided Software Engineering (CASE) tools (http://www.osti.gov/energycitations/servlets/purl/10160331-YhIRrY/). Paper for the 1993 National DOE/Contractors and Facilities CAD/CAE User's Group.
[11] Business Process Reengineering Assessment Guide (http://www.gao.gov/special.pubs/bprag/bprag.pdf). United States General Accounting Office, May 1997.


Further reading
Lambertus Johannes Hommes, Bart-Jan Hommes (2004). The Evaluation of Business Process Modeling Techniques. Doctoral thesis, Technische Universiteit Delft.
Håvard D. Jørgensen (2004). Interactive Process Models (http://www.idi.ntnu.no/grupper/su/publ/phd/Jorgensen-thesis.pdf). Thesis, Norwegian University of Science and Technology, Trondheim, Norway.
Manuel Laguna, Johan Marklund (2004). Business Process Modeling, Simulation, and Design. Pearson/Prentice Hall, 2004.
Ovidiu S. Noran (2000). Business Modelling: UML vs. IDEF (http://www.cit.gu.edu.au/~noran/Docs/UMLvsIDEF.pdf). Paper, Griffith University.
Jan Recker (2005). "Process Modeling in the 21st Century" (http://www.bptrends.com/publicationfiles/05-06-ART-ProcessModeling21stCent-Recker1.pdf). In: BP Trends, May 2005.
Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009). Business Process Management (BPM) Standards: A Survey (http://ryanko.files.wordpress.com/2008/03/bpm-journal-koleelee-bpms-survey.pdf). In: Business Process Management Journal, Emerald Group Publishing Limited, Volume 15, Issue 5. ISSN 1463-7154.
Jan Vanthienen, S. Goedertier and R. Haesen (2007). "EM-BrA2CE v0.1: A Vocabulary and Execution Model for Declarative Business Process Modeling" (https://lirias.kuleuven.be/bitstream/123456789/162944/1/KBI_0728.pdf). DTEW - KBI_0728.

Joint application design


Joint application design (JAD) is a process used in the prototyping life cycle area of the Dynamic Systems Development Method (DSDM) to collect business requirements while developing new information systems for a company. "The JAD process also includes approaches for enhancing user participation, expediting development, and improving the quality of specifications." It consists of a workshop where knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system.[1] The attendees include high-level management officials who will ensure the product provides the needed reports and information at the end. This acts as a management process which allows Corporate Information Services (IS) departments to work more effectively with users in a shorter time frame.[2]

Through JAD workshops the knowledge workers and IT specialists are able to resolve any difficulties or differences between the two parties regarding the new information system. The workshop follows a detailed agenda in order to guarantee that all uncertainties between the parties are covered and to help prevent miscommunications, which can carry far more serious repercussions if not addressed until later in the process. (See below for key participants and key steps to an effective JAD.) In the end, this process will result in a new information system that is feasible and appealing to both the designers and end users.

"Although the JAD design is widely acclaimed, little is actually known about its effectiveness in practice." According to the Journal of Systems and Software, a field study was done at three organizations using JAD practices to determine how JAD influenced system development outcomes. The results of the study suggest that organizations realized modest improvement in systems development outcomes by using the JAD method. JAD use was most effective in small, clearly focused projects and less effective in large, complex projects.


Origin
Joint Application Design (JAD) was originally co-developed by Tony Crawford and Chuck Morris of IBM in the late 1970s. It was first deployed at Canadian International Paper, and JAD was used in IBM Canada for a while before being brought to the US. Initially, IBM used JAD to help sell and implement a software program it sold, called COPICS. It was widely adapted to many uses (system requirements, grain elevator design, problem-solving, etc.). Tony Crawford later developed JAD-Plan and then JAR (Joint Application Requirements).

Originally, JAD was designed to bring system developers and users of varying backgrounds and opinions together in a productive as well as creative environment. The meetings were a way of obtaining quality requirements and specifications. The structured approach provides a good alternative to traditional serial interviews by system analysts.

Key participants
Executive sponsor: The executive who charters the project, the system owner. They must be high enough in the organization to be able to make decisions and provide the necessary strategy, planning, and direction.
Subject matter experts: The business users, the IS professionals, and the outside experts that will be needed for a successful workshop. This group is the backbone of the meeting; they will drive the changes.
Facilitator/session leader: Chairs the meeting and directs traffic by keeping the group on the meeting agenda. The facilitator is responsible for identifying those issues that can be solved as part of the meeting and those which need to be assigned at the end of the meeting for follow-up investigation and resolution. The facilitator serves the participants and does not contribute information to the meeting.
Scribe/modeller/recorder/documentation expert: Records and publishes the proceedings of the meeting and does not contribute information to the meeting.
Observers: Generally members of the application development team assigned to the project. They sit behind the participants and silently observe the proceedings.

9 Key Steps
1. Identify project objectives and limitations. It is vital to have clear objectives for the workshop and for the project as a whole. The pre-workshop activities, the planning and scoping, set the expectations of the workshop sponsors and participants. Scoping identifies the business functions that are within the scope of the project. It also tries to assess both the project design and implementation complexity. The political sensitivity of the project should be assessed. Has this been tried in the past? How many false starts were there? How many implementation failures were there? Sizing is important. For best results, systems projects should be sized so that a complete design - right down to screens and menus - can be designed in 8 to 10 workshop days.
2. Identify critical success factors. It is important to identify the critical success factors for both the development project and the business function being studied. How will we know that the planned changes have been effective? How will success be measured? Planning for outcomes assessment helps to judge the effectiveness and the quality of the implemented system over its entire operational life.
3. Define project deliverables. In general, the deliverables from a workshop are documentation and a design. It is important to define the form and level of detail of the workshop documentation. What types of diagrams will be provided? What type or form of narrative will be supplied? It is a good idea to start using a CASE tool for diagramming support right from the start. Most of the available tools have good to great diagramming capabilities but their narrative support is generally weak. The narrative is best produced with your standard word processing software.
4. Define the schedule of workshop activities. Workshops vary in length from one to five days. The initial workshop for a project should not be less than three days. It takes the participants most of the first day to get comfortable with their roles, with each other, and with the environment. The second day is spent learning to understand each other and developing a common language with which to communicate issues and concerns. By the third day, everyone is working together on the problem and real productivity is achieved. After the initial workshop, the team-building has been done. Shorter workshops can be scheduled for subsequent phases of the project, for instance, to verify a prototype. However, it will take the participants from one to three hours to re-establish the team psychology of the initial workshop.
5. Select the participants. These are the business users, the IS professionals, and the outside experts that will be needed for a successful workshop. They are the true backbone of the meeting and will drive the changes.
6. Prepare the workshop material. Before the workshop, the project manager and the facilitator perform an analysis and build a preliminary design or straw man to focus the workshop. The workshop material consists of documentation, worksheets, diagrams, and even props that will help the participants understand the business function under investigation.
7. Organize workshop activities and exercises. The facilitator must design workshop exercises and activities to provide interim deliverables that build towards the final output of the workshop. The pre-workshop activities help design those workshop exercises. For example, for a business area analysis, what's in it? A decomposition diagram? A high-level entity-relationship diagram? A normalized data model? A state transition diagram? A dependency diagram? All of the above? None of the above? It is important to define the level of technical diagramming that is appropriate to the environment. The most important thing about a diagram is that it must be understood by the users. Once the diagram choice is made, the facilitator designs exercises into the workshop agenda to get the group to develop those diagrams. A workshop combines exercises that are serially oriented to build on one another with parallel exercises, in which each sub-team works on a piece of the problem or on the same thing for a different functional area. High-intensity exercises led by the facilitator energize the group and direct it towards a specific goal. Low-intensity exercises allow for detailed discussions before decisions. The discussions can involve the total group, or teams can work out the issues and present a limited number of suggestions for the whole group to consider. To integrate the participants, the facilitator can match people with similar expertise from different departments. To help participants learn from each other, the facilitator can mix the expertise. It is up to the facilitator to mix and match the sub-team members to accomplish the organizational, cultural, and political objectives of the workshop. A workshop operates on both the technical level and the political level. It is the facilitator's job to build consensus and communications, and to force issues out early in the process. There is no need to worry about the technical implementation of a system if the underlying business issues cannot be resolved.
8. Prepare, inform, and educate the workshop participants. All of the participants in the workshop must be made aware of the objectives and limitations of the project and the expected deliverables of the workshop. Briefing of participants should take place one to five days before the workshop. The briefing may be teleconferenced if participants are widely dispersed. The briefing document might be called the Familiarization Guide, Briefing Guide, Project Scope Definition, or the Management Definition Guide - or anything else that seems appropriate. It is a document of eight to twelve pages, and it provides a clear definition of the scope of the project for the participants. The briefing itself lasts two to four hours, and it provides the psychological preparation everyone needs to move forward into the workshop.
9. Coordinate workshop logistics. Workshops should be held off-site to avoid interruptions. Projectors, screens, PCs, tables, markers, masking tape, Post-It notes, and lots of other props should be prepared. What specific facilities and props are needed is up to the facilitator; they can vary from simple flip charts to electronic whiteboards. In any case, the layout of the room must promote the communication and interaction of the participants.

Advantages
JAD decreases the time and costs associated with the requirements elicitation process. During 2-4 weeks information is not only collected, but requirements agreed upon by various system users are identified.
Experience with JAD allows companies to customize their systems analysis process into even more dynamic ones, such as Double Helix, a methodology for mission-critical work.
JAD sessions help bring experts together, giving them a chance to share their views, understand the views of others, and develop a sense of project ownership.
The methods of JAD implementation are well known, as it is "the first accelerated design technique available on the market and probably best known", and can easily be applied by any organization.
Easy integration of CASE tools into JAD workshops improves session productivity and provides systems analysts with discussed and ready-to-use models.

Challenges
1. "Do your homework". Without multifaceted preparation for a JAD session, valuable time of professionals can be wasted easily. The wrong problem can be addressed, the wrong people can be invited to participate, inadequate resources for problem-solving can be used - all these scenarios can happen if organizers of the JAD session do not study the elements of the system being evaluated. 2. The team chosen to participate in a JAD workshop should include employees able to provide input on most, if not all, of the necessary parts of the problem. That is why particular attention should be paid during participant selection. The group should consist not only of employees from various departments who will interact with the new system, but also from different places on the organizational ladder. This variety of thought process understanding will reflect different, sometimes even conflicting points of view, but will allow participants to see a "different side of the coin". With better understanding of the undercurrents of processes JAD will bring to light a better model outline. 3. The facilitator as a smoothing and motivational force has to make sure that all participants, not only the most vocal ones, have a chance to offer their opinions, ideas, thoughts. All business experts on the JAD team should be encouraged to offer their input, making discussions more fruitful.

References
[1] Haag, Stephen; Maeve Cummings, Donald J. McCubbrey, Pinsonneault, and Donovan (2006). "Phase 2: Analysis". Information Management Systems for the Information Age. McGraw-Hill Ryerson. ISBN 978-0-07-281947-2.
[2] Jennerich, Bill (November 1990). "Joint Application Design: Business Requirements Analysis for Successful Re-Engineering" (http://www.bee.net/bluebird/jaddoc.htm). Retrieved 2009-02-06.

Bibliography
Yatco, Mei C. (1999). "Joint Application Design/Development" (http://www.umsl.edu/~sauter/analysis/JAD.html). University of Missouri-St. Louis. Retrieved 2009-02-06.
Soltys, Roman; Anthony Crawford (1999). "JAD for business plans and designs" (http://www.thefacilitator.com/htdocs/article11.html). Retrieved 2009-02-06.
Dennis, Alan R.; Glenda S. Hayes, and Robert M. Daniels, Jr. (Spring 1999). "Business process modeling with group support systems" (http://www.jmis-web.org/articles/v15_n4_p115/index.html). Journal of Management Information Systems 15 (4): 115-142.
Botkin, John C. "Customer Involved Participation as Part of the Application Development Process" (http://wwwsgi.ursus.maine.edu/gisweb/spatdb/amfm/am94001.html). Retrieved 1999-11-09.
Moeller, Walter E. "Facilitated Information Gathering Sessions: An Information Engineering Technique" (http://principlepartners.com/PPI-article-figs1.html). Retrieved 2010-03-22.
Jennerich, Bill. "Joint Application Design: Business Requirements Analysis for Successful Re-engineering" (http://www.bee.net/bluebird/jaddoc.htm). Last update time unknown. Accessed November 14, 1999.
Rush, Gary. "The History of JAD" (http://www.mgrconsulting.com/Newsletters/eNewsletter_06_07.pdf). MGR Consulting Newsletter, July 2006.
Davidson, E. J. (1999). "Joint application design (JAD) in practice" (http://www.sciencedirect.com/science/article/pii/S0164121298100808). Journal of Systems & Software, 45(3), 215-223. Retrieved from the ScienceDirect database.
Gottesdiener, Ellen (2002). Requirements by Collaboration: Workshops for Defining Needs. Addison-Wesley. ISBN 0-201-78606-0.
Wood, Jane and Silver, Denise. Joint Application Development. John Wiley & Sons Inc. ISBN 0-471-04299-4.


Software development process


A software development process, also known as a software development life-cycle (SDLC), is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. It is often considered a subset of systems development life cycle. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a life-cycle model a more general term and a software development process a more specific term. For example, there are many specific software development processes that 'fit' the spiral life-cycle model. ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

Overview
A large and growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.

Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.


Software development activities


For example, building an inpatient billing information system for a hospital ward involves several activities that explore the information required, which includes patient data, doctor data, drug data, and the patient's medical history data. Once the general requirements are collected, the scope of the development should be determined and clearly stated. This is often called a scope document.

Software development models


Several models exist to streamline the development process. Each one has its pros and cons, and it's up to the development team to adopt the most appropriate one for the project. Sometimes a combination of the models may be more suitable.

Waterfall model
The waterfall model shows a process where developers are to follow these phases in order:
1. Requirements specification (requirements analysis)
2. Software design
3. Implementation and integration
4. Testing (or validation)
5. Deployment (or installation)
6. Maintenance

In a strict Waterfall model, after each phase is finished, it proceeds to the next one. Reviews may occur before moving to the next phase which allows for the possibility of changes (which may involve a formal change control process). Reviews may also be employed to ensure that the phase is indeed complete; the phase completion criteria are often referred to as a "gate" that the project must pass through to move to the next phase. Waterfall discourages revisiting and revising any prior phase once it's complete. This "inflexibility" in a pure Waterfall model has been a source of criticism by supporters of other more "flexible" models.
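As a loose sketch of the "gate" idea described above, the code below refuses to start a phase until the previous phase's completion criteria are met; the phase names echo the list above, while the sign-off flags are invented placeholders for formal review results.

    # Phase-gate sketch: each phase's exit gate must pass before the
    # next phase may begin (sign-off flags are invented placeholders).
    PHASES = ["requirements", "design", "implementation", "testing", "deployment"]

    def gate_passed(phase, project):
        # In practice this would consult formal review results.
        return project.get(phase + "_signed_off", False)

    def run_waterfall(project):
        for phase in PHASES:
            if not gate_passed(phase, project):
                return "blocked at gate: " + phase
        return "released to maintenance"

    print(run_waterfall({"requirements_signed_off": True,
                         "design_signed_off": True}))
    # -> blocked at gate: implementation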

Spiral model
The key characteristic of a Spiral model is risk management at regular stages in the development cycle. In 1988, Barry Boehm published a formal software system development "spiral model", which combines some key aspects of the waterfall model and rapid prototyping methodologies, but placed emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.

The Spiral is visualized as a process passing through some number of iterations, with the four-quadrant diagram representative of the following activities:
1. Formulate plans to identify software targets, select options to implement the program, and clarify the project development restrictions.
2. Risk analysis: an analytical assessment of selected programs, to consider how to identify and eliminate risk.
3. Implementation of the project: the implementation of software development and verification.

The risk-driven spiral model, emphasizing the conditions of options and constraints in order to support software reuse, can also help integrate software quality into the product development as a special goal. However, the spiral model has some restrictive conditions, as follows:
1. The spiral model emphasizes risk analysis, and thus requires customers to accept this analysis and act on it. This requires both trust in the developer as well as the willingness to spend more to fix the issues, which is the reason why this model is often used for large-scale internal software development.
2. If the implementation of risk analysis will greatly affect the profits of the project, the spiral model should not be used.
3. Software developers have to actively look for possible risks and analyze them accurately for the spiral model to work.

The first stage is to formulate a plan to achieve the objectives with these constraints, and then strive to find and remove all potential risks through careful analysis and, if necessary, by constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether to terminate the project or to ignore the risks and continue anyway. Finally, the results are evaluated and the design of the next phase begins.

Iterative and incremental development


Iterative development[1] prescribes the construction of initially small but ever-larger portions of a software project to help all those involved to uncover important issues early before problems or faulty assumptions can lead to disaster.

Agile development
Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.

There are many variations of agile processes. In Extreme Programming (XP), the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass, and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. The same people who do the coding do design. (Only the last feature, merging design and code, is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of which is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.[2]

Other variations include Scrum and the Dynamic systems development method.
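To make the XP test-first loop concrete, here is a minimal sketch using Python's unittest module; the shopping-cart function and its tests are invented for illustration. The tests are written first, and coding stops once they pass.

    # Test-first sketch: the tests below are written before the code,
    # and the implementation exists only to make them pass.
    import unittest

    def total_price(items):
        # Sum of price * quantity, written just to satisfy the tests.
        return sum(price * qty for price, qty in items)

    class TotalPriceTest(unittest.TestCase):
        def test_empty_cart_costs_nothing(self):
            self.assertEqual(total_price([]), 0)

        def test_sums_price_times_quantity(self):
            self.assertEqual(total_price([(2.50, 2), (1.00, 3)]), 8.00)

    if __name__ == "__main__":
        unittest.main()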

Code and fix


"Code and fix" development is not so much a deliberate strategy as an artifact of naivet and schedule pressure on software developers.[3] Without much of a design in the way, programmers immediately begin producing code. At some point, testing begins (often late in the development cycle), and the inevitable bugs must then be fixed before the product can be shipped. See also: Continuous integration and Cowboy coding.

Process improvement models


Capability Maturity Model Integration
The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on best practice. Independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or of the software produced. CMMI has replaced CMM.

ISO 9000
ISO 9000 describes standards for a formally organized process to manufacture a product and the methods of managing and monitoring progress. Although the standard was originally created for the manufacturing sector, ISO 9000 standards have been applied to software development as well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that formalized business processes have been followed.

ISO/IEC 15504
ISO/IEC 15504 Information technology - Process assessment, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes". This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMMI. It models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team.


Formal methods
Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards, such as DO-178B, DO-178C, and Common Criteria, demand formal methods at the highest levels of categorization.

For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, VDM, and the Z notation. Formalization of software development is creeping in elsewhere, with the application of the Object Constraint Language (and specializations such as the Java Modeling Language) and especially with model-driven architecture allowing execution of designs, if not specifications.

For concurrent software and systems, Petri nets, process algebra, and finite state machines (which are based on automata theory; see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behavior.

Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic (FOL)) and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic (DL), is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.
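For a small, informal taste of executable specification, the sketch below checks a first-order-style property exhaustively over a bounded domain, in the spirit of (though far simpler than) the model checking mentioned above; the integer-square-root example is invented for illustration.

    # Informal sketch: check a FOL-style specification of integer square
    # root over a bounded domain, reporting any counterexample found.
    def isqrt(n):
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        return r

    def spec_holds(n):
        # Specification: for all n >= 0, isqrt(n)^2 <= n < (isqrt(n)+1)^2
        r = isqrt(n)
        return r * r <= n < (r + 1) * (r + 1)

    counterexamples = [n for n in range(10000) if not spec_holds(n)]
    print("no counterexamples found" if not counterexamples
          else counterexamples[:5])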

References
[1] ieeecomputersociety.org (http://doi.ieeecomputersociety.org/10.1109/MC.2003.1204375)
[2] Kent Beck, Extreme Programming, 2000.
[3] McConnell, Steve. "7: Lifecycle Planning". Rapid Development. Redmond, Washington: Microsoft Press. p. 140.

External links
Gerhard Fischer, "The Software Technology of the 21st Century: From Software Reuse to Collaborative Software Design" (http://l3d.cs.colorado.edu/~gerhard/papers/isfst2001.pdf), 2001 Lydia Ash: The Web Testing Companion: The Insider's Guide to Efficient and Effective Tests, Wiley, May 2, 2003. ISBN 0-471-43021-8 SaaSSDLC.com (http://SaaSSDLC.com/) Software as a Service Systems Development Life Cycle Project

Software development process Software development life cycle (SDLC) [visual image], software development life cycle (http://www.notetech. com/images/software_lifecycle.jpg) Heraprocess.org (http://www.heraprocess.org/) Hera is a light process solution for managing web projects


Agile software development


Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle. The Agile Manifesto[1] introduced the term in 2001.

History

Agile software development poster

Generic diagram of an agile methodology for software development


Predecessors
Incremental software development methods have been traced back to 1957.[2] In 1974, a paper by E. A. Edmonds introduced an adaptive software development process.[3] So-called lightweight software development methods evolved in the mid-1990s as a reaction against heavyweight methods, which were characterized by their critics as a heavily regulated, regimented, micromanaged, waterfall model of development. Proponents of lightweight methods (and now agile methods) contend that they are a return to development practices from early in the history of software development.[2]

Early implementations of lightweight methods include Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development, and Dynamic Systems Development Method (DSDM) (1995). These are now typically referred to as agile methodologies, after the Agile Manifesto published in 2001.[4]

Martin Fowler, widely recognized as one of the key founders of Agile methods

Agile Manifesto
In February 2001, 17 software developers[5] met at the Snowbird, Utah resort to discuss lightweight development methods. They published the Manifesto for Agile Software Development[1] to define the approach now known as agile software development. Some of the manifesto's authors formed the Agile Alliance, a nonprofit organization that promotes software development according to the manifesto's principles.

The Agile Manifesto reads, in its entirety, as follows:[1]
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.

The meanings of the manifesto items on the left within the agile software development context are described below:
Individuals and interactions: in agile development, self-organization and motivation are important, as are interactions like co-location and pair programming.
Working software: working software will be more useful and welcome than just presenting documents to clients in meetings.
Customer collaboration: requirements cannot be fully collected at the beginning of the software development cycle, therefore continuous customer or stakeholder involvement is very important.
Responding to change: agile development is focused on quick responses to change and continuous development.[6]

Twelve principles underlie the Agile Manifesto, including:[7]
Customer satisfaction by rapid delivery of useful software
Welcome changing requirements, even late in development
Working software is delivered frequently (weeks rather than months)
Working software is the principal measure of progress
Sustainable development, able to maintain a constant pace
Close, daily co-operation between business people and developers
Face-to-face conversation is the best form of communication (co-location)
Projects are built around motivated individuals, who should be trusted
Continuous attention to technical excellence and good design
Simplicity - the art of maximizing the amount of work not done - is essential
Self-organizing teams
Regular adaptation to changing circumstances

In 2005, a group headed by Alistair Cockburn and Jim Highsmith wrote an addendum of project management principles, the Declaration of Interdependence,[8] to guide software project management according to agile development methods.

Characteristics
There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.

Agile methods break tasks into small increments with minimal planning and do not directly involve long-term planning. Iterations are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, requirements analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.[9] Multiple iterations might be required to release a product or new features.

Pair programming, an agile development technique used by XP. Note information radiators in the background.

Team composition in an agile project is usually cross-functional and self-organizing, without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration's requirements.

Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. Most agile teams work in a single open office (called a bullpen), which facilitates such communication. Team size is typically small (5-9 people) to simplify team communication and team collaboration. Larger development efforts can be delivered by multiple teams working toward a common goal or on different parts of an effort. This might require a coordination of priorities across teams. When a team works in different locations, they maintain daily contact through videoconferencing, voice, e-mail, etc.

No matter what development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf[10] and makes a personal commitment to being available for developers to answer mid-iteration questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.

Most agile implementations use a routine and formal daily face-to-face communication among team members. This specifically includes the customer representative and any interested stakeholders as observers. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face communication exposes problems as they arise. "These meetings, sometimes referred to as daily stand-ups or daily scrum meetings, are held at the same place and same time every day and should last no more than 15 minutes. Standing up usually enforces that rule."[11]

Agile development emphasizes working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods. The agile method encourages stakeholders to prioritize "wants" with other iteration outcomes, based exclusively on business value perceived at the beginning of the iteration (also known as value-driven).[12]

Specific tools and techniques, such as continuous integration, automated or xUnit tests, pair programming, test-driven development, design patterns, domain-driven design, code refactoring and other techniques, are often used to improve quality and enhance project agility.

Light Agile Development (LAD) is a flavor of agile methodology that applies hand-picked techniques from the wider range of agile practices to suit different companies, development teams, situations and environments. Another key aspect of LAD is that it tends to be user-centric, focusing primarily on the user experience and usable software interfaces, and uses agile methodologies to deliver them. Most real-world implementations of Agile are really LAD in practice, since a core value of the methodology is to be flexible, sensible and to focus on getting stuff done.

In agile software development, an information radiator is a (normally large) physical display placed in a prominent location in an office, where passers-by can see it, and which presents an up-to-date summary of the status of a software product or products.[13][14] The name was coined by Alistair Cockburn, and described in his 2002 book Agile Software Development.[14] A build light indicator may be used to inform a team about the current status of their project.


Comparison with other methods


Methods exist on a continuum from adaptive to predictive.[15] Agile methods lie on the adaptive side of this continuum.

Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team will have difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method will be about what will happen on that date. An adaptive team cannot report exactly what tasks they will do next week, but only which features they plan for next month. When asked about a release six months from now, an adaptive team might be able to report only the mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on planning the future in detail. A predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive teams have difficulty changing direction. The plan is typically optimized for the original destination, and changing direction can require completed work to be started over. Predictive teams will often institute a change control board to ensure that only the most valuable changes are considered.

Formal methods, in contrast to adaptive and predictive methods, focus on computer science theory with a wide array of types of provers. A formal method attempts to prove the absence of errors with some level of determinism. Some formal methods are based on model checking and provide counterexamples for code that cannot be proven. Agile teams may employ highly disciplined formal methods.[16]

Agile methods have much in common with the Rapid Application Development techniques from the 1980s and 1990s as espoused by James Martin and others. In addition to technology-focused methods, customer- and design-centered methods, such as Visualization-Driven Rapid Prototyping developed by Brian Willison, work to engage customers and end users to facilitate agile software development.

In 2008 the Software Engineering Institute (SEI) published the technical report "CMMI or Agile: Why Not Embrace Both"[17] to make clear that Capability Maturity Model Integration and agile can co-exist. CMMI Version 1.3 includes support for Agile Software Development.[18]

One of the differences between agile and waterfall is that testing of the software is conducted at different points in the software development lifecycle. In the waterfall model, there is a separate testing phase after implementation. In Agile XP, testing is done concurrently with implementation.


Agile methods
Well-known agile software development methods include:
Agile Modeling
Agile Unified Process (AUP)
Crystal Clear
Dynamic Systems Development Method (DSDM)
Extreme Programming (XP)
Feature Driven Development (FDD)
GSD
Kanban (development)
Lean software development
Scrum
Velocity tracking

Method tailoring
In the literature, different terms refer to the notion of method adaptation, including method tailoring, method fragment adaptation and situational method engineering. Method tailoring is defined as: A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments.[19]

Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for this purpose and has been successfully tailored in a CMM context.[20] Situation-appropriateness can be considered as a distinguishing characteristic between agile methods and traditional software development methods, with the latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt working practices according to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number of principles, could be adapted (Aydin, 2004).[19]

Extreme Programming (XP) makes the need for method adaptation explicit. One of the fundamental ideas of XP is that no one process fits every project; rather, practices should be tailored to the needs of individual projects. Partial adoption of XP practices, as suggested by Beck, has been reported on several occasions.[21] Mehdi Mirakhorli[22] proposes a tailoring practice that he feels provides a sufficient roadmap and guidelines for adapting all the practices. RDP Practice is designed for customizing XP. This practice, first proposed as a long research paper in the APSO workshop at the ICSE 2008 conference, is currently the only proposed and applicable method for customizing XP. Although it is specifically a solution for XP, this practice has the capability of extending to other methodologies.

At first glance, this practice seems to be in the category of static method adaptation, but experience with RDP Practice suggests that it can be treated like dynamic method adaptation. The distinction between static method adaptation and dynamic method adaptation is subtle.[23] The key assumption behind static method adaptation is that the project context is given at the start of a project and remains fixed during project execution. The result is a static definition of the project context. Given such a definition, route maps can be used to determine which structured method fragments should be used for that particular project, based on predefined sets of criteria. Dynamic method adaptation, in contrast, assumes that projects are situated in an emergent context. An emergent context implies that a project has to deal with emergent factors that affect relevant conditions but are not predictable. This also means that a project context is not fixed, but changing during project execution. In such a case prescriptive route maps are not appropriate. The practical implication of dynamic method adaptation is that project managers often have to modify structured fragments, or even innovate new fragments, during the execution of a project (Aydin et al., 2005).[23]


Software development life cycle


The agile methods are focused on different aspects of the software development life cycle. Some focus on the practices (extreme programming, pragmatic programming, agile modeling), while others focus on managing the software projects (the scrum approach). Yet there are approaches providing full coverage over the development life cycle (dynamic systems development method, or DSDM, and the IBM Rational Unified Process, or RUP), while most of them are suitable from the requirements specification phase on (feature-driven development, or FDD, for example).[21] Thus, there is a clear difference between the various agile software development methods in this regard. Whereas DSDM and RUP do not need complementing approaches to support software development, the others do to a varying degree. DSDM can be used by anyone (although only DSDM members can offer DSDM products or services). RUP, then, is a commercially sold development environment (Abrahamsson, Salo, Ronkainen, & Warsta, 2002).[21]

Measuring agility
While agility can be seen as a means to an end, a number of approaches have been proposed to quantify agility. Agility Index Measurements (AIM)[24] score projects against a number of agility factors to achieve a total. The similarly named Agility Measurement Index,[25] scores developments against five dimensions of a software project (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals.[26] Another study using fuzzy mathematics[27] has suggested that project velocity can be used as a metric of agility. There are agile self-assessments to determine whether a team is using agile practices (Nokia test,[28] Karlskrona test,[29] 42 points test[30]). While such approaches have been proposed to measure agility, the practical application of such metrics has yet to be seen. Historically, there is a lack of data on agile projects that failed to produce good results. Studies can be found that report poor projects due to a deficient implementation of an agile method, or methods, but none where it was felt that they were executed properly and failed to deliver on its promise. "This may be a result of a reluctance to publish papers on unsuccessful projects, or it may in fact be an indication that, when implemented correctly, Agile Methods work."[31] However, there is agile software development ROI data available from the DACS ROI Dashboard.[32]
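Since project velocity is suggested above as a possible agility metric, the following sketch shows how a team might compute it from completed story points; the sample data and the stable-velocity forecasting assumption are invented for illustration.

    # Illustrative sketch: per-iteration velocity and a naive forecast
    # from completed story points (invented sample data).
    completed_points = [18, 22, 17, 25]  # story points per iteration

    velocity = sum(completed_points) / len(completed_points)
    print("average velocity:", velocity)  # -> 20.5 points per iteration

    # Simple forecast under the (strong) assumption of stable velocity:
    backlog_points = 123
    iterations_needed = -(-backlog_points // int(velocity))  # ceiling division
    print("forecast iterations to clear backlog:", iterations_needed)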

Experience and reception


One of the early studies reporting gains in quality, productivity, and business satisfaction by using Agile methods was a survey conducted by Shine Technologies from November 2002 to January 2003.[33] A similar survey conducted in 2006 by Scott Ambler, the Practice Leader for Agile Development with IBM Rational's Methods Group reported similar benefits.[34] In a survey conducted by VersionOne (a provider of software for planning and tracking agile software development projects) in 2008, 55% of respondents answered that agile methods had been successful in 90-100% of cases.[35] Others claim that agile development methods are still too young to require extensive academic proof of their success.[36]


Suitability
Large-scale agile software development remains an active research area.[37][38] Agile development has been widely seen as being more suitable for certain types of environment, including small teams of experts.[39][40]:157 Positive reception towards agile methods has been observed in the embedded domain across Europe in recent years.[41]

Some things that may negatively impact the success of an agile project are:
Large-scale development efforts (>20 developers), though scaling strategies[38] and evidence of some large projects[42] have been described.
Distributed development efforts (non-colocated teams). Strategies have been described in Bridging the Distance[43] and Using an Agile Software Process with Offshore Development.[44]
Forcing an agile process on a development team.[45]
Mission-critical systems where failure is not an option at any cost (e.g. software for surgical procedures).

The early successes, challenges and limitations encountered in the adoption of agile methods in a large organization have been documented.[46]

In terms of outsourcing agile development, Michael Hackett, Sr. Vice President of LogiGear Corporation, has stated that "the offshore team ... should have expertise, experience, good communication skills, inter-cultural understanding, trust and understanding between members and groups and with each other."[47]

Risk analysis can also be used to choose between adaptive (agile or value-driven) and predictive (plan-driven) methods.[12] Barry Boehm and Richard Turner suggest that each side of the continuum has its own home ground, as follows:[39]

Suitability of different development methods


Agile home ground                 Plan-driven home ground              Formal methods
Low criticality                   High criticality                     Extreme criticality
Senior developers                 Junior developers                    Senior developers
Requirements change often         Requirements do not change often     Limited requirements, limited features (see Wirth's law)
Small number of developers        Large number of developers           Requirements that can be modeled
Culture that responds to change   Culture that demands order           Extreme quality
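
As a rough illustration of using risk analysis over these home-ground factors to pick a method, the sketch below tallies which side of the continuum each of the five factors favors and reports the majority. The boolean encoding, the factor names, and the simple majority rule are assumptions for illustration; Boehm and Turner's actual risk-based analysis is considerably richer than a vote.

    # Illustrative sketch only: a majority vote over the five home-ground
    # factors from the table above. The encoding and threshold are assumptions.

    def home_ground(criticality_high, developers_junior, requirements_stable,
                    team_large, culture_demands_order):
        """Each argument is True when that factor points to the plan-driven side."""
        plan_driven_votes = sum([criticality_high, developers_junior,
                                 requirements_stable, team_large,
                                 culture_demands_order])
        return "plan-driven" if plan_driven_votes >= 3 else "agile"

    # A small senior team with volatile requirements and low criticality:
    print(home_ground(False, False, False, False, False))  # -> agile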

Criticism
One common criticism of agile software development methods is that they are developer-centric rather than user-centric: agile software development focuses on processes for getting requirements and developing code, not on product design. Mike Gualtieri, principal analyst for agile software development at Forrester Research, published a widely read criticism stating that software developers are not coders, but experience creators.[48] Agile methodologies can also be inefficient in large organizations and for certain types of projects; agile methods seem best suited to developmental and non-sequential projects. Many organizations believe that agile methodologies are too extreme and adopt a hybrid approach that mixes elements of agile and plan-driven approaches.[49]


References
[1] Beck, Kent; et al. (2001). "Manifesto for Agile Software Development" (http://agilemanifesto.org/). Agile Alliance. Retrieved 14 June 2010.
[2] Gerald M. Weinberg, as quoted in Larman, Craig; Basili, Victor R. (June 2003). "Iterative and Incremental Development: A Brief History". Computer 36 (6): 47–56. doi:10.1109/MC.2003.1204375. ISSN 0018-9162. "We were doing incremental development as early as 1957, in Los Angeles, under the direction of Bernie Dimsdale [at IBM's Service Bureau Corporation]. He was a colleague of John von Neumann, so perhaps he learned it there, or assumed it as totally natural. I do remember Herb Jacobs (primarily, though we all participated) developing a large simulation for Motorola, where the technique used was, as far as I can tell .... All of us, as far as I can remember, thought waterfalling of a huge project was rather stupid, or at least ignorant of the realities. I think what the waterfall description did for us was make us realize that we were doing something else, something unnamed except for 'software development.'"
[3] Edmonds, E. A. (1974). "A Process for the Development of Software for Nontechnical Users as an Adaptive System". General Systems 19: 215–218.
[4] Larman, Craig (2004). Agile and Iterative Development: A Manager's Guide. Addison-Wesley. p. 27. ISBN 978-0-13-111155-4.
[5] Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Stephen J. Mellor, Ken Schwaber, Jeff Sutherland, and Dave Thomas.
[6] Ambler, S.W. "Examining the Agile Manifesto" (http://www.ambysoft.com/essays/agileManifesto.html). Retrieved 6 April 2011.
[7] Beck, Kent; et al. (2001). "Principles behind the Agile Manifesto" (http://www.agilemanifesto.org/principles.html). Agile Alliance. Archived (http://web.archive.org/web/20100614043008/http://www.agilemanifesto.org/principles.html) from the original on 14 June 2010. Retrieved 6 June 2010.
[8] Anderson, David (2005). "Declaration of Interdependence" (http://pmdoi.org).
[9] Beck, Kent (1999). "Embracing Change with Extreme Programming". Computer 32 (10): 70–77. doi:10.1109/2.796139.
[10] Gauthier, Alexandre (17 August 2011). "What is scrum" (http://www.planbox.com/resources/agile-artifacts#roles). Planbox.
[11] "Daily Stand-up Meeting" (http://www.planbox.com/resources/daily-scrum). Planbox.
[12] Sliger, Michele; Broderick, Stacia (2008). The Software Project Manager's Bridge to Agility. Addison-Wesley. p. 46. ISBN 0-321-50275-2.
[13] Cockburn, Alistair. "Information radiator" (http://alistair.cockburn.us/Information+radiator).
[14] Ambler, Scott (12 April 2002). Agile Modeling: Effective Practices for eXtreme Programming and the Unified Process. John Wiley & Sons. pp. 12, 164, 363. ISBN 978-0471202820.
[15] Boehm, B.; R. Turner (2004). Balancing Agility and Discipline: A Guide for the Perplexed. Boston, MA: Addison-Wesley. ISBN 0-321-18612-5. Appendix A, pages 165–194.
[16] Black, S. E.; Boca, P. P.; Bowen, J. P.; Gorman, J.; Hinchey, M. G. (September 2009). "Formal versus agile: Survival of the fittest". IEEE Computer 49 (9): 39–45.
[17] Technical Note CMU/SEI-2008-TN-003: CMMI or Agile: Why Not Embrace Both (http://www.sei.cmu.edu/library/abstracts/reports/08tn003.cfm).
[18] Agile Support in CMMI Version 1.3.
[19] Aydin, M.N., Harmsen, F., Slooten, K. v., & Stegwee, R. A. (2004). An Agile Information Systems Development Method in Use. Turk J Elec Engin, 12(2), 127–138.
[20] Abrahamsson, P., Warsta, J., Siponen, M.T., & Ronkainen, J. (2003). New Directions on Agile Methods: A Comparative Analysis. Proceedings of ICSE'03, 244–254.
[21] Abrahamsson, P., Salo, O., Ronkainen, J., & Warsta, J. (2002). Agile Software Development Methods: Review and Analysis. VTT Publications 478.
[22] http://portal.acm.org/citation.cfm?id=1370143.1370149&coll=ACM&dl=ACM&CFID=69442744&CFTOKEN=96226775
[23] Aydin, M.N., Harmsen, F., Slooten van K., & Stegwee, R.A. (2005). On the Adaptation of an Agile Information Systems Development Method. Journal of Database Management, Special Issue on Agile Analysis, Design, and Implementation, 16(4), 20–24.
[24] "David Bock's Weblog" (http://jroller.com/page/bokmann?entry=improving_your_processes_aim_high). Jroller.com. Retrieved 2 April 2010.
[25] "Agility measurement index" (http://doi.acm.org/10.1145/1185448.1185509). Doi.acm.org. Retrieved 2 April 2010.
[26] Peter Lappo; Henry C.T. Andrew. "Assessing Agility" (http://www.smr.co.uk/presentations/measure.pdf). Retrieved 6 June 2010.
[27] Kurian, Tisni (2006). Agility Metrics: A Quantitative Fuzzy Based Approach for Measuring Agility of a Software Process. ISAM, Proceedings of the International Conference on Agile Manufacturing '06 (ICAM-2006), Norfolk, U.S.
[28] Joe Little (2 December 2007). "Nokia test, a Scrum-specific test" (http://agileconsortium.blogspot.com/2007/12/nokia-test.html). Agileconsortium.blogspot.com. Retrieved 6 June 2010.
[29] Mark Seuffert, Piratson Technologies, Sweden. "Karlskrona test, a generic agile adoption test" (http://www.piratson.se/archive/Agile_Karlskrona_Test.html). Piratson.se. Retrieved 6 June 2010.
[30] "How agile are you, a Scrum-specific test" (http://www.agile-software-development.com/2008/01/how-agile-are-you-take-this-42-point.html). Agile-software-development.com. Retrieved 6 June 2010.
[31] David Cohen, Mikael Lindvall, Patricia Costa. "Agile Software Development" (http://www.thedacs.com/techs/abstract/345473). Data & Analysis Center for Software, January 2003.
[32] DACS ROI Dashboard (http://www.thedacs.com/databases/roi/). Retrieved 11 November 2011.
[33] "Agile Methodologies Survey Results" (http:/ / www. shinetech. com/ attachments/ 104_ShineTechAgileSurvey2003-01-17. pdf) (PDF). Shine Technologies (http:/ / www. shinetech. com). January 2003. . Retrieved 3 June 2010. "95% [stated] that there was either no effect or a cost reduction . . . 93% stated that productivity was better or significantly better . . . 88% stated that quality was better or significantly better . . . 83% stated that business satisfaction was better or significantly better" [34] Ambler, Scott (3 August 2006). "Survey Says: Agile Works in Practice" (http:/ / www. drdobbs. com/ architecture-and-design/ 191800169;jsessionid=2QJ23QRYM3H4PQE1GHPCKH4ATMY32JVN?queryText=agile+ survey). Dr. Dobb's. . Retrieved 3 June 2010. "Only 6 percent indicated that their productivity was lowered . . . No change in productivity was reported by 34 percent of respondents and 60 percent reported increased productivity. . . . 66 percent [responded] that the quality is higher. . . . 58 percent of organizations report improved satisfaction, whereas only 3 percent report reduced satisfaction." [35] "The State of Agile Development" (http:/ / www. versionone. com/ pdf/ 3rdAnnualStateOfAgile_FullDataReport. pdf) (PDF). VersionOne, Inc.. 2008. . Retrieved 3 July 2010. "Agile delivers" [36] "Answering the "Where is the Proof That Agile Methods Work" Question" (http:/ / www. agilemodeling. com/ essays/ proof. htm). Agilemodeling.com. 19 January 2007. . Retrieved 2 April 2010. [37] Agile Processes Workshop II Managing Multiple Concurrent Agile Projects. Washington: OOPSLA 2002 [38] W. Scott Ambler (2006) Supersize Me (http:/ / www. drdobbs. com/ 184415491) in Dr. Dobb's Journal, 15 February 2006. [39] Boehm, B.; R. Turner (2004). Balancing Agility and Discipline: A Guide for the Perplexed. Boston, MA: Addison-Wesley. pp.5557. ISBN0-321-18612-5. [40] Beck, K. (1999). Extreme Programming Explained: Embrace Change. Boston, MA: Addison-Wesley. ISBN0-321-27865-8. [41] K Petersen's doctoral research in Sweden Implementing Lean and Agile Software Development in Industry (http:/ / www. bth. se/ tek/ aps/ kps. nsf/ pages/ phd-studies) [42] Schaaf, R.J. (2007). Agility XL Systems and Software Technology Conference 2007 (http:/ / www. sstc-online. org/ Proceedings/ 2007/ pdfs/ RJS1722. pdf), Tampa, FL [43] "Bridging the Distance" (http:/ / www. drdobbs. com/ architecture-and-design/ 184414899). Sdmagazine.com. . Retrieved 1 February 2011. [44] Martin Fowler. "Using an Agile Software Process with Offshore Development" (http:/ / www. martinfowler. com/ articles/ agileOffshore. html). Martinfowler.com. . Retrieved 6 June 2010. [45] [The Art of Agile Development James Shore & Shane Warden pg 47] [46] Evans, Ian. "Agile Delivery at British Telecom" (http:/ / www. methodsandtools. com/ archive/ archive. php?id=43). . Retrieved 21 February 2011. [47] (http:/ / www. logigear. com/ in-the-news/ 973-agile. html) LogiGear, PC World Viet Nam, Jan 2011 [48] Gualtieri, Mike (2011 [last update]). "Agile Software Is A Cop-Out; Here's What's Next | Forrester Blogs" (http:/ / blogs. forrester. com/ mike_gualtieri/ 11-10-12-agile_software_is_a_cop_out_heres_whats_next). blogs.forrester.com. . Retrieved 28 November 2011. [49] Barlow, Jordan B.; Justin Scott Giboney, Mark Jeffery Keith, David W. Wilson, Ryan M. Schuetzler, Paul Benjamin Lowry, Anthony Vance (2011). "Overview and Guidance on Agile Development in Large Organizations" (http:/ / aisel. aisnet. org/ cais/ vol29/ iss1/ 2/ ). 
Communications of the Association for Information Systems 29 (1): 2544. .


Further reading
- Annual State of Agile Development Survey: 2011 trends (http://www.versionone.com/state_of_agile_development_survey/11/)
- The Future of Agile Software Development (http://www.targetprocess.com/rightthing.html)
- Abrahamsson, P., Salo, O., Ronkainen, J., & Warsta, J. (2002). Agile Software Development Methods: Review and Analysis (http://agile.vtt.fi/publications.html). VTT Publications 478.
- Cohen, D., Lindvall, M., & Costa, P. (2004). An Introduction to Agile Methods. In Advances in Computers (pp. 1–66). New York: Elsevier Science.
- Dingsøyr, Torgeir; Dybå, Tore; Moe, Nils Brede (eds.): Agile Software Development: Current Research and Future Directions (http://www.amazon.co.uk/Agile-Software-Development-Research-Directions/dp/3642125743). Springer, Berlin Heidelberg, 2010.
- Fowler, Martin. Is Design Dead? (http://www.martinfowler.com/articles/designDead.html). Appeared in Extreme Programming Examined, G. Succi and M. Marchesi, eds., Addison-Wesley, Boston, 2001.
- Larman, Craig; Basili, Victor R. "Iterative and Incremental Development: A Brief History" (http://www.highproductivity.org/r6047.pdf). IEEE Computer, June 2003.
- Riehle, Dirk. A Comparison of the Value Systems of Adaptive Software Development and Extreme Programming: How Methodologies May Learn From Each Other (http://www.riehle.org/computer-science/research/2000/xp-2000.html). Appeared in Extreme Programming Examined, G. Succi and M. Marchesi, eds., Addison-Wesley, Boston, 2001.
- Rother, Mike (2009). Toyota Kata (http://books.google.com/?id=_1lhPgAACAAJ&dq=toyota+kata). McGraw-Hill. ISBN 0-07-163523-8.
- Stephens, M.; Rosenberg, D. Extreme Programming Refactored: The Case Against XP. Apress L.P., Berkeley, California, 2003. ISBN 1-59059-096-1.
- Shore, J., & Warden, S. (2008). The Art of Agile Development. O'Reilly Media, Inc.
- Wik, Philip. "Effective Top-Down SOA Management In An Efficient Bottom-Up Agile World" (http://soamag.com/I38/0410-1.php). Service Technology Magazine, April 2010.
- Willison, Brian (2008). Iterative Milestone Engineering Model. New York, NY.
- Willison, Brian (2008). Visualization Driven Rapid Prototyping. Parsons Institute for Information Mapping.


External links
- Agile (http://www.dmoz.org/Computers/Programming/Methodologies/Agile/) at the Open Directory Project
- Two Ways to Build a Pyramid (http://www.informationweek.com/two-ways-to-build-a-pyramid/6507351), John Mayo-Smith (VP of Technology at R/GA), October 22, 2001
- The New Methodology (http://martinfowler.com/articles/newMethodology.html), Martin Fowler's description of the background to agile methods
- Ten Authors of The Agile Manifesto Celebrate its Tenth Anniversary (http://www.pragprog.com/magazines/2011-02/agile--)
- A look into the PMI-ACP (Agile Certified Practitioner) (http://www.pmi.org/Certification/New-PMI-Agile-Certification.aspx)
- Agile Manifesto (http://agilemanifesto.org/)

User, Macintosh123, Magnus Manske, Maitchy, Makeemlighter, Manassehkatz, Mandarax, Manickam001, Manmohan Brahma, Manojbp07, Manticore, March23.1999, Marek69, MarioRadev, MarkSG, Markaci, Marko75, MarmotteNZ, Martarius, Martin smith 637, Martinwguy, Masonkinyon, Materialscientist, MattGiuca, Mattbr, Matthardingu, Matthuxtable, MattieTK, Mav, Max Naylor, Maxim, Maximus Rex, Maziotis, Mbalamuruga, Mblumber, Mc hammerutime, McDutchie, McLovin34, Mcloud91, Mdd, Mdikici, Meaghan, Medovina, Meegs, Melab-1, Melsaran, Memset, Mendalus, Meneth, Meowmeow8956, Merlion444, MetaEntropy, Miaers, Michael B. Trausch, MichaelR., Michaelas10, Mickyfitz13, Mike33, Mike92591, MikeLynch, Mikeblas, Milan Kerlger, Mild Bill Hiccup, Minesweeper, Minghong, Miquonranger03, Mirror Vax, Miss Madeline, MisterCharlie, Mistman123, MithrandirAgain, Mmxx, Mnemoc, Mononomic, Monz, Moondyne, Mortus Est, MovGP0, Mppl3z, Mptb3, Mr.Z-man, MrOllie, MrPaul84, MrX, Mrankur, Mthomp1998, Mualif02, Muehlburger, Mufka, Mujz1, Muralihbh, Murderbike, Musiphil, Mwanner, Mwheatland, Mwtoews, Mxn, Mslimix, N sharma000, N419BH, N5iln, NNLauron, NPrice, NULL, Nakon, Nanshu, Naohiro19 revertvandal, NapoliRoma, Nasnema, NawlinWiki, Nayak143, Nayvik, Ndavidow, NellieBly, Nergaal, Neversay.misher, Ngch89, Ngien, Ngyikp, Nick, Nikai, Ninuxpdb, Nixeagle, Njuuton, Nk, Nlu, No Guru, Nobody Ent, Noldoaran, Nono64, Norm, Northamerica1000, NotAnonymous0, Nothingisoftensomething, Notinasnaid, Nrabinowitz, Nsaa, Numlockfishy, Nvt, Nwusr123log, O.Koslowski, OKtosiTe, Ocolon, Oda Mari, Odell421, Odie5533, Ohnoitsjamie, Olathe, Oliverdl, Olivier, OllieWilliamson, OlurotimiO, Omicronpersei8, Omniplex, Ondertitel, Onorem, Oosoom, Openmikenite, Optimisticrizwan, OrgasGirl, Orrs, Oxymoron83, P.Marlow, Papadopa, Parasti, Patato, Patrick, Paul E T, Paul1337, Pcbsder, Pepper, PeterStJohn, Petrb, PhJ, PhantomS, Phgao, Phil websurfer@yahoo.com, Philip Howard, Philip Trueman, Photonik UK, Piano non troppo, Pierre Monteux, Pinethicket, Pinkadelica, Pithree, Plasticup, PlutosGeek, Pmlineditor, Polluks, Polyamorph, Pontiacsunfire08, Posix memalign, Prashanthomesh, PrestonH, Programming geek, Prolog, Prophile, Pruefer, Public Menace, Puffin, Qaanol, Quarkuar, QuiteUnusual, Qwerty0, Qwyrxian, R'n'B, R. S. Shaw, RA0808, RB972, RTC, Raanoo, Rabi Javed, Raffaele Megabyte, RainbowOfLight, Rainsak, Rami R, Ramif 47, Random Hippopotamus, RandomAct, RaseaC, Ratnadeepm, RattusMaximus, RavenXtra, Rayngwf, Raysonho, Raywil, RazorICE, Rbakels, Rbanzai, Rdsmith4, Reach Out to the Truth, RedWolf, Reedy, Rektide, Remixsoft10, Rettetast, RexNL, Rfc1394, Rhyswynne, Riana, Rich Farmbrough, Rilak, Rjgarr, Rjwilmsi, Rlinfinity, Rmere, Rmhermen, Robert K S, Robert Merkel, RobertG, Robertwharvey, RockMaster, Rockstone35, Rodri316, RogierBrussee, Rokfaith, Rolandg, Romanm, Ronark, Ronhjones, RossPatterson, Rotem Dan, RoyBoy, Rreagan007, Rrelf, Rror, Rubena, Rubicon, Ruud Koot, Rzelnik, S.borchers, S10462, SF007, SNIyer12, SPQRobin, Safinaskar, Sainath468, Sakariyerirash, Sam Vimes, SampigeVenkatesh, Sander123, Sanfranman59, Sango123, Sardanaphalus, Sarikaanand, Scherr, SchmuckyTheCat, SchreyP, SchuminWeb, Schwallex, Schzmo, Sdfisher, Sean William, Seba5618, Sedmic, Senator Palpatine, Sewing, Shadowjams, Sharanbngr, Sharkert, SheikYerBooty, Shizhao, Shreevatsa, Shreshth91, Shriram, Sidious1741, Sigma 7, Signalhead, Silas S. 
Brown, Simon the Dragon, SimonP, Simxp, Sir Nicholas de Mimsy-Porpington, SirGrant, SirGre, SivaKumar, Skarebo, Skomes, Slgrandson, Slogan621, Slon02, SmackEater, Smadge1, Snowmanradio, Snowolf, Socalaaron, Socrates2008, SocratesJedi, SolKarma, SolarisBigot, Sommers, Sophus Bie, South Bay, Sp, Spanglegluppet, Sparkle24, SpooK, SpuriousQ, Squash, Sridip, Staffwaterboy, Stealthmartin, Stephen Gilbert, Stephen Turner, Stephenb, Stephenchou0722, SteveSims, Stevenj, Stewartadcock, Stickee, Stormie, SudoGhost, Sun Creator, SunCountryGuy01, Sunay419, Super Mac Gamer, SuperLuigi31, Superswade, SusanLesch, Susheel verma, Sven Manguard, Sven nestle, Sweet blueberry pie, Synchronism, Syzygy, THEN WHO WAS PHONE?, THeReDragOn, Ta bu shi da yu, Tannin, TarkusAB, Tarmo Tanilsoo, Tarquin, Tasting boob, Tatrgel, Tdrtdr, Tdscanuck, TempestSA, Teply, TexasAndroid, Texture, Tgeairn, Tgnome, The Anome, The Random Editor, The Thing That Should Not Be, The undertow, The1DB, TheAMmollusc, TheNewPhobia, TheWorld, Thecomputist, Theda, Thedjatclubrock, TheguX, Themoose8, Theshibboleth, Thine Antique Pen, Thingg, Thorpe, Thumperward, Tide rolls, TigerShark, Timir Saxa, Titoxd, Tnxman307, Tobias Bergemann, Toddst1, Tokai, Tom Hek, Tom harrison, Tomcool, Tommy2010, TommyB7973, Tompsci, Tony1, Tothwolf, Touch Of Light, Tpbradbury, Tpk5010, Traroth, Travelbird, Trevj, Trevjs, Trimaine, Triona, Trisweb, Triwbe, TurboForce, Twas Now, Twistedkevin, Twitty666, Twsx, Tyler, Tyomitch, Typhoon, Tyrel, Ultimus, Umofomia, Unbreakable MJ, Uncle Dick, Unixguy, Unknown-xyz, Uogl, Upthegro, Ursu17, Urvashi.iyogi, Useight, User A1, Utahraptor ostrommaysi, Utilitytrack, VampWillow, Vanessaezekowitz, Vanished user 39948282, Vbigdeli, VegaDark, Verrai, Vicenarian, Vikrant manore, Vincenzo.romano, Viriditas, Vorosgy, Vox Humana 8', Vrenator, W163, WJetChao, Wapcaplet, Wareh, Warren, Warut, Wasted Sapience, Waterjuice, Wavelength, Wayward, Wbm1058, Wdfarmer, Wdflake, Weedwhacker128, WellHowdyDoo, White Shadows, Who.was.phone, Widefox, WikHead, Wiki Wikardo, Wiki alf, WikiDan61, WikiPuppies, WikiTome, Wikievil666, Wikiloop, Wikipelli, WilyD, Winchelsea, Wingnutamj, Winhunter, Winston365, Wisconsinsurfer, Wk muriithi, Wknight94, Wluka, Woohookitty, World-os.com, WorldBrains, Wormsgoat, Wtmitchell, Wtshymanski, Wwagner, X42bn6, Xdenizen, Yaronf, Yellowdesk, Yes-minister, Yidisheryid, Yoink23, Youwillnevergetthis, Yunshui, Yworo, Zephyrus67, Zfr, Zidonuke, Zigger, Ziiike, Zlemming, Zondor, Zotel, Zx-man, Zzuuzz, , var Arnfjr Bjarmason, - , , , , 3247 anonymous edits OSI model Source: https://en.wikipedia.org/w/index.php?oldid=520474572 Contributors: 0612, 0x6D667061, 1337 JN, 1966batfan, 24.12.199.xxx, 28bytes, 336, 63.227.96.xxx, 7, 75th Trombone, 802geek, 90 Auto, @modi, A412, A930913, ABF, Abarry, Abune, Adamantios, Addshore, Adibob, Adityagaur 7, Adj08, Adoniscik, Adrianwn, Advancedtelcotv, Ageekgal, Ahoerstemeier, Aitias, Ajo Mama, Ajw901, Alansohn, Albanaco, Aldie, Ale jrb, AlistairMcMillan, Allens, Alphachimp, Alucard 16, Alvestrand, Amillar, Amitbhatia76, Amtanoli, Andjohn2000, Andre Engels, Andybryant, Angrysockhop, Animum, Anjola, AnkhMorpork, Anna Lincoln, Anon lynx, Anonymous anonymous, Another-anomaly, Apocryphite, Apparition11, Arroww, Artur Perwenis, Arunachalammanohar, Ashutosh.mcse, Aslambasha09, Asn1tlv, AtomicDragon, Atreyu42, Audunv, Avitesh, AxelBoldt, Ayengar, B4hand, BACbKA, BDerrly, Bakilas, Balajia82, Bariswheel, Bchiap, Bdamokos, Beelaj, BenLiyanage, Beno1000, Biblbroks, Bjelleklang, Bletch, Blueskies238, Bmylez, 
Bobo192, Bogdangiusca, Boikej, Bojer, BommelDing, Bonobosarenicer, Booyabazooka, Borgx, Brambleclawx, Brandon, Brick Thrower, Brougham96, Bryan Derksen, BuickCenturyDriver, Bzimage.it, Bcherwrmlein, CDima, CIreland, CMBJ, Caerwine, Caesura, Calmer Waters, Caltas, CambridgeBayWeather, Camw, Can You Prove That You're Human, Can't sleep, clown will eat me, CanadianLinuxUser, Candc4, Caper13, Carre, Casey Abell, Causa sui, Cburnett, Cbustapeck, Cflm001, Charles Edward, Charm, Che090572, Chester Markel, Chfalcao, Chimpex, Chirag, Chrislk02, Chupon, Cikicdragan, Citicat, Closedmouth, Cokoli, Cometstyles, Conquest ace, Conversion script, Coriron, Courcelles, Cputrdoc, CraSH, CraigBox, Crasheral, Crimsonmargarine, Cs mat3, Ctbolt, Cxxl, Cybercobra, CyborgTosser, Cyktsui, CynicalMe, DARTH SIDIOUS 2, DJPohly, DSParillo, Damian Yerrick, Daniel, Danlev, Dave2, Davetrainer, David Edgar, David Gerard, David0811, DavidBarak, DavidLevinson, Davidjk, Dcooper, Dcovell, Deagle AP, Delfeye, Delldot, DeltaQuad, Demitsu, Denisarona, DennyColt, Dgtsyb, Dicklyon, Dili, Dino.korah, Discospinster, Dispenser, Djib, Djmoa, DmitryKo, Dmohantyatgmail, Doniago, Dpark, DrDOS, DrSpice, Drat, Dreish, Drwarpmind, Duey111, Dumbledad, Dzubint, EJDyksen, EJSawyer, ENeville, EagleOne, Eazy007, Ed g2s, EdH, Edivorce, Edward, ElKevbo, Eldiablo, Eleassar, Elfosardo, Eliezerb, Elipongo, Emperorbma, EnOreg, Enjoi4586, Enochlau, Epbr123, Eric Soyke, Everyking, Evillawngnome, Ewlyahoocom, Excirial, FF2010, Falk.H.G.,

260

Article Sources and Contributors


Fang Aili, Feezo, Fiable.biz, Filemon, Finlay McWalter, Fjpanna, Fleg31789, Flewis, Flowanda, Fraggle81, FrankTobia, Fred Bradstadt, Fredrik, Free Bear, FreshPrinz, Fresheneesz, Friday, Friedo, Friginator, Fullstop, Fumitol, Fuzheado, Fvw, F, GDonato, GGShinobi, Gadfium, Gafex, GarethGilson, Gary King, Gasp01, Gazpacho, Geek2003, General Rommel, Ghostalker, Giftlite, Gilliam, GlassCobra, Glenn, Goodnightmush, Graeme Bartlett, Grafen, Graham.rellinger, GreYFoXGTi, Grendelkhan, Grubber, Gsl, Gurchzilla, Guy Harris, Gwernol, Gkhan, H2g2bob, H34d, HMGb, Haakon, Hadal, HamatoKameko, HarisM, Hatch68, Hcberkowitz, Hdante, Helix84, Hellomarius, Henrikholm, Herbee, Heron, Hes Nikke, Hetar, HexaChord, Hgerstung, Hiddekel, Highpriority, Honeyman, I dream of horses, IMSoP, IReceivedDeathThreats, Iambk, Iambossatghari, Ideoplex, Ifroggie, Ilario, Immunize, Inkhorn, Inkling, Insineratehymn, Intgr, Inversetime, InvisibleK, Inwind, Iridescent, Irnavash, IronGargoyle, Ishikawa Minoru, Island Monkey, Isofox, Isthisthingworking, Itpastorn, Itusg15q4user, Iviney, J.delanoy, JMatthews, JV Smithy, Jake Wartenberg, JamesBWatson, Jannetta, Jatinsinha, Jauerback, Jchristn, Jcw69, Jdrrmk, JeTataMe, Jeanjour, Jeff G., Jeffrey Mall, Jessemerriman, Jetekus, Jhilving, JidGom, Jim1138, Jimw338, Jjenkins5123, Jmorgan, Jnc, JoanneB, JodyB, Joebeone, John Hopley, John Vandenberg, John254, Johnblade, Johnleemk, Johnuniq, JonHarder, Jonathanwagner, Jonwatson, Joodas, Josef Sbl cz, Josh Parris, Jovianeye, Joy, Jpta, Jrodor, Jschnur, Jschoon4, Jsonheld, Jusdafax, Kaaveh Ahangar, Kallaspriit, Karelklic, Karpouzi, Kaszeta, Katalaveno, Kaz219, Kazrak, Kbrose, Kcordina, KerryVeenstra, Kesla, Kevin Rector, Kgrr, Khat17, Killiondude, Kim Rubin, Kingpin13, Kinu, Kirill Lokshin, Kkbairi, KnowledgeOfSelf, Kraftlos, Kramerino, Krampo, Krellis, Kronn8, Kuru, Kvng, Kyllys, LOL, LOTRrules, Lachlancooper, Lankiveil, Lawrence Cohen, Lazarus666, Leafyplant, Lear's Fool, Lectonar, Lee Carre, Lighthead, Lights, LittleOldMe, LittleWink, LizardJr8, Lockcole, Logictheo, Logthis, Lomn, Looxix, Lord Chamberlain, the Renowned, Lordeaswar, Lotje, Lulu of the Lotus-Eaters, Luna Santin, Lupin, Lynnallendaly, M, MBisanz, MER-C, MIT Trekkie, Maguscrowley, Mahanga, Mahesh Sarmalkar, Majorly, Mange01, Manishar us, Marek69, MarkSutton, MarkWahl, Markb, Markhurd, Markolinsky, MartinHarper, Martinkop, Marvin01, Materialscientist, Mattalyst, Matthew Yeager, Mattjgalloway, Mattmill30, Mbc362, Mboverload, McGinnis, Mcnuttj, Mdd, Meepster, MelbourneStar, Mendel, Mephistophelian, Merlion444, Metaclassing, Micahcowan, Michael Hardy, Michael miceli, Mike Rosoft, Mikel Ward, Mikeo, Mikeyh56, Milind m2255, Minimac, Mkweise, Mlewis000, Mmeerman, Mmernex, Mmmeg, Mobius R, Mohitjoshi999, Mohitsport, Mojalefa247, Monterey Bay, Morten, Moxfyre, Mr Elmo, Mr Stephen, Mr.ghlban, MrOllie, Mrankur, MrsValdry, Mtd2006, MuZemike, Mudasir011, Mulad, Mwtoews, Myanw, Myheadspinsincircles, N-Man, N5iln, Naishadh, Nanshu, Naohiro19, Naresh jangra, Nasa-verve, Natarajuab, Nate Silva, Nathanashleywild, NawlinWiki, Nbarth, Nbhatla, Neevan99, Nejko, Nemesis of Reason, Nethgirb, Netsnipe, Niaz, Nick, Nickshanks, Nicolas1981, Nisavid, Nitecruzr, Nivix, Nk, Nkansahrexford, Noahspurrier, Nolyann, Northamerica1000, Nsaa, Nubiatech, NuclearWarfare, Nux, OSUKid7, Octahedron80, Odie5533, Ogress, Oita2001, OlEnglish, Omicronpersei8, Orange Suede Sofa, Ore4444, Originalharry, Ott, Ottosmo, Ouishoebean, Oxymoron83, PGWG, Palltrast, Pamri, Panser Born, Paparodo, Parakalo, Pastore Italy, Patch1103, 
Patrikor, Patstuart, Paul August, PaulWIKIJeffery, Payal1234, Pb30, Peni, Penno, Pethr, Petrb, Phatom87, Phil Boswell, Philip Trueman, PhilipMW, Pluyo8989, Pmorkert, Pointillist, Postdlf, Postmortemjapan, Praggu, ProPuke, Pseudomonas, Psiphiorg, Public Menace, Puchiko, Puckly, PyreneesJIM, Pytom, RainbowOfLight, Raju5134, RandomAct, Ravikiran r, RazorICE, Rcannon100, Rebroad, Recognizance, RedWolf, Reedy, Rejax, Rettetast, Rfc1394, Rgilchrist, Rhobite, Rich Farmbrough, RichardVeryard, Richwales, Rick Sidwell, Rjgodoy, Rjstinyc, Rlaager, Rnbc, RobEby, Robert K S, RobertL30, RockMFR, Rohwigan03, Ronz, Roo314159, RoscoMck, RossPatterson, Roux, Roux-HG, RoyBoy, Rsiddharth, Runis57, Runtux, Rursus, Ryan au, Ryt, Ryulong, S, S3000, SMC, Saad ziyad, Saddy Dumpington, Safety Cap, Saintfiends, Sakurambo, Savh, SaxicolousOne, Scarian, Schumi555, Scientus, Scohoust, Scolobb, Scottonsocks, Seaphoto, Sesu Prime, Shadow1, Shadowjams, SharePointStacy, Shell Kinney, Shirik, Shoeofdeath, ShornAssociates, Shraddha deshmukh, Shrofami, Sietse Snel, Simonfl, Simple Bob, SineChristoNon, Sir Nicholas de Mimsy-Porpington, Skier Dude, Sliceofmiami, Slrobertson, Smalljim, Smokizzy, Snehasapte, SnowFire, Snowolf, Soosed, Sp33dyphil, SpaceFlight89, Speaker to Lampposts, SpeedyGonsales, Spitfire8520, SpuriousQ, Sridev, StaticGull, Stemonitis, Stephan Leeds, Stephen Gilbert, StephenFalken, Stevage, Steven Zhang, StuartBrady, Subfrowns, Sunilmalik1107, Suruena, Suyashparth, Swapcouch, Syntaxsystem, TAS, THEN WHO WAS PHONE?, Tagishsimon, Tangotango, Tarekradi, Taruntan, Tbsdy lives, Tcncv, Techtoucian, Tedickey, Tellyaddict, Tempodivalse, The Anome, The Athlon Duster, The Haunted Angel, The Thing That Should Not Be, Theopolisme, Therumakna, Thief12, Thingg, Think4amit, ThreeDee912, ThunderBird, Tide rolls, Tim Q. 
Wells, TinyTimZamboni, Tom harrison, TomPhil, Tommy2010, Tompsci, Tony1, Tooki, Tpbradbury, Tpvibes, Tranzz, Travelbird, Tree Biting Conspiracy, Trevor MacInnis, Triona, TripleF, Triwbe, Troy 07, Turb0chrg, Tyler.szabo, UU, Umair ahmed123, Uncle Dick, Unkownkid123, Vegaswikian, Venu62, Versus22, VidGa, Vishnava, Visor, Vk anantha, Vmguruprasath, Voidxor, WLU, Waggers, Warrierrakesh, Wayfarer, Weregerbil, Whitejay251, WikiDan61, Wikipelli, William Avery, Willking1979, Wilson.canadian, Wily duck, Wingman4l7, Winston Chuen-Shih Yang, Wire323, Wireless friend, Wishingtown, Wizardist, Wknight94, WoiKiCK, Woohookitty, Wrlee, Wrs1864, Wtmitchell, Wtshymanski, Yamamoto Ichiro, YamiKaitou, Yamike, Yms, YolanCh, Yuokool12, ZX81, ZachPruckowski, Zachary, Zoobee79, , uman, 3285 anonymous edits Virtual private network Source: https://en.wikipedia.org/w/index.php?oldid=519854380 Contributors: (, 1984, 2005, 2A01:E34:EEE1:48F0:E4D7:D2:ECE5:4166, 33rogers, ARTamb, Aaron north, Abune, Acole67, Adi4094, Aditya, Aeon17x, Agaffin, Alansohn, Aldie, Alexamies, AlexeyN, Allstarecho, Alphawave, Alvestrand, Americaninseoul, AndreasJS, Andrew Gray, Andrewpmk, Angelbo, Anirvan, Anon lynx, Anthony Appleyard, Apankrat, Apothecia, Armando, Art LaPella, Ashwin ambekar, Ausinha, Avaarga, Az1568, Azadk, Barek, Barek-public, Bbbone, Belmontian, Ben 9876, BenAveling, Bevo, Bewildebeast, BiT, Bigjust12345, BirdValiant, Bishopolis, Blacklogic, Blonkm, BlueJaeger, BlueNovember, Bmusician, Boardista, Bobo192, Borgx, Bovineone, Brainix, Brandon, Braviojk, Brwave, Bryan Derksen, Bswilson, CWenger, CYD, Can't sleep, clown will eat me, Carbuncle, Cfleisch, CharlotteWebb, Chenghui, Chris Mounce, Chris the speller, Chris400, Chrisbolt, Chrisch, Chupacabras, Cleared as filed, ClementSeveillac, Cligra, Closedmouth, Cometstyles, Corinna128, Cpartsenidis, Cr0w, Crazytales, CutOffTies, Cwolfsheep, DBigXray, DKEdwards, Danno uk, David H Braun (1964), David Martland, David Woodward, Davidoff, Dbrodbeck, Decltype, Deeahbz, Deice, Deli nk, Delldot, DerHexer, Dgtsyb, Diablo-D3, Discospinster, Djg2006, Dmktg, Dmol, Doctorfluffy, Dpotter, DrFausty, Drable, DreamGuy, Drugonot, Dugosz, E. 
Ripley, EagleOne, Edcolins, Edderso, Eenu, Efa, ElTopo, Eli77e, Elinruby, Emmatheartist, EncMstr, EoGuy, Epbr123, Escape Orbit, Eubene, Evansda, Everyking, Evice, Extraordinary, FAchi, FJPB, Falcon8765, Fancy steve, Fangfufu, Fieldday-sunday, Fijal, Fleminra, Flockmeal, Foggy Morning, Fosterbt, Foxb, Funnyfarmofdoom, Fuzheado, GL, GSK, Gardar Rurak, Gascreed, Gaurav.khatri, GenOrl, Gershwinrb, Gkstyle, Glane23, Glenn, Godsmoke, Gracefool, Gracenotes, GraemeL, Ground Zero, Gurdipsclick, Hadal, Haemo, Hal 2001, Harryzilber, Hcberkowitz, Hellisp, Heron, HisSpaceResearch, Humannetwork, Hyakugei, Ianmacm, Iceb, Ieopo, Informedbanker, Inkling, Intgr, Invenio, Ironman5247, Irulet, Isilanes, IvanStepaniuk, Izwalito, J'raxis, J.delanoy, JGXenite, JHunterJ, JNW, Ja 62, Jaan513, Jackcsk, Jackfork, Jadams76, Jairo lopez, Jandalhandler, Jasper Deng, Jazappi, Jcap1ln, Jdzarlino, Jeremiah.l.burns, Jerome Charles Potts, JidGom, Jim.henderson, Jim1138, Jino123, Jlavepoze, Jleedev, Jmccormac, Jmundo, JoeSmack, John Vandenberg, John254, Johnuniq, Jojalozzo, JonHarder, Jonomacdrones, Joshk, Joy, Jrapo, Jrgetsin, Juliancolton, K-secure, Kaaveh Ahangar, Karlzt, Kateshortforbob, Katkay, Katkay1, Kbrose, Kevinzhouyan, Khag7, Kielvon, Kikbguy, Kimchi.sg, Kku, KnowledgeOfSelf, Kurt Jansson, Kuru, Kvng, L Kensington, LOL, Leafyplant, LeaveSleaves, Les boys, LetMeLookItUp, Lightmouse, LindArlaud, LittleBenW, Lkstrand, Lmstearn, Lucaweb, Ludovic.ferre, Luna Santin, M. B., Jr., MCB, MER-C, MFNickster, Ma8thew, Majorly, Manop, MarcoTolo, Mashouri, Matt Crypto, MattTM, MattieTK, Maxgrin, Me.rs, MeToo, Mercury543210, Mercy11, Mfalaura, Mhking, Michaelas10, Mike Rosoft, Mindmatrix, Ministry of Truth, Minna Sora no Shita, Mjs1991, Mkidson, Mmernex, Mohsen Basirat, Monkeyman, Movingonup, Mr.Clown, MrOllie, Mu Mind, Mxn, Naba san, Nacnud22032, Nardixsempre, Natalie Erin, Nealmcb, Negrulio, Neoalian, Netmotion1234, Niffweed17, Nkansahrexford, Nklatt, Noah Salzman, Novastorm, Nqtrung, Ntsimp, Nubiatech, Nuno Tavares, Nurg, Nuttycoconut, Octahedron80, Ohconfucius, Oli Filth, Omicronpersei8, Optimist on the run, Ottawa4ever, OverlordQ, Pascalv, Paulehoffman, Pauli133, Pdcook, Pearle, Peteinterpol, Peter M Dodge, Phatom87, Philomathoholic, Phr, Pinchomic, Plat'Home, Plyd, Pmcm, Pnm, Pokrajac, PositiveNetworks, Prakash Nadkarni, Prari, ProfPolySci45, PuerExMachina, Quarl, R'n'B, R. S. 
Shaw, RFightmaster, RHaworth, Raanoo, Rafigordon, RainbowOfLight, Raprap321, Ray Dassen, RayAYang, Razorflame, Rearden9, RedHillian, Redlazer, Rees11, Regancy42, Reliablehosting, Res2216firestar, Rgore, Rhobite, Rich45, Rjwilmsi, Rninneman, Robert Brockway, Rocketron5, Rosothefox, SPat, START-newsgroup, Saimhe, Sajjad36122061, Scarpy, Schecky4, Scott.somohano, SecurityManager, Selah28, Sepersann, Sgarson, Shadowjams, Shandon, ShelleyAdams, Shierro, Shijiree88, ShorelineWA, Sijokjose, SilentAshes, Sintesia, Skarebo, Skier Dude, Smallman12q, SmartGuy Old, Smartchain, Smithkkj, Snaxe920, Snowolf, SpaceFlight89, SqueakBox, Stephenb, Student geek, Sujathasubhash, Sunny2who, Superpixelpro, Swartik, Sydbarrett74, Szquirrel, THEN WHO WAS PHONE?, Tahren B, Talinus, TastyPoutine, Tech editor007, TechyOne, TehPhil, Teknetz, Thatguyflint, Thaurisil, The Anome, The Thing That Should Not Be, TheBilly, TheNeutroniumAlchemist, ThePromenader, Therefore, Tide rolls, Timurx, Tlroche, Tobias Bergemann, Tom Foley, Tomlee1968, Tommy2010, TonyUK, Torqueing, Trailbum, Tryggvia, Tslocum, Tuxa, Tuxcrafter, TwoTwoHello, Unixer, Utcursch, Vanderdecken, Vanisheduser12345, Veinor, Vicarious, Vickey2020, Visiting1, Vjardin, Vladkornea, W.F.Galway, WEJohnston, Wackywace, WakingLili, WarrenA, Wavelength, WebHamster, Webster21, Webwat, Whaa?, Wik, Wiki 101, Wikievil666, Wikingtubby, Williameboley, Wimt, Winchelsea, Winterst, Wknight94, Wodkreso, Woohookitty, Ww, Xpclient, YUL89YYZ, Yama, Yamamoto Ichiro, YordanGeorgiev, Youssefsan, Ywalker79, ZeroOne, Zeroshell, Ziabhat, Zzuuzz, , 1315 anonymous edits Semantic Web Source: https://en.wikipedia.org/w/index.php?oldid=520077893 Contributors: 4twenty42o, 65.2.226.xxx, 9.112, AThing, Aabs, Abmac, Abstraction&logic, Acarvin, Acdx, AdamAtlas, Afreet, After Midnight, Ahunt, Aillema, Aitias, Ajchen, Akindofmagick, Aleman, Alisonjones1, AlistairMcMillan, Alnokta, Alvations, Amgine, An Fior Eireannach, Anajemstaht, Anarchitect, Ancheta Wis, Andy Dingley, AnmaFinotera, Ansell, Aomarks, Aproche, ArnoldReinhold, Artawiki, ArtistScientist, Artw, Ash.banerjee@gmail.com, Astral highway, Aswarp, Averell23, Aviados, Axistive, BBurgDave, Backslash Forwardslash, Baojie, Barts1a, Bdesham, Beetstra, Beland, BenAveling, Bender235, BenjaminZClifford, BetweenMyths, Bigdaddy1978, Bigpinkthing, BioPupil, Bjankuloski06en, Blathnaid, Bleakgadfly, Boba1213, BokicaK, Bonegang, Bonzodoggy, Boredzo, Borgx, Bovlb, Bunnyhop11, Burgher, Burschik, C Chiara, Calabraxthis, Canjimifan, Captain-n00dle, Carnes csc6991, Chrisf1963, ClockworkSoul, Cobaltcigs, Conversion script, Courcelles, Cp111, Cquan, Cretog8, Curb Chain, Cycurious, Cygri, Cyrusc, DBooth, DMacks, Damian Yerrick, DanBri, Danarbaugh, Daniel Brockman, DanielDemaret, Danim, Danja, Davemck, DavidLeeLambert, DavidLevinson, Daybreak, Dchooge, Ddwebguru, Deathtrap3000, Deeptext, Demonkoryu, Denny, DesertSteve, Devkmem, Dggoldst, Dhammapal, Dieblacken, Disavian, Diza, Dlrohrer2003, Dmccreary, Docben, Dolda2000, Dors, Dothebart, Dragentsheets, Dragon Dave, Dreftymac, Drewyates, Drkarger, DuncanCragg, E-at-windmills, Eaefremov, Ecaepekam, EddyVanderlinden, Edward, El C, ElBenevolente, Emperor, Eng.amira, Eperotao, Erianna, Erick.Antezana, Eshleyy, EvenT, Evmako, Excirial, Fastjoo, Favonian, FeralOink, Fermiparadox, Ferris37, Fijimus, Finin, Finnneuro, Floridi, Fosod, Fparreiras, Fragglet, Fram, Francesco sclano, FrankTobia, Frap, Fred Gandt, Fuhghettaboutit, Fvilla, GEBStgo, Gaia Octavia Agrippa, Galoubet, Gary King, Gauramma, Geldsack, Gendojohn, Gherson2, 
Giraffedata, Gixbrown, Gogo Dodo, Gr8tk8t, Graham87, Gregman2, GregorB, Gregory j, Gui, Gzcsy3, Gzmask, Haakon, Haitrieu0828, Hansfeuss, Harnad, Harvey the rabbit, Hdzimmermann, Headbomb, Hede2000, Heirpixel, Hex, Hiflyer, Hirzel, Hpatel44, Hthth, Hu12, Hulagutten, Hydrargyrum, Hymek, Icedog, Identityandconsulting, Idio ltd, Iggykin, Igoldsmid, Igoropopov, Intgr, Iolasov, Isiaunia, It Is Me Here, It writer, Itai, Iterator12n, J04n, JForget, JMizzi, JaGa, Jake Ben Delbek, Jammus, Jarble, Jatkins, Jdatsoton, Jeff3000, Jellypuzzle, JimR, Jnothman, Jo Lorib, Jodi.a.schneider, Joe Dunford, Joe Jarvis, Johannesvillavelius, John Broughton, JohnPritchard, Johnleemk, Johnuniq, Jordgette, Joseph Solis in Australia, JoshKirschner, Jsharpminor, Jw7-soton, JzG, KYPark, Kadambarid, Karada, Karima Rafes, Kbdank71, Keith Edkins, Kennyluck, Kensall, Kesla, KevinLocker, Khalid hassani, Kibqaai, Kingboyk, KingsleyIdehen, Kiranoush, Kishore12, Kitsunegami, Kjetil, Kku, Klatif, Kmcinnes, Knutux, Korny O'Near, Kozuch, Kraykray, Krlis1337, Kuda, Kunal.nes, Kwinkunks, LNRyan, Lalvers, Lam Kin Keung, Langec, Laval, Lemming, Lifeboatpres, Lightmouse, Ligulem, Lisbk, Liujiang, Llacy, Loren.wilton, Lou Quillio, Lousyd, Lpgeffen, Lumos3, Lysdexia, M3wiki1, Maarten Hermans, MacTed, Mandarax, Mani1, Maoj-wsu-ad, Marios1991cy, Mark Renier, Markus Krtzsch, Matt.smart, Matthew Yeager, Matticus78, Mauro Bieg, Mav, Mayflower3, Mdd, Metajohng, Michael A. White, Michael Hardy, Michelle Roberts, Mike Schwartz, MikeWren, Minghong, Miq, Mironearth, Mjb, Mjthomas43, Mkbergman, Mkouklis, Mmortal03, Modify, Monedula, Mourakshit, Mrevan, N2e, NeilN, Neilc, NeoPhyteRep, Neutrality, Neverclear, Nicolas1981, Nigelj, Nikosbik, Niteowlneils, Nloth, Noii, Nomanislam82, NumberNumber5342, Numskll, Nurg, Obankston, Oci-One Kanubi, Odoncaoa, Ohka-, Oort, OpenToppedBus, OrgasGirl, Owen214, Ozonfrei, Parkywiki, Pashi, Pasquale.popolizio, Pdiperna, Penagate, PerfectStorm, Perry R. Peterson, Peter Campbell, Pfaff9, Pharaoh of the

261

Article Sources and Contributors


Wizards, Philip Trueman, Pictureuploader, Pigsonthewing, Pkchan, Plaga701, Playmobilonhishorse, Pldms, Pmc, Poesys, Powdahound, Prakash Nadkarni, Prasanthsaig, ProVega, Project mosaic, Puckly, Pdraig Coogan, Quiddity, Qwertyus, RJASE1, Rabideau, Rangoon11, Ravedave, Raysonho, Raznice, Rbonvall, RedRollerskate, RedWolf, Researchadvocate, Rhooker1236, Rich Farmbrough, RichardF, Ripe, Rjd0060, Rjwilmsi, Robert Buzink, Robin klein, Robykiwi, RockRockOn, Ronz, Rznc, SEWilco, Sa'y, Sagaciousuk, Sagar.webdesigner, Salahx, Salvadors, Sarnholm, Scarlet Lioness, Schandi, Schmiddy, SchuminWeb, Sdorrance, Searchmaven, Sebastianblakehoward, Sebastjanmm, Shar deo, ShaunMacPherson, Shepard, Shermanmonroe, Shiulihuang, Shivkumarganesh, ShlomoS, Sihnen, Sjforman, Skschmitt, Skysmith, Skywriter, Snackysrikanth, Snigbrook, Soeren1611, Solphusion, Somegeek, Sonicsuns, Spalding, Sqrt66, StWeasel, SteeleJ, Steve.hassan, Steveprutz, Stoni, Storagewonk, Suruena, TMLutas, Ta bu shi da yu, Tarcieri, Tarquin, Tcncv, Techbiz, TechnoFaye, Technologist9, TeemuN, That Guy, From That Show!, Thaurisil, The Anome, The wub, TheParanoidOne, ThePromenader, TheRingess, Timbl, Timohonkela, Tobias Hoevekamp, Tom Joad 2k, Tom Morris, Tommy2010, Toniher, Toussaint, Towel401, Traceymarr, Tredontho, Ttgeorgescu, Turnstep, TwoOneTwo, UkPaolo, Ultimatewisdom, Universimmedia, Vacapuer, Van der Hoorn, Vastag, Vuongvina, WSU-AW-JG, Wadroit, Wahuskies96, Walkerhamilton, Wangi, Watchsmart, Waveguy, Wavelength, Wbeek, Wdchk, Wesley, Whatfor999, Whiskey in the Jar, Widefox, Wiki12345678, Wikidemon, Wikitonic, William Graham, Wireless friend, Wizard2010, Woohookitty, Wsu-aw-paul, Wsu-dm-jb, Wwwwolf, X-Fi6, XMLer, Xhosan, Yaron K., Yingbom, Zacchiro, ZachPD, Zeno Gantner, Zippy, 748 anonymous edits COBIT Source: https://en.wikipedia.org/w/index.php?oldid=520059443 Contributors: Adrius42, Aeonx, Alan.rezende, Alanpc1, Alcmaeonid, Alejo2083, AlephGamma, AllanBz, Ash, Awillcox, Azimout, Binarygal, Camitommy, Canton Viaduct, Chowbok, Chrisco2005, CiudadanoGlobal, Cpaidhrin, Crazie 88, DanGalligan, David.T.Bath, David.t.bath, DiePerfekteWelle, Dogbertwp, Duke Ganote, Duncanrshannon, Dwandelt, Eccentric67, Face, Fedkad, Ferkelparade, FireballDWF2, Folajimi, Glen, Greyskinnedboy, Groovecoder, Hassan210360, Id027411, IvanLanin, Jamelan, Jef-Infojef, Jimmyheschl, Jordav, Kcordina, Kitdaddio, Kjohnrussell, Kku, Kpk.in, Kuru, Lachnej, LanceBarber, LeaveSleaves, LiDaobing, Lowzeewee, M.e, Manscher, MarkWahl, Mauls, MissionInn.Jim, Mitnick, Oliver Lineham, PSzczepanski, Paradox11, Pastore Italy, PaulHanson, Peterhgregory, Ps07swt, REDMBACISA, RHaworth, Reaper Eternal, Refalm, RichardVeryard, Robertbowerman, Ron Richard, Royce, SK ISACA, ST47, Silvrous, Smitbruc, Snori, Spartyguy32, Sspecter, StaticGull, Stefano.Ferroni, SueHay, Sumantra sarkar, Super-Magician, TScabbard, Ta bu shi da yu, The Thing That Should Not Be, Timotheos, Vaceituno, Vashtihorvat, Xelgen, Xezbeth, Zaphod119, Zumbo, 294 anonymous edits Information Technology Infrastructure Library Source: https://en.wikipedia.org/w/index.php?oldid=520416339 Contributors: 2A00:F480:4:2A1:C401:324C:34A:3A14, A. 
B., A3RO, AMe, Aberdeenwaters, Acm, Acpt22, ActiveSelective, Adrian.benko, Aerotheque, Aitias, Akbradford, Alanpc1, AlephGamma, Alexcuervo, Alimozcan, Allen4names, Andrea kempter, Andrzejkrajewski, Ankur onlyur, Anna Frodesiak, Antidoter, Apcotton, Apparition11, Aranel, AreJay, Ash, Atbat82, Aussieaubs, Avr, B Fizz, Barinder hothi, Baseball Bugs, Beland, BenAveling, Bhny, BibTheLion, Bibikoff, Billinghurst, Binarygal, Black Kite, Blehfu, Blroques, Bmusician, Bobrayner, Boekelj, Bradyn12, Brandguru, Brandon, Brianj hill, Brookie, Broset, Bunnyhop11, Burrowsm, Butrain, CALR, CPrehn, Cain Mosni, Can't sleep, clown will eat me, Canderson7, Captain panda, Cblanquer, Ccordray, Cgroberts, Charles T. Betz, Chowbok, Chris.fischer, ChrisCork, ChrisG, Chzz, Cinfosys01, Cjdavis, Cjsawaia, Cmdrjameson, Coherers, Cometstyles, Credible58, Cuttysc, Damon, Dancter, Danialshazly, DanielPenfield, DanielVonEhren, Danielgwj, Darinkeir, Darth Panda, DaveWFarthing, Davebremer, Daveeburke, David Biddulph, David.T.Bath, Davidbspalding, Dennisc68, Dia^, Discospinster, Djharrity, Djwaustin-wiki, Dnblack, Dnicolaid, Docu, Doug Alford, Dpv, DragonHawk, DragonflySixtyseven, Dritil, Eahiv, Edholden, Edoe, Ehheh, Emba7 EilertE, Emesis, Epbr123, Ericbakkum, Estherschindler, Etbe, Eugene-elgato, Evilmn, Excirial, Firsfron, Fortdj33, Foxj, Frank, Freek Verkerk, Fstop22, Fudoreaper, Fustbariclation, Gaius Cornelius, Gcanyon, GerardM, Ghewgill, Ghw777, Goonies, Guigui NYC, Haakon, Halfgoku, Hengeloov, Hennek, Herbys, Hkroger, Hu12, Hulmem, IBM Press, IDefx, IET-Solutions, IIVQ, ITServiceGuy, Iness it, Itbeme, Itildude, Itsgeneb, IvanLanin, J.delanoy, J04n, JT72, Jake Wartenberg, Jammus, Jandalhandler, Jasenlee, JbRIV, Jclemens, Jeffmconnolly, Jlmata, JoeSmack, Johcas 108, JonHarder, Jonik, Jovianeye, Jowanner, Jspurr01, KKuczko, Kaihsu, Kartisan, Keebrook, Keilana, Kernel.package, Kevnosisyx, Kf4bdy, Kinu, Kitdaddio, Krackpot, Krusch, Kubieziel, Kuru, Kycook, KyuuA4, LarryDragich, Laug, Lehmannjl, Leirith, Letuo, Lluinenb, MC MasterChef, MER-C, MMSequeira, Madgerly, Madman37, Magnus Manske, Majid iqbal, Malleus Fatuorum, Malo, Mandarkatdare, Marcelo Pinto, Marcusmccoy, Markhoney, Martey, Martian, Materialscientist, Matthewedwards, Mauls, Maximaximax, Mboverload, Mcsboulder, Mdd, Mellery, Metagraph, Mgillett, Michael Hardy, Michael J. 
Cunningham, Michig, Michigan user, MikeDogma, MikeEagling, Moeron, Morel, MrOllie, Mrehere, Mserge, Mudgen, Myszliak, Mzrahman, Najeh, Nasnema, NeilN, Niceguyedc, NickBush24, Nicoatridge, Niteowlneils, Nk, NoticeBored, Nslonim, Nuno Tavares, Nuujinn, Ocaasi, Oleh77, Olivier Debre, Omicronpersei8, OnePt618, OsvaldoCarvalho, Otto ter Haar, PRRfan, Panlm, Pansearch, Pascal.kotte, Patchworker, Paulbruton, Paulbuchanan, Paulseatonsmith, Peterl, Pg133, Phil websurfer@yahoo.com, Philip ea, Piano non troppo, Pion, Pocopocopocopoco, Pparazorback, Pukerua, RHaworth, Rachelswiki1, RainbowOfLight, Ralphbk, Raspolsky, Ravizone2000, Raysonho, Rchandra, Rehan20784, Rich Farmbrough, Rnsimmons, Robocoder, Rockfang, Rojomoke, Ron Richard, Ronz, Rpeasley, Rugops, Runefrost, Sam Hocevar, SamG1978, SamTheButcher, Sandy ts2k, SaulPerdomo, Sbrumpton, Scott McNay, ScottWAmbler, Sdr, Sharpner, Shawnse, Smulvih2, Snori, Soifranc, Some jerk on the Internet, Somnoliento, Sorrentinolui, Srandaman, Sspecter, Stakfry526, Stephanobianco, Stephenbarboza, Stevegregory79, StewartNetAddict, StoneIsle, StuartR, SunSw0rd, Sunray, Svetlev, TDavisBMC, TGreenfield, Ta bu shi da yu, Tabletop, Takamaxa, Tandric, Tarun.Varshney, Tassedethe, Tbsdy lives, Tcwilliams, Technobadger, The Anome, The Letter J, The Thing That Should Not Be, Thecheesykid, Thingg, Thumperward, Ticaro, Tiptoety, Tjarrett, Tlaresch, Tobryant, Tony1, Tpbradbury, TutterMouse, Twirligig, Twostardav, U11720, Uncle G, Vanbregt, Vashtihorvat, Veinor, VerdigrisP, Versageek, Vince.meulle, VinitAdhopia, Vipinhari, Vlad, WECullen, Waggers, Watroba, Waturuochav, WebCoder11, WeeWillieWiki, West81, Wik, Wiki3 1415, WikiNickEN, Winterst, Woohookitty, Wren337, WxGopher, Xsmith, Zachlipton, Zsh, , 1467 anonymous edits Project management Source: https://en.wikipedia.org/w/index.php?oldid=519753890 Contributors: 05proedl, 152.98.195.xxx, 1959frenchy, 4RugbyRd, 62.158.194.xxx, 9Nak, A.harris0708, AGK, ALargeElk, Aaronbrick, AbsolutDan, Achalmeena, Acheah, Aeon1006, Aidanmharrington, Aitias, Akbradford, Ale jrb, Alessandro57, Alisha0512, Allstarecho, Alphamu57, Alsuara, Altrock78, Anakin101, Ancheta Wis, AndrewStellman, AndyBrandt, AngelOfSadness, Anitanandi, Ankit Maity, Anodynomine, Antillarum, Ap, Apparition11, Aranel, ArmadilloFromHell, Arsenikk, Artemis Fowl Rules, Asannuti, Asoucek, AstareGod, Atena.kouchaki, Atif673, Auntof6, Austinm, AxelBoldt, BSJWright, Bananaman68, Barek, BartaS, Bdouthwaite, Beano, Beetstra, Belovedfreak, Bendoly, Benfellows, Bento00, Bernd in Japan, Bertha32, Billaronson, Binafhmz, Blanchardb, Blathnaid, Bmartel, Bmicomp, Bnorrie, Bob Bolin, Bobo192, Bonadea, Boxplot, Brentwills, Brion.finlay, Buissy, Burner0718, Butrain, CALR, CFMWiki1, CPMTutor, Calvadosser, Calvin 1998, Camw, CarlGuass, Ccorpusa, Cerrol, Chadloder, ChemGardener, Chiefwhite, Chris Roy, ChrisG, Chrispreece2007, Christiebiehl, Christopherlin, Christyoc, Chuq, Clad2020, Claidheamohmor, Clf99, Closedmouth, Cloud10pm, Cmaley, Colabtech31, Colin Marquardt, Cometstyles, CommonsDelinker, ConstructionSoftwareExperts, Conversion script, Craigwb, Creacon, Crzer07, Cst17, Ct31, Cybercobra, DARTH SIDIOUS 2, DBlomgren, DVD R W, DVdm, Dan Polansky, DanielDeibler, Danielhegglin, Dansedmonson, David VS West, David.alex.lamb, Dawnseeker2000, Dbfirs, DeadEyeArrow, Deimos814, Deli nk, Delirium, DeltaOperator, Dendlai, Dennal68, Dennis.wittekind, DenzilS, Derek Ross, Deville, Dghutchinson, Dgmoran, Dgw, Dickietr, DisneyG, DominikusH, Donreed, Doroli, DougsTech, Dougweller, Dr PDG, DrDooBig, Drshields, 
Dtarver, Dycedarg, ESkog, Earthandfire, Ebe123, EdBever, Edward, Eeekster, Ehheh, Elena1234, Elvismcgrady, Englishman in Provence, Epbr123, Eric Pement, Erkan Yilmaz, EronMain, Escape Orbit, Eshirazi, Exir Kamalabadi, FMMonty, Fabricationary, FactsAndFigures, Faithlessthewonderboy, Falcon9x5, FalconZero, Fang Aili, Favonian, Firien, Fongamanda, Forestsmith, Fpolack, Frankfshsu, Fred Bradstadt, Freeformer, Freepmstudy, Freeskies, Frontelo, Frostbitcanadian, Fullstop, Funatic, Fxsunny, F, GAPPS, GESICC, Garrybooker, GeoffWilson, Geoffsauer, GerK, Gerritklaschke, Gfani, Ghaag, Giftlite, Globalprofessor, Goethean, Gop 62, Graeme Bartlett, GraemeL, Graham87, Graibeard, Granite07, Greyskinnedboy, Gruffi, Gsaup, Gshills, Gunnala, Gurch, Guy Van Hooveld, Gwernol, Hadal, Haikon, HamburgerRadio, HappyCamper, Hazmat2, Hech xwijnerj, Herbythyme, Himdey njitwill, Hirzel, Hongooi, Howardjp, Hroulf, Hu12, Hubbardaie, Hubertus, Hudec, Hux, ICSGlobal, ITServiceGuy, Ian Pitchford, Ian.thomson, IjonTichyIjonTichy, Imroy, IngaRea, Inwind, Itgov, Ixfd64, J.delanoy, Jaberwocky6669, Jackaranga, Jamezbailey, Janbenes, Jbcarboni, Jburks97, Jcardinal, Jdtoellner, Jeff3000, Jeffmcneill, Jeltz, Jetojedno, Jgritz, Jiang, Jim1138, JimGleaves, Jkhcanoe, Jlao04, Jmciver, Jmi41, Jmlk17, Jn.arch, Jnankivel, John Richard Parker, John Vandenberg, John.j.smitherson, JohnManuel, Jojhutton, Jonpro, Jordiferrer, Josemoromelon, Jp361, Judy Payne, Julesd, Jurajv, Just plain Bill, KGun10, Kaisersoze1, Kanags, Kanojia, Karl-Henner, Kbh3rd, Kcone, Kelemendani, Kenmckinley, Kenstandfield, Ketiltrout, Kevin B12, Khalid, Khalid hassani, Khusroks, Kilmer-san, Kim Kris, KimBecker, Kingpin13, Kinu, Kltownsend, Kokcharov, Krappie, Ktlonergan, Kubigula, Kuru, Kwertii, L3aa-cademy, LFaraone, LeaveSleaves, Lecard, Leonardo Aloi, Leszek Jaczuk, Levana77, Levineps, Liao, LightAnkh, LilHelpa, Linkspamremover, LizardJr8, Lmarinho, Loflabr, Longdongniner, Loren.wilton, Lotje, Luk, Lumos3, Luna Santin, Lundholm, Lynbarn, M4gnum0n, MY2sense, Macoykolokoy, MagnaMopus, Mann jess, Manop, Maokart444, Mapador, Marco Krohn, Margeru, Mark Millard, Mark Renier, Mark.murphy, Markkh, Matt Deres, Maurreen, Mav, Mbrylant, Mdd, Media lib, Meitar, Melashri, Mephistophelian, Merovingian, Mgillett, Michael Hardy, MichaelDawin, Milkau111, Mimihitam, Mindmatrix, Minesweeper, Mini.here, Mkoval, Mlavannis, Mmpubs, Mneser, Monkey Bounce, Moonriddengirl, MorrisRob, Mpleahy, Mr.Z-man, MrKris, MrOllie, Mrt3366, Mudgen, Mugunth Kumar, Muminshawaf, Mummy34, Munazanjum, Mwanner, Mwfnwa, Mydogategodshat, Mywikiid99, NOKESS, Nandak89, Nankivel, NawlinWiki, Nazmanager, Ngoult, Nickg, Nicos interests, Nighthawkx15, Nikai, Ninadelis, Nishalegend, Niteowlneils, Nixdorf, Norm, OSUKid7, Oberiko, Oblomoff, Ocrakate, Octahedron80, Oicumayberight, Ojigiri, Oldschoolosama, OliverTwisted, Orange Suede Sofa, Overviewer, Owain.wilson, Padraig1888, Paltpappa, Paradoxic, Parent5446, PatrickWeaver, Paul W, Pavel Vozenilek, Pcremer2270, Pcremerfluno, Pdcook, Pepper, Peter Reusch, Peterbud, Pgauld, Pgreenfinch, PhilHibbs, PhilKnight, Phreed, Pigsonthewing, Pilgaard, Pinkadelica, Pixievamps, Pjganon, Plakhi24, Pm by day, Pm master, Pmtoolbox, Pmyteh, Poli08, Porchcorpter, PrestonH, ProgrammeUK, Project mosaic, Projectmagic, Protr, Psaico, Pstansbu, Pstout, Pukivruki, Pythia90, Qaiassist, Qarakesek, Qatestdev, Quadell, RAM, RJASE1, RJBurkhart, RJaguar3, RSedor, Radagast83, Radavi, RainbowOfLight, Rami R, RandyKaelber, Rangoon11, Raymundsy, Raywil, Rcannon100, Readysetpass, Reconsider the static, 
RedHillian, Redux, Reedy, Reliablesources, Renebach, Renesis, Rernst, Research2007, Rich Farmbrough, Rich257, Richard Allen, Richard Harvey, RichardF, Richardgush, Richi, Richman9, Rlolsen, Rmp80ind, Ron Richard, Ronhjones, Ronz, RoyHanney, Royallarry, Rrburke, Rrjanbiah, Rror, Rspanton, RuM, Rubysixty6, Rumblesnuf, Ruud Koot, Rwgreen1173, Rwil02, S.K., SE SME, SJP, Salliesatt, Sandymok, Sara050805, Sarah, Saros136, Sbugs, Scaevus, Scientizzle, Scjessey, Scmbwis, Sean Whitaker, Seanieboy1974, Seaphoto, Search4Lancer, Sebasanjuan, Securiger, Seraphim, Shadowjams, Shanes, Sharkface217, Shawn in Montreal, Shoeofdeath, Shoessss, Shokolada, Shoy, Sisalto, Skumar.rakesh, Sleepyhead81, Smartse, Smiker, Smpickens, Solipsist, Sonialoud, SorenAndersen, SpaceFlight89, Spalding, Spangineer, Spartikus411, Steevm, Stevenwmccrary58, SueHay, Sutanumartand, TVBZ28, Tarquin, TastyPoutine, Teammetz, Technopat, Tephlon, Tetraedycal, TetsuoTheRob, That Guy, From That Show!, The Led, The Letter J, The Thing That Should Not Be, The manekin, Thebluemanager, Theboymusic, Thebrownell, Theroadislong, Thingg, Thopper, Thrane, ThreePD, Tijuana Brass, Tmopkisn, Tobias Bergemann, Tobryant, Tohd8BohaithuGh1, Tommy2010, Tony1, Tosblt, Toytoy, Transity, Traroth, Trewinpa, Trial, Triz231, Trout001, Truthbro, Tslocum, Tswelch, Turnstep, Twestgard, Tzartzam, Uqjwhitt, Urbanette, Utcursch, VARies, Vaceituno, Vald, Valerie.sather@stage-gate.com, Van der Hoorn, Vanderzyden, Vanished user 39948282, Vans0100, Vcmohanonline, Versageek, Vgranucci, Vigo10, Vincehk, Vineetgandhi, Viokiori, Vrenator, WJBscribe, WKirschling, Wacko39, Weatherman90, Weregerbil, Weyes, Wgoetsch, Widefox, Wik, WikHead, Wikid77, Wikipelli, Wikke41, Wireless friend, Wissons, Woohookitty, Wrduncan3, Wwmarket, X201, Xavexgoem, Xholyrelicx, Xlynx, Yamamoto Ichiro, Yendor1958, Ykimva, Ylebihan, Yongliang08, Zigger, Zntrip, Zscout370, Zugerbueb, Zzuuzz, , 1565 anonymous edits

262

Article Sources and Contributors


System testing Source: https://en.wikipedia.org/w/index.php?oldid=510770195 Contributors: A bit iffy, Abdull, AliveFreeHappy, Aman sn17, Anant vyas2002, AndreChou, Argon233, Ash, Beland, Bex84, Bftsg, BiT, Bobo192, Ccompton, ChristianEdwardGruber, Closedmouth, DRogers, Downsize43, Freek Verkerk, GeorgeStepanek, Gilliam, Harveysburger, Hooperbloob, Ian Dalziel, Jewbacca, Kingpin13, Kubigula, Lauwerens, Manway, Michig, Morning277, Mpilaeten, Myhister, NickBush24, Philip Trueman, Pinecar, RCHenningsgard, RainbowOfLight, Ravialluru, Ronz, Solde, Ssweeting, Suffusion of Yellow, SusanLarson, Thv, Tmopkisn, Vishwas008, Vmahi9, Walter Grlitz, Wchkwok, Woohookitty, Zhenqinli, 152 anonymous edits Unit testing Source: https://en.wikipedia.org/w/index.php?oldid=519787926 Contributors: .digamma, Ahc, Ahoerstemeier, AliveFreeHappy, Allan McInnes, Allen Moore, Alumd, Anderbubble, Andreas Kaufmann, Andy Dingley, Angadn, Anorthup, Ardonik, Asavoia, Attilios, Autarch, Bakersg13, Bdijkstra, BenFrantzDale, Brian Geppert, CanisRufus, Canterbury Tail, Chris Pickett, ChristianEdwardGruber, ChuckEsterbrook, Ciaran.lyons, Clausen, Colonies Chris, Corvi, Craigwb, DRogers, DanMS, Denisarona, Derbeth, Dflam, Dillard421, Discospinster, Dmulter, Dougher, Earlypsychosis, Edaelon, Edward Z. Yang, Eewild, El T, Elilo, Evil saltine, Excirial, FlashSheridan, FrankTobia, Fredrik, Furrykef, GTBacchus, Garionh, Gggggdxn, Goswamivijay, Guille.hoardings, Haakon, Hanacy, Hari Surendran, Hayne, Hfastedge, Hooperbloob, Hsingh77, Hypersonic12, I dream of horses, Ibbn, Influent1, J.delanoy, JamesBWatson, Jjamison, Joeggi, Jogloran, Jonhanson, Jpalm 98, Kamots, KaragouniS, Karl Dickman, Kku, Konman72, Kuru, Leomcbride, Longhorn72, Looxix, Mark.summerfield, Martin Majlis, Martinig, MaxHund, MaxSem, Mcsee, Mheusser, Mhhanley, Michig, MickeyWiki, Miker@sundialservices.com, Mild Bill Hiccup, Mortense, Mr. 
Disguise, MrOllie, Mtomczak, Nat hillary, Nate Silva, Nbryant, Neilc, Nick Lewis CNH, Notinasnaid, Ohnoitsjamie, Ojan53, OmriSegal, Ottawa4ever, PGWG, Pablasso, Paling Alchemist, Pantosys, Paul August, Paulocheque, Pcb21, Pinecar, Pmerson, Radagast3, RainbowOfLight, Ravialluru, Ravindrat, RenniePet, Rich Farmbrough, Richardkmiller, Rjnienaber, Rjwilmsi, Rogerborg, Rookkey, RoyOsherove, Ryans.ryu, S.K., S3000, SAE1962, Saalam123, ScottyWZ, Shyam 48, SimonTrew, Sketch051, Skunkboy74, Sligocki, Smalljim, Solde, Sozin, Spamguy, Ssd, Sspiro, Stephenb, SteveLoughran, Stumps, Sujith.srao, Svick, Swtechwr, Sybersnake, TFriesen, Themillofkeytone, Thv, Timo Honkasalo, Tlroche, Tobias Bergemann, Toddst1, Tony Morris, Tyler Oderkirk, Unittester123, User77764, VMS Mosaic, Veghead, Verec, Vishnava, Vrenator, Walter Grlitz, Willem-Paul, Winhunter, Wmahan, Xanchester, Zed toocool, 512 anonymous edits Regression testing Source: https://en.wikipedia.org/w/index.php?oldid=520720866 Contributors: 7, Abdull, Abhinavvaid, Ahsan.nabi.khan, Alan ffm, AliveFreeHappy, Amire80, Andrew Eisenberg, Anorthup, Antonielly, Baccyak4H, Benefactor123, Boongoman, Brenda Kenyon, Cabalamat, Carlos.l.sanchez, Cdunn2001, Chris Pickett, DRogers, Dacian.epure, Deb, Dee Jay Randall, Designatevoid, Doug.hoffman, Eewild, Elsendero, Emj, Enti342, Estyler, Forlornturtle, G0gogcsc300, Gregbard, Hadal, Hector224, Henri662, Herve272, HongPong, Hooperbloob, Iiiren, Jacob grace, Jwoodger, Kamarou, Kesla, Kmincey, L Kensington, Labalius, LandruBek, Luckydrink1, MER-C, Marijn, Mariotto2009, Materialscientist, Matthew Stannard, Maxwellb, Menzogna, Michaelas10, Michig, MickeyWiki, Mike Rosoft, MikeLynch, Msillil, NameIsRon, Neilc, Neurolysis, Noq, Philipchiappini, Pinecar, Qatutor, Qfissler, Ravialluru, Robert Merkel, Rsavenkov, Ryans.ryu, S3000, Scoops, Snarius, Spock of Vulcan, SqueakBox, Srittau, Strait, Svick, Swtechwr, Throwaway85, Thv, Tobias Bergemann, Tobias Hoevekamp, Toon05, Urhixidur, Walter Grlitz, Will Beback Auto, Wlievens, Zhenqinli, Zvn, 202 anonymous edits Acceptance testing Source: https://en.wikipedia.org/w/index.php?oldid=513380032 Contributors: Ace of Spades, Alphajuliet, Amire80, Amitg47, Apparition11, Ascnder, Bournejc, Bouxetuv, Caesura, Caltas, CapitalR, Carse, Chris Pickett, Claudio figueiredo, CloudNine, Conversion script, DRogers, DVD R W, Dahcalan, Daniel.r.bell, Davidbatet, Dhollm, Divyadeepsharma, Djmckee1, Dlevy-telerik, Eco30, Eloquence, Emilybache, Enochlau, F, GTBacchus, GraemeL, Granburguesa, Gwernol, HadanMarv, Halovivek, Hooperbloob, Hu12, Hutcher, Hyad, Infrablue, Jamestochter, Jemtreadwell, Jgladding, JimJavascript, Jmarranz, Jpp, Kaitanen, Kekir, Ksnow, Liftoph, Lotje, MartinDK, MeijdenB, Meise, Melizg, Michael Hardy, Midnightcomm, Mifter, Mike Rosoft, Mjemmeson, Mortense, Mpilaeten, Muhandes, Myhister, Myroslav, Newbie59, Normxxx, Old Moonraker, Olson.sr, PKT, Pajz, Panzi, Pearle, PeterBrooks, Phamti, Pine, Pinecar, Qem, RHaworth, RJFerret, Riki, Rlsheehan, Rodasmith, Salimchami, Shirulashem, Swpb, TheAMmollusc, Timmy12, Timo Honkasalo, Toddst1, Viridae, Walter Grlitz, Well-rested, Whaa?, William Avery, Winterst, Woohookitty, 168 anonymous edits Software testing Source: https://en.wikipedia.org/w/index.php?oldid=519881264 Contributors: 0612, 144.132.75.xxx, 152.98.195.xxx, 166.46.99.xxx, 192.193.196.xxx, 212.153.190.xxx, 28bytes, 2D, 2mcm, 62.163.16.xxx, A Man In Black, A R King, A.R., A5b, AGK, Abdull, AbsolutDan, Academic Challenger, Acather96, Ad88110, Adam Hauner, Addihockey10, Ag2402, Agopinath, 
Ahoerstemeier, Ahy1, Aitias, Akamad, Akhiladi007, AlMac, AlanUS, Alappuzhakaran, Albanaco, Albertnetymk, Aleek vivk, AlexiusHoratius, Alhenry2006, AliaksandrAA, AliveFreeHappy, Allan McInnes, Allens, Allstarecho, Alphius, Alvestrand, Amire80, Amty4all, Andonic, Andre Engels, Andreas Kaufmann, Andres, Andrew Gray, Andrewcmcardle, Andygreeny, Ankit Maity, Ankurj, Anna Frodesiak, Anna88banana, Annepetersen, Anon5791, Anonymous Dissident, Anonymous anonymous, Anonymous editor, Anorthup, Anthonares, Anwar saadat, Aphstein, Apparition11, Aravindan Shanmugasundaram, ArmadilloFromHell, Arno La Murette, Ash, Ashdurbat, Avoided, Barunbiswas, Bavinothkumar, Baxtersmalls, Bazzargh, Beland, Bentogoa, Betterusername, Bex84, Bigtwilkins, Bigwyrm, Bilbo1507, Bindu Laxminarayan, Bkil, Blair Bonnett, Blake8086, Bluerasberry, Bobdanny, Bobisthebest, Bobo192, Bonadea, Bornhj, Bovineone, Boxplot, Bpluss, Breno, Brequinda, Brion VIBBER, BruceRuxton, Brunodeschenes.qc, Bryan Derksen, Bsdlogical, Burakseren, Buxbaum666, Calton, Cangoroo11, CanisRufus, Canterbury Tail, Canterj, CardinalDan, Carlos.l.sanchez, CattleGirl, CemKaner, Certellus, Certes, Cgvak, Chairboy, Chaiths, Chamolinaresh, Chaser, Cheetal heyk, ChiLlBeserker, Chowbok, Chris Pickett, ChrisB, ChrisSteinbach, ChristianEdwardGruber, Chrzastek, Cjhawk22, Claygate, Closedmouth, Cometstyles, Conan, Contributor124, Conversion script, CopperMurdoch, Copyry, Corruptcopper, Cpl Syx, Cptchipjew, Craigwb, Cvcby, Cybercobra, CyborgTosser, DARTH SIDIOUS 2, DMacks, DRogers, DVdm, Dacoutts, DaisyMLL, Dakart, Dalric, Danhash, Danimal, Davewild, David.alex.lamb, Dazzla, Dbelhumeur02, Dcarrion, Declan Kavanagh, Dekanherald, DeltaQuad, Denisarona, Deogratias5, Der Falke, DerHexer, Derek farn, Dev1240, Dicklyon, Diego.pamio, Digitalfunda, Discospinster, Dnddnd80, Dougher, Downsize43, Dravecky, Drewster1829, Drivermadness, Drxim, DryCleanOnly, Dvansant, Dvyost, E2eamon, ELinguist, ESkog, Ea8f93wala, Ebde, Ed Poor, Edward Z. 
Yang, Electiontechnology, ElfriedeDustin, Ellenaz, EncMstr, Enumera, Enviroboy, Epim, Epolk, Eptin, Ericholmstrom, Erkan Yilmaz, ErkinBatu, Esoltas, Eumolpo, Excirial, Exert, Falcon8765, FalconL, Faught, Faye dimarco, Fayenatic london, Felix Wiemann, Filadifei, Flavioxavier, Forlornturtle, FrankCostanza, Fredrik, FreplySpang, Furrykef, G0gogcsc300, GABaker, Gail, Gar3t, Gary King, Gary Kirk, Gdavidp, Gdo01, GeoTe, Georgie Canadian, Geosak, Giggy, Gil mo, Gogo Dodo, Goldom, Gonchibolso12, Gorson78, GraemeL, Graham87, GregorB, Gsmgm, Guehene, Gurchzilla, GururajOaksys, Guybrush1979, Hadal, Halovivek, Halsteadk, HamburgerRadio, Harald Hansen, Havlatm, Haza-w, Hdt83, Headbomb, Helix84, Hemnath18, Henri662, Hghyux, Honey88foru, Hooperbloob, Hsingh77, Hu12, Hubschrauber729, Huge Bananas, Hutch1989r15, I dream of horses, IJA, IceManBrazil, Ignasiokambale, ImALion, Imroy, Incnis Mrsi, Indon, Infrogmation, Intray, Inwind, J.delanoy, JASpencer, JPFitzmaurice, Ja 62, JacobBramley, Jake Wartenberg, Jakew, Jarble, Jeff G., Jehochman, Jenny MacKinnon, JesseHogan, JimD, Jjamison, Jluedem, Jm266, Jmax-, Jmckey, Jobin RV, JoeSmack, John S Eden, Johndci, Johnny.cache, Johnuniq, JonJosephA, Joneskoo, JosephDonahue, Josheisenberg, Joshymit, Joyous!, Jsled, Jstastny, Jtowler, Juliancolton, JuneGloom07, Jwoodger, Kalkundri, KamikazeArchon, Kanenas, Kdakin, Keithklain, KellyHass, Kelstrup, Kevin, Kgf0, Khalid hassani, Kingpin13, Kingpomba, Kitdaddio, Kku, Klilidiplomus, KnowledgeOfSelf, Kompere, Konstable, Kothiwal, Krashlandon, Kuru, Lagrange613, LeaveSleaves, Lee Daniel Crocker, Leomcbride, Leszek Jaczuk, Leujohn, Listmeister, Little Mountain 5, Lomn, Losaltosboy, Lotje, Lowellian, Lradrama, Lumpish Scholar, M Johnson, MER-C, MPerel, Mabdul, Madhero88, Madvin, Mailtoramkumar, Manekari, ManojPhilipMathen, Mark Renier, Materialscientist, MattGiuca, Matthew Stannard, MaxHund, MaxSem, Mazi, Mblumber, Mburdis, Mdd, MelbourneStar, Mentifisto, Menzogna, MertyWiki, Metagraph, Mfactor, Mhaitham.shammaa, Michael B. 
Trausch, Michael Bernstein, MichaelBolton, Michecksz, Michig, Mike Doughney, MikeDogma, Miker@sundialservices.com, Mikethegreen, Millermk, Misza13, Mitch Ames, Miterdale, Mmgreiner, Moa3333, Mpilaeten, Mpradeep, Mr Minchin, MrJones, MrOllie, Mrh30, Msm, Mtoxcv, Munaz, Mxn, N8mills, NAHID, Nambika.marian, Nanobug, Neokamek, Netra Nahar, Newbie59, Nibblus, Nick Hickman, Nigholith, Nimowy, Nine smith, Nksp07, Noah Salzman, Noq, Notinasnaid, Nuno Tavares, OBloodyHell, Oashi, Ocee, Oddity-, Ohnoitsjamie, Oicumayberight, Okal Otieno, Oliver1234, Omicronpersei8, Orange Suede Sofa, Orphan Wiki, Ospalh, Otis80hobson, PL290, Paranomia, Pascal.Tesson, Pashute, Paudelp, Paul August, Paul.h, Pcb21, Peashy, Pepsi12, PhilHibbs, Philip Trueman, PhilipO, PhilippeAntras, Phoe6, Piano non troppo, Piast93, Pieleric, Pine, Pinecar, Pinethicket, Plainplow, Pmberry, Pointillist, Pomoxis, Poulpy, Pplolpp, Prari, Praveen.karri, Priya4212, Promoa1, Psychade, Puraniksameer, Puzzlefan123asdfas, Pysuresh, QTCaptain, Qaiassist, Qatutor, Qazwsxedcrfvtgbyhn, Qwyrxian, RA0808, RHaworth, Radagast83, Rahuljaitley82, Rajesh mathur, RameshaLB, Randhirreddy, Raspalchima, Ravialluru, Raynald, RedWolf, RekishiEJ, Remi0o, ReneS, Retired username, Rex black, Rgoodermote, Rhobite, Riagu, Rich Farmbrough, Richard Harvey, RitigalaJayasena, Rje, Rjwilmsi, Rlsheehan, Rmattson, Rmstein, Robbie098, Robert Merkel, Robinson weijman, Rockynook, Ronhjones, Ronwarshawsky, Ronz, Roscelese, Rowlye, Rp, Rror, Rschwieb, Ruptan, Rwwww, Ryoga Godai, S.K., SD5, SJP, SP-KP, SURIV, Sachipra, Sachxn, Sam Hocevar, Samansouri, Sankshah, Sapphic, Sardanaphalus, Sasquatch525, SatishKumarB, ScaledLizard, SchreyP, ScottSteiner, Scottri, Sega381, Selket, Senatum, Serge Toper, Sergeyl1984, Shadowcheets, Shahidna23, Shanes, Shepmaster, Shimeru, Shishirhegde, Shiv sangwan, Shoejar, Shubo mu, Shze, Silverbullet234, Sitush, Skalra7, Skyqa, Slowbro, Smack, Smurrayinchester, Snowolf, Softtest123, Softwareqa, Softwaretest1, Softwaretesting1001, Softwaretesting101, Softwrite, Solde, Somdeb Chakraborty, Someguy1221, Sooner Dave, SpaceFlight89, Spadoink, SpigotMap, Spitfire, Srikant.sharma, Srittau, Staceyeschneider, Stansult, StaticGull, Stephen Gilbert, Stephenb, Steveozone, Stickee, Storm Rider, Strmore, SunSw0rd, Superbeecat, SwirlBoy39, Swtechwr, Sxm20, Sylvainmarquis, T4tarzan, TCL India, Tagro82, Tdjones74021, Techsmith, Tedickey, Tejas81, Terrillja, Testersupdate, Testingexpert, Testingfan, Testinggeek, Testmaster2010, ThaddeusB, The Anome, The Thing That Should Not Be, The prophet wizard of the crayon cake, Thehelpfulone, TheyCallMeHeartbreaker, ThomasO1989, ThomasOwens, Thread-union, Thv, Tipeli, Tippers, Tmaufer, Tobias Bergemann, Toddst1, Tommy2010, Tonym88, Tprosser, Trusilver, Ttam, Tulkolahten, Tusharpandya, TutterMouse, Uktim63, Uncle G, Unforgettableid, Useight, Utcursch, Uzma Gamal, VMS Mosaic, Valenciano, Vaniac, Vasywriter, Venkatreddyc, Venu6132000, Verloren, VernoWhitney, Versageek, Vijay.ram.pm, Vijaythormothe, Vishwas008, Vsoid, W.D., W2qasource, Walter Grlitz, Wavelength, Wbm1058, Wifione, WikHead, Wiki alf, WikiWilliamP, Wikieditor06, Will Beback Auto, Willsmith, Winchelsea, Wlievens, Wombat77, Wwmbes, Yamamoto Ichiro, Yesyoubee, Yngupta, Yosri, Yuckfoo, ZenerV, Zephyrjs, ZhonghuaDragon2, ZooFari, Zurishaddai, 2225 anonymous edits Business process modeling Source: https://en.wikipedia.org/w/index.php?oldid=517709401 Contributors: AGK, Alancrean, Andriesvdwatt, Apapadop, Appraiser, Auntof6, Axeltroike, Bender235, Bernd in Japan, Bjmullan, 
Bluemask, Boly38, Borodinvadim, Bpmbooks, BrightonRock101, Canterbury Tail, Clovisleoncio, ComplyAnt, Corporate Minion, Cosmicnet, Creacon, DanMS, Danim, Davidbspalding, Deb, Dlwl, Dmccreary, Eigenwijze mustang, Erkan Yilmaz, FatalError, F, GrahamBignell, Greyskinnedboy, Hazmat2, Hughch, J04n, Jajis, Jamelan, JasonLax, JayMacArthur, Jeffdusting, JezWalters, JmRay, Joel Alcalay, Jomama2000, Jordanniall, Jzupan, Khaliddb, Krisgrotius, Kuru, MDE, MER-C, MME, Mark Renier, Marvinrulesmars, Maxsira, Mdd, Michael Hardy, Mickael.istria, Mild Bill Hiccup, Mjchonoles, MrOllie, Oicumayberight, Paulbschulte, Pm master, Pvanerk, Quuxplusone, Radagast83, RayGates, Razorbliss, Renesis, RichardVeryard, Rlendog, Robertbowerman, Scottywott, Shoeofdeath, Some standardized rigour, THB, Thatemooverthere, Theeverst87, Titouan13, Tom hay99, Turnstep, Twas Now, Warpsmith, Woohookitty, ZweiOhren, , , 117 anonymous edits Joint application design Source: https://en.wikipedia.org/w/index.php?oldid=517178431 Contributors: Allens, Antiuser, Bill.albing, Boson, Canterbury Tail, Ccmcpher, Christopher Forster, Crazyscott96, DMacks, Ellengott, Ettrig, Euku, Ghaag, Gnewf, Grush, IncognitoErgoSum, Jerryobject, Kjtobo, Landon1980, M.e, Mimrecard, Mondo one, Nekohakase, PL290, Pm master, Radagast83, Rklawton, Shivaramswamy, Sujt.nr, Telemmoshe, Uncle Dick, 102 anonymous edits

263

License

Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/
