
SYSTEM

A system is a set of inter-dependent components that form a whole entity and work together in pursuit of a common goal.

General Systems Theory (GST)
General Systems Theory states that the whole is greater than the sum of its independent elements. This means a system works better, more efficiently and achieves greater results when its elements work as one entity than when they work individually. This phenomenon is called synergy and is often expressed informally as 1 + 1 = 3.

System concepts
A system is a group of interrelated and interdependent elements forming a unified whole and working together to achieve a unified goal. Systems achieve their goals by accepting inputs, processing them and producing outputs. Such a system has three basic interacting elements or functions:

i. Input - This takes the form of money, raw materials, energy, decisions, information etc. Physical input to an information system may be in the form of computer media such as keyboard entries or manuscript documents.
ii. Process - This is what translates input into output. For example, in a production system this is what converts raw materials into finished products.
iii. Output - This is the result of processing.

Self-monitoring or self-regulating systems have two additional elements: feedback and control. Feedback is data about the performance of a system and can be used to control the system, e.g. data about sales performance is feedback to a sales manager. Feedback can be NEGATIVE (which discourages processing) or POSITIVE (which encourages processing). Control involves monitoring and evaluating feedback to determine whether a system is moving toward the achievement of its goal.

A typical feedback control system consists of the following components, each of which plays a significant role in the operation of the system:

Sensor - A means of measuring output from the system. It can be a human being or mechanised equipment.
Standard - A set of measurements or expected results used to benchmark the success or failure of the system, e.g. a quality or quantity value.
Comparator - A means of comparing the system output against the set standard, e.g. an employee can compare actual monthly output against expected monthly output.
Effector - A means of effecting change to either the input or the output. Examples would be an increase in the efficiency of the system, or an increase or reduction in the quantity or quality of input.

Other system elements
Environment - The area within which a system operates.
Boundary - The confines or limits of a system that mark its size.
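The sensor-standard-comparator-effector loop described above can be sketched in a few lines of code. This is a minimal illustration, not a real control implementation; the production figures and the adjustment rule are hypothetical.

```python
# Minimal sketch of a feedback control cycle: sensor measures output,
# comparator checks it against the standard, effector acts on the deviation.

def run_feedback_cycle(actual_output, standard, adjust):
    """One control cycle: measure, compare against the standard, effect change."""
    sensed = actual_output            # sensor: measure system output
    deviation = standard - sensed     # comparator: output vs. expected result
    if deviation != 0:
        adjust(deviation)             # effector: change the input or the process
    return deviation

# Example: a production system expected to make 100 units per day.
adjustments = []
deviation = run_feedback_cycle(
    actual_output=92,
    standard=100,
    adjust=lambda d: adjustments.append(d),  # e.g. increase raw-material input
)
print(deviation)      # 8 units below standard; control acts to close the gap
print(adjustments)    # [8]
```

In a real system the sensor and effector would be people or equipment rather than function calls, but the measure-compare-adjust structure is the same.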

Interface - The region within which a system exchanges material or resources with other systems or its environment. Other authors define it as the process of exchanging material or resources with other systems or the environment.

INFORMATION SYSTEM
Definition

An information system is a manual or computer system designed to collect data, process it and provide management with appropriate information to support decision-making, planning and control of the operations of an organisation.

What, then, is an information system from the manager's perspective? An information system is a set of people, procedures and resources that collects, transforms and disseminates information in an organization to support decision-making and control. Today managers rely on many types of information systems, including manual information systems (paper and pencil) and informal information systems (word-of-mouth). A computer-based information system is a system that uses hardware, software, telecommunications and other forms of information technology to transform data resources into a variety of information products.

Information system components
An information system has the following components:
a) People - users and information technology specialists.
b) Hardware / machines - the physical computer equipment and associated devices. Most modern systems are computer based.
c) Procedures - instructions for users of the application to record, enter or retrieve data; instructions for the preparation of input by data preparation personnel; all processing and information flow activities, computer programs and program specifications; and operating instructions for computer operations personnel.
d) Database - contains all the data utilized by the application software. It forms the foundation upon which the information system is built.
e) Control - the part of the information system that measures performance, provides feedback and adjusts mechanisms for cost-effective performance.

Information system resources
Information systems consist of four major resources: hardware, software, people and data.

Human / people resources
People are needed for the operation of any information system. These people are either end users or clients who use the system and its products, or information system specialists, e.g. analysts, programmers, data capture clerks etc.

Hardware / machine resources
These include all the physical devices and materials used in information processing. Examples include computers, networks and media (disks, tapes or paper).

Software resources
These include all types of information processing instructions, e.g. programs which direct and control computer hardware, and procedures which are instructions needed by people. Software can be divided into two categories:
a) System software - programs which control and support the operations of a computer system; the operating system is the backbone.
b) Application software - programs that direct processing for a particular use of computers by end users, e.g. sales analysis programs and accounting packages.

Data resources
Data is the raw material of information systems. It can take the form of alphanumeric data, text data or audio data. The data resources of information systems are organized into a database which holds processed and organised data.

TYPES OF INFORMATION SYSTEMS

The nature and application of a system is mostly determined by the following factors:
a) Decision level - operational, tactical or strategic
b) Use of information / purpose - planning, control etc.
c) Source and destination of the information - internal or external

1) Transaction Processing Systems (TPS)
These keep track of daily business events. Transaction processing systems record and process data resulting from business transactions. Examples are sales, accounts receivable and accounts payable systems. The results of TPS are used to update organisational databases, e.g. customer and inventory databases. These databases then provide the data resources that can be processed and used by information reporting systems (MIS), decision support systems and executive information systems.

2) Management Information Systems (MIS)
MIS provide managerial end users with information products that support much of their day-to-day decision-making needs. They provide a variety of reports and displays to management. The content of these reports is specified in advance by managers so that they contain the information they need. MIS retrieve information about internal operations from internal databases (updated by TPS) and also from external sources about the business's environment.
Types of reports produced:
- On-demand reports - produced whenever needed
- Scheduled reports - produced periodically
- Exception reports - produced whenever exceptional / out-of-line conditions occur

3) Decision Support Systems (DSS)
These are interactive, computer-based information systems that allow decision makers to interface directly with computers to create information useful in making semi-structured and ill-structured decisions. Decision support systems are computer systems that allow decision makers to create data models and "what-if" scenarios to simplify decision-making. DSS make use of decision models and specialized databases, and extract data maintained by TPS.

DSS differ from MIS, which focus on structured types of decisions. DSS provide managerial end users with information in an interactive session on an ad-hoc (as-needed) basis. They provide managers with analytical modelling, data retrieval and information presentation.

DSS analytical modelling alternatives
Using a DSS involves four basic types of analytical modelling activity:
1) What-if analysis
2) Sensitivity analysis
3) Goal-seeking analysis
4) Optimisation analysis

a) What-if analysis - Observing how changes to selected variables affect other variables. An end user makes changes to variables, or to the relationships among variables, and observes the resulting changes in the values of other variables, e.g. what if we cut advertising by 10% - what would happen to sales?
b) Sensitivity analysis - Observing how repeated changes to a single variable affect other variables. Typically the value of only one variable is changed repeatedly and the resulting changes in other variables are observed. Sensitivity analysis is a special case of what-if analysis involving repeated changes to only one variable at a time, e.g. cut advertising by $50 repeatedly so we can see its relationship to sales.
c) Goal-seeking analysis - Instead of observing how changes in a variable affect other variables, goal-seeking analysis sets a target value (a goal) for a variable and then repeatedly changes other variables until the target value is achieved, i.e. making repeated changes to selected variables until a chosen variable reaches a target value.
d) Optimisation analysis - Finding an optimum value for a selected variable, given certain constraints, e.g. what is the best amount of advertising to have, given our budget and choice of media?

DSS characteristics:
- Enable business models to be built
- Provide interactive sessions

4) Executive Information Systems (EIS)
These are management support systems tailored to the strategic information needs of top and middle management. An EIS provides information about the current status of, and projected trends for, key factors selected by top executives. EIS are easy to operate and understand; they use extensive graphic displays and provide immediate access to internal and external databases. EIS communicate summary-level information to executives, but can also display more detailed information so that executives can keep track of the factors they consider most critical to the success of their firm. NB: this type of system is covered in more detail below.

5) Strategic Information Systems
These are systems that give the first organisation to use them a competitive advantage, and are designed to exploit marketing opportunities by affecting the industry in which they operate.

6) Expert Systems
An expert system is an application of artificial intelligence: a computer program that enables a computer to act as an expert consultant to users in a particular area. It has a knowledge base, which stores the facts and rules that are used in reaching a judgment in a particular case. Expert systems are used in different fields including medicine, engineering and business.

7) Open Systems
An open system is one that interacts and exchanges material with its environment. The feedback it gives to the user of the system is used to control its operations, and can be either positive feedback (promotes processing) or negative feedback (discourages processing), e.g. business systems.

8) Closed Systems
A closed system is one that does not interact with its environment. It is self-contained and will continue in a state of equilibrium throughout its existence, e.g. an artificial heart pump.

9) Probabilistic Systems
A probabilistic system is one whose future behaviour cannot be predicted with certainty because its operations are governed by chance, e.g. business systems.

10) Deterministic Systems
These are systems that operate according to a predetermined set of rules. Hence, their future behaviour can be predicted if their present state and operating characteristics are accurately known.

NB: The first six systems represent classification according to the purpose of the IS, whilst the rest represent classification according to mode of operation.

System Hierarchy
From lower to higher we have the following systems:

Operational / Activity level systems - These cater for the day-to-day operations of individuals within an organisation. They vary widely depending on the nature of business operations and capture the finest operational details of the organisation. They are frequently called Transaction Processing Systems (TPS).

Departmental level systems - Also called Operational Control Systems (OCS); they focus on the analysis, planning and control of operational activities.

Functional level systems - These are designed to support the management of the primary functions of the organisation. They are also called Decision Support Systems (DSS) and may incorporate financial planning systems, manufacturing performance systems and pricing analysis systems.

Business level systems - These focus attention on specific products rather than the technology or market aspects of the unit itself; they are also called DSS.

Corporate or Strategic level systems - These are concerned with the direction, co-ordination and management of the organisation as a corporate unit in terms of its mission statement.

EXECUTIVE INFORMATION SYSTEMS (EIS)

They are also called Executive Management Information Systems (EMIS). EIS are computerized systems designed to meet senior executive managers' information needs for strategic decision-making. They combine many of the features of DSS and Information Reporting Systems (IRS). Thus EIS provide top management with immediate and easy-to-access information about the firm's key factors that are critical to accomplishing its strategic objectives.

Difference between MIS, DSS and EIS

MIS / DSS: MIS typically generate structured reports that help middle-level managers to organize and control resources. These reports may have little impact upon senior managers. DSS, on the other hand, help managers at all levels with less structured decision-making tasks, such as financial planning or market research analysis. But DSS are often too complex to be used by strategic managers.
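The DSS analytical modelling alternatives described earlier (what-if, sensitivity and goal-seeking analysis) can be sketched with a deliberately simple, hypothetical model in which sales respond to advertising spend at a fixed rate. The figures and the model itself are illustrative assumptions, not a real DSS.

```python
# Hypothetical decision model: sales = base + rate * advertising spend.

def sales(advertising, rate=2.5, base=10_000):
    return base + rate * advertising

# What-if analysis: change a selected variable and observe the effect.
print(sales(4_000), sales(4_000 * 0.9))   # what if we cut advertising by 10%?

# Sensitivity analysis: repeated changes to one variable only.
for spend in (3_000, 3_500, 4_000, 4_500):
    print(spend, sales(spend))

# Goal-seeking analysis: adjust advertising until sales reach a target value.
def seek_goal(target, spend=0.0, step=100.0):
    while sales(spend) < target:
        spend += step
    return spend

print(seek_goal(25_000))   # advertising spend needed to hit the sales target
```

Optimisation analysis would extend the goal-seeking loop with constraints (e.g. a maximum budget) and search for the best value rather than the first value that reaches the target.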
EIS: EIS provide managers with direct, on-line access to current information about the status of the organization, using state-of-the-art graphics and data retrieval tools. Top managers need data that let them know whether they are achieving critical business objectives: whether projects are progressing according to schedule, whether customers are re-ordering merchandise, or whether key employees are being retained. An EIS must be fast, because executive managers are busy, and easy to use (user-friendly), because executive managers won't thumb through detailed user manuals. Screens must be self-explanatory and the user interface must be simple.

Key factors needed for the development of a successful EIS
1) Commitment and involvement of top-level management - If executive managers are not visibly 100% behind the project, it will not get the priority it deserves, and even if it somehow progresses to completion it may not be used for long.

2) Understanding data sources - Successful implementation depends on the availability of accurate and complete data. For many organizations this could mean that a significant investment in existing business systems is needed to get the data into the format or structure required by the EIS before implementing it.
3) Focusing on what is important - The organization's critical success factors, exception reporting and easy information access facilities are key to the success of an EIS.
4) Response time - The EIS should be fast in providing the information required by executive managers. If response time is satisfactory, the system's use, functionality and scope will increase over time; hence ongoing performance monitoring is key.
5) Understanding of executive managers' level of computer literacy - The EIS must be easy to use. This can be achieved by employing techniques such as graphical data presentation (graphs, charts etc.), text, mouse and touch-screen input.
6) Learning curve for the development team - It is ideal to use familiar tools for EIS development, and vendor support for an EIS package is essential.
7) Flexibility - The needs of executive managers will continue to evolve and change over time, so the EIS should be as flexible as possible.
8) Ongoing support - An EIS cannot be implemented and forgotten; continuing support is critical to satisfy changing needs.

EIS should have the following capabilities:
a) They can be tailored to individual executive users.
b) They extract, filter, compress and track critical data.
c) They provide on-line status access, trend analysis, exception reporting and drill-down capabilities.
d) They access and integrate a broad range of internal and external data.
e) They are user-friendly and require minimal or no training to use.
f) They can be used directly by executive managers without intermediaries.
g) They can present graphical, tabular and/or textual information.

Executive Support Systems (ESS)
These are EIS with additional capabilities such as:
a) Support for electronic communication (e.g. e-mail, computer conferencing and word processing)
b) Data analysis tools (e.g. spreadsheets, query languages and DSS)
c) Personal productivity tools (e.g. electronic calendars, electronic diaries, reminders)

How information systems come to be developed
Information systems can be developed as a result of the following:
1. Problems in the existing system
2. New technology or advancements in technology
3. Voluntary suggestions made by the systems development department
4. Organizational growth - when the organization expands it needs more powerful systems to meet its needs
5. Increasing competition - companies need to keep pace in order to remain competitive
6. Senior management may consider improving operational efficiency

SYSTEMS ANALYSIS

Systems analysis is the study of an existing system and its problems with a view to correcting, enhancing or replacing it.

Human aspects of systems analysis
The introduction of an information system involves and affects people, and the systems analyst must anticipate a variety of possible reactions to the system and the reasons behind them, even if they seem irrational. The analyst must consider the possible reasons why users and employees resist change, since these indicate what must be overcome: systems development projects may fail if users resist the introduction of the new system. The reasons for resistance are as follows:
1) Fear of losing jobs
2) Ignorance
3) Inability to learn the new system
4) Social upset caused by the breaking up of working groups
5) Resentment against personal attack, or a feeling that any change is a personal criticism of the way a job was being done
6) Suspicion of management's motives in making the change
7) Fear of the unknown

How to overcome resistance
Among the ways of overcoming these background reasons for resistance are the following:
1. Keep people in the picture well in advance and sell the benefits.
2. Give people an opportunity to participate by making suggestions.
3. Take time to introduce the change, letting people get accustomed to the idea before implementing it.
4. Provide sound examples of where a similar system was implemented successfully and the benefits being enjoyed.
5. Cultivate a habit of change. If changes are frequent, people will be more used to the idea and changes will be more readily accepted.
6. Give security, which may mean guaranteeing people's financial future or providing retraining.
Attributes of a systems analyst
a) Should quickly discover the fundamental logic of a given system
b) Should produce sound and understandable plans
c) Should appreciate new facts from others when planning systems
d) Must be perceptive - don't rush to conclusions
e) Must be persistent in overcoming difficulties and obstacles
f) Must have a strong sense of purpose and character
g) Must have good social skills

Roles and functions of the systems analyst
1) Identifies problems within the existing system
2) Designs alternative solutions
3) Provides documentation
4) Evaluates the alternative systems
5) Explains how the current system can be modified
6) Develops a proper cost-benefit analysis in conjunction with the users' specifications
7) Provides a suitable environment for testing the new system
8) Estimates the technical and physical requirements of the new system
9) Persuades users, management and society (clients) to accept the new system

SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC)


SDLC is a term used to refer to the stages of information systems development. The SDLC has the following stages:
1. Problem identification / preliminary survey
2. Feasibility study
3. Systems analysis
4. Systems design
5. Implementation
6. Maintenance and review

Systems development projects are influenced by factors such as technological obsolescence, increased downtime, increased overhead costs, bottlenecks and technological advancement. In general, every system is born at some stage, survives its lifespan and has to be replaced owing to the problems cited above. The processes involved have given rise to what is known as the systems life cycle, shown below.

Preliminary Survey/Study -> Feasibility Study -> Investigation and Fact Finding -> Analysis -> Design -> Program Coding and Testing -> Implementation -> Maintenance and Review

PRELIMINARY SURVEY / PROBLEM IDENTIFICATION

A preliminary survey is first carried out to find out whether there are any problems in the system. A preliminary survey results in a statement of the problem: a statement which confirms only the existence of a problem.

FEASIBILITY STUDY

A feasibility study is carried out after real problems have been identified, so as to determine the most favourable solution. The analyst should list all possible solution objectives and analyse each one in detail. The areas examined for each alternative are as follows:
o Economic feasibility
o Social feasibility
o Technical feasibility
o Operational feasibility
o Schedule feasibility

Economic feasibility
Economic feasibility aims at justifying the costs of implementing the new system against the available financial resources. It aims to prove that the new system will return the funds invested within a favourable period of time, and assesses whether operational inputs are well below the outputs produced; if so, the system is said to be economically feasible. Such financial analysis is done using techniques like ROI (return on investment) and NPV (net present value).

Social feasibility
This is an evaluation of the effects of introducing the new system on the organization's human resources. It seeks to establish who is going to lose their job and who won't, what training needs there are, and what benefits employees will realise from the new system. The major emphasis, however, is to make employees conscious of the need to introduce the new system and foster a positive attitude in them, so as to avoid deliberate sabotage when the system is implemented. This matters because most workers feel threatened by changes in their work areas.

Technical feasibility
A study which seeks to prove whether the technology required for the new system is readily available, and how efficient and effective the targeted technology will be if used in the solution. Other areas of consideration include the availability of backups, spares and consultancy.

Operational feasibility
An evaluation of the input, processing and output procedures is made for each functional unit (subsystem): what changes in operational style will there be? If the solution requires an overhaul of management, demands major changes in management responsibilities and chains of command, or conflicts with the way in which the organization carries out its standard business, then the system is not operationally feasible.

Schedule feasibility
A measure of how reasonable the projected timetable is.

After the above five areas are studied, a Feasibility Study Report is compiled and submitted to management for approval. Among the set of alternatives, the systems analyst should specify a recommended solution objective. Management may adopt the recommended course of action or any of the listed solutions; it is also possible that they reject all the listed solutions and request new studies. If the report is approved, the next step commences.

Contents of the feasibility study report
1. Terms of reference, stating exactly which system areas the working group examined
2. A description of the hardware and software
3. An outline of both the existing and proposed systems
4. Costs and likely benefits of the new system
5. Proposed system alternatives
6. Staff and training requirements
7. A suggested implementation timetable
8. Recommendations and justification
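The two financial measures named under economic feasibility, ROI and NPV, can be sketched as small calculations. The cash-flow figures below are hypothetical; a real study would use the project's own cost and benefit estimates.

```python
# Minimal sketches of ROI and NPV for an economic feasibility check.

def roi(gain, cost):
    """Return on investment as a fraction of the cost."""
    return (gain - cost) / cost

def npv(rate, cash_flows):
    """Net present value of cash flows, one per period, starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A system costing 50 000 that returns 20 000 a year for four years:
flows = [-50_000, 20_000, 20_000, 20_000, 20_000]
print(round(npv(0.10, flows), 2))   # positive NPV -> economically feasible
print(roi(80_000, 50_000))          # 0.6, i.e. a 60% return on the investment
```

A positive NPV at the organisation's discount rate means the discounted benefits exceed the cost, which is the "favourable return" the text describes.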

INVESTIGATION AND FACT RECORDING

A detailed investigation is carried out in order to gather more information and understand the problem in greater detail than was the case in the preliminary survey. It aims at understanding the causes and effects of problem situations. The systems analyst should carry out extensive detective work, using any or all of the following information-gathering methods:

Interviews
These are one-on-one question-and-answer sessions conducted to gather information about current system problems and users' expectations of the new system.
Advantages
- They provide a good chance of obtaining considerably accurate information, especially if actual system users are interviewed.
- Facts can be verified.
- They provide an opportunity to overcome resistance.
- They are flexible and direct.
Disadvantages
- They can be time consuming if many users are to be interviewed.
- It is difficult to analyse every user's response during an interview.
- An interview becomes expensive if the interviewee has to come from a distant place, owing to transport, accommodation, catering and other related costs.

Questionnaires
This method entails sending out a set of questions that the recipients must answer and return.
Advantages
- Respondents are given time to assemble the required information, saving time at meetings.
- If the questions are open-ended, the respondent may describe the system with ease.
Disadvantages
- They have a low rate of return.
- The questions are difficult to frame.
- Questionnaires are boring to complete.

Observation
This entails being present and noting the problems encountered with the current system as users go about their daily work. A report is then produced detailing the problems that users experience with the current system.
Advantages
- Observations can be checked by others, to avoid distortions that may be caused by bias, misconceptions etc.
- Accuracy is enhanced, since aids such as video cameras, slides, thermometers and microscopes can be used.
Disadvantages
- The method cannot be used without the support of other techniques.

Record inspection
This is a process whereby a sample is taken from a number of records, and particular facts are gathered from that sample and used to represent the complete set of records.
Advantages
- Results can be obtained more quickly.
- It is relatively cheap.
Disadvantages
- Results can be faulty if a non-representative sample is taken.

The final system built from such a model can prove costly in the long run because of accumulated changes and reworking.

SYSTEMS ANALYSIS

The information obtained above is then analysed in order to refine the solution objective previously stated in the feasibility study into a new and proper solution alternative. At this stage the analyst should separate facts from myths, opinions, exaggerations and fictitious expectations. The information gathered includes:
1. Current processes
2. Data and documents currently being used
3. All transactions being carried out in the system
4. Volumes of transactions being processed
5. The users' requirements of the new system
6. Problems of the current system

The principal tools used in the analysis stage include:
a. Decision trees and decision tables
b. Entity relationship diagrams
c. Data flow diagrams
d. Prototyping
These will be covered in greater detail in the next handout. NB: the tools above are also known as process specification tools.

In summary, the analysis stage involves the detailed study of:
1. The information needs of the organization and of end users like yourself
2. The activities, resources and products of any present information system
3. The information system capabilities required to meet your information needs, and those of other end users

SYSTEMS DESIGN
After the analysis, the analyst is able to make the final solution selection or refinement, and then begins to build a design of the system. Both the logical design and the physical design are considered here, together with the development of specifications for the hardware, software, people and data resources, and for the information products, that will satisfy the functional requirements of the proposed system. The design stage mainly involves the design of inputs, outputs, files and programs, and is divided into two categories:
1. Logical design - the planning of the system as the users require it. It emphasizes the application as seen by those who will operate the system or use its outputs, i.e. the files needed, input forms/screens, output forms/screens, control procedures etc. It maps out the logic of the processing procedures, and so includes program design.
2. Physical design - the layout of buildings, equipment, office furniture and other physical considerations that facilitate smooth system operation.

PROGRAM CODING AND TESTING
Programmers then translate the program designs into actual program code for input to the computer system. The programs are tested and debugged. The systems analyst may also begin training users on how to operate the system, making use of the completed programs.
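The decision-table tool named under systems analysis can be sketched as a small lookup structure: each rule maps a combination of conditions to an action. The credit-checking conditions and actions below are hypothetical examples, not taken from the notes.

```python
# A decision table as a dictionary: each condition combination (a rule)
# maps to exactly one action.

DECISION_TABLE = {
    # (order within credit limit, account in good standing): action
    (True,  True):  "approve order",
    (True,  False): "refer to credit control",
    (False, True):  "request part payment",
    (False, False): "reject order",
}

def decide(order_within_limit, good_standing):
    return DECISION_TABLE[(order_within_limit, good_standing)]

print(decide(True, True))    # approve order
print(decide(False, False))  # reject order
```

Writing the rules out exhaustively like this is the point of the technique: the analyst can check that every combination of conditions has exactly one defined action.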

IMPLEMENTATION
This is the stage at which the theoretical design becomes a working, practical system. The major tasks involved in any implementation process are as follows:
1. Training of staff - the introduction of a new system means changes in the roles and relationships of people in the organization. A new system may involve the recruitment of new staff or the need for new skills.
2. Program coding - converting the program design specifications into actual program instructions using a suitable programming language. This is covered in greater detail above.
3. File conversion - converting the existing master files into magnetic (machine-readable) form.
4. System testing - ensuring that the individual programs have been written correctly and that the system as a whole will work. There are several stages of testing:
- Unit testing, which involves testing individual program modules.
- Program/integration testing, which involves testing the program modules as they are put together to form complete programs.
- System testing, which involves testing the whole set of programs in the full form in which they will be used, i.e. to see whether they link and coordinate correctly as expected.
- User acceptance testing, which involves the users of the programs testing to see whether the system is what was required.
5. Production of a complete set of system documentation - documentation refers to a wide range of reference materials used in the running and maintenance of computer systems. The types of documentation produced can be categorized into systems documentation, program documentation and user documentation (user guides).
6. Allocation of premises.
7. Changeover procedures - this is when the system is introduced and goes live. Changeover should start only after:
- All staff have been trained
- All systems have been fully tested
- All documentation is available for use
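The first two testing stages listed above can be sketched against a trivial, hypothetical invoice module: unit tests exercise one module in isolation, and integration tests exercise the modules working together.

```python
# A toy invoice module (hypothetical) used to illustrate testing stages.

def line_total(quantity, unit_price):
    return quantity * unit_price

def invoice_total(lines):
    return sum(line_total(q, p) for q, p in lines)

# Unit testing: one program module in isolation.
assert line_total(3, 4.0) == 12.0

# Integration testing: modules combined into a complete program.
assert invoice_total([(3, 4.0), (2, 5.0)]) == 22.0

print("all tests passed")
```

System and user acceptance testing work the same way at larger scale: the whole set of programs is exercised end-to-end, and finally checked by the users against what they actually required.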

Changeover can be achieved in 4 ways:
a. Direct changeover - also known as immediate or abrupt changeover. A date is set on which the old system is last used, e.g. at the end of a week; at the beginning of the new period the new system is put into use. This method is simple and causes few problems if the new system operates well. However, the risks involved in adopting this method are quite high, as it may not be possible to reactivate the old system should the new system fail.
b. Phased changeover - users move to the new system sectionally, i.e. department by department. For example, in an accounting system, the sales ledger may shift to the new system first, and the experience gained during this process is then used to shift the purchases ledger to the new system. When through, the debtors system may follow, and so on until the entire system has changed over. This method is a series of small immediate changeovers. The advantage is that there is less work, and experience gained in changing one subsystem facilitates the efficient changeover of subsequent subsystems; however, it is a very slow method.
c. Parallel run changeover - the old and new systems are run concurrently using the same inputs. The outputs are cross-checked and differences resolved. Outputs from the old system continue to be distributed until the new system has proved satisfactory; the old system is then shut down. It is a costly method because of the duplication involved, and is only possible when the outputs from both systems are easy to reconcile. The method requires employing extra staff or overtime for existing staff, but it allows management the facility of testing the new system while still retaining the existing one.
d. Pilot changeover - a miniaturised version of the intended system, containing all the characteristics of the system to be set up, is run and tested for functionality. If the system is satisfactory, the full-scale system is then put into place using any of the changeover methods above. Pilot changeover enables the testing of a system on a smaller scale, in order to avoid possible resource wastage should the system be found undesirable.

MAINTENANCE AND REVIEW
Six months or so after moving over to the new system, the analyst carries out a maintenance and review session to weed out any unforeseen problems in the system. Thereafter, the new system enjoys regular maintenance should breakdowns and other anomalies arise. As the system experiences entropy, the cycle is revisited. Over time the user may request changes to the system. These may be a result of the following:
a. Changing circumstances in the business, i.e. an increase in the volume of transactions
b. Requests for additional information which helps the users in the performance of their work
c. System requirements may have been incorrectly specified, in which case the system may have to be modified
Review
The purpose of the review is to determine whether the objectives specified in the proposals have been met and, if not, why not. Review serves two main purposes:
1. It allows actual development and implementation costs and timescales to be compared with the initial estimates.
2. It allows you to look closely at the original objectives. You may find that the objectives themselves were not completely accurate, and that may be useful information. There may also be objectives which have not yet been met, but which are still valid.
SYSTEM CONTROLS
These are measures that are built into a system to ensure the safety of information system activities and resources. They can minimise errors, fraud and destruction in an information services organisation. The controls that are needed are as follows:
a) Physical facility controls
b) Procedural controls
c) Information systems controls
Physical facility controls
These are methods that are put in place to protect physical facilities and their contents from loss and destruction. Examples include the following:
i. Physical protection
ii. Computer failure controls
iii. Network controls
iv. Insurance etc.

Procedural controls
These are operational methods that specify how the information system within an organisation should be operated to achieve maximum security. The following are some of the procedural controls that may be put in place:
i. Separation of duties
ii. Standard procedures
iii. Documentation
iv. Authorisation requirements
v. Auditing
Information systems controls
These are methods and devices that attempt to ensure the accuracy, validity and correctness of information system activities. Controls must be developed to ensure proper data entry, processing techniques, storage methods and information output. These control processes are designed to monitor and maintain the quality and security of the input, processing, output and storage activities of any information system.
PROJECT MANAGEMENT CONTROL
This is a function of the project manager which entails scheduling activities, establishing completion dates, supervising the project team, monitoring progress and ensuring that activities are being completed successfully. When problems occur, he must reallocate resources, reschedule personnel, and get users to review the design work and provide feedback. The project is conducted by splitting it into self-contained activities, thus facilitating resource allocation and scheduling. Some of these activities precede others, while some run concurrently. The 2 major tools used in project management are:
a) Critical Path Method (CPM)
b) Gantt charts
1) CRITICAL PATH METHOD (CPM)
CPM is also called critical path analysis (CPA) or program evaluation and review technique (PERT). It involves the following stages:
i. Programming
ii. Evaluation
iii. Review
iv. PERT/cost
v. Resource analysis
vi. Quality
Programming
This refers to the identification and listing of all major tasks to be undertaken in the project. Initially, broad activities are identified to show an overview of the total project development process as a network diagram.
Subsequently, each of the activities shown as an arrow in the network diagram will be developed into increasingly more detailed network diagrams. Construction of a network diagram shows the logical sequence of each of the activities listed. The symbols used are as follows:
Arrow - represents a task or activity, with the arrowhead indicating the direction of logical flow of activities.
Circle - represents an event, and is divided into three segments: earliest start, latest finish, and event number. An event is a logical device to connect up the activities and to indicate the point at which an activity (the preceding arrow) starts and finishes. It also ties together all those preliminary activities which must be completed before that particular activity may commence. It is often possible to start a subsequent activity prior to the full completion of a preceding activity, but only if the interdependent portion of the work is completed before the subsequent activity commences. This change in the logical relationship between activities may be represented in the network diagram by dividing the earlier activity into two parts.
Example
Given the activity schedule below, draw the network diagram and use it to calculate the total float and hence the critical path.

Activity  Duration  Earliest start  Earliest finish  Latest start  Latest finish  Total float
1-2       14        0               14               0             14
1-3       5         0               5                8             13
3-4       7         5               12               13            20
2-4       6         14              20               14            20
2-5       7         14              21               17            24
4-6       12        20              32               20            32
5-6       8         21              29               24            32

NB: The estimated time required to undertake each activity may be expressed in days or weeks. PERT employs a system of estimation based on 3 time classifications for each activity, namely pessimistic, most likely and optimistic.
STEPS IN DRAWING A NETWORK DIAGRAM
Step 1
Draw a basic network diagram showing event numbers with their corresponding activity durations as indicated in the activity schedule (table). Show the sequential nature of the activities in the diagram.

Step 2
Starting at the beginning of the network diagram and setting the initial time to zero, progress along each of the paths of activities in the diagram, adding the duration to obtain the earliest finish time of each activity. E.g. activity 1-2 may start at time 0 and requires 14 weeks to complete, hence its earliest finish would be week 14. Where more than one activity enters an event, the later of the finish times of the entering activities is taken as the earliest start time of all activities coming out of the event. For instance, activity 4-6 can only start at the completion of the paths 1-3, 3-4 and 1-2, 2-4, which take 12 weeks and 20 weeks respectively to complete. Hence, activity 4-6 can only start after week 20.

Step 3
The latest start and latest finish times are calculated by reversing the above process for calculating the earliest start and earliest finish times. The latest finish time for the project is set equal to the earliest finish time calculated, i.e. week 32. To calculate the latest finish time of an activity, deduct the activity duration from the latest finish time of the succeeding event. For instance, the latest finish time for activity 5-6 is calculated by subtracting its duration (8 weeks) from the latest finish time of the succeeding event (32 weeks), giving 24; hence activity 5-6 would need to start by week 24 at the latest to ensure total project completion by week 32. Where more than one activity leaves an event, the latest finish time for the activity preceding the event is the earlier of the latest start times calculated. For instance, the latest start of activity 2-5 is week 17 and that of activity 2-4 is week 14; hence the latest finish of activity 1-2 is week 14, the earlier of the two times.

Step 4
The last step involves the calculation of float time (leeway) for each activity, which allows the determination of the critical path for the project. This is calculated using the following formula:
Total Float = (Latest Finish - Earliest Start) - Duration
For example, to reach event 4 via event 3 it takes (5 + 7) = 12 weeks, whilst to reach the same event via event 2 it takes (14 + 6) = 20 weeks. Therefore the Total Float on the path through event 3 = 20 - 12 = 8 weeks.
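The four steps can be sketched in code. The following is a minimal illustration, not part of the original handout, that runs the forward and backward passes over the example activity schedule and derives the total float and critical path; all variable names are my own. It relies on the fact that every activity in this example runs from a lower to a higher event number.

```python
# CPM sketch for the example activity schedule above.
activities = {  # (start_event, end_event): duration in weeks
    (1, 2): 14, (1, 3): 5, (3, 4): 7, (2, 4): 6,
    (2, 5): 7, (4, 6): 12, (5, 6): 8,
}

events = sorted({e for pair in activities for e in pair})

# Step 2 - forward pass: earliest time at each event.
earliest = {events[0]: 0}
for e in events[1:]:
    earliest[e] = max(earliest[s] + d
                      for (s, t), d in activities.items() if t == e)

# Step 3 - backward pass: latest time at each event.
latest = {events[-1]: earliest[events[-1]]}
for e in reversed(events[:-1]):
    latest[e] = min(latest[t] - d
                    for (s, t), d in activities.items() if s == e)

# Step 4 - total float = (latest finish - earliest start) - duration.
total_float = {(s, t): latest[t] - earliest[s] - d
               for (s, t), d in activities.items()}

critical_path = [a for a, f in total_float.items() if f == 0]
print(earliest[6])      # project duration: 32 weeks
print(critical_path)    # [(1, 2), (2, 4), (4, 6)]
```

The zero-float activities 1-2, 2-4, 4-6 reproduce the critical path discussed below, and activities 1-3 and 3-4 come out with a float of 8 weeks, matching the worked example.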

The sequence of activities from the start of the project to the end that has the lowest total float (value zero in the example above) is termed the critical path. This is the longest logical sequence of activities within the project and dictates the overall length of time required to complete the project. The total float value indicates the degree to which particular activities may be delayed in starting, or extended in duration, without affecting the completion date of the project. In the example above, activities 1-3 and 3-4 have a total float of 8 weeks, which means that either the start of these activities may be delayed to week 8 or week 13 respectively, or the total duration of the activities may be increased by a further 8 weeks, before the project completion time of week 32 is jeopardised. Total float refers to a sequence of activities and not to individual activities; thus, if activity 1-3 is delayed by 6 weeks, the total float on the subsequent activity 3-4 would reduce to 2 weeks.
Evaluation
In the event that the initial plan fails to meet the required completion date, alternative strategies should be evaluated.
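The NB earlier mentions PERT's three time classifications per activity (pessimistic, most likely, optimistic) without saying how they combine. The conventional PERT weighting, which the handout does not spell out, is sketched below; the sample figures are illustrative only.

```python
# Conventional PERT expected-duration weighting (beta-distribution
# approximation): the most likely estimate is weighted four times.

def pert_expected_time(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(pert_expected_time(10, 14, 24))  # 15.0 weeks
```

Such an expected time would then be used as the activity duration in the forward and backward passes described above.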

Advantages of CPA
- Provides a rapid means of assessing the effect of changes in one part of the plan on all other activities and on the completion date.
- Assists in refocusing the attention of management on new areas of the plan which may become critical as a result of changes or delays.
- Shows start times and completion times.
- Shows how tasks depend on each other.
- Shows slack times, i.e. latest completion time less earliest finish time.
There are 2 ways of reducing the overall time required to complete a project:
1. Reducing activity durations.
2. Changing or reviewing the logical sequence of activities so as to identify those activities which may be undertaken in parallel, i.e. an activity that can be commenced when only part of the logically preceding activities has been completed, and which is not fully dependent on the completion of all the preceding activities. When analysing these possibilities, attention should be focussed on those activities that are on the critical path.
2) GANTT CHARTS
A Gantt chart is a graphical tool used for planning, monitoring and coordinating projects in the form of a grid that lists activities and deadlines; each time a task is completed, a darkened line is placed in the proper grid cell to indicate completion of the task. Shading highlights are used to indicate the extent to which each of the activities is on time, behind time or in advance of schedule. Gantt charts are useful for:
a) Displaying the current state of a small project with limited activities.
b) Summarising the activities of a larger project.
Example
Draw a Gantt chart for the activities listed below, given that the first activity should commence on 1 January 06:
a) Problem identification (10 days)
b) Feasibility study (25 days)
c) Analysis (10 days)
d) Design (20 days)
e) Implementation (30 days)
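As a rough sketch of the exercise above, the snippet below prints a text-mode Gantt chart for the five sequential activities starting 1 January 06. The bar scale (one character per 5 days) is an arbitrary illustrative choice of mine, not a convention from the handout.

```python
# Text-mode Gantt chart for the exercise activities, run one after
# the other from the project start date.

from datetime import date, timedelta

activities = [
    ("Problem identification", 10),
    ("Feasibility study", 25),
    ("Analysis", 10),
    ("Design", 20),
    ("Implementation", 30),
]

start = date(2006, 1, 1)
offset = 0  # days elapsed since project start
for name, days in activities:
    begin = start + timedelta(days=offset)
    bar = " " * (offset // 5) + "#" * (days // 5)  # 1 char per 5 days
    print(f"{name:24} {begin:%d %b}  {bar}")
    offset += days
```

The staggered bars make the sequential schedule visible at a glance, although, as the disadvantage below notes, the chart itself does not show why one activity must wait for another.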

Advantages
a) Understood by non-systems personnel and project participants
b) Shows overlapping tasks
c) Has the capacity to represent the current state of the project clearly
Disadvantage
a) Does not illustrate how activities depend on each other. Logical interrelationships between component activities or tasks are more difficult to deduce from a chart than from a network diagram.
Strategies to Overcome the System Development Bottleneck
1) Software Packages
This is software that has been developed by software houses, and can be purchased and used by other organisations.
Advantages
a) Economies of scale in development
b) Economies of scale in maintenance
c) Well established industry
Disadvantages
a) Major modifications may be necessary
b) May require too much hardware
2) Prototyping
A prototype is a small-scale working model of the actual system that is created using advanced development tools in a short period of time. Prototyping is a process of developing a model of the proposed system design and working with the user, using 4GLs, to modify it until the users' requirements are met.
Advantages
a) Rapid development of a working system
b) Stepwise refinement of designs
c) Uses new, more user-friendly tools (4GLs, query languages)
d) Much cheaper than the traditional approach (75% less)
e) Allows experimentation
Disadvantages
a) Unending iterations may occur
b) Not realistic for high-volume systems
c) 4GL products require excessive machine resources
3) User-developed Information Systems (EUC)
These are systems that are developed by users, sometimes with the assistance of MIS professionals.
Advantages
a) Increase user satisfaction
b) Provide users with needed DSS
c) Allow ad hoc query reporting
d) Address specialised problems
Disadvantages
a) Doesn't solve the backlog problem
b) Questionable cost-effectiveness
c) Inferior development methods
d) Poor transferability of systems
e) Poor quality assurance
The following factors should be considered when selecting a development approach: 1) commonality, 2) impact and 3) structure.

Commonality  Impact/effect  Structure  Method
Common       Broad          High       Package
Uncommon     Broad          High       Traditional
Uncommon     Broad          Low        Prototype
Common       Limited        High       Package
Uncommon     Limited        High       User-developed
Uncommon     Limited        Low        User-developed

Risk Assessment in Project Management
One of the major issues in project management is how to assess risk. If high-risk projects are embarked on, millions of dollars can be spent without ever achieving the anticipated results. Many of these problems occur because users and MIS professionals embark on projects without assessing the risk. However, high-risk projects are not necessarily bad.
If risky projects are successful, they usually provide the greatest benefits. The following risk factors must be well managed for a project to be successful: 1) Project size 2) Experience with the technology 3) Project structure

The greater the time and expense involved in the project, the greater the risk. If a project team is not familiar with the hardware, OS, DBMS, or telecommunications network, the risk increases. Projects with managers who have ill-defined ideas of the input, processing and output of a system, or who change their minds throughout the design process, incur greater risk.
PROCESS SPECIFICATION TOOLS
1) PROGRAM FLOW CHARTS
Flow charts are a traditional means of showing in diagrammatic form the sequence of steps in performing a programming task.
Symbols used:
- Terminator (Start / Stop)
- Process
- Input/Output
- Decision
- Connection line
- Connector
Example: Write a program that will accept the number of hours worked and the hourly rate, calculate the gross salary, determine the tax rate to be applied as listed below, and output the tax amount only.
Gross salary < $500 000: Tax = 20%
Gross salary > $500 000: Tax = 45%
2) DECISION TABLES
A decision table is a program design tool used to express the logic of a process. It comprises 4 parts, as shown below:

CONDITION STUB | CONDITION ENTRY
ACTION STUB    | ACTION ENTRY

Condition stub - the section where all possible conditions are listed
Condition entry - the section where all possible combinations of conditions are specified
Action stub - the section in which all possible actions are listed
Action entry - the section which shows the actions to be taken for each combination of conditions
Construction of a decision table
1) From a given problem, construct a table listing all conditions in the condition stub and all actions in the action stub

2) Calculate the number of rules (columns) by applying the formula 2^n, where n is the number of conditions.
3) Fill up the condition entry with Ys and Ns such that no column looks like any other.
4) Fill up the action entry with Xs to show the required action for each set of conditions.
5) Rules may be considered in any order, and if there are 2 conditions, one being the negative of the other, then eliminate one of them.
Example 1
If an order of $500 or more is received from a creditworthy customer a discount of 5% is allowed, and an order of less than $500 attracts a 3% discount; otherwise the case is referred to the manager for a decision. Draw a decision table for the above situation.

Order >= $500         | Y  Y  N  N
Creditworthy customer | Y  N  Y  N
----------------------+-----------
5% discount           | X
3% discount           |       X
Refer to manager      |    X     X
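A decision table translates almost mechanically into program code (one of the advantages of decision tables noted below). The sketch below encodes Example 1 as a rule lookup; the threshold test and action strings mirror the table, while the function and variable names are my own illustrative choices.

```python
# Decision table Example 1 as code: each Y/N column of the condition
# entry becomes a key; the action entry supplies the result.

RULES = {
    # (order >= $500, creditworthy customer): action
    (True,  True):  "5% discount",
    (True,  False): "refer to manager",
    (False, True):  "3% discount",
    (False, False): "refer to manager",
}

def order_action(order_value, creditworthy):
    return RULES[(order_value >= 500, creditworthy)]

print(order_action(800, True))    # 5% discount
print(order_action(300, True))    # 3% discount
print(order_action(800, False))   # refer to manager
```

Because every combination of conditions appears exactly once as a key, the completeness check the text mentions amounts to verifying the dictionary has 2^n entries.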

Example 2
If a customer invests at least $5000 per month and has either a good payment history or has been with the company for at least 10 years, he receives priority treatment; a customer who invests less than $5000 per month receives priority treatment if and only if he has a good payment history. Construct a decision tree and hence a decision table.
Advantages of decision tables
1) Ensure completeness of the process specification
2) Easy to check for possible errors such as impossible situations, contradictions and redundancy
3) Can easily be converted into program code
Disadvantages of decision tables
1) As the number of decisions increases, they become cumbersome
3) DECISION TREES
Decision trees are pictorial ways of showing a sequence of interrelated conditions and their outcomes. When drawing the tree the following rules should be followed:
a) Identify all conditions and actions
b) Build the tree from left to right
c) Branches should represent possible combinations of conditions, with the last item along each branch being the corresponding action
Exercise 1: Draw a tree diagram for the 2 examples above.
Advantages of decision trees
1) Easy to understand

2) The order of checking and executing actions is immediately noticeable due to the sequential structure of the decision tree branches
Disadvantages of decision trees
1) Consume a lot of space
2) Only suitable for problems with a small number of conditions and actions
4) PROTOTYPING
Prototyping is the rapid development and testing of working models (prototypes) of new systems in an interactive process involving both systems analysts and users. The user is encouraged to play an active role in the design of the new system while the analyst assumes an advisory role. The analyst builds a working model which can be demonstrated to the user, in order to establish whether it exactly meets the user's requirements and expectations.
Advantages of prototyping
1) Improves the quality of the new system by drawing upon the users' expertise
2) Removes or reduces user resistance by encouraging user participation
3) Training is reduced because users become familiar with the system as it is being developed
Disadvantages
1) Time and money could be wasted through the development of redundant prototypes
2) Difficult to document accurately
3) Users keep changing the user specifications, thus prolonging the development period
4) Difficult to convince management of the need to prototype, and hence difficult to obtain the required resources
ASSIGNMENT: Discuss in detail the concept of office automation and explain how an electronic office facilitates office automation. [30]
5) DATA FLOW DIAGRAMS (DFDs)
These are diagrammatic representations of the flow of data through the system, the various data stores used, and the changes that occur to the data itself as it moves through the system. When drawing DFDs one should stick to the following conventions:
i) Don't draw flow lines directly between data stores and external entities; there should be a process box between them to show the operation performed.
ii) Label the data flow line so that it is clear what data is being transferred.
The emphasis is on the data and its flow within a system. A DFD shows:
- How information enters and leaves the system
- What changes the information
- Where information is stored
DFDs are an important technique of systems analysis for various reasons:
- Boundary definition: the diagrams clearly show the boundaries of the system represented.
- Completeness of analysis: the construction of the diagrams helps to ensure that all information and activities within the system have been considered.
- Basis for program specification: DFDs denote the major functional areas of the system.
In the structured systems analysis and design method (SSADM), DFDs may be used to represent a physical system or a logical abstraction of a system, and have a specific notation where elements from outside the system are shown by an oval, as shown below.

A process is represented by a rectangle with a division within it for cross-referencing and numbering the process. The location of the process is placed at the top of the box. This might be a physical location, but it is often used to denote the staff responsible for performing the process

Three types of objects exist in a DFD, sometimes also known as a bubble diagram:
- Input objects (files, terminal input)
- Output objects (reports, messages)
- Storage objects (temporary or permanent, e.g. arrays, records, tables)
In the diagram, objects are drawn as bubbles and actions as boxes. When an arrow runs from an object to a processing action, the action reads the data object; similarly, when an arrow runs from an action to a data object, the action writes information into the data object.
Example
The following is an example of a DFD for the registration of a new member in a video rental shop. Only one file is involved: the file containing information about the customer. The new applicant's information, which has been checked and accepted, is stored in this file. This is then processed to create a membership card, which is then returned to the customer.

[Diagram: new-member registration DFD, showing the customer file]

A part of the membership system is shown here. There are three programs and one file. The 'check customer data' procedure accepts data from the keyboard; after checking and accepting the customer information, it writes the information onto the customer file. This file provides input to the 'create membership card' procedure, which then prints out the card.
6) APPLICATION GENERATORS
These are software tools which can be used to create complete systems. The user describes the input, data and files, and everything else that needs to be done; the application generator then uses this information to generate a program or suite of programs.
7) CASE (Computer Aided Software Engineering)
A CASE tool is any software tool that can be used in the design and development of a system.
Benefits of CASE
1) Provides checks on design errors
2) Provides a system-wide data dictionary
3) Prevents redrawing of diagrams
4) Provides opportunities to make design changes
5) Ensures conformance with design and documentation standards
6) Increases user involvement in systems design
7) Creates a repository of system design documentation
8) Improves system reliability and maintainability

9) Aids project management and control
FILE CONCEPTS
A file is an organized collection of related records - a collection of data pertaining to one item or entity. Each record consists of a number of fields, each of which holds one piece of data such as a name or a date of birth. For example, in a college there could be a file of student records, each with the structure shown below:
Name | Date of birth | Course | Intake | Class (day/evening)
Primary and Secondary Keys
A primary key is a record field, or a number of fields (in which case it is called a composite primary key), that is used to uniquely identify a record in a file. A secondary key is any other field or fields, besides the primary key, that can be used to identify a record.
Types of Files
a) Transaction file - a temporary file that contains details of all transactions that have occurred in the last period; it is discarded after serving its purpose of updating the master file.
b) Master file - a permanent file that is kept up to date by applying the transactions that occur during the operation of the business. Master files generally contain 2 basic types of data:
1) Data of a more or less permanent nature, such as (on a payroll file) name, address and other details that seldom change when the file is processed
2) Data which changes every time transactions are applied to the file, such as the value of gross pay to date, leave days etc.
c) Reference file - a file that contains data used by a program during processing, e.g. a file of tax bands in a payroll system.
d) Input files - files created by transcribing source documents to some medium readable by the computer.
e) Transfer files - files created at various stages of processing so that information from the preceding stage is stored.
f) Work files - temporary files that contain data selected from one or more files for short-term processes such as analysing sales, item-by-item reconciliation etc.
g) Output files - files created to carry output from a computer system, either to be printed, input to another computer system, or produced as reports, e.g. sales invoices.
h) Library files - used for storing application programs, modules or utility software required by the system.
i) Dump files - files created at periodic control points so that, if the system fails, there is always a fallback point. They are used for security purposes.
j) Scratch files - files used to contain temporary data and discarded immediately after serving their purpose.
k) Backup files - files created at frequent intervals so that, in the event of a disaster, the last update situation can be recovered.
l) Archive files - permanent files created to store records that won't be accessed in the near future.
File Organization
File organization refers to the order in which records are stored in a file. Files stored on magnetic media can be organized in various ways depending on several factors, such as:
- How the file is to be used
- How many records are processed each time the file is updated
- Whether individual records need to be quickly accessible
The following are the various methods of file organization:
Serial file organization
This is the creation of a file by placing one record after another as it becomes appropriate to file away a record, without paying any regard to record keys. There is no relationship between adjacent records, and there is no way of knowing the whereabouts of a particular record, so access to a selected record is through a serial search. Serial organization is not suitable for master files; serial files are used as temporary files to store transaction data (transaction files). With serial files, records are easily added by appending them to the end of the file. However, deleting a record is more complex: the computer serially searches for the record, copies all records preceding it to a brand new tape, leaves out the record to be deleted, and then copies all records succeeding it.
Sequential file organization
This is a file organization method in which records are organized one after the other according to their record key values. It leads to faster processing of master files compared with serial organization and is extremely efficient in batch processing systems. Files stored on magnetic tape are always either serial or sequential, as it is impossible to write records to a tape in any way except one after the other. Sequential files are usually used as master files for high hit rate applications. To add a new record, all records with lower key values are copied to a new tape, the new record is inserted in its proper place, and then all records with higher record keys are also copied. Record deletion is the same as for serial file organization.
Direct access file organization
Direct access to a stored record means that the user can get to the record within a few seconds without having to institute a search through the file, inspecting and rejecting records until a key match is found, as is the case with serially and sequentially organized files. Direct access is associated with on-line systems, real-time systems and immediate or rapid response systems.
Not all on-line systems offer direct access to stored master file records - it may be that input is on-line during normal working hours but updating of master files is done in batch processing mode overnight. The 2 commonly used methods of organizing files to enable records to be directly accessed are: 1) indexed sequential and 2) random.
a) Indexed sequential file organization
This file organization uses 3 areas on the disk for data storage: an index area, a main record storage area and a record overflow area. The index to the file contains a list of sorted record keys. Not every record key is stored in the index, but only those necessary to reference a physical record. Associated with each key in the index is an address that corresponds to the position on the surface of the disk where the record belonging to that key is stored. Records stored in the main storage area are not necessarily in sequential order; this does not affect sequential access to the file, since it is the index that is organized in a sequential manner. Records in the main and overflow storage areas can be amended, deleted and inserted into the file without having to create a separate updated master file. However, for security reasons, it is recommended to create a separate sequential file that logs all the changes to the master file. Magnetic discs are the appropriate storage medium for this method of organization. Indexed sequential organization arranges files into sequential order based on the key field, and is very similar in principle to sequential file organization; however, it is also possible to access the records randomly by using a separate index file. Indexed sequential files are useful for systems where file updating is done periodically (and sequentially) and referencing is done occasionally (and randomly), e.g. a stock control system. Records can be stored in a sequence based on the value in the key field of each record.
In addition, however, an index file contains an index consisting of a list of key field values and the corresponding disk address for each record in the file. The index is usually stored with the file when the file is first created, and retrieved from disk and placed in memory when the file is to be processed. Magnetic disc showing sectors and treks Tracks (concentric circles) Sector Overflow Refers to a situation where a record becomes longer during updating, or a record which is being newly inserted into the file, does not fit into the sector which should accommodate it (i.e. its home sector). To allow for such a happening, an overflow area is created on the disc pack so that

a record that fails to fit into its home sector is placed into an overflow sector, and a message or tag is left in its home sector giving the key field of the record and the address of the overflow sector in which it can be found. An indexed sequential file, then, consists of 3 areas: 1) A home area where the records are initially stored 2) One or more index areas set aside to hold the indexes 3) One or more overflow areas to hold records that are added at a later date and will not fit in their correct home sectors or blocks

Blocks
When a file is recorded on disk or tape, the physical space in which data is recorded is unlikely to be exactly the same size as the record, which is the logical unit of the file. Both disks and tapes transfer data between the CPU and backing store in chunks called blocks. The number of logical records stored in a block is called the blocking factor of the file. A block on disk takes up one sector, such that the words sector and block are used interchangeably when referring to disk. When an indexed sequential file is first set up, the user specifies how many records are to be placed in each block, thus making it easier to: 1) Access the file quickly 2) Deal with additions and deletions to the file as efficiently as possible 3) Make the most efficient use of storage space. A common blocking strategy is to put several records in one block, leaving enough free space for extra records in case of overflow. Block packing density is the ratio of space allocated to records in each block to the total space available. Cylinder packing density is the ratio of tracks initially set aside for records to the total number of available tracks on the cylinder.

b) Random file organization
A random file is also called a hash file, direct file or relative file, and has records that are stored according to their disk address or their relative position within the file.
Thus, the program which stores and retrieves the records has first to specify the address of the record in the file. This is done by a hashing algorithm (a mathematical calculation), which transforms the record key into an address at which the record is stored. Magnetic discs are the appropriate storage medium for this method. This method results in records being stored in locations derived from the record keys. A record is subsequently accessed by deriving its address from a calculation performed on the key. Random files are used in situations where extremely fast access to individual records is required; e.g. in a network system, user IDs and passwords could be stored on a random file. Advantages 1) No indexes are required 2) Permits the fastest access methods 3) No sorting of master files or transaction records is required 4) Handles volatile files well Disadvantages 1) Difficult to implement 2) Can lead to inefficient usage of the storage medium

Operations on files
The following operations are normally carried out on files: 1) Interrogating/referencing 2) Updating 3) Maintaining 4) Sorting Interrogating or referencing files When a file is referenced it is searched for a particular record, first using a primary key to identify the record. The search method is dependent on the type of file organization (i.e. sequentially for serial and sequential files; for indexed files the index will be read first and the address of the record obtained so that the record can be accessed directly, etc.). The record, if found, is then displayed on screen, printed or used for further processing, without itself being changed.
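The address-derivation idea behind random (hash) file organization can be sketched as follows. The division-remainder hash, the bucket count and the linear-probing collision strategy below are illustrative assumptions, not details given in these notes:

```python
# Sketch of random (hash) file organization: a record's storage address is
# derived from its key by a hashing algorithm (here, division-remainder).
NUM_BUCKETS = 7  # number of storage locations on the (simulated) disk

def hash_address(key: int) -> int:
    """Transform a record key into a bucket address."""
    return key % NUM_BUCKETS

def store(buckets: list, key: int, record: str) -> int:
    """Place a record at its hashed address, probing forward on collision."""
    addr = hash_address(key)
    for i in range(NUM_BUCKETS):
        slot = (addr + i) % NUM_BUCKETS
        if buckets[slot] is None:
            buckets[slot] = (key, record)
            return slot
    raise RuntimeError("file full")

def fetch(buckets: list, key: int):
    """Retrieve a record by re-deriving its address from the key."""
    addr = hash_address(key)
    for i in range(NUM_BUCKETS):
        slot = (addr + i) % NUM_BUCKETS
        if buckets[slot] is not None and buckets[slot][0] == key:
            return buckets[slot][1]
    return None

buckets = [None] * NUM_BUCKETS
store(buckets, 1001, "Ncube")
store(buckets, 1008, "Moyo")  # collides with 1001 (both hash to 0); probes to the next slot
```

No index is needed: the same calculation on the key locates the record again, which is why this method permits the fastest access.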

Updating Files A master file is updated by altering one or more of its records through the application of transactions to it. The method of update is again dependent on the type of file organization. a) Updating sequential files using the method of updating by copying The method requires the transaction file to be sorted in the same order as the master file; the following steps are then performed: 1) A record is read from the master file into memory 2) A record is read from the transaction file into memory 3) The record keys from each file are compared. If no updating of the master file record currently in memory is required (no corresponding record on the transaction file), the master file record is copied from memory to a new master file on a different tape or area of disk, and another master file record is read into memory, overwriting the previous one. This step is then repeated. 4) If there is a transaction file record that matches the master file record currently in memory, the latter is updated and written to the new tape. Steps 2 to 4 are then repeated. After a sequential file has been updated, 2 versions or generations of the master file exist, namely the old master file (still in the state it was prior to the update) and the new master file just created. The next time the file is updated, a 3rd generation or version of the master file will be created, and so on.
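The steps above can be sketched as a merge of the two sorted files. The record layout (key, balance) and the update rule (adding a transaction amount to the balance) are illustrative assumptions:

```python
# Sketch of updating a sequential master file "by copying": both files are
# sorted on the record key; matched records are updated, unmatched master
# records are copied unchanged, and the result is a brand-new master file.

def update_by_copying(master, transactions):
    """Merge a sorted transaction file into a sorted master file."""
    new_master = []
    i = 0  # current position in the transaction file
    for key, balance in master:                  # step 1: read a master record
        # steps 3-4: compare keys; apply every matching transaction
        while i < len(transactions) and transactions[i][0] == key:
            balance += transactions[i][1]
            i += 1
        new_master.append((key, balance))        # write to the new master
    return new_master                            # the next "generation"

old_master = [(100, 50.0), (200, 80.0), (300, 10.0)]
trans_file = [(200, -30.0), (300, 5.0)]
new_master = update_by_copying(old_master, trans_file)
```

Because every master record is read and rewritten in key order, the new file is produced in a single sequential pass, which is why both files must be sorted first.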
The general practice, however, is to keep 3 generations called grandfather, father, son for backup purposes in case of a disaster, as well as to save storage space.
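The grandfather-father-son convention can be sketched as a simple rotation; the function and file names below are hypothetical:

```python
# Sketch of grandfather-father-son backups: after each update run the newest
# master becomes the "son", earlier generations shift back one place, and
# anything older than the grandfather is discarded to save storage space.

def rotate_generations(generations, new_master, keep=3):
    """Append the latest master file and drop generations beyond `keep`."""
    generations.append(new_master)
    return generations[-keep:]  # retain only son, father and grandfather

tapes = []
for run in ["master_v1", "master_v2", "master_v3", "master_v4"]:
    tapes = rotate_generations(tapes, run)
```

After the fourth run, only the three most recent generations remain; the oldest tape can be reused, while the retained generations allow the current master to be reconstructed after a disaster by re-applying the logged transactions.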

Grandfather, father, son process

b) Updating indexed sequential files A record is accessed directly, read into memory, updated and then written back to its original location. This is called updating by overlay, updating in place, or updating in situ. This is possible because the record can be accessed by means of its address on the file and so can be written back to the same address. File maintenance It is similar to file updating, but refers to the updating of the more permanent fields on each record, such as name, marital status etc., as well as adding new records and deleting records that are no longer required. It basically refers to keeping the file up to date (i.e. it goes beyond only updating records). It is also dependent on the type of file organization. Sorting a file Master files are sometimes sorted if a report is needed in a different sequence from the one in which the records are held. File Access Methods How a file is organized determines how it is accessed. There are 4 possible file access methods: a) Sequential access method (SAM)

Is a method in which the file is sorted in a particular order first, and then a record is accessed by reading and comparing the preceding records' key values until the record being searched for is found (i.e. the key value matches the one being looked for). b) Indexed sequential access method (ISAM) Is a file access method in which the record keys in the file are organized into some sequence in an index before any record can be accessed sequentially or randomly (directly) using the corresponding record address found in the index table. c) Direct access method Records are accessed without having to go through any previous records. The method relies on the fact that each record is referenced by a unique address on the particular storage medium. Therefore, only direct access media such as disks are suitable for this method. d) Serial access Records in the file do not have to be sorted in any order prior to searching, but are located by reading each preceding record's key value until a match is found.

The Hit rate
Refers to the proportion of records accessed on any one run to the total number of records on the file, expressed as a percentage.

Volatility
Refers to the number of additions to and deletions from a file over a period of time.

PRIVACY, SECURITY AND ETHICAL ISSUES OF IT

ETHICAL ISSUES
Ethics are rules that society expects individuals to abide by. Ethics surrounding the use of computers are few and sometimes very difficult to enforce because the computer industry is relatively new and follow-ups are very difficult. However, generally agreed rules are: 1) There should be no unauthorized copying of programs or transfer to a different location other than the proprietor's licensed site. Copying, reproducing or communicating should be done with prior permission from the authors of the program. 2) Any item of software should be obtained from an authorized dealer, and a license should be obtained upon purchase of a new package.
3) Data collected, processed and maintained in a database, if relating to individuals, should be forwarded to other people only on a need-to-know basis. 4) The author of any software product has the right to alter, amend or update his/her item of software. Updates/changes done by anyone else should be authorized by the original author.

THE DATA PROTECTION ACT
The Act became law on 12th July 1984 and contains 8 principles, summarized as follows: - Personal data must be obtained and processed fairly and lawfully - Personal data must be held for specific purposes - Personal data must not be used for any reason incompatible with its original purpose - Personal data must be relevant and adequate - Personal data must be accurate and up-to-date - Personal data must not be kept longer than necessary - Personal data must be made available to the individual concerned and provision made for corrections - Personal data must be kept secure. Personal data means any data relating to a living person who can be identified from it.

COMPUTER CRIME
Refers to any form of abuse of the facilities offered by the computer to the disadvantage of the owner of the computer, such as hacking, fraud etc. a) Hacking This is a situation whereby an individual gains unauthorized access to data stored on a company's database files.

b) Espionage This crime occurs when an individual legally or illegally gains access to company information and passes it on to other companies, thus exposing his company's secrets. This information, in most cases, would have been stored on the company's computers or database. c) Fraud The use of computers in effecting unauthorized financial transactions, such as producing cheques for ghost clients, falsifying balance sheet statements (CFI/Century Bank) etc. d) Use of computer time for non-company purposes Is the use of a company's computers for purposes that do not benefit the company, such as typing, storing and printing one's CV, typing college assignments etc. e) Malicious damage Refers to acts of sabotage that can lead to the damage of computers, parts or components of the computer, including the data stored on them, such as fire outbreaks, deliberate erasure of data etc. f) Computer Viruses A computer virus is a program written to destabilize the functioning of a computer or simply to annoy the user. It is a computer program that spreads from one computer system to another, eventually performing the illicit function for which it was designed. In the worst case a virus can modify RAM, causing total collapse of the system, and in some cases may destroy the boot sector or partitioning sector of a computer, thus rendering it unbootable. Viruses are outright acts of sabotage and are written by people such as disgruntled programmers or hackers. A virus can be attached to a useful program and as such may enter the computer system undetected. A virus has the capability of reproducing itself, and in most cases, by the time the symptoms are evident the virus will have destroyed a considerable amount of data or programs.

Threats Posed By Viruses
A virus can either be destructive or non-destructive. a) Modes Of Operation Of Destructive Viruses 1) Mass destruction In this case the virus attacks the format of a disk, hence any program or data damage will be irrecoverable.
2) Partial destruction The erasure and/or modification of a particular section of a disk, which affects files stored in that position. 3) Selective destruction This is the erasure and/or modification of specified files or file groups. 4) Memory saturation The virus systematically reproduces itself in order to take up any available space, and if space is freed, it goes on to replicate itself in that space. The idea is that after some time the entire computer's memory is filled with the virus and the system crashes. 5) Random havoc The virus randomly changes data on disks or in memory during normal program execution. b) Mode of operation of non-destructive viruses 1) Annoyance The virus can display unusual messages, change keystroke values or data from input and output devices (e.g. deleting characters displayed on the VDU), thus annoying the user.

Virus Categories
a) Worm - a software program that generally burrows into the computer's memory. It is designed to seek out idle computer memory and, if found, to write itself into it successively until the memory space is full. This process is repeated every time there is free space, until the system crashes. The worm differs from the ordinary virus in that the recurring segments of the worm's code maintain communication with the code from which they were produced (networking).

b) Trojan - is a virus that comes hidden in a legitimate program. It does not possess the ability to replicate itself. The unauthorized code may or may not cause damage to the computer system. It may be activated immediately, or may continue to operate as part of the legitimate software for an extended period of time before activating itself. c) Time bomb - this is a virus that is triggered by a particular date, such as April Fools' Day or Friday the 13th. d) Logic bomb - this is a set of instructions (program code) that is executed or activated when a set of conditions is satisfied, such as a counter reaching a predetermined number, the number of files on a disk reaching a certain value etc. NB: Examples of viruses include Italian, Datacrime, Stoned etc. Examples of anti-virus software include Dr Solomon's Anti-virus software, Dr Watson's Anti-virus software, McAfee etc.

Symptoms Of Viruses
The signs of virus attacks on computers include the following: 1) Changes to disk volume 2) Changing or updating of file dates or formats 3) Displaying of unfamiliar messages or graphics on the screen 4) Programs taking longer than usual to load 5) Less memory available than usual 6) Unusual error messages appearing more frequently 7) Mysterious disappearance of computer programs from the computer

SECURITY
Security refers to the protection of data and equipment from such phenomena as natural disasters, fires, theft, sabotage, fraud, and accidental or deliberate erasure/alteration. Methods that may be employed to secure computer installations and data include the following: a) Physical control measures i) Locking the computer room whenever it is not in use. ii) Security personnel should be employed to control the movement of people into and out of the computer room. iii) Identification tags can be used to easily identify members of staff with access to the computer room. iv) Avoid visibility of computer hardware from the streets so that you do not attract thieves.
v) Fired or retired employees should relinquish their computer room duties with immediate effect to avoid deliberate sabotage. vi) Escort visitors throughout their stay in the computer room. vii) Ban unauthorized movement of hardware and equipment into and out of the computer room. viii) Notices of rules and regulations can be displayed, together with the penalties for failure to abide by them, to deter would-be offenders. ix) Alarms can be used to signal a fire outbreak or break-in.

b) Computerized control measures
i) Biometric locks. These are special devices used to identify physical traits of users, such as fingerprints, palm prints and retina mappings. A user is scanned by a scanner, which picks up and analyses his physical details. If the details do not match the pre-supplied details, then access is denied.
ii) Disconnect network connections when they are not necessary. This limits the possibility of unauthorized online access.
iii) Maintain an access and activity log to keep track of who had access to the computer room, what files were accessed, and what time the person left the computer room.
iv) A password can be used. This is a special code which a user must supply to the computer in order to be granted access to a computer system, a specified data file or a program, thus limiting access to authorized users only (unless the password leaks). The password should be changed regularly.
v) Encryption techniques involve converting a message in ordinary language, called plaintext, into an unreadable representation called cipher-text. The recipient must decipher it to transform it into a readable form. Data encryption is a method of scrambling data using a special algorithm that renders it meaningless to the human eye unless the reverse process (decryption) is applied to the encrypted data.
vi) File hiding can also be used. This ensures that anyone without knowledge of the location of the file cannot access it, unless the individual is smart enough to perform a file search.
vii) The read-only attribute can be used to protect data against accidental erasure or unauthorized amendments to a file.
viii) Anti-virus software can be used to protect data against virus attacks. Viruses are pieces of code written to destabilize the computer's operations.
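As a minimal sketch of the encryption idea described above, the example below XORs each plaintext byte with a repeating key to produce cipher-text, and applies the same operation again to decrypt. Real systems use vetted algorithms such as AES; the key and message here are purely illustrative:

```python
# Minimal illustration of encryption/decryption: XOR each plaintext byte
# with a repeating key. Applying the same algorithm to the cipher-text with
# the same key (decryption) recovers the original plaintext.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"MEET AT NOON"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)  # unreadable cipher-text
recovered = xor_cipher(ciphertext, key)  # decryption: the reverse process
```

The point is the symmetry: without the key the cipher-text is meaningless to the human eye, but the holder of the key can always reverse the scrambling.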

DATA COMMUNICATIONS Is the transfer of data from sender to receiver over a communication channel or medium. The data communications model

Sender - The device that transmits the data. It can also create the data to be transmitted. Message - The data that is transmitted. Medium - The actual connection or communication path between the source and destination devices; it carries data from the sender to the receiver. Receiver - The destination of the data. Digital Signals

A digital signal is one in which data is represented through a unique pattern of bits, each of which can take either of the 2 possible values, 0 or 1.

NB: Each bit in the bit stream has the same bit duration. Characteristics of Digital Signals 1) Polarity Digital signals may be unipolar (single current) or bipolar (double current). Unipolar means that the signal voltage/current is always positive or always negative. In bipolar signals the 1s and 0s vary from +ve to -ve voltage. 2) Baud rate Refers to the number of signals sent per second, e.g. a binary signal of 100 Hz has a baud rate of 100 3) Bits per second (bps)

Is a measure of how much information can be sent across a channel in a given time, i.e. transmission speed. Where a purely binary signal is being used, this is the same as the baud rate. When the signal is changed to another form (e.g. NRZ, differential coding etc.) it will not be equal to the baud rate, as each line change can represent more than one bit. 4) Data transfer rate Refers to the number of data bits transmitted per second.

Data representation in detail
Analogue signals
An analogue signal has a sinusoidal, wave-like form and is continuous in time, i.e. it is defined for every time instant.

Characteristics of analogue signals

Frequency - The number of complete cycles per second (measured in Hertz)
Cycle - A complete signal movement from the start point and back
Amplitude - The vertical height or depth of a signal
Period - The time taken to complete one cycle
Bandwidth - The range of frequencies over which a transmission can take place over a communication channel
Wavelength - The distance taken to complete one cycle
Phase - The difference between the start of one signal (reference signal) and the start of another signal (phased signal)
Modulation - The alteration of frequency, amplitude or phase

DATA TRANSMISSION MODES 1) Serial data transmission A group of data bits is transmitted sequentially over one line in a single file. Advantages a) Error occurrence is less frequent b) Can be used over long distances c) Only 2 wires are required, the TXD (transmit data) and RXD (receive data) lines. Disadvantages a) The receiver in the connection must sample the incoming data signal at the correct instants in time before it can reproduce the transmitted character b) Data transmission is slow c) It needs to be clocked. 2) Parallel data transmission Is the simultaneous transmission of groups of bits from sender to receiver over separate conductors. The number of conductors required is called the bus width. Advantages a) Data transmission is faster because more than one bit is sent at a time Disadvantages a) It is expensive to implement due to the several wires required b) It is only effective over short distances because data reliability decreases with distance due to skewing (bits transmitted simultaneously fail to reach the receiver at the same time) 3) Synchronous data transmission

Data is transmitted as blocks of characters at a time, with the sender and receiver locked into synchrony by a clock. The blocks don't have start and stop bits. 4) Asynchronous data transmission Data is transmitted one character at a time without clocking the sender and receiver into synchrony. The characters are marked by start and stop bits. ASSIGNMENT Giving examples, clearly explain the following data transmission modes: Simplex data transmission Half duplex data transmission Full duplex data transmission [15]

TRANSMISSION MEDIA
Refers to the communications link or channel that carries a signal from the sender to the receiver. When choosing the transmission media, some of the factors to be considered include: 1) The required transmission rate 2) Transmission distance involved 3) Cost of media 4) Ease of installation 5) Resistance to environmental conditions The 2 broad categories into which transmission media can be classified are: a) Bounded Media Consist of cables which use wire or glass strands for data transmission: i) Copper ii) Fibre optic b) Unbounded Media Signals are carried by electromagnetic waves and radiate through air freely. Copper Wire It is a transmission medium made up of copper conductor material. It is the most common transmission medium and exists in two broad states: 1) Open wire - Uninsulated copper cable 2) Twisted Pair - 2 insulated wires twisted around each other and covered with a plastic casing. Twisted Pair has 2 main subclasses, which are Shielded Twisted Pair (STP) and Unshielded Twisted Pair (UTP). Unlike UTP, STP has a protective sheath. Coaxial Cable It is a 2-wire conductor with a larger bandwidth than twisted pair. Because it has substantially more transmission capacity, it is more efficient than twisted pair. A 5cm diameter bundle of coaxial cable can handle about 20 000 data or voice circuits simultaneously. It is mainly used in radio and television transmission.
The cable has a central copper conductor surrounded by an insulating dielectric and an outer conductive layer (that acts as ground), protected in a sheath.

[Figure: coaxial cable cross-section - cover, outer conductor, insulator, inner core]
The dielectric can be solid, liquid or gas, but is usually polyethylene or air. Radio waves These are electromagnetic waves with a spectrum range of between 3 kHz and 300 GHz. Examples of radio wave applications are microwave systems, satellite communication systems and infrared transmission. Radio links use frequencies between 2 and 40 GHz. Microwave is a commonly used radio system. It is an extremely high frequency radio communication beam that is transmitted over a direct line-of-sight (transmitter and receiver visible to each other) path between any 2 points. Satellite communication systems are quite similar to radio systems. They operate at frequencies between 2 and 14 GHz. Their difference from radio systems is that their intermediate link station is in orbit around the earth. Infrared transmission uses low frequency light waves to carry data through the air on a direct line-of-sight path between 2 points. This technology is similar to that used in infrared TV remote controls. It is prone to interference. Infrared transmitters are small and therefore easy to install and use. Optic fibre cable Fibre optic cable is constructed of flexible glass and plastic. It consists of 3 concentric circles as shown below:

[Figure: optic fibre cross-section - core (10 micrometres), cladding (125 micrometres), primary coating (250 micrometres), secondary coating (1 mm)]

The core - Is a very thin pure glass fibre, which conducts light rays. It can consist of more than one fibre. In place of glass, plastic fibre can be used for shorter distances, but it has higher attenuation and is difficult to manufacture in pure form. The cladding - Is made of glass and serves to confine light within the glass fibre. Usually the core glass has a higher refractive index than the cladding, and this is necessary in order to keep light trapped in the core. The outer coating / jacket - Provides protection against moisture and chemicals and also provides physical strength.

Advantages of optic fibre 1) Very high capacity for a small strand 2) Immunity to electromagnetic interference 3) Longer cable runs between repeaters 4) Fairly low and flat (unchanging) attenuation over a wide range 5) Immune to lightning and crosstalk 6) Very hard to perform illegal data tapping Disadvantages of optic fibre 1) Manufacture of the core is very expensive, since it requires stringent purity control. Small impurities in the fibre can cause light to be absorbed or reflected out 2) Terminal equipment (light sources and receivers) at the ends of the cable lags in development as compared to fibre development itself. The terminal equipment is also difficult to align to the fibre. 3) Repairing and installing an optic fibre cable is quite difficult and needs specialist personnel and equipment.

METHODS OF EFFICIENT USE OF TRANSMISSION MEDIA
In the early days of electrical communications a medium such as copper wire carried a single information channel. One communication channel per communication session is expensive, whether on a dial-up or leased circuit. In most cases 2 communicating terminals will not fully utilize the capacity of a link. Hence, it becomes imperative to examine ways of efficiently using transmission channels. Implementing and maintaining transmission links in communication networks is an expensive undertaking for network operators. Much is gained by packing and transmitting multiple channels on one physical link, such as a copper wire pair. The resulting system is called a carrier.

MULTIPLEXING
Is a process of combining several data streams originating from a number of separate low speed channels to form a single composite high speed bit stream. It is usually done in multiples of 4, 8, 16 and 32 simultaneous transmissions over a single communication circuit. One communication channel per session is expensive because in most cases 2 communicating terminals will not fully utilize the capacity of a link.
Hence, special devices called multiplexers were designed to combine more than one input signal into a single stream of data that can be transmitted over a communication channel. This increases the efficiency of communication and saves on the cost of individual channels. The various multiplexing techniques available are: i) Time Division Multiplexing (TDM) ii) Statistical Time Division Multiplexing (STDM) iii) Frequency Division Multiplexing (FDM) a) TIME DIVISION MULTIPLEXING (TDM)

In time division multiplexing, either the bits or the characters being transmitted are interleaved during transmission. Terminals are polled in sequence, and each terminal is given a time slot (or slice) equal to the other stations', whether or not it has anything to transmit. The multiplexer at the other end separates the bits and passes them to the computer. A time division multiplexer is a device that distributes a number of channels periodically in time through the intermediary of pulse modulation. Each pulse corresponds to a channel and is interleaved between those of the other channels. Hence, a time division multiplexed signal is always composed by means of synchronous sampling of the channels, with pulses shifted with respect to each other. The interleaved channels form one frame of a duration corresponding to the sampling period. TDM allocates a separate time slot or slice on a high-speed bearer circuit to each user. A small part of the user's data (a bit, byte or block) is sampled in turn, synchronized and applied to the data link by the TDM. At the receiving end the TDM reconstitutes the original signals for transmission to their final destination. TDM transmits data in the form of message frames that consist of a number of time slots, each containing data from a different channel. The time slot is allocated to each channel even when data is not being transmitted, therefore some bandwidth is wasted, though not to the same extent as FDM's guard bands. TDM is generally more efficient than FDM. It is easy to change the number of sub-channels in a TDM system. TDM systems are generally less costly to maintain than FDM systems.
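The interleaving described above can be sketched as follows; the three-channel setup and the block labels are illustrative:

```python
# Sketch of time division multiplexing: blocks (bits or bytes) from each
# low-speed channel are interleaved round-robin into one high-speed stream,
# and the receiving multiplexer separates them out again by slot position.

def tdm_multiplex(channels):
    """Interleave equal-length channels block by block into one stream."""
    frames = zip(*channels)  # one frame = one time slot per channel
    return [block for frame in frames for block in frame]

def tdm_demultiplex(stream, n_channels):
    """Reconstitute the original channels from their fixed time slots."""
    return [stream[i::n_channels] for i in range(n_channels)]

a = ["A1", "A2", "A3"]
b = ["B1", "B2", "B3"]
c = ["C1", "C2", "C3"]
stream = tdm_multiplex([a, b, c])
```

Note that every channel gets its slot in every frame even if it has nothing to send, which is exactly where plain TDM wastes bandwidth and where STDM improves on it.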

[Figure: blocks from Channel A, Channel B and Channel C interleaved onto one common link]

The figure above shows the principle of time division multiplexing. The blocks represent either bits or octets/bytes. Each block on the common link can only use a third of the original block time T. Consequently, the number of bits per second (the capacity) of the shared link is 3 times that of each original channel. Multiplexing is also called interleaving; hence we have bit interleaving and octet interleaving.

b) STATISTICAL TIME DIVISION MULTIPLEXING (STDM)
In this method all sending workstations are polled for a statistical analysis of their transmission requirements. TDM is then applied only to workstations that have data to send. It is time division multiplexing in which the selection of the transmission speed of the multiplexed circuit is based on a statistical analysis of the usage requirements of the circuits to be multiplexed. Hence, it provides more efficient use of the circuit and saves money. It allows more terminals to be connected to a circuit than FDM or TDM. The basic STDM scheme is for character-by-character transmission. Another type of STDM involves interleaving entire messages from terminals rather than characters (block transmission). This reduces the amount of overhead addresses transferred but can increase delays. Fast Packet Multiplexing (FPM) is another advanced form of STDM that combines voice, data and video transmissions at high data rates of up to 2 Mbit/s. FPM systems can determine which transmissions are more important, and these will be sent first and fast. Voice is particularly important because it is not very tolerant of delays.

c) FDM

In FDM, there are several carrier waves, each having its own frequency, and each carries one lot of data. The multiplexer at the other end can therefore distinguish between each set of data and separate them out according to frequency.

WAVELENGTH DIVISION MULTIPLEXING (WDM)
In WDM, each circuit is transmitted on a separate wavelength of light, thus drastically increasing the capacity of the transmission medium. Fibre optic cable has traditionally carried only one circuit at a time.

MODULATION
Refers to the processing of a signal to make it suitable for sending over a transmission medium. Modulation is a technique that enables information to be transferred in the form of changes in an information-carrying signal (carrier signal). 1) AMPLITUDE MODULATION (AM) Is mainly used to transmit analogue voice modulated onto very high radio frequencies. The amplitude or height of the wave is changed. 2) FREQUENCY MODULATION (FM) Frequency modulation changes the frequency of the basic carrier wave (unmodulated signal) according to the modulating signal. Frequency modulation mainly finds application in broadcasting on the FM band (88-108 MHz), the TV sound channel and certain mobile communication systems.
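The amplitude modulation described above can be sketched numerically: the carrier's height is varied in step with the message signal while its frequency stays fixed. The carrier and message frequencies and the modulation index below are illustrative assumptions:

```python
# Sketch of amplitude modulation: the envelope of the carrier follows the
# message signal, swinging between 1 - M_INDEX and 1 + M_INDEX.
import math

CARRIER_HZ = 1000.0  # carrier frequency (illustrative)
MESSAGE_HZ = 100.0   # message (baseband) frequency (illustrative)
M_INDEX = 0.5        # modulation index: depth of the amplitude change

def am_sample(t: float) -> float:
    """Amplitude-modulated signal value at time t (in seconds)."""
    message = math.cos(2 * math.pi * MESSAGE_HZ * t)
    return (1 + M_INDEX * message) * math.cos(2 * math.pi * CARRIER_HZ * t)

# one second of the signal sampled at 8 kHz
samples = [am_sample(n / 8000.0) for n in range(8000)]
```

Frequency modulation would instead vary the `CARRIER_HZ` term with the message while the amplitude stayed constant.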

3) PHASE MODULATION (PM) In phase modulation, the carrier frequency remains constant, but the phase can be shifted in increments over a complete cycle of the waveform. Modulation is used for both analogue and digital information. For analogue information it is effected continuously (soft transitions), and for digital information it is effected step by step (state changes). As we have said above, modulating a signal has the effect of increasing the bandwidth available for the signal. If a signal is digital, modulation makes it possible to transmit it over a telephone line. When a data-carrying signal is modulated and transmitted over a line, it must be demodulated in order to recover the baseband signal from the modulated carrier. Demodulation removes the carrier from the signal. In data communication, the unit that performs modulation/demodulation is called a modem. Generally speaking, modulation involves manipulating either the frequency (frequency modulation) or the amplitude (amplitude modulation) of the carrier.

MODULATION OF AN ANALOGUE CARRIER SIGNAL BY A DIGITAL SIGNAL
Digital modulation makes it possible to transmit digital information on analogue carriers such as radio and light waves. In the modulation process of a digital signal, a bit or group of bits may be translated into rapid state changes, such as amplitude or phase shifts. Adjacent bits of a signal may be paired together to form what are called dibits (00, 01, 10 and 11), each dibit being represented by a single change in the modulated waveform; hence such changes occur only half as often, and the baud rate is only half the bit rate. The term digital modulation is used in 2 senses, which are: 1) Modulation of an analogue carrier by a digital baseband signal 2) Modulation of a digital carrier by an analogue baseband signal. A digital signal may modulate an analogue carrier by using amplitude, frequency or phase modulation.
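The dibit idea can be sketched as a mapping from bit pairs to phase changes, so the line signals at half the bit rate. The specific phase angles below are an illustrative (QPSK-style) assumption:

```python
# Sketch of dibit encoding for phase modulation: each pair of adjacent bits
# maps to one of four phase shifts, so one signal change carries two bits.
DIBIT_TO_PHASE = {"00": 45, "01": 135, "11": 225, "10": 315}  # degrees (illustrative)

def encode_dibits(bits: str):
    """Group the bit stream into dibits and emit one phase change per dibit."""
    assert len(bits) % 2 == 0, "pad the stream to an even number of bits"
    return [DIBIT_TO_PHASE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

phases = encode_dibits("00011110")  # 4 phase changes carry 8 bits
```

Because each waveform change carries two bits, the baud rate (signal changes per second) is half the bit rate, which is exactly the point made above.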
An example of the use of a digital carrier for transmitting an analogue baseband signal is the use of pulse code modulation (PCM) for sending voice signals over digital circuits. This enables telephony to enjoy the advantages of digital transmission: a) effective immunity to distortion, interference, cross-talk and noise; b) constant transmission performance regardless of the length of a telephone connection and its routing.

AMPLITUDE SHIFT KEYING (ASK)
If a binary signal is used to modulate the amplitude of a carrier to the greatest possible depth, the carrier is switched on and off. This is known as amplitude shift keying (ASK): one amplitude is defined to be a zero, and another amplitude is defined to be a one. Amplitude modulation is suitable for data transmission, and it allows use of the available bandwidth of a voice-grade line; however, it is more susceptible to noise during transmission.

FREQUENCY SHIFT MODULATION/FREQUENCY SHIFT KEYING (FSK)
This is a modulation technique whereby each 0 or 1 is represented by a certain number of waves per second (i.e. a different frequency). If a digital signal is to modulate a carrier in frequency, the digital signal causes the frequency of the carrier wave to be switched between two values. Two frequencies within the audio range are chosen. These frequencies, which should be easy to separate with band-pass filters, are assigned as the carriers. For a bit 1 one frequency (a certain number of waves per second) is transmitted, and the other frequency is transmitted for a 0. Two band-pass filters, rectifiers and a differential amplifier are used to demodulate the signal at the receiving end. In frequency modulation the amplitude of the signal does not vary. Switching between the two carriers is performed at the bit rate of the data signal.
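As a hedged sketch of FSK, the toy modulator below switches between two assumed audio-range frequencies at the bit rate while keeping the amplitude constant; the frequencies and rates are illustrative, not taken from the text:

```python
import math

# Toy FSK modulator: each bit selects one of two carrier frequencies.
F0, F1 = 1200.0, 2200.0      # assumed carrier frequencies for bit 0 and bit 1
SAMPLE_RATE = 9600           # samples per second
SAMPLES_PER_BIT = 8          # switching happens at the bit rate

def fsk_modulate(bits):
    """Return a list of samples; the frequency changes, the amplitude does not."""
    samples = []
    for n, bit in enumerate(bits):
        f = F1 if bit else F0
        for k in range(SAMPLES_PER_BIT):
            t = (n * SAMPLES_PER_BIT + k) / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * f * t))  # constant amplitude
    return samples

wave = fsk_modulate([0, 1, 0])
```

A receiver would pass this waveform through two band-pass filters (one per carrier) to decide which bit was sent in each interval.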
The receiver must be able to distinguish the two carrier frequencies at each sampling instant; if the data bit rate increases, the two carrier frequencies must be further apart for the receiver to be able to sample the received data accurately. FSK finds use in voice-frequency telegraph systems and in data transmission over telephone connections in the PSTN.

PHASE MODULATION/PHASE SHIFT KEYING (PSK)

Phase refers to the direction in which the wave begins. Waves that start by moving up and to the right are called zero-degree phase waves; those that start by moving down and to the right are 180-degree phase waves. A phase modulation system operates by shifting the phase of a sinusoidal carrier wave between two different values to represent the digital data signal. With phase modulation, one phase is defined to be a 0 and the other phase is defined to be a 1. For phase shift keying, every time there is a change in the binary value (0 or 1) there is a 180-degree change in the phase, i.e. the wave immediately goes in the other direction. Phase shift keying needs a local oscillator at the receiver, and this oscillator must be accurately synchronised in phase with the unmodulated transmitted carrier. In practice this makes it difficult to detect the received signal and convert it back into digital form. For this reason PSK is not commonly used; a version of PSK called differential phase shift keying (DPSK) is used instead because it overcomes this problem.

OVERVIEW OF ANALOGUE TO DIGITAL (A/D) SIGNAL CONVERSION
Digital transmission is done by sending a series of electrical or light pulses through the media. Digital transmission is preferred to analogue transmission because it: a) produces fewer errors; b) is more efficient; c) permits higher maximum transmission rates; d) is more secure; e) simplifies the integration of voice, video and data on the same circuit. For data to be transmitted on a digital system it has to be converted into a sequence of pulse combinations, which are then transmitted practically without any noticeable distortion. So if analogue data is to be transmitted digitally it has to be converted through the following stages:

Sampling
If we consider the analogue signal to be a wave, sampling is done by reading the signal amplitude at regular intervals.
The samples are taken on the signal waveform at suitable intervals, which means that the quality obtained should allow a reasonably accurate digital representation of the analogue signal.
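Sampling at regular intervals can be sketched as follows; the frequencies are illustrative telephony-style values, and the assertion encodes the rule (stated below as the sampling theorem) that the sampling rate must exceed twice the highest signal frequency:

```python
import math

# Sampling sketch with assumed values: 3400 Hz as the highest voice
# frequency and 8000 Hz as the sampling rate used in telephony.
F_MAX = 3400.0
F_SAMPLE = 8000.0

assert F_SAMPLE > 2 * F_MAX   # sampling-theorem condition

def sample(signal, f_sample, n_samples):
    """Read the signal amplitude at regular intervals 1/f_sample apart."""
    return [signal(n / f_sample) for n in range(n_samples)]

def tone(t):
    """A 1 kHz test tone standing in for a voice signal."""
    return math.sin(2 * math.pi * 1000.0 * t)

pam = sample(tone, F_SAMPLE, 8)   # one cycle of the tone: 8000/1000 = 8 samples
```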

[Figure: a voice waveform sampled at regular intervals, with amplitude levels marked from -2 to 2]

The voice curve is time-divided into amplitude values. The number of samples to be taken is determined by the sampling theorem, which states that all the information in the original signal will be present in the sampled signal if: a) the original signal has a limited bandwidth (i.e. has a maximum frequency); b) the sampling frequency is greater than twice the highest frequency in the original signal.

Quantisation
This is the measuring of the amplitude of the pulses in the PAM (pulse amplitude modulated) curve and the assigning of a numerical value to each pulse. To avoid having to handle an infinite number of

numerical values, the amplitude levels are divided into intervals and the same value is assigned to all samples within a given interval. Quantisation compromises on accuracy, as the series of digits so produced does not fully reflect the analogue curve; this inaccuracy is called quantising distortion.

Coding
This is the process of representing the quantised levels with binary codes to create the digital pulse stream.

A/D Conversion, Transmission, and D/A Conversion
[Figure: the A/D conversion, transmission and D/A conversion chain between transmitter and receiver]
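The quantisation and coding stages described above can be sketched as a toy PCM encoder; the interval count and the sample values are illustrative assumptions:

```python
# Toy PCM encoder: map each PAM sample to the nearest of 2**BITS
# quantisation intervals, then code the interval number in binary.
BITS = 3                      # 8 quantisation intervals (real PCM uses 8 bits)
LEVELS = 2 ** BITS

def quantise(sample, lo=-1.0, hi=1.0):
    """Assign the sample to an interval number between 0 and LEVELS-1."""
    step = (hi - lo) / LEVELS
    level = int((sample - lo) / step)
    return min(max(level, 0), LEVELS - 1)   # clamp to the valid range

def code(level):
    """Represent the quantised level as a fixed-width binary code."""
    return format(level, "0{}b".format(BITS))

# Four illustrative PAM samples become a digital pulse stream.
stream = "".join(code(quantise(s)) for s in [-1.0, -0.2, 0.3, 0.9])
```

Note how two different samples falling in the same interval would receive the same code; that loss is exactly the quantising distortion mentioned above.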

D/A Conversion
Firstly, the digital signal is regenerated to recover energy lost during transmission. Secondly, the PCM code is decoded; the decoding process translates the PCM code into quantised amplitude values. Lastly, the signal is reconstructed to produce the original analogue signal.

Data packets in network communication
A data packet is a unit of information transmitted as a whole from one device to another on a network; it forms the basic unit of network communications. Data normally exists in the form of large files. In order for many users to transmit data quickly and easily across the network at once, the data must be broken down into small manageable chunks called packets. Reformatting data into packets is necessary for the following reasons:

a) A large amount of data sent as one unit ties up the network and makes timely interaction and communication impossible, because one computer floods the network cable with data.
b) If errors occur during data transmission, usually not every packet is affected; thus only the affected packets, not the complete message, need to be retransmitted, which makes it easier and faster to recover from errors.

Generic structure of a packet
Header | Data | Trailer

The Header
Consists of the following: a) an alert signal to indicate that the packet is being transmitted; b) the source address; c) the destination address; d) clocking information to synchronise transmission.

Data
Carries the actual data to be transmitted. Its size depends on the network type but usually varies between 512 bytes and 64 KB.

Trailer
The exact content of the trailer depends on the communication method or protocol being used; however, it usually contains an error-checking component called a CRC. Different networks have different formats for the packet and allow different packet sizes. The packet size limits determine how many packets the network operating system will create from one large block of data.

How is a packet formed?
Packet formation begins at the application layer, where data is generated. As the packet descends through the lower layers, information relevant to those layers is added to the data for use by the destination computer's corresponding layers. At the transport layer the original data block is broken down into the actual packets, and sequence information is added to guide the receiving computer in reassembling the data from the packets.

Packet addressing
Most packets on the network are addressed to just one computer and therefore get the attention of just one computer. Every adapter card sees all packets sent on its cable segment, but it only interrupts the computer if the packet's address matches the card's individual address. Another type of packet addressing is called broadcasting: when packets are sent with a broadcast address they can get the simultaneous attention of multiple computers on the network.

Directing packets
Network components use the addressing information in packets to direct the packets to their destinations, or to keep them away from network locations where they don't belong.
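The generic header-data-trailer structure described above can be sketched in code; the field widths and the use of CRC-32 as the trailer check are illustrative assumptions, not a real protocol:

```python
import struct
import zlib

# Sketch of a packet: header (source, destination, sequence number),
# data, and a trailer holding a CRC error check. Field sizes are invented.

def build_packet(src: int, dst: int, seq: int, data: bytes) -> bytes:
    header = struct.pack("!BBH", src, dst, seq)          # 4-byte header
    crc = struct.pack("!I", zlib.crc32(header + data))   # 4-byte trailer
    return header + data + crc                           # header | data | trailer

def check_packet(packet: bytes) -> bool:
    """The receiver recomputes the CRC and compares it with the trailer."""
    body, trailer = packet[:-4], packet[-4:]
    return struct.pack("!I", zlib.crc32(body)) == trailer

pkt = build_packet(1, 2, 0, b"hello")
ok = check_packet(pkt)                     # an undamaged packet checks out
bad = check_packet(b"\x00" + pkt[1:])      # a corrupted copy is detected
```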
The following 2 functions play an important role in properly directing packets:
a) Packet forwarding - computers can send a packet on to the next appropriate network component based on the address in the packet's header.
b) Packet filtering - filtering refers to the process of using criteria, such as an address, to select specific packets.

SWITCHING IN NETWORKS
Switching is the establishment, on demand, of an individual connection from a desired inlet to a desired outlet within a set of inlets and outlets, for as long as is required for the transfer of information. Larger data networks generally employ some kind of switching, which enables the network to be shared by many users. Three different types of switching are in use:

1. Circuit switching
An end-to-end link is first set up and thereafter the message is transmitted. After the message has been received, the circuit is released and the connection is terminated. In WANs circuit switching is used to establish links between terminals/computers wishing to communicate.

Advantages of circuit switching
a) Greater transparency.
b) Very low transmission delays.

Disadvantages of circuit switching
a) Individual circuits with a permanently allocated transmission capacity are established and maintained, creating inefficiency in the usage of line capacity for burst-like data applications.
b) Error control is not provided.
c) Speed conversion is not provided.
d) Messages are not stored within the network, i.e. you cannot get a message through the network if the two circuits cannot be connected.

2. Message switching/Store-and-forward switching
A message is sent into the network with its source and destination addresses and some control information added, and it is routed through the network to its destination as soon as possible. In message switching there is no need to set up the link, nor does it matter if the two communicating terminals operate at different transmission speeds, because the system will automatically convert the message to the speed of the receiving terminal. The system can also check for errors.
Advantages of message switching
a) Messages can be transmitted at any time convenient to the sender.
b) The network automatically performs code, protocol and speed conversions, which permits different types of terminals to communicate with one another.
c) Queuing of messages and automatic dialling give high utilisation of lines.
d) Messages can be broadcast to several terminals.
e) If traffic is heavy, calls are not blocked but merely delayed.

With store-and-forward switching, if one device is busy the central switching site stores the incoming message from the sending device and retransmits that message to the destination when the device becomes available. The combination of store-and-forward switching with circuit switching by modern high-speed computers offers the highest levels of data throughput to network users: the circuits first attempt circuit switching, but if the destination is busy, store-and-forward is used.

3. Packet Switching
This is a store-and-forward data transmission technique in which messages are split into small segments called packets (usually 128 bytes), each carrying the addresses of the sender and receiver, before transmission as separate entities. At the receiving end, the message is reassembled from the received packets and then sent to the destination terminal. Packet switching consists mainly of 2 components:
a) Packet switching exchanges (PSEs) or switching nodes (SNs) - these are connected by time-division multiplexed high-speed channels and terminals. The terminals can either be directly connected to the PSE or they can go through a packet assembler/disassembler (PAD). The line connecting a terminal to a PSE is called a dataline and often consists of a leased analogue telephone line, with a modem connected at either end, operating on a full-duplex synchronous basis.
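The splitting and reassembly just described can be sketched as follows; the dictionary-based packet format is an invention for illustration, and 128 bytes is used as the data size per packet:

```python
# Packet-switching sketch: split a message into fixed-size packets with
# sequence numbers, deliver them in any order, and reassemble at the receiver.
PACKET_SIZE = 128   # bytes of data per packet

def packetise(message: bytes, src, dst):
    """Break a message into addressed, sequence-numbered packets."""
    return [
        {"src": src, "dst": dst, "seq": i // PACKET_SIZE,
         "data": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets):
    """Order packets by sequence number and concatenate their data."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"x" * 300
pkts = packetise(msg, "A", "B")                 # 300 bytes -> 3 packets
restored = reassemble(list(reversed(pkts)))     # order of arrival doesn't matter
```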

Packet switching network
An intelligent terminal is one capable of assembling packets, and so it is connected directly to the PSE. A non-intelligent terminal cannot assemble packets and has to be connected to the PSE via the PAD. High-speed lines connect PSEs. The receiving IMP (interface message processor) stores the incoming packets until all the packets in the sequence have arrived. It then forwards these to the local PAD, where the packets are assembled, the complete message arrived at and delivered to the receiving DTE.

MEDIUM ACCESS CONTROL (MAC) METHODS/COMMUNICATION SCHEMES
When data has been broken down into packets, the next step is to place it on the network cable for transmission. Since the cable is shared, a systematic way of using the transmission medium is required in order to avoid conflicts when computers try to use the cable simultaneously. The set of rules governing how a computer puts data onto the network cable and takes data from the cable is called a medium access method. There are a number of ways to prevent simultaneous use of the cable, but the commonly used ones are as follows (NB: the first 3 are the most commonly used).
a) Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
Carrier sense - a station listens to determine if anyone else is transmitting.
Multiple access - everyone is free to try accessing the cable at any time, but only one will be successful.
Collision detection - the ability to detect when a collision occurs.
With CSMA/CD, when a station wants to send data it carries out the following sequence of steps:
i) It listens to see if anyone is transmitting.
ii) If no one is transmitting, the data is placed on the network.
iii) The workstation listens to its own message to determine if there was any collision.
iv) If no collision is detected, transmission was successful and the process is finished.
v) If there was a collision, the workstation waits a random period of time and then retransmits.

Advantages of CSMA/CD
1) In principle it supports an unlimited number of nodes that won't require preallocated slots or inclusion in token-passing activities. Thus it allows nodes to enter and leave the network without the need for network initialisation or configuration.
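The five steps above can be sketched as a toy simulation; the medium and collision detector are stand-in callables, and the attempt limit and backoff range are assumptions:

```python
import random

# Toy CSMA/CD send loop following the steps listed above.

def csma_cd_send(medium_busy, collision_occurs, max_attempts=16):
    """Return the attempt number on success, or None if we give up."""
    for attempt in range(1, max_attempts + 1):
        if medium_busy():            # i) carrier sense: listen first
            continue                 # someone is transmitting; try again
        # ii) the medium is free: place the data on the network
        if not collision_occurs():   # iii) listen to our own transmission
            return attempt           # iv) no collision: success
        # v) collision: pick a random backoff period before retrying
        random.uniform(0, 2 ** min(attempt, 10))  # delay chosen (not slept here)
    return None

# A free, collision-less medium succeeds on the first attempt.
attempts = csma_cd_send(lambda: False, lambda: False)
```

Real Ethernet uses truncated binary exponential backoff, which the `2 ** min(attempt, 10)` bound loosely imitates.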

Disadvantages of CSMA/CD
1) Not suitable for heavy traffic conditions, as collisions reduce efficiency.
2) Suitable only for short links, as attenuation undermines the collision detection mechanism at distances beyond 2,500 m.
3) Expensive, due to the analogue circuitry required for collision detection.

CSMA/CA
This is a version of CSMA/CD which combines the light-traffic efficiency of CSMA/CD with the heavy-traffic efficiency of token-based methods. If 2 or more stations collide, CSMA/CA generates a jam signal, which is sent to the network to notify all nodes of the collision, synchronise clocks and then start the contention time slots.

b) Token Passing
This method can be used on either bus or token ring types of network. A single packet (called the TOKEN) of unique format is passed from node to node in a specified sequence. One of the bits in the token is set or cleared to indicate whether it is free or busy. The workstation holding the token is the only station enabled to transmit. A node wishing to transmit a data packet must:
a. Wait for the token to arrive at the node.
b. Inspect the free/busy bit.
c. If the token is free, change the bit to busy.
d. Retransmit the token, immediately followed by the data packet (the data packet is embedded into the token to form a frame).
e. If the token was busy, send it on and wait.
f. The receiver of the data copies the data out of the frame.
g. A change is made to indicate that the frame is now returning to the sender.
h. The original transmitting station removes the data and releases a free token.
Note that the above steps of accessing the token are not standard to all token rings; there are a few modifications to some of the finer operations, although the general idea is the same. This method allows packets of any length to be sent, thus providing more efficient use of the bandwidth of the network.

c) Demand Priority
The repeaters or hubs manage network access by doing round-robin searches for requests to send from all nodes on the network.
The repeater, or hub, is responsible for noting all addresses, links and end nodes and verifying that they are all functioning. In the demand priority (DP) access method, two computers can cause contention by transmitting at exactly the same time. If the repeater or hub receives two requests at the same time, the highest-priority request is serviced first. If the two requests are of the same priority, both are serviced by alternating between the two of them. In a DP network, computers can receive and transmit at the same time because of the cabling scheme defined for this access method, i.e. four pairs of wires.

d) Hub Polling
This is a means by which a central controlling device may regulate the opportunity for machines to transfer data on the LAN. Several devices attached to a controller unit (hub) are individually given permission to access the LAN. The polled station then has exclusive use of the network to transfer data for a set period of time. When that time expires, or if the device is through transmitting, the controller routes the same request for transmission to the next station.

Advantages of hub polling
1) All devices are given access at a predetermined time.
2) Polling systems are highly predictable in their behaviour.

Disadvantages of hub polling
1) Leads to inefficient use of the LAN's capacity at low traffic levels.
2) High overhead in sending out requests to transmit.
3) Wastes processing time in cases where a request is sent to a station that has nothing to transmit.

e) The slotted ring

Several small data packets are continuously transmitted around the ring. Each packet has space for inserting a source address, a destination address and a fixed amount of data. Thus each packet is a data carrier containing a slot into which data can be put. The start of the packet contains a full/empty bit, and nodes wishing to transmit data must:
i) Wait for a packet with an empty slot.
ii) Set the full/empty bit to full.
iii) Insert the source address, destination address and data to be transmitted.
iv) Transmit the packet.
A node receiving the packet will:
i) Read out the data.
ii) Set a response bit in the packet.
iii) Transmit the packet.
When the packet gets back to the original transmitting node with the response bit set, the data is checked for accuracy, and then the packet is marked free and transmitted on around the ring.

Communications Aids
These are devices that assist in the transfer of data between any 2 communicating DTEs, e.g. modems, selectors, acoustic couplers etc. Communications aids are also known as communications control devices. The communications aids together with the communications channel give what is called a subnet, as shown below:

The subnet connected to the DTEs (computers and terminals) gives a network.

Modem
The acronym MODEM stands for modulator-demodulator. It is a device that is capable of translating digital signals into analogue signals for transmission over a telecommunications link, and vice versa. Computers generally understand binary notation, but the readily available communications channel, the telephone line, accepts analogue signals only. Hence the need for a modem to translate signals before placing them on the telephone line, and again after transmission over the telephone link, as shown below:

Acoustic Coupler
Functions the same way as the modem; the difference is in its physical design. An acoustic coupler has cup-like enclosures in which a telephone handset's mouthpiece and earpiece are enclosed to prevent noise. Data moves from the computer in its digital format to the acoustic coupler through the handset. The acoustic coupler does the digital-to-analogue signal conversion and forwards the analogue signal into the network channel. The opposite is true for incoming signals.

Multiplexer (MUX)
This is a device that is capable of combining several low-speed data channels into one high-speed data channel, and vice versa. It serves the connected DTEs through a time-sharing technique, as shown below:

The multiplexer goes to the first computer and picks up a small quantity of data in a fraction of a second. It then leaves this computer and performs the same operation on the next computer, and so on, in round-robin fashion, until the last computer in the sequence has been served. The process then restarts and continues until all data has been sent.

Concentrator
Has the same function as the multiplexer in that it combines several low-speed data channels into one high-speed data channel. However, the concentrator first establishes which computers want to send data and then multiplexes those computers only. The term concentrator comes from the fact that the device collects and stores (concentrates) the data to be sent if the channel is busy; it then releases the data over the line when the line becomes free.

Selector
This is a device that determines which computer gains access to a shared resource when there are 2 or more computers contending for the resource, e.g. when there are 3 computers sharing a printer, as shown below:

All computers in the set-up are connected to the selector, and there is only one cable connection from the selector to the printer. Because only one computer can use the printer at a time, a user who wishes to print turns the knob on the selector to his slot number so that he has exclusive use of the printer. When he is through, the next user can select his slot number for exclusive use of the printer, and so on, until every user has had his chance.

Networking
Refers to the connection of 2 or more computers for the purpose of sharing data.

Merits of networking
a) Cost cutting as a result of sharing data and peripherals.
b) Timely acquisition of data from any of the computers connected to the network.
c) Standardisation of applications.
d) Very efficient communication and scheduling.

Demerits of networking
a) Facilitates hacking.
b) Facilitates the spread of viruses.

Basic network components
1) Servers - the central computers in the network that store and provide shared resources such as data or software to network users.
2) Clients - the input/output hardware devices at the other end of a communication circuit.
3) Medium - that which is used to physically interconnect computers; the pathway through which the data travels.
4) Resources - files, printers, or other items to be used by network users.

Factors affecting network choice
1) Level of security required.
2) Type of business to implement the network.
3) Size of the organisation implementing the network.
4) Level of administrative support available.
5) Amount of network traffic.
6) Needs of the network users.
7) Network budget.

COMPUTER NETWORKS
A computer network is an interconnection of 2 or more autonomous computers for the purposes of sharing data and/or assisting each other in processing. A single computer with many terminals is not a network. The main types of network are:
a) LAN (Local Area Network)
b) MAN (Metropolitan Area Network)
c) WAN (Wide Area Network) / LHN (Long Haul Network)
d) VAN (Value Added Network)

LAN
An interconnection of 2 or more autonomous computers in a limited geographical area such as a room, building or campus.

MAN
An interconnection of 2 or more autonomous computers geographically spaced between different buildings within the same town or city.
WAN
The interconnection of 2 or more autonomous computers that are geographically widely spaced.

VAN
This is a network owned by a proprietor, offering services the user is prepared to pay for. Users pay a subscription in order to enjoy the services offered by the network owner, e.g. Tel-One.

LAN Topologies
A topology is the physical layout of computers, peripheral devices and connecting cables in a networked environment. The following are the various types of network topology and their descriptions:

Ring Network

In a ring network the computers are connected such that each computer is connected to the two computers adjacent to it, one on the left and one on the right. The structure of a ring network is as represented below. Data from one computer is passed on to the next computer in the sequence, and so on, until it reaches the destination computer. Because data passes through every computer in this way, privacy is almost non-existent unless the data is destined for the next computer. A ring network can be unidirectional or bi-directional. In a unidirectional ring, data flows in only one direction; this means it takes longer to send data to computers behind the sending computer. In a bi-directional ring, data is sent the shortest way possible (whether to the left or to the right), and should a breakdown be experienced at one terminal, data will flow in the opposite direction.

Token Ring Network
The structure of a token ring is similar to that of any other ring network. A special electronic signal known as a token circulates continuously in the ring. For any computer to be able to send data, it has to first get hold of the token as it passes by. The sending computer releases the token back into the network when it is through. If the token signal is lost, the network becomes non-operational.
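The token-holding rule described above can be sketched as a toy simulation; the node names and message queues are invented for illustration:

```python
# Toy token-ring sketch: a single token circulates in ring order, and only
# the node currently holding a free token may transmit one message.

def token_ring_round(nodes, queues):
    """Pass the token once around the ring, delivering queued messages."""
    delivered = []
    for node in nodes:                     # the token visits nodes in ring order
        if queues.get(node):               # this node holds the free token
            msg = queues[node].pop(0)      # mark the token busy, attach data
            delivered.append((node, msg))  # the frame travels around the ring
            # ...after which the originator strips the data and frees the token
    return delivered

queues = {"A": ["hello"], "B": [], "C": ["hi"]}
sent = token_ring_round(["A", "B", "C"], queues)
```

Because only the token holder transmits, collisions cannot occur, which is the key contrast with CSMA/CD above.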

Star Network

A configuration in which all computers are connected to a central computer, also known as the hub or host computer. In a star network:
- All computers are connected to a central computer (the host).
- Communication between any two computers is done via the host, so there is a lack of privacy, as the operator at the hub can access all passing data.
- Each computer has a direct connection to the host, so the number of computers in the network is limited by the number of ports on the host computer.
- If the central computer breaks down, then there is no network.

Mesh Network

A mesh network is one in which each computer in the network is directly connected to every other computer within the same network, thus establishing full connectivity between computers. Messages have multiple routes through which they can be transmitted from sender to receiver. This full connectivity has the advantage that if one computer breaks down, communication is still possible between the other computers. However, the advantage comes at a considerable cost due to the cabling required. It is not ideal for networking computers that do not frequently exchange data, and in practice it is only viable for small LAN set-ups.
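The cabling cost mentioned above grows quadratically: a full mesh of n computers needs a separate link for every pair, i.e. n(n-1)/2 links, which is why mesh only suits small set-ups:

```python
# Full-mesh cabling count: every pair of computers needs its own link.

def mesh_links(n: int) -> int:
    """Number of links in a full mesh of n computers: n(n-1)/2."""
    return n * (n - 1) // 2

small = mesh_links(5)     # a small LAN: 10 links is manageable
large = mesh_links(50)    # 1225 links: why a large mesh rarely makes sense
```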
Multi-drop (key or bus) Network

This network topology is applied only to Local Area Networks (LANs) and makes use of a special computer called a server. It is also very easy to set up. It consists of one strand of cable as its core (the communications channel); the channel may be, for example, coaxial cable, optical fibre, etc. Computers and other data terminal equipment in the network are connected to the communications channel by means of tapping. Only one computer is allowed to send data at any given time, because the network applies a technique known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD) to send data. Because of collisions, this type of network becomes more and more undesirable as the number of computers in the network increases. Data packets are sent into the channel and flow in both directions. Each packet carries the addresses of the sending computer and the receiving computer, to show where it is coming from and to which computer it is being sent. Every computer in the network can see the data packets, which naturally means that data is accessible to all computers in the network.

Hybrid Network

This configuration has at least two networks, which are connected to each other through one computer. The computer connecting the two networks may be a host computer, but this is not always the case. If the connecting computer breaks down, then there is no communication between the 2 networks.

The Internet
The Internet is a worldwide connection of networks that intercommunicate and exchange information with each other. The Internet started with ARPANET (Advanced Research Projects Agency Network), an experimental network designed in the USA by the Defence department to support US military research during the Cold War era, in early 1969. Its goal was to try to interconnect

ARPANET with other radio and satellite networks so that if any part of the interconnection was bombed, the network would essentially continue to function. In 1983 the original network split into 2 parts, one dedicated to military installations (called Milnet) and the other to university research centres (the Internet, which grew into what we know and use today). At around the time the Internet was developing, Ethernet was also developing fast; it matured and was used to connect ARPANET to other networks. To date the Internet has found a wide range of applications, some of which follow below:

1) E-mail on the Internet
Electronic mail is transferred among users on the Internet through a mechanism called SMTP (Simple Mail Transfer Protocol). The protocol specifies the commands necessary to send mail on the Internet, and is used with a standard that specifies the following general structure of a mail message:
- a group of header lines
- a blank line
- the body of the message
Internet e-mail addresses have 2 parts, the individual user's account address and the address of the computer, as follows: user@computer.domain. The @ symbol separates the user's account from the computer address, and the period separates the name of the computer from its domain. Internet addresses are strictly regulated to make sure that no two users have the same address. Each domain has a board that assigns addresses within its domain.

2) Remote Login
The Internet allows users on one computer to log in to other computers on the Internet. The command used to do this is called Telnet. It is the Internet standard protocol for the remote terminal connection service. Telnet is an application-level protocol that makes a terminal on one computer appear to be directly attached to a remote computer on the inter-network (terminal emulation). In order to log in to another computer on the Internet, you must be on a TCP/IP network that gateways to the Internet.
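The user@computer.domain structure described above can be sketched as follows; the address used is made up for the example:

```python
# Split an e-mail address into its account and machine parts, following
# the user@computer.domain layout described in the text.

def parse_address(address: str):
    user, _, host = address.partition("@")      # '@' separates user from machine
    computer, _, domain = host.partition(".")   # first '.' ends the computer name
    return {"user": user, "computer": computer, "domain": domain}

parts = parse_address("jdoe@mail.example.ac.zw")
```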
In order to access the remote computer you need to know the account name and password of an authorised user. Telnet can be useful, especially to people who are always travelling but would like to be in constant touch with data at their office. However, it can pose a great security threat, because anyone on the Internet can attempt to log in to your personal account and use it as they wish.

3) Information Resources
There is a lot of information on the Internet, but it is quite difficult to quickly locate and retrieve what one needs; the following are ways of finding information on the Internet.

Internet Service Provider (ISP)
Refers to a company or group of entrepreneurs that provides access to the Internet on a pay-per-use basis. It is a gateway for individuals to get onto the Internet. ISPs already have a dedicated connection to the Internet, and allow users to connect to their computers via a modem to gain access to the Internet. ISPs also serve as an information repository; they are frequently used by businesses and individuals to warehouse information that they want to make available to the rest of the world.

Accessories needed to get connected to the Internet
1) Computer - a fairly new computer with considerable disk space for storing the browser (software used to connect to and navigate the Internet) and a fast processor will do.
2) Modem - performs digital-to-analogue signal conversions, and vice versa, to enable communication over the high-speed telephone line upon which the Internet rides.
3) Telephone line - provides the physical link that connects the computer, through the modem, to the ISP, thus giving access to the Internet.
4) Software

These are special instructions written to facilitate access to the Internet as well as navigation of the information super-highway.
5) Dial-up or SLIP/PPP connection
A dial-up connection only facilitates access to software and Internet services provided by your ISP. SLIP stands for Serial Line Internet Protocol and PPP stands for Point-to-Point Protocol; connecting to the Internet using this software gives one access to all the services on the Internet, regardless of whether they are offered by the ISP or not. For the duration of that connection your computer is assigned its own Internet address (IP address) and will be communicating directly with TCP/IP.

A dial-up connection is a connection from your computer to a host computer over standard telephone lines. Unlike a dedicated line, you must dial the host computer to establish a connection. A leased line is a dedicated telephone line that is exclusively rented and is always active (connected). It is used by businesses to connect geographically distant offices. The primary factors affecting the monthly fee are the distance between the end points and the speed of the circuit. Because the connection doesn't carry anybody else's communications, the carrier can assure a certain level of quality.

Domain Name System (DNS)
It is the method by which the thousands of separate and diverse networks linked to the Internet are mapped. The DNS is essentially a collection of large databases, which are used by computers on the Internet to locate other Internet computers.

World Wide Web (WWW)
The WWW, or simply the Web, is a technology that is used to navigate the Internet. Most of the information on the Internet is text based but not organised to suit anyone's needs. WWW browsers display the Internet in what is known as a graphical user interface (GUI) environment. By simply clicking a mouse button or touching a key, a large number of things can happen.
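The name-to-address mapping that the DNS performs can be exercised directly from Python's standard library. A minimal sketch ("localhost" is used so the example works without network access; any registered hostname could be substituted):

```python
import socket

# Ask the resolver (which consults the DNS for non-local names)
# to map a hostname to its numerical IP address.
address = socket.gethostbyname("localhost")
print(address)  # e.g. 127.0.0.1
```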
The Web provides a graphical user interface and enables the display of rich graphical images, pictures etc. The Web was first developed in 1990 by Tim Berners-Lee. The strength of the Web lies in hypertext technology. Hypertext is a method of inserting links to other documents in a document, i.e. it is text with pointers to other text. Hypermedia is a superset of hypertext. The advantage of hypertext is that in a hypertext document, if you want more information about a particular subject mentioned, you usually just click on it and you will be moved to further details. Hypertext documents are associative, not linear. This is made possible by the incorporation of links in the text, which enable users to jump to another part of the document containing related information and then later return to where they left off.

The Web has 2 major components, a Web browser and a Web server. To access the WWW you have to run a browser program on your computer. A browser is application software that is capable of interpreting the links embedded in on-line documents and accessing the related documents as required. The documents that browsers display are hypertext. The WWW is based on the client/server model. Users use Web browsers (the client software). The WWW allows you to access Gopher, WAIS, FTP, and HTTP (Hyper Text Transfer Protocol) servers. To use a browser to access a Web server you must enter the server's address, or URL (Uniform Resource Locator). All Web addresses begin with http://. A Web server stores information in a series of text files called pages. A page is stored on a particular host machine but can be accessed by Web clients throughout the world. Pages have unique addresses and can therefore be retrieved explicitly, or they can be cross-referenced from others by means of dynamic hypertext links. The text files, or pages, use a structured language called Hyper Text Mark-up Language (HTML) to store their information.
HTML enables the author of a page to do the following:
- Define different typestyles, sizes, titles and headings for the text
- Define links to other pages that may be stored on the same Web server or on any Web server anywhere on the Internet.
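The hypertext links embedded in an HTML page are what a browser follows when you click. How such links are picked out of a page can be sketched with Python's standard html.parser module (the page content and URL below are illustrative):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the targets of <a href="..."> hypertext links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags carry the link target in their href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<h1>Networking</h1><a href="http://example.com/topologies.txt">Topologies</a>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['http://example.com/topologies.txt']
```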

The URL
It is a standard shorthand method of referring to documents or other resources available on the Internet. Every document or service on the WWW has a URL, which is interpreted by Web browsers when you instruct them to retrieve documents, e.g. http://www.yahoo.netguru.com/networking/topologies.txt. This means that a text document called topologies.txt resides on the computer www.yahoo.netguru.com in the folder networking.

Relationship between Business and Technology
A business is an organisation which uses input in its line of operation to produce output. Technology refers to techniques and equipment, such as computers and related devices, that enable fast and efficient processing of data into information and make information manipulation easier. Hence, a business can use technology to enhance operations by improving the efficiency and quality of service delivery. The higher the level of technological advancement, the bigger the business can grow, because of the efficiency of e-commerce.

Telecommunications Applications
a) Voice Mail
It is technology that enables a user's spoken message to be recorded in a voice-mail box for later retrieval by the intended recipient. It is accessed by a telephone user and is cheap. Although there is no immediate response to the message, voice mail enables the recipient to listen to the message at his convenience, thus avoiding disturbances if he is busy.
b) Electronic Data Interchange (EDI)
It is a computer-to-computer exchange of data in which computers of different companies communicate directly with each other. This is a more efficient way of inter-company communication and replaces paper communication such as writing memos, letters etc. Electronic Funds Transfer (EFT) is fully established in EDI. EFT is the transfer of money from one account to another through electronic means.
c) Facsimile
This is technology that can be viewed as long-distance photocopying, in that the document to be sent is passed through the sending fax machine and reproduced by the receiving fax machine. It is very reliable and is fast when duplicates of a document have to be exchanged between companies. This technology has been broadly embraced by businesses.
d) E-mail
This is technology in which messages are communicated by electronic means. Incoming and outgoing messages are filed electronically, thus making communication and message handling easy.
e) E-commerce
Refers to the use of technology such as computers to automate the advertising, buying and selling of goods and services. The payments are made electronically through EFT. E-commerce has the potential to transform a local business into a global distributor and to reduce staff costs, because business is conducted electronically.
f) Internet
This is a global network of computers. It is often loosely referred to as the WWW (World Wide Web), although strictly the Web is a service that runs on the Internet. Information can be disseminated to a broad base of clients or potential clients over the Internet. Hence, it enhances marketing by enabling clients to easily access information about the products and services they need. It is also vital for e-commerce.

g) Intranet
This is a private network within an organisation. It uses the existing company network infrastructure, Internet communication standards and the software development tools of the Internet, and is accessed by company employees only. It is vital for making business decisions and sharing opinions while maintaining business secrecy.
h) Extranet
Refers to an intranet that is partially accessible to authorised outsiders. Whereas an intranet resides behind a firewall and is accessible only to people who are members of the same company or organisation, an extranet provides various levels of accessibility to outsiders with valid usernames and passwords. One's identity determines which parts of the extranet one can view.
i) Telecommuting
Refers to the practice of using telecommunication technologies to facilitate work at a site away from the traditional office location and environment.
j) Computer conferencing
Refers to an ongoing computer conversation, via text, with others in different locations. It can be done in real time, so that messages appear as they are being keyed, or it can be asynchronous, meaning the complete message is keyed and then stored for later use by the receiver or sender.
k) Video conferencing
Refers to communicating from separate geographic locations in which the participants are able to see and hear each other in real time. It enables the transfer of video and audio data to multiple locations and, with the use of a whiteboard, the transfer of graphics and data.
l) Audio conferencing
Refers to a voice-only connection of more than 2 sites using standard telephone lines.
m) Desktop conferencing
A form of video conferencing involving a small video camera mounted on top of a desktop computer VDU. Individuals or small groups of people can see and hear each other as they share graphics and data on the computer screen as a sort of electronic whiteboard.
n) Virtual office
It is also known as the mobile office.
It is a type of telecommuting in which workers are equipped with the tools, technology and skills to perform their jobs from anywhere they may be required to be, e.g. at home, at a customer's location etc.
o) Wireless connection
These are radio-based systems that allow transmission of information without a physical connection.
p) Viewdata / Videotext
An interactive information retrieval service which enables a page of data to be transmitted in one second.

The ISO-OSI Model
ISO stands for the International Organisation for Standardisation. It is a world body that sets standards for the manufacture of products that are to be used internationally. OSI stands for Open System Interconnection. An open system is one that can carry out all communicating processes for the applications it serves in a standard, agreed way, so that it can work with any other open systems running applications that might need to intercommunicate.

Communication Architecture
Architecture refers to a structure, design or orderly arrangement of parts belonging to some entity that permits the whole entity to be perceived as distinct from another entity within the same entity class. A system's architecture is what permits a system to be classified into a certain category based upon identifiable attributes. Communication architecture concerns the hardware and software structure that implements the communications function. General communication architectures are usually viewed as a hierarchy of layers, each with its own protocol (a set of rules governing the exchange of data between 2 communicating stations) to accomplish its functions. Within a given communication architecture there may be several protocols associated with a single layer. Such a set of protocols is called a suite or stack. The best-known architectures are:
1) The Open System Interconnection model (OSI)
2) IBM's Systems Network Architecture (SNA)
3) The US Defence Department's Transmission Control Protocol / Internet Protocol architecture (TCP/IP)

The OSI model was introduced as a network model that would help vendors create inter-operable (open) network implementations. The OSI reference model divides the problem of moving information between 2 devices into seven smaller and more manageable problems, each of which is solved by a layer of the model. The lower 2 layers are implemented with hardware and software, whilst the upper 5 layers are generally implemented in software.

7) Application Layer
It is the layer closest to the user. It is the end user's access to the network. It does not provide services to any other OSI layer.
It performs the following functions:
a) Identifies and establishes the availability of intended communication partners
b) Synchronises co-operating applications
c) Establishes agreement on procedures for error recovery and control of data integrity
d) Determines whether there are sufficient resources for the intended communication
e) Network monitoring
f) Remote system initiation and termination
g) Application diagnostics
h) Making the network transparent to users

6) Presentation Layer
Ensures that information sent by the application layer of one system will be readable by the application layer of another system. It deals with the rules for presenting information (i.e. the syntax) by translating between multiple presentation formats, e.g. one system might use a 7-bit ASCII code word for each numerical digit while the other uses BCD (binary coded decimal). Its functions include:
a) Negotiating data transfer syntax for the application layer
b) Displaying, formatting and editing user inputs and outputs
c) Syntax conversion

5) Session Layer
It is concerned with the orderly transfer of data by deciding on the way data is to be interchanged, e.g. full duplex, half duplex, serial, synchronised data transfer etc. Its functions include the following:
a) Establishes, manages and terminates sessions between applications
b) Provides synchronisation points

4) Transport Layer
The boundary between the session layer and the transport layer can be taken to be the boundary between upper-layer protocols and lower-layer protocols. Whereas the application, presentation

and session layers are concerned with application-specific issues, the lower 4 layers concentrate on data transport issues. The layer provides mechanisms for the following:
a) Establishment, maintenance and termination of virtual circuits
b) Transport fault detection and recovery
c) Information flow control (to prevent one system from flooding another with data)
d) Generating the receiver's address
e) Breaking a large data stream into packets if required
f) Ensuring all packets have been received and removing duplicate packets
g) Ensuring cost-effective data transmission by multiplexing a number of transport connections on one network layer connection
The transport layer deals with end-to-end issues and hence is sometimes called the host-to-host or end-to-end layer.

3) Network Layer
It sets up the communication path with the maximum throughput and the transmission delay needed by the transport layer, as well as providing addressing, relaying and routing functions to set up the communication path between the communicating end systems. It also controls the operation of the combined layers 1, 2 and 3, which are sometimes called the sub-network.

2) Data Link Layer
Provides reliable transit of data across a physical link between 2 nodes by detecting and correcting bit errors on transmission channels between adjacent systems communicating with each other.
Its functions include the following:
a) It is concerned with physical rather than logical addressing
b) Network discipline
c) Line discipline (how end systems will use network links)
d) Error notification
e) Ordered delivery of frames
f) Flow control

The DLL is divided into 2 sub-layers, namely:
i) The medium access control (MAC) sub-layer
- Performs most of the DLL functions
- Provides shared access to the physical layer for the computers' network adapter cards
ii) The logical link control (LLC) sub-layer
- Is an interface between the MAC sub-layer and layer 3 software
- Enables the software and hardware in the MAC sub-layer to be separated from the logical functions in the LLC sub-layer; this makes it simpler to change the MAC hardware and software without affecting the software in layer 3

1) Physical Layer
It is the lowest OSI layer. It provides for the following:
a) Electrical/optical, mechanical, procedural and functional specifications for activating, maintaining and deactivating the physical link between end systems.
b) Voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, physical connectors etc. are defined by physical layer specifications.

TCP/IP
TCP/IP is a suite of communication protocols which was developed by the US Defence Department for the purpose of interconnecting networks developed by different vendors (creating a network of networks). TCP/IP also provides a routable enterprise networking protocol and access to the worldwide Internet and the resources associated with it.
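Layered architectures such as OSI and TCP/IP move data by encapsulation: on the sending side each layer wraps the data handed down from the layer above with its own header, and on the receiving side each layer strips its header off in reverse order. A simplified sketch (the layer names and bracketed header strings are purely illustrative, not real protocol headers):

```python
# Layers in sending order, top of the stack first.
layers = ["application", "transport", "network", "data link"]

def send(data):
    # Going down the stack: each layer prepends its own header.
    for layer in layers:
        data = f"[{layer}]" + data
    return data

def receive(frame):
    # Going up the stack: each layer strips the outermost header.
    for layer in reversed(layers):
        header = f"[{layer}]"
        assert frame.startswith(header), "header mismatch"
        frame = frame[len(header):]
    return frame

frame = send("hello")
print(frame)           # [data link][network][transport][application]hello
print(receive(frame))  # hello
```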

a) TCP
TCP stands for Transmission Control Protocol. TCP is a connection-oriented protocol that sends data as an unstructured stream of bytes. By using sequence numbers and acknowledgement messages, TCP can provide a sending node with delivery information about packets transmitted to a destination node. If the sending computer is transmitting too fast for the receiver, TCP can employ flow control mechanisms to bring the data transfer rate to the required levels. TCP adds support to detect errors or data loss and to trigger retransmission until data is correctly and completely received. Because TCP calls on IP's services, these processes can exist on machines on different networks.

Most systems that support TCP/IP provide a software interface to the TCP functions. This interface is called an Application Program Interface (API) and varies from machine to machine. The interface does the following:
a) Sets up sessions with cooperating processes
b) Listens for session requests
c) Sends and receives data
d) Closes sessions

Once a session has been established, the upper-level application sends continuous streams of data through TCP for delivery to its peer process. TCP puts this data, along with the necessary control and addressing data, into units called segments, which are then passed to a lower-level protocol, usually IP. IP puts the segments into datagrams and sends them across the internetwork. On the receiving side, TCP checks for errors, acknowledges error-free segments, and reassembles the segments for delivery to the upper-layer application. TCP maintains data transmission reliability by using a positive acknowledgement with retransmission (PAR) mechanism: a sending TCP retransmits a segment at timed intervals until a positive acknowledgement is received. TCP uses a checksum to detect segments that may have been damaged in transit. TCP and IP have different checksums: TCP's checksum verifies a segment; IP's checksum verifies its header.
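The API operations listed above (set up a session, listen for session requests, send and receive data, close the session) correspond directly to the Berkeley socket interface found on most systems. A minimal sketch of a TCP session between two endpoints on one machine (the message contents are illustrative):

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()      # listen for a session request
    data = conn.recv(1024)           # receive data from the peer
    conn.sendall(data.upper())       # send a reply
    conn.close()                     # close the session

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # set up a session with the peer
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)  # b'HELLO'
client.close()
t.join()
```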
TCP maximises reliability and efficiency by making use of a sliding window. Sliding window protocols allow the sender to transmit multiple packets before waiting for an acknowledgement. As the acknowledgement for each packet sent is received, the window moves forward and a new packet can be sent. The maximum number of packets that can be sent before an acknowledgement has been received is called the window size. TCP has a flow control mechanism that further enhances reliability. This mechanism allows the receiving end to specify how much data it can receive at the present time. When the receiving end sends an acknowledgement, it also advertises how much data it is prepared to accept on the next transmission.

b) IP
IP stands for Internet Protocol and it is a network layer protocol. IP is responsible for internetwork routing and it provides fragmentation and reassembly of information units called datagrams for transmission over networks with different maximum data unit sizes. IP is responsible for moving a packet of data from node to node. IP forwards each packet based on a four-byte destination address called the IP number. IP is often called a connectionless delivery system because it routes each datagram separately, i.e. each datagram in the sequence may, or may not, travel over the same path to the same destination. The IP service makes a best-effort attempt to deliver all datagrams, but if some datagrams get lost due to network hardware problems or overloaded resources, higher-level protocols, not IP, will retransmit them. Each host connected to the Internet has a numerical IP address. This address is a set of four numbers separated by dots, e.g. 139.34.100.26. The Internet authorities assign numbers to different organisations. These organisations then assign groups of their numbers to departments. TCP/IP has effectively become the standard protocol used for inter-operability among many different types of computers.
This interoperability is one of the main advantages of TCP/IP. Other protocols which were developed specifically for the TCP/IP suite include:
1. SMTP (Simple Mail Transfer Protocol) - for e-mail.
2. FTP (File Transfer Protocol) - for exchanging files among computers running TCP/IP.

3. SNMP (Simple Network Management Protocol) - for network management.

DECnet
DECnet is Digital Equipment Corporation's proprietary protocol stack. It is a set of hardware and software products that implement the Digital Network Architecture (DNA). It defines communication networks over Ethernet LANs, fibre distributed data interface (FDDI) metropolitan area networks and WANs that use private or public data transmission facilities. DECnet can also use TCP/IP and OSI protocols as well as its own protocols. It is a routable protocol. Each improvement of the DECnet protocol stack is called a phase.

NetBEUI
NetBEUI stands for NetBIOS Extended User Interface. NetBEUI and NetBIOS were originally tied together and worked as one, but later some manufacturers separated out NetBIOS, the session layer protocol, so that it could be used with other routable transport protocols. NetBIOS (Network Basic Input/Output System) is an IBM session layer LAN interface that acts as an application interface to the network. It provides the tools for a program to establish a session with another program over the network. Many application programs support it. NetBEUI is a small, fast and efficient transport layer protocol that is supplied with all Microsoft products. It has a fast data transfer rate on the network cable and its small packet size is ideal for MS-DOS based computers. It is compatible with all Microsoft-based networks. However, its main problem is that it does not support routing.

NETWORK CONNECTIVITY DEVICES
REPEATERS
Data signals suffer loss of signal strength and become degraded and distorted as propagation distance increases (attenuation). This limits the length of cable that can be used without making the signal unrecognisable at the receiver. To overcome this problem, special devices called repeaters are placed at intervals along the transmission medium to boost signal strength so that it can be transmitted further.
Any signal reaching the repeater from one segment will be amplified and retransmitted to the other segment. Repeaters absorb the original signal, copy it, and retransmit it as a renewed and noise free signal along another segment of cabling. They can connect the same or different network cables.

A repeater is not intelligent and simply acts on the electrical signal. It amplifies weakened signals in both directions, removes interference (e.g. noise) and regenerates (reshapes) the signal. It is transparent to data flow, meaning that for data to pass through the repeater in a usable form from one segment to another, the packets and the Logical Link Control (LLC) protocols must be the same on each segment, and so must the medium access method. The repeater operates at the physical layer of the OSI Reference Model. Repeaters don't translate or filter anything from incoming signals, i.e. a repeater sends every bit of data as it is (even with errors) from one segment to another. They also pass broadcast storms (which occur when the number of broadcast messages approaches the network bandwidth). Broadcast storms degrade network performance.

Advantages
1) Easy to manufacture
2) Cheap
3) Reduce the number of computers per network segment by dividing the network into smaller segments

Disadvantages
1) Are dumb devices and therefore can amplify even unwanted signals
2) Cause broadcast storms when traffic increases

3) Cannot be used with segments that use different access methods
4) Can't be used in situations where data needs to be filtered

BRIDGES
Bridges connect 2 LAN segments that use the same data link and network protocol, and operate at the Data Link Layer of the OSI model. They may connect the same or different types of cables. Bridges selectively forward data packets based on an examination of the DLL addresses in the packets. Bridges enable network administrators to segment their networks transparently, i.e. without individual stations knowing whether there is a bridge separating them or not. Modern bridges provide filtering and forwarding.

Applications of bridges
1) To expand a network segment's distance
2) To facilitate an increased number of computers on the network
3) To reduce traffic bottlenecks resulting from an excessive number of attached computers
4) Linking dissimilar physical media
5) Linking different network segments, such as Ethernet and Token Ring, and forwarding packets between them

How a Bridge works
During initialisation, bridges learn about the network and the routes. Packets are passed onto other network segments based on the medium access control layer. Each time a packet gets to a bridge, the source address is read and compared to the bridge's internal routing table; if the address is not in the routing table it is stored. In this way the bridge builds up a table that identifies the segment on which each device is located, and the table is then used to determine the segment to which incoming frames should be forwarded. If the destination address is not in the routing table, the bridge forwards the packet to all networks or network segments except the one from which it was received. If the destination address is both in the routing table and on the same network segment as the source address, the bridge discards the packet. This process is called packet filtering.
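The learning, forwarding and filtering behaviour described here can be sketched as follows (the segment and MAC address names are illustrative):

```python
class LearningBridge:
    def __init__(self):
        self.table = {}  # MAC address -> segment it was last seen on

    def handle(self, src, dst, arrived_on, segments):
        """Return the list of segments the frame should be forwarded to."""
        self.table[src] = arrived_on          # learn the source's location
        known = self.table.get(dst)
        if known == arrived_on:
            return []                         # filter: destination is local
        if known is not None:
            return [known]                    # forward to the known segment
        # Unknown destination: flood to every other segment.
        return [s for s in segments if s != arrived_on]

bridge = LearningBridge()
segments = ["A", "B", "C"]
print(bridge.handle("mac1", "mac2", "A", segments))  # ['B', 'C'] (flood)
print(bridge.handle("mac2", "mac1", "B", segments))  # ['A'] (mac1 learned on A)
print(bridge.handle("mac1", "mac2", "A", segments))  # ['B'] (mac2 learned on B)
```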
If the destination address is in the routing table but not on the same network segment, the bridge determines the port associated with the address and forwards the packet to that port. The size of this table is important, especially if the network has a large number of workstations/servers. If a frame contains an address not in the table of addresses, the frame is forwarded to every other network segment.

Bridges interconnect LANs at the DLL; more specifically, they operate at the Media Access Control (MAC) sub-layer of the DLL. At the MAC layer a bridge does the following:
1) Listens to all traffic
2) Checks the source and destination address of each packet
3) Builds a routing table as information becomes available
4) Forwards packets in the following manner:
- If the destination is not listed in the routing table, the bridge forwards the packets to all segments
- If the destination is listed in the routing table, the bridge forwards the packets to that segment (unless it is the same segment as the source)

Bridges have some degree of intelligence because they learn where to forward data (building routing tables). Bridges are ideally used in environments where there are a number of well-defined workgroups, each operating more or less independently of the others, with occasional access to servers outside their localised workgroup or network segment. Bridges don't offer performance improvement when used in diverse or scattered workgroups where the majority of access occurs outside the local segment.

Advantages of bridges
1) Increase the number of attached workstations and network segments
2) The buffering of frames by bridges means that it's possible to interconnect network segments that use different medium access control protocols
3) Since bridges work at the MAC layer they are transparent to higher-level protocols.

4) Sub-division of LANs into smaller segments increases overall reliability and makes the network easier to maintain.
5) They are flexible and easily adaptable

Disadvantages
1) Frame buffering introduces delays
2) Bridges may overload during periods of high traffic
3) Bridges which combine different access control protocols require the frames to be modified before transmission onto the new segment, and this causes delays

NOTES
1) Bridges have all the features of repeaters
2) They connect 2 segments and regenerate the signal at the packet level
3) They function at the DLL
4) They are not suitable for WAN links slower than 56 kbps
5) They pass all broadcasts, possibly creating broadcast storms
6) They read the source and destination of every packet
7) They pass packets with unknown destinations
8) Bridges operate at a higher OSI layer than repeaters and therefore are more intelligent than repeaters
9) Bridges regenerate data at packet level, which means that they can send packets over long distances

ROUTERS
A router is a device that moves information across an internetwork from source to destination. Routers can connect 2 or more LANs that use the same or different data link protocols but the same network protocol. They may connect the same or different types of cable. A router is actually a special computer which is dedicated to the task of interconnecting networks. It moves information from its source to its destination regardless of the intermediate networks. Routers know only about networks, not about hosts. The primary difference between routing and bridging is that routing occurs at the network layer (layer 3) while bridging occurs at the DLL. This difference provides routing and bridging with different information to use in the process of moving data from source to destination. Routers, by virtue of operating at a higher level, are more intelligent than bridges and repeaters and hence are more expensive.
Since routers perform more processes on each message than bridges, they are slower. Routing may pass through other routers: a data frame may pass through many routers on its way to its destination.

Routers allow the logical separation of an inter-network into many networks by using an addressing scheme that distinguishes between device addresses at the data link layer and internetwork addresses at the network layer. As opposed to bridges, a router only processes messages that are specifically addressed to it. For a frame to be routed, 2 addresses are needed: the destination address and the next-router address. In order to be routed, a frame must be compatible at the OSI network layer and above. This means that routers are written for specific protocols; at the physical and data link layers this is not required.

The responsibilities of routers can be split into 2 broad categories:
1) Determination of optimal routing paths
A router uses a routing algorithm to determine the optimal paths to the destination. The algorithm initialises and maintains routing tables which contain route information such as the following:
a) Destination / next-hop association
b) Distance
c) Path quality
There are several different routing algorithms but they basically have the same attributes:
a) Optimality - The ability of the algorithm to select the best route.
b) Simplicity - The ability of the algorithm to offer its functionality efficiently, with a minimum of software and utilisation overhead. This is crucial when the software implementing the routing algorithm must run on a computer with limited physical resources.
c) Flexibility - The ability of a routing algorithm to quickly and accurately adapt to a variety of network circumstances, e.g. when an optimal route is no longer optimal, the algorithm should quickly come up with an alternative optimal route.
d) Robustness and stability - Routers should perform correctly in the face of unusual or unforeseen circumstances such as hardware failures, high load conditions etc.
2) The transmission of packets through the inter-network (switching)
In most cases a host determines that it must send a packet to another host.
Having acquired a router's address by some means, the source host sends a packet addressed specifically to the router's physical (MAC sub-layer) address, but with the protocol (network layer) address of the destination host. On determining the destination protocol address, the router determines whether or not it knows how to forward the packet to the next hop. If it does, it changes the destination physical address to that of the next hop and transmits the packet; otherwise it drops the packet. The next hop may or may not be the ultimate destination host.

Because routers can filter packets at the network level, they can be used as firewalls. A firewall is a barrier which prevents unwanted packets from either entering or leaving the network. When a router is used as a firewall, the firewall is called a packet-level firewall, because it examines packets and decides, according to filtering rules, whether to pass the packets or not. A packet firewall will discard packets coming from certain IP addresses while allowing packets whose source IP address is in a predefined list to access the network. The problem with this simple firewall is that it is easy to forge a source IP address.
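A packet-level firewall of the kind described above amounts to a rule check on each packet's source address. A sketch (the allow-list addresses are illustrative, and, as noted in the text, this check offers no protection against forged source addresses):

```python
# Illustrative allow-list of permitted source IP addresses.
ALLOWED_SOURCES = {"139.34.100.26", "10.0.0.5"}

def filter_packet(packet):
    """Pass the packet only if its source IP is on the allow-list."""
    return packet["src"] in ALLOWED_SOURCES

print(filter_packet({"src": "139.34.100.26", "data": b"ok"}))   # True
print(filter_packet({"src": "203.0.113.9",  "data": b"deny"}))  # False
```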

Advantages of routing
1) Isolate broadcast traffic, especially broadcast storms, thus putting little stress on the network.
2) Confine traffic to its addressing limits. Routers read only correctly addressed network packets; bad packets are discarded.
3) Allow better management of WAN links.
4) Allow better network management by passing information only if the network address is known, thus reducing traffic.
5) Act as a safety barrier between segments.
Disadvantages of routing
1) Relatively expensive to implement.
2) Do not accommodate all protocols.
Comparison of a bridge and a router:
BRIDGE | ROUTER
Forwards packets between networks | Forwards packets between networks
Sends data across WAN links | Sends data across WAN links
Works at the MAC sub-layer of the data link layer | Works at the network layer
Looks for the node's MAC-sublayer address in each packet | Recognizes the node's address and protocol, as well as the addresses of other routers
Recognizes only one path between networks | Can search among multiple active paths and determine the best one at that moment
Relatively faster in operation | Relatively slower in operation
Less intelligent than a router | More intelligent than a bridge

BROUTERS
A brouter combines the best qualities of both the bridge and the router: it can act like a router for one protocol and a bridge for others. Like a bridge, a brouter examines the data-link-layer addresses of all packets on the network and forwards them to any other network of the same type. At the same time, it processes any messages addressed to it by looking at the network-layer protocol to see whether the message needs to be forwarded to a network of a different data-link-layer type. A brouter therefore operates at both the data link layer and the network layer. Brouters can perform the following functions:
1) Route selected routable protocols
2) Bridge non-routable protocols
3) Deliver more cost-effective and more manageable internetworking than separate bridges and routers
GATEWAYS
Gateways are the most complex of all connectivity devices and span all the layers of the OSI model, making communication between different architectures possible. A gateway is a connectivity device that completely translates one protocol to another; the conversion process is CPU-intensive. A gateway can be a stand-alone microcomputer with several NICs and special software, a front-end processor connected to a mainframe computer, or a special circuit in the network server. Each of the three types of gateways (network-to-network, system-to-network, and system-to-system) solves a specific problem. Generally, gateways link network environments that don't use the same:
1) Communication protocols
2) Data formatting structures
3) Languages, e.g. operating systems
4) Architecture
Most gateways perform two-way translation between only two protocols and are task-specific, i.e. they are dedicated to a particular type of transfer and are often called by their particular task name, such as
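The brouter's per-frame decision described above can be sketched as follows. The protocol names are illustrative examples only; a real brouter inspects actual frame headers rather than a label.

```python
# Hypothetical set of network-layer protocols this brouter can route.
ROUTABLE_PROTOCOLS = {"IP", "IPX"}

def handle_frame(network_protocol):
    """Route frames carrying a routable network-layer protocol;
    fall back to bridging (MAC-sublayer forwarding) for the rest,
    e.g. a non-routable protocol such as NetBEUI."""
    if network_protocol in ROUTABLE_PROTOCOLS:
        return "route"    # forward using network-layer addresses
    return "bridge"       # forward using data-link-layer addresses

print(handle_frame("IP"))       # route
print(handle_frame("NetBEUI"))  # bridge
```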

SNA-to-IPX, SNA-to-TCP/IP, etc. If a third protocol is added, the gateway becomes considerably more complex. To process data, gateways perform the following steps:
1) De-capsulate the incoming data through the complete protocol stack of the originating network.
2) Encapsulate the outgoing data in the complete protocol stack of the other network to allow transmission.
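The two gateway steps can be sketched with nested structures standing in for protocol headers. The layer names here are simplified placeholders, not real SNA or TCP/IP header formats.

```python
def decapsulate(frame):
    """Step 1: strip headers through the complete incoming stack,
    unwrapping layer by layer until only the bare data remains."""
    while isinstance(frame, dict) and "payload" in frame:
        frame = frame["payload"]
    return frame

def encapsulate(data, stack):
    """Step 2: wrap the data in the outgoing stack, innermost
    layer first, so the first protocol listed ends up outermost."""
    for proto in reversed(stack):
        data = {"protocol": proto, "payload": data}
    return data

# A frame arriving on a hypothetical "SNA-like" stack...
incoming = encapsulate("ORDER-42", ["SNA-link", "SNA-path", "SNA-transmission"])
# ...is fully unwrapped, then rewrapped for a TCP/IP-style network.
outgoing = encapsulate(decapsulate(incoming), ["Ethernet", "IP", "TCP"])
```

The key point the sketch captures is that a gateway never rewrites headers in place: it removes the entire incoming stack and rebuilds the entire outgoing stack around the same payload.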
