

International Journal of Computer Trends and Technology, May-June Issue 2011

Analysing an Integrated Technique for the Software Requirements of a Safety-Critical System Based on Software Inspection, Requirements Traceability and Fault Tolerance
V. Sree Dharinya
Assistant Professor, School of Information Technology & Engineering, VIT University, Vellore, India.

Dr. J. Vaideeswaran
Senior Professor, School of Computing Sciences, VIT University, Vellore, India.

Abstract: Requirements analysis is important for developing and implementing safety-critical systems. The system considered here is intended to reduce risk and to satisfy the safety requirements identified early in development; a single error during requirements gathering can lead to serious software faults. Software inspection supports the verification and validation process, requirements traceability is carried out to avoid risk factors that would otherwise surface later, and fault-tolerant architectures are used so that safety-critical systems behave dependably in the presence of faults. This paper proposes an integrated environment that combines these techniques for developing safety-critical systems, supported by a computer-aided tool.
Keywords: Safety critical system; software inspection; traceability; fault tolerance.

I. INTRODUCTION

The use of digital systems has increased in recent years, and the importance of software verification and validation for safety-critical systems has grown with it. Inspection is widely regarded as an effective technique for improving software quality: it can increase productivity and quality by reducing development time and removing defects, and it can be applied throughout the software life cycle of a critical system. By inspecting products as early as possible, major defects are revealed sooner and are not propagated into the final product. However, software inspection is labour-intensive, and its introduction is difficult to justify in terms of the investment of time and money. It also uses little technology of its own, so it does not fit naturally into development environments that are heavily tool-based. These problems stem mainly from a lack of understanding; when inspection is implemented incorrectly it produces poor results, and users are then discouraged from using it again. Nevertheless, software inspection is gaining in popularity.

To strengthen inspection, requirements traceability analysis is treated as a component of software inspection; this is the motivation for integrating requirements traceability analysis into the inspection process. Requirements traceability analysis identifies requirements that are either missing from, or added to, the original requirements. Applying requirements traceability to the software architecture phase helps to identify requirements that are not reflected in the architecture: stepwise refinement of the requirements into the architecture produces a natural set of mappings from which traceability can be derived.

Safety-critical systems must also be able to adapt to component failures in a systematic way; systems with this capability are known as fault-tolerant systems. Given the dependence placed upon critical information systems, an important property they must achieve is survivability. Survivability is one attribute of dependability; the other aspects include reliability, availability, and safety. The failures of concern in safety-critical systems include events such as hardware or software failure, operator error, power failure, environmental disaster, and security attacks. One approach to providing survivability is fault tolerance, which enables systems to continue to provide service in spite of the presence of faults. Fault tolerance consists of four phases: error detection, damage assessment, state restoration, and continued service. The first two phases constitute comprehensive error detection, and the latter two constitute comprehensive error recovery. Survivability is therefore intimately related to, and dependent upon, both the recognition of system faults that affect the provision of service (error detection) and the proper response to those faults in order to provide some form of continued service (error recovery).

The recognition of the system faults most likely to affect the provision of service involves high-level error detection. To promote the application of software inspection, requirements traceability, and fault tolerance, this paper identifies an integrated environment approach for requirements, an effective technique for the software requirements analysis of safety-critical systems. The approach integrates software inspection, requirements traceability, and fault tolerance in order to support systematic requirements analysis.

II. INTEGRATED TECHNIQUE FOR SOFTWARE REQUIREMENTS ANALYSIS

The integrated environment approach analysed here is an effective technique for software requirements analysis: it enables easy inspection by combining requirements traceability with effective use of fault tolerance. There are practical difficulties in putting inspection into practice: because software requirements inspection is labour-intensive, it is difficult to implement properly, and there are similar difficulties in applying fault-tolerance techniques. For more effective software requirements analysis, an integrated environment approach is therefore proposed that combines inspection, requirements traceability, and fault tolerance. Using this approach, we can support the inspection of all software development documents written in natural language. The inspection results, which are the main requirements elicited from the documents, can then be used not only for requirements traceability analysis but also for software fault analysis.

Because of the difference in domain knowledge between a developer and a requirements analyst, the analyst has difficulty in fully understanding design documents written in natural language and in formally specifying the software requirements contained in them, which makes it difficult to assure the quality of a specification. Reducing this difference in domain knowledge is therefore critical for software development. Furthermore, because requirements documents are mostly written in natural language, the analyst needs considerable time and effort to understand them and to specify them formally.

Generally, a software life cycle consists of a concept phase, a requirements phase, a design phase, an implementation phase, and a test phase; each phase is defined so that its activities are distinct from those of the other phases. Verification and validation tasks should be traceable back to the software requirements, and a critical software product should be understandable for independent evaluation and testing. Document evaluation and traceability analysis are major tasks of the concept and requirements phases. In the concept phase, the initial phase of a software development project, the needs of the user are described and evaluated through documentation. The requirements phase is the period in the software life cycle when requirements such as the functional and performance capabilities of a software product are defined and documented. The integrated environment approach focuses on the concept and requirements phases. It consists of four steps for document evaluation and requirements analysis throughout these two phases, as shown in the life-cycle view of Figure 1; a sketch of the resulting analysis pipeline is given after the figure.

Figure 1. Life-cycle view of the four-step approach to document evaluation and requirements analysis in the concept and requirements phases.
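As an illustration of how the pieces fit together, the following is a minimal Python sketch of the kind of pipeline the integrated environment implies. All names (Requirement, InspectionFinding, analyse, and so on) are hypothetical and are not taken from the paper or from any existing tool; the sketch only shows inspection output feeding both traceability analysis and fault analysis.

# Hypothetical sketch of the integrated requirements-analysis pipeline:
# inspection findings -> elicited requirements -> traceability and fault analysis.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    safety_related: bool = False

@dataclass
class InspectionFinding:
    req_id: str            # requirement affected by the finding
    classification: str    # "missing", "wrong" or "extra"
    severity: str          # "major" or "minor"
    description: str

@dataclass
class AnalysisReport:
    requirements: list = field(default_factory=list)
    untraced: list = field(default_factory=list)        # requirements not covered by the architecture
    fault_concerns: list = field(default_factory=list)  # safety requirements needing fault-tolerance attention

def analyse(requirements, traced_ids, findings):
    """Combine inspection, traceability and fault analysis over one set of requirements.
    traced_ids is the set of requirement ids covered by the architecture."""
    report = AnalysisReport(requirements=list(requirements))
    report.untraced = [r.req_id for r in requirements if r.req_id not in traced_ids]
    major = {f.req_id for f in findings if f.severity == "major"}
    report.fault_concerns = [r.req_id for r in requirements
                             if r.safety_related and r.req_id in major]
    return report

# Example usage with illustrative data.
reqs = [Requirement("R1", "Lower both half-barriers on train approach", True),
        Requirement("R2", "Log every barrier cycle")]
findings = [InspectionFinding("R1", "wrong", "major", "barrier timing unspecified")]
print(analyse(reqs, traced_ids={"R2"}, findings=findings))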

III. INSPECTION-BASED TECHNIQUE

Software requirements inspection is an attempt to improve software quality through systematic reviews. Since Fagan first defined the software inspection process, there have been many variations of software requirements inspection. The major goal is to detect as many errors as possible, not to suggest corrections or examine alternative solutions. An inspection team generally consists of four to six people, each with one of the following well-defined roles.

Moderator. The moderator is the person in overall charge of the inspection. The moderator's task is to invite suitable people to join the inspection team, to distribute source materials, and to organize and moderate the inspection meeting.

Author. An inspection requires the presence of the author of the product under inspection. The author can give invaluable help to the inspectors by answering questions about the intent of the document.

Reader. During an inspection meeting, the reader's job is to paraphrase the document under inspection out loud.

Recorder. The recorder's duty is to note all defects, along with their classification and severity. Although Fagan says this task can be carried out by the moderator, another member of the team is usually chosen because the workload, though mainly secretarial, can be quite high. The recorder is often known as the scribe.

Inspector. Any remaining team members act as inspectors. Their only duty is to look for defects in the document.

For effective use of software inspection, the inspection process consists of the following five stages.

Overview. The entire team is present during the overview. The author describes the general area of work and then gives a detailed presentation of the specific document. The document is then distributed to all members and any necessary work is assigned.

Preparation. Each team member carries out individual preparation and studies the document. Errors in the document will be found during this stage but, in general, more errors will be found at the next stage. Checklists of common types of defects can help the inspectors concentrate on the most beneficial areas of the inspection. Each inspector produces a list of comments on the document, indicating defects, omissions and ambiguities.

Inspection. The inspection meeting involves all team members. The reader paraphrases all areas of the document. During this process, inspectors can stop the reader and raise any issue until a consensus is reached. If the members agree that a particular issue is a defect, the issue is classified as missing, wrong or extra, and its severity is classified as major or minor. The meeting then moves on; no attempt is made to find a solution for the defect at this point, as that is carried out later. After the meeting, the moderator writes a report detailing the inspection and all defects, which is then passed to the author for the next stage.

Rework. During the rework stage, the author corrects all the defects found in the document and detailed in the moderator's report.

Follow-up. After the document has been corrected, the moderator ensures that all required alterations have been made, and then decides whether the document should be re-inspected, either partially or fully.

The defect classification and severity scheme used in the inspection meeting can be captured in a simple record, as sketched below.
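The following is a minimal, hypothetical Python sketch of the record the recorder might keep and the report the moderator might produce; the class and function names are illustrative only, but the classification (missing/wrong/extra) and severity (major/minor) follow the scheme described above.

# Hypothetical defect record for the inspection meeting described above.
from dataclasses import dataclass
from typing import List

CLASSIFICATIONS = {"missing", "wrong", "extra"}
SEVERITIES = {"major", "minor"}

@dataclass
class Defect:
    location: str        # section or line of the inspected document
    classification: str  # "missing", "wrong" or "extra"
    severity: str        # "major" or "minor"
    description: str

    def __post_init__(self):
        if self.classification not in CLASSIFICATIONS:
            raise ValueError(f"unknown classification: {self.classification}")
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")

def moderator_report(defects: List[Defect]) -> str:
    """Summarise the meeting for the rework stage: major defects first, then minor."""
    ordered = sorted(defects, key=lambda d: (d.severity != "major", d.location))
    lines = [f"[{d.severity.upper()}] {d.location}: {d.classification} - {d.description}"
             for d in ordered]
    return "\n".join(lines)

# Example: one major defect recorded against a requirements document (illustrative data).
defects = [Defect("SRS 3.2", "missing", "major",
                  "no requirement for barrier failure indication")]
print(moderator_report(defects))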

IV. REQUIREMENTS TRACEABILITY

Requirements traceability is the ability to trace system and software requirements through the requirements, design, code, and test materials; it is concerned with the relationships between requirements, their sources, and the system design. Requirements traceability analysis provides the following advantages, illustrated by the sketch that follows this list:
- checking completeness and identifying omissions;
- identification of the most important or riskiest paths;
- location of interactions;
- discovery of the root causes of faults and failures.
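As an illustration of how such an analysis can be mechanised, here is a minimal Python sketch of a requirement-to-design traceability check. The data and function names are hypothetical; the check simply reports requirements with no design element (possible omissions) and design elements with no originating requirement (possible additions), as discussed in Section II.

# Hypothetical traceability check: requirements vs. design elements.
def trace(requirements, design_links):
    """requirements: dict req_id -> requirement text.
       design_links: dict design_element -> list of req_ids it satisfies."""
    covered = {r for reqs in design_links.values() for r in reqs}
    missing_from_design = sorted(set(requirements) - covered)              # omissions
    added_in_design = sorted(e for e, reqs in design_links.items()
                             if not any(r in requirements for r in reqs))  # additions
    return missing_from_design, added_in_design

# Illustrative data only.
requirements = {
    "R1": "Lower both half-barriers when a train approach is detected.",
    "R2": "Set approach signals to danger if an obstacle is detected on the crossing.",
    "R3": "Start the audible warning before the barriers begin to lower.",
}
design_links = {
    "BarrierController": ["R1"],
    "WarningUnit": ["R3"],
    "EventLogger": [],   # no originating requirement: flagged as an addition
}

missing, added = trace(requirements, design_links)
print("Requirements with no design element:", missing)   # ['R2']
print("Design elements with no requirement:", added)     # ['EventLogger']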
V. FAULT-TOLERANT ARCHITECTURE

The aim of this part of the technique is to achieve fault tolerance in safety-critical systems. It is important to note that there are often many mechanisms available for accomplishing the same goal. From the literature on dependability, there are four means of providing dependable computing systems: fault avoidance, fault tolerance, fault removal, and fault forecasting. Fault avoidance prevents the occurrence of faults by construction. Fault tolerance provides service complying with the specified function in spite of the occurrence of faults. Fault removal minimizes the presence of faults by verification. Fault forecasting estimates the presence, creation, and consequences of faults by evaluation. Safety-critical information systems are too complex, too large, and too heterogeneous for it to be possible to prevent the occurrence of all faults during construction using fault avoidance; inevitably there will be faults present (as in any complex software system). Similarly, fault removal alone cannot eliminate all faults, because systems of this kind are too complex and too large to be verified completely and correctly, and fault forecasting is difficult for much the same reason. Finally, one of the major threats to survivability in these information systems is security attacks: it is practically impossible to anticipate and prevent all of the deliberate faults exploited by malicious attacks, so these faults cannot be avoided and must be tolerated if at all possible. Fault tolerance is therefore the most promising mechanism for achieving survivability requirements.

Before proceeding with a discussion of fault tolerance, it is important to understand the precise definitions of the terms fault, error, and failure. A failure is an occurrence of the delivered service deviating from the specified or expected service. An error is the cause of a failure: it is the erroneous part of the system state that leads to the failure. A fault is the cause of an error: the occurrence of a fault in the system manifests itself as an error. Given these definitions, and fault tolerance as a solution framework, Anderson and Lee identified four phases of fault tolerance: error detection, damage assessment, state restoration, and continued service. Error detection determines the presence of a fault by detecting an erroneous state in a component of the system. Damage assessment determines the extent of any damage to the system state caused by the component failure and confines that damage as far as possible. State restoration achieves recovery from the error by restoring the system to a well-defined and error-free state. Continued service, in spite of the fault that has been identified, means that either the fault must be repaired or the system must operate in some configuration where the effects of the fault no longer lead to an erroneous state. As mentioned previously, the first two phases constitute comprehensive error detection and the latter two constitute comprehensive error recovery. Given fault tolerance as a solution framework, it is important to consider in more detail the types and characteristics of the faults with which the system is concerned, and the impact they have on the fault-tolerance activities of error detection and error recovery. A sketch of the four phases as a control loop is given at the end of this section.
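As a concrete illustration, the following is a minimal Python sketch of the four phases applied to a single component. The component, the invariant checked, and all names are assumptions made for illustration; the sketch makes no claim about how the paper's tool realises these phases.

# Hypothetical sketch of the four fault-tolerance phases for one component.
class Component:
    def __init__(self, name, state):
        self.name = name
        self.state = state             # application state
        self.checkpoint = dict(state)  # last known-good state

def error_detected(component) -> bool:
    """Phase 1 - error detection: an acceptance check on the component state.
    The (illustrative) invariant is that barrier position must lie in 0..100 %."""
    return not (0 <= component.state.get("barrier_position", 0) <= 100)

def assess_damage(component) -> list:
    """Phase 2 - damage assessment: confine the damage to the offending fields."""
    return [k for k, v in component.state.items()
            if k == "barrier_position" and not (0 <= v <= 100)]

def restore_state(component, damaged_keys) -> None:
    """Phase 3 - state restoration: roll damaged fields back to the checkpoint."""
    for key in damaged_keys:
        component.state[key] = component.checkpoint[key]

def continued_service(component) -> str:
    """Phase 4 - continued service: run in a degraded but safe configuration."""
    return f"{component.name}: operating with signals held at danger until repaired"

crossing = Component("barrier_controller", {"barrier_position": 100})
crossing.state["barrier_position"] = 135   # a fault manifests as an erroneous state
if error_detected(crossing):
    damaged = assess_damage(crossing)
    restore_state(crossing, damaged)
    print(continued_service(crossing))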

VI. A CASE STUDY APPROACH: THE AUTOMATIC HALF-BARRIER RAILWAY LEVEL CROSSING SYSTEM

A level crossing is a point where a road or footpath intersects a railway at the same elevation. Level crossings are universally considered a weak point in railway safety because, unlike most other parts of a railway network, their safe operation depends to a large extent on the correct behaviour of road and footpath users. Despite this, the onus has always been on the railway operator to reduce the risk of a collision at level crossings. One reason is that a collision between a road vehicle and a train may derail the train and thereby cause deaths or injuries to passengers and train crew. Another is that railways operate under strict rules and regulations and are therefore easier to control than roads, where millions of individual drivers and pedestrians are free to act as they wish. The case study considers the inherent hazards, some of the actions taken by rail operators to mitigate risk, and the future options available to improve safety further. It is intended to provide an example of how functional analysis creates a framework for thorough examination of a system. The results are preliminary and high-level, in order to avoid distraction from details of software and hardware, but the same principles apply equally at lower levels of abstraction.

A. Description of the system

The operation of these crossings is triggered by the approach of a train. A warning sequence starts immediately and is soon followed by the lowering of barriers that extend across half of the carriageway only, allowing vehicles already on the crossing to exit without stopping. To keep the road open to traffic for as long as possible, the time between a train triggering the operation of the crossing and the train reaching the crossing is much less than the time it would take a train to brake to a stand from the maximum line speed. This means that road users are almost wholly responsible for preventing collisions.

B. The future obstacle detection system

It is desirable to have a means of instructing trains to stop when they are approaching a level crossing that is obstructed. The introduction of extra equipment for this function will have a negative impact on the overall reliability of the level crossing system if the system is configured to depend on the new equipment, because no equipment can be 100% reliable. However, the obstacle detection system will also have a positive effect on safety, because it will reduce the chance of a collision when a road vehicle becomes stuck on the crossing. The physical implementation of this detection system is not relevant at this stage; however, its functionality must be defined in order to simulate it. A simple hypothetical system has been constructed which detects when a road vehicle is stuck on the crossing and immediately sets the approach signals to danger. It is assumed that the system is intelligent enough to tell the difference between a car moving slowly and one that is stationary. Since there are no circumstances under which a car should stop on a level crossing, any stationary vehicle on the crossing can be assumed to be in trouble. Therefore it is not unreasonable to have a system which sets the signals to danger as soon as a stationary vehicle is detected on the crossing; a sketch of this behaviour is given below.
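To make the hypothetical obstacle detection behaviour concrete, here is a minimal simulation sketch in Python. The speed threshold, class names and detection cycle are assumptions made for illustration; the paper does not specify them.

# Minimal simulation of the hypothetical obstacle detection system:
# a vehicle judged stationary on the crossing sets the approach signals to danger.
STATIONARY_SPEED_KMH = 1.0   # assumed threshold distinguishing "slow" from "stuck"

class ApproachSignals:
    def __init__(self):
        self.aspect = "clear"

    def set_to_danger(self):
        self.aspect = "danger"

class ObstacleDetector:
    def __init__(self, signals: ApproachSignals):
        self.signals = signals

    def update(self, vehicle_on_crossing: bool, vehicle_speed_kmh: float):
        """Called on every detection cycle with the current sensor reading."""
        if vehicle_on_crossing and vehicle_speed_kmh < STATIONARY_SPEED_KMH:
            # Any stationary vehicle on the crossing is assumed to be in trouble.
            self.signals.set_to_danger()

signals = ApproachSignals()
detector = ObstacleDetector(signals)
detector.update(vehicle_on_crossing=True, vehicle_speed_kmh=0.0)
print(signals.aspect)   # "danger"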
VII. OBSERVATIONS AND FUTURE WORK

This work proposes an integrated environment approach for requirements, an effective technique for software requirements analysis; the technique enables easy inspection by combining requirements traceability with effective use of fault tolerance. The approach consists of two stages for effective verification and validation tasks: document evaluation and requirements analysis. Through this technique, we can minimize some of the difficulties caused by the difference in domain knowledge between the designer and the analyst. The tools for the approach can also enhance verification and validation in the software requirements phase. In the future, we plan to extend the approach to the next phase of the software life cycle, the design phase.
REFERENCES

[1] E.B. Gibbens, Report of the Public Inquiry into the Accident at Hixon Level Crossing, Her Majesty's
[2] J. Cho, J. Yoo, and S. Cha, "NuEditor: a tool suite for specification and verification of NuSCR," Proc. Second ACIS Int'l Conf. Software Engineering Research, Management and Applications (SERA 2004), 2004, pp. 298-304.
[3] M.E. Fagan, "Design and code inspections to reduce errors in program development," IBM Systems Journal, vol. 15, no. 3, pp. 182-211, 1976.
[4] H.P. Luhn, "A statistical approach to the mechanized encoding and searching of literary information," IBM Journal of Research and Development, vol. 1, no. 4, pp. 309-317, 1957.
[5] B. George and J. Haritsa, "Secure Transaction Processing in Firm Real-Time Database Systems," Proc. ACM SIGMOD Conf., 1997.
[6] W.A. Halang et al., "Measuring the Performance of Real-Time Systems," Int'l J. Time-Critical Computing Systems, vol. 18, pp. 59-68, 2000.
[7] A. Harbitter and D.A. Menasce, "The Performance of Public Key Enabled Kerberos Authentication in Mobile Computing Applications," Proc. ACM Conf. Computer and Comm. Security, 2001.
[8] M. Harchol-Balter and A. Downey, "Exploiting Process Lifetime Distributions for Load Balancing," ACM Trans. Computer Systems, vol. 3, no. 31, 1997.
[9] L. He, A. Jatvis, and D.P. Spooner, "Dynamic Scheduling of Parallel Real-Time Jobs by Modelling Spare Capabilities in Heterogeneous Clusters," Proc. Int'l Conf. Cluster Computing, pp. 2-10, Dec. 2003.
[10] C. Irvine and T. Levin, "Towards a Taxonomy and Costing Method for Security Services," Proc. 15th Ann. Computer Security Applications Conf., 1999.
[11] V. Kalogeraki, P.M. Melliar-Smith, and L.E. Moser, "Dynamic Scheduling for Soft Real-Time Distributed Object Systems," Proc. IEEE Int'l Symp. Object-Oriented Real-Time Distributed Computing, pp. 114-121, 2000.

[13] Z. Lan and P. Deshikachar, "Performance Analysis of Large-Scale Cosmology Application on Three Cluster Systems," Proc. IEEE Int'l Conf. Cluster Computing, pp. 56-63, Dec. 2003.
[14] S. Liden, "The Evolution of Flight Management Systems," Proc. IEEE/AIAA 13th Digital Avionics Systems Conf., pp. 157-169, 1995.
[15] A.K. Mok, "Fundamental Design Problems of Distributed Systems for the Hard Real-Time Environment," PhD dissertation, Massachusetts Inst. of Technology, 1983.
[16] C.L. Liu and J.W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment," J. ACM, vol. 20, no. 1, pp. 46-61, 1973.
[17] E. Nahum, S. O'Malley, H. Orman, and R. Schroeppel, "Towards High Performance Cryptographic Software," Proc. IEEE Workshop Architecture and Implementation of High Performance Comm. Subsystems, Aug. 1995.
[18] M. Pourzandi, I. Haddad, C. Levert, M. Zakrewski, and M. Dagenais, "A New Architecture for Secure Carrier-Class Clusters," Proc. IEEE Int'l Workshop Cluster Computing, 2002.
[19] M. Wegmuller, J.P. von der Weid, P. Oberson, and N. Gisin, "High resolution fiber distributed measurements with coherent OFDR," in Proc. ECOC'00, 2000, paper 11.3.4, p. 109.
[20] R.L. Winkler and W.L. Hays, Statistics: Probability, Inference, and Decision, 2nd ed. Holt, Rinehart and Winston, 1975.

AUTHOR'S PROFILE

V. Sree Dharinya obtained an MS degree from Bharathiar University, Coimbatore, in 2003 and a B.E. (EEE) degree from Bharathiar University in 2001. She is currently pursuing a Ph.D. (IPT) in Software Engineering at VIT University, Vellore, Tamil Nadu, and is working as an Assistant Professor in the School of Information Technology & Engineering, VIT University, with eight years of teaching experience.
