Cleanroom software engineering process

This report defines the Cleanroom Software Engineering Reference Model (CRM), which is intended as a guide for Cleanroom project management and performance, process assessment and improvement, and technology transfer and adoption. Cleanroom software engineering is a theory-based, team-oriented process for the development and certification of high-reliability software systems under statistical quality control.

A principal objective of the Cleanroom process is development of software that exhibits zero failures in use. The Cleanroom name is borrowed from hardware cleanrooms, with their emphasis on rigorous engineering discipline and their focus on defect prevention rather than defect removal.

The specifications for each increment are created using the box structure methodology. Box structures define required system behavior and derive and connect the objects comprising a system architecture. Each box has three forms: black, state, and clear. The three forms have identical external behavior visible to a user, but their internals are increasingly detailed.

The black box defines the external, user-visible view in terms of stimuli (inputs), responses (outputs), and transition rules that map stimuli to responses.

The state box provides a view of the retained internal data (state): it maps an initial state plus a stimulus to the final state and response required to satisfy black box behavior. The clear box defines the procedural functions on state data that satisfy black box behavior, often introducing new black boxes. Each transition, from black box to state box and from state box to clear box, is verified to ensure it satisfies the required system behaviors.
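To make the three forms concrete, the following sketch renders a hypothetical bounded counter as a black box, a state box, and a clear box. The counter, its stimuli, and all names are invented for illustration and are not CRM notation.

```python
# Illustrative sketch of the three box forms for a hypothetical bounded
# counter. Names and structure are this example's own, not CRM notation.

from typing import List

# Black box: maps a stimulus *history* to a response, with no visible state.
# Stimuli are "inc" and "read"; the response is the count so far (capped at 10).
def black_box(stimulus_history: List[str]) -> int:
    return min(stimulus_history.count("inc"), 10)

# State box: the same behavior, but the history is encoded in retained data.
# Each transition maps (initial state, stimulus) -> (final state, response).
class StateBox:
    def __init__(self) -> None:
        self.count = 0  # retained state data

    def transition(self, stimulus: str) -> int:
        if stimulus == "inc":
            self.count = min(self.count + 1, 10)
        return self.count

# Clear box: the procedural design on the state data; in a real design this
# is where new, lower-level black boxes would be introduced.
class ClearBox(StateBox):
    def transition(self, stimulus: str) -> int:
        if stimulus == "inc" and self.count < 10:
            self.count += 1
        return self.count

# Verification in Cleanroom is by human proof, but the obligation can be
# illustrated by checking that the forms agree on a sample stimulus history.
history = ["inc", "inc", "read", "inc"]
sb = StateBox()
assert black_box(history) == [sb.transition(s) for s in history][-1]
```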

New black boxes are similarly refined into state boxes and clear boxes, continuing in this manner until no new black boxes are required.

Incremental development as practiced in Cleanroom provides a basis for statistical quality control of the development process. Each increment is a complete iteration of the process. As is typical in statistical process control, measures of performance in each iteration of the process are compared with pre-established standards to determine whether the process is "in control."

If the process is in control, work on the next increment continues. If the process is determined to be "out of control," i.e., the pre-established standards are not met, corrective action is taken before work proceeds. Feedback produced in each increment is used for project management and process improvement. The team examines all feedback, identifies problems, adjusts the incremental development plan if needed, and improves the overall software process as needed.
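A minimal sketch of this comparison, assuming an invented performance measure (failures found per KLOC during increment testing) and an invented control limit:

```python
# Hypothetical in-control check for one increment. The measure (failures per
# KLOC found in increment testing) and the control limit are illustrative;
# a real project would derive its standards from its own quality plan.

def increment_in_control(failures: int, kloc: float, limit: float = 2.5) -> bool:
    """Compare an increment's measured failure density to the standard."""
    return (failures / kloc) <= limit

# If in control, work on the next increment continues; otherwise the team
# investigates and corrects the process before proceeding.
if increment_in_control(failures=3, kloc=12.0):
    print("in control: continue with the next increment")
else:
    print("out of control: halt and correct the process")
```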

Developing the right system requires customer feedback throughout the development process. In incremental development, increments are executed by users in the operational environment to facilitate customer clarification of requirements. Developing the system right requires management control of resources and technical control of complexity.

In incremental development, risks to the project are assessed at planned intervals and managed through the incremental development plan. Product quality requires process control; as an iterative process of complete development cycles, incremental development provides the regular measurement points that statistical process control requires. Increment planning requires assessment of the specific circumstances of each project. The considerations are both managerial and technical, and are based on both facts and assumptions.

Following are common factors in increment planning. The common motivation behind iterative development methods is that requirements can rarely be established with certainty at the outset of a project. Under incremental development, customers provide feedback on the evolving system by direct operation of user-executable increments.

The relative clarity of requirements may influence an increment plan in two ways. Volatile requirements may be implemented in an early increment so that they can be clarified. If the user interface is not well established, for example, it is an ideal candidate for an early increment.

Alternatively, unstable requirements may be planned for later implementation, when the questions affecting them have been settled. Requirements to be settled by concurrent research are likewise candidates for later increments.

A functional usage distribution is developed as part of a top-level Cleanroom specification.

Expected usage probabilities of system functions are established from historical data and best estimates provided by customers.
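As a hedged illustration, historical usage counts might be blended with customer estimates and normalized into a functional usage distribution; the function names, counts, and weights below are all invented:

```python
# Illustrative construction of a functional usage distribution. The function
# names, historical counts, customer estimates, and blend weight are invented.

historical_counts = {"query": 8000, "update": 1500, "report": 400, "admin": 100}
customer_estimates = {"query": 0.75, "update": 0.18, "report": 0.05, "admin": 0.02}

def usage_distribution(counts, estimates, weight_history=0.5):
    """Blend normalized historical frequencies with customer estimates."""
    total = sum(counts.values())
    return {
        f: weight_history * (counts[f] / total)
           + (1 - weight_history) * estimates[f]
        for f in counts
    }

dist = usage_distribution(historical_counts, customer_estimates)
print(dist)  # "query" dominates, so it is a candidate for an early increment
```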

System functions with high expected usage probabilities will receive the greatest exposure in the field, and may therefore benefit from the greatest exposure to testing. Since increments are cumulative, the functions developed in early increments will be tested several times, i.e., retested in every subsequent increment. System functions expected to receive the greatest operational usage by customers, therefore, are candidates for early increments. Increasingly, customers are specifying formal software reliability requirements.

Reliability "sensitivities" and allocations can be calculated for subsystems, and subsystems that will have the greatest impact on total system reliability may be candidates for an early increment.

In hardware development, a standard approach is to build a minimal machine in the first iteration and enhance it in subsequent iterations until the complete machine has been built.

Incremental development of software is entirely compatible with this standard approach to hardware development. A machine must be powered on, for example, before it can be used. The software for system start-up, therefore, would likely be among the functions implemented in the first increment of an embedded software project.

In most applications there is some logical allocation of functions to increments based on relationships among the functions. In a database application, for example, an add must precede a delete. In a statistical application, data must be entered or retrieved before it can be analyzed. Although program stubs (placeholder implementations of functions deferred to later increments) can relax such orderings, these dependencies often suggest a natural sequence for the increments.

Novel or particularly complex work may pose a risk to the schedule or even to the viability of a project.

If such work is scheduled for an early increment, experience will either lend support to existing plans or point to the need to revise them. If aspects of the project are not novel or complex in absolute terms, but are novel or complex relative to the experience of the team, an early gauge of feasibility is still desirable.

The Cleanroom process emphasizes economy of effort through the use of "common services" (certified reusable components) across and within systems.

When existing components are identified as potentially reusable, the development team must evaluate the relative effort required to tailor the component for use in the new system versus developing a new component. If the evaluation favors the existing component, the team may want to use it in an early increment in order to validate its expected performance.

New common services may be desirable candidates for an early increment as well. Since common services may be used in multiple places in a system, they have a disproportionate impact on system reliability relative to other, single-instance components.

Since objects may be reusable parts, the rationale for object development in an incremental development plan follows the rationale for reusable components in general.

Risk analysis is used to determine the size and content of each increment. Following a risk assessment, the project team defines an increment plan of one or more increments. Each increment mitigates some risk of project failure. For example, if the user-interface requirements are poorly understood, then the project may be at high risk of delivering a hard-to-use product.

To mitigate this risk, the early increments would contain mostly user-interface code, but not much "function," and could be shown to selected customers. Different increment plans are used when requirements stability or performance properties are high risks.

Incremental development has many advantages over bottom-up and traditional "waterfall" life cycles.

The fundamental approach to verification espoused by Cleanroom aims to introduce mathematical reasoning, not mathematical notation, into the verification process.

The principal motivation is to provide a rigorous methodology for software development and a firm foundation for it as an engineering discipline. Mathematical verification of programs is performed using a few basic control structures and proofs that follow the rules specified in a correctness theorem. The proof strategy is divided into small parts that readily accumulate into a proof for a large software system. The method of human mathematical verification used in Cleanroom is called functional verification.

Functional verification is organized around correctness proofs, which are defined for the design constructs used in a software design. With this form of verification, the problem changes from one with an infinite number of input combinations to consider into a finite process, because the correctness theorem defines the number of conditions that must be verified for each design construct used.

It reduces software verification to ordinary mathematical reasoning about sets and functions. The objective is to develop designs in concert with their associated correctness proofs, and designs are created with the explicit aim of being easy to verify. A rule of thumb is that when a design becomes difficult to verify, it should be redone for simplicity.
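As an illustration of how the correctness theorem yields a finite set of conditions, consider verifying a small loop against its intended function. The program below is invented, but the three conditions in its comments follow the standard pattern of functional verification for a while loop:

```python
# Invented example: verify a while loop against its intended function.
# Intended function f: [x := x mod y]  (for x >= 0, y > 0)

def remainder(x: int, y: int) -> int:
    # Correctness conditions for "while p do g" in functional verification:
    #   1. Termination: does the loop terminate for every x >= 0, y > 0?
    #      Yes: x decreases by y > 0 on each iteration and stays >= 0.
    #   2. When p (x >= y) holds, does doing g (x := x - y) followed by the
    #      intended function compute f?  Yes: (x - y) mod y == x mod y.
    #   3. When p does not hold (x < y), does doing nothing compute f?
    #      Yes: for 0 <= x < y, x mod y == x.
    # These three conditions are the complete proof obligation for this
    # construct; no enumeration of test inputs is required.
    while x >= y:
        x = x - y
    return x

assert remainder(17, 5) == 17 % 5
```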

Statistical quality control is used when there are too many items to test all of them exhaustively. Instead, a statistical sample of the items is analyzed, and the quality of the whole population is assessed by extrapolation. This technique is widely used in manufacturing, where items on a production line are sampled, their quality is measured, the sample quality is extrapolated to the entire production run, and flaws are corrected if the quality is not as expected.

For software, this notion has evolved into statistical usage testing: testing the software the way the users intend to use it. This is accomplished by defining usage probability distributions that identify usage patterns and scenarios together with their probabilities of occurrence.

Test cases are then generated from the usage probability distributions. System reliability is predicted from analysis of the test results using a formal reliability model, yielding measures such as mean time to failure (MTTF).
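A sketch of the idea, with an invented usage distribution and invented interfailure data, and with the mean of the interfailure times standing in for a full reliability model:

```python
# Illustrative sketch: sample test cases from a usage probability distribution
# and estimate mean time to failure (MTTF) from observed interfailure times.
# The distribution and the failure data are invented.

import random

usage = {"query": 0.60, "update": 0.25, "report": 0.10, "admin": 0.05}

def generate_test_cases(n: int, seed: int = 0):
    """Draw n single-step uses according to the usage probabilities."""
    rng = random.Random(seed)
    functions = list(usage)
    weights = [usage[f] for f in functions]
    return rng.choices(functions, weights=weights, k=n)

tests = generate_test_cases(1000)

# Suppose certification testing observed failures after these numbers of test
# cases (invented interfailure times). The simple mean is a crude MTTF
# estimate; real certification uses more sophisticated reliability models.
interfailure_times = [120, 260, 410]
mttf_estimate = sum(interfailure_times) / len(interfailure_times)
print(f"estimated MTTF: {mttf_estimate:.0f} uses between failures")
```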

The underlying concern is that random, statistically based testing will not provide sufficient coverage to ensure that a reliable product is delivered to the customer. This concern stems from a misapprehension that "statistical" implies haphazard, large, and costly, and that critical software requirements, which may be statistically insignificant, will be overlooked or left untested.

Coverage is directly related to the robustness of the usage probability distributions that control the selection process, and it has not proven to be a problem in applications of the method to date. In one study of the level of requirements coverage achieved by statistical testing, coverage was highest for the high-level requirements, 90 percent of the subcomponent-level requirements were covered, and approximately 80 percent of all requirements were covered.

The Cleanroom method asserts that statistical usage testing is many times more efficient than traditional coverage testing at improving the reliability of software. Statistical testing tends to find errors in the order of their seriousness from the user's point of view, and so uncovers the failures that matter to users several times more effectively than testing that finds errors without regard to their seriousness.

The basis for software reliability starts with the definition of a statistical model, generally built on the observation that input data arrives at random times and with random contents. With defined initial conditions, any such fixed use is distinguishable from any other use. These uses can be assembled into a sequence, and the collection identified as a stochastic process subject to evaluation using statistical methods.

Coverage testing is anecdotal and can provide confidence only about the specific paths tested; no assessment can be made about the paths not tested. Because usage testing exercises the software the way the users intend to use it, high-frequency errors tend to be found early. For this reason, statistical usage testing is more effective at improving software reliability than coverage testing.
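A toy calculation with invented failure rates illustrates why. Suppose a system contains two defects: one that fails on 1 in 100 uses and one that fails on 1 in 100,000 uses. The combined failure rate is 1/100 + 1/100,000, about 0.01001 failures per use, an MTTF of roughly 100 uses. Fixing the frequent failure first raises the MTTF to about 100,000 uses, a thousandfold gain; fixing the rare failure first leaves the MTTF at roughly 100 uses. Usage testing, which tends to expose the frequent failure first, therefore yields far more reliability per fix.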

Coverage testing is as likely to find a rare execution failure as a frequent one. If the goal of a testing program is to maximize the expected mean time to failure, and hence the reliability of the system, a strategy that concentrates on failures that occur more frequently is more effective than one that has an equal probability of finding high- and low-frequency failures. Experimental data from projects in which both Cleanroom verification and more traditional debugging techniques were used show that the Cleanroom-verified software exhibited fewer injected errors.

Those errors were also less severe (possibly attributable to the philosophy of design simplification) and required less time to fix. Most software users do not care how many defects are in the code. They care instead about how often the software fails to meet their needs, how severe each failure is, and how long repairs take. Cleanroom testing adapts statistical quality control techniques to measure these quality characteristics, enabling targeted quality improvements.

In precision manufacturing, statistical quality control begins with a precise specification. A statistical sample of the manufactured product is measured against the specification, and the quality characteristics of that sample are used to estimate the overall quality of the manufacturing run.

The quality of a software product does not depend on variations among the physical copies of the product, but on its execution behavior. In Cleanroom, testers statistically sample these behaviors by providing appropriate inputs, and measure each execution against the specification. Data from the testing process is entered into a reliability model that predicts the quality of the software in the field. The challenge of statistical testing lies in sampling the executions.

Even a small system has an essentially infinite number of execution paths through the code. There is no chance of executing all of them, so how can they be sampled most effectively?
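One way to sample effectively, consistent with the usage-based approach described above, is to express the usage model as a Markov chain and draw execution paths from it; the states and transition probabilities below are invented for illustration:

```python
# Illustrative sketch: sample execution paths from a usage model expressed as
# a Markov chain. States and transition probabilities are invented; a real
# Cleanroom usage model is built from the specification and expected usage.

import random

# transitions[state] = list of (next_state, probability); each row sums to 1.0
transitions = {
    "start":  [("menu", 1.0)],
    "menu":   [("query", 0.6), ("update", 0.3), ("exit", 0.1)],
    "query":  [("menu", 0.8), ("exit", 0.2)],
    "update": [("menu", 0.9), ("exit", 0.1)],
}

def sample_path(rng: random.Random) -> list:
    """Walk the chain from 'start' to 'exit'; the visited states form one use."""
    path, state = ["start"], "start"
    while state != "exit":
        next_states, probs = zip(*transitions[state])
        state = rng.choices(next_states, weights=probs, k=1)[0]
        path.append(state)
    return path

rng = random.Random(42)
for path in (sample_path(rng) for _ in range(5)):
    print(" -> ".join(path))
```

Paths drawn this way automatically concentrate testing effort on the high-probability uses, which is exactly the sampling property that statistical certification relies on.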


