Methods
There are few complete software engineering process standards in the literature. The Rational Unified Process is one of them: a widely published, web-based process that is gaining acceptance among many organizations. It employs 6 best practices that have been validated by successful projects throughout the industry:
- Develop software iteratively
- Manage requirements
- Use component-based architectures
- Visually model software
- Verify software quality
- Control changes to software
History
Over 30 years ago, the NATO Science Committee formed a group to chart a roadmap for dealing with the chronic problem of managing large development projects. The term "software engineering" was coined at that meeting. Today, despite continuing developments in life-cycle modeling and design, keeping large software projects aligned with project plans remains an elusive goal.
Learn
The application of software engineering is highly non-standardized. Therefore, one cannot truly refer to software engineering as a science. Most organizations use a software engineering process tailored to suit their management style, quality requirements, and technology level. The process may define:
- design methods
- life-cycle steps
- quality guidelines
- deliverables
- verification standards
- requirements methodology
- interdepartmental interactions (e.g. sales-engineering feedback)
There are volumes of information and case studies available in the literature that extol the benefits of following a process. Nevertheless, many organizations still use an unstructured development approach.
Evaluate
One truism that always emerges from case studies of successful and failed projects is that the uniform application of a software development process across an organization is crucial. The average software development project will overshoot its schedule by half; larger projects fare worse and smaller projects better. Many factors contribute to a project's failure, and classifying and prioritizing them often results in religious wars.
There is no universal standard (yet?) for software development. Most organizations will develop a process tailored to suit their management style, quality requirements, and technology level. Constant feedback is needed to ensure that the process is working and that evolving best practices are rolled into the development process. The Software Engineering Institute (SEI) has developed the Capability Maturity Model (CMM) to assist organizations with that task. Raytheon implemented the CMM between 1988 and 1993 as a process improvement initiative, with good results: most projects came in under schedule and budget, and productivity in most programs doubled.
Methods
Software design methods can be broken down into 4 classifications: data oriented, function oriented, object oriented, and formal methods. All methods share the same goal - to transform a set of data processing requirements into a computer program.
Data oriented methods analyze a system's data entities to extract data requirements. The data requirements then drive the program design.
Function oriented methods use process decomposition to extract a system's process requirements. The process requirements then drive the program design.
Object oriented (OO) methods are a hybrid of data oriented and function oriented design. OO methods break a system down into objects, which are cohesive units of related methods and attributes.
Formal methods use a mathematical language to describe a software artifact, such as a specification, design, or source code. The language enables the application of formal proofs to assess the correctness of an artifact.
History
The terms "object" and "attribute" date back to Artificial Intelligence work performed in the 1950s. However, the precursor to modern object oriented design dates to 1966 with the introduction of encapsulation in the Simula language. Today, object oriented design is as popular as structured design was in the 1970s.
Learn
One of the shortcomings of mainstream design methods is that they are process based rather than a rigorous science - there is no mathematical proof of a design's correctness. The quality of a design is therefore primarily influenced by the designer's intuition and experience.
OO methods use an iterative, incremental, layered approach to develop an object model of a system, with different techniques used to design the elements of each layer. All OO methods share the goal of transforming a set of requirements into an object model, but they differ in terminology and in the number of layers and components in the object model. Generally, the object model can be divided into 3 layers: the domain model on top, the interface model in the middle, and the implementation model on the bottom. Design typically begins at the domain layer and proceeds down through the interface and implementation layers.
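As a rough illustration of these layers (the class names and banking domain are hypothetical, not drawn from any particular method), a minimal C++ sketch might look like this:

#include <iostream>
#include <map>
#include <string>

// Domain layer: a cohesive unit of related attributes and methods
// drawn from the problem domain.
class Account {
public:
    explicit Account(double balance = 0.0) : balance_(balance) {}
    void deposit(double amount) { balance_ += amount; }
    double balance() const { return balance_; }
private:
    double balance_;  // encapsulated attribute
};

// Interface layer: presents domain state to the outside world.
class AccountView {
public:
    void show(const Account& a) const { std::cout << "Balance: " << a.balance() << "\n"; }
};

// Implementation layer: hides concrete storage details from the layers above.
class AccountStore {
public:
    void save(const std::string& id, const Account& a) { table_[id] = a.balance(); }
private:
    std::map<std::string, double> table_;
};

int main() {
    Account account(100.0);                   // domain object
    account.deposit(25.0);
    AccountView().show(account);              // interface layer
    AccountStore().save("acct-1", account);   // implementation layer
}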
Evaluate
Data oriented design is useful for systems that process lots of data and that will be programmed using a procedural language such as COBOL (e.g. database and banking applications).
Function oriented design is useful for process intensive systems that will be programmed using a procedural language such as FORTRAN.
OO methods are useful for any system that will be programmed using an object oriented language such as C++. However, they are particularly well suited to developing large software systems.
Formal methods are considered to be an alternative to OO and classical design methods, but their use is expensive and claims of reduced errors remain unproven. However, the ability to formally validate the correctness of a software artifact is appealing and research on formal methods is ongoing.
Methods
There are several techniques for cost estimation, but the 2 basic approaches are top-down and bottom-up. In the top-down approach, cost is derived from a business analysis of the major project components. In the bottom-up approach, cost is derived by accumulating estimates from the people responsible for various components.
The primary techniques for cost estimation are:
- Ask an expert in the domain of interest
- Analogy to other projects
- "Price-to-Win" strategy
- Resource availability
- Parametric models
There are over a dozen parametric models available in the literature. A good survey of parametric techniques is: F.L. Heemstra, "Software Cost Estimation", Information and Software Technology, Vol. 34, No. 10, Oct. 1992.
History
The development of the IBM OS/360 operating system over 30 years ago was a monumental undertaking. That effort spawned many technical innovations, now widely copied. In spite of its technical successes, OS/360 was a management failure. "The product was late, it took more memory than planned, the costs were several times the estimate, and it did not perform well until several releases after the first". Dr. Brooks published many of the lessons learned from managing this project in the book "The Mythical Man-Month: Essays on Software Engineering". The book remains one of the best-regarded texts on software project management.
Today, situations similar to the OS/360 remain all too familiar.
Learn
"Cost does not scale linearly with size", is perhaps the most important principle in estimation. Barry Boehm used a wide range of project data and came up the following relationship of effort versus size:
effort = C x size^M

This is known as the Constructive Cost Model (COCOMO). C and M are always greater than 1, but their exact values vary depending upon the organization and type of project. Typical values for real-time projects utilizing very best practices are: C=3.6, M=1.2. Poor software practices can push the value of M above 1.5. A bit of plotting for various values of M should quickly reveal the value of utilizing best practices!
One consequence of the COCOMO model is that it is more cost effective to partition a project into several independent sub-projects - each with its own autonomous team. This "cheats" the exponential term in the COCOMO model. Partition strategies include domain analysis and partitioning by CPU.
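As a rough worked example of the formula and of the partitioning argument (the 100 KLOC size and the person-month unit are assumptions for illustration; the constants are the typical real-time values quoted above):

#include <cmath>
#include <iostream>

// COCOMO-style estimate: effort = C x size^M (size in KLOC).
// C = 3.6 and M = 1.2 are the "typical real-time project" values quoted
// above; they are illustrative, not calibrated for any real organization.
double effort(double kloc, double c = 3.6, double m = 1.2) {
    return c * std::pow(kloc, m);
}

int main() {
    const double size_kloc = 100.0;  // hypothetical 100 KLOC system

    double monolithic  = effort(size_kloc);            // one big project
    double partitioned = 4.0 * effort(size_kloc / 4);  // four independent sub-projects

    std::cout << "Monolithic:  " << monolithic  << " person-months\n";
    std::cout << "Partitioned: " << partitioned << " person-months\n";
    // Because M > 1, the partitioned total comes out smaller - this is the
    // "cheating" of the exponential term described above.
}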
Evaluate
The human factor is the dominant parameter in cost estimation. Cost can be driven down substantially by utilizing quality analysts and programmers. Based upon his parametric studies of 63 projects at TRW, Boehm was able to quantify the advantage of utilizing quality analysts and programmers: having a very low skill level among analysts or programmers will cost twice as much as having a very high skill level. If both analysts and programmers have a very low skill level, costs can quadruple. (Hint: training is good.)
Other factors that have a large cost impact are required reliability and complexity. Stringent reliability requirements can double costs, as can high complexity. If both high complexity and high reliability are required, costs can quadruple. (Moral: taking time to simplify pays handsome dividends.)
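A sketch of how such multiplicative cost drivers compose, reusing the effort equation above; the 2x multipliers come straight from the doubling claims in this section, and everything else (project size, driver names) is hypothetical:

#include <cmath>
#include <iostream>

// Nominal COCOMO-style effort for a hypothetical 50 KLOC project.
double nominal_effort(double kloc) { return 3.6 * std::pow(kloc, 1.2); }

int main() {
    const double base = nominal_effort(50.0);

    // Each adverse driver roughly doubles cost; drivers compose multiplicatively.
    const double very_low_analyst_skill = 2.0;
    const double very_low_programmer_skill = 2.0;

    std::cout << "Nominal effort:                   " << base << "\n";
    std::cout << "Low-skill analysts only:          " << base * very_low_analyst_skill << "\n";
    std::cout << "Low-skill analysts + programmers: "
              << base * very_low_analyst_skill * very_low_programmer_skill << "\n";
    // Two independent 2x drivers quadruple the estimate, as noted above.
}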
Methods
Several testing methods are available to expose bugs. Testing can show the presence of bugs, but it can never prove their absence. Testing can be broken down into 2 broad classifications: functional and structural. A variety of static and dynamic test methodologies are available.
Methodologies include:
- black-box
- white-box
- code reading
- integration testing
- system testing
History
In 1962, a simple software error caused the destruction of a Mariner payload on board an Atlas booster. This simple but expensive error led the Air Force to mandate independent verification on all future mission-critical space launches. The methodology of verification and validation evolved from that requirement.
Historically, testing was considered a separate phase of the software life cycle that followed coding. Modern software life cycle models eschew that view in favor of iterative testing throughout the development life cycle.
Learn
Testing can be used to show the presence of bugs, but never their absence. Testing is a powerful risk management tool because it provides early error detection and correction benefits, and technical insight into the true nature of a system's performance. Typically, a program will use several testing methodologies to address different aspects of the software product. Certification considerations often dictate the verification methods to be used.
Evaluate
Code reading is a systematic procedure for reading and understanding the operation of a program. Studies have shown that code reading detects more errors at a lower cost than any other method.
Black-box Testing is a functional test, usually based upon documented program requirements.
White-Box Testing verifies the logic, computations, functionality, and error handling of a unit. Unit tests derived from software requirements are a very effective strategy for early error detection.
Integration Testing verifies the integrity of a collection of logically related units, checking internal and external interfaces and external input/output.
System Testing is performed on the complete system to verify functional and operational requirements.
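As a minimal sketch of requirement-driven (black-box style) unit testing, assuming a hypothetical clamp function and its stated requirement (neither comes from this text):

#include <cassert>

// Hypothetical requirement: clamp(x, lo, hi) shall return lo when x < lo,
// hi when x > hi, and x otherwise.
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main() {
    // Black-box cases derived from the requirement, including the boundaries.
    assert(clamp(-5, 0, 10) == 0);   // below range
    assert(clamp(15, 0, 10) == 10);  // above range
    assert(clamp(7, 0, 10) == 7);    // inside range
    assert(clamp(0, 0, 10) == 0);    // lower boundary
    assert(clamp(10, 0, 10) == 10);  // upper boundary
    return 0;  // all checks passed
}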
Methods
Scheduling mechanisms for real-time operating systems (RTOSs) include: foreground/background, rate monotonic scheduling (RMS), and priority based scheduling.
Priority based scheduling always gives control of the CPU to the highest priority task that is ready to run. There are 2 types of priority based kernels: preemptive and non-preemptive. Most commercial RTOSs use preemptive kernels.
Rate monotonic scheduling is very common in safety critical applications where simplicity and resource constraints are paramount. Tasks are prioritized based upon frequency. RMS makes the following assumptions: all tasks are periodic, tasks do not synchronize with each other and the CPU must always execute the highest priority task.
Foreground/background based methods are used for simple systems with loose real-time constraints. Most consumer appliances, digital watches and toys use a foreground/background scheduling mechanism.
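A small sketch of rate monotonic priority assignment: the shorter a task's period, the higher its priority. The task set is invented for illustration, and the schedulability check against the classic Liu and Layland bound n(2^(1/n) - 1) is an addition not discussed in the text:

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Hypothetical periodic task: period and worst-case execution time in ms.
struct Task {
    const char* name;
    double period_ms;
    double wcet_ms;
    int priority = 0;  // assigned below; larger number = higher priority
};

int main() {
    std::vector<Task> tasks = {
        {"telemetry", 100.0, 10.0},
        {"control",    10.0,  3.0},
        {"logging",   500.0, 40.0},
    };

    // Rate monotonic assignment: sort by period, shortest period first.
    std::sort(tasks.begin(), tasks.end(),
              [](const Task& a, const Task& b) { return a.period_ms < b.period_ms; });
    int prio = static_cast<int>(tasks.size());
    for (Task& t : tasks) t.priority = prio--;

    // Quick schedulability check against the bound n(2^(1/n) - 1).
    double utilization = 0.0;
    for (const Task& t : tasks) utilization += t.wcet_ms / t.period_ms;
    const double n = static_cast<double>(tasks.size());
    const double bound = n * (std::pow(2.0, 1.0 / n) - 1.0);

    for (const Task& t : tasks)
        std::cout << t.name << ": period " << t.period_ms
                  << " ms, priority " << t.priority << "\n";
    std::cout << "Utilization " << utilization
              << (utilization <= bound ? " is within " : " exceeds ")
              << "the bound " << bound << "\n";
}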
History
RTOSs have been used in embedded systems for over 40 years. The early RTOSs had rudimentary scheduling mechanisms such as foreground/background tasking. Less expensive, more capable CPUs and memory, together with more demanding system requirements, have been driving software complexity upward. This increased complexity of real-time embedded systems has created a need for full-featured RTOSs.
In addition to addressing real-time constraints, an RTOS simplifies system design by giving some hardware abstraction and providing a mechanism for partitioning a problem in the time domain.
Learn
Some of the motivations for using an RTOS are to control the frequency of specific functions, facilitate inter-task communication, and synchronize tasks with each other and with interrupt service routines (ISRs). An RTOS also simplifies software development by partitioning a system in the time domain; application programs are typically easier to design and maintain if an RTOS is used. Real-time systems fall into 2 major categories: hard and soft. Soft real-time systems perform tasks "as fast as possible" but can miss deadlines without inducing a system failure; response times in soft real-time systems are loosely characterized as bad, better, and best. Hard real-time systems are characterized by tight, deterministic deadlines; missing a deadline in a hard real-time system results in a system failure.
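A minimal sketch of the ISR-to-task synchronization idea, written in portable C++20 (std::counting_semaphore) rather than any particular RTOS API; the "ISR" here is just a second thread, and all names are hypothetical:

#include <atomic>
#include <chrono>
#include <iostream>
#include <semaphore>
#include <thread>

// A counting semaphore lets the "ISR" signal the task once per sample,
// mirroring the task/ISR synchronization primitives an RTOS provides.
std::counting_semaphore<3> data_ready{0};
std::atomic<int> sample{0};

// Simulated interrupt service routine: produce a sample, then signal the task.
void fake_isr() {
    for (int i = 1; i <= 3; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        sample = i;               // in a real system the data itself would go
        data_ready.release();     // through a queue or mailbox; "give" the semaphore
    }
}

// Task: block until signalled, then process the sample.
void sampling_task() {
    for (int i = 0; i < 3; ++i) {
        data_ready.acquire();     // "take" the semaphore; blocks until released
        std::cout << "Processed sample " << sample << "\n";
    }
}

int main() {
    std::thread task(sampling_task);
    std::thread isr(fake_isr);
    isr.join();
    task.join();
}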
Evaluate
With today's faster and cheaper memory and CPUs, more and more embedded projects will embrace real-time, preemptive RTOSs. Commercial RTOS vendors are also starting to address some of the most stringent safety critical standards, such as RTCA/DO-178B Level A. Preemptive, priority based RTOSs provide several advantages over foreground/background, RMS, and cooperative multi-tasking schemes: flexible, deterministic scheduling; semaphores for data protection and for synchronization between tasks and interrupt service routines; and robust intertask communication using mailboxes and queues. A preemptive, priority based RTOS, however, consumes extra CPU resources, ROM, and a significant amount of stack space, so performance needs to be assessed before project plans are finalized. Real-time designers also need to be concerned with priority inversion and deadlock when using a priority based RTOS.
Methods
The common life cycle models are:
- waterfall model
- spiral model
- throwaway prototyping model
- evolutionary prototyping model
- incremental/iterative development
- reusable software model
- automated software synthesis
History
The "waterfall model", documented in 1970 by Royce was the first publicly documented life cycle model. The model was developed to help cope with the increasing complexity of aerospace products. The waterfall model followed a documentation driven paradigm. The next revolutionary new look at the development lifecycle was the "spiral model", presented by Boehm in 1985. The spiral model is focused on risk management.
Learn
A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationships between project phases, including: transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: requirements phase, design phase, implementation, integration, testing, operations and maintenance. Much of the motivation behind implementing a life cycle model is to provide structure to avoid the problems of the "undisciplined hacker".
Evaluate
The waterfall model is documentation driven and the spiral model focuses on risk management. At the completion of each life cycle step, the spiral model requires you to review and reevaluate the project's objectives, alternatives, risks and constraints.
Definition:
- (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
- (2) The study of approaches as in (1).
Definition:
- A development model depicting significant phases or activities of a software project from conception until the product is retired.
- A description of the relationships between project phases, including: transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables.
Definition:
- The disciplined process of transforming requirements into an architectural description suitably detailed to allow code construction.
- The elements of the software life cycle that lie between requirements analysis and software coding.
Definition:
- The application of sound project management, domain experience, and risk analysis to determine detailed costs and schedules.
Definition:
- Any language that can be used to define a sequence of instructions that can ultimately be processed and executed by the computer. Computer languages are generally procedural and non-literary, but subsets of English are used and understood by some fourth-generation languages.
Definition:
- The dynamic and static verification of the behavior of a program on a finite set of test cases against the specified expected behavior.
- The act of creating test cases with the intent of finding bugs.
Definition:
- "An operating system designed or optimized for the needs of a process-control environment."
Source: Microsoft Press Computer and Internet Dictionary, 4th Edition, Microsoft Corporation, 2000.