Useful Engineering Failure
One of the few popular engineering writers nowadays is Henry Petroski, professor of civil engineering at Duke University in North Carolina. His book, To Engineer is Human: The Role of Failure in Successful Design, crosses engineering disciplines in its applicability. His major assertions are worth pondering by electronics engineers.
Consequences of Failure
Petroski recounts some spectacular civil engineering failures: the Tacoma Narrows Bridge, the Kansas City Hyatt Regency Hotel elevated walkway, and the de Havilland Comet, the first commercial jet aircraft to fly across the Atlantic Ocean. Failures happen when the state of the art is pushed too far, or through irresponsible design or faulty construction techniques. Failure is commonplace and well-known not only to engineers but also to the general public, as recounted in cruder terminology on some bumper stickers. For engineers, however, failure takes on special meaning. It is part of the exercise of our craft. As Woody Allen said, if you're not failing now and then, you're playing it safe. Being "too safe" is the goal when designing life-critical devices, such as the typical works of civil, automotive, medical, and aeronautical engineers. And when less threatening devices, such as garage-door openers and software operating systems, fail too frequently, they pose a financial threat to their suppliers, either before or after the product ships. But an over-designed product is usually not cost-competitive in the marketplace, and may also suffer performance disadvantages from non-optimal design tradeoffs. Knowing how close to the edge of failure to come - and knowing where that edge is - is a mark of an experienced designer.
In evaluation engineering at Tektronix in the early 1970s, some guy had schematic diagrams of Tek oscilloscopes taped onto the walls around his desk. His job was to monitor field failure reports. When a failure occurred, he would put a red dot by the failed component. In time, these "measles charts" graphically depicted the statistical pattern of failure, and suggested where some investigation was needed. As an engineering "summer student" at the time, I was given the task of investigating a failure in a video vectorscope, a display-based instrument that draws vectors showing at a glance whether a video color signal is within specification. I quickly found that the base drive to an NPN transistor was excessively negative, causing base-emitter junction breakdown. In time, the transistor would fail. Once a failure is localized, the search for its cause is narrowed sufficiently that a sophomore student (in this case) could find it.
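The measles-chart idea is simply a tally of field-failure reports per component, with the most-dotted parts flagged for investigation. A minimal sketch of the same bookkeeping, using made-up instrument and component names purely for illustration:

```python
from collections import Counter

# Hypothetical field-failure reports: (instrument, failed component reference)
reports = [
    ("vectorscope", "Q305"), ("vectorscope", "Q305"),
    ("vectorscope", "C112"), ("oscilloscope", "Q305"),
    ("vectorscope", "Q305"),
]

# One "red dot" per report, tallied by component reference designator
dots = Counter(component for _, component in reports)

# Components with the most dots suggest where investigation is needed
for component, count in dots.most_common(3):
    print(f"{component}: {count} dots")
```

The point is the same as the wall of schematics: the chart does not explain the failure, it only concentrates attention where the statistics say to look.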
Vectorscope failures were merely inconveniences to television stations, illustrating the contrast in consequences between most electronics design and civil engineering failures. We can afford to push the limits more readily than airplane or bridge designers, for electronics failure is usually a nuisance and nothing more. Is it possible that one factor contributing to the rapid advance of electronics is that failure can be tolerated more readily by electronics engineers? It is nothing to pop a few transistors in the course of a good day's bench work, but a chemical engineer can hardly be so casual about the design of a petroleum refinery. While the single failure of a jet aircraft can trigger a long, involved study that one hopes leads to a clear identification of the failure mechanism, multiple failure events occur daily during electronics lab sessions, often leading immediately to insights into how not to design the prototype circuit, or what its catastrophic limits are.
Petroski identifies the goal of the engineer as the obviation of failure. While E.E.s do not like failure, we can tolerate more of it than our C.E. counterparts, and we do. Power electronics is one of the areas where catastrophic failure is most likely. Converters and motor drives are power-limited devices, and when they fail, they do so by distributing bits of plastic and silicon around their environs. Electrolytic capacitors are even more dramatic. Power-circuit designers are generally more caught up with avoidance of catastrophic failure than, say, test-instrument or consumer-electronics designers. Failure for most of electronics is parametric, a matter of not meeting the specified claims, and therefore poses more of a moral than a physical hazard.
The Ultimate Failure
Engineering is living with failure. We try to learn from and minimize it, but only to an extent. The removal of all weakness from design is an ideal of a good design engineer, but is not a practical goal. What would happen if we pushed elimination of failure too far in an imperfect world? Petroski illustrates the reductio ad absurdum conclusion with a poem written long ago by Oliver Wendell Holmes. "The Deacon's Masterpiece" tells of a carriage builder intent upon not having any component of a "one-hoss shay" weaker than any other. In the poem, he succeeds. The shay lasts a century and more, but it cannot last forever, and one day it fails. When it does, with no component weaker than any other, it fails spectacularly, disintegrating all at once. An electrical engineer might imagine this as a plot of survival versus time that resembles an ideal low-pass filter response: flat across a wide band, then dropping with a vertical slope. Imagine your DMM or desktop computer lasting 50 years and then failing utterly, beyond all repair.
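The one-hoss-shay picture has a standard reliability-engineering counterpart: in a Weibull model, the shape parameter β controls how tightly failures cluster in time. A minimal sketch, with an assumed characteristic life of 100 years, comparing ordinary random failures (β = 1) against a shay-like population (large β) in which nearly every unit survives to the characteristic life and then the whole fleet fails at once:

```python
import math

def survival(t, eta, beta):
    """Weibull reliability: fraction of units still working at time t,
    with characteristic life eta and shape parameter beta."""
    return math.exp(-((t / eta) ** beta))

eta = 100.0  # assumed characteristic life, in years

# beta = 1: failures spread out in time (constant hazard rate)
# beta = 50: "one-hoss shay" -- almost no early failures, then
#            the entire population disintegrates near t = eta
for beta in (1.0, 50.0):
    before = survival(90.0, eta, beta)   # just before the characteristic life
    after = survival(110.0, eta, beta)   # just after it
    print(f"beta={beta:g}: surviving at t=90: {before:.2f}, at t=110: {after:.2f}")
```

With β = 1 the survival curve sags gradually; with β = 50 it stays near 1.0 until the characteristic life and then falls off a cliff, which is the "wide bandwidth followed by a vertical slope" of the Deacon's masterpiece.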
Some components are designed for controlled failure. Large electrolytic capacitors often have a crease indented into the case or a rubber-plugged hole in the bottom to provide a weak point for easing what otherwise would be an explosive situation. Protective devices such as fuses, circuit breakers, varistors, and spark gaps have the sole function of being weak links, allowing controlled failure and minimal subsequent circuit repair. Experienced engineers know with some accuracy where the failure limits of a design are, and can manipulate failure as a controlled design variable. In pushing the concept of controlled failure, cost can be traded off for a product that fails reliably just after its warranty expires. As warranties on automobiles, for instance, are competitively extended, designers are driven toward one-hoss shay design. Perhaps it is the no-hoss shay that will realize the no-weak-link product, suitable only for hauling to the dump once it breaks.
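Making the fuse the deliberate weak link is a concrete instance of controlled failure: a semiconductor fuse is chosen so that its total clearing I²t let-through stays below the I²t withstand rating of the device it protects. A minimal sketch of that coordination check, with both ratings invented for illustration:

```python
# Weak-link coordination: the fuse must fail before the part it protects.
# Both ratings below are hypothetical placeholders, not from any datasheet.
fuse_clearing_i2t = 120.0     # A^2*s, fuse total clearing let-through
device_withstand_i2t = 450.0  # A^2*s, protected rectifier's withstand rating

margin = device_withstand_i2t / fuse_clearing_i2t

if fuse_clearing_i2t < device_withstand_i2t:
    print(f"Fuse is the weak link; withstand margin {margin:.2f}x")
else:
    print("Fuse will not clear before the device fails; choose a smaller fuse")
```

The same comparison, with a deliberate margin, is how the "weak link" is engineered rather than left to chance.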
Failure and Normative Design
New technologies are not well understood, and designs based on them are conservative. Mature technologies, such as beverage cans (see Petroski's book, Invention by Design, for interesting case studies of this kind), are pushed to their minimalist limits. Once a technology is mastered and reliability becomes a design variable, the engineer has the power to choose alternative approaches to design based on differing values. A company with a reputation for high-quality products will allow engineers to exercise their seeming escape from original sin, to design the best product possible. But a competitor, realizing an increased profit margin through stealthy corner-cutting, pressures the quality supplier to do the same to compete. The eventual result is a marketplace filled with junk, and a society with a throw-away mentality.
I will leave it to economists or their witch-doctor relatives to work out a new economic cycle describing this phenomenon. Recovery from the bottom of the quality-junk cycle involves much more than engineering-related issues. One of the questions is whether the financial state of buyers has anything to do with it. An affluent market may be just as inclined to spend more, repeatedly, on product replacements as to spend the same amount once on a long-lasting product. Perhaps a preference for the former is a factor in the shortening of design cycles, which gives engineers less time to design it right, thus reinforcing the descent into a world of techno-trash. While Petroski laments the lack of dissemination and even cover-up of failure analyses in civil engineering, and what a treasure-trove the failure reports hidden in insurance company files would be, consumer electronics (as the other extreme) unabashedly drives onward, strewing in its trail landfill-sized piles of broken products.
The bright side to the junk cycle is that it is another form of failure from which we can learn as engineers. The rapid advance of enabling technologies makes possible better products, if only they would be designed and manufactured. This induces a shorter design cycle, which results in a quicker financial return for the enabling technology thus employed. Despite the junk-product side-effect, the underlying technologies are moved along. When rapid advances in root technology bog down (usually due to non-technical factors), they plateau for a while at a more advanced state than if they hadn't been underwritten by junk products. A severe global economic down-turn (yes, the "d-word"), world war, or major social or institutional upheaval would cool off markets, reinstill discipline, lengthen product life, and give engineers time to apply the more advanced state of technological componentry to quality designs. The upward junk-to-quality half-cycle would then commence.
To Engineer is Human: The Role of Failure in Successful Design, Henry Petroski, Vintage Books (Random House), 1992, 251 pages; $13 in paperback.
Invention by Design: How Engineers Get from Thought to Thing, Henry Petroski, Harvard University Press, 1996, 242 pages, hardbound.
© Copyright Dennis L. Feucht, 2001