Shipping high-quality ICs requires that design-for-test (DFT) methodologies be included in a design. DFT provides external access at the device’s I/O pins to internal registers to either control or observe state data during manufacturing test.
External controllability and observability from the chip boundary enable automated creation of manufacturing test programs with automatic test pattern generation (ATPG) software. Today, it would be very rare for a device shipping in even medium production volumes not to embed some level of DFT and use ATPG software to create its manufacturing tests.
Given the semiconductor industry’s widespread acceptance of DFT, many believe the industry has solved all the problems associated with manufacturing test. This belief is not entirely accurate. As designs become larger and quality requirements more stringent, DFT complexity increases, which, in turn, drives up the cost of test. A simple rule of thumb: the complexity of a specific DFT methodology depends on the size of the design and the final quality level, defined in defects per million, that a product team requires after testing.
Based on data in the International Technology Roadmap for Semiconductors 2006 Update, transistor counts are projected to rise to 1.546 billion by the time designs reach production at the 32-nm process node, so there is no foreseeable end to the growth of DFT complexity.
According to projections from International Business Strategies (IBS) shown in Figure 1, by the time the semiconductor industry begins production in the 22-nm process node, costs associated with manufacturing could consume 36% of the entire design development budget.1 This scenario would present an untenable situation for an industry that historically produces new products with more features and functionality at each generation that cost less than products of previous generations.
Figure 1. Cost-of-Test vs. Design Feature Size Projections
At first glance, there appear to be few new methodology developments on the horizon that would slow the rate of cost increases. Using history as a guide, we can reasonably surmise that test costs will continue to grow as design sizes and complexity increase.
Driving the rise in design complexity is the growing use of smaller process geometries by semiconductor developers. To accommodate testing of these products, there is a need for even more advanced DFT methodologies to address new timing failure modes caused by parametric and lithographic variability common in sub-100-nm manufacturing.
Adding further to the testing challenge is the consumer-driven need for reduced power consumption. This forces a continued reduction in power supply voltage, which shrinks noise margins and, in turn, creates complex new failure modes that appear as small delay defects. Externally, small delay defects manifest as signals within a circuit that fail when running at system frequency but behave correctly when driven at a lower frequency.
To test for these small delay defects, DFT methodology has evolved to include on-product clock generation (OPCG) to provide advanced clocking often not available on ATE. During testing for small delay defects, ATPG tools launch a transition toward an observation point in the circuit and use a clock from the OPCG logic to capture it at system speed or faster. If the captured logic value does not match the value expected from proper circuit operation, the small delay defect is detected.
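The launch-and-capture principle can be illustrated with a small sketch. The path delays and clock periods below are invented for illustration; they do not come from the article or any specific tool flow.

```python
# Toy model of at-speed small-delay-defect detection. A launched transition
# is captured correctly only if it settles before the capture clock edge;
# otherwise the stale value is latched, exposing the delay defect.

def transition_detected(path_delay_ns, capture_period_ns):
    """Return True if a delay defect on this path is exposed at this clock."""
    return path_delay_ns > capture_period_ns  # stale value captured => defect seen

NOMINAL_DELAY = 4.0    # ns, defect-free path (invented value)
DEFECTIVE_DELAY = 5.5  # ns, same path slowed by a small delay defect

SLOW_PERIOD = 10.0     # ns, relaxed tester clock
SYSTEM_PERIOD = 5.0    # ns, at-speed capture clock from OPCG

# At the slow clock, even the defective path settles in time: the defect escapes.
assert not transition_detected(DEFECTIVE_DELAY, SLOW_PERIOD)
# At system speed, the defective path misses the capture edge: defect detected.
assert transition_detected(DEFECTIVE_DELAY, SYSTEM_PERIOD)
# A defect-free path passes at both speeds.
assert not transition_detected(NOMINAL_DELAY, SYSTEM_PERIOD)
```

This is why the same silicon can pass a slow structural test yet fail in the system: only the at-speed capture clock makes the marginal path miss its timing window.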
Testing for small delay defects is difficult and requires not only more complex fault models that account for actual circuit delays but also significantly more test vectors. This need has driven the adoption of on-chip test pattern compression, an advanced DFT methodology that reduces the large volume of test data associated with detecting small delay defects. Inserting and verifying on-chip compression logic increases the complexity of the design flow, leading to higher costs than traditional testing for stuck-at faults.
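One building block of such compression schemes, an XOR output compactor, can be sketched in a few lines. The bit values are invented, and real compactors handle many chains, unknown values, and aliasing, which this sketch ignores.

```python
# Minimal sketch of an XOR space compactor: several scan-chain outputs are
# XORed into one tester channel, shrinking the data the ATE must observe
# while a single erroneous captured bit still flips the compacted output.

def compact(chain_bits):
    """XOR a group of scan-chain output bits into one tester channel bit."""
    out = 0
    for bit in chain_bits:
        out ^= bit
    return out

good_capture = [1, 0, 1, 1]       # expected bits from four scan chains
faulty_capture = [1, 0, 0, 1]     # one bit corrupted by a defect

# The defect remains visible even though the tester sees only one bit.
assert compact(faulty_capture) != compact(good_capture)
```

The trade-off is observability: if an even number of errors land in the same compactor cycle they cancel, which is one reason compression insertion must be verified carefully as part of the flow.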
Obviously, if the IBS projections hold even beyond 65 nm, there will be an even greater need to limit the cost of deploying newer and more sophisticated DFT methods and techniques. However, design size and quality will force adoption of new DFT methods. These new methods, if not implemented properly, could negatively affect test costs by impeding timely DFT signoff.
Figure 2 shows the experience of a large systems company in deploying ever-more-complex DFT methodologies over the past decade. The company saw the design teams’ responsibilities grow from implementing and verifying basic scan for stuck-at fault detection to incorporating more advanced DFT methodologies.
Figure 2. Increased Project Uncertainty Caused by Complex DFT
Today, the company integrates scan with IEEE 1149.x boundary scan, on-chip compression, built-in self-test/repair for embedded memory, and IEEE 1500 core test to detect small delay defects. It also supports the testing of nonlogic elements such as memories and I/O structures within the designs. As DFT complexity in the design grew, project schedules became less predictable and more susceptible to delays from costly iterations throughout the design and test process.
Lack of planning for DFT is the major reason design teams stumble and iterate within their design flows. Most project teams define broad test-coverage goals but often fail to consider the subtler DFT metrics and constraints that surface unexpectedly during the design process and force iterations in the design flow. For example, something as straightforward as not allocating the proper number of I/O pins for test purposes early in the design can cause RTL synthesis and verification iteration.
DFT Metric and Constraint Considerations
Failure to Close Timing
Timing closure is one of the most vital goals in terms of signoff for both system and DFT operations. In the case of achieving optimal DFT timing, design teams strive for a maximum operating frequency for the application of manufacturing test patterns. The requirement is quite simple to understand: The faster the test application, the less time devices are on ATE, allowing more devices to run through the test process in the same amount of time, which results in a less costly test process.
As design teams strive to achieve high-speed manufacturing test pattern application, the greatest problem is inserting DFT post-synthesis and not implementing DFT as part of the synthesis process. Post-synthesis DFT insertion often contributes to a lack of success closing DFT timing.
When scan logic is inserted late, unforeseen hold violations often occur. This forces test engineers to modify manufacturing tests at the last possible minute while bringing up their test program on an expensive ATE resource. To make their problematic test vectors pass on ATE, test engineers resort to scanning test data in at slower speeds or removing test vectors from the test program. Beyond the capital and personnel costs of tweaking the test program late in the design process, slower scan clocking leads to longer test times and increased production costs.
Several solutions are available to prevent this late-cycle iteration. One is to link DFT synthesis with silicon virtual prototyping (SVP) early in the design implementation flow so that a preliminary floor plan can group scan registers in close proximity and limit potential wiring congestion.
Later, scan-chain reordering can be done to eliminate any hold-time violations. In addition, early simulation and the use of formal techniques and static timing analysis on the scan logic could alert designers early on to any scan clocking issues well before test program development begins.
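The idea behind placement-driven scan-chain reordering can be sketched as a greedy nearest-neighbor pass over the placed flip-flops. The flop names and (x, y) coordinates below are invented, and production tools use far more sophisticated routing-aware algorithms than this.

```python
from math import dist

def reorder_chain(placement):
    """Greedy nearest-neighbor ordering of scan flops by placement,
    shortening the scan stitching and avoiding long cross-die hops."""
    remaining = dict(placement)                       # name -> (x, y)
    name = min(remaining, key=lambda n: remaining[n]) # start at lower-left flop
    order, here = [name], remaining.pop(name)
    while remaining:
        name = min(remaining, key=lambda n: dist(here, remaining[n]))
        order.append(name)
        here = remaining.pop(name)
    return order

def wirelength(order, placement):
    """Total straight-line length of scan stitching for a given chain order."""
    return sum(dist(placement[a], placement[b]) for a, b in zip(order, order[1:]))

# Invented placements: the netlist order zigzags across the die.
placement = {"ff_a": (0, 0), "ff_b": (9, 9), "ff_c": (1, 0), "ff_d": (8, 9)}
netlist_order = ["ff_a", "ff_b", "ff_c", "ff_d"]
reordered = reorder_chain(placement)

# Reordering by proximity cuts the scan wirelength substantially.
assert wirelength(reordered, placement) < wirelength(netlist_order, placement)
```

Shorter, more local scan segments are exactly what reduce the clock-skew-relative-to-data races that show up as hold violations late in the flow.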
Higher-than-necessary test costs also can be a result of power-related issues. In the process of lowering dynamic power consumption of their products, design teams use extreme care to design and implement a power control infrastructure and multiple power domains and voltages. For instance, many hand-held devices will gradually lower power consumption by going to sleep, shutting down nonessential functional units to conserve battery life.
In the implementation of DFT, design teams should give the same care and attention to power consumption. Overlooking power issues, such as functional power modes and power control infrastructure, often leads to the use of a single power mode during manufacturing test.
Given this common practice, many physical designers will over-specify the power grid to accommodate worst-case toggling of the scan chains during test. This normally eliminates power problems; however, it forces manufacturing to use a larger die size than is required, which increases the cost of the final product. In large-volume consumer devices, even a few pennies added to the cost due to a larger die can reduce profits significantly.
In other cases, when the power grid is not robust enough to handle the massive switching that occurs during scan testing, failures can arise from sources such as ground bounce, IR drop, or crosstalk. When these failures are identified, test engineers are forced to iterate on ATE or in simulation to selectively exclude the highest power-consuming tests, sacrificing both time to market and product quality.
It is possible to avoid costly iterations and achieve optimal power consumption on ATE if a design team inserts gated clocks and scan bypass infrastructure in the device. These elements support testing individual clock/power domains separately or in groups to keep dynamic power consumption within the design and ATE limits. Figure 3 shows a simple example where a power controller reconfigures scan chains to test MAC I and MAC II independently or with DMA modules. This configuration also ensures that the level shifters and isolation logic between the two power domains are tested.
Figure 3. DFT-Enabled Partitioning to Reduce Power During Test
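The scheduling decision behind such partitioning can be sketched as a simple bin-packing problem: group domains into test sessions so that each session's estimated switching power stays under a shared design/ATE budget. The domain names echo Figure 3, but the milliwatt figures and the budget are invented for illustration; real flows use measured or simulated power estimates.

```python
def schedule_sessions(domain_power_mw, budget_mw):
    """First-fit-decreasing grouping of clock/power domains into scan-test
    sessions whose total estimated switching power fits the budget."""
    sessions = []  # each session: {"domains": [...], "power": total mW}
    for name, power in sorted(domain_power_mw.items(), key=lambda kv: -kv[1]):
        for session in sessions:
            if session["power"] + power <= budget_mw:
                session["domains"].append(name)
                session["power"] += power
                break
        else:  # no existing session has headroom: open a new one
            sessions.append({"domains": [name], "power": power})
    return sessions

# Invented per-domain switching-power estimates (mW) and budget.
domains = {"MAC_I": 60, "MAC_II": 55, "DMA": 30}
plan = schedule_sessions(domains, budget_mw=90)

# MAC_I and MAC_II together exceed the budget, so they test in separate
# sessions; the smaller DMA domain rides along with MAC_I.
assert len(plan) == 2
```

Fewer sessions mean shorter test time, so a scheduler like this is where the power budget trades off directly against test cost.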
There also is growing concern over power consumption when scan patterns shift in and out of their respective chains. Given the large cone of logic behind each scan register, any change in state on the register’s output can cause a large number of dependent gates to switch states, causing current to spike.
A power-aware ATPG tool can create patterns that limit switching yet remain effective in detecting manufacturing defects. One straightforward way to accomplish this is to intelligently fill don’t-care bits in the scan chains with values that keep flip-flop switching to a minimum.
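One common low-power fill strategy, often called adjacent fill or repeat fill, can be sketched in a few lines. The bit string is invented; real ATPG tools apply this per chain across millions of bits.

```python
# Sketch of power-aware "adjacent fill": each don't-care (X) bit in a scan
# load repeats the last specified care bit, so neighboring flops toggle less
# during shift while the care bits that detect the fault stay untouched.

def adjacent_fill(pattern):
    """Replace X bits with the value of the preceding care bit."""
    filled, last = [], "0"  # assume 0 before the first care bit
    for bit in pattern:
        if bit == "X":
            filled.append(last)  # repeat previous value: no new toggle
        else:
            filled.append(bit)
            last = bit
    return "".join(filled)

def shift_transitions(bits):
    """Adjacent-bit transitions, a rough proxy for shift switching power."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

raw = "1XX0XXX1X"  # invented pattern: care bits plus don't-cares
assert adjacent_fill(raw) == "111000011"
# Adjacent fill never toggles more than, say, filling every X with 0.
assert shift_transitions(adjacent_fill(raw)) <= shift_transitions(raw.replace("X", "0"))
```

Because only the unspecified bits change, fault coverage is preserved while the current spikes caused by dense toggling during shift are reduced.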
Experienced design teams readily understand that DFT enables their manufacturing test process. However, with increasing pressure of ever larger and more complex designs and growing time-to-market demands, design teams should consider a better way of implementing DFT.
One solution to reduce costly test-related iterations and bring more predictability to the design process is to use a combination of technology and methodology that enables design with test (DWT). This concept implies more integration of manufacturing test within the design flow so manufacturing test requirements are part and parcel of the execution of each step within the RTL-to-GDSII process.
Unlike the traditional DFT approach, DWT integrates manufacturing test metrics and constraints into the design flow to drive DFT implementation and verification as well as the test generation process itself. Figure 4 shows a design flow that employs DWT using a set of well-integrated tools that have strong interoperability including RTL verification, synthesis and DFT, equivalence checking, floor planning, placement and routing tools, and ATPG.
Figure 4. DWT Features Well-Defined Metrics and Constraints
Adopting a DWT approach enables logic designers to use technologies efficiently at each step in the process to produce a more testable design in less time. DWT helps design teams avoid costly iterations caused by unintended oversights or late-breaking DFT requirements in the design flow. In recent months, design with power, design with physical, and design with verification have evolved in a similar manner to DWT to address costly unintended iterations creeping into today’s advanced design flows.
Moving from a design-for process to a design-with approach is an evolutionary extension to existing design flows by using new technology and interfaces to further integrate test into the design flow. This leads to faster DFT signoff and a more efficient and cost-effective manufacturing test process.
1. Global System IC (ASSP/ASIC) Service Management Report, International Business Strategies, Vol. 16, No. 7, July 2007.
About the Author
Tom Jackson is a product marketing director for the Cadence Encounter Test Group. He began his career as a test engineer working on hardware accelerators and has held various positions in field applications and product marketing in the areas of design for test, formal verification, and yield management. Mr. Jackson graduated from Augsburg College and St. Paul College. Cadence Design Systems, 260 Billerica Rd., Chelmsford, MA 01824, 603-424-4918, e-mail: email@example.com