We are witnessing the gradual transition of the automobile from a simple means of transportation to a mobile electronic hub. The amount of electronic content in passenger cars continues to grow rapidly. Recent reports indicate that electronics now contribute about 40% of the total costs of a traditional internal-combustion-engine car, and this jumps as high as 75% for the growing number of electric and hybrid-electric vehicles. Incredibly, the new S-class Mercedes-Benz contains nearly as many microcontrollers as the Airbus A380 airplane. The amount of electronics will only continue to grow as manufacturers add more advanced safety features, greater information and entertainment services, and improvements in energy efficiency.
Safety features are experiencing particularly large growth and encompass items such as collision avoidance, lane-change assistance, and automatic parking. The industry’s move toward fully autonomous vehicles promises to increase the number of these safety features even further.
The electronic components behind these safety features, as well as any other electronics involved in the operation of the vehicle, need to meet extremely high quality and reliability metrics. To ensure consistency across the large and growing number of automotive suppliers, a new international automotive components safety standard was recently completed. Called ISO 26262, the standard defines the requirements for building safe automotive equipment and is being rapidly adopted by automotive manufacturers and suppliers worldwide despite its first edition having been published less than two years ago.
The standard is comprehensive and covers all aspects of the hardware and software lifecycle from design through testing and in-field operation. Achieving the quality and reliability metrics mandated by the standard can be accomplished in a variety of ways. The challenge for semiconductor manufacturers is to achieve the necessary metrics in the most cost-effective way. New semiconductor test solutions therefore need to provide significant efficiency improvements in both test generation and application.
Better Fault Models
The widely used methodology for testing digital circuits is to add scan test structures to the design and then deliver test patterns through these structures that reveal defects when the chip responses are observed. The approach has been in use for decades and is based on modeling circuit defects at a level of abstraction that enables a computationally efficient test-pattern generation process. Initially, the simple stuck-at fault model was used, which treats circuit defects as logic nets stuck at a value of either 0 or 1. More complex fault models were added over the years to account for new defect types that appeared as the industry transitioned to new technology nodes. Among the more recently adopted fault models were the transition, bridging, open, and small-delay faults.
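The stuck-at abstraction can be sketched in a few lines of Python. The three-gate netlist, net names, and fault tuples below are hypothetical, chosen only to show how a pattern "detects" a fault: the faulty circuit's response must differ from the good circuit's response.

```python
from itertools import product

# A hypothetical three-gate netlist computing y = (a AND b) OR (NOT c).
def simulate(pattern, stuck=None):
    """Evaluate the netlist for one input pattern; `stuck` optionally
    forces a net to a constant value, modeling a stuck-at fault."""
    nets = dict(pattern)                      # {'a': .., 'b': .., 'c': ..}
    def drive(net, value):
        # A stuck net ignores the value the circuit tries to drive onto it.
        nets[net] = stuck[1] if stuck and stuck[0] == net else value
    for net, value in list(nets.items()):     # apply faults on input nets too
        drive(net, value)
    drive('n1', nets['a'] & nets['b'])        # AND gate
    drive('n2', 1 - nets['c'])                # inverter
    drive('y', nets['n1'] | nets['n2'])       # OR gate
    return nets['y']

def detecting_patterns(fault):
    """Input patterns whose faulty response differs from the good response."""
    return [p for p in product((0, 1), repeat=3)
            if simulate(dict(zip('abc', p))) !=
               simulate(dict(zip('abc', p)), stuck=fault)]
```

For this netlist, the fault "net n1 stuck at 0" is detected only by the pattern (a, b, c) = (1, 1, 1), which is the essence of why ATPG must search for specific patterns per fault.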
However, with the move to smaller geometries, these fault models and associated test patterns are becoming less and less effective at ensuring desired quality levels. The main problem is that all of these existing fault models only consider faults on cell inputs and outputs and on interconnect lines between these cells. In other words, only faults abstracted to the netlist level are explicitly considered.
It turns out that increasingly more defects occur within the cell structures. For the more advanced technology nodes and associated fabrication technologies, some estimates indicate that defects within cells account for almost half of all circuit defects. Thousands of patterns typically are produced during the normal ATPG process. As a result, although traditional fault models do not target cell-internal defects, many of these defects end up being detected by chance. However, when considering millions of gates in a design, it is not effective to rely on luck to detect potential defects within each cell.
One option would be to apply every possible combination of inputs at every gate. This fault model is referred to as gate-exhaustive. It certainly would be effective in detecting all static cell-internal defects since it would apply every possible combination. For example, for an eight-input cell, gate-exhaustive testing would apply all possible 2^8 = 256 input combinations. It is easy to see that applying such an exhaustive set of patterns quickly becomes impractical.
To make matters worse, many defects inside cells are timing related and therefore not detectable using static tests. A two-pattern test is necessary to detect such defects. So for the eight-input cell example, two-cycle gate-exhaustive testing would require the application of 2^8 × 2^8 = 65,536 pattern pairs. For designs with very high quality requirements such as those looking to comply with the ISO 26262 standard, a much more efficient test strategy for detecting cell-internal defects is clearly necessary.
A recently introduced ATPG-based test methodology achieves the needed efficiency improvements by directly targeting specific shorts and opens defects internal to each cell. The cell-aware test approach starts with an automated cell library characterization process, which is illustrated in Figure 1. Each semiconductor process node has a set of technology cell libraries used to describe the logic behavior and physical layout of the lowest-level component in the netlist. The cell-aware characterization process starts with an extraction of the physical library, represented in GDSII. Each extracted cell results in a transistor-level netlist with parasitic resistances and capacitances. A resistance location represents a conductive path with the potential for an open defect while a capacitance identifies locations with the potential for a bridge defect.
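The classification step at the heart of this extraction can be illustrated with a short sketch. The parasitic list format, net names, and device names below are hypothetical stand-ins for real extractor output; the point is simply that each resistance becomes a candidate open site and each capacitance a candidate bridge site:

```python
# Hypothetical output of parasitic extraction for one cell: each entry is
# (element_type, node_a, node_b). In the cell-aware flow, a resistance marks
# a conductive path (potential open) and a capacitance marks coupled nets
# (potential bridge).
parasitics = [
    ('R', 'M1.drain', 'out'),     # conductive path -> candidate open defect
    ('C', 'net5', 'net9'),        # coupling between nets -> candidate bridge
    ('R', 'in_a', 'M2.gate'),     # another candidate open
]

def defect_candidates(parasitics):
    """Split extracted parasitics into open and bridge defect candidates."""
    opens   = [(a, b) for t, a, b in parasitics if t == 'R']
    bridges = [(a, b) for t, a, b in parasitics if t == 'C']
    return opens, bridges
```

Each candidate location then becomes an injection site for the analog defect simulation described next.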
|Figure 1. Generating Cell-Aware Fault Models Through Library Characterization|
An analog simulator then is used to evaluate each potential defect against an exhaustive set of stimuli to determine if there are sets of cell inputs that produce an output different than the defect-free result. The simplest case is to simulate each capacitive location with a 1-Ω resistance representing a hard bridge. Many other resistive values can be used as well with some resulting in different test stimuli requirements. In addition, simulating over multiple cycles also is useful to detect bridges or opens that are only observed as dynamic defects.
The final process in cell-aware characterization is to convert the list of input combinations into a set of the necessary input values for each fault within each cell. Because this information is defined at the cell inputs as logic values, it basically is a logic fault model representation of the analog defect simulation. This set of stimuli for each cell represents the cell-aware fault model file for ATPG.
Within this file, a simulated defect (now a fault) can have one or more input combinations. For the example shown in Figure 2, the ATPG engine will try to find any of the three input combinations when targeting this fault. The fault is considered detected if any one of the combinations is applied. Note that because the cell characterization process is performed for all cells within a technology library, any design using that technology can read in the same cell-aware fault model file. Characterization only needs to occur once and then can be applied to any design on that technology node.
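The any-of-several detection rule can be captured in a few lines. The fault name, input combinations, and file-like structure below are illustrative only; a real cell-aware fault model file is tool-specific:

```python
# Hypothetical cell-aware fault entry: the fault is considered detected if
# ATPG manages to apply any one of its listed input combinations.
fault_combos = {
    'cell42/bridge_net5_net9': [(0, 1, 1, 0), (1, 1, 1, 0), (0, 1, 1, 1)],
}

def is_detected(fault, applied_patterns):
    """Check whether any applied pattern matches a detecting combination."""
    combos = fault_combos[fault]
    return any(tuple(p) in combos for p in applied_patterns)
```

During pattern generation the ATPG engine needs to satisfy only one of the combinations per fault, which keeps pattern counts far below the gate-exhaustive case.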
|Figure 2. Example of a Cell-Aware Fault Model File|
Silicon results already have shown significant additional defect detection beyond standard stuck-at and transition patterns when using cell-aware ATPG. These detection improvements have been measured at various technology nodes from 350 nm down to 32 nm and below. Perhaps more importantly, these improvements have been achieved with modest increases in test application times.
The defect coverage improvements obtained using cell-aware test patterns also can result in other test benefits. With these improved results, it may become possible to reduce or eliminate other costly test procedures such as performance margining or system-level testing.
Hybrid ATPG Compression and Logic BIST
Additional test quality and efficiency improvements critical for devices looking to comply with the ISO 26262 standard can be achieved using a new hybrid solution that combines ATPG compression and logic BIST techniques. Although these solutions historically have been used independently and typically for different applications, they possess complementary features that turn out to be very beneficial in combination.
The two solutions also make use of much the same on-chip DFT resources. For example, they both use scan chains and related test clocks. The main difference between the two solutions lies in the on-chip logic feeding test data to the scan chains and processing the test-response data coming out of the scan chains. It turns out that there are similarities in this logic as well, so the logic of the two solutions can be effectively combined to support both approaches.
A diagram illustrating the high-level architecture for the hybrid solution is shown in Figure 3. Most of the on-chip resources in the diagram are common to both test approaches. The only resources unique to one approach or the other are the EDT low-power and Xpress modules used for ATPG compression and the multiple-input signature register (MISR) module used by the logic BIST solution. In addition to efficient sharing of resources, both ATPG compression and logic BIST capabilities also can be integrated into the design using common flow automation capabilities, adding to the overall efficiency and value of the solution.
|Figure 3. Hybrid ATPG Compression and Logic BIST Architecture|
One of the benefits of the hybrid solution is improved tester memory utilization during manufacturing test. Pseudo-random patterns can be used first to cover the faults that are easier to detect. Because stored patterns are no longer needed for these faults, additional tester pattern storage becomes available for compressed ATPG patterns that target the remaining more difficult-to-detect faults.
The hybrid solution also can reduce the total test time for a complex hierarchical design. Each core is equipped with its own hybrid test infrastructure, which allows it to be tested independently of other cores. This means that the cores can be tested in parallel, thus reducing overall test time. Consider, for example, a design with four cores. If only ATPG compression were available, then the four cores would have to share the available tester pattern application bandwidth. Each core could be tested sequentially using all available tester channels, or all cores could be tested in parallel with each core using a subset of the channels.
However, if each core has both ATPG compression and logic BIST available, then the test for each core can be divided into two phases—ATPG compression used in one phase and logic BIST in the other. With this separation, the entire chip can be tested in two phases. In the first phase, two cores use ATPG compression and the other two use logic BIST. In the second phase, the situation is reversed. The advantage now is that in each phase only two cores are sharing all available tester channels because logic BIST does not require patterns from the tester. This means the bandwidth to each core is doubled so the test time is reduced by half.
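The test-time arithmetic behind this example can be made concrete with a simple model. All numbers below are hypothetical, and the halving rests on two stated assumptions: logic BIST consumes no tester bandwidth, and the pseudo-random phase absorbs roughly half of each core's pattern content (with each BIST session finishing within its phase):

```python
B = 1.0          # tester bandwidth (units of scan data per second), illustrative
D = 8.0          # scan data per core with a compression-only test, illustrative

def compression_only_time(cores=4):
    # All cores share the channels; whether run serially at full bandwidth
    # or in parallel at 1/cores bandwidth, delivery time = total data / B.
    return cores * D / B

def hybrid_time(cores=4):
    atpg_per_core = D / 2                 # assumption: BIST absorbs half
    per_phase_bw = B / (cores // 2)       # two ATPG cores share all channels
    phase_time = atpg_per_core / per_phase_bw
    return 2 * phase_time                 # two phases cover all four cores
```

Under these assumptions the hybrid schedule finishes in half the compression-only time, matching the doubled per-core bandwidth described above.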
Another critical area addressed by the ISO 26262 standard is long-term device reliability. In an effort to achieve the necessary reliability levels, a number of techniques, such as functional redundancy, error correction, and built-in self-test, already are used by many semiconductor manufacturers. As a result, the hybrid solution plays another critical role related to ISO 26262. The solution’s logic BIST capability can be combined with other common BIST capabilities such as memory BIST to provide in-system test coverage for most, if not all, of the design.
All of the BIST capabilities generally can be accessed through the standard IEEE 1149.1 TAP controller interface. This dedicated interface is sometimes not accessible in-system. To accommodate in-system access, the TAP controller can be enhanced to support a generic CPU interface that translates between parallel read/write CPU operations and the serial bit sequences required by the TAP protocol.
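The serialization such a CPU interface performs can be sketched for one simplified case: a single data-register scan. The function below is a minimal model, not a real controller; it assumes the TAP state machine starts in Run-Test/Idle, that an appropriate instruction is already loaded, and it ignores IR scans, pause states, and captured TDO data. It emits the (TMS, TDI) pair for each TCK cycle:

```python
# Minimal sketch: serialize one DR scan into per-clock (TMS, TDI) pairs,
# shifting `value` LSB first through a `width`-bit data register.
# Assumes the TAP starts in Run-Test/Idle with the instruction preloaded.
def dr_scan_sequence(value, width):
    seq = [(1, 0),   # Run-Test/Idle -> Select-DR-Scan
           (0, 0),   # Select-DR-Scan -> Capture-DR
           (0, 0)]   # Capture-DR -> Shift-DR
    for i in range(width):
        bit = (value >> i) & 1
        tms = 1 if i == width - 1 else 0   # TMS=1 on the final shifted bit
        seq.append((tms, bit))             # last bit moves to Exit1-DR
    seq.append((1, 0))                     # Exit1-DR -> Update-DR
    seq.append((0, 0))                     # Update-DR -> Run-Test/Idle
    return seq
```

A parallel CPU write of a test command would thus expand into a short burst of these serialized TCK cycles, which is exactly the translation the enhanced TAP interface performs in hardware.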
For situations where a fully autonomous power-on self-test (POST) capability is required, a fairly simple finite-state machine-based test controller can instead drive the TAP controller, as illustrated in Figure 4. When activated by a power-on reset signal, this test controller automatically applies the necessary serialized sequences to the TAP to perform any needed BIST initialization and activation.
|Figure 4. POST Architecture|
Meeting the quality and reliability requirements of ISO 26262 and other automotive electronics standards will only become more difficult as device sizes and complexities continue to grow. New advanced test technologies such as cell-aware ATPG and hybrid compression/logic BIST provide some key building blocks toward ensuring compliance with the new standards. Adoption of these and other advanced test capabilities not only will improve the capability of semiconductor manufacturers to achieve necessary quality and reliability metrics, but also will help to further differentiate their products by delivering embedded test capabilities that can be leveraged by their customers at the system level and in the field.
About the Author
Steve Pateras is product marketing director within Mentor Graphics Silicon Test Solutions group and has responsibility for the company’s ATPG and DFT products. He received his Ph.D. in electrical engineering from McGill University. firstname.lastname@example.org