“Big data” is the popular buzzword for the technologies, tools, and processes used to manage massive amounts of both structured and unstructured data. While the term originated in the IT and Web worlds to describe storing and analyzing large, distributed aggregations of loosely structured data, it is now becoming a critical issue for the future of semiconductor test as companies seek to expand the use of test data to improve yield, throughput, efficiency, and product quality.
Big data is already a major challenge in test operations today. Many test engineers will confirm that 90% of the test data they store is never used. Meanwhile, the number and complexity of defects in new devices are increasing rapidly. Well-publicized problems shipping 28-nm devices were attributed in part to difficulties in managing the data used in traditional yield-ramping techniques.
According to the International Technology Roadmap for Semiconductors (ITRS), “Test data usage for purposes beyond identifying whether a given die is good or defective has become essential for several reasons and drives a need for expanded, revamped, and better integrated test data systems and infrastructure…. The need for better integrated usage of this test output (data) for fab process yield learning, maverick material identification, and feedback within a more distributed manufacturing test are all becoming more critical and even essential applications moving forward.”
One approach to managing big data in test effectively is Adaptive Test, a broad term for software solutions or methods that change test conditions, test flow, test content, and test limits based on manufacturing data and statistical data analysis. Adaptive Test offers a vision of a comprehensive big-data solution to total product quality, conceptually linking design, wafer processing, packaging, performance, and end use, but it also offers immediate benefits to current test flows.
The ITRS describes several applications for Adaptive Test, including
- real-time monitoring of test results that can dynamically change test flows,
- statistical analysis of post-test results,
- feed-forward of test results from one step to another to optimize testing or to enable more focused screening,
- off-line data analysis used to drive test changes for future devices, and
- card/system-level configuration and test based on feed-forward of component or card-level test.
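As a concrete illustration of the first two applications, one widely used adaptive-test technique is dynamic part average testing (PAT): instead of relying only on fixed datasheet limits, the test program recomputes limits from each lot's own parametric distribution and screens out statistical outliers. The Python sketch below is a minimal, hypothetical illustration; the function names, the k-sigma rule, and the sample readings are illustrative assumptions, not taken from any tool discussed in this article.

```python
import statistics

def dynamic_limits(readings, k=3.0):
    """Compute adaptive test limits from a lot's own parametric readings:
    mean +/- k standard deviations (a simple dynamic-PAT rule)."""
    mean = statistics.fmean(readings)
    sigma = statistics.stdev(readings)
    return mean - k * sigma, mean + k * sigma

def screen_outliers(readings, k=3.0):
    """Return indices of die whose reading falls outside the dynamic
    limits -- parts that may pass fixed limits but are statistical
    outliers within their own lot."""
    lo, hi = dynamic_limits(readings, k)
    return [i for i, r in enumerate(readings) if not (lo <= r <= hi)]

# Illustrative lot: 20 typical readings (e.g., IDDQ in mA) plus one outlier.
lot = [9.8, 9.9, 10.0, 10.1, 10.2] * 4 + [15.0]
print(screen_outliers(lot))  # prints [20] -- the outlier die is flagged
```

The outlier at 15.0 mA might sit comfortably inside a fixed datasheet limit, yet it is six-plus sigma from its lot's population, exactly the kind of maverick material adaptive test is meant to catch.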
At this year’s SEMICON West, held in San Francisco from July 9-11, big data and test will be principal discussion points on the show floor and in conference sessions. Optimal Test, one of the leaders in Adaptive Test, will present its view on the status and future of Big Data during the Test session at TechXPOT North. The presentation, “Big Data,” will describe breakthrough data-management implementations at the top five fabless companies, each of which manages dozens of terabytes of test data across multiple environments. Data from their worldwide production and engineering floors is collected in real time and near-time and gathered into huge databases, enabling massive automated analysis to increase yield and efficiency and to decrease test time. Because the data is aligned across all operations, it also enables more effective methods for preventing test escapes and managing RMAs. The presentation will describe what it takes to manage Big Data and the ROI resulting from such advanced implementations.
Mentor Graphics will also discuss Big Data in test in a presentation entitled “Test Data – A Key Asset for Effective Yield Learning,” also in the TechXPOT North session. Mentor will describe how the recent transition to the 28-nm node showed that traditional yield-learning methods are running out of steam. One key disrupting factor is the dramatic increase in the number and complexity of design-sensitive defects; each new design introduces additional variability. As a result, yield learning is increasingly based on production devices rather than test chips. In response, fabless semiconductor companies are deploying new technologies and methodologies that leverage design and test data to rapidly separate and identify the root causes of design- and process-oriented yield issues. Test-fail data is emerging as a key ingredient for production-device yield learning, which increases the demand for effective production test-fail data collection.
The Mentor presentation describes how diagnosis-driven yield analysis (DDYA) accelerates time to root cause of yield loss and identifies yield limiters. It will focus on recent advances that significantly improve the value of this flow by eliminating the noise that typically exists in diagnosis data and by correlating diagnosis results with design-profiling techniques, such as DFM analysis, to identify systematic design features.
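To make the correlation idea concrete, one simple way to flag a systematic design feature is to compare the fail rate of diagnosed nets containing that feature against the background fail rate; a large lift suggests a design-sensitive defect rather than random yield loss. The Python sketch below is an illustrative assumption of how such a screen might look, with hypothetical function and feature names; it is not Mentor's actual DDYA implementation.

```python
from collections import Counter

def systematic_features(records, min_lift=2.0):
    """records: (features, failed) pairs, one per diagnosed net, where
    `features` is a set of design-feature tags (e.g., from DFM rule
    checks) and `failed` marks nets implicated by diagnosis.
    Returns {feature: lift} for features whose fail rate exceeds the
    background fail rate by at least min_lift."""
    background = sum(1 for _, failed in records if failed) / len(records)
    seen, bad = Counter(), Counter()
    for features, failed in records:
        for feature in features:
            seen[feature] += 1
            if failed:
                bad[feature] += 1
    lifts = {f: (bad[f] / seen[f]) / background for f in seen}
    return {f: lift for f, lift in lifts.items() if lift >= min_lift}

# Illustrative data: nets tagged with a hypothetical "narrow_metal"
# feature fail far more often than the overall background rate.
nets = ([({"narrow_metal"}, True)] * 6 + [({"narrow_metal"}, False)] * 2
        + [({"std_route"}, True)] * 4 + [({"std_route"}, False)] * 88)
print(systematic_features(nets))  # only "narrow_metal" shows a high lift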
Big Data will also be a main topic at Test Vision 2020, held concurrently with SEMICON West on July 10-11. Organized by test professionals from the leading ATE suppliers, fabless firms, and IDMs, Test Vision has emerged as the premier executive conference on semiconductor test. The conference will address Big Data in panel discussions on future technologies and in dedicated presentations by both users and suppliers.
SEMICON West will be held from July 9-11 in San Francisco. To register for SEMICON West (registration is free until May 10) and Test Vision 2020 (early-bird registration is open until June 7), visit www.semiconwest.org/registration.
See related article, “Semicon West Participants Offer Updates on EUV, 3-D Transistors, and 450-mm Manufacturing.”
View previous online exclusives:
“Modularity protects investment in MIL/aero test applications” (April Web Exclusive),
“Design and test links help support multistandard radios from design to production” (March Web Exclusive),
“Nonintrusive Test Complements ATE to Meet PCB Test Needs” (February Web Exclusive),
“Software Helps Address Signal Integrity Challenges for Serial-Bus Test” (January Web Exclusive), and
“'We don't judge, we measure'.”