Speed. I like that word. But just how important is it? Of course, the answer depends on what the heck you’re talking about. In the racing world, whether it be cars, horses, track sprinters, or swimmers, high speed is the ultimate goal and is used to get to the finish line first. On the other hand, speed is often only one factor in the total package. There’s a guy that used to play on my softball team that didn’t have much speed while running the bases. In fact, I would tell people that he had two speeds: slow, and slower. But he was a good pitcher and hitter, so what he gave to the team helped us overall.
It is similar in the world of digital ICs, where speed is sometimes the king of the hill, like in the most advanced microprocessors, and other times it can be a detriment, like in low-power applications where a more important goal is to have the battery last longer. But for the most part, faster is usually seen as better, and the designs of today and tomorrow continue to operate at higher speeds. This has become a challenge for testing advanced ICs after manufacturing. The newer processes and smaller geometries have led to an increase in speed-related defects.
For many years, the gold standard for test quality was based on the fault or test coverage number for a stuck-at test pattern set created by an automated test pattern generation (ATPG) tool. For every node in the design, stuck-at-1 and stuck-at-0 faults were assigned in the ATPG tool; then, the tool created test patterns that exercised the design in such a way as to check that these nodes were not stuck at a fixed logic value during test. A test pattern that failed on the tester indicated the presence of a manufacturing defect in the device.
Starting at the 130 nm process size and smaller, using additional tests to check for speed-related defects became important because these defects were more prevalent than with the older technologies. These tests generally used the transition fault model within the ATPG tool to check for slow-to-rise and slow-to-fall delays. Similar to stuck-at tests, two faults were assigned to each design node, and then the ATPG tool created test patterns that launched a logic 0-to-1 or 1-to-0 transition along a path that included this node. The capture cycle locked in the response at the observation point at a specified time. If the initial logic value was captured instead of the updated value, this indicated a delay defect within that path.
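The launch-and-capture mechanism can be boiled down to a toy model. This is only a conceptual sketch with made-up numbers; a real ATPG flow operates on a gate-level netlist, not on a single abstracted path delay, and the function and values here are illustrative.

```python
def capture_value(launch_value, final_value, path_delay_ns, capture_time_ns):
    """Return the value latched at the observation point.

    If the launched transition has not propagated down the path by the
    capture edge, the old (launch) value is latched instead of the new one.
    """
    return final_value if path_delay_ns <= capture_time_ns else launch_value

# Slow-to-rise check: launch a 0->1 transition, capture 2.0 ns later.
CAPTURE_TIME_NS = 2.0

good_part = capture_value(0, 1, path_delay_ns=1.6, capture_time_ns=CAPTURE_TIME_NS)
slow_part = capture_value(0, 1, path_delay_ns=2.4, capture_time_ns=CAPTURE_TIME_NS)

print(good_part)  # 1 -> updated value captured, part passes
print(slow_part)  # 0 -> initial value captured, delay defect detected
```

The same comparison with the launch and final values swapped models the slow-to-fall check, which is why two delay faults are assigned per node.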
Like the stuck-at fault model, the transition fault model has become a standard part of most companies’ test methodologies. It provides a good delay check across the entire device. At higher design speeds and smaller process sizes, however, it isn’t always sufficient to catch the smallest delay defects that might be present. This is because the ATPG tool is free to choose which path is used for each test pattern. In the interest of processing time, it usually chooses a simpler, or shorter, path when one is available.
A newer fault model, based on the transition fault model, has been gaining use in industry where it is important to target the smallest delay defects. It is called either the small delay defect (SDD) fault model or the timing-aware fault model; the two names are used interchangeably.
The timing-aware ATPG process reads in the layout timing information in the form of a Standard Delay Format (SDF) file so that it has the delay values required to create tests that propagate faults along the longest paths possible. By using the paths with the smallest timing slack margins, the created tests can capture the smallest delay defects that might slip by regular transition test patterns.
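The payoff of picking the longest path can be sketched with a few lines of arithmetic. The path names and delay numbers below are hypothetical stand-ins for what an SDF file would provide, assuming a simple slack definition of clock period minus path delay.

```python
CLOCK_PERIOD_NS = 5.0

# Hypothetical candidate paths through one fault site, with total
# propagation delay in ns (in a real flow, from the SDF file).
paths_through_fault = {
    "short_path": 1.2,
    "typical_path": 3.1,
    "longest_path": 4.6,
}

# Slack = clock period minus path delay. The smaller the slack, the
# smaller the extra delay that is enough to cause a failure at speed.
slacks = {name: CLOCK_PERIOD_NS - delay for name, delay in paths_through_fault.items()}

best = min(slacks, key=slacks.get)
print(best, round(slacks[best], 2))
```

With these numbers, a test routed through "longest_path" (about 0.4 ns of slack) can catch a defect that adds as little as 0.4 ns of delay, while a test through "short_path" (3.8 ns of slack) only fails if the defect adds more than 3.8 ns. That gap is exactly the class of small delay defects that slip past ordinary transition patterns.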
Doesn’t this SDD test sound great? Well, it is, but just like most things, it comes with trade-offs. First, because the actual timing information is used, the layout needs to be completed before final test patterns can be created. For all designs, it will also take more ATPG processing time and create a higher number of test patterns than the transition fault model. But if high-quality test is a requirement, using this SDD fault model can help to capture defects that escape other tests and increase the total product quality.
To help determine quality levels and the improvements possible with SDD test patterns, a new delay test coverage metric is available in the timing-aware ATPG process. The delay coverage for a detected fault is basically the delay of the path actually used for that fault detection, divided by the longest possible path delay associated with that fault location, expressed as a percentage. When the ATPG tool presents the other test coverage numbers for a pattern set, the delay test coverage for the whole set of patterns is also given. This provides a convenient way to quickly compare the delay test coverage of the SDD patterns versus the standard transition delay patterns.
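The per-fault ratio just described can be sketched in a few lines. The numbers are invented, and the way a given ATPG tool aggregates the per-fault ratios into a pattern-set number may differ; this assumes a simple average for illustration.

```python
# Each entry: (delay of the path used to detect the fault,
#              longest possible path delay through that fault site), in ns.
faults = [
    (4.6, 4.6),   # detected along its longest path -> ratio 1.0
    (3.1, 4.6),   # detected along a shorter path
    (1.2, 4.0),   # detected along a much shorter path
]

per_fault = [used / longest for used, longest in faults]
delay_coverage = 100.0 * sum(per_fault) / len(per_fault)

print(f"{delay_coverage:.1f}%")  # -> 65.8%
```

A plain transition pattern set tends to look like the second and third entries, while SDD patterns push each ratio toward 1.0, which is why the two pattern sets can report the same transition fault coverage yet very different delay test coverage.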
For many designs, due to the trade-offs of test coverage versus additional patterns and runtime, it probably makes sense to target the faults with the smallest slack numbers in the design with SDD patterns, and then create additional transition test patterns for the rest of the design. Of course, your mileage may vary, as they say, depending on how fast you need to go and what you are trying to accomplish. In some cases, results will vary based on the slack differences between paths. If the design was optimized so that path delays are mostly balanced, then transition patterns that use any path could be sufficient for at-speed test.
Because this is a newer fault model, there have not been many papers or articles published yet with actual silicon test results. A poster from a large IC manufacturing company at the 2008 International Test Conference showed some good results on one of the company’s chips. This production device already had a high-quality test suite that included transition test patterns, and the defective parts per million (DPM) was below 50. When they added new SDD test patterns and tested over 500K devices in production, these new patterns uniquely detected an additional 30 DPM of the devices that passed the transition test patterns. That’s more than a 50% improvement in DPM for this chip!
I know some companies are now including SDD testing in production test. Others are still evaluating it on a subset of their devices. If high-quality test is important to your products, regardless of the speed that you need, I encourage you to explore using SDD test patterns for your design if it uses a 90 nm process or smaller.
by Bruce Swanson
About the Author:
Bruce Swanson is a Technical Marketing Engineer in the Silicon Test division at Mentor Graphics. He received an MS in applied information management from the University of Oregon and a BS in computer engineering from North Dakota State University. Bruce has over 20 years of experience in EDA and computer hardware design.