Evaluating Parallel Test

Posted on February 27, 2015 by Doug Shelton

Testing multiple devices in a single insertion has long been the most promoted method for reducing costs on a production test floor. It is generally presented to management in terms of throughput gain: a simple multiple of the number of units being tested in parallel.

In the real test environment, however, this gain is rarely achieved, and the actual formula for determining the true cost of a parallel test solution is a bit more involved.

To better understand throughput gain from a parallel test system, we will borrow two basic concepts from bipolar transistor theory: beta and alpha. We can define beta as the number of units in a single test insertion, or potential gain. Alpha is the ratio of actual flow (throughput) to potential flow, and it is always a value less than 1. The actual throughput gain of the test system is therefore beta multiplied by alpha.
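A minimal sketch of that relationship, using illustrative numbers rather than measurements from any real test floor:

```python
# Effective throughput gain of a parallel test insertion.
# beta = sites per insertion; alpha = actual/potential flow (< 1).
def throughput_gain(beta: int, alpha: float) -> float:
    return beta * alpha

print(throughput_gain(8, 1.0))  # the promised ideal: 8.0
print(throughput_gain(8, 0.7))  # the floor reality at alpha = 0.7: 5.6
```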

While beta is an integral value that can be easily communicated, alpha is more complex. It is a cocktail of ATE limitations, test techniques designed to offset those limitations, low-pin-count Design-for-Test strategies to increase insertion sites, and overstepping loss at wafer probe, all of which have compounding effects on this factor. Disconnects between the design intent and the test implementation can further degrade alpha in significant and unexpected ways.

The main reason alpha is not treated as a serious concern is the assumption that it can be mitigated simply by adding more units to the test insertion. It is further assumed that test engineers will fully understand the implications and account for them when creating test programs.

These assumptions are fraught with peril, because the test engineering focus is primarily on coverage and efficacy. The increased mechanical load of adding more units can easily erase any expected throughput gains through device contact problems during a test run, a variable component of alpha. In other words, there is a point at which an increase in parallelism may actually decrease throughput.

There are two ways to confront the impact of alpha on a parallel test system:

  1. Accurately assess alpha within the current test context.
  2. Engineer a test methodology to specifically target alpha.

The most direct method is to define alpha accurately within the context of a cost/benefit model. This yields the point at which the count of devices in the insertion reaches diminishing returns, and it defines the optimal level of parallelism for each specific test solution. The less attractive path is dedicating engineering resources to the task of improving alpha. For the money, ATE Resource Augmentation is perhaps the most effective means of using dynamic hardware interfaces to directly offset some of the losses injected by the overall ATE system and probe environment.
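To make the diminishing-returns point concrete, here is a hedged sketch in which alpha degrades by a fixed amount per added site. The 4% per-site penalty is a hypothetical assumption chosen for illustration; a real model would be fit to measured contact-failure and retest data:

```python
# Hypothetical alpha model: each added site costs 4% of alpha.
def alpha(beta: int, per_site_loss: float = 0.04) -> float:
    return max(0.0, 1.0 - per_site_loss * (beta - 1))

def effective_gain(beta: int) -> float:
    return beta * alpha(beta)

best = max(range(1, 33), key=effective_gain)
print(best, round(effective_gain(best), 2))  # 13 6.76

# Under this assumption, gain peaks at beta = 13 (~6.8x); adding
# sites beyond that point actually reduces throughput.
```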

Attempting to improve alpha carries a large upfront cost and, at first glance, little apparent benefit. That is, until you consider that just a 20% improvement in throughput gain can overcome this cost many times over on a large-beta parallel system. In either case, knowing where you stand with the efficiency of your parallel test solution can pay big dividends when the time comes to execute it.
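A back-of-the-envelope check on that claim; every figure below is a hypothetical assumption, not data from any real program:

```python
units_per_year = 50_000_000        # high-volume, large-beta program
test_cost_per_unit = 0.05          # dollars at current throughput
engineering_cost = 150_000         # one-time cost to improve alpha

baseline = units_per_year * test_cost_per_unit   # $2.5M per year
improved = baseline / 1.20                       # 20% higher throughput
annual_savings = baseline - improved             # ~$417K per year

print(round(engineering_cost / annual_savings, 2))  # pays back in ~0.36 years
```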


Doug Shelton

Doug Shelton is the CEO of Unified Test & Development, a full-service test engineering company specializing in custom high-performance semiconductor test support for manufacturing and development. His business helps engineering teams by evaluating parallel test alpha and providing recommendations or solutions for improving this factor. Unified Test & Development also partners with Talent 101 to provide these services to semiconductor engineering teams.