Next-generation testers perform most of the heavy lifting at the wafer level.

Better Manufacturing

Two months ago we looked at the concept of design for dismantling and the probable impact this will have on future technology. In the preamble, we mentioned design for test, design for manufacture and design for excellence, although we dwelled on the future of dismantling.

Recent events suggest a more rigorous emphasis on DfT is needed, since it is easy to overlook major factors in production. When I wrote the September column, I naively thought that everyone knew and appreciated the DfT requirements. How wrong I was.

In the U.K., an organization called the Smart Group (smartgroup.org), similar to the SMTA and IPC’s Technet listserv, holds regular seminars that keep European manufacturers and designers up to date and encourage them to maintain efficiency and a customer-based focus. The Smart Group recently held an in-depth seminar on DfT and was surprised by how much everyone knew in theory but how little they put into practice.

The format began with the theory of what is available – a spectrum of test techniques, from the simplest to the most complex – followed by user experiences of implementing some of those techniques and the pitfalls that can arise. One equipment supplier described how to follow the trends elucidated by users, but also how to lead the pack with innovative ideas, centering on data handling as the main platform on which to collect results and to develop processes and customer reliability factors. The final talk covered how life testing of products (HALT/HASS) can put the total cost of development, production and repair handling into proper perspective.

The vast majority of users are familiar with the classic technologies, and many employ AOI, x-ray, ICT and functional test strategies. Some use flying probe, but almost all use standard techniques routinely to investigate manufacturing defects. Yet few employ more advanced techniques such as built-in self-test (BIST) or JTAG boundary scan for in-depth circuit and component analysis, because these must be designed in. Some of the more advanced users, such as defense and avionics, apply such design-stage techniques regularly, but it seems too few others employ the correct techniques to ensure customers gain the best possible results.

The chosen test strategy must encompass all the fundamentals of the manufacturer’s basic portfolio, but it must also cover the customer’s needs. It must be a balanced test strategy, covering process capability, inspection and test capability, and the customer’s targets. Unfortunately, no single test strategy covers every requirement, so the user must employ more than one. The next question, of course, is whether we should cover not only material defects but also design defects and reliability defects. The classic test techniques – AOI, ICT, functional, flying probe and x-ray – mainly output manufacturing defect data, which makes it all the more important to consider BIST or boundary scan to analyze how circuitry functions and how components react to one another. To do so, apply the appropriate design considerations before anything is committed to hardware.

At the moment, few component manufacturers offer full BIST or boundary scan, so possibilities are limited. But, the technology has proven capable and the number of manufacturers and device types available with BIST or boundary scan is increasing. As a result, the costs are steadily decreasing and techniques are becoming worthy of consideration for a range of products.
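To make the boundary scan plumbing concrete, here is a minimal sketch that bit-bangs the IEEE 1149.1 TAP to read a device’s 32-bit IDCODE – typically the first sanity check that a scan chain is alive. The four GPIO helpers (jtag_set_tms, jtag_set_tdi, jtag_clock, jtag_read_tdo) are hypothetical stand-ins for whatever drives TCK/TMS/TDI/TDO on a given board, so the sketch compiles but needs real pin drivers to run.

```c
/* Minimal sketch: bit-banging the IEEE 1149.1 TAP to read a 32-bit
 * IDCODE. The four helpers below are hypothetical board-support
 * routines mapped to whatever pins drive the JTAG port. */
#include <stdint.h>

extern void jtag_set_tms(int level);
extern void jtag_set_tdi(int level);
extern void jtag_clock(void);          /* one full TCK pulse */
extern int  jtag_read_tdo(void);

/* Advance the TAP one state: set TMS, then pulse TCK. */
static void tap_step(int tms)
{
    jtag_set_tms(tms);
    jtag_clock();
}

uint32_t read_idcode(void)
{
    uint32_t idcode = 0;

    /* Five TCK cycles with TMS high force Test-Logic-Reset; compliant
     * parts then select IDCODE (or BYPASS) automatically. */
    for (int i = 0; i < 5; i++)
        tap_step(1);

    tap_step(0);                       /* Run-Test/Idle  */
    tap_step(1);                       /* Select-DR-Scan */
    tap_step(0);                       /* Capture-DR     */
    tap_step(0);                       /* Shift-DR       */

    /* Shift the ID out LSB first, sampling TDO before each shifting
     * clock edge; TMS goes high on the last bit to leave Shift-DR. */
    for (int bit = 0; bit < 32; bit++) {
        jtag_set_tms(bit == 31);
        if (jtag_read_tdo())
            idcode |= 1u << bit;
        jtag_clock();
    }

    tap_step(1);                       /* Update-DR      */
    tap_step(0);                       /* Run-Test/Idle  */
    return idcode;
}
```

Once the chain answers with valid IDCODEs, the same four signals carry the interconnect vectors and BIST triggers discussed above.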

So, after the preamble, what are the reasons for DfT? According to a user who specializes in space and defense electronics, DfT can be defined two ways:

1. A design process that creates products and processes with automatic or integrated test and measurement systems.

2. A methodology of ensuring that a device or unit can be tested effectively during manufacture. The elements of testability must be included in the design from the concept stages.

The second definition is the fundamental premise for considering DfT. BIST and the other methodologies offer benefits, but they work only if they are developed while the design is in its infancy and are properly assessed, so they can give detailed information to the manufacturer and then to the user or end-customer.

What surrounding factors affect DfT?

  • Rising equipment costs.
  • Increasingly complex designs, which result in increased test times.
  • Increasingly restricted circuit accessibility, particularly when BGAs and CSPs are used.
  • Increasing difficulty in maintaining high yields.

DfT can reduce the number or level of test equipment required, since some functions can be built in. Test times can be cut because some of the techniques are designed into the devices themselves, permitting faster access and minimal setup times. DfT also enhances yield factors by permitting analysis and control of the tests carried out on a particular device.
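As a simple illustration of a test function built in to the product, the sketch below is a march-style RAM check that could run at power-up and hand the tester a single pass/fail result. The region size is an assumption for the example, and a static buffer stands in for the physical RAM window so the sketch runs as-is.

```c
/* Sketch of a power-up built-in test: a simple march-style RAM check
 * that reports a single pass/fail result. A static buffer stands in
 * for the physical RAM window; on real hardware `ram` would point at
 * the region under test instead. */
#include <stdint.h>
#include <stdio.h>

#define TEST_SIZE 0x1000u              /* span to test; illustrative */

static volatile uint8_t ram[TEST_SIZE];

int bist_ram_march(void)
{
    /* Ascending pass: write 0x55 everywhere, then verify and flip. */
    for (uint32_t i = 0; i < TEST_SIZE; i++)
        ram[i] = 0x55;
    for (uint32_t i = 0; i < TEST_SIZE; i++) {
        if (ram[i] != 0x55)
            return -1;                 /* stuck-at or addressing fault */
        ram[i] = 0xAA;
    }
    /* Descending pass: verify the inverted pattern. */
    for (uint32_t i = TEST_SIZE; i-- > 0; )
        if (ram[i] != 0xAA)
            return -1;
    return 0;
}

int main(void)
{
    printf("RAM BIST: %s\n", bist_ram_march() == 0 ? "pass" : "FAIL");
    return 0;
}
```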

In response to the spiraling cost of test iron and the spread of BIST, the test industry itself is changing. Established production tester companies and startups are working on a new kind of tester that assumes the existence of extensive BIST. The new machines will provide little more than power, clock and scan-chain connections but vendors say even more changes are under development.

“We are looking at a generation of testers that could sell for under $500,000 instead of the current $3 million and up,” said Schlumberger strategic marketing manager Rudy Garcia. But Garcia sees another key change as well. With reduced tester functionality, a sharp reduction in the number of contacts the tester must touch on each die and the opportunity for parallel test of multiple dice, the center of gravity for test is moving from packaged dice to wafer sort.

A BIST-ready tester can probe scan-chain contacts, clock and supply lines on a large number of dice at once, launch multiple BIST sequences on each die and accomplish much of the production test job at the wafer level. The savings are obvious.
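In outline, such a flow might look like the sketch below: trigger the on-die BIST on every probed site in one touchdown, then harvest one pass/fail bit per die. The site count and the prober hooks (start_bist, bist_done, bist_passed) are hypothetical and stubbed so the logic can execute.

```c
/* Sketch of a wafer-sort touchdown with BIST-ready dice: the tester
 * only sequences and waits, since the tests run on the dice themselves.
 * All hooks are hypothetical stand-ins for a real prober/tester API. */
#include <stdbool.h>
#include <stdio.h>

#define SITES 64   /* dice contacted in one touchdown; illustrative */

/* Stubbed hooks so the flow can run; a real implementation would talk
 * to the prober and to each die's scan-chain contacts. */
static bool done[SITES], result[SITES];
static void start_bist(int s)  { done[s] = true; result[s] = true; }
static bool bist_done(int s)   { return done[s]; }
static bool bist_passed(int s) { return result[s]; }

int main(void)
{
    int good = 0;

    /* Trigger the on-die tests on every probed site at once. */
    for (int s = 0; s < SITES; s++)
        start_bist(s);

    /* Harvest one pass/fail bit per die. */
    for (int s = 0; s < SITES; s++) {
        while (!bist_done(s))
            ;                          /* a real flow would enforce a timeout */
        if (bist_passed(s))
            good++;
    }
    printf("%d of %d dice passed at wafer sort\n", good, SITES);
    return 0;
}
```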

From the user’s perspective, a number of fundamental procedures must be considered. These include issues we face daily without considering them properly. Here’s a short list:

1. Authority to influence designs.

  • A design process incorporating DfT reviews/drawing sign-off.
  • Buy-in of engineering management.

2. Working relationships.

  • Relationship with the design team is key.
  • The test development group should be colocated with the hardware design team.

3. Visibility of projects.

  • Awareness of upcoming projects/technologies/boards.
  • Test development team should be involved in bid and estimating processes.

What benefits should we expect?

1. Risk reduction.

  • No surprises when the first-off boards are tested.
  • Device/board functionality is well understood early in the design process.
  • Engineering estimates are easier to deliver against.
  • Raises awareness.
  • Features implemented within the design that ease design proving.

2. Improved diagnostics and fault coverage.

  • Design partitioning can improve achievable fault coverage.
  • Improved diagnostics.

3. Reduced complexity of the test development activity.

  • Design partitioning simplifies test vector development.
  • Common test development tools across projects.
  • Common test strategies across projects/products.
  • Reuse design proving tests/BIST triggers.
  • Implement simple loop-back testing wherever possible (see the sketch after this list).

4. Test equipment reuse.

  • Reuse of BIST/design proving tests for the production test solution.
  • Common test fixtures/adaptors.
  • Boundary scan test vectors can be embedded on the PEC for reuse as part of BIST.

5. Improved through-life support.

  • Store BIST/embedded test results within on-board nonvolatile memory.
  • Boundary scan tests can be run in the field.
  • Firmware changes are easier to implement.
  • Unit under test history can be embedded on the board.
  • Reduced costs through equipment reuse.
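As promised under point 3 above, here is the whole loop-back idea in miniature: drive a known pattern out of a port whose transmit path is looped to its receive path – externally in the fixture or via an internal loop-back mode – and check that the same bytes come back. The UART calls are hypothetical and stubbed with a one-byte buffer so the test runs stand-alone.

```c
/* Minimal loop-back test sketch. The UART driver calls are hypothetical;
 * the one-byte stub behaves exactly like TX wired back to RX. */
#include <stdint.h>
#include <stdio.h>

static uint8_t loop;
static void    uart_send_byte(uint8_t b) { loop = b; }
static uint8_t uart_recv_byte(void)      { return loop; }

int uart_loopback_test(void)
{
    /* Patterns chosen to toggle every data bit both ways. */
    static const uint8_t pattern[] = { 0x00, 0xFF, 0x55, 0xAA, 0x5A };

    for (unsigned i = 0; i < sizeof pattern; i++) {
        uart_send_byte(pattern[i]);
        if (uart_recv_byte() != pattern[i])
            return -1;                 /* open, short or stuck data bit */
    }
    return 0;
}

int main(void)
{
    printf("loop-back: %s\n", uart_loopback_test() == 0 ? "pass" : "FAIL");
    return 0;
}
```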

This is only a fraction of the extensive list. Are there pitfalls? Yes, but they, too, fall under the category of common sense; knowing what you are doing will prevent most of them:

1. Understand the functionality.

  • Device and board functionality must be understood.
  • Implement methods of disabling FPGA configuration on power-up.
  • “Test mode pins” referenced in device datasheets do not always have test functionality behind them.
  • Do not assume BSDL files are 100% verified against hardware.
  • Understand clock/reset schemes and the impact on the test method.

2. OBP/ISP programming.

  • Firmware programming, especially flash EEPROMs, via boundary scan is very slow.
  • Understand the transportability of vectors between the various platforms.

3. Boundary scan signal integrity.

  • Select an appropriate buffer and termination scheme to limit reflections and over/undershoot.
  • Ensure TCK, TMS, TDI, TDO are routed as critical signals on the PCB.
  • Implement screening on boundary scan interface cables.

4. Fixturing issues.

  • Fixture requirements/mechanical constraints/cooling.
  • Connector placement and layout.
  • Can you influence the physical interfaces to simplify the test strategy?

5. Fault coverage.

  • How do you accurately and quickly determine fault coverage on boards combining various test methods?
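One simple bookkeeping approach to that question: index every fault in the model, record each test method’s coverage as a bitmap, OR the bitmaps together and count the set bits, so a fault covered by two methods is counted once rather than twice. The fault count and the three overlapping coverage ranges below are purely illustrative.

```c
/* Combining fault coverage across test methods by taking the union of
 * per-method coverage bitmaps. All figures here are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define NUM_FAULTS 1000                      /* size of the fault model */
#define WORDS ((NUM_FAULTS + 63) / 64)

static unsigned count_bits(const uint64_t map[WORDS])
{
    unsigned n = 0;
    for (int w = 0; w < WORDS; w++)
        for (uint64_t v = map[w]; v; v &= v - 1)  /* clear lowest set bit */
            n++;
    return n;
}

int main(void)
{
    uint64_t ict[WORDS] = {0}, aoi[WORDS] = {0}, bscan[WORDS] = {0};
    uint64_t all[WORDS];

    /* Per-method coverage, normally imported from each tool's report;
     * these overlapping ranges are made up for the example. */
    for (int f = 0;   f < 600; f++) ict[f / 64]   |= 1ull << (f % 64);
    for (int f = 400; f < 800; f++) aoi[f / 64]   |= 1ull << (f % 64);
    for (int f = 700; f < 900; f++) bscan[f / 64] |= 1ull << (f % 64);

    /* Union, not a sum: overlapping coverage counts once. */
    for (int w = 0; w < WORDS; w++)
        all[w] = ict[w] | aoi[w] | bscan[w];

    printf("combined coverage: %.1f%%\n",
           100.0 * count_bits(all) / NUM_FAULTS);  /* 90.0%, not 120% */
    return 0;
}
```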

If the design lacks space for test pads or MDA-style inspection, you may be forced to rely on boundary scan or BIST, so it pays to think first, act later.

 

Peter Grundy is director of P G Engineering (Sussex) Ltd. and ITM Consulting (itmconsulting.org); peter.grundy2@btinternet.com. His column appears semimonthly.
