The guarantor of chip design: a detailed look at design-for-test technology

Chip design is one of the most important industries in the world; it is fair to say that chip design largely determines a country’s manufacturing level. A chip design project passes through multiple stages, each with a different focus. To ensure the reliability of the chip design process, testable design (design-for-test) technology is particularly important. This article therefore introduces the technology in detail.

The purpose of testing is to determine, as quickly as possible, whether a chip is working correctly with high confidence, though not absolute certainty. Chip design teams now generally recognize that this requires adding DFT (design-for-test) circuitry to the chip, and third-party tool and IP (intellectual property) companies can help achieve this goal.

Debugging is completely different. Its purpose is not simply to determine that a chip is malfunctioning but to find the cause of the malfunction. The investigation is not limited to a few seconds on a tester; it may last for weeks. It is not automatic but requires the participation of the design team, and it recurs at discrete points in the design cycle: first-silicon bring-up, reliability studies, and field-failure analysis.

Given this, one might expect a good DFT strategy to meet the needs of chip debugging as well, and in fact it often does. As SoC (system-on-chip) designs grow more complex, first-class design teams report that they now devote more planning, implementation effort, and chip area to circuits that support debugging than to circuits that support testing.

“Ten years ago, with three metal layers, it was not a big problem,” said Tony Chiang, senior vice president of engineering at Bay Microsystems. “If there was a problem with the chip, you could probe the metal directly to view the circuit, and rewire it with a focused-ion-beam system. Now, with nine metal layers and 0.2 µm metal pitch, the problem is no longer that simple. The circuit must be designed to be controllable and observable from outside the chip without exceeding our cost and schedule budgets.”

That remark neatly sums up the state of the debug-design community.

Technology overview

Debugging and DFT are not completely separate. As an example, Kris Hublitz, senior director of test and development engineering at Broadcom, cites the company-level team of more than 70 engineers who work with Broadcom’s chip design teams on both debug and test. Hublitz has repeatedly credited DFT vendor LogicVision as a key partner in Broadcom’s chip-debug strategy.

Others agree. “Debug design and production testing are not unrelated,” said David McCall, vice president of CSR (Cambridge Silicon Radio). “The two have similar starting points.”

Many design managers emphasize that this starting point is the controllability and observability of the circuit. Debugging resembles production testing in its basic problem: set the circuit into a known state, let it run, and observe its behavior. At medium scales of integration, boundary-scan technology accomplishes this task effectively. Because such a chip has few internal states, it can be tested exhaustively: step the inputs through a series of known states, clock the circuit, and observe the outputs.
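
To make that set-a-state, clock, observe loop concrete, here is a minimal behavioral sketch in Python of a scan-style test: shift a known state into a register chain, apply one functional clock, and shift the result out. The chain length and the “logic cloud” between the registers are invented for illustration; real scan follows IEEE 1149.1 boundary scan or internal scan with ATPG.

```python
# Minimal behavioral model of a scan chain: shift a known state into a
# register chain, pulse one functional clock through the logic, then
# shift the captured result back out. All names are invented for
# illustration; real chains follow IEEE 1149.1 / ATPG flows.

def shift_in(chain_len, pattern):
    """Serially load a known state into the scan chain (controllability)."""
    assert len(pattern) == chain_len
    return list(pattern)

def functional_clock(state, logic):
    """One capture cycle: next state = logic applied to current state."""
    return logic(state)

def shift_out(state):
    """Serially unload the captured state (observability)."""
    return list(state)

def example_logic(bits):
    """Invented 'logic cloud' between registers: rotate and invert bit 0."""
    rotated = bits[1:] + bits[:1]
    rotated[0] ^= 1
    return rotated

if __name__ == "__main__":
    state = shift_in(4, [1, 0, 1, 1])               # set a known state
    state = functional_clock(state, example_logic)  # let it run one cycle
    print("captured:", shift_out(state))            # observe the behavior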

With the advent of microprocessors, things became more complicated. A microprocessor has so many internal states that simply applying known vectors to the inputs and observing the outputs is not particularly effective. Early on, the industry tried a variety of techniques to make microprocessors debuggable, from scanning every group of logic between registers to relying on the trace, breakpoint, and single-step functions familiar from software debugging, and ultimately combining the two approaches.
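
The software-style half of that combination can be pictured with a toy interpreter. The three-instruction machine below is entirely hypothetical; it only shows how a debug kernel records a trace and reports a breakpoint while the “processor” runs.

```python
# Hypothetical three-instruction machine with breakpoint and trace
# facilities, illustrating the software-style debug functions the
# article describes. The instruction set and registers are invented.

def run_with_debug(program, breakpoints, max_steps=100):
    pc, acc, trace = 0, 0, []
    for _ in range(max_steps):
        if pc >= len(program):
            break
        if pc in breakpoints:
            print(f"breakpoint hit at pc={pc}, acc={acc}")
            breakpoints.discard(pc)          # report once, then resume
        op, arg = program[pc]
        trace.append((pc, op, arg, acc))     # trace-buffer entry
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":                    # jump if acc is nonzero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc, trace

if __name__ == "__main__":
    prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1)]
    acc, trace = run_with_debug(prog, breakpoints={1})
    print("final acc:", acc)
    print("last trace entries:", trace[-3:])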

Designers today use much the same tool suite for the digital portion of an SoC, and other techniques for the mixed-signal and analog circuits. But no single method can cover an entire complex SoC. The debug-design process therefore consists of dividing the system into independently debuggable modules, implementing a debug strategy for each module, and integrating those strategies into a whole-chip solution, so that the individual modules present similar user interfaces while minimizing the chip resources required. Finally, the designer must use these debug resources to verify that the fully integrated chip is both controllable and observable, because some problems cannot be diagnosed from isolated functional modules alone.

Digital SoC

In its most basic form, an SoC is a CPU core surrounded by simple, often programmable peripheral modules and memory. In most cases the CPU core is third-party IP, and it comes with at least one option for internal debug of the core, a point software-development teams often insist on. This debug kernel is combined with standard DFT circuitry, which the design team implements for the peripherals to provide the observability and controllability needed to isolate faults. The debug kernel can drive the CPU core to stimulate the asynchronous parts of the chip and capture the results. By letting the CPU read and write peripheral registers, it can also stimulate and observe the peripherals, usually allowing designers to localize a fault to a level at which the scan chains become manageable.
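
In practice, using the CPU to stimulate and observe peripherals amounts to a register write/read-back loop. The sketch below models that against a fake memory-mapped peripheral; the register map, test patterns, and the deliberately stuck register are all invented for illustration.

```python
# Sketch of CPU-driven peripheral debug: write known values to
# memory-mapped registers and read them back to localize a fault.
# The register map and fault model here are invented.

class FakePeripheral:
    """Stand-in for a memory-mapped peripheral with a few registers."""
    def __init__(self):
        self.regs = {0x00: 0x0, 0x04: 0x0, 0x08: 0xDEAD}

    def write(self, addr, value):
        if addr != 0x08:                 # 0x08 models a stuck register
            self.regs[addr] = value

    def read(self, addr):
        return self.regs[addr]

def register_walk(dev, addrs, patterns=(0x0, 0xFFFF, 0xA5A5, 0x5A5A)):
    """Classic walk: write each pattern to each register, read it back."""
    faults = []
    for addr in addrs:
        for pat in patterns:
            dev.write(addr, pat)
            got = dev.read(addr)
            if got != pat:
                faults.append((hex(addr), hex(pat), hex(got)))
    return faults

if __name__ == "__main__":
    print("mismatches:", register_walk(FakePeripheral(), [0x00, 0x04, 0x08]))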

[Figure 1]

But few of today’s SoCs are that simple (Figure 1). More often, a chip has one or several CPU cores plus a number of different processing cores. Some peripheral controllers are so complex that merely stimulating them from the CPU and observing the results cannot diagnose them effectively. There are also multiple clock domains, usually asynchronous to one another. Such chips need more effective debug methods.

Several strategies are available in this situation. One straightforward approach, described by Broadcom’s Hublitz, is to make the inputs and outputs of all major functional modules accessible at the chip’s pins. This approach requires a great deal of multiplexing. In a design with many I/O and memory interfaces, the pin count is already constrained before any debug access is added, so designers must reuse pins for debug access. And because simply bringing out each complex module’s inputs and outputs may be little more useful than exercising the module from the main CPU core, designers may also need to bring out internal signals.
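
The pin-reuse scheme Hublitz describes can be modeled as a wide multiplexer whose mode field selects which module’s internal signals drive a small set of shared debug pins. The module names and the four-pin budget below are invented; the point is the routing structure, not any particular chip.

```python
# Behavioral sketch of debug pin multiplexing: many internal signal
# groups share a few physical pins, selected by a debug-mode field.
# Module names and the 4-pin budget are invented for illustration.

DEBUG_PINS = 4  # physical pins reused for debug access

# Each debug mode exposes one module's internal signals on the pins.
SIGNAL_GROUPS = {
    0: ("cpu", ["pc0", "pc1", "stall", "irq"]),
    1: ("dma", ["req", "ack", "burst", "err"]),
    2: ("mac", ["tx_en", "rx_dv", "crc_ok", "col"]),
}

def drive_debug_pins(mode, internal_state):
    """Route the selected module's signals onto the shared pins."""
    module, names = SIGNAL_GROUPS[mode]
    values = [internal_state[module][n] for n in names]
    assert len(values) <= DEBUG_PINS
    return dict(zip(names, values))

if __name__ == "__main__":
    state = {
        "cpu": {"pc0": 1, "pc1": 0, "stall": 0, "irq": 1},
        "dma": {"req": 1, "ack": 1, "burst": 0, "err": 0},
        "mac": {"tx_en": 0, "rx_dv": 1, "crc_ok": 1, "col": 0},
    }
    for mode in SIGNAL_GROUPS:
        print(mode, drive_debug_pins(mode, state))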

Taken together, all this multiplexing and signal routing may simply be impractical. Worse, the extra interconnect it creates means that although every module can technically be reached from the pins, the access speed falls far short of requirements, and that is a serious problem. “We have to test the circuit at full speed, especially the interconnect between modules,” Hublitz said. “This is especially true for 65 nm process chips. Otherwise, failures in the chip will slip through.”

Hublitz emphasized that a good DFT strategy backed by ATE (automatic test equipment) can greatly help the debug process. “Our first round of debug is carried out on the ATE system,” he said. “Once we know the chip won’t melt, we give it to the designers and work with them.” Hublitz added that a chip may keep coming back to the Broadcom test floor so the ATE system can collect large amounts of data or perform at-speed checks. “Having ATE capability in-house is really useful,” he said. “We have 28 systems, and we add a new one about every quarter, mainly for debugging. Debugging new chips is the main purpose of our equipment.”

Even with an ATE system, certain signals and states still cannot be reached from a probe card. Other strategies are needed: on-chip stimulus and logic analysis. Sometimes the only effective way to stimulate a module at speed and capture its behavior is to build the circuitry into the module itself. According to Chiang, Bay organizes its network-processing chips as a series of independent processors and uses this technique extensively. Important modules can have their own debug kernels, including single-step and breakpoint capability, plus trace buffers that capture internal state in real time. Jun-wen Tsong, Bay’s logic design director, describes this approach as a multi-stage verification process.

“First, we exercise the chip at the module level. In this mode, each module is isolated: we can inject enough state to start it running and then observe its behavior independently.” These tests must run at full clock speed to be meaningful. In this way, the designers can debug each stage of the series of processors. At this point, they also isolate the I/O ring from the internal modules so that inputs can pass directly to the output FIFOs. Once Bay’s designers have verified the I/O ring and the internal modules independently, they combine the two and test the chip as a whole.

Collecting data with the entire chip running at full speed demands comprehensive planning. The debug kernel in an individual processor must recognize not only local instructions and data words but also the larger structures, such as data packets, that matter to chip operation. In addition, a 36-bit bus runs across the entire chip and can carry key signals from any module to the package pins in real time, so a debug engineer can watch a module operate while the chip processes packets at full speed. The hardware also monitors specific assertions in real time, such as FIFO full/empty conditions. Broadcom takes a similar approach: Hublitz told us that the company’s wireless-LAN chips contain enough internal debug hardware that engineers can track the vector magnitude across the entire chip, from input through baseband to output.
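
A FIFO full/empty assertion monitor of the kind described here can be modeled as a checker that watches every push and pop and records the cycle at which an invariant breaks. The depth and traffic pattern below are invented; in silicon this would be a small block of logic sampling the FIFO flags.

```python
# Sketch of a real-time hardware assertion monitor (FIFO full/empty
# checks). Depth and traffic are invented for illustration.

from collections import deque

DEPTH = 4

def run_fifo_with_monitor(events):
    """events: sequence of ('push', data) or ('pop', None), one per cycle."""
    fifo, violations = deque(), []
    for cycle, (op, data) in enumerate(events):
        if op == "push":
            if len(fifo) == DEPTH:           # assertion: never overflow
                violations.append((cycle, "push on full"))
            else:
                fifo.append(data)
        elif op == "pop":
            if not fifo:                     # assertion: never underflow
                violations.append((cycle, "pop on empty"))
            else:
                fifo.popleft()
    return violations

if __name__ == "__main__":
    traffic = [("push", 1), ("push", 2), ("pop", None), ("pop", None),
               ("pop", None)]                # last pop underflows
    print("assertion failures:", run_fifo_with_monitor(traffic))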

Once a problem is isolated to a function within a module, debug engineers can apply low-level, DFT-style diagnostic tools. Bay’s distinguished engineer and chip architect Barry Lee said: “We have trigger and single-step clock control in the module, and we can scan out the signals we consider important. Ideally, we know exactly how a particular pipeline executes, down to the pin and register level.”

The analog challenge

When it comes to analog circuits, everything is different. “We separate the analog part from the digital circuitry for debugging,” Lee explained. “The debug techniques for the two are different. In the analog domain, to open a loopback path you may have to take all the debug signals out through the package pins. And because the active elements in an analog circuit are not synchronized to a clock, their behavior cannot simply be captured.”

In one respect, analog circuits resemble digital ones: as geometries shrink, designers have watched their ability to probe and experimentally rework a design erode, observes Paul Ferguson of Analog Devices. “We were used to modifying circuits with a laser cutter at the probe station. Later, as geometries shrank, we moved to focused-ion-beam systems, which are practical at pitches of 250 nm or greater. In practice that means on a 65 nm process, only the top two metal layers can be modified.”

This situation has triggered an interesting change in analog design style, Ferguson said. “Recently we were working on a PLL in a 90 nm design. We found we had to complete the VCO (voltage-controlled oscillator) first in order to build a suitable model. So we brought the circuits that adjust gain and other parameters up to the reachable upper metal layers. That turned out to be really good for the debug process.”

Matt Ball, a mixed-signal project engineer at Jennic, a single-chip radio manufacturer, also emphasized the importance of putting key analog signals in accessible locations. “We add as much programmability and digital adjustment as possible,” he said. “Some things must still be trimmed in metal, and we keep those locations on a single mask layer to preserve accessibility.”

Besides routing real-time signals to the upper metal layers or the package pins, today’s analog designers have other weapons for setting up and observing the state of a circuit. The most important, on fine geometries, is close cooperation between the analog circuits and the digital circuits that calibrate and monitor them.

CSR’s McCall said that in its designs, digital monitoring circuits with an ADC can observe multiple points in the analog section. These points give the debug engineer access to the analog section’s behavior by bringing the converter’s output out of the package. “Important analog signals usually get digitized at some point anyway,” Ball said. “Why not sample them, filter with the on-chip DSP, and output a result we can look at?”
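
Ball’s sample, filter, and report flow can be sketched numerically: digitize a node with the auxiliary converter, correlate against the tone of interest, and report one figure of merit. The sample rate, tone frequency, and amplitudes below are all invented; only the shape of the flow matters.

```python
# Numeric sketch of "digitize, filter with on-chip DSP, report":
# sample a (simulated) analog node, select the tone of interest,
# and estimate its amplitude. All frequencies/levels are invented.

import math

FS = 1_000_000          # sample rate, Hz (assumed)
F_TONE = 50_000         # tone at the analog node, Hz (assumed)
N = 2048

def sample_node():
    """Stand-in for the aux ADC: a 50 kHz tone plus a small spur."""
    return [0.5 * math.sin(2 * math.pi * F_TONE * n / FS)
            + 0.01 * math.sin(2 * math.pi * 217_000 * n / FS)
            for n in range(N)]

def tone_amplitude(samples, freq):
    """Single-bin DFT correlation at the target tone."""
    i = sum(s * math.cos(2 * math.pi * freq * n / FS)
            for n, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * freq * n / FS)
            for n, s in enumerate(samples))
    return 2 * math.hypot(i, q) / len(samples)

if __name__ == "__main__":
    amp = tone_amplitude(sample_node(), F_TONE)
    print(f"measured amplitude: {amp:.3f} (expected ~0.5)")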

Designing a filter or amplifier so that digital circuitry can adjust all of its important electrical characteristics may sound like a great deal of trouble. But it can be the difference between a chip that works the first time and one that needs two new metal mask layers before debug of the digital portion can even begin. Moreover, at 90 nm and below, designers face ever-increasing process variability, and these digital adjustments become necessary simply to yield a sufficient number of working chips.

How do signals get out? Where the signal’s accuracy and frequency permit, as with the IF (intermediate-frequency) signals on a radio chip, simple routing and analog multiplexing in a test mode are enough to bring the signal out of the package. “In the IF section, buffers are very useful,” Ball said. “Take the signal from an important node and send it to a pin, and you can see what you need to see.” Ferguson of Analog Devices agrees: “For debugging, you often don’t need much more accuracy than an analog multiplexer can provide in order to see an oscillation or a 20% gain error.”

If the signal cannot be routed out of the package, it can sometimes be routed to an on-chip data converter. “There is usually an auxiliary ADC on the chip to monitor chip temperature, battery voltage, and so on,” Ferguson explained. “For debugging, we put a big multiplexer in front of it to examine other nodes in the analog section. But be careful: the added measurement circuitry can disturb the circuits it measures. For example, switching on the multiplexer to observe a node can push a stable circuit toward oscillation, and if a debug signal inadvertently crosses a power domain, it can introduce parasitic current paths you have never encountered before.”

Ball echoes this warning: the method must be chosen carefully. “The 10 fF or 20 fF you add by buffering an analog signal can change the behavior of the node,” he said. Jennic tends to build its debug plan only around areas that have caused trouble before, such as bandgap cells. “We prefer to add a bypass circuit to head off problems,” Ball added. This conservative thinking reduces the chance that the debug circuitry itself causes faults.

With planning, plus some luck and a bit of elegance, the functional modules themselves can be reused for debugging. Many analog signal paths terminate in data converters, so at least parts of them are already observable. Ferguson points out that a sigma-delta converter can easily be switched to work as a filter for observing an incoming analog signal, or its bit stream can be carefully routed to the pins, making both sides of the converter observable. Once the data is digitized, a CPU or DSP module can be used to condition and compress it, or to test assertions against it.
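
The sigma-delta trick can be illustrated offline: a first-order modulator model produces a one-bit stream, and a plain moving-average decimation filter recovers the slow input, which is roughly what observing the bit stream at the pins allows. The oversampling ratio and test signal below are invented simplifications.

```python
# Offline sketch of observing a sigma-delta bit stream: a first-order
# modulator model produces a 1-bit stream; a moving-average decimation
# filter recovers the slow analog input. OSR and signal are invented.

import math

OSR = 64   # oversampling ratio (assumed)

def first_order_sd(signal):
    """Idealized 1st-order sigma-delta: 1-bit quantizer in an error loop."""
    integ, bits = 0.0, []
    for x in signal:
        y = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer output
        bits.append(1 if y > 0 else 0)
        integ += x - y                    # integrate quantization error
    return bits

def decimate(bits, osr=OSR):
    """Moving-average filter plus downsample: crude but illustrative."""
    out = []
    for i in range(0, len(bits) - osr + 1, osr):
        window = bits[i:i + osr]
        out.append(2.0 * sum(window) / osr - 1.0)   # map {0,1} -> [-1,1]
    return out

if __name__ == "__main__":
    n = 64 * OSR
    slow = [0.4 * math.sin(2 * math.pi * 2 * i / n) for i in range(n)]
    recon = decimate(first_order_sd(slow))
    print("first recovered samples:", [f"{v:+.2f}" for v in recon[:8]])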

It is also possible to build debug intelligence, the equivalent of a simple network analyzer, into a module. A loopback path lets the transmitter and receiver check each other (Figure 2), and some circuits can even extract analog waveforms of the results. “In our gigabit PHY (physical-layer) designs, we capture some analog signals inside the PHY block,” Broadcom’s Hublitz said.
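
A loopback self-test reduces to generating a known pattern at the transmitter, routing it back through the receiver, and counting mismatches. The sketch below uses a PRBS-7 pattern, a common choice for link testing, with an invented channel model and injected bit errors.

```python
# Sketch of a loopback self-test: the transmitter sends a known
# pseudo-random pattern through a loopback path and the receiver
# counts bit errors. The channel model and injected fault are invented.

def prbs7(length, seed=0x7F):
    """PRBS-7 generator (x^7 + x^6 + 1), a common link-test pattern."""
    state, bits = seed, []
    for _ in range(length):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
        bits.append(newbit)
    return bits

def loopback_channel(bits, flip_at=()):
    """Model of the TX-to-RX loopback path; optionally inject bit flips."""
    return [b ^ 1 if i in flip_at else b for i, b in enumerate(bits)]

if __name__ == "__main__":
    tx = prbs7(1000)
    rx = loopback_channel(tx, flip_at={137, 602})   # injected impairment
    errors = sum(a != b for a, b in zip(tx, rx))
    print(f"bit errors: {errors} / {len(tx)}  (BER = {errors/len(tx):.1e})")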

[Figure 2]

Prospects

It is not hard to imagine that, early in system design, each functional module could be given enough self-test capability to diagnose itself during full-speed operation, exposing the results at a level the DFT scan chains can handle. This approach generally requires an input buffer or signal generator to stimulate the module, an output capture register or ADC to observe it, and enough internal breakpoint and trace capability to reveal the module’s inner workings. Some SoC design teams are already working on such plans. The actual implementation then becomes a compromise between the level of debug support the architect considers necessary and the cost the design can afford.

To push the concept further, a thorough system designer can retarget certain functional modules as signal sources or capture devices for other modules. The auxiliary ADC is one good example, and there are many more opportunities like it. For example, adding a fast data converter can turn a signal-processing module into a network analyzer or digital oscilloscope, and a small addition to the control logic can convert a buffer SRAM array into a trace buffer.
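
Retargeting a signal-processing module as an instrument essentially means pointing its arithmetic at a debug capture buffer. As a hypothetical example, the sketch below runs a plain DFT over a captured buffer to act as a crude spectrum analyzer; the capture length, sample rate, and two-tone stimulus are all invented.

```python
# Sketch of retargeting an on-chip DSP block as a debug spectrum
# analyzer: transform a capture buffer and report the dominant tones.
# Capture length and the two-tone stimulus are invented.

import cmath, math

def dft(samples):
    """Plain DFT stand-in for the DSP block's transform engine."""
    n = len(samples)
    return [sum(s * cmath.exp(-2j * math.pi * k * i / n)
                for i, s in enumerate(samples)) for k in range(n)]

if __name__ == "__main__":
    N, FS = 256, 256_000                     # capture size, sample rate
    buf = [math.sin(2 * math.pi * 8_000 * i / FS)
           + 0.3 * math.sin(2 * math.pi * 24_000 * i / FS)
           for i in range(N)]
    spec = dft(buf)
    mags = [abs(c) / (N / 2) for c in spec[:N // 2]]
    peaks = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2]
    for k in sorted(peaks):
        print(f"tone at {k * FS / N / 1000:.0f} kHz, amplitude {mags[k]:.2f}")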

Viewed this way, the chip’s functional modules become a rich pool of debug resources, repurposed with only a few multiplexers and mode switches. But the process requires foresight. Such an organization affects the floorplan and global routing, and it must be done at the start of the design, not during final implementation.

Ferguson believes tools could support this process as well. Mature tools can already install such structures automatically for test: scan chains, scan controllers, and vector generators. And DFT hardware is essential for register-level diagnosis. But no tool yet supports the creation of debug structures. Ferguson would like to see at least a checking tool that treats mixed-signal modules as things to be made observable and controllable and that can scan for simple errors. Ideally, a tool would work through a design and propose a debug architecture and flow. But that remains a problem for the future.
