Verifying at a Higher Level of Abstraction

Date: Jun 22, 2022
Type: In the News

Transaction-level modelling can help with certification-level verification if an FPGA application is safety-critical, says Krzysztof Szczur of Aldec.

 

Henderson, NV, USA – June 22, 2022 – FPGA designs for avionics applications are increasingly employing high-speed interface buses to deliver greater performance and, if the application is safety-critical, verifying the design for certification purposes is challenging.

 

Avionics buses use serial rather than parallel data transfer to reduce the number of wires needed in harnesses/looms, and they tend to be differential signals to reduce EMI and susceptibility. One protocol that is becoming popular in the avionics community, because of its standardization and widespread adoption as a means of connecting devices and subsystems, is PCI Express (PCIe). It is a high-speed interface with 8b/10b or 128b/130b line encoding schemes and delivers great performance thanks to strict impedance matching. Another benefit is that clocking is embedded within the signal.
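
As a rough illustration of what those two encoding schemes cost in overhead (this example is not part of the original article; the per-lane line rates are the standard published PCIe values), the short Python sketch below computes the effective per-lane throughput for the generations that use each scheme.

# Illustrative calculation: effective PCIe per-lane throughput after
# line encoding. Line rates are the standard published per-lane values.
GENERATIONS = {
    # name: (line rate in GT/s, data bits, bits on the wire)
    "Gen1, 8b/10b":    (2.5, 8, 10),
    "Gen2, 8b/10b":    (5.0, 8, 10),
    "Gen3, 128b/130b": (8.0, 128, 130),
}

for name, (rate_gt, data_bits, wire_bits) in GENERATIONS.items():
    efficiency = data_bits / wire_bits      # share of wire bits carrying data
    gbps = rate_gt * efficiency             # effective Gb/s per lane
    mb_per_s = gbps * 1000 / 8              # effective MB/s per lane
    print(f"{name}: {efficiency:.1%} efficient, "
          f"{gbps:.2f} Gb/s ({mb_per_s:.0f} MB/s) per lane")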

 

There is also a wealth of hard IP from FPGA vendors and third parties ready to embed, which really helps reduce the design cycle.

 

Analysis of a PCIe transmission at the signal level is, however, impossible without additional equipment such as a protocol analyzer. It should also be noted that PCIe is 'point-to-point', so a link cannot be shared with other devices, and the strict impedance-matching requirements make it hard to physically probe the link for monitoring or debug purposes.

 

If the FPGA design is Design Assurance Level (DAL) A or B, DO-254 compliance will require in-hardware (and at-speed) testing of the target device using a requirements-based approach.

 

Figure 1a: Examples of non-determinism effects: shifted or reordered transfers.

Figure 1b: Transfers that are detected as faults during bit-level waveform comparison but are actually correct behaviour, which then needs to be explained and justified during certification.

 

Board-level test

This is the most common approach in DO-254 and is fine for simple FPGA designs. For more complicated designs, however, it is seldom possible to verify all FPGA-level requirements, because of the limited access to the FPGA's I/Os and the limited controllability of its interfaces while the device is on the board. One solution is to apply test vectors captured during simulation directly to the pins of the FPGA containing the design under test (DUT).

 

Aldec's implementation of this approach is its compliance tool suite (CTS), which consists of a software controller, a motherboard and a daughter card (customized to the target design and FPGA). Test vectors captured during simulation are applied to the DUT's pins, and the simulation and physical test results are then compared. Hundreds of DAL A and B projects have been verified this way, and the approach is recognized by the certification authorities as acceptable for design assurance.
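
At its core, that flow is a cycle-by-cycle, bit-level comparison of what the simulator predicted against what the device actually produced on its pins. The minimal Python sketch below shows that idea only; the data layout (one list of pin values per clock cycle) and the names are assumptions for illustration, not Aldec's CTS file format or API.

# Minimal sketch of a bit-level, cycle-by-cycle comparison between vectors
# captured during simulation and vectors captured at the physical FPGA pins.
# Data layout and names are illustrative assumptions, not the CTS format.
def compare_bit_level(sim_vectors, hw_vectors, pin_names):
    """Return every (cycle, pin, expected, captured) where hardware disagrees."""
    mismatches = []
    for cycle, (sim, hw) in enumerate(zip(sim_vectors, hw_vectors)):
        for pin, expected, captured in zip(pin_names, sim, hw):
            if expected != captured:
                mismatches.append((cycle, pin, expected, captured))
    return mismatches

# Example: two pins over three cycles; the hardware deasserts 'tx_valid' a cycle early.
pins = ["tx_valid", "tx_data0"]
sim  = [[0, 0], [1, 1], [1, 0]]
hw   = [[0, 0], [1, 1], [0, 0]]
for cycle, pin, expected, captured in compare_bit_level(sim, hw, pins):
    print(f"cycle {cycle}: {pin} expected {expected}, captured {captured}")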

 

It is difficult to verify PCIe-related requirements because there is no easy way to see what the PCIe interface is doing during the test. It is possible to implement extra test mechanisms, either in software (if there is a microprocessor in the system) or in hardware, but this approach has three major drawbacks.

 

First, you would need to write system-level test cases to test FPGA-level requirements, but not all scenarios will be possible.

 

Second, you would need to wait for the rest of the system, with its test mechanisms, to become available. This is difficult for organisations responsible for designing just the FPGA and not the whole system. Simulation and bus functional models (BFMs) are common approaches, but they are simplified and the BFMs provided by FPGA vendors are for simulation only, not in-hardware testing. Also, for PCIe, BFMs can only validate the interfaces to the IP block. The physical layer of the protocol is not simulated.

Another approach is to perform a full simulation of the register-transfer level (RTL) version of the PCIe block. Disadvantages include the length of time required and the need for extra verification IP.

 

Lastly, verifying designs that behave non-deterministically is a problem. There is non-determinism in the hardware, a result of data passing between different clock domains, and there is non-determinism in the execution of the software because the system is typically controlled by a non-deterministic OS kernel (such as Linux), which handles many uncorrelated events.

 

Working with test vectors and analyzing design interfaces at the bit level during in-hardware verification becomes a real challenge, and obtaining repeatable, consistent results may even be impossible without strictly constrained test scenarios. Typical effects of shifted or reordered data transfers observed when comparing captured bit-level waveforms are shown in Figures 1a and 1b.
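
The sketch below (illustrative only, and not taken from the article) shows why, and where transaction-level comparison helps: the same correct traffic fails a cycle-exact, bit-level style check once an extra cycle of latency and a reordering of independent transfers appear, yet it passes when the completed transactions are compared as an unordered collection.

# Illustrative comparison of the same correct traffic at two abstraction levels.
from collections import Counter

expected = [("WR", 0x1000, 0xAA), ("WR", 0x1004, 0xBB), ("RD", 0x2000, 0xCC)]

# Hardware capture: one idle cycle of extra latency, and the two writes
# complete in the opposite order (correct behaviour, just non-deterministic).
captured = [None, ("WR", 0x1004, 0xBB), ("WR", 0x1000, 0xAA), ("RD", 0x2000, 0xCC)]

# Cycle-exact check: every position must match, so it flags spurious "faults".
cycle_exact_ok = all(e == c for e, c in zip(expected, captured))

# Transaction-level check: ignore idle cycles and the order of independent
# transfers, and compare the multiset of completed transactions.
transaction_ok = Counter(expected) == Counter(t for t in captured if t is not None)

print("cycle-exact comparison passes:     ", cycle_exact_ok)   # False
print("transaction-level comparison passes:", transaction_ok)  # True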

 

For the rest of this article, please visit Electronics Weekly.
