Setting Up Linting Regressions

A regression test suite is a set of tests intended to ensure that design functionality which once "worked" does not "regress" to the point where it no longer works. Once these tests are developed, any subsequent release or check-in of either RTL or testbench code should be checked against this regression suite.

Similarly, regression linting verification aims to ensure that the quality of RTL code does not "regress" to lower levels during the design process. Any subsequent release or check-in of RTL code should be checked against the linting regression to ensure high-quality RTL coding throughout the entire development period.

Unlike functional regressions, linting regressions do not require test case development. Linting tools also run fast, providing designers with an efficient way to validate code quality after every release or check-in of RTL code.

It is important to run linting checks prior to functional regressions and prior to every code release for implementation (synthesis). Linting tools are able to reveal functional and implementation-specific code issues in seconds, saving designers from time-consuming implementation and functional verification iterations.

It is especially important to run linting checks continuously for designs with multiple hard configuration options. In such designs, each macro combination creates another design version, and each design version must be verified separately with the linting tools, as code-specific issues may be visible only in particular design versions.

In design verification, regressions are built as a series of automated tests. To develop automated tests, automated checks must be implemented, either in the test case body or in the verification environment. These automated checks unambiguously define the test status (passed or failed) at the end of test execution. A regression is considered passing when all of its tests pass; if at least one test fails, the regression is considered failing. Once a regression fails, a message is broadcast across the development team to stop introducing design modifications into the common repository and to start fixing the code. Once the common repository code is fixed, designers can resume their normal workflow.
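The aggregation rule is simple enough to express directly. The toy sketch below (plain TCL, with made-up test results purely for illustration) shows how individual test statuses roll up into a single regression status:

    # Toy sketch: a regression passes only if every test in it passes.
    set test_results {pass pass fail pass}   ;# made-up statuses for illustration

    set regression_status PASSED
    foreach result $test_results {
        if {$result ne "pass"} {
            set regression_status FAILED     ;# one failing test fails the regression
        }
    }
    puts "Regression status: $regression_status"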

Linting regressions are extremely useful during a project's entire lifetime. Unlike design verification regressions, they run quickly and have modest computational requirements. A common strategy when running linting regressions is to periodically poll the code repository for modifications. Once modifications are found, the linting tool launches static design verification and generates an unambiguous pass/fail status. If the regression fails, the system immediately informs team members about the failure, allowing designers to fix the code right away, before multiple code modifications accumulate.

This app note presents a practical approach to setting up linting regressions. First, we present a methodology for extracting an unambiguous pass/fail status from static design verification. Then, we demonstrate how to set up and run linting regressions with Jenkins, the extensible automation server. The Jenkins automation server helps designers automate functional and linting regressions in order to maintain the quality of project code.

Developing Pass/Fail Status for Static Verification of Hardware Designs

Developing a pass/fail status helps designers further automate the static design verification process and set up linting regressions. For this purpose, designers have to carefully select the rules that must never be violated. These rules have to be marked with severity "Error". Once at least one rule with severity "Error" fails, the whole lint run fails as well, returning a standard error status to the environment. The following image shows the ALINT-PRO Policy editor with the selected rules' severities set to "Error":

As different design teams have different requirements for design coding, verification, and implementation, it is difficult to come up with a common strategy for selecting critical rules. However, we propose the following steps for developing linting run statuses that, with minor modifications, should suit the majority of design teams:

  1. For each rules plugin, a set of "common" critical rules is chosen.

  2. In addition to the common critical rules, further critical rules may be added depending on design or design process requirements.

The script "set_regress_status.do" defines common and additional rules for the majority of ALINT-PRO plugins. Additional rules are grouped into a number of sub-groups; each sub-group may be added to the severity "Error" rules set by setting the corresponding variable to 1.

The following list describes the TCL variables that enable additional sub-groups of critical rules. Every variable defaults to 0 (sub-group disabled); set a variable to 1 to promote its sub-group to severity "Error".

Clock-related requirements

  • require_single_clock: require a single clock in the current design.

  • require_single_clock_edge: require a single rising clock edge in the current design.

Reset-related requirements

  • require_async_reset: require all sequential design logic to be asynchronously reset.

  • allow_initial_assign: some FPGA vendors allow initial assignments for sequential elements; set to 1 when initial assignments are allowed.

Tool reuse-specific requirements

  • require_syn_tool_reuse: require RTL code to be reusable between synthesis tools.

  • require_sim_tool_reuse: require RTL code to be reusable between simulation tools.

Timing and area efficiency requirements

  • require_timing_efficienty: require RTL code to be timing-efficient.

  • require_area_efficienty: require RTL code to be area-efficient.

Code-specific requirements

  • require_no_attributes: require the absence of synthesis attributes.

  • require_no_redundancy: require no redundancy in RTL code.

  • require_strict_bit_match: require a strict bit-width match in signal assignments.

  • require_strict_downto_range: require ranges to be strictly descending ("downto") with the lower range limit equal to 0.

Other requirements

  • require_ip_checks: require the design to follow commonly used IP designs' timing requirements.

  • require_strict_dft: require RTL code to follow DFT requirements.

  • require_strict_xprop: require RTL code to properly propagate X values.

  • require_sv_constructs: require the design to use the more efficient SystemVerilog language constructs for processes and case selection statements.

The example regression setting script is available in the Aldec repository (please see the "Downloading Demo Scripts" chapter at the end). Users are welcome to modify the script, adding or removing common rules or additional critical rules.
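For example, to add the asynchronous reset and redundancy sub-groups on top of the common critical rules, set the corresponding variables before sourcing the script. This is a minimal usage sketch; the script is assumed to reside in the working directory.

    # Enable two additional sub-groups; all others keep their default of 0.
    set require_async_reset   1   ;# all sequential logic must be asynchronously reset
    set require_no_redundancy 1   ;# forbid redundant RTL code
    source set_regress_status.do  ;# promotes common + enabled sub-groups to "Error"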

Developing a TCL Run File for Linting Regressions

Once the regression status is developed, a simple TCL run file is needed in order to enable linting batch mode. The linting run file has to:

  1. Automatically create an ALINT-PRO project and add all required files to the project.

  2. Configure the project policy to enable pass/fail status extraction.

  3. Run the project.

  4. Generate error/warning reports.

  5. In case of failure, return an error message to the environment via STDERR.

The following example TCL run file implements all of the above requirements:
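A minimal sketch of such a run file is shown below. The project and report command names are hypothetical placeholders for ALINT-PRO's actual Tcl commands (see the demo script "run_alint.do" and the tool's documentation for a working version); the pass/fail control flow follows the policy mechanism described above.

    # run_alint.do -- minimal sketch of a linting regression run file.
    # Project and report commands are hypothetical placeholders; replace
    # them with the actual ALINT-PRO Tcl commands from the demo scripts.

    # 1. Create a project and add all required design files.
    # project.create aquarius ./alint_project       ;# placeholder
    # project.file.add src/*.v                      ;# placeholder

    # 2. Configure the policy to enable pass/fail status extraction.
    source aquarius_policy.do

    # 3. Run the project. With the critical rules set to severity "Error",
    #    the run itself fails when any of them is violated.
    set failed [catch {project.run} msg]            ;# "project.run" is an assumed name

    # 4. Generate error/warning reports for review.
    # report.generate -location ./reports           ;# placeholder

    # 5. In case of failure, report to STDERR and return a nonzero status.
    if {$failed} {
        puts stderr "Linting regression FAILED: $msg"
        exit 1
    }
    exit 0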

Setting up Linting Regressions with Jenkins Extensible Automation Server

The Jenkins automation server enables fast and easy linting regression setup, allowing constant verification of repository code. In this article, we show how to set up Jenkins-based regression automation with Apache Subversion (SVN), a commonly used revision control system.

For the purpose of our example, we added the open-source Aquarius processor code (downloaded from OpenCores.org) to the SVN repository. We also added two scripts:

  • aquarius_policy.do: the policy configuration script implementing pass/fail status.

  • run_alint.do: the TCL run file for running ALINT-PRO in batch mode.

Once the SVN repository code is prepared, we may start configuring the Jenkins automation server.

  1. Download and install Jenkins. As of this writing, the latest version is available at the URL https://jenkins.io. When running the installer, accept the defaults. Note: JDK (Java Development Kit) with Java version 8 or later must be installed prior to Jenkins installation.

  2. Open a web browser and navigate to the Jenkins page (at http://localhost:8080). If the install was successful, you should see the Jenkins homepage:

  3. Set up a Jenkins user account and then click "New Item". Enter your project identifier (item name), select "Freestyle project", and click "OK":

  4. Configure the Project:

    1. Select "Subversion" as a Source Code Management tool.

    2. Fill "Repository URL" option with the URL path to SVN repository. In most cases, SVN repository servers can be accessed using SVN-specific svn:// protocol or using SSH. In this example, the SVN repository resides locally:

    3. In the "Build Triggers" section, select the last option: "Poll SCM". This option periodically polls the code repository for changes. Once a change is found, the next build will be triggered.

    4. In the "Schedule" section, set a time period for polling the code repository in cron format. To facilitate the task, use crontab.guru editor (www.crontab.guru).

      For debugging purposes, it is convenient to set a polling interval of 1 minute: just enter * * * * * (5 stars separated by spaces) in the schedule window. For production polling, Jenkins also accepts its "H" (hash) syntax, e.g. H/15 * * * * to poll roughly every 15 minutes while spreading the server load.

    5. In the "Build" section, add the build step "Execute Windows batch command". Add a batch command that invokes ALINT-PRO in batch mode using the previously prepared "run_alint.do" script; this is typically a single command line such as alintcon -do run_alint.do, where the console executable name and path depend on the local ALINT-PRO installation:

    6. Click on "Post-build Actions" and select "E-mail notification". This will issue automatic e-mails upon a build failure. Add recipients' emails to the "Recipients" field.

      This completes the project configuration; from here, we are able to start running linting regressions.

  5. Running the regressions: click on the "Aquarius" project link to get into the project workspace:

Click "Build now" to start scheduling builds. Each build is identified with either blue (build passed) or red (build failed) dots and a clickable Build ID number. In order to review the failing build logs, click on the build ID number and review "Console Output", in our case error and warning logs are directed to the console.

For example, as shown, build #3 fails due to design errors. We may review linting errors in build #3 console output for more information:

To fix an error, it is advisable to open ALINT-PRO in interactive mode. Jenkins packs all project files into a .zip archive; switch to the workspace page and download the archive:

After extracting the archive, launch ALINT-PRO and open the workspace file from the archive. Fix the error, commit the design changes to the SVN repository, and let Jenkins re-run the regression; this time, the next build should receive a "passed" status.

For Linux users, the Jenkins installation procedure may differ slightly. Please see www.jenkins.io or community sites for platform-specific installation instructions. For example, the following page describes the Jenkins installation process on Ubuntu 16.04: https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-ubuntu-16-04

Downloading Demo Scripts

Summary

This application note demonstrated how to set up linting regressions using ALINT-PRO with the Jenkins automation server, and presented a pass/fail status definition strategy for ALINT-PRO runs. A clearly identified pass/fail status for linting runs allows designers to automate linting usage, set up linting regressions, and apply the power of static code verification to maintain both RTL code quality and stability over a project's entire lifetime.
