Robustness and Contextual Code Verification

Part One: Robustness Verification

This is part one of a two-part series outlining code verification methods.

We begin with a question: At what stage of software development should I verify my code?

The answer is simple. You should verify your code right after you have compiled it, while it is still fresh in your mind. At that stage, reviewing and fixing the potential errors you are shown can be almost trivial. Fixing errors never gets easier later in the workflow.

If you follow this approach, you are most likely verifying a unit of code. The unit will probably be integrated into an existing code base later in the development cycle. Polyspace® products allow you to approach your verification in two distinct ways: through robustness verification, in which you verify your unit in isolation, or through contextual verification, in which you verify your unit in the context of the remaining code base. In this post and the next, we provide some guidelines that will help you decide which approach to take.

But first, what difference does each approach make? Let us begin with robustness verification.

In robustness verification, Polyspace products verify that your unit is robust (i.e., safe against certain run-time errors for all inputs, undefined functions, and function call sequences). If you follow this approach, no new errors should appear during code integration and testing. Once you have reviewed and fixed all errors flagged by Polyspace products, you do not have to review that unit (file or function) again.

Let us illustrate this behavior with a few simple examples.

Robustness Against All Inputs

Example showing num1 and num2 are inputs to the function dividebydifference.
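
The original post shows this code as a screenshot. The following is a minimal sketch of what such a function might look like; the parameter num, the signature, and the exact body are assumptions made for illustration.

    /* Hypothetical sketch: num1 and num2 are the unit's inputs; num and
       the exact body are assumed for illustration only. */
    int dividebydifference(int num, int num1, int num2)
    {
        int den = num1 - num2;   /* difference used as the denominator */
        return num / den;        /* quotient of num and the difference */
    }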

In this example, num1 and num2 are inputs to the function dividebydifference. Other than their data type, Polyspace cannot determine anything else about num1 and num2 from this code. Therefore, it considers all possible values in the range of an int variable:

  • The results of the operations num1 - num2 and num/den can exceed the value allowed for an int variable. Therefore, Polyspace warns about a potential overflow on both operations.
  • num1 can be equal to num2. Therefore, Polyspace warns about a potential division by zero error on the division.

However, once Polyspace products point out such vulnerabilities in your code, you can make your function more robust by taking appropriate action. For instance, in the above example, you can place checks to safeguard against overflow and division by zero.
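
For example, a guarded version of the sketch above might look like the following. The early returns and the specific bounds checks are one possible choice, not the code from the original post.

    #include <limits.h>

    /* One possible way to guard the sketch above against overflow and
       division by zero (illustrative only). */
    int dividebydifference(int num, int num1, int num2)
    {
        /* num1 - num2 must not wrap around the int range */
        if ((num2 > 0 && num1 < INT_MIN + num2) ||
            (num2 < 0 && num1 > INT_MAX + num2)) {
            return 0;  /* handle the error as appropriate */
        }

        int den = num1 - num2;

        /* the division must not divide by zero or overflow (INT_MIN / -1) */
        if (den == 0 || (num == INT_MIN && den == -1)) {
            return 0;  /* handle the error as appropriate */
        }

        return num / den;
    }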

Robustness Against All Undefined Functions

Suppose your unit of code calls functions that are external to the unit, or whose definitions you do not have. How can you be sure that a new run-time error will not occur in your code once those functions are defined? You can use the robustness approach.

Example showing den is assigned a value between 1 and 3 during declaration.
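
Again, the code appears only as a screenshot in the original post. A sketch consistent with the caption might look like this; the function name compute, the seed parameter, and the way den gets its initial value are assumptions.

    /* checkDenominator is declared, but its definition is not available
       within this unit. */
    void checkDenominator(int *d);

    /* Hypothetical sketch consistent with the caption above. */
    int compute(int num, unsigned int seed)
    {
        int den = 1 + (int)(seed % 3u);  /* den is between 1 and 3 at declaration */
        checkDenominator(&den);          /* den's address escapes: den may change */
        return num / den;                /* Polyspace: den could now be zero */
    }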

In the above example, den is assigned a value between 1 and 3 during declaration. However, its address is passed to checkDenominator, and den can potentially change inside checkDenominator. Since the definition of checkDenominator is not provided, Polyspace products consider the possibility that den can be changed and can take any value in the range of an int variable. Therefore, Polyspace warns that the division might cause a division by zero error.

However, once Polyspace products point out this error, you can protect the division, for instance, by passing den to checkDenominator by value instead of through a pointer. Now the division operation is safe, whatever the definition of checkDenominator might be.

Protect the division by passing den to checkDenominator by value instead of through a pointer.
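
A sketch of that fix, using the same assumed names as before:

    /* checkDenominator now receives a copy of den, so it cannot modify
       the caller's variable (names other than checkDenominator are assumed). */
    void checkDenominator(int d);

    int compute(int num, unsigned int seed)
    {
        int den = 1 + (int)(seed % 3u);  /* den is between 1 and 3 */
        checkDenominator(den);           /* only a copy is passed */
        return num / den;                /* safe: den stays in [1, 3] */
    }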

Robustness verification is also appealing from the user's perspective: setting up a verification with this approach requires little effort. After you have provided your source files and target settings, you can go ahead and run the verification. The default settings are designed to verify your code against all inputs, all possible function definitions, and so on.

So, robustness verification is the way to go, isn’t it? Well… in practice, you have to trade off the time saved during setup against the time spent reviewing a significantly higher number of potential errors.

In order to spot all errors during robustness verification, Polyspace products make broad assumptions about unknowns such as input variables. Most operations involving those unknowns can potentially fail. As a result, you can have many more potential errors to review than your time allows.

The next post illustrates an alternative verification approach (called contextual code verification), which addresses this issue.