Specify Test Properties in the Test Manager

The Test Manager has property settings that specify how test cases, test suites, and test files run. To open the Test Manager, use sltest.testmanager.view. For information about the Test Manager, see Test Manager.
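
For example, a minimal sketch of opening the Test Manager and loading an existing test file from the command line (the test file name is hypothetical):

% Open the Test Manager user interface
sltest.testmanager.view
% Load an existing test file into the Test Manager (hypothetical file name)
sltest.testmanager.load('myTests.mldatx');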

Test Case, Test Suite, and Test File Sections Summary

When you open a test case, test suite, or test file in the Test Manager, the test settings are grouped into sections. Test cases, test suites, and test files have different sections and settings. Click a test case, test suite, or test file in the Test Browser pane to see its settings.

If you do not want to see all of the available test sections, you can use the Test Manager preferences to hide sections:

  1. In the Test Manager toolstrip, click Preferences.

  2. Select the Test File, Test Suite, or Test Case tab.

  3. Select the sections to show, or clear the sections to hide. To show only the sections in which you have already set or changed settings, clear all selections in the Preferences dialog box.

  4. Click OK.

Sections that you already modified appear in the Test Manager, regardless of the preference setting.

To set these properties programmatically, see sltest.testmanager.getpref and sltest.testmanager.setpref.

Create Test Case from External File

To use an existing Excel file that is in the supported Simulink® Test™ format to create a test case, select Create test case from external file. Then, enter the path to the file. The supported Excel format is described in Microsoft Excel Import, Export, and Logging Format.

To use an Excel® or MAT file that is not in the supported format, write an adapter function so you can use that file in the Test Manager. Then, register the file using the sltest.testmanager.registerTestAdapter function. If you have registered an adapter, when you select Create test case from external file, two fields appear, one for the path to the Excel or MAT file and one for the adapter function name. See sltest.testmanager.registerTestAdapter for information and an example.

Tags

Tag your test file, test suite, or test case with categorizations, such as safety, logged-data, or burn-in. Filter tests using these tags when executing tests or viewing results. See Filter Test Execution, Results, and Coverage.

For the corresponding API, see the Tags property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.
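
For example, a minimal sketch of tagging a test case programmatically, assuming an existing test case handle tc and that Tags accepts a comma-separated character vector:

% Tag the test case so it can be filtered during execution or in results (format assumed)
tc.Tags = 'safety,burn-in';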

Description

Add descriptive text to your test case, test suite, or test file.

For the corresponding API, see the Description property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.

Requirements

If you have Requirements Toolbox™ installed, you can establish traceability by linking your test file, test suite, or test case to requirements. For more information, see Link Test Cases to Requirements (Requirements Toolbox).

To link a test case, test suite, or test file to a requirement:

  1. Open the Requirements Editor. In the Simulink Toolstrip, on the Apps tab, under Model Verification, Validation, and Test, click Requirements Editor.

  2. Highlight a requirement.

  3. In the Test Manager, in the Requirements section, click the arrow next to the Add button and select Link to Selected Requirement.

  4. The requirement link appears in the Requirements list.

For the corresponding API, see the Requirements property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.

System Under Test

Specify the model you want to test in the System Under Test section. To use an open model in the currently active Simulink window, click the Use current model button.

Note

The model must be available on the path to run the test case. You can add the folder that contains the model to the path using the preload callback. See Callbacks.

Specifying a new model in the System Under Test section can cause the model information to be out of date. To update the model test harnesses, Signal Editor scenarios, and available configuration sets, click the Refresh button.

For the corresponding API, see the Model name-argument pair of setProperty.
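
For example, a minimal sketch that creates a baseline test case and assigns the system under test (the test file, suite, test case, model, and harness names are hypothetical):

tf = sltest.testmanager.TestFile('myTests.mldatx');          % hypothetical test file
ts = createTestSuite(tf,'Baseline Tests');                   % hypothetical suite name
tc = createTestCase(ts,'baseline','Cruise Control Test');    % hypothetical test case name
setProperty(tc,'Model','myModel');                           % hypothetical model name
setProperty(tc,'HarnessName','myModel_Harness1');            % optional; hypothetical harness name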

Test Harness

If you have a test harness in your system under test, then you can select the test harness to use for the test case. If you have added or removed test harnesses in the model, click the Refresh button to view the updated test harness list.

For more information about using test harnesses, see Refine, Test, and Debug a Subsystem.

For the corresponding API, see the HarnessName name-argument pair of setProperty.

Simulation Settings and Release Overrides

To override the Simulation Mode of the model settings, select a new mode from the list. If the model contains SIL/PIL blocks and you need to run in Normal mode, enable Override model blocks in SIL/PIL mode to normal mode. For the corresponding API, see the OverrideSILPILMode name-argument pair of setProperty.

You can simulate the model and run tests in more than one MATLAB® release that is installed on your system. Use Select releases for simulation to select available releases. You can use releases from R2011b forward.

To add one or more releases so they are available in the Test Manager, click Add releases in Select releases for simulation to open the Release pane in the Test Manager Preferences dialog box. Navigate to the location of the MATLAB installation you want to add, and click OK.

You can add releases to the list and delete them. You cannot delete the release in which you started the MATLAB session.

For more information, see Run Tests in Multiple Releases of MATLAB. For the corresponding API, see the Release name-argument pair of setProperty.
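
For example, a minimal sketch, assuming the test case handle tc from the earlier sketch and that the release is specified by its name:

% Run the test case simulation in another installed release (release name is an assumption)
setProperty(tc,'Release','R2022b');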

System Under Test Considerations

  • The System Under Test cannot be in fast restart or external mode.

  • To stop a test running in Rapid Accelerator mode, press Ctrl+C at the MATLAB command prompt.

  • When running parallel execution in rapid accelerator mode, streamed signals do not appear in the Test Manager.

  • The System Under Test cannot be a protected model.

Simulation 1 and Simulation 2

These sections appear in equivalence test cases. Use them to specify the details about the simulations that you want to compare. Enter the system under test, the test harness if applicable, and simulation setting overrides under Simulation 1. You can then click Copy settings from Simulation 1 under Simulation 2 to use those settings as a starting point for your second set of simulation settings.

For the test to pass, Simulation 1 and Simulation 2 must log the same signals.

Use these sections with the Equivalence Criteria section to define the premise of your test case. For an example of an equivalence test, see Test Two Simulations for Equivalence.

For the corresponding API, see the SimulationIndex name-argument pair of setProperty.
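
For example, a minimal sketch of configuring both simulations of an equivalence test case programmatically (the suite handle ts, model name, and harness name are hypothetical):

tcEq = createTestCase(ts,'equivalence','Harness vs Model');            % hypothetical test case name
setProperty(tcEq,'Model','myModel','SimulationIndex',1);               % Simulation 1
setProperty(tcEq,'Model','myModel','SimulationIndex',2);               % Simulation 2
setProperty(tcEq,'HarnessName','myModel_Harness1','SimulationIndex',2); % harness for Simulation 2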

Parameter Overrides

Specify parameter values in the test case to override the parameter values in the model workspace, data dictionary, base workspace, or in a model reference hierarchy. Parameters are grouped into sets. You can turn parameter sets and individual parameter overrides on or off by using the check box next to the set or parameter. To copy an individual parameter and paste it into another parameter set, highlight the parameter and right-click to use Copy and Paste from the context menu. You can also copy and paste parameter sets.

To add a parameter override:

  1. Click Add.

    A dialog box opens with a list of parameters. If the list of parameters is not current, click the Refresh button in the dialog box.

  2. Select the parameter you want to override.

  3. To add the parameter to the parameter set, click OK.

  4. Enter the override value in the parameter Override Value column.

To restore the default value of a parameter, clear the value in the Override Value column and press Enter.

You can also add a set of parameter overrides from a MAT-file, including MAT-files generated by Simulink Design Verifier™. Click the Add arrow and select Add File to create a parameter set from a MAT-file.

For an example that uses parameter overrides, see Override Model Parameters in a Test Case.

For the corresponding APIs, see the sltest.testmanager.ParameterOverride class, and the OverrideStartTime, OverrideStopTime, OverrideInitialState, OverrideModelOutputSettings, and ConfigSetOverrideSetting name-argument pairs of the setProperty method.
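
For example, a minimal sketch of adding a parameter override programmatically, assuming a test case handle tc and a model parameter named K:

ps = addParameterSet(tc,'Name','Gain Sweep');   % hypothetical parameter set name
po = addParameterOverride(ps,'K',2.5);          % hypothetical parameter and override value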

Parameter Overrides Considerations

The Test Manager displays only top-level system parameters from the system under test.

Callbacks

Test-File Level Callbacks

Each test file has two callback scripts that execute at different times during a test:

  • Setup runs before the test file executes.

  • Cleanup runs after the test file executes.

For the corresponding test case APIs, see the PreloadCallback, PostloadCallback, CleanupCallback, and PreStartRealTimeApplicationCallback name-argument pairs of the TestCase setProperty method.

For the corresponding test file APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestFile setProperty method.

Test-Suite Level Callbacks

Each test suite has two callback scripts that execute at different times during a test:

  • Setup runs before the test suite executes.

  • Cleanup runs after the test suite executes.

If a test suite does not have any test cases, the test suite callbacks do not execute.

For the corresponding APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestSuite setProperty method.

Test-Case Level Callbacks

Each test case has three callback scripts that execute at different times during a test:

  • Pre-load runs before the model loads and before the model callbacks.

  • Post-load runs after the model loads and the PostLoadFcn model callback.

  • Cleanup runs after simulations and model callbacks.

See Test Execution Order for information about the order in which callbacks occur and models load and simulate.

To run a single callback script, click the Run button above the corresponding script.

You can use predefined variables in the test case callbacks:

  • sltest_bdroot, available in Post-Load: The model simulated by the test case. The model can be a harness model.

  • sltest_sut, available in Post-Load: The system under test. For a harness, it is the component under test.

  • sltest_isharness, available in Post-Load: Returns true if sltest_bdroot is a harness model.

  • sltest_simout, available in Cleanup: Simulation output produced by simulation.

  • sltest_iterationName, available in Pre-Load, Post-Load, and Cleanup: Name of the currently executing test iteration.

disp and fprintf do not work in callbacks. To verify that the callbacks are executed, use a MATLAB script that includes breakpoints in the callbacks.

The test case callback scripts are not stored with the model and do not override Simulink model callbacks. Consider the following when using callbacks:

  • To stop execution of an infinite loop from a callback script, press Ctrl+C at the MATLAB command prompt.

  • sltest.testmanager functions are not supported.

For the corresponding APIs, see the PreloadCallback, PostloadCallback, CleanupCallback, and PreStartRealTimeApplicationCallback name-argument pairs of the TestCase setProperty method.
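
For example, a minimal sketch that sets callback scripts programmatically, assuming a test case handle tc (the folder path and variable name are hypothetical):

% Add the model folder to the path before the model loads
setProperty(tc,'PreloadCallback','addpath(''C:\work\models'');');
% Clear a workspace variable after the simulation completes
setProperty(tc,'CleanupCallback','clear mySweepValue;');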

Assessment Callback

You can enter a callback to define variables and conditions used only in logical and temporal assessments by using the Assessment Callback section. See Assessment Callback in the Logical and Temporal Assessments section for more information.

For the corresponding API, see setAssessmentsCallback.

Inputs

A test case can use input data from:

  • A Signal Editor block in the system under test. Select Signal Editor scenario and select the scenario. The system under test can have only one Signal Editor block at the top level.

  • An external data file. In the External Inputs table, click Add. Select a MAT-file or Microsoft® Excel file.

    For more information on using external files as inputs, see Use External Excel or MAT-File Data in Test Cases. For information about the file format for Microsoft Excel files in the Test Manager, see Format Test Case Data in Excel.

  • Scenarios in a Test Sequence block. First, click the refresh arrow next to the Test Sequence Block field, then select the Test Sequence block in the model that contains the scenarios. If you do not also select a scenario from Override with Scenario and do not use iterations, then the test runs the active scenario in the selected Test Sequence block. If you do not also select a scenario, but do use iterations, then the active scenario in the Test Sequence block is the default for all the iterations.

    Use Override with Scenario to override the active scenario in the selected Test Sequence block. Click the refresh arrow next to the Override with Scenario field. Then, select the scenario to use instead of the active scenario or as the default for the iterations. In the Iterations section, you can change the scenario assigned to each iteration. For more information, see Use Test Sequence Scenarios in the Test Sequence Editor and Test Manager.

To include the input data in your test results set, select Include input data in test result.

If the time interval of your input data is shorter than the model simulation time, you can limit the simulation to the time specified by your input data by selecting Stop simulation at last time point.

For more information on test inputs, see the Test Authoring: Inputs page.

Edit Input Data Files in Test Manager

From the Test Manager, you can edit your input data files.

To edit a file, select the file and click Edit. You can then edit the data in the Signal Editor for MAT-files or Microsoft Excel for Excel files.

To learn about the syntax for Excel files, see Format Test Case Data in Excel.

For the corresponding API, see sltest.testmanager.TestInput.
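
For example, a minimal sketch of adding an external input file programmatically, assuming a test case handle tc and an Excel file that is already in the supported format:

% Add an external Excel input file to the test case (hypothetical file name)
input = addInput(tc,'cruise_inputs.xlsx');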

Simulation Outputs

Use the Simulation Outputs section to add signal outputs to your test results. Signals logged in your model or test harness can appear in the results after you add them as simulation outputs. You can then plot them. Add individual signals to log and plot or add a signal set.

Logged Signals — In the Logged Signals subsection, click Add, then follow the prompts to select the signals to log.

To copy a logged signal set to another test case in the same or a different test file, select the signal set in Logged Signals, right-click to display the context menu, and click Copy. Then, in the destination test case, select the signal set in Logged Signals, right-click the signal, and click Paste. You can copy and paste more than one logged signal set at a time.

For a test case, you can use the SDI View File setting to specify the path to a Simulation Data Inspector (SDI) view file. You can assign a different view file to each test case. The view file configures which signals to plot and their layout in the test case results. The Test Manager does not support some configurations in the SDI view file, such as axes plot layouts other than time plots and axes layouts other than N-by-M grids. However, the Test Manager applies a similar configuration, if possible. You cannot save an SDI view file from the Test Manager, although when you save the test and results in an MLDATX test file, the file saves the current layout for that test. Use Simulink.sdi.saveView to create and save an SDI view file. For more information, see Save and Share Simulation Data Inspector Data and Views.
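
For example, a minimal sketch of creating an SDI view file to reference in the SDI View File setting (the file name is hypothetical):

% After arranging signals and subplots in the Simulation Data Inspector, save the layout
Simulink.sdi.saveView('myResultsView.mldatx');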

Other Outputs — Use the options in the Other Outputs subsection to add states, final states, model output values, data store variables, and signal logging values to your test results. To enable selecting one or more of these options, click Override model settings.

  • States — Include state values between blocks during simulation. You must have a Sequence Viewer block in your model to include state values.

  • Final states — Include final state values. You must have a Sequence Viewer block in your model to include final state values.

  • Output — Include model output values.

  • Data stores — Include logged data store variables in Data Store Memory blocks in the model. This option is selected by default.

  • Signal logging — Include logged signals specified in the model. This option is selected by default. If you selected Log Signal Outputs when you created the harness, all of the output signals for the component under test are logged and returned in test results, even though they are not listed in the Simulation Outputs section. To turn off logging for one of the signals, in the test harness, right-click a signal and select Stop Logging Selected Signals.

For more information, see Capture Simulation Data in a Test Case. For the corresponding API, see the OverrideModelOutputSettings name-argument pair of setProperty.

Output Triggers — Use the Output Triggers subsection to specify when to start and stop signal logging based on a condition or duration. The test passes if the criteria pass during the triggered logging period, even if the test would fail outside of the triggered time or when the trigger condition is not true.

The Start Logging options are:

  • On simulation start — Start logging data when simulation starts.

  • When condition is true — Start logging when the specified condition expression is true. Click the edit symbol next to Condition to display an edit box, where you enter the condition.

  • After duration — Start logging after the specified number of seconds have passed since the start of simulation. Click on the value next to Duration (sec) to display an edit box where you enter the duration in seconds.

The Stop Logging options are:

  • When simulation stops — Stop logging data when simulation ends.

  • When condition is true — Stop logging when the specified condition expression is true. Click the edit symbol next to Condition to display an edit box, where you enter the condition. Variables in the condition appear in the Symbols editor, where you can map them to a model element or expression, or rename them.

  • After duration — Stop logging after the specified number of seconds have passed since logging started. Click on the value next to Duration (sec) to display an edit box where you enter the duration in seconds.

Shift time to zero — Shifts the logging start time to zero. For example, if logging starts at time 2, then selecting this option shifts all times back by 2 seconds.

Symbols — Click Add to map a signal from the model to a symbol name. You can use that symbol in a trigger condition. For information on using and mapping symbols, see Assess Temporal Logic by Using Temporal Assessments.

Configuration Settings Overrides

For the test case, you can specify configuration settings that differ from the settings in the model. Setting the configuration settings in the test case enables you to try different configurations for a test case without modifying the model. The configuration settings override options are:

  • Do not override model settings — Use the current model configuration settings.

  • Name — Name of the active configuration set. A model can have only one active configuration set. Refresh the list to see all available configuration sets and select the desired one to be active. If you leave the default [Model Settings] as the name, the simulation uses the default, active configuration set of the model.

  • Attach configuration set in a file — Path to the external file (File Location) that contains a configuration set variable. The variable you specify in Variable Name references the name of a configuration set in the file. For information on creating a configuration set, see Simulink.ConfigSet and Save a Configuration Set. For information on configuration set references, see Share a Configuration with Multiple Models.

For the corresponding API, see the ConfigSetOverrideSetting, ConfigSetName, ConfigSetVarName, and ConfigSetFileLocation name-argument pairs of setProperty.

Baseline Criteria

The Baseline Criteria section appears in baseline test cases. When a baseline test case executes, the Test Manager captures signal data from signals in the model marked for logging and compares them to the baseline data.

Include Baseline Data in Results and Reports

Click Include baseline data in test result to include baseline data in test result plots and test reports.

Capture Baseline Criteria

To capture logged signal data from the system under test to use as the baseline criteria, click Capture. Then follow the prompts in the Capture Baseline dialog box. Capturing the data compiles and simulates the system under test and stores the output from the logged signals to the baseline. For a baseline test example, see Compare Model Output to Baseline Data.

For the corresponding API, see the captureBaselineCriteria method.
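
For example, a minimal sketch, assuming tc is a baseline test case; the MAT-file name is hypothetical, and the final argument, which requests that the test case be updated with the captured baseline, is an assumption:

% Simulate the system under test and store the logged signals as the baseline
baseline = captureBaselineCriteria(tc,'baseline_run1.mat',true);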

You can save the signal data to a MAT-file or a Microsoft Excel file. To understand the format of the Excel file, see Format Test Case Data in Excel.

You can capture the baseline criteria using the current release for simulation or another release installed on your system. Add the releases you want to use in the Test Manager preferences. Then, select the releases you want available in your test case using the Select releases for simulation option in the test case. When you run the test, you can compare the baseline against the release you created the baseline in or against another release. For more information, see Run Tests in Multiple Releases of MATLAB.

When you select Excel as the output format, you can specify the sheet name to save the data to. If you use the same Excel file for input and output data, by default both sets of data appear in the same sheet.

If you are capturing the data to a file that already contains outputs, specify the sheet name to overwrite the output data only in that sheet of the file.

To save a baseline for each test case iteration in a separate sheet in the same file, select Capture Baselines for Iterations. This check box appears only if your test case already contains iterations. For more information on iterations, see Test Iterations.

Specify Tolerances

You can specify tolerances to determine the pass-fail criteria of the test case. You can specify absolute, relative, leading, and lagging tolerances for individual signals or the entire baseline criteria set.

After you capture the baseline, the baseline file and its signals appear in the table. In the table, you can set the tolerances for the signals. To see tolerances used in an example for baseline testing, see Compare Model Output to Baseline Data.

For the corresponding API, see the AbsTol, RelTol, LeadingTol, and LaggingTol properties of sltest.testmanager.BaselineCriteria.
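
For example, a minimal sketch of adjusting a tolerance programmatically, assuming getBaselineCriteria returns the baseline criteria objects for the test case tc and that the tolerance properties are settable on the returned objects:

bc = getBaselineCriteria(tc);   % baseline criteria captured for the test case
bc(1).AbsTol = 0.01;            % absolute tolerance for the first baseline signal set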

Add File as Baseline

By clicking Add, you can select an existing file to use as a baseline. You can add MAT-files and Microsoft Excel files as the baseline. The format for Microsoft Excel files is described in Format Test Case Data in Excel.

For the corresponding API, see the addInput method.

Update Signal Data in Baseline

You can edit the signal data in your baseline, for example, if your model changed and you expect different values. To open the Signal Editor or the Microsoft Excel file for editing, select the baseline file from the list and click Edit. See Manually Update Signal Data in a Baseline.

You can also update your baseline when you examine test failures in the data inspector view. See Examine Test Failures and Modify Baselines.

Equivalence Criteria

This section appears in equivalence test cases. The equivalence criteria are a set of signal data to compare between Simulation 1 and Simulation 2. Specify tolerances to determine the pass-fail criteria of the test. You can specify absolute, relative, leading, and lagging tolerances for the signals.

To specify tolerances, first click Capture to run the system under test in Simulation 1 and add signals marked for logging to the table. Specify the tolerances in the table.

After you capture the signals, you can select signals from the table to narrow your results. If you do not select signals under Equivalence Criteria, running the test case compares all the logged signals in Simulation 1 and Simulation 2.

For an example of an equivalence test case, see Test Two Simulations for Equivalence.

For the corresponding API, see the captureEquivalenceCriteria method.
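
For example, a minimal sketch, assuming tcEq is an equivalence test case with Simulation 1 already configured:

% Run Simulation 1 and add its logged signals to the equivalence criteria table
criteria = captureEquivalenceCriteria(tcEq);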

Iterations

Use iterations to repeat a test with different parameter values, configuration sets, or input data.

  • You can run multiple simulations with the same inputs, outputs, and criteria by sweeping through different parameter values in a test case.

  • Models, external data files, and Test Sequence blocks can contain multiple test input scenarios. To simplify your test file architecture, you can run different input scenarios as iterations rather than as different test cases. You can apply different baseline data to each iteration, or capture new baseline data from an iteration set.

  • You can iterate over different configuration sets, for example to compare results between solvers or data types. You can also iterate over different scenarios in a Test Sequence block.

To create iterations from defined parameter sets, Signal Editor scenarios, Test Sequence scenarios, external data files, or configuration sets, use table iterations. To create a custom set of iterations from the available test case elements, write a MATLAB iteration script in the test case.

To run the iterations without recompiling the model for each iteration, enable Run test iterations in fast restart. When selected, this option reduces simulation time.

For more information about test iterations, see Test Iterations. For more information about fast restart, see How Fast Restart Improves Iterative Simulations.

For the corresponding API, see sltest.testmanager.TestIteration.
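
For example, a minimal sketch of scripted iterations that sweep a base workspace variable, assuming a test case handle tc and a variable named K used by the model:

for k = [1 5 10]                                      % hypothetical sweep values
    testItr = sltest.testmanager.TestIteration;       % create an iteration
    % Override the base workspace variable K for this iteration (hypothetical variable)
    setVariable(testItr,'Name','K','Source','base workspace','Value',k);
    addIteration(tc,testItr);                         % add the iteration to the test case
end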

Logical and Temporal Assessments

Create temporal assessments using the form-based editor that prompts you for conditions, events, signal values, delays, and responses. When you collapse the individual elements, the editor displays a readable statement summarizing the assessment. See Assess Temporal Logic by Using Temporal Assessments and Logical and Temporal Assessment Syntax for more information.

To copy and paste an assessment or symbol, select the assessment or symbol and right-click to display the context menu. You can select a single assessment or symbol or select multiple assessments or symbols. Alternatively, to copy or paste selected assessments or symbols, use Ctrl+C or Ctrl+V. Pasting the assessment adds it to the end of the assessment list in the current test case. You can also paste to a different test case. The assessments and their symbol names change to the default names in the pasted assessment. You can also use the context menu to delete assessments. To delete symbols, use the Delete button. If you delete an assessment or symbol, you cannot paste it even if you copied it before deleting it.

Assessment Callback

You can define variables and use them in logical and temporal assessment conditions and expressions in the Assessment Callback section.

Define variables by writing a script in the Assessment Callback section. You can map these variables to symbols in the Symbols pane by right-clicking the symbol, selecting Map to expression, and entering the variable name in the Expression field. For information on how to map variables to symbols, see Map to expression under Resolve Assessment Parameter Symbols.

The Assessment Callback section has access to the predefined variables that contain test, simulation, and model data. You can define a variable as a function of this data. For more information, see Define Variables in the Assessment Callback Section. For the corresponding API methods, see setAssessmentsCallback and getAssessmentsCallback.

If your assessments use at least, at most, between, or until syntax, select Extend Results to produce the minimum possible untested results. In some cases, none or not all of the untested results can be tested, so the results still show some untested results. When you extend the test results, previously passing tests might fail. Leave Extend Results selected unless you need to avoid an incompatibility with earlier test results.

Symbol t (time)

The symbol t is automatically bound to simulation time and can be used in logical and temporal assessment conditions. This symbol does not need to be mapped to a variable and is not visible in the Symbols pane. For example, to limit an assessment to times between 5 and 7 seconds, create a Trigger-response assessment and, in the trigger condition, enter t > 5 & t < 7. To avoid unexpected behavior, do not define a new symbol t in the Symbols pane.

Symbol Data Type

If you map a symbol to a discrete data signal that is linearly interpolated, the interpolation is automatically changed to zero-order hold during the assessment evaluation.

Custom Criteria

This section includes an embedded MATLAB editor to define custom pass/fail criteria for your test. Select function customCriteria(test) to enable the criteria script in the editor. Custom criteria operate outside of model run time; the script evaluates after model simulation.

Common uses of custom criteria include verifying signal characteristics or verifying test conditions. MATLAB Unit Test qualifications provide a framework for verification criteria. For example, this custom criteria script gets the last value of the signal PhiRef and verifies that it equals 0:

% Get the last value of PhiRef from the dataset Signals_Req1_3
lastValue = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data(end);
% Verify that the last value equals 0
test.verifyEqual(lastValue,0);

See Process Test Results with Custom Scripts. For a list of MATLAB Unit Test qualifications, see Table of Verifications, Assertions, and Other Qualifications.

You can also define plots in the Custom Criteria section. See Create, Store, and Open MATLAB Figures.

For the corresponding API, see sltest.testmanager.CustomCriteria.

Coverage Settings

Use this section to configure coverage collection for a test file. The settings propagate from the test file to the test suites and test cases in the test file. You can turn off coverage collection or one or more coverage metrics for a test suite or test case, unless your test is a MATLAB-based Simulink test.

For MATLAB-based Simulink tests, you can change the coverage settings only at the test file level. If you change the coverage settings in the Test Manager, the changes are not saved to the MATLAB-based Simulink test script file. If you also set the coverage using the sltest.plugins.ModelCoveragePlugin in a MATLAB-based Simulink test script (.m) file or at the command line, the Test Manager uses the coverage settings from the test script instead of the Test Manager coverage settings.

Coverage is not supported for SIL or PIL blocks.

The coverage collection options are:

  • Record coverage for system under test — Collects coverage for the model or, when included, the component specified in the System Under Test section for each test case. If you are using a test harness, the system under test is the component for which the harness is created. The test harness is not the system under test.

    • For a block diagram, the system under test is the whole block diagram.

    • For a Model block, the system under test is the referenced model.

    • For a subsystem, the system under test is the subsystem.

  • Record coverage for referenced models — Collects coverage for models that are referenced from within the specified system under test. If the test harness references another model, the coverage results are included for that model, too.

  • Exclude inactive variants — Excludes from coverage results these variant blocks that are not active at any time while the test runs:

    • Variant blocks in Simulink with Variant activation time set to startup

    • Variant configurations in Stateflow® charts

    When displaying the test results, if you select or clear this option in the Aggregated Coverage Results section, the coverage results update automatically. For information, see Model Coverage for Variant Blocks (Simulink Coverage).

Note

Coverage settings, including coverage filter files, in the Test Manager override all coverage settings in the model configuration. In the Test Manager, Do not override model settings in the Configuration Settings section and Override model settings in the Simulation Outputs section do not apply to coverage.

By default, the Test Manager includes external MATLAB functions and files in the coverage results. You can exclude external MATLAB functions and files by using set_param(model,'CovExternalEMLEnable','off','CovSFcnEnable','off'); at the command line. Alternatively, you can exclude MATLAB functions and files by using the Include in analysis setting in the Coverage Analyzer app from within the Simulink model.

For more information about collecting coverage, see Collect Coverage in Tests. For the corresponding API, see sltest.testmanager.CoverageSettings.
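
For example, a minimal sketch of enabling coverage programmatically, assuming a test file handle tf and that the metrics string abbreviates decision, condition, and MCDC coverage:

cov = getCoverageSettings(tf);   % coverage settings for the test file
cov.RecordCoverage = true;       % collect coverage for the system under test
cov.MdlRefCoverage = true;       % also collect coverage for referenced models
cov.MetricSettings = 'dcm';      % assumed abbreviations for decision, condition, and MCDC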

For information on the Coverage Metrics options, see Types of Model Coverage (Simulink Coverage).

For information about MATLAB-based Simulink tests, see Using MATLAB-Based Simulink Tests in the Test Manager.

Test File Options

Close open Figures at the end of execution

When your tests generate figures, select this option to clear the working environment of figures after the test execution completes.

For the corresponding API, see the CloseFigures property of sltest.testmanager.Options.

Store MATLAB figures

Select this option to store figures generated during the test with the test file. You can enter MATLAB code that creates figures and plots as a callback or in the test case Custom Criteria section. See Create, Store, and Open MATLAB Figures.

For the corresponding API, see the SaveFigures property of sltest.testmanager.Options.

Generate report after execution

Select Generate report after execution to create a report after the test executes. Selecting this option displays report options that you can set. The settings are saved with the test file.

Note

To enable the options to specify the number of plots per page, select Plots for simulation output and baseline.

By default, the model name, simulation start and stop times, and trigger information are included in the report.

For the corresponding API, see the GenerateReport property of sltest.testmanager.Options.

For detailed reporting information, see Export Test Results and Customize Test Results Reports.
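
For example, a minimal sketch of setting these options programmatically, assuming a test file handle tf and that getOptions returns the sltest.testmanager.Options object for the file:

opts = getOptions(tf);           % options object for the test file (assumed accessor)
opts.CloseFigures = true;        % close figures when execution completes
opts.SaveFigures = true;         % store MATLAB figures with the test file
opts.GenerateReport = true;      % generate a report after execution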

Test File Content

For a MATLAB-based Simulink test, this section displays the contents of the .m file that defines the test. This section appears only if you opened or created a MATLAB-based Simulink test. See Using MATLAB-Based Simulink Tests in the Test Manager.
