Create and Run Test Cases with Scripts

For a list of functions and objects in the Simulink® Test™ programmatic interface, see Test Scripts.

Create and Run a Baseline Test Case

This example shows how to use sltest.testmanager functions, classes, and methods to automate tests and generate reports. You can programmatically create a test case, edit the test case criteria, run the test case, export the simulation output, and generate a results report. The example compares the simulation output of the model to a baseline.

% Open the model for this example
openExample('sldemo_absbrake');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'baseline','Baseline API Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
setProperty(tc,'Model','sldemo_absbrake');

% Capture the baseline criteria
baseline = captureBaselineCriteria(tc,'baseline_API.mat',true);

% Test a new model parameter by overriding it in the test case
% parameter set
ps = addParameterSet(tc,'Name','API Parameter Set');
po = addParameterOverride(ps,'m',55);

% Set the baseline criteria tolerance for one signal
sc = getSignalCriteria(baseline);
sc(1).AbsTol = 9;

% Run the test case and return an object with results data
ResultsObj = run(tc);

% Get the test case result and the Sim Output run dataset
tcr = getTestCaseResults(ResultsObj);
runDataset = getOutputRuns(tcr);

% Open the Test Manager so you can view the simulation
% output and comparison data
sltest.testmanager.view;

% Generate a report from the results data
filePath = 'test_report.pdf';
sltest.testmanager.report(ResultsObj,filePath,...
    'Author','Test Engineer',...
    'IncludeSimulationSignalPlots',true,...
    'IncludeComparisonSignalPlots',true);

% Export the Sim Output run dataset
dataset = export(runDataset);

The test case fails because only one of the signal comparisons between the simulation output and the baseline criteria is within tolerance. The results report is a PDF and opens when it is completed. For more report generation settings, see the sltest.testmanager.report function reference page.
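After generating the report, you may want to save the test file so it can be reloaded later and then reset the Test Manager. One possible cleanup sequence, assuming the tf variable from the example above (the file name here is chosen for illustration):

```matlab
% Save the test file to disk so the test case can be reopened later
% ('APITestFile.mldatx' is an example name, not from the original example)
saveToFile(tf,'APITestFile.mldatx');

% Clear results and test files from the Test Manager, then close it
sltest.testmanager.clearResults;
sltest.testmanager.clear;
sltest.testmanager.close;
```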

Create and Run an Equivalence Test Case

This example compares signal data between two simulations to test for equivalence.

% Open the model for this example
openExample('sldemo_absbrake');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'equivalence','Equivalence Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
% for Simulation 1 and Simulation 2
setProperty(tc,'Model','sldemo_absbrake','SimulationIndex',1);
setProperty(tc,'Model','sldemo_absbrake','SimulationIndex',2);

% Add a parameter override to Simulation 1 and 2
ps1 = addParameterSet(tc,'Name','Parameter Set 1','SimulationIndex',1);
po1 = addParameterOverride(ps1,'Rr',1.20);
ps2 = addParameterSet(tc,'Name','Parameter Set 2','SimulationIndex',2);
po2 = addParameterOverride(ps2,'Rr',1.24);

% Capture equivalence criteria
eq = captureEquivalenceCriteria(tc);

% Set the equivalence criteria tolerance for one signal
sc = getSignalCriteria(eq);
sc(1).AbsTol = 2.2;

% Run the test case and return an object with results data
ResultsObj = run(tc);

% Open the Test Manager so you can view the simulation
% output and comparison data
sltest.testmanager.view;

In the Equivalence Criteria Result section of the Test Manager results, the yout.Ww signal passes because of the tolerance value. The other signal comparisons do not pass, and the overall test case fails.
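If you want every compared signal to use the same tolerance, rather than only the first one, you can loop over the criteria array returned by getSignalCriteria. This sketch assumes the eq variable from the example above:

```matlab
% Apply one absolute tolerance to every signal comparison
sc = getSignalCriteria(eq);
for k = 1:numel(sc)
    sc(k).AbsTol = 2.2;
end
```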

Run a Test Case and Collect Coverage

This example shows how to use a simulation test case to collect coverage results. To collect coverage, you need a Simulink Coverage™ license.

% Open the model for this example
openExample('sldemo_autotrans');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'simulation','Coverage Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
setProperty(tc,'Model','sldemo_autotrans');

% Turn on coverage settings at the test-file level
cov = getCoverageSettings(tf);
cov.RecordCoverage = true;

% Enable MCDC and signal range coverage metrics
cov.MetricSettings = 'mr';

% Run the test file and return an object with results data
rs = run(tf);

% Get the coverage results
cr = getCoverageResults(rs);

% Open the Test Manager to view results
sltest.testmanager.view;

In the Results and Artifacts pane of the Test Manager, click Results to view the aggregated coverage results.
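You can also generate a standalone HTML coverage report from the returned coverage data using the Simulink Coverage cvhtml function. A minimal sketch, assuming the cr variable from the example above (the report file name is chosen for illustration):

```matlab
% Generate an HTML report from the coverage data returned by
% getCoverageResults ('coverageReport' is an example file name)
cvhtml('coverageReport',cr);
```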

Create and Run Test Case Iterations

This example shows how to create test iterations. You can create table iterations programmatically that appear in the Iterations section of a test case. The example creates a simulation test case and assigns a Signal Editor scenario for each iteration.

% Open the model for this example
openExample('sldemo_autotrans');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('Iterations Test File');
ts = getTestSuites(tf);
tc = createTestCase(ts,'simulation','Simulation Iterations');

% Specify the model as the system under test
setProperty(tc,'Model','sldemo_autotrans');

% Set up a table iteration
% Create the iteration object
testItr1 = sltestiteration;
% Set iteration settings
setTestParam(testItr1,'SignalEditorScenario','Passing Maneuver');
% Add the iteration to the test case
addIteration(tc,testItr1);

% Set up another table iteration
testItr2 = sltestiteration;
setTestParam(testItr2,'SignalEditorScenario','Coasting');
addIteration(tc,testItr2);

% Run the test case that contains the iterations
results = run(tc);

% Get the iteration results
tcResults = getTestCaseResults(results);
iterResults = getIterationResults(tcResults);
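When a model has many Signal Editor scenarios, creating each iteration by hand is repetitive, and the pattern above can be wrapped in a loop. A sketch, assuming the tc variable from the example above; the scenario list repeats the two names used in the example:

```matlab
% Create one table iteration per Signal Editor scenario
scenarios = {'Passing Maneuver','Coasting'};
for k = 1:numel(scenarios)
    itr = sltestiteration;
    setTestParam(itr,'SignalEditorScenario',scenarios{k});
    addIteration(tc,itr);
end
```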
