
rlNumericSpec

Create continuous action or observation data specifications for reinforcement learning environments

Description

An rlNumericSpec object specifies continuous action or observation data specifications for reinforcement learning environments.

Creation

Description


spec = rlNumericSpec(dimension) creates a data specification for continuous actions or observations and sets the Dimension property.

spec = rlNumericSpec(dimension,Name,Value) sets properties of the data specification using one or more name-value pair arguments.
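For instance, a minimal sketch of the name-value syntax, creating a specification for a bounded 2-by-1 continuous action vector (the limit names match the LowerLimit and UpperLimit properties listed below; the Name value is illustrative):

```matlab
% Specification for a 2-by-1 continuous action vector
% whose entries are bounded between -1 and 1.
actInfo = rlNumericSpec([2 1], ...
    'LowerLimit',-1, ...
    'UpperLimit',1);
actInfo.Name = 'steering and throttle';
```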

Properties


Lower limit of the data space, specified as a scalar or a matrix of the same size as the data space. When LowerLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.

Upper limit of the data space, specified as a scalar or a matrix of the same size as the data space. When UpperLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.
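As a sketch of the scalar-versus-matrix behavior described above, the two limit properties can mix forms: here the scalar lower limit applies to every entry, while the upper limit is given element by element (the specific values are illustrative):

```matlab
% Scalar lower limit (-5) applies to all entries of the 2-by-1
% data space; the upper limit is specified per element.
spec = rlNumericSpec([2 1], ...
    'LowerLimit',-5, ...
    'UpperLimit',[10; 20]);
```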

Name of the rlNumericSpec object, specified as a string.

Description of the rlNumericSpec object, specified as a string.

This property is read-only.

Dimension of the data space, specified as a numeric vector.

This property is read-only.

Information about the type of data, specified as a string.
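Because DataType is read-only, it would typically be set when the specification is created. A hedged sketch, assuming the 'DataType' name-value pair is accepted at construction:

```matlab
% Request single-precision observation data at construction
% (assumes 'DataType' is supported as a name-value argument).
obsInfo = rlNumericSpec([4 1],'DataType','single');
```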

Object Functions

rlSimulinkEnv Create reinforcement learning environment using dynamic model implemented in Simulink
rlFunctionEnv Specify custom reinforcement learning environment dynamics using functions
rlRepresentation (Not recommended) Model representation for reinforcement learning agents

Examples


For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that initially hangs in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information, respectively.

obsInfo = rlNumericSpec([3 1]) % vector of 3 observations: sin(theta), cos(theta), d(theta)/dt

obsInfo =
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([-2 0 2]) % 3 possible values for torque: -2 Nm, 0 Nm and 2 Nm

actInfo =
  rlFiniteSetSpec with properties:

       Elements: [3x1 double]
           Name: [0x0 string]
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

You can use dot notation to assign property values for the rlNumericSpec and rlFiniteSetSpec objects.

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Assign the agent block path information, and create the reinforcement learning environment for the Simulink model using the information extracted in the previous steps.

agentBlk = [mdl '/RL Agent'];
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)

env =
SimulinkEnvWithAgent with properties:

             Model : rlSimplePendulumModel
        AgentBlock : rlSimplePendulumModel/RL Agent
          ResetFcn : []
    UseFastRestart : on

You can also include a reset function using dot notation. For this example, randomly initialize theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)

env =
SimulinkEnvWithAgent with properties:

             Model : rlSimplePendulumModel
        AgentBlock : rlSimplePendulumModel/RL Agent
          ResetFcn : @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart : on

Version History

Introduced in R2019a