
Load Predefined Simulink Environments

Reinforcement Learning Toolbox™ software provides predefined Simulink® environments for which the actions, observations, rewards, and dynamics are already defined. You can use these environments to:

  • Learn reinforcement learning concepts.

  • Gain familiarity with Reinforcement Learning Toolbox software features.

  • Test your own reinforcement learning agents.

You can load the following predefined Simulink environments using the rlPredefinedEnv function.

  • Simple pendulum Simulink model: Swing up and balance a simple pendulum using either a discrete or continuous action space.

  • Cart-pole Simscape™ model: Balance a pole on a moving cart by applying forces to the cart using either a discrete or continuous action space.

For predefined Simulink environments, the environment dynamics, observations, and reward signal are defined in a corresponding Simulink model. The rlPredefinedEnv function creates a SimulinkEnvWithAgent object, which the train function uses to interact with the Simulink model.

Simple Pendulum Simulink Model

This environment is a simple frictionless pendulum that initially hangs in a downward position. The training goal is to make the pendulum stand upright without falling over using minimal control effort. The model for this environment is defined in the rlSimplePendulumModel Simulink model.

open_system('rlSimplePendulumModel')

Simulink model of a pendulum system in a feedback loop with an agent block.

There are two simple pendulum environment variants, which differ by the agent action space.

  • Discrete — Agent can apply a torque of either Tmax, 0, or -Tmax to the pendulum, where Tmax is the max_tau variable in the model workspace.

  • Continuous — Agent can apply any torque within the range [-Tmax, Tmax].

To create a simple pendulum environment, use the rlPredefinedEnv function.

  • Discrete action space

    env = rlPredefinedEnv('SimplePendulumModel-Discrete');
  • Continuous action space

    env = rlPredefinedEnv('SimplePendulumModel-Continuous');


Actions

In the simple pendulum environments, the agent interacts with the environment using a single action signal, the torque applied at the base of the pendulum. The environment contains a specification object for this action signal. For the environment with a:

  • Discrete action space, the action specification is an rlFiniteSetSpec object.

  • Continuous action space, the action specification is an rlNumericSpec object.

For more information on obtaining action specifications from an environment, see getActionInfo.

Observations

In the simple pendulum environment, the agent receives the following three observation signals, which are constructed within the create observations subsystem.

  • Sine of the pendulum angle

  • Cosine of the pendulum angle

  • Derivative of the pendulum angle
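As a sketch, the three observation signals above can be assembled from the pendulum state. The following Python snippet is illustrative only (the actual signals are constructed by Simulink blocks in the model); it shows why the angle is encoded as a sine/cosine pair:

```python
import math

def pendulum_observation(theta, theta_dot):
    """Encode the pendulum state as the three observation signals:
    sine and cosine of the angle, plus its derivative. The (sin, cos)
    pair gives a smooth, unambiguous encoding of the angle, avoiding
    the wrap-around discontinuity of a raw angle signal."""
    return [math.sin(theta), math.cos(theta), theta_dot]
```

For example, at the upright position (theta = 0) with angular velocity 2, the observation is [0.0, 1.0, 2.0].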

For each observation signal, the environment contains an rlNumericSpec observation specification. All the observations are continuous and unbounded.

For more information on obtaining observation specifications from an environment, see getObservationInfo.

Reward

The reward signal for this environment, which is constructed in the calculate reward subsystem, is

rt = −(θt² + 0.1 θ̇t² + 0.001 ut−1²)

Here:

  • θt is the pendulum angle of displacement from the upright position.

  • θ̇t is the derivative of the pendulum angle.

  • ut−1 is the control effort from the previous time step.
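Numerically, this reward is a pure penalty: it is zero only when the pendulum is upright, motionless, and unactuated, and negative everywhere else, so maximizing return drives the agent toward the upright equilibrium with minimal torque. An illustrative Python version (not the Simulink implementation) of the same formula:

```python
def pendulum_reward(theta, theta_dot, u_prev):
    """r_t = -(theta^2 + 0.1*theta_dot^2 + 0.001*u_prev^2)
    theta: angle from the upright position (rad)
    theta_dot: angular velocity (rad/s)
    u_prev: torque applied at the previous time step"""
    return -(theta**2 + 0.1 * theta_dot**2 + 0.001 * u_prev**2)
```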

Cart-Pole Simscape Model

The goal of the agent in the predefined cart-pole environments is to balance a pole on a moving cart by applying horizontal forces to the cart. The pole is considered successfully balanced if both of the following conditions are satisfied:

  • The pole angle remains within a given threshold of the vertical position, where the vertical position is zero radians.

  • The magnitude of the cart position remains below a given threshold.
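The two conditions above can be read as a simple predicate on the state. In this Python sketch, the 3.5 m cart limit matches the cart limit penalty described later in this section, while the angle threshold is an assumed placeholder, not a value taken from the model:

```python
def pole_balanced(theta, x, theta_threshold=0.26, x_threshold=3.5):
    """True while the pole counts as balanced: the pole angle stays
    within theta_threshold of vertical (0 rad) and the cart position
    magnitude stays below x_threshold. theta_threshold here is an
    illustrative assumption."""
    return abs(theta) < theta_threshold and abs(x) < x_threshold
```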

The model for this environment is defined in the rlCartPoleSimscapeModel Simulink model. The dynamics of this model are defined using Simscape Multibody™.

open_system('rlCartPoleSimscapeModel')

Simulink model of an environment in a feedback loop with an agent block.

In the Environment subsystem, the model dynamics are defined using Simscape components and the reward and observation are constructed using Simulink blocks.

open_system('rlCartPoleSimscapeModel/Environment')

Simulink model of a cart-pole system.

There are two cart-pole environment variants, which differ by the agent action space.

  • Discrete — Agent can apply a force of 15, 0, or -15 to the cart.

  • Continuous — Agent can apply any force within the range [-15, 15].

To create a cart-pole environment, use the rlPredefinedEnv function.

  • Discrete action space

    env = rlPredefinedEnv('CartPoleSimscapeModel-Discrete');
  • Continuous action space

    env = rlPredefinedEnv('CartPoleSimscapeModel-Continuous');

For an example that trains an agent in this cart-pole environment, see Train DDPG Agent to Swing Up and Balance Cart-Pole System.

Actions

In the cart-pole environments, the agent interacts with the environment using a single action signal, the force applied to the cart. The environment contains a specification object for this action signal. For the environment with a:

  • Discrete action space, the action specification is an rlFiniteSetSpec object.

  • Continuous action space, the action specification is an rlNumericSpec object.

For more information on obtaining action specifications from an environment, see getActionInfo.

Observations

In the cart-pole environment, the agent receives the following five observation signals.

  • Sine of the pole angle

  • Cosine of the pole angle

  • Derivative of the pole angle

  • Cart position

  • Derivative of cart position

For each observation signal, the environment contains an rlNumericSpec observation specification. All the observations are continuous and unbounded.

For more information on obtaining observation specifications from an environment, see getObservationInfo.

Reward

The reward signal for this environment is the sum of two components (r = rqr + rp):

  • A quadratic regulator control reward, constructed in the Environment/qr reward subsystem.

    rqr = −(0.1 x² + 0.5 θ² + 0.005 ut−1²)

  • A cart limit penalty, constructed in the Environment/x limit penalty subsystem. This subsystem generates a negative reward when the magnitude of the cart position exceeds a given threshold.

    rp = −100 (|x| ≥ 3.5)

Here:

  • x is the cart position.

  • θ is the pole angle of displacement from the upright position.

  • ut−1 is the control effort from the previous time step.
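Putting the two components together, an illustrative Python version of the cart-pole reward (again, the actual signal is constructed by Simulink blocks in the Environment subsystem):

```python
def cartpole_reward(x, theta, u_prev, x_limit=3.5):
    """Sum of a quadratic regulator penalty and a flat -100 penalty
    applied whenever the cart position magnitude reaches x_limit."""
    r_qr = -(0.1 * x**2 + 0.5 * theta**2 + 0.005 * u_prev**2)
    r_p = -100.0 if abs(x) >= x_limit else 0.0
    return r_qr + r_p
```

The quadratic term shapes the reward continuously toward the centered, upright state, while the limit penalty sharply discourages the cart from leaving its track.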
