
Train DDPG Agent to Control Double Integrator System

This example shows how to train a deep deterministic policy gradient (DDPG) agent to control a second-order dynamic system modeled in MATLAB®.

For more information on DDPG agents, see Deep Deterministic Policy Gradient Agents. For an example showing how to train a DDPG agent in Simulink®, see Train DDPG Agent to Swing Up and Balance Pendulum.

Double Integrator MATLAB Environment

The reinforcement learning environment for this example is a second-order double-integrator system with a gain. The training goal is to control the position of a mass in the second-order system by applying a force input.

For this environment:

  • The mass starts at an initial position between –4 and 4 units.

  • The force action signal from the agent to the environment is from –2 to 2 N.

  • The observations from the environment are the position and velocity of the mass.

  • The episode terminates if the mass moves more than 5 m from the original position or if |x| < 0.01.

  • The reward r_t, provided at every time step, is a discretization of r(t):

r(t) = -(x(t)'Qx(t) + u(t)'Ru(t))

Here:

  • x is the state vector of the mass.

  • u is the force applied to the mass.

  • Q is the matrix of weights on the control performance; Q = [10 0; 0 1].

  • R is the weight on the control effort; R = 0.01. (A short MATLAB sketch of this reward computation follows this list.)
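As a concrete illustration of this reward, the following short MATLAB sketch (not part of the original example) evaluates the discretized reward for one sample state and force, using the Q and R values listed above.

% Illustrative reward computation for a single time step.
% Assumes x = [position; velocity], as described above.
Q = [10 0; 0 1];    % weight on control performance
R = 0.01;           % weight on control effort
x = [1; 0.5];       % sample state: 1 m from the goal, moving at 0.5 m/s
u = 2;              % sample force in N
r = -(x'*Q*x + u'*R*u)   % -(10.25 + 0.04) = -10.29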

For more information on this model, see Load Predefined Control System Environments.

Create Environment Interface

Create a predefined environment interface for the double integrator system.

env = rlPredefinedEnv("DoubleIntegrator-Continuous")
env = 
  DoubleIntegratorContinuousAction with properties:

             Gain: 1
               Ts: 0.1000
      MaxDistance: 5
    GoalThreshold: 0.0100
                Q: [2x2 double]
                R: 0.0100
         MaxForce: Inf
            State: [2x1 double]
env.MaxForce = Inf;

The interface has a continuous action space where the agent can apply force values from -Inf to Inf to the mass.

Obtain the observation and action information from the environment interface.

obsInfo = getObservationInfo(env);
numObservations = obsInfo.Dimension(1);
actInfo = getActionInfo(env);
numActions = numel(actInfo);
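If you want to confirm the sizes and limits that the networks must match, you can inspect the specification objects directly. This optional check is not part of the original example; Dimension, LowerLimit, and UpperLimit are standard properties of the numeric specification objects returned above.

% Optional check of the environment specifications.
disp(obsInfo.Dimension)                        % expected [2 1]: position and velocity
disp(actInfo.Dimension)                        % expected [1 1]: a single force value
disp([actInfo.LowerLimit actInfo.UpperLimit])  % expected [-Inf Inf]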

Fix the random generator seed for reproducibility.

rng(0)

Create DDPG Agent

A DDPG agent approximates the long-term reward, given observations and actions, using a critic value function representation. To create the critic, first create a deep neural network with two inputs (the state and action) and one output. For more information on creating a neural network value function representation, see Create Policies and Value Functions.

statePath = featureInputLayer(numObservations,'Normalization','none','Name','state');
actionPath = featureInputLayer(numActions,'Normalization','none','Name','action');
commonPath = [
    concatenationLayer(1,2,'Name','concat')
    quadraticLayer('Name','quadratic')
    fullyConnectedLayer(1,'Name','StateValue','BiasLearnRateFactor',0,'Bias',0)];
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = addLayers(criticNetwork,commonPath);
criticNetwork = connectLayers(criticNetwork,'state','concat/in1');
criticNetwork = connectLayers(criticNetwork,'action','concat/in2');

View the critic network configuration.

figure
plot(criticNetwork)

Figure: Layer graph of the critic network.

Specify options for the critic representation using rlOptimizerOptions.

criticOpts = rlOptimizerOptions('LearnRate',5e-3,'GradientThreshold',1);

Create the critic representation using the specified neural network and options. You must also specify the action and observation info for the critic, which you obtain from the environment interface. For more information, see rlQValueFunction.

critic = rlQValueFunction(criticNetwork,obsInfo,actInfo,...
    'ObservationInputNames','state','ActionInputNames','action');
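Before moving on, you can optionally verify that the critic maps an observation-action pair to a scalar value. This short check is not part of the original example; getValue is the standard way to evaluate a value function object, and the random inputs below are placeholders of the correct sizes.

% Optional check: evaluate the untrained critic on a random
% observation-action pair. The output is a single Q-value estimate.
obsSample = {rand(obsInfo.Dimension)};
actSample = {rand(actInfo.Dimension)};
getValue(critic,obsSample,actSample)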

A DDPG agent decides which action to take, given observations, using an actor representation. To create the actor, first create a deep neural network with one input (the observation) and one output (the action).

Construct the actor in a similar manner to the critic.

actorNetwork = [
    featureInputLayer(numObservations,'Normalization','none','Name','state')
    fullyConnectedLayer(numActions,'Name','action','BiasLearnRateFactor',0,'Bias',0)];
actorNetwork = dlnetwork(actorNetwork);
actorOpts = rlOptimizerOptions('LearnRate',1e-04,'GradientThreshold',1);
actor = rlContinuousDeterministicActor(actorNetwork,obsInfo,actInfo);

To create the DDPG agent, first specify the DDPG agent options using rlDDPGAgentOptions.

agentOpts = rlDDPGAgentOptions(...
    'SampleTime',env.Ts,...
    'ActorOptimizerOptions',actorOpts,...
    'CriticOptimizerOptions',criticOpts,...
    'ExperienceBufferLength',1e6,...
    'MiniBatchSize',32);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-6;
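To get a rough feel for how quickly the exploration noise shrinks with these settings, you can estimate how many agent steps it takes for the noise variance to halve. This is an illustrative calculation only, and it assumes the variance is multiplied by (1 - VarianceDecayRate) at every step, which is how the decay is commonly described for these noise options.

% Back-of-the-envelope estimate of the noise variance half-life
% (assumption: Variance <- Variance*(1 - VarianceDecayRate) per step).
decayRate = agentOpts.NoiseOptions.VarianceDecayRate;
stepsToHalve = log(0.5)/log(1 - decayRate)   % roughly 6.9e5 steps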

Create the DDPG agent using the specified actor representation, critic representation, and agent options. For more information, see rlDDPGAgent.

agent = rlDDPGAgent(actor,critic,agentOpts);
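You can query the assembled (but untrained) agent for an action before starting training. This optional check is not part of the original example; it simply confirms that the actor, critic, and options fit together. getAction returns a cell array containing the action, and the random observation below is only a placeholder.

% Optional check: request an action from the untrained agent for a
% random observation of the correct size.
act0 = getAction(agent,{rand(obsInfo.Dimension)});
act0{1}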

Train Agent

To train the agent, first specify the training options. For this example, use the following options.

  • Run at most 5000 episodes in the training session, with each episode lasting at most 200 time steps.

  • Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command-line display (set the Verbose option).

  • Stop training when the agent receives a moving average cumulative reward greater than –66. At this point, the agent can control the position of the mass using minimal control effort.

For more information, see rlTrainingOptions.

trainOpts = rlTrainingOptions(...
    'MaxEpisodes',5000,...
    'MaxStepsPerEpisode',200,...
    'Verbose',false,...
    'Plots','training-progress',...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',-66);

You can visualize the double integrator environment by using the plot function during training or simulation.

plot(env)

Figure: Double Integrator Visualizer.

Train the agent using the train function. Training this agent is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;
if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load('DoubleIntegDDPG.mat','agent');
end

Simulate DDPG Agent

To validate the performance of the trained agent, simulate it within the double integrator environment. For more information on agent simulation, see rlSimulationOptions and sim.

simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);

Figure: Double Integrator Visualizer.

totalReward = sum(experience.Reward)
totalReward = -65.9933
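To look at the closed-loop trajectory rather than only the cumulative reward, you can extract the logged observations from the experience structure returned by sim. This optional sketch assumes the standard logged-experience format, in which experience.Observation is a structure of timeseries objects; the channel name is looked up dynamically so the snippet does not rely on a specific field name.

% Optional: plot the logged position and velocity from the simulation.
obsName = fieldnames(experience.Observation);
obsData = squeeze(experience.Observation.(obsName{1}).Data);  % 2-by-(N+1)
figure
plot(obsData')
xlabel('Time step')
ylabel('State')
legend('Position','Velocity')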
