Train AC Agent to Balance Cart-Pole System

This example shows how to train an actor-critic (AC) agent to balance a cart-pole system modeled in MATLAB®.

For more information on AC agents, see Actor-Critic Agents. For an example showing how to train an AC agent using parallel computing, see Train AC Agent to Balance Cart-Pole System Using Parallel Computing.

Cart-Pole MATLAB Environment

The reinforcement learning environment for this example is a pole attached to an unactuated joint on a cart, which moves along a frictionless track. The training goal is to make the pendulum stand upright without falling over.

For this environment:

  • The upward balanced pendulum position is 0 radians, and the downward hanging position is pi radians.

  • The pendulum starts upright with an initial angle between –0.05 and 0.05 rad.

  • The force action signal from the agent to the environment is from –10 to 10 N.

  • The observations from the environment are the position and velocity of the cart, the pendulum angle, and the pendulum angle derivative.

  • The episode terminates if the pole is more than 12 degrees from vertical or if the cart moves more than 2.4 m from the original position.

  • A reward of +1 is provided for every time step that the pole remains upright. A penalty of –5 is applied when the pendulum falls.

For more information on this model, see Load Predefined Control System Environments.
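
As a quick check (not part of the original example), the 12-degree termination threshold quoted above is consistent with the ThetaThresholdRadians property displayed when you create the environment in the next section.

deg2rad(12)
ans = 0.2094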

Create Environment Interface

Create a predefined environment interface for the pendulum.

env = rlPredefinedEnv("CartPole-Discrete")
env = 
  CartPoleDiscreteAction with properties:

                  Gravity: 9.8000
                 MassCart: 1
                 MassPole: 0.1000
                   Length: 0.5000
                 MaxForce: 10
                       Ts: 0.0200
    ThetaThresholdRadians: 0.2094
               XThreshold: 2.4000
      RewardForNotFalling: 1
        PenaltyForFalling: -5
                    State: [4x1 double]

env.PenaltyForFalling = -10;

The interface has a discrete action space where the agent can apply one of two possible force values to the cart, –10 or 10 N.

Obtain the observation and action information from the environment interface.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
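
Optionally, you can inspect these specification objects to confirm the layer sizes used later in this example. This check assumes, as is the case for this predefined environment, that obsInfo is an rlNumericSpec and actInfo is an rlFiniteSetSpec.

obsInfo.Dimension  % [4 1], four observations
actInfo.Elements   % the two possible force values, -10 and 10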

Fix the random generator seed for reproducibility.

rng(0)

Create AC Agent

An AC agent approximates the long-term reward, given observations, using a critic value function representation. To create the critic, first create a deep neural network with one input (the observation) and one output (the state value). The input size of the critic network is 4 since the environment has four observations. For more information on creating a deep neural network value function representation, see Create Policies and Value Functions.

criticNetwork = [
    featureInputLayer(4,'Normalization','none','Name','state')
    fullyConnectedLayer(32,'Name','CriticStateFC1')
    reluLayer('Name','CriticRelu1')
    fullyConnectedLayer(1,'Name','CriticFC')];
criticNetwork = dlnetwork(criticNetwork);

Specify options for the critic representation using rlOptimizerOptions.

criticOpts = rlOptimizerOptions('LearnRate',1e-2,'GradientThreshold',1);

Create the critic representation using the specified deep neural network. You must also specify the observation information for the critic, which you obtain from the environment interface. For more information, see rlValueFunction.

critic = rlValueFunction(criticNetwork,obsInfo);
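
As an optional sanity check (not part of the original example), you can evaluate the untrained critic for a random observation using getValue. Before training, the returned state value is essentially arbitrary.

getValue(critic,{rand(obsInfo.Dimension)})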

An AC agent decides which action to take, given observations, using an actor representation. To create the actor, create a deep neural network with one input (the observation) and one output (the action). The output size of the actor network is 2 since the environment has two possible actions, –10 and 10.

Construct the actor in a similar manner to the critic. For more information, see rlDiscreteCategoricalActor.

actorNetwork = [
    featureInputLayer(4,'Normalization','none','Name','state')
    fullyConnectedLayer(32,'Name','ActorStateFC1')
    reluLayer('Name','ActorRelu1')
    fullyConnectedLayer(2,'Name','ActorStateFC2')
    softmaxLayer('Name','actionProb')];
actorNetwork = dlnetwork(actorNetwork);

actorOpts = rlOptimizerOptions('LearnRate',1e-2,'GradientThreshold',1);

actor = rlDiscreteCategoricalActor(actorNetwork,obsInfo,actInfo);
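
Optionally, you can sample an action from the untrained actor for a random observation using getAction (this snippet is not part of the original example). Because the actor represents a stochastic policy, repeated calls can return either of the two force values.

getAction(actor,{rand(obsInfo.Dimension)})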

To create the AC agent, first specify the AC agent options using rlACAgentOptions.

agentOpts = rlACAgentOptions(...
    'ActorOptimizerOptions',actorOpts,...
    'CriticOptimizerOptions',criticOpts,...
    'EntropyLossWeight',0.01);

Then create the agent using the actor and critic representations and the specified agent options. For more information, see rlACAgent.

agent = rlACAgent(actor,critic,agentOpts);

Train Agent

To train the agent, first specify the training options. For this example, use the following options.

  • Run each training session for at most 1000 episodes, with each episode lasting at most 500 time steps.

  • Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).

  • Stop training when the agent receives an average cumulative reward greater than 480 over 10 consecutive episodes. At this point, the agent can balance the pendulum in the upright position.

For more information, see rlTrainingOptions.

trainOpts = rlTrainingOptions(...
    'MaxEpisodes',1000,...
    'MaxStepsPerEpisode',500,...
    'Verbose',false,...
    'Plots','training-progress',...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',480,...
    'ScoreAveragingWindowLength',10);

You can visualize the cart-pole system during training or simulation using the plot function.

plot(env)

Figure Cart Pole Visualizer contains an axes object. The axes object contains 6 objects of type line, polygon.

Train the agent using the train function. Training this agent is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load the pretrained agent for the example.
    load('MATLABCartpoleAC.mat','agent');
end

Simulate AC Agent

To validate the performance of the trained agent, simulate it within the cart-pole environment. For more information on agent simulation, see rlSimulationOptions and sim.

simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);

Figure Cart Pole Visualizer contains an axes object. The axes object contains 6 objects of type line, polygon.

totalReward = sum(experience.Reward)
totalReward = 500

See Also

Related Topics