Train Reinforcement Learning Agents
Once you have created an environment and a reinforcement learning agent, you can train the agent in the environment using the train function. To configure your training, use the rlTrainingOptions function. For example, create a training option set opt, and train agent agent in environment env.
opt = rlTrainingOptions(...
    'MaxEpisodes',1000,...
    'MaxStepsPerEpisode',1000,...
    'StopTrainingCriteria',"AverageReward",...
    'StopTrainingValue',480);
trainStats = train(agent,env,opt);
For more information on creating agents, see Reinforcement Learning Agents. For more information on creating environments, see Create MATLAB Reinforcement Learning Environments and Create Simulink Reinforcement Learning Environments.
train updates the agent as training progresses. To preserve the original agent parameters for later use, save the agent to a MAT-file.
save("initialAgent.mat","agent")
Training terminates automatically when the conditions you specify in the StopTrainingCriteria and StopTrainingValue options of your rlTrainingOptions object are satisfied. To manually terminate training in progress, press Ctrl+C or, in the Reinforcement Learning Episode Manager, click Stop Training. Because train updates the agent at each episode, you can resume training by calling train(agent,env,opt) again, without losing the trained parameters learned during the first call to train.
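For instance, an interrupt-and-resume workflow might look like the following sketch, assuming agent, env, and opt already exist in the workspace:

```matlab
% First training run; stop early with Ctrl+C or Stop Training if needed.
trainStats1 = train(agent,env,opt);

% The agent object retains the parameters it has learned so far, so
% calling train again continues from the current parameters rather
% than restarting from scratch.
trainStats2 = train(agent,env,opt);
```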
Training Algorithm
In general, training performs the following steps.
Initialize the agent.
For each episode:
Reset the environment.
Get the initial observation s0 from the environment.
Compute the initial action a0 = μ(s0), where μ(s) is the current policy.
Set the current action to the initial action (a ← a0), and set the current observation to the initial observation (s ← s0).
While the episode is not finished or terminated, perform the following steps.
Apply action a to the environment and obtain the next observation s' and the reward r.
Learn from the experience set (s,a,r,s').
Compute the next action a' = μ(s').
Update the current action with the next action (a ← a') and update the current observation with the next observation (s ← s').
Terminate the episode if the termination conditions defined in the environment are met.
If the training termination condition is met, terminate training. Otherwise, begin the next episode.
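The steps above can be sketched as the following MATLAB-style pseudocode. Note that resetEnv, getObservation, policy, stepEnv, learn, isDone, and trainingCriteriaMet are placeholder names for illustration, not actual toolbox functions:

```matlab
for episode = 1:maxEpisodes
    resetEnv(env);                    % reset the environment
    s = getObservation(env);          % initial observation s0
    a = policy(agent,s);              % initial action a0 = mu(s0)
    while ~isDone(env)
        [sNext,r] = stepEnv(env,a);   % apply a, observe s' and reward r
        learn(agent,s,a,r,sNext);     % learn from experience (s,a,r,s')
        a = policy(agent,sNext);      % next action a' = mu(s')
        s = sNext;                    % s <- s'
    end
    if trainingCriteriaMet
        break                         % terminate training
    end
end
```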
The specifics of how the software performs these steps depend on the configuration of the agent and environment. For instance, resetting the environment at the start of each episode can include randomizing initial state values, if you configure your environment to do so. For more information on agents and their training algorithms, see Reinforcement Learning Agents. To use parallel processing and GPUs to speed up training, see Train Agents Using Parallel Computing and GPUs.
Episode Manager
By default, calling the train function opens the Reinforcement Learning Episode Manager, which lets you visualize the training progress. The Episode Manager plot shows the reward for each episode (EpisodeReward) and a running average reward value (AverageReward). Also, for agents that have critics, the plot shows the critic's estimate of the discounted long-term reward at the start of each episode (EpisodeQ0). The Episode Manager also displays various episode and training statistics. You can also use the train function to return episode and training information.
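For example, you can capture the return value of train and inspect its statistics afterward. This sketch assumes the returned statistics expose per-episode fields named after the Episode Manager quantities:

```matlab
trainStats = train(agent,env,opt);

% Inspect the recorded training statistics (assumed field names).
rewardHistory = trainStats.EpisodeReward;   % reward for each episode
avgReward     = trainStats.AverageReward;   % running average reward
```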
For agents with a critic, Episode Q0 is the estimate of the discounted long-term reward at the start of each episode, given the initial observation of the environment. As training progresses, if the critic is well designed, Episode Q0 approaches the true discounted long-term reward, as shown in the preceding figure.
To turn off the Reinforcement Learning Episode Manager, set the Plots option of rlTrainingOptions to "none".
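For example, the following sketch suppresses the Episode Manager while keeping command-line progress output (the Verbose setting is shown for illustration):

```matlab
opt = rlTrainingOptions('Plots',"none",'Verbose',true);
```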
Save Candidate Agents
During training, you can save candidate agents that meet conditions you specify in the SaveAgentCriteria and SaveAgentValue options of your rlTrainingOptions object. For instance, you can save any agent whose episode reward exceeds a certain value, even if the overall condition for terminating training is not yet satisfied. For example, save agents when the episode reward is greater than 100.
opt = rlTrainingOptions('SaveAgentCriteria',"EpisodeReward",'SaveAgentValue',100);
train stores saved agents in a MAT-file in the folder you specify using the SaveAgentDirectory option of rlTrainingOptions. Saved agents can be useful, for instance, to test candidate agents generated during a long-running training process. For details about saving criteria and saving location, see rlTrainingOptions.
After training is complete, you can save the final trained agent from the MATLAB® workspace using the save function. For example, save agent agent to the file finalAgent.mat in the folder specified by the SaveAgentDirectory training option.
save(opt.SaveAgentDirectory + "/finalAgent.mat",'agent')
By default, when DDPG and DQN agents are saved, the experience buffer data is not saved. If you plan to further train your saved agent, you can start training with the previous experience buffer as a starting point. In this case, set the SaveExperienceBufferWithAgent option to true. For some agents, such as those with large experience buffers and image-based observations, the memory required for saving the experience buffer is large. In these cases, you must ensure that enough memory is available for the saved agents.
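For a DQN agent, for instance, this could look like the following sketch; here the option is assumed to live in the agent options object rather than the training options:

```matlab
% Keep the experience buffer when the agent is saved, so a later
% training session can reuse the collected experiences.
agentOpts = rlDQNAgentOptions('SaveExperienceBufferWithAgent',true);
```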
Validate Trained Policy
To validate your trained agent, you can simulate the agent within the training environment using the sim function. To configure the simulation, use rlSimulationOptions.
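For example, a minimal validation run might look like this, assuming env and agent already exist (the MaxSteps value is illustrative):

```matlab
% Limit each simulation episode to 500 steps, then simulate the
% trained agent in the environment and collect the experiences.
simOpts = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOpts);
```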
When validating your agent, consider checking how your agent handles the following:
Changes to simulation initial conditions — To change the model initial conditions, modify the reset function for the environment. For example reset functions, see Create MATLAB Environment Using Custom Functions, Create Custom MATLAB Environment from Template, and Create Simulink Reinforcement Learning Environments.
Mismatches between the training and simulation environment dynamics — To check such mismatches, create test environments in the same way that you created the training environment, modifying the environment behavior.
As with parallel training, if you have Parallel Computing Toolbox™ software, you can run multiple parallel simulations on multicore computers. If you have MATLAB Parallel Server™ software, you can run multiple parallel simulations on computer clusters or cloud resources. For more information on configuring your simulation to use parallel computing, see UseParallel and ParallelizationOptions in rlSimulationOptions.
Environment Visualization
If your training environment implements the plot method, you can visualize the environment behavior during training and simulation. If you call plot(env) before training or simulation, where env is your environment object, then the visualization updates during training to allow you to visualize the progress of each episode or simulation.
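For example, assuming env, agent, and opt are already defined:

```matlab
plot(env)                           % open the environment visualization
trainStats = train(agent,env,opt);  % the plot updates as episodes run
```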
Environment visualization is not supported when training or simulating your agent using parallel computing.
For custom environments, you must implement your own plot method. For more information on creating a custom environment with a plot function, see Create Custom MATLAB Environment from Template.