
Train DQN Agent for Lane Keeping Assist Using Parallel Computing

This example shows how to train a deep Q-learning network (DQN) agent for lane keeping assist (LKA) in Simulink® using parallel training. For an example that shows how to train the agent without using parallel training, see Train DQN Agent for Lane Keeping Assist (Reinforcement Learning Toolbox).

For more information on DQN agents, see Deep Q-Network Agents (Reinforcement Learning Toolbox). For an example that trains a DQN agent in MATLAB®, see Train DQN Agent to Balance Cart-Pole System (Reinforcement Learning Toolbox).

DQN Parallel Training Overview

In a DQN agent, each worker generates new experiences from its copy of the agent and the environment. After every N steps, the worker sends experiences to the host agent. The host agent updates its parameters as follows (selecting between the two modes is sketched after the list).

  • For asynchronous training, the host agent learns from received experiences without waiting for all workers to send experiences, and sends the updated parameters back to the worker that provided the experiences. Then, the worker continues to generate experiences from its environment using the updated parameters.

  • For synchronous training, the host agent waits to receive experiences from all of the workers and learns from these experiences. The host then sends updated parameters to all the workers at the same time. Then, all workers continue to generate experiences using the updated parameters.
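Both modes are selected through the ParallelizationOptions property of the training options, as shown later in this example. The following minimal sketch is not part of the shipped example: it shows one way to start a worker pool explicitly and choose a mode. The pool size of four and the variable name opts are illustrative choices, and Parallel Computing Toolbox™ is assumed to be available.

% Illustrative sketch only: start four workers explicitly (if no pool exists,
% train typically starts one using the default cluster profile).
if isempty(gcp('nocreate'))
    parpool(4);
end

% Enable parallel training and choose the update mode.
opts = rlTrainingOptions('UseParallel',true);
opts.ParallelizationOptions.Mode = "async";   % use "sync" for synchronous updates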

Simulink Model for Ego Car

The reinforcement learning environment for this example is a simple bicycle model for ego vehicle dynamics. The training goal is to keep the ego vehicle traveling along the centerline of the lanes by adjusting the front steering angle. This example uses the same vehicle model as Train DQN Agent for Lane Keeping Assist (Reinforcement Learning Toolbox).

m = 1575;    % total vehicle mass (kg)
Iz = 2875;   % yaw moment of inertia (mNs^2)
lf = 1.2;    % longitudinal distance from center of gravity to front tires (m)
lr = 1.6;    % longitudinal distance from center of gravity to rear tires (m)
Cf = 19000;  % cornering stiffness of front tires (N/rad)
Cr = 33000;  % cornering stiffness of rear tires (N/rad)
Vx = 15;     % longitudinal velocity (m/s)

Define the sample time Ts and simulation duration T in seconds.

Ts = 0.1; T = 15;

The output of the LKA system is the front steering angle of the ego car. To simulate the physical steering limits of the ego car, constrain the steering angle to the range [–0.5, 0.5] rad.

u_min = -0.5; u_max = 0.5;

The curvature of the road is defined by a constant 0.001 ($\mathrm{m}^{-1}$). The initial value for the lateral deviation is 0.2 m and the initial value for the relative yaw angle is –0.1 rad.

rho = 0.001; e1_initial = 0.2; e2_initial = -0.1;

Open the model.

mdl = 'rlLKAMdl';
open_system(mdl)
agentblk = [mdl '/RL Agent'];

For this model:

  • The steering-angle action signal from the agent to the environment is from –15 degrees to 15 degrees.

  • The observations from the environment are the lateral deviation $e_1$, the relative yaw angle $e_2$, their derivatives $\dot{e}_1$ and $\dot{e}_2$, and their integrals $\int e_1$ and $\int e_2$.

  • The simulation is terminated when the lateral deviation satisfies $|e_1| > 1$.

  • The reward $r_t$, provided at every time step $t$, is

$$r_t = -\left(10 e_1^2 + 5 e_2^2 + 2 u^2 + 5 \dot{e}_1^2 + 5 \dot{e}_2^2\right)$$

where $u$ is the control input from the previous time step $t-1$.
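The reward itself is computed inside the Simulink model. Purely for illustration, a standalone MATLAB sketch of the same expression follows; the variable names e1, e2, e1dot, e2dot, and u are hypothetical and are not variables in the model workspace.

% Illustrative sketch of the reward expression (the model computes this internally).
reward = @(e1,e2,e1dot,e2dot,u) -(10*e1^2 + 5*e2^2 + 2*u^2 + 5*e1dot^2 + 5*e2dot^2);

% Example: initial conditions e1 = 0.2 m, e2 = -0.1 rad, zero rates, zero input.
r = reward(0.2,-0.1,0,0,0)   % returns -0.45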

Create Environment Interface

Create a reinforcement learning environment interface for the ego vehicle.

Define the observation information.

observationInfo = rlNumericSpec([6 1], ...
    'LowerLimit',-inf*ones(6,1), ...
    'UpperLimit',inf*ones(6,1));
observationInfo.Name = 'observations';
observationInfo.Description = 'information on lateral deviation and relative yaw angle';

Define the action information.

actionInfo = rlFiniteSetSpec((-15:15)*pi/180);
actionInfo.Name = 'steering';

Create the environment interface.

env = rlSimulinkEnv(mdl,agentblk,observationInfo,actionInfo);

The interface has a discrete action space where the agent can apply one of 31 possible steering angles from –15 degrees to 15 degrees. The observation is a six-dimensional vector containing the lateral deviation and relative yaw angle, as well as their derivatives and integrals with respect to time.

To define the initial condition for the lateral deviation and relative yaw angle, specify an environment reset function using an anonymous function handle. The reset function localResetFcn, which is defined at the end of this example, randomizes the initial lateral deviation and relative yaw angle.

env.ResetFcn = @(in)localResetFcn(in);

Fix the random generator seed for reproducibility.

rng(0)

Create DQN Agent

DQN agents can use multi-output Q-value critic approximators, which are generally more efficient. A multi-output approximator has observations as inputs and state-action values as outputs. Each output element represents the expected cumulative long-term reward for taking the corresponding discrete action from the state indicated by the observation inputs.

To create the critic, first create a deep neural network with one input (the six-dimensional observed state) and one output vector with 31 elements (evenly spaced steering angles from –15 to 15 degrees). For more information on creating a deep neural network value function representation, see Create Policies and Value Functions (Reinforcement Learning Toolbox).

nI = observationInfo.Dimension(1);   % number of inputs (6)
nL = 120;                            % number of neurons
nO = numel(actionInfo.Elements);     % number of outputs (31)

dnn = [
    featureInputLayer(nI,'Normalization','none','Name','state')
    fullyConnectedLayer(nL,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(nL,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(nO,'Name','fc3')];
dnn = dlnetwork(dnn);

View the network configuration.

figure
plot(layerGraph(dnn))

Specify options for the critic optimizer using rlOptimizerOptions.

criticOptions = rlOptimizerOptions( ...
    'LearnRate',1e-4, ...
    'GradientThreshold',1, ...
    'L2RegularizationFactor',1e-4);

Create the critic representation using the specified deep neural network and options. You must also specify the action and observation info for the critic, which you obtain from the environment interface. For more information, see rlVectorQValueFunction (Reinforcement Learning Toolbox).

critic = rlVectorQValueFunction(dnn,observationInfo,actionInfo);
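As an optional sanity check that is not part of the original example, you can evaluate the untrained critic on a random observation. Because this is a vector Q-value function, getValue should return one Q-value per discrete action, that is, a vector with 31 elements.

% Optional check: evaluate the untrained critic on a random observation.
% The values are meaningless before training; this only confirms the output size.
qValues = getValue(critic,{rand(6,1)});
numel(qValues)   % expected: 31, one Q-value per discrete steering angle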

To create the DQN agent, first specify the DQN agent options using rlDQNAgentOptions (Reinforcement Learning Toolbox).

agentOpts = rlDQNAgentOptions( ...
    'SampleTime',Ts, ...
    'UseDoubleDQN',true, ...
    'CriticOptimizerOptions',criticOptions, ...
    'ExperienceBufferLength',1e6, ...
    'MiniBatchSize',256);
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-4;

Then create the DQN agent using the specified critic representation and agent options. For more information, see rlDQNAgent (Reinforcement Learning Toolbox).

agent = rlDQNAgent(critic,agentOpts);
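Similarly, as a quick optional check that is not part of the original example, you can ask the untrained agent for an action given a random observation; the result should be one of the 31 steering angles defined in actionInfo.

% Optional check: the returned action should be a member of actionInfo.Elements.
act = getAction(agent,{rand(6,1)});
act{1}*180/pi   % steering angle in degrees, between -15 and 15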

Training Options

To train the agent, first specify the training options. For this example, use the following options.

  • Run each training for at most 10000 episodes, with each episode lasting at most ceil(T/Ts) time steps.

  • Do not display the training progress in the Episode Manager dialog box or at the command line (set the Plots option to 'none' and the Verbose option to false).

  • Stop training when the episode reward reaches –1.

  • Save a copy of the agent for each episode where the cumulative reward is greater than 100.

For more information, see rlTrainingOptions (Reinforcement Learning Toolbox).

maxepisodes = 10000;
maxsteps = ceil(T/Ts);
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',maxepisodes, ...
    'MaxStepsPerEpisode',maxsteps, ...
    'Verbose',false, ...
    'Plots','none', ...
    'StopTrainingCriteria','EpisodeReward', ...
    'StopTrainingValue',-1, ...
    'SaveAgentCriteria','EpisodeReward', ...
    'SaveAgentValue',100);

Parallel Training Options

To train the agent in parallel, specify the following training options.

  • Set the UseParallel option to true.

  • Train the agent in parallel asynchronously by setting the ParallelizationOptions.Mode option to "async".

trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = "async";

For more information, see rlTrainingOptions (Reinforcement Learning Toolbox).

Train Agent

Train the agent using the train (Reinforcement Learning Toolbox) function. Training the agent is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true. Due to the randomness of parallel training, you can expect different training results from the plot below. The plot shows the result of training with four workers.

doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load pretrained agent for the example.
    load('SimulinkLKADQNParallel.mat','agent')
end

Simulate DQN Agent

To validate the performance of the trained agent, uncomment the following two lines and simulate the agent within the environment. For more information on agent simulation, see rlSimulationOptions (Reinforcement Learning Toolbox) and sim (Reinforcement Learning Toolbox).

% simOptions = rlSimulationOptions('MaxSteps',maxsteps);
% experience = sim(env,agent,simOptions);

To demonstrate the trained agent using deterministic initial conditions, simulate the model in Simulink.

e1_initial = -0.4; e2_initial = 0.2; sim(mdl)

As shown below, the lateral error (middle plot) and relative yaw angle (bottom plot) are both driven to zero. The vehicle starts off the centerline (–0.4 m) with a nonzero yaw angle error (0.2 rad). The LKA enables the ego car to travel along the centerline after 2.5 seconds. The steering angle (top plot) shows that the controller reaches steady state after 2 seconds.

Local Function

function in = localResetFcn(in)
% Reset: randomize the initial lateral deviation and relative yaw angle.
in = setVariable(in,'e1_initial',0.5*(-1+2*rand));   % random value for lateral deviation
in = setVariable(in,'e2_initial',0.1*(-1+2*rand));   % random value for relative yaw angle
end
