
setActor

Set actor of reinforcement learning agent

Description


agent = setActor(agent,actor) updates the reinforcement learning agent, agent, to use the specified actor object, actor.

Examples


Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent')

Obtain the actor function approximator from the agent.

actor = getActor(agent);

Obtain the learnable parameters from the actor.

params = getLearnableParameters(actor)
params=2×1 cell array
    {[-15.4689 -7.1635]}
    {[               0]}

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameters(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

setActor(agent,actor);

Display the new parameter values.

getLearnableParameters(getActor(agent))
ans=2×1 cell array
    {[-30.9378 -14.3269]}
    {[                0]}

Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Control Double Integrator System.

Load the predefined environment.

env = rlPredefinedEnv("DoubleIntegrator-Continuous");

Obtain the observation and action specifications.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create a PPO agent from the environment observation and action specifications. This agent uses default deep neural networks for its actor and critic.

agent = rlPPOAgent(obsInfo,actInfo);

To modify the deep neural networks within a reinforcement learning agent, you must first extract the actor and critic function approximators.

actor = getActor(agent); critic = getCritic(agent);

Extract the deep neural networks from both the actor and critic function approximators.

actorNet = getModel(actor);
criticNet = getModel(critic);

The networks are dlnetwork objects. To view them using the plot function, you must convert them to layerGraph objects.

For example, view the actor network.

plot(layerGraph(actorNet))


To validate a network, use analyzeNetwork. For example, validate the critic network.

analyzeNetwork(criticNet)

You can modify the actor and critic networks and save them back to the agent. To modify the networks, you can use the Deep Network Designer app. To open the app for each network, use the following commands.

deepNetworkDesigner(layerGraph(criticNet))
deepNetworkDesigner(layerGraph(actorNet))

In Deep Network Designer, modify the networks. For example, you can add additional layers to your network. When you modify the networks, do not change the input and output layers of the networks returned by getModel. For more information on building networks, see Build Networks with Deep Network Designer.

To validate a modified network in Deep Network Designer, you must click Analyze for dlnetwork, under the Analysis section. To export the modified network structures to the MATLAB® workspace, generate code for creating the new networks and run this code from the command line. Do not use the exporting option in Deep Network Designer. For an example that shows how to generate and run code, see Create Agent Using Deep Network Designer and Train Using Image Observations.

For this example, the code for creating the modified actor and critic networks is in the createModifiedNetworks helper script.

createModifiedNetworks
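The contents of the createModifiedNetworks helper script are not shown here. The following is a rough sketch of the kind of edit such a script performs; the layer names ('fc_body', 'relu_body', 'fc_new', and 'relu_new') are hypothetical placeholders, not the actual layer names in the default PPO networks.

% Sketch only: layer and connection names are hypothetical placeholders.
actorGraph = layerGraph(actorNet);

% Extra layers to splice into the main common path.
newLayers = [
    fullyConnectedLayer(64,'Name','fc_new')
    reluLayer('Name','relu_new')];
actorGraph = addLayers(actorGraph,newLayers);

% Reroute an existing connection through the new layers.
actorGraph = disconnectLayers(actorGraph,'fc_body','relu_body');
actorGraph = connectLayers(actorGraph,'fc_body','fc_new');
actorGraph = connectLayers(actorGraph,'relu_new','relu_body');

% Rebuild the dlnetwork that setModel expects.
modifiedActorNet = dlnetwork(actorGraph);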

Each modified network includes an additional fullyConnectedLayer and reluLayer in its main common path. View the modified actor network.

plot(layerGraph(modifiedActorNet))


After exporting the networks, insert the networks into the actor and critic function approximators.

actor = setModel(actor,modifiedActorNet);
critic = setModel(critic,modifiedCriticNet);

Finally, insert the modified actor and critic function approximators into the agent.

agent = setActor(agent,actor); agent = setCritic(agent,critic);

Input Arguments


Reinforcement learning agent that contains an actor, specified as one of the following:

  • rlDDPGAgent object
  • rlTD3Agent object
  • rlACAgent object
  • rlPGAgent object
  • rlPPOAgent object
  • rlTRPOAgent object
  • rlSACAgent object

Note

agent is a handle object. Therefore, the agent is updated by setActor whether or not agent is returned as an output argument. For more information about handle objects, see Handle Object Behavior.
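For example (a minimal sketch, assuming agent already holds a compatible actor object), the following call updates the agent even though its output is not captured.

% Because agent is a handle object, this call updates it in place.
setActor(agent,actor);

% The agent now uses the new actor.
updatedActor = getActor(agent);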

Actor object, specified as one of the following:

  • rlContinuousDeterministicActor object — Specify when agent is an rlDDPGAgent or rlTD3Agent object.

  • rlDiscreteCategoricalActor object — Specify when agent is an rlACAgent, rlPGAgent, rlPPOAgent, rlTRPOAgent, or rlSACAgent object for an environment with a discrete action space.

  • rlContinuousGaussianActor object — Specify when agent is an rlACAgent, rlPGAgent, rlPPOAgent, rlTRPOAgent, or rlSACAgent object for an environment with a continuous action space.

The input and output of the approximation model within the actor (typically, a neural network) must match the observation and action specifications of the original agent.

To create an actor, use one of the following methods:

  • Create the actor using the corresponding function approximator object (see the sketch after this list).

  • Obtain the existing actor from an agent using getActor.
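As a minimal sketch of the first method, the following creates a deterministic actor for a continuous-action agent such as DDPG and sets it in the agent. The network architecture is illustrative only; any dlnetwork whose input and output sizes match the specifications works.

% Observation and action specifications from the environment.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Illustrative network: the input size matches the observation
% dimension and the output size matches the action dimension.
net = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(prod(actInfo.Dimension))];
net = dlnetwork(net);

% Wrap the network in an actor object and set it in the agent.
actor = rlContinuousDeterministicActor(net,obsInfo,actInfo);
agent = setActor(agent,actor);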

Output Arguments


Updated agent, returned as an agent object. Note that agent is a handle object. Therefore, its actor is updated by setActor whether or not agent is returned as an output argument. For more information about handle objects, see Handle Object Behavior.

Version History

Introduced in R2019a