getCritic
Get critic from reinforcement learning agent
Syntax

critic = getCritic(agent)

Description

critic = getCritic(agent) returns the critic function approximator object from the specified reinforcement learning agent.
Examples
Modify Critic Parameter Values
Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.
load('DoubleIntegDDPG.mat','agent')
Obtain the critic function approximator from the agent.
critic = getCritic(agent);
Obtain the learnable parameters from the critic.
params = getLearnableParameters(critic)
params = 2×1 cell array
    {[-5.0077 -1.5619 -0.3475 -0.0961 -0.0455 -0.0026]}
    {[0]}
Modify the parameter values. For this example, multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);
Set the parameter values of the critic to the new modified values.
critic = setLearnableParameters(critic,modifiedParams);
Set the critic of the agent to the new modified critic.
setCritic(agent,critic);
Display the new parameter values.

getLearnableParameters(getCritic(agent))
ans = 2×1 cell array
    {[-10.0154 -3.1238 -0.6950 -0.1922 -0.0911 -0.0052]}
    {[0]}
Modify Deep Neural Networks in Reinforcement Learning Agent
Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Control Double Integrator System.
Load the predefined environment.
env = rlPredefinedEnv("DoubleIntegrator-Continuous");

Obtain the observation and action specifications.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Create a PPO agent from the environment observation and action specifications. This agent uses default deep neural networks for its actor and critic.
agent = rlPPOAgent(obsInfo,actInfo);
To modify the deep neural networks within a reinforcement learning agent, you must first extract the actor and critic function approximators.
actor = getActor(agent); critic = getCritic(agent);
Extract the deep neural networks from the actor and critic function approximators.

actorNet = getModel(actor);
criticNet = getModel(critic);
The networks are dlnetwork objects. To view them using the plot function, you must convert them to layerGraph objects.

For example, view the actor network.
plot(layerGraph(actorNet))
To validate a network, use analyzeNetwork. For example, validate the critic network.
analyzeNetwork(criticNet)
You can modify the actor and critic networks and save them back to the agent. To modify the networks, you can use the Deep Network Designer app. To open the app for each network, use the following commands.

deepNetworkDesigner(layerGraph(criticNet))
deepNetworkDesigner(layerGraph(actorNet))
In Deep Network Designer, modify the networks. For example, you can add additional layers to your network. When you modify the networks, do not change the input and output layers of the networks returned by getModel. For more information on building networks, see Build Networks with Deep Network Designer.

To validate a modified network in Deep Network Designer, you must click Analyze for dlnetwork, under the Analysis section. To export the modified network structures to the MATLAB® workspace, generate code for creating the new networks and run this code from the command line. Do not use the exporting option in Deep Network Designer. For an example that shows how to generate and run code, see Create Agent Using Deep Network Designer and Train Using Image Observations.
For this example, the code for creating the modified actor and critic networks is in the createModifiedNetworks helper script.
createModifiedNetworks
Each modified network includes an additional fullyConnectedLayer and reluLayer in their main common path. View the modified actor network.

plot(layerGraph(modifiedActorNet))
After exporting the networks, insert the networks into the actor and critic function approximators.

actor = setModel(actor,modifiedActorNet);
critic = setModel(critic,modifiedCriticNet);
Finally, insert the modified actor and critic function approximators into the agent.
agent = setActor(agent,actor); agent = setCritic(agent,critic);
Input Arguments
agent — Reinforcement learning agent
rlQAgent | rlSARSAAgent | rlDQNAgent | rlPGAgent | rlDDPGAgent | rlTD3Agent | rlACAgent | rlSACAgent | rlPPOAgent | rlTRPOAgent

Reinforcement learning agent that contains a critic, specified as one of the following objects:

- rlQAgent
- rlSARSAAgent
- rlDQNAgent
- rlDDPGAgent
- rlTD3Agent
- rlACAgent
- rlSACAgent
- rlPPOAgent
- rlTRPOAgent
- rlPGAgent (when using a critic to estimate a baseline value function)
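For instance, a PG agent contains a critic only when its baseline is enabled. A minimal sketch, assuming the DoubleIntegrator-Continuous predefined environment used in the examples above:

```matlab
% Sketch: a PG agent has a critic only when UseBaseline is enabled.
env = rlPredefinedEnv("DoubleIntegrator-Continuous");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

opt = rlPGAgentOptions(UseBaseline=true);   % enable the baseline critic
agent = rlPGAgent(obsInfo,actInfo,opt);     % default actor and critic networks

baselineCritic = getCritic(agent);          % rlValueFunction object
```

Without UseBaseline=true, the PG agent contains no critic to extract.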
Output Arguments

critic — Critic
rlValueFunction object | rlQValueFunction object | rlVectorQValueFunction object | two-element row vector of rlQValueFunction objects

Critic object, returned as one of the following:

- rlValueFunction object — Returned when agent is an rlACAgent, rlPGAgent, or rlPPOAgent object.
- rlQValueFunction object — Returned when agent is an rlQAgent, rlSARSAAgent, rlDQNAgent, rlDDPGAgent, or rlTD3Agent object with a single critic.
- rlVectorQValueFunction object — Returned when agent is an rlQAgent, rlSARSAAgent, or rlDQNAgent object with a discrete action space and a vector Q-value function critic.
- Two-element row vector of rlQValueFunction objects — Returned when agent is an rlTD3Agent or rlSACAgent object with two critics.
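You can confirm which of these types you received by checking the class of the returned object. A short sketch, assuming the trained DDPG agent loaded in the first example:

```matlab
% Sketch: inspect the critic type returned from a DDPG agent
% (assumes DoubleIntegDDPG.mat from the first example above).
load('DoubleIntegDDPG.mat','agent')
critic = getCritic(agent);
class(critic)   % a DDPG agent with a single critic returns an rlQValueFunction
```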
Version History