rlDeterministicActorRepresentation
(Not recommended) Deterministic actor representation for reinforcement learning agents
rlDeterministicActorRepresentation is not recommended. Use rlContinuousDeterministicActor instead. For more information, see rlDeterministicActorRepresentation is not recommended.
Description
This object implements a function approximator to be used as a deterministic actor within a reinforcement learning agent with a continuous action space. A deterministic actor takes observations as inputs and returns as output the action that maximizes the expected cumulative long-term reward, thereby implementing a deterministic policy. After you create an rlDeterministicActorRepresentation object, use it to create a suitable agent, such as an rlDDPGAgent agent. For more information on creating representations, see Create Policies and Value Functions.
Creation
Syntax
Description
actor = rlDeterministicActorRepresentation(net,observationInfo,actionInfo,'Observation',obsName,'Action',actName) creates a deterministic actor using the deep neural network net as approximator. This syntax sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, which contain the specifications for observations and actions, respectively. actionInfo must specify a continuous action space; discrete action spaces are not supported. obsName must contain the names of the input layers of net that are associated with the observation specifications. The action names actName must be the names of the output layers of net that are associated with the action specifications.
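As an illustration of this syntax, the following sketch creates an actor from a small deep network. The observation and action dimensions, layer sizes, and layer names ('state', 'action') are assumptions chosen for the example, not values required by the function.

```matlab
% Assumed specs: 4-dimensional observation, 2-dimensional continuous action
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1],'LowerLimit',-1,'UpperLimit',1);

% Deep network mapping observations to actions; layer names are illustrative
% and must match the names passed to 'Observation' and 'Action'
net = [
    featureInputLayer(4,'Normalization','none','Name','state')
    fullyConnectedLayer(16,'Name','fc')
    reluLayer('Name','relu')
    fullyConnectedLayer(2,'Name','action')];

actor = rlDeterministicActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'});
```

The final fully connected layer must have as many outputs as the action specification has dimensions, since the network output is taken directly as the action.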
actor = rlDeterministicActorRepresentation({basisFcn,W0},observationInfo,actionInfo) creates a deterministic actor using a custom basis function as underlying approximator. The first input argument is a two-element cell array whose first element contains the handle basisFcn to a custom basis function, and whose second element contains the initial weight matrix W0. This syntax sets the ObservationInfo and ActionInfo properties of actor to the inputs observationInfo and actionInfo, respectively.
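A minimal sketch of the basis-function syntax follows. The specific basis (the observation and its elementwise square) and the dimensions are assumptions for illustration; the actor computes its action as a linear combination of the basis features using the weight matrix.

```matlab
% Assumed specs: 4-dimensional observation, 2-dimensional continuous action
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1],'LowerLimit',-1,'UpperLimit',1);

% Illustrative basis: the observation and its elementwise square (8 features)
basisFcn = @(obs) [obs; obs.^2];

% Initial weight matrix: numFeatures-by-numActions (8-by-2 here)
W0 = rand(8,2);

actor = rlDeterministicActorRepresentation({basisFcn,W0},obsInfo,actInfo);
```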
actor = rlDeterministicActorRepresentation(___,options) creates a deterministic actor using the additional options set options, which is an rlRepresentationOptions object. This syntax sets the Options property of actor to the options input argument. You can use this syntax with any of the previous input-argument combinations.
Input Arguments
Properties
Object Functions
rlDDPGAgent | Deep deterministic policy gradient reinforcement learning agent
rlTD3Agent | Twin-delayed deep deterministic policy gradient reinforcement learning agent
getAction | Obtain action from agent or actor given environment observations