Main Content

rlQAgentOptions

Options for Q-learning agent

Description

Use an rlQAgentOptions object to specify options for creating Q-learning agents. To create a Q-learning agent, use rlQAgent.

For more information on Q-learning agents, see Q-Learning Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

opt = rlQAgentOptions creates an rlQAgentOptions object for use as an argument when creating a Q-learning agent using all default settings. You can modify the object properties using dot notation.

opt = rlQAgentOptions(Name,Value) sets option properties using name-value pairs. For example, rlQAgentOptions('DiscountFactor',0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value pairs. Enclose each property name in quotes.

Properties


Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with the following properties.

Property | Description | Default Value
Epsilon | Probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger value of Epsilon means that the agent randomly explores the action space at a higher rate. | 1
EpsilonMin | Minimum value of Epsilon | 0.01
EpsilonDecay | Decay rate | 0.0050

At the end of each training time step, if Epsilon is greater than EpsilonMin, then it is updated using the following formula.

Epsilon = Epsilon*(1-EpsilonDecay)

If your agent converges on a local optimum too quickly, you can promote agent exploration by increasing Epsilon.

To specify exploration options, use dot notation after creating the rlQAgentOptions object opt. For example, set the epsilon value to 0.9.

opt.EpsilonGreedyExploration.Epsilon = 0.9;
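
You can set the other exploration properties the same way. The following is a minimal sketch (the decay settings are arbitrary, chosen only for illustration) that configures the schedule and previews how Epsilon decreases over training time steps:

opt = rlQAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.9;         % initial exploration rate
opt.EpsilonGreedyExploration.EpsilonMin = 0.05;     % floor for the exploration rate
opt.EpsilonGreedyExploration.EpsilonDecay = 0.001;  % arbitrary decay rate for illustration

% Preview the decay: Epsilon = Epsilon*(1-EpsilonDecay) at each training
% time step, until EpsilonMin is reached.
epsilon = opt.EpsilonGreedyExploration.Epsilon;
for step = 1:5000
    if epsilon > opt.EpsilonGreedyExploration.EpsilonMin
        epsilon = epsilon*(1 - opt.EpsilonGreedyExploration.EpsilonDecay);
    end
end
epsilon   % approximate exploration rate after 5000 training time steps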

Critic optimizer options, specified as an rlOptimizerOptions object. This object lets you specify training parameters of the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.
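
For example, here is a minimal sketch (the values are arbitrary, shown only for illustration) of tuning the critic optimizer through dot notation:

opt = rlQAgentOptions;
opt.CriticOptimizerOptions.LearnRate = 1e-3;        % learning rate of the critic optimizer
opt.CriticOptimizerOptions.GradientThreshold = 1;   % gradient clipping threshold
opt.CriticOptimizerOptions.Algorithm = "adam";      % optimizer algorithm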

Sample time of the agent, specified as a positive scalar or as -1. Setting this parameter to -1 allows for event-based simulations.

Within a Simulink® environment, the RL Agent block in which the agent is specified executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its parent subsystem.

Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train. If SampleTime is -1, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.
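
For example, a minimal sketch of setting the sample time through dot notation (the value is arbitrary, for illustration only):

opt = rlQAgentOptions;
opt.SampleTime = 0.1;    % agent executes every 0.1 seconds of simulation time
% opt.SampleTime = -1;   % event-based execution (inherited sample time in Simulink)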

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
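
As a rough numerical illustration (the reward sequence and discount factor below are arbitrary), the return weights each future reward by an increasing power of DiscountFactor, so values closer to 1 make the agent weigh future rewards more heavily:

gamma = 0.95;                                        % discount factor
rewards = [1 1 1 1 1];                               % hypothetical reward sequence
discountedReturn = sum(gamma.^(0:numel(rewards)-1).*rewards)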

Object Functions

rlQAgent Q-learning reinforcement learning agent

Examples


This example shows how to create an options object for a Q-learning agent.

Create an rlQAgentOptions object that specifies the agent sample time.

opt = rlQAgentOptions('SampleTime',0.5)
opt = 
  rlQAgentOptions with properties:

    EpsilonGreedyExploration: [1x1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                  SampleTime: 0.5000
              DiscountFactor: 0.9900
                  InfoToSave: [1x1 struct]

You can modify options using dot notation. For example, set the agent discount factor to 0.95.

opt.DiscountFactor = 0.95;
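
Finally, here is a minimal sketch of passing the options to rlQAgent. The observation and action specifications and the table-based critic below are hypothetical, included only to make the sketch self-contained:

obsInfo = rlFiniteSetSpec(1:4);                 % hypothetical discrete observation space
actInfo = rlFiniteSetSpec(1:2);                 % hypothetical discrete action space
qTable  = rlTable(obsInfo,actInfo);             % tabular Q-value representation
critic  = rlQValueFunction(qTable,obsInfo,actInfo);
agent   = rlQAgent(critic,opt);                 % agent created with the modified options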

Version History

Introduced in R2019a

See Also