tensortrade.agents.a2c_agent module

class tensortrade.agents.a2c_agent.A2CAgent(env: TradingEnvironment, shared_network: tensorflow.keras.Model = None, actor_network: tensorflow.keras.Model = None, critic_network: tensorflow.keras.Model = None)[source]

Bases: tensortrade.agents.agent.Agent

get_action(state: numpy.ndarray, **kwargs) → int[source]

Get an action for a specific state in the environment.
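A hedged sketch of using `get_action` inside a rollout loop. The loop body here is illustrative, not part of the documented interface: it assumes the environment follows a gym-style `reset`/`step` API, which TensorTrade's `TradingEnvironment` mirrors.

```python
def rollout(agent, env, max_steps=1000):
    """Run one episode, choosing actions with agent.get_action.

    Assumes env.reset() returns an initial state and env.step(action)
    returns (state, reward, done, info), gym-style.
    """
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.get_action(state)  # documented to return an int action index
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```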

restore(path: str, **kwargs)[source]

Restore the agent from the file specified in path.

save(path: str, **kwargs)[source]

Save the agent to the directory specified in path.

train(n_steps: int = None, n_episodes: int = None, save_every: int = None, save_path: str = None, callback: callable = None, **kwargs) → float[source]

Train the agent in the environment and return the mean reward.
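A minimal end-to-end sketch tying the methods above together. It assumes a configured `TradingEnvironment` instance is passed in as `env` (constructing the environment is outside this module), and that leaving the network arguments as `None` lets the agent build default networks.

```python
def train_and_save(env, path="agents/a2c/"):
    """Construct an A2CAgent, train it, and checkpoint to `path`.

    Illustrative sketch: `path` and the episode/checkpoint counts are
    arbitrary choices, not defaults of the library.
    """
    from tensortrade.agents import A2CAgent

    # shared_network / actor_network / critic_network default to None.
    agent = A2CAgent(env)

    # train() returns the mean reward; save_every/save_path enable
    # periodic checkpointing during training.
    mean_reward = agent.train(n_episodes=10, save_every=5, save_path=path)

    agent.save(path)   # final checkpoint
    return mean_reward
```

To resume later, `agent.restore(path)` reloads the saved state from the same directory.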

class tensortrade.agents.a2c_agent.A2CTransition(state, action, reward, done, value)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

static __new__(_cls, state, action, reward, done, value)

Create new instance of A2CTransition(state, action, reward, done, value)

__repr__()

Return a nicely formatted representation string

property action

Alias for field number 1

property done

Alias for field number 3

property reward

Alias for field number 2

property state

Alias for field number 0

property value

Alias for field number 4
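`A2CTransition` behaves like a standard `collections.namedtuple`. The sketch below uses a locally defined namedtuple with the same fields (not the library class itself) to illustrate the documented behavior: each property is an alias for its positional field (state=0, action=1, reward=2, done=3, value=4), and `__getnewargs__` returns the instance as a plain tuple for copy and pickle.

```python
from collections import namedtuple

# Local stand-in with the same field order as tensortrade's A2CTransition.
A2CTransition = namedtuple(
    "A2CTransition", ["state", "action", "reward", "done", "value"]
)

transition = A2CTransition(
    state=[0.1, 0.2], action=1, reward=0.5, done=False, value=0.9
)

# Each property aliases its positional field.
assert transition.state == transition[0]
assert transition.action == transition[1]
assert transition.reward == transition[2]

# __getnewargs__ returns self as a plain tuple (used by copy and pickle).
assert transition.__getnewargs__() == ([0.1, 0.2], 1, 0.5, False, 0.9)
```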