tensortrade.agents.dqn_agent module

class tensortrade.agents.dqn_agent.DQNAgent(env: TradingEnv, policy_network: tensorflow.keras.Model = None)[source]

Bases: tensortrade.agents.agent.Agent

get_action(state: numpy.ndarray, **kwargs) → int[source]

Get an action for a specific state in the environment.
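DQN agents typically select actions epsilon-greedily: with probability epsilon they explore at random, otherwise they take the action with the highest predicted Q-value. The sketch below illustrates that selection rule in plain Python; the function name and signature are illustrative, not TensorTrade's internal implementation.

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """Epsilon-greedy selection over a list of Q-values.

    With probability `epsilon`, return a random action index (exploration);
    otherwise return the index of the largest Q-value (exploitation).
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With `epsilon=0.0` the choice is fully greedy; with `epsilon=1.0` it is fully random, which is how such agents usually anneal from exploration to exploitation during training.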

restore(path: str, **kwargs)[source]

Restore the agent from the file specified in path.

save(path: str, **kwargs)[source]

Save the agent to the directory specified in path.

train(n_steps: int = None, n_episodes: int = None, save_every: int = None, save_path: str = None, callback: callable = None, **kwargs) → float[source]

Train the agent in the environment and return the mean reward.
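A DQN training loop of this shape generally interacts with the environment, stores transitions in a replay memory, samples minibatches for gradient steps, and reports the mean episode reward. The toy sketch below shows that pattern end to end under stated assumptions: `ToyEnv`, its methods, and all hyperparameter defaults are hypothetical stand-ins, not TensorTrade's `TradingEnv` API or the actual `DQNAgent.train` internals.

```python
import random
from collections import deque, namedtuple

# Transition record mirroring the (state, action, reward, next_state, done) layout.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class ToyEnv:
    """Hypothetical two-step environment used only for illustration."""
    def reset(self):
        self.t = 0
        return 0
    def sample_action(self):
        return random.randrange(2)
    def best_action(self, state):
        return 1  # pretend the policy network always prefers action 1
    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        return self.t, reward, self.t >= 2

def train(env, n_episodes, memory_size=1000, batch_size=32, epsilon=0.1):
    """Toy DQN-style loop: act, store transitions, sample minibatches,
    and return the mean reward across episodes."""
    memory = deque(maxlen=memory_size)
    episode_rewards = []
    for _ in range(n_episodes):
        state, total_reward, done = env.reset(), 0.0, False
        while not done:
            # Epsilon-greedy action selection.
            action = env.sample_action() if random.random() < epsilon else env.best_action(state)
            next_state, reward, done = env.step(action)
            memory.append(Transition(state, action, reward, next_state, done))
            if len(memory) >= batch_size:
                batch = random.sample(memory, batch_size)  # minibatch for a gradient step
                # ... compute TD targets and update the policy network here ...
            state, total_reward = next_state, total_reward + reward
        episode_rewards.append(total_reward)
    return sum(episode_rewards) / len(episode_rewards)
```

The returned value matches the documented contract: the mean reward over the training episodes.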

class tensortrade.agents.dqn_agent.DQNTransition(state, action, reward, next_state, done)

Bases: tuple

__getnewargs__()

Return self as a plain tuple. Used by copy and pickle.

static __new__(_cls, state, action, reward, next_state, done)

Create new instance of DQNTransition(state, action, reward, next_state, done)

__repr__()

Return a nicely formatted representation string

property action

Alias for field number 1

property done

Alias for field number 4

property next_state

Alias for field number 3

property reward

Alias for field number 2

property state

Alias for field number 0
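The methods and properties above are exactly those generated by `collections.namedtuple`: `DQNTransition` is a plain tuple whose named properties alias the positional fields. A minimal sketch of an equivalent declaration, showing the field-number aliases documented above:

```python
from collections import namedtuple

# Equivalent declaration: field order matches the documented aliases
# (state=0, action=1, reward=2, next_state=3, done=4).
DQNTransition = namedtuple("DQNTransition", ["state", "action", "reward", "next_state", "done"])

t = DQNTransition(state=[0.0], action=1, reward=0.5, next_state=[1.0], done=False)
assert t.action == t[1]   # "Alias for field number 1"
assert t.done == t[4]     # "Alias for field number 4"
assert tuple(t) == ([0.0], 1, 0.5, [1.0], False)  # plain-tuple view, as used by copy and pickle
```

Because it is a tuple, a transition is immutable and hashable (when its fields are), which makes it a convenient record type for replay memories.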