API

Variables and derivatives

class autodiff_107.diff.Node.Node(value, fm_seed=0)

This is the core object representing variables in automatic differentiation. Any function the user wants to differentiate must be built from operations on Node objects. Constants appearing in the function can be used as-is and do not need to be wrapped in Node objects.

Parameters
  • value (int, float) – Value of the variable

  • fm_seed (int, float) – Seed to use in forward mode. Most likely this will be 1 for one input and 0 for all others, but specific use cases may warrant different weights. Default: 0

Variables
  • _value – Value at which the variable is evaluated

  • _d – Value of the derivative of this Node with respect to each parent at the point where this Node is evaluated.

  • _fmd – Forward-mode derivative. After the forward pass, this stores the derivative of the node with respect to the input(s) selected by fm_seed. If more than one seed is nonzero, it stores the corresponding weighted sum of derivatives (a directional derivative); use that only if it is what you intend.
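A minimal sketch of the bookkeeping described above, using a standalone dual-number class (an illustration of the technique, not the library's actual Node implementation): each node carries a value and a tangent seeded like fm_seed, and operator overloads propagate both.

```python
# Standalone dual-number sketch of forward-mode AD. Each node carries a
# value and a tangent (the analogue of _fmd), seeded like fm_seed.
class DualNode:
    def __init__(self, value, fm_seed=0):
        self._value = value
        self._fmd = fm_seed  # forward-mode derivative (tangent)

    def __add__(self, other):
        if isinstance(other, DualNode):
            return DualNode(self._value + other._value, self._fmd + other._fmd)
        return DualNode(self._value + other, self._fmd)  # constants: no wrapping

    def __mul__(self, other):
        if isinstance(other, DualNode):
            # product rule: (uv)' = u'v + uv'
            return DualNode(self._value * other._value,
                            self._fmd * other._value + self._value * other._fmd)
        return DualNode(self._value * other, self._fmd * other)

# f(x) = x * x + 3 evaluated at x = 3 with seed 1:
x = DualNode(3.0, fm_seed=1)
f = x * x + 3
# f._value is 12.0 and f._fmd is 2*x = 6.0
```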

class autodiff_107.diff.Node.Var(*args, **kwargs)
property value

The numeric value of this variable.

autodiff_107.diff.Node.derivative(f: numpy.ndarray, X: numpy.ndarray)

This function returns the derivative of f with respect to X. Both f and X must be one-dimensional, but their lengths need not match. If f has more than one element, the Jacobian matrix is returned.

Parameters
  • f (array(Node)) – function evaluated at X

  • X (array(Node)) – variable to take the derivative with respect to

Raises

ValueError – f and X must both be 1-dimensional

Returns

derivative of f w.r.t X

Return type

array
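The shape contract of derivative() can be illustrated with a plain finite-difference Jacobian (a numerical check written for this note, not the library's AD machinery): a 1-D X of length n and a vector f of length m yield an m-by-n array.

```python
import numpy as np

def numeric_jacobian(func, x, h=1e-6):
    """Central-difference Jacobian of func at x.

    Mirrors the shape contract of derivative(): x is 1-D and the
    result has shape (len(func(x)), len(x))."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(func(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = h
        J[:, j] = (np.atleast_1d(func(x + step))
                   - np.atleast_1d(func(x - step))) / (2 * h)
    return J

# f(x, y) = [x*y, x + y]; Jacobian at (2, 3) is [[y, x], [1, 1]] = [[3, 2], [1, 1]]
J = numeric_jacobian(lambda v: np.array([v[0] * v[1], v[0] + v[1]]), [2.0, 3.0])
```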

autodiff_107.diff.Node.get_fm_derivative(Y)

If a forward mode seed was set using set_fm_seed, this function will return the derivative calculated using forward mode with the set seed.

Parameters

Y (array(Node)) – input

Returns

forward mode derivative

Return type

array

autodiff_107.diff.Node.get_fm_seed(X)

This function returns the current forward-mode seed of X.

Parameters

X (Node or array(Node)) – input

Returns

forward mode seed

Return type

array

autodiff_107.diff.Node.set_fm_seed(X, seed)

This function sets the forward mode seed for the variable X to be used in the forward pass in any following calculations. X and seed should have matching shapes.

Parameters
  • X (array(Node)) – input

  • seed (array(Node)) – seed

Raises

ValueError – “X and seed shapes do not match”
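The seeding behavior that set_fm_seed, get_fm_seed, and get_fm_derivative describe can be sketched without the library: seeding one input with 1 and the rest with 0 yields a partial derivative, while a general seed vector yields a directional derivative. The function below is a hand-rolled illustration for f(x, y) = x²·y, not library code.

```python
# Sketch of forward-mode seeding: propagate (value, tangent) pairs through
# f(x, y) = x**2 * y. The seeds play the role of set_fm_seed's seed vector.
def f_with_tangent(x, y, seed_x, seed_y):
    # x**2: value x*x, tangent 2*x*seed_x
    sq_val, sq_tan = x * x, 2 * x * seed_x
    # product rule for (x**2) * y
    val = sq_val * y
    tan = sq_tan * y + sq_val * seed_y
    return val, tan

# Seed x with 1, y with 0 -> partial derivative df/dx = 2*x*y = 20 at (2, 5)
val, dfdx = f_with_tangent(2.0, 5.0, seed_x=1.0, seed_y=0.0)

# Seed both with 1 -> directional derivative df/dx + df/dy = 2*x*y + x**2 = 24
val, ddir = f_with_tangent(2.0, 5.0, seed_x=1.0, seed_y=1.0)
```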

autodiff_107.diff.Node.value(x)

This function takes a Node or numpy array of Nodes, as returned by variable() or produced by calculations on such arrays, and returns the numeric value(s) of x.

Parameters

x (Node or array(Node)) – input

Returns

value of Node

Return type

float or array(float)

autodiff_107.diff.Node.variable(x)

This function takes a number or numpy array of numbers and returns a “variable”, i.e. a numpy array of Node objects that handles automatic differentiation.

Parameters

x (float or array(float)) – input

Returns

variable(input)

Return type

Node or array(Node)

Optimizers

autodiff_107.optim.optimize.adagrad(f, x0, lr=0.1, lr_decay=0.0, weight_decay=0.0, eps=1e-10, max_iter=1000)

Adagrad algorithm: Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.

Parameters
  • f (function (callable)) – objective function to minimize

  • x0 (list or np.array w/ dtype float or int) – initial parameters

  • lr (float, optional) – learning rate, defaults to 0.1

  • lr_decay (float, optional) – learning rate decay, defaults to 0.

  • weight_decay (float, optional) – weight decay (L2 penalty), defaults to 0.

  • eps (float, optional) – term added to the denominator to improve numerical stability, defaults to 1e-10

  • max_iter (int, optional) – maximum number of iterations, defaults to 1000

Returns

function minimum after max_iter steps

Return type

np.array
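The core Adagrad update can be sketched in a few lines of numpy (a hand-coded gradient stands in for the library's AD, and lr_decay / weight_decay are omitted for brevity; this is not the library's implementation):

```python
import numpy as np

def adagrad_sketch(grad, x0, lr=0.1, eps=1e-10, max_iter=1000):
    """Adagrad: scale each coordinate's step by the running
    sum of its squared gradients."""
    x = np.asarray(x0, dtype=float).copy()
    g_sq_sum = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        g_sq_sum += g ** 2
        x -= lr * g / (np.sqrt(g_sq_sum) + eps)
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
x_min = adagrad_sketch(lambda x: 2 * (x - 3), np.array([0.0]), lr=1.0, max_iter=2000)
```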

autodiff_107.optim.optimize.gradient_descent(f, x0, lr=0.1, max_iter=1000)

Simple gradient descent

Parameters
  • f (function (callable)) – function to minimize

  • x0 (list or np.array w/ dtype float or int) – initial starting point for gradient descent; if x0 is too far from the minimum, the algorithm might not converge

  • lr (float) – learning rate, defaults to 0.1

  • max_iter (int) – number of iterations to run gradient descent, defaults to 1000

Returns

function minimum after max_iter steps

Return type

np.array
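The update rule is the textbook one; a minimal numpy sketch with a hand-coded gradient (the library would obtain the gradient via automatic differentiation):

```python
import numpy as np

def gd_sketch(grad, x0, lr=0.1, max_iter=1000):
    """Plain gradient descent: step against the gradient at a fixed rate."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x -= lr * grad(x)
    return x

# minimize f(x, y) = (x - 1)^2 + (y + 2)^2; gradient is 2*(v - [1, -2])
x_min = gd_sketch(lambda v: 2 * (v - np.array([1.0, -2.0])), np.array([0.0, 0.0]))
```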

Root finding

autodiff_107.optim.rootfinding.newton(f, x0, tol=1e-05, maxiter=50)

Newton’s root-finding algorithm for single-variable functions

Parameters
  • f (function (callable)) – function of a single variable x

  • x0 (float or int) – initial starting point estimate

  • tol (float, optional) – root tolerance, defaults to 1e-5

  • maxiter (int, optional) – maximum number of iterations, defaults to 50

Returns

root of f

Return type

float
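Newton's iteration itself is standard; a self-contained sketch that takes the derivative explicitly (the library computes f′ via automatic differentiation instead):

```python
def newton_sketch(f, df, x0, tol=1e-5, maxiter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = float(x0)
    for _ in range(maxiter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2)
root = newton_sketch(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```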