# Jacobians

Myokit contains two classes designed to calculate Jacobian matrices. Given a model `f` with state `y` such that `dy/dt = f(y)` the Jacobian is the matrix of partial derivatives `df/dy`. The `JacobianCalculator` takes a point in the state space as input and returns the corresponding Jacobian. The `JacobianTracer` does the same, but takes a full `DataLog` as input.

class myokit.JacobianTracer(model)

Given a model and a simulation log, this class can revisit every logged point and generate the corresponding Jacobian.

When created, a `JacobianTracer` will generate and compile a back-end that uses automatic differentiation to evaluate the partial derivatives of the model’s right-hand side function at any given point in time.

For a model:

```
dx/dt = f(x, t, i)
```

with state x, time t and inputs i, the class will generate the partial derivatives of each component of f with respect to each component of x.
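Myokit's back-end generates these derivatives with automatic differentiation. Purely as an illustration of what `df/dx` contains, here is a finite-difference sketch for a made-up two-state system (plain numpy, nothing Myokit-specific):

```python
import numpy as np

def f(x):
    # Hypothetical two-state right-hand side dx/dt = f(x)
    return np.array([-0.5 * x[0] + x[1], x[0] * x[1] - 2.0 * x[1]])

def numerical_jacobian(f, x, h=1e-6):
    # Approximate df/dx column by column with central differences
    n = len(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

J = numerical_jacobian(f, np.array([1.0, 2.0]))
# Analytically: [[-0.5, 1.0], [x[1], x[0] - 2.0]] = [[-0.5, 1.0], [2.0, -1.0]]
```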

Methods to calculate eigenvalues and isolate dominant eigenvalues are provided.

N.B. The partial derivatives cannot be calculated for the following functions: `floor`, `ceil`, `abs`, quotients and remainders. If these are encountered the resulting derivatives will be yielded as `NaN`. However, in many cases these functions only occur inside the condition of an if statement, so the `NaN`s won't propagate to the final result.

dominant_eigenvalues(log=None, block=None)

Calculates the dominant eigenvalues of the Jacobian matrix at each point in time. The returned value is a 1d numpy array.

The “dominant eigenvalue” is defined as the eigenvalue `a + bi` with the largest magnitude `sqrt(a^2 + b^2)`. Note that the returned values may be complex.

If a `DataLog` suitable for `jacobians()` is given as `log`, the Jacobians are calculated on the fly. To re-use a set of Jacobians generated earlier, pass in the `DataBlock2d` generated by `jacobians()` as `block`.
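Selecting by magnitude can be sketched with plain numpy (a hypothetical Jacobian with eigenvalues -5 and 2, not a Myokit call):

```python
import numpy as np

# Hypothetical Jacobian with eigenvalues -5 and 2
J = np.array([[-5.0, 0.0], [0.0, 2.0]])
evals = np.linalg.eigvals(J)
dominant = evals[np.argmax(np.abs(evals))]  # -5: largest |a + bi|
```

Note that -5 wins here despite being the most negative eigenvalue: only the magnitude counts.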

jacobians(log)

Calculates the Jacobian matrix for each point in `log` and returns a `DataBlock2d` containing all the information from the log as well as a 2d data series containing the Jacobians (stored under the key “jacobians”).

The given `DataLog` must contain logged results for all states in the model and any bound variables used by the model. Bound variables whose value does not appear in the log must be unbound before creating the `JacobianTracer`. Only results from one-dimensional simulations are supported.

largest_eigenvalues(log=None, block=None)

Calculates the largest eigenvalues of the Jacobian matrix at each point in time. The returned value is a 1d numpy array.

The “largest eigenvalue” is defined as the eigenvalue with the most positive real part. Note that the returned values may be complex.

If a `DataLog` suitable for `jacobians()` is given as `log`, the Jacobians are calculated on the fly. To re-use a set of Jacobians generated earlier, pass in the `DataBlock2d` generated by `jacobians()` as `block`.
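The contrast with the dominant eigenvalue can be sketched in plain numpy (same hypothetical Jacobian as above, eigenvalues -5 and 2; not a Myokit call):

```python
import numpy as np

# Hypothetical Jacobian with eigenvalues -5 and 2
J = np.array([[-5.0, 0.0], [0.0, 2.0]])
evals = np.linalg.eigvals(J)
largest = evals[np.argmax(evals.real)]      # 2: most positive real part
dominant = evals[np.argmax(np.abs(evals))]  # -5: largest magnitude
```

The two criteria pick different eigenvalues here, which is why both methods exist: the largest eigenvalue signals (in)stability, while the dominant one reflects the fastest time scale.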

class myokit.JacobianCalculator(model)

Given a cell model, this class can calculate Jacobian matrices for any point in the state space.

The given model will be cloned before use. No inputs are provided by the Jacobian calculator, so all default values will be used.

N.B. The partial derivatives cannot be calculated for the following functions: `floor`, `ceil`, `abs`, quotients and remainders. If these are encountered the resulting derivatives will be yielded as `NaN`. However, in many cases these functions only occur inside the condition of an if statement, so the `NaN`s won't propagate to the final result.

calculate(state)

Calculates both the derivatives `f` and the Jacobian `J` at the given state `x` and returns a tuple `(f(x), J(x))`.

The order of state variables must be that specified by the model (i.e. the one obtained from calling `Model.states()`).
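The return contract can be sketched in plain numpy (not Myokit): for a hypothetical linear model `dy/dt = A·y`, the derivative vector at `x` is `A·x` and the Jacobian is `A` itself, everywhere in state space.

```python
import numpy as np

# Hypothetical linear model dy/dt = A·y: f(x) = A·x and J(x) = A
A = np.array([[-1.0, 0.5], [0.0, -2.0]])

def calculate(x):
    # Return the tuple (f(x), J(x)), like calculate() above
    return A @ x, A

fx, Jx = calculate(np.array([2.0, 4.0]))
```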

newton_root(x=None, accuracy=0, max_iter=50, damping=1)

Uses Newton’s method to search for a stable point.

An initial guess can be given as `x`; if no guess is provided, the model's initial state is used.

The search is halted when the next iteration doesn't change the result, when `max|f(x)| < accuracy`, or after `max_iter` iterations. The accuracy and maximum-iteration criteria can be disabled by setting the corresponding parameter to 0.

A damping factor can be applied to every step by setting `damping` to a value between `0` and `1`. With `damping=1` the full step suggested by Newton's method is taken; with any smaller value, only that fraction of the suggested step is made.

Returns a tuple `(x*, f*, j*, e*)`, where `x*` is a root, `f*` is the derivative vector at `x*`, `j*` is the Jacobian matrix at this point and `e* = max|f(x*)|`.
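The damped iteration and the halting rules above can be sketched in plain numpy for a hypothetical two-state system (an illustrative re-implementation, not Myokit's):

```python
import numpy as np

def newton_root(f, jac, x, accuracy=0, max_iter=50, damping=1.0):
    # Damped Newton search for f(x) = 0. Halts when the next iteration
    # doesn't change x, when max|f(x)| < accuracy, or after max_iter steps.
    for _ in range(max_iter):
        fx = f(x)
        if accuracy and np.max(np.abs(fx)) < accuracy:
            break
        # Full Newton step solves J·step = -f; damping scales it down
        x_new = x + damping * np.linalg.solve(jac(x), -fx)
        if np.array_equal(x_new, x):
            break
        x = x_new
    fx = f(x)
    return x, fx, jac(x), np.max(np.abs(fx))

# Hypothetical two-state system with a stable point at (1, 1)
f = lambda x: np.array([x[0] ** 2 - x[1], x[1] - 1.0])
jac = lambda x: np.array([[2 * x[0], -1.0], [0.0, 1.0]])
x_star, f_star, j_star, err = newton_root(
    f, jac, np.array([2.0, 2.0]), accuracy=1e-12, damping=0.8)
```

With `damping=0.8` each step closes roughly 80% of the remaining distance, trading speed for robustness against overshooting.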