Parameter identification¶
The module `myokit.lib.fit` is deprecated: it will be removed in a future version of Myokit. Please have a look at Pints instead.
The module `myokit.lib.fit` provides tools to help with the process of parameter identification: finding the parameter values for (part of) a model that produce a desired output.

Global optimization methods are provided that use heuristic search to explore large search spaces and find reasonable results. These solutions can then be refined further using a local optimization method. Some information about the quality of a solution can be obtained by fitting a second-order polynomial to points near it: if the solution is a minimum, this should result in a hyperparabola with a critical point at the solution. Methods are provided to evaluate large numbers of points in parallel or on a single CPU; these can also be used to perform a brute-force mapping of the parameter space. Finally, some methods are provided to visualize 2d parameter spaces. Because points found using, for example, a particle swarm optimization will not be regularly distributed in space, these visualization methods are based on Voronoi diagrams.
Global optimization¶
The methods listed below perform global optimization (`pso`) or combined global/local optimization (`cmaes`, `snes`, `xnes`). The `cmaes` method is an interface to the Python `cma` module.

myokit.lib.fit.cmaes(f, bounds, hint=None, sigma=None, n=None, ipop=0, parallel=False, fatol=1e-11, target=1e-6, callback=None, verbose=False, args=None)¶

Global/local optimizer that minimizes a function `f` within a specified set of `bounds` using the CMA-ES method provided by the `cma` module [1, 2].

CMA-ES stands for Covariance Matrix Adaptation Evolution Strategy, and is designed for non-linear, derivative-free optimization problems [1]. To run, the `cma` module must be installed (for example via pip).

A parallel (multiprocessing) version of the algorithm can be run by setting `parallel` to `True`. Please keep in mind that the objective function `f` cannot access any shared memory in this scenario. See `ParallelEvaluator` for details.

The method will stop when one of the following criteria is met:

- Absolute changes in `f(x)` are smaller than `fatol`.
- The value of `f(x)` is smaller than `target`.
- The maximum number of iterations is reached.
Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`bounds`
    A list of `m` tuples `(min_i, max_i)` specifying the minimum and maximum values in the search space for each dimension `i`.
`hint=None`
    A suggested starting point. Must be within the given bounds. If no hint is given, the center of the parameter space is used. A (uniformly) random point can be selected by setting `hint='random'`.
`sigma=None`
    A suggested standard deviation; the method works best if the global optimum lies within +/- 3*sigma of the initial guess. If no value for sigma is given, the default value is `1/6 * min(upper - lower)`, where `upper` and `lower` are the boundaries of the parameter space.
`n=None`
    Can be used to manually override the population size used in CMA-ES. By default, this is set by CMA-ES to `4 + int(3 * np.log(d))`, where `d` is the number of dimensions of the search space. When running in parallel, Myokit will round this up to the nearest multiple of the CPU count.
`ipop=0`
    Set to any integer greater than 1 to enable increasing-population-size restarts (IPOP). This will re-run the optimization `ipop` times (for a total of `ipop + 1` runs). Each re-run starts from a randomly chosen point within the boundaries, with the population size increased by a factor 2. The recommended value for `ipop` is either `0` or `8`.
`parallel=False`
    Set this to `True` to run a multiprocess version of the search that utilizes all available cores. See `EvaluatorProcess` for the details of using multiprocess parallelisation and the requirements this places on the function `f`.
`fatol`
    The smallest difference in `f(x)` between subsequent iterations that is acceptable for convergence.
`target=1e-6`
    The value of `f(x)` that is acceptable for convergence.
`callback=None`
    An optional function to be called after each iteration, with arguments `(pg, fg)`, where `pg` is the current best position and `fg` is the corresponding score.
`verbose`
    Set to `True` to enable logging of all sorts of information to the console window.
`args=None`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xbest, fbest)`, where `xbest` is the best position found and `fbest = f(xbest)`.

References:

[1] https://www.lri.fr/~hansen/cmaesintro.html
[2] Hansen, Mueller, Koumoutsakos (2006) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES).

Note: This method requires the `cma` module to be installed.
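As an illustration of the calling convention, the sketch below defines a simple two-parameter objective and its bounds. The paraboloid objective is a made-up example, not part of Myokit, and the `cmaes` call itself is shown commented out since it requires Myokit and the optional `cma` dependency:

```python
# A made-up objective: a paraboloid with its minimum at (2, -1).
def f(x):
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2

# One (min_i, max_i) tuple per dimension of the search space.
bounds = [(-5.0, 5.0), (-5.0, 5.0)]

# With myokit and cma installed, the search would be run as:
# xbest, fbest = myokit.lib.fit.cmaes(f, bounds, hint='random')
# after which xbest should lie close to (2, -1) and fbest close to 0.

print(f([2.0, -1.0]))  # the known optimum scores 0.0
```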

myokit.lib.fit.pso(f, bounds, hints=None, n=4, r=0.5, v=0.001, parallel=False, target=1e-6, max_iter=500, hybrid=False, return_all=False, callback=None, callback_particles=None, verbose=False, args=None)¶

Global optimizer that minimizes a function `f` within a specified set of `bounds` using particle swarm optimization (PSO) [1].

In a particle swarm optimization, the parameter space is explored by `n` independent particles. The particles perform a pseudo-random walk through the parameter space, guided by their own personal best score and the global optimum found so far.

A parallel (multiprocessing) version of the algorithm can be run by setting `parallel` to `True`. Please keep in mind that the objective function `f` cannot access any shared memory in this scenario. See `ParallelEvaluator` for details.

The method will stop when one of the following criteria is met:

- The value of `f(x)` is smaller than `target`.
- The maximum number of iterations is reached (where one "iteration" means each of the `n` particles has been updated).
Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value. Alternatively, in hybrid mode (which is disabled by default), `f(x)` should return a tuple `(x2, fx2)`, where `x2` is a new value for `x` and `fx2` is the score function evaluated at `x2`.
`bounds`
    A list of `m` tuples `(min_i, max_i)` specifying the minimum and maximum values in the search space for each dimension `i`. The returned solution is guaranteed to be within these bounds.
`hints`
    A (possibly empty) list of points to use as initial values. Each point `x` is specified as a sequence of `m` coordinates, such as can be passed to `f(x)`. Hints will be used as long as they are available. If there are more hints than particles, only the first hints in the sequence will be used. If there are more particles than hints, the remaining particles will be initialized at uniformly random positions in the search space.
`n=4`
    The number of particles to use in the search.
`r=0.5`
    A number between `0` and `1`, specifying the ratio between local attraction (`r=1`) and global attraction (`r=0`; see below).
`v=1e-3`
    The maximum initial velocity, as a fraction of the bounded space. By default, the initial velocity in each direction `i` is set as `vs[i] = U(0, v * (upper[i] - lower[i]))`, where `U` is a uniform sampling function, and `upper` and `lower` represent the given upper and lower bounds in direction `i`.
`parallel=False`
    Set this to `True` to run a multiprocess version of the search that utilizes all available cores. See `EvaluatorProcess` for the details of using multiprocess parallelisation and the requirements this places on the function `f`.
`target=1e-6`
    The method will stop searching when a score below the target value is found.
`max_iter=500`
    The maximum number of iterations to perform.
`hybrid=False`
    Set this to `True` to perform a hybrid optimization. In this case, the function passed as the first argument should return a tuple `(x2, fx2)`, where `x2` is the updated position and `fx2` is its score.
`return_all=False`
    Set this to `True` to return all results, instead of only the best particle.
`callback=None`
    An optional function to be called after each global step, with arguments `(pg, fg)`, where `pg` is the current best position and `fg` is the corresponding score.
`callback_particles=None`
    An optional function to be called after each global step, with arguments `(xs, vs, fs)`, where `xs`, `vs` and `fs` are lists containing the current particle positions, velocities and scores, respectively.
`verbose=False`
    Set to `True` to have progress information printed to the terminal. If set to any non-zero integer, this will determine the number of iterations between updates.
`args=None`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.
The method starts by creating a swarm of `n` particles and assigning each an initial position and initial velocity (see the explanation of the arguments `hints` and `v` for details). Each particle's score is calculated and set as the particle's current best local score `pl`. The best score of all the particles is set as the best global score `pg`.

Next, an iterative procedure is run that updates each particle's velocity `v` and position `x` using:

    v[k] = v[k-1] + al * (pl - x[k-1]) + ag * (pg - x[k-1])
    x[k] = x[k-1] + v[k]

Here, `x[k]` is the particle's current position and `v[k]` its current velocity. The values `al` and `ag` are scalars, randomly sampled from a uniform distribution, with values bounded by `r * 4.1` and `(1 - r) * 4.1`. Thus, a swarm with `r = 1` will only use local information, while a swarm with `r = 0` will only use global information. The de facto standard is `r = 0.5`. The random sampling is done each time `al` and `ag` are used: at each time step, every particle performs `m` samplings, where `m` is the dimensionality of the search space.

Pseudo-code algorithm:

    almax = r * 4.1
    agmax = 4.1 - almax
    while stopping criterion not met:
        for i in [1, 2, .., n]:
            if f(x[i]) < f(p[i]):
                p[i] = x[i]
            pg = min(p[1], p[2], .., p[n])
            for j in [1, 2, .., m]:
                al = uniform(0, almax)
                ag = uniform(0, agmax)
                v[i,j] += al * (p[i,j] - x[i,j]) + ag * (pg[i,j] - x[i,j])
                x[i,j] += v[i,j]
A hybrid PSO method [2] can be implemented by adding the optional argument `hybrid=True` and changing the score function to return a tuple `(x, fx)`, where `x` is an updated position and `fx` is its score. Suggestions for `x` outside the set boundaries will be ignored.

The method returns a tuple `(pg, fg)`, where `pg` is the best position found and `fg = f(pg)`. If `return_all` was set to `True`, the tuple will contain a vector `pg` with each particle's best location and a vector `fg` with the corresponding scores, sorted from best to worst.

References:

[1] Kennedy, Eberhart (1995) Particle Swarm Optimization. IEEE International Conference on Neural Networks.
[2] Loewe, Wilhelms, Seemann et al. (2016) Parameter estimation of ion current formulations requires hybrid optimization approach to be both accurate and reliable. Frontiers in Bioengineering and Biotechnology.
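The pseudo-code above can be turned into a compact, runnable sketch. The helper below (`pso_sketch`, a hypothetical name, not part of Myokit) follows the update rules described here, with position clamping added so particles stay inside the bounds; it omits hints, hybrid mode, velocity initialization via `v`, and parallelism:

```python
import random

def pso_sketch(f, bounds, n=10, r=0.5, iters=200, seed=1):
    """Minimal, illustrative PSO following the pseudo-code above."""
    rng = random.Random(seed)
    m = len(bounds)
    almax = r * 4.1
    agmax = 4.1 - almax
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * m for _ in range(n)]
    ps = [list(x) for x in xs]               # personal best positions
    fs = [f(x) for x in xs]                  # personal best scores
    g = min(range(n), key=lambda i: fs[i])   # index of the global best
    pg, fg = list(ps[g]), fs[g]
    for _ in range(iters):
        for i in range(n):
            for j in range(m):
                al = rng.uniform(0, almax)
                ag = rng.uniform(0, agmax)
                vs[i][j] += al * (ps[i][j] - xs[i][j]) + ag * (pg[j] - xs[i][j])
                xs[i][j] += vs[i][j]
                # Keep the particle inside the bounds
                lo, hi = bounds[j]
                xs[i][j] = min(max(xs[i][j], lo), hi)
            fx = f(xs[i])
            if fx < fs[i]:
                ps[i], fs[i] = list(xs[i]), fx
                if fx < fg:
                    pg, fg = list(xs[i]), fx
    return pg, fg

best, score = pso_sketch(lambda x: (x[0] - 1) ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)])
```

Note that this plain form can be slow to settle without the velocity limits and parallel evaluation that the full method provides; it is intended only to make the update loop concrete.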

myokit.lib.fit.snes(f, bounds, hint=None, n=None, parallel=False, target=1e-6, max_iter=1000, callback=None, verbose=False, args=None)¶

Global/local optimizer that minimizes a function `f` within a specified set of `bounds` using the SNES method described in [1, 2].

SNES stands for Separable Natural Evolution Strategy, and is designed for non-linear, derivative-free optimization problems in high dimensions and with many local minima [1].

A parallel (multiprocessing) version of the algorithm can be run by setting `parallel` to `True`. Please keep in mind that the objective function `f` cannot access any shared memory in this scenario. See `ParallelEvaluator` for details.

The method will stop when one of the following criteria is met:

- The value of `f(x)` is smaller than `target`.
- The maximum number of iterations is reached.
Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`bounds`
    A list of `m` tuples `(min_i, max_i)` specifying the minimum and maximum values in the search space for each dimension `i`.
`hint=None`
    A suggested starting point. Must be within the given bounds. If no hint is given, the center of the parameter space is used. A (uniformly) random point can be selected by setting `hint='random'`.
`n=None`
    Can be used to manually override the population size used in SNES. By default, this is set to `4 + int(3 * np.log(d))`, where `d` is the number of dimensions of the search space. When running in parallel, Myokit will round this default value up to the nearest multiple of the CPU count.
`parallel=False`
    Set this to `True` to run a multiprocess version of the search that utilizes all available cores. See `EvaluatorProcess` for the details of using multiprocess parallelisation and the requirements this places on the function `f`.
`target=1e-6`
    The value of `f(x)` that is acceptable for convergence.
`max_iter=1000`
    The maximum number of iterations to perform.
`callback=None`
    An optional function to be called after each iteration, with arguments `(pg, fg)`, where `pg` is the current best position and `fg` is the corresponding score.
`verbose=False`
    Set to `True` to have progress information printed to the terminal. If set to any non-zero integer, this will determine the number of iterations between updates.
`args=None`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xbest, fbest)`, where `xbest` is the best position found and `fbest = f(xbest)`.

References:

[1] Schaul, Glasmachers, Schmidhuber (2011) High dimensions and heavy tails for natural evolution strategies. Proceedings of the 13th annual conference on Genetic and evolutionary computation. ACM, 2011.
[2] PyBrain: The Python machine learning library (http://pybrain.org)

myokit.lib.fit.xnes(f, bounds, hint=None, n=None, parallel=False, target=1e-6, max_iter=1000, callback=None, verbose=False, args=None)¶

Global/local optimizer that minimizes a function `f` within a specified set of `bounds` using the xNES method described in [1, 2].

xNES stands for Exponential Natural Evolution Strategy, and is designed for non-linear, derivative-free optimization problems [1].

A parallel (multiprocessing) version of the algorithm can be run by setting `parallel` to `True`. Please keep in mind that the objective function `f` cannot access any shared memory in this scenario. See `ParallelEvaluator` for details.

The method will stop when one of the following criteria is met:

- The value of `f(x)` is smaller than `target`.
- The maximum number of iterations is reached.
Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`bounds`
    A list of `m` tuples `(min_i, max_i)` specifying the minimum and maximum values in the search space for each dimension `i`.
`hint=None`
    A suggested starting point. Must be within the given bounds. If no hint is given, the center of the parameter space is used. A (uniformly) random point can be selected by setting `hint='random'`.
`n=None`
    Can be used to manually override the population size used in xNES. By default, this is set to `4 + int(3 * np.log(d))`, where `d` is the number of dimensions of the search space. When running in parallel, Myokit will round this default value up to the nearest multiple of the CPU count.
`parallel=False`
    Set this to `True` to run a multiprocess version of the search that utilizes all available cores. See `EvaluatorProcess` for the details of using multiprocess parallelisation and the requirements this places on the function `f`.
`target=1e-6`
    The value of `f(x)` that is acceptable for convergence.
`max_iter=1000`
    The maximum number of iterations to perform.
`callback=None`
    An optional function to be called after each iteration, with arguments `(pg, fg)`, where `pg` is the current best position and `fg` is the corresponding score.
`verbose=False`
    Set to `True` to have progress information printed to the terminal. If set to any non-zero integer, this will determine the number of iterations between updates.
`args=None`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xbest, fbest)`, where `xbest` is the best position found and `fbest = f(xbest)`.

References:

[1] Glasmachers, Schaul, Schmidhuber et al. (2010) Exponential natural evolution strategies. Proceedings of the 12th annual conference on Genetic and evolutionary computation.
[2] PyBrain: The Python machine learning library (http://pybrain.org)

Note: This method requires the `scipy` module to be installed.
Local optimization¶
The local optimization methods presented here are interfaces to methods found in SciPy (http://scipy.org).

myokit.lib.fit.bfgs(f, x, bounds, max_iter=500, args=None)¶

Local optimizer that minimizes a function `f` using the constrained quasi-Newton method of Broyden, Fletcher, Goldfarb and Shanno provided by SciPy [1, 2, 3].

Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`x`
    An initial guess for the `x` with the lowest `f(x)`.
`bounds`
    A sequence of tuples `(xmin, xmax)` with the boundaries for each dimension of `x`.
`max_iter`
    The maximum number of iterations to perform.
`args`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xopt, fopt)`, where `xopt` is the best position found and `fopt = f(xopt)`.

References:

[1] http://scipy.org
[2] Byrd, R H, P Lu and J Nocedal (1995) A Limited Memory Algorithm for Bound Constrained Optimization. SIAM Journal on Scientific and Statistical Computing 16 (5): 1190-1208.
[3] Zhu, C, R H Byrd and J Nocedal (1997) Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software 23 (4): 550-560.

Note: This method requires SciPy to be installed.

myokit.lib.fit.nelder_mead(f, x, xatol=0.0001, fatol=0.0001, max_iter=500, args=None)¶

Local optimizer that minimizes a function `f` using the Nelder-Mead simplex method provided by SciPy [1, 2].

The method will stop when one of the following criteria is met:

- Absolute changes in `x` are smaller than `xatol` _and_ absolute changes in `f(x)` are smaller than `fatol`.
- The maximum number of iterations is reached.

Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`x`
    An initial guess for the `x` with the lowest `f(x)`.
`xatol`
    Sets the _absolute_ difference in `x` between iterations that is acceptable for convergence.
`fatol`
    Sets the _absolute_ difference in `f(x)` between iterations that is acceptable for convergence.
`max_iter`
    The maximum number of iterations to perform.
`args`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xopt, fopt)`, where `xopt` is the best position found and `fopt = f(xopt)`.

References:

[1] http://scipy.org
[2] Nelder, J A, and R Mead (1965) A Simplex Method for Function Minimization. The Computer Journal 7: 308-313.

Note: This method requires SciPy to be installed.
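A usage sketch, with the optimizer call commented out since it requires Myokit and SciPy. The Rosenbrock objective is a standard local-optimization test function, chosen here purely for illustration:

```python
# Rosenbrock's "banana" function: a classic test case with a curved
# valley and a global minimum at (1, 1).
def f(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

x0 = [-1.2, 1.0]  # initial guess

# With myokit and SciPy installed, the simplex search would be run as:
# xopt, fopt = myokit.lib.fit.nelder_mead(f, x0, xatol=1e-4, fatol=1e-4)
# and xopt should then lie close to (1, 1), with fopt close to 0.

print(round(f(x0), 6))  # 24.2 at the starting point
print(f([1.0, 1.0]))    # 0.0 at the minimum
```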

myokit.lib.fit.powell(f, x, xtol=0.0001, ftol=0.0001, max_iter=500, args=None)¶

Local optimizer that minimizes a function `f` using the implementation of Powell's conjugate direction method provided by SciPy [1, 2].

Powell created a number of derivative-free optimization methods. The method used here is one of the earliest, and is described in [2].

The method will stop when one of the following criteria is met:

- Relative changes in `x` are smaller than `xtol`.
- Relative changes in `f(x)` are smaller than `ftol`.
- The maximum number of iterations is reached.

Arguments:

`f`
    A function to minimize. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`x`
    An initial guess for the `x` with the lowest `f(x)`.
`xtol`
    Sets the _relative_ difference in `x` between iterations that is acceptable for convergence.
`ftol`
    Sets the _relative_ difference in `f(x)` between iterations that is acceptable for convergence.
`max_iter`
    The maximum number of iterations to perform.
`args`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

The method returns a tuple `(xopt, fopt)`, where `xopt` is the best position found and `fopt = f(xopt)`.

References:

[1] http://scipy.org
[2] Powell, M. J. D. (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. Computer Journal 7 (2): 155-162. doi:10.1093/comjnl/7.2.155

Note: This method requires SciPy to be installed.
Parameter space exploration¶
The following methods can be used for (parallelised) exploration of the parameter space.

myokit.lib.fit.map_grid(f, bounds, n, parallel=False, args=None)¶

Maps a parameter space by evaluating every point in a rectangular grid.

Arguments:

`f`
    A function to map. The function `f(x)` must be callable with `x` a sequence of `m` coordinates, and should return a single scalar value.
`bounds`
    A list of `m` tuples `(min_i, max_i)` specifying the minimum and maximum values in the search space for each dimension `i`. The mapped space will be within these bounds.
`n`
    The number of points to sample in each direction. If `n` is a scalar, the function will map a grid of `n` points in each direction, so that the total number of points is `n**m`, where `m` is the dimensionality of the search space. Alternatively, the number of points in each dimension can be specified by passing in a length-`m` sequence of sizes, so that the total number of points mapped is `n[0] * n[1] * ... * n[m-1]`.
`parallel`
    Set to `True` to run evaluations on all available cores.
`args`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

Returns a tuple `(x, fx)`, where `x` is a numpy array containing all the tested points and `fx` contains the calculated `f(x)` for each `x`.
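The grid construction itself is simple to sketch in pure Python. The helper below (`map_grid_sketch`, a hypothetical name, not the Myokit implementation) evaluates sequentially and returns plain lists rather than numpy arrays; it assumes at least two points per dimension:

```python
import itertools

def map_grid_sketch(f, bounds, n):
    """Evaluate f on every point of a rectangular grid (sequential sketch)."""
    m = len(bounds)
    # A scalar n means n points in every dimension; a sequence gives
    # a per-dimension point count, as in the documented interface.
    ns = [n] * m if isinstance(n, int) else list(n)
    # k evenly spaced points over [lo, hi], endpoints included
    axes = [
        [lo + (hi - lo) * i / (k - 1) for i in range(k)]
        for (lo, hi), k in zip(bounds, ns)
    ]
    x = list(itertools.product(*axes))
    fx = [f(p) for p in x]
    return x, fx

x, fx = map_grid_sketch(lambda p: p[0] + 10 * p[1], [(0, 1), (0, 2)], 3)
# 3 points per dimension over 2 dimensions: 3**2 = 9 evaluations
```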

myokit.lib.fit.evaluate(f, x, parallel=False, args=None)¶

Evaluates the function `f` on every value present in `x` and returns a sequence of evaluations `f(x[i])`.

To run the evaluations on all available cores, set `parallel=True`. For details, see `ParallelEvaluator`.

Extra arguments to pass to `f` can be given in the optional tuple `args`. If used, `f` will be called as `f(x[i], *args)`.

class myokit.lib.fit.Evaluator(function, args=None)¶

Abstract class.

Interface for classes that take a function (or callable object) `f(x)` and evaluate it for a list of input values `x`. This interface is shared by a parallel and a sequential implementation, allowing easy switching between parallel and sequential implementations of the same algorithm.

Arguments:

`function`
    A function or other callable object `f` that takes a value `x` and returns an evaluation `f(x)`.
`args`
    An optional sequence of extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

evaluate(positions)¶

    Evaluates the function for every value in the sequence `positions`. Returns a list with the returned evaluations.

class myokit.lib.fit.SequentialEvaluator(function, args=None)¶

Evaluates a function (or callable object) for a list of input values. Runs sequentially, but shares an interface with `ParallelEvaluator`, allowing parallelism to be switched on and off.

Arguments:

`function`
    The function to evaluate.
`args`
    An optional tuple containing extra arguments to `f`. If `args` is specified, `f` will be called as `f(x, *args)`.

Extends: `Evaluator`

evaluate(positions)¶

    Evaluates the function for every value in the sequence `positions`. Returns a list with the returned evaluations.
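The shared interface can be sketched in a few lines. `EvaluatorSketch` below is a hypothetical minimal version, not the Myokit class: the real implementations add input checking and, for the parallel case, worker management. A parallel variant would keep the same `evaluate(positions)` signature but distribute the loop over worker processes:

```python
class EvaluatorSketch:
    """Minimal sequential evaluator mirroring the shared interface."""

    def __init__(self, function, args=None):
        if not callable(function):
            raise ValueError('The given function must be callable.')
        self._function = function
        self._args = () if args is None else tuple(args)

    def evaluate(self, positions):
        # A parallel implementation would distribute this loop over
        # worker processes; the calling code stays unchanged.
        return [self._function(x, *self._args) for x in positions]

e = EvaluatorSketch(lambda x, a: a * sum(x), args=(2,))
scores = e.evaluate([[1, 2], [3, 4]])  # [6, 14]
```

Because the interface is identical, an algorithm can accept any evaluator and remain agnostic about whether evaluations run sequentially or in parallel.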

class myokit.lib.fit.ParallelEvaluator(function, nworkers=None, max_tasks_per_worker=500, args=None)¶

Evaluates a single-valued function for any set of input values given, using all available cores.

Shares an interface with `SequentialEvaluator`, allowing parallelism to be switched on and off with minimal hassle. Parallelism takes a little time to set up, so as a general rule of thumb it is only useful if the total run-time is at least ten seconds (anno 2015).

By default, the number of processes ("workers") used to evaluate the function is set equal to the number of CPU cores reported by Python's `multiprocessing` module. To override the number of workers used, set `nworkers` to some integer greater than 0.

There are two important caveats to using multiprocessing to evaluate functions:

- Processes don't share memory. This means the function to be evaluated will be duplicated (via pickling) for each process (see "Avoid shared state" in the `multiprocessing` documentation for details).
- On Windows systems, your code should be within an `if __name__ == '__main__':` block (see the "Windows" section of the `multiprocessing` documentation for details).

Arguments:

`function`
    The function to evaluate.
`nworkers`
    The number of worker processes to use. If left at the default value `nworkers=None`, the number of workers will equal the number of CPU cores in the machine this is run on. In many cases this will provide good performance.
`max_tasks_per_worker`
    Python garbage collection does not seem to be optimized for multiprocess function evaluation. In many cases, some time can be saved by refreshing the worker processes after every `max_tasks_per_worker` evaluations. This number can be tweaked for best performance on a given task / system.
`args`
    An optional tuple containing extra arguments to the objective function.

The evaluator will keep its subprocesses alive and running until it is tidied up by garbage collection.

Note that while this class uses multiprocessing, it is not thread/process safe itself: it should not be used by more than a single thread/process at a time.

Extends: `Evaluator`

evaluate(positions)¶

    Evaluates the function for every value in the sequence `positions`. Returns a list with the returned evaluations.
Loss surface visualisation (2d)¶
The methods below can be used to visualise loss (score, penalty, fitness, etc.) surfaces in two dimensions. Because most search/exploration methods return unevenly spaced points, a Voronoi diagram representation is used.

myokit.lib.fit.loss_surface_colors(x, y, f, xlim=None, ylim=None, markers='+')¶

Takes irregularly spaced points `(x, y, f)` and creates a 2d colored Voronoi diagram. This is useful to visualize the output of an optimisation routine on a two-dimensional parameter space.

Most 2d plotting methods (like matplotlib's `contour` and `surf`, or mayavi's default plotting options) accept only regularly spaced `(x, y)` grids as input. Unfortunately, the points sampled by an optimisation routine are typically dense in some areas and sparse in others. While a plot can still be created by interpolating these points onto a regular grid, interpolation may add or remove vital detail. This method gets around the issue by creating a Voronoi diagram of the sampled points and shading the region around each point according to its loss-function value: essentially a two-dimensional equivalent of a zero-order hold graph.

Arguments:

`x`
    A 1d array of x coordinates.
`y`
    A 1d array of y coordinates.
`f`
    A 1d array of loss-function evaluations, calculated at `(x, y)`.
`xlim`
    A tuple `(xmin, xmax)` representing the possible range of x values. If set to `None`, an automatic range will be computed.
`ylim`
    A tuple `(ymin, ymax)` representing the possible range of y values. If set to `None`, an automatic range will be computed.
`markers`
    The markers to use to plot the sampled points. Set to `None` to disable.

Returns a matplotlib figure of the 2d loss surface, represented as a Voronoi diagram with colored regions.

Note: This method requires Matplotlib to be installed.

myokit.lib.fit.loss_surface_mesh(x, y, f)¶

Takes irregularly spaced points `(x, y, f)` and creates a mesh using the Mayavi package and Delaunay triangulation. This is useful to visualize the output of an optimisation routine on a two-dimensional parameter space.

Note: This method requires Mayavi to be installed.

myokit.lib.fit.voronoi_regions(x, y, f, xlim, ylim)¶

Takes a set of `(x, y, f)` points and returns the edge points of the x-y Voronoi region around each point, within the bounds specified by `xlim` and `ylim`.

Points and Voronoi regions entirely outside the specified bounds will be dropped. Voronoi regions partially outside the bounds will be truncated. The third array, `f`, will be filtered the same way as `x` and `y`, but is otherwise not used.

Returns a tuple `(x, y, f, regions)`, where `x`, `y` and `f` are the coordinates of the accepted points, and each `regions[i]` is a list of the vertices making up the Voronoi region for the point `(x[i], y[i])`.

The code to extract the Voronoi regions was (heavily) adapted from: http://stackoverflow.com/a/20678647/423420

Note: This method requires SciPy to be installed.