Our goal is to make it as easy as possible to solve effectively a
large class of delay differential equations (DDEs). We restrict
ourselves to systems of equations of the form

y'(x) = f(x, y(x), y(x − τ_{1}), y(x − τ_{2}), …, y(x − τ_{k})) for a ≤ x ≤ b,
y(x) = S(x) for x ≤ a,  (1)

with constant delays τ_{j} > 0 and a given history function S(x).
A popular approach to solving DDEs exploits the fact that the task has much in common with the solution of ordinary differential equations (ODEs). Indeed, the DDE solver dde23 that we discuss here is closely related to the ODE solver ode23 from the MATLAB ODE Suite [16]. In particular, our attention is focused on the Runge–Kutta triple BS(2,3) used in ode23 because it has some special properties that we exploit in dde23. In § 2 we explain how explicit Runge–Kutta triples can be extended and used to solve DDEs. This extension is greatly simplified if the maximum step size H is smaller than τ because the resulting formulas are explicit. This is no restriction when proving convergence as H → 0, so many papers assume it. However, we believe that it is unacceptable in a quality solver, so we also discuss how to evaluate the implicit formulas that arise when the step size is bigger than τ. In § 3 we prove uniform convergence of the numerical scheme in realistic circumstances. In the following section we justify an estimate of the local truncation error.
Generally y'(a+) from (1) is different from y'(a) = S'(a). It is characteristic of DDEs that this discontinuity in y' propagates as a discontinuity in y'' at a + τ_{1}, …, a + τ_{k}. In turn these discontinuities propagate as a discontinuity in y^{(3)}, and so forth. Locating and accounting for low-order discontinuities in the solution is key to the use of Runge–Kutta methods for solving DDEs. In § 5 we discuss the practical issues of locating these discontinuities. Any DDE solver must account for them, but dde23 also deals with discontinuous changes in f at known points x in both the history and the interval of integration in a similar way.
Some problems involve changes in f at points that depend on the solution itself, or more generally on a function of the solution called an event function. Event functions are used to locate where y satisfies certain conditions, e.g., where a specific component of y(x) has a local maximum. dde23 has a powerful event location capability that is discussed in § 9. It is difficult to deal with the many possibilities that are seen in practice, so providing this capability profoundly affected the design of the solver.
The natural step size of a Runge–Kutta method is the largest that will yield the desired accuracy. Delays that are long compared to this step size and delays that are short present special difficulties for a DDE solver that we consider in § 7. In the computing environment of MATLAB brief comments about long delays suffice, but we investigate short delays at some length. In this we consider for the BS(2,3) triple how the iteration for evaluating implicit formulas behaves as the delay tends to zero in a model equation. We also consider the stability of the integration in the same circumstances.
We close with some examples that show how restricting the class of problems, algorithmic developments, the language facilities of MATLAB, and careful design have resulted in a DDE solver that is both capable and exceptionally easy to use. We have prepared a tutorial that shows how to use dde23. The solver and its auxiliary functions, the tutorial, and complete solutions of many examples from the literature are available at http://www.runet.edu/~thompson/webddes/.
Explicit Runge–Kutta triples are a standard way to solve the ODE
problem y' = f(x, y) on [a, b] with given y(a). They can be
extended to solve DDEs. Indeed, dde23 is closely related
to the ODE solver ode23 [16] which implements
the BS(2,3) triple [2]. A triple of s stages involves
three formulas. Suppose that we have an approximation y_{n} to
y(x) at x_{n} and wish to compute an approximation at x_{n+1} = x_{n} + h_{n}. For i = 1, …, s, the stages f_{ni} = f(x_{ni}, y_{ni}) are defined in terms of x_{ni} = x_{n} + c_{i} h_{n}
and

y_{ni} = y_{n} + h_{n} ∑_{j=1}^{i−1} a_{ij} f_{nj}.

The integration is advanced with the formula of order p,

y_{n+1} = y_{n} + h_{n} ∑_{i=1}^{s} b_{i} f_{ni},

and the error is estimated by comparison with a companion formula of order p − 1,

y_{n+1}^{*} = y_{n} + h_{n} ∑_{i=1}^{s} b_{i}^{*} f_{ni}.

The triple is completed by a continuous extension

y_{n+σ} = y_{n} + h_{n} ∑_{i=1}^{s} b_{i}(σ) f_{ni},  0 ≤ σ ≤ 1,

whose polynomial coefficients b_{i}(σ) yield an approximation to y(x_{n} + σ h_{n}) throughout the span of the step.
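To make the formulas of the triple concrete, the following Python sketch takes a single step of the BS(2,3) pair and forms the error estimate; the continuous extension is taken to be cubic Hermite interpolation over the step, which, as noted in § 7, is the continuous extension of this triple. The function name and the sample problem are ours, not part of dde23.

```python
import math

def bs23_step(f, x, y, h):
    """One step of the BS(2,3) pair for y' = f(x, y).

    Returns the third order result y1, an error estimate est, and a
    continuous extension S(sigma) for 0 <= sigma <= 1 (cubic Hermite).
    """
    f1 = f(x, y)
    f2 = f(x + h/2, y + h/2*f1)
    f3 = f(x + 3*h/4, y + 3*h/4*f2)
    y1 = y + h*(2/9*f1 + 1/3*f2 + 4/9*f3)                  # order 3
    f4 = f(x + h, y1)                                      # FSAL stage
    y1_star = y + h*(7/24*f1 + 1/4*f2 + 1/3*f3 + 1/8*f4)   # order 2
    est = y1 - y1_star                                     # error estimate

    def extension(sigma):
        # cubic Hermite interpolant matching y, f1 at x and y1, f4 at x + h
        s = sigma
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*y + h10*h*f1 + h01*y1 + h11*h*f4

    return y1, est, extension

# Example: y' = -y, whose exact solution is exp(-x)
y1, est, S = bs23_step(lambda x, y: -y, 0.0, 1.0, 0.1)
```

Evaluating S at interior points gives the dense output used to handle the lagged terms when the triple is extended to DDEs.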
In his discussion of continuous extensions, Dormand [4]
treats the local truncation error of the BS(2,3) triple as an
example. In §6.3 he obtains
 (2) 
To use an explicit Runge–Kutta triple to solve the DDE
(1), we need a strategy for handling the history terms
y(x_{ni} − τ_{j}) that appear in

f_{ni} = f(x_{ni}, y_{ni}, y(x_{ni} − τ_{1}), …, y(x_{ni} − τ_{k})).
In dde23 we predict S^{(0)}(x) to be the constant y_{0} for the first step. We do not attempt to predict more accurately then because the solution is not smooth at a and we do not yet know a step size appropriate to the scale of the problem. Indeed, reducing the step size as needed to obtain convergence of simple iteration is a valuable tactic for finding an initial step size that is on scale, cf. [14]. After the first step, we use the continuous extension of the preceding step as S^{(0)}(x) for the current step. This prediction has an appropriate order of accuracy and we can even assess the accuracy quantitatively using (2). Remarkably, the truncation error has a local maximum at x_{n+1}, so extrapolation actually provides a more accurate solution for some distance. Specifically, the ratio of the error of y_{n+σ} to the error of y_{n+1} is no larger than 1 for σ up to about 1.32. Much more important in practice is the accuracy of the prediction in the span of a subsequent step of the same size or rather larger. Unfortunately, the ratio grows rapidly as σ increases. Still, it is no bigger than 13 for 1 ≤ σ ≤ 2, so the prediction is quite good when the step size is changing slowly. The step size adjustment algorithms of dde23 are those of ode23 augmented to deal with implicit formulas. The convergence test for simple iteration is that ‖y_{n+1}^{(m+1)} − y_{n+1}^{(m)}‖ is no more than one tenth the accuracy required of y_{n+1}. Because each iteration costs as much as an explicit step, any proposed h_{n} with τ < h_{n} < 2τ is reduced to τ so as to make the formulas explicit. Considering the quality of the prediction when h_{n} ≥ 2τ, we have allowed up to 5 iterations. If convergence is not achieved, h_{n} is halved and the step repeated. The solver cannot crash because of repeated convergence failures because eventually it will resort to a step size for which the formulas are explicit.
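The predict-and-iterate cycle just described can be sketched in Python for a scalar problem with a single delay, y'(x) = L y(x) + M y(x − τ). The helper below is a simplified illustration, not dde23's implementation: the step starts at x_{n} = 0, the continuous extension is cubic Hermite, lagged values within the current step come from the current iterate, and the step is accepted when the change is no more than a tenth of the required accuracy.

```python
def iterate_step(L, M, tau, h, y_n, f_n, hist, tol, max_iter=5):
    """Evaluate one implicit BS(2,3) step for y'(x) = L*y(x) + M*y(x - tau)
    by simple iteration when h > tau. A sketch with x_n = 0;
    hist(x) supplies the known solution for x <= 0.
    """
    def hermite(s, y0, f0, y1, f1):
        # cubic Hermite interpolant on the current step, s in [0, 1]
        return ((2*s**3 - 3*s**2 + 1)*y0 + (s**3 - 2*s**2 + s)*h*f0
                + (-2*s**3 + 3*s**2)*y1 + (s**3 - s**2)*h*f1)

    y1, f1 = y_n, f_n                       # crude prediction of (y_{n+1}, f_{n+1})
    for m in range(1, max_iter + 1):
        def ylag(x, y1=y1, f1=f1):          # lagged value: history or current iterate
            return hist(x) if x <= 0 else hermite(x/h, y_n, f_n, y1, f1)
        def f(x, y):
            return L*y + M*ylag(x - tau)
        k1 = f(0.0, y_n)
        k2 = f(h/2, y_n + h/2*k1)
        k3 = f(3*h/4, y_n + 3*h/4*k2)
        y1_new = y_n + h*(2/9*k1 + 1/3*k2 + 4/9*k3)
        f1_new = f(h, y1_new)
        change = abs(y1_new - y1)
        y1, f1 = y1_new, f1_new
        if change <= tol/10:                # convergence test: tenth of accuracy
            return y1, m
    raise RuntimeError("no convergence: halve h; the formulas become explicit")
```

When M = 0 the lagged term never enters, so the second pass reproduces the first exactly and the iteration terminates with the explicit ODE result.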
Theorem. Suppose an explicit Runge–Kutta
triple is used to solve the DDE (1) as described in the
preceding section. Assume that the meshes {x_{n}} include all
discontinuities of low orders and that the maximum step size H
satisfies (5) and (7). If f satisfies a
Lipschitz condition in its dependent variables and is sufficiently
smooth in all its variables, then there exists a constant C such
that for a ≤ x ≤ b,

‖y(x) − S(x)‖ ≤ C H^{p}.  (3)
Before proving this result, we begin our investigation of
convergence by recalling the comments earlier about how
discontinuities arise and propagate for DDEs. We know in advance
all the points where the solution might not be smooth enough for
the usual orders of the formulas to hold. Because we consider
one-step methods, if we take these points to be mesh points, the
analysis is the same as if the solution were smooth throughout the
interval. Accordingly, we consider a sequence of meshes {x_{n}}
which include all these points. Just how this is accomplished in
practice is described in § 5. Between these points we
assume that f and y(x) are smooth enough for the formulas to
have their usual orders. We also assume that f satisfies a
Lipschitz condition with respect to all the dependent variables.
For the sake of simplicity, we suppose that there is only one
delay and take the Lipschitz condition in the form

‖f(x, y, z) − f(x, ỹ, z̃)‖ ≤ L ( ‖y − ỹ‖ + ‖z − z̃‖ ).
The Runge–Kutta formulas extended to DDEs involve a history term that we write generically in the increment function as g(x). We require two lemmas about how the increment functions depend on their arguments.
Lemma 1. There is a constant L such that for 0 ≤ σ ≤ 1,

‖F(x_{n}, ỹ_{n}, σ; g(x)) − F(x_{n}, y_{n}, σ; g(x))‖ ≤ L ‖ỹ_{n} − y_{n}‖.  (4)
Proof:
In a step of size h_{n} from (x_{n}, ỹ_{n}), the
intermediate quantities ỹ_{ni} are defined by

ỹ_{ni} = ỹ_{n} + h_{n} ∑_{j=1}^{i−1} a_{ij} f̃_{nj}.
The second lemma we need is a bound on the effect of using different history terms.
Lemma 2. Let Δ be a bound on ‖G(x) − g(x)‖ for all x ≤ x_{n+1}. If the maximum step size
H is small enough that

H L max_{i} ∑_{j=1}^{i−1} |a_{ij}| ≤ 1,  (5)

then there is a constant Γ such that

‖F(x_{n}, y_{n}, σ; G(x)) − F(x_{n}, y_{n}, σ; g(x))‖ ≤ Γ Δ.  (6)
Proof:
Let the intermediate quantities of the two formulas be



Recall that if the step size is small enough, the intermediate results
y_{ni} are defined by the explicit formulas

 (7) 
We are now in a position to prove convergence as H → 0
when the meshes include all discontinuities of low order and H
satisfies (5) and (7). Suppose that the
integration has reached x_{n} and let E_{n} be a bound on ‖y(x) − S(x)‖ for all x ≤ x_{n}. The local truncation error of the
continuous extension is defined by











Nowadays codes based on explicit Runge–Kutta triples estimate and control the error made in the current step by the lower order formula. They advance the integration with the higher order result y_{n+1} (local extrapolation) because it is more accurate, though just how much is not known. Our notation and proof of convergence incorporate this assumption about the triple. We make use of local extrapolation in our proof that we can estimate the local truncation error of the lower order formula, lte_{n}^{*}, by the computable quantity est = y_{n+1} − y_{n+1}^{*}.
The local truncation error







As with ODEs, the local truncation error can be expanded to


We control the local truncation error of y_{n+1}^{*}, but advance the integration with y_{n+1} because it is believed to be more accurate. Recall that for the BS(2,3) triple, the local truncation error of y_{n+σ} attains its maximum at y_{n+1}. Accordingly, though we do not know the local truncation error of the continuous extension, we believe that it is smaller than required throughout the span of the step.
The constant lags τ_{1}, …, τ_{k} are supplied to dde23 as the array lags. It is required that τ = min(τ_{1}, …, τ_{k}) > 0 and that the delays are distinct. In simplest use the only discontinuity is at the initial point where the derivative obtained from the DDE, y'(a+) = f(a, y(a), y(a − τ_{1}), y(a − τ_{2}), …, y(a − τ_{k})), is generally different from that provided by the history, S'(a). Generally the solution itself is continuous at the initial point, so we defer consideration of the case when it is not. It is characteristic of DDEs that this discontinuity in the first derivative results in discontinuities in the second derivative at a + τ_{1}, …, a + τ_{k}. We describe this as discontinuities at the first level. In turn each of these discontinuities propagates in the next level as a discontinuity in the third derivative, and so forth. This is much simpler than the situation with more general DDEs because we can determine easily, and in advance, where the solution might have discontinuities and the lowest order discontinuity possible there. Some solvers ask users to supply details about which y(x − τ_{j}) appear in which equations so as to determine more precisely how discontinuities propagate. We think that the inconvenience of this design outweighs any advantages gained in the solution of the DDEs. Moreover, assuming that the worst can happen is more robust because there is no opportunity for a user to make a mistake in specifying the structure of the DDEs.
We have to track discontinuities and account for them during the integration because the usual order of a Runge–Kutta formula is seen in a step of size h_{n} from x_{n} only when f(x, y(x), y(x − τ_{1}), y(x − τ_{2}), …, y(x − τ_{k})) is sufficiently smooth. As we have described the solution of DDEs with explicit Runge–Kutta methods, each step is taken with two formulas in order to estimate the error made in the step. If the error estimation and step size adjustment algorithms are to work properly, f must be smooth enough for both to have their expected order. If we require that the step size be chosen so that each point of discontinuity is a mesh point x_{m}, then none of y(x), y(x − τ_{1}), …, y(x − τ_{k}) will have a low order discontinuity in the span of the current step. This is obvious for a discontinuity in the argument y(x). There cannot be a ξ in (x_{n}, x_{n} + h_{n}) for which some y(ξ − τ_{j}) is not smooth because the discontinuity in y at ξ − τ_{j} would have propagated to ξ and we would have limited h_{n} so that x_{n} + h_{n} ≤ ξ. Because we consider only one-step methods, it is only the smoothness of f in the current step that affects the order of the formula. This is true even if y or f is discontinuous at x_{n}, provided that the correct values are used at x_{n}+; we need take no special action in the solver because, with one exception that we take up later, we assume these functions are (at least) continuous.
Although constant delays simplify greatly the location of potential discontinuities, there are some subtleties. One is revealed by a simple example. Suppose we start the integration at x = 0 and the delays are 1/3 and 1. The first lag implies a potentially severe lack of smoothness at 1/3, 2/3, 1, …. The difficulty is that in finite precision arithmetic, the representation of 1/3 is not exact, implying that the propagated value that ought to be 1 is merely very close to the correct value. Generally this is not important, but the lag of 1 implies a discontinuity precisely at 1. We must recognize that finite precision arithmetic has split two values that are the same, else the requirement that the solver step to points of discontinuity will cause it to attempt steps that are too short to be meaningful in the precision available. In dde23 we purge one of any pair of values that differ by no more than ten units of roundoff. This is done at each level of propagation so as to remove duplicates as soon as possible.
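The propagation and purging just described can be sketched as follows. The function name and the level count passed in are illustrative; the ten-units-of-roundoff tolerance is the one stated above.

```python
def propagate_jumps(points, lags, levels, tmax=float("inf")):
    """Propagate potential discontinuity points through several levels,
    purging near-duplicates (a sketch of the scheme described in the text).

    points: initial discontinuities (the initial point and any known jumps).
    Each level adds p + tau for every current point p and every lag tau.
    """
    eps = 2.220446049250313e-16          # double-precision unit roundoff
    current = sorted(points)
    all_points = list(current)
    for _ in range(levels):
        nxt = sorted(p + tau for p in current for tau in lags if p + tau <= tmax)
        # purge values that differ by no more than ten units of roundoff
        purged = []
        for v in nxt:
            if purged and abs(v - purged[-1]) <= 10*eps*max(1.0, abs(v)):
                continue
            purged.append(v)
        current = purged
        all_points.extend(current)
    # final purge of the combined, sorted list
    all_points.sort()
    out = []
    for v in all_points:
        if out and abs(v - out[-1]) <= 10*eps*max(1.0, abs(v)):
            continue
        out.append(v)
    return out

# Delays 1/3 and 1 starting from x = 0: the point 1 arises both directly
# and as three propagations of 1/3; the near-duplicates are purged.
pts = propagate_jumps([0.0], [1/3, 1.0], levels=4, tmax=2.0)
```

After purging, the point 1 appears exactly once in the list even though it is reached along two different propagation paths.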
Example 4.4 of Oberle and Pesch [11] is a model for the spread of an infection due to Hoppensteadt and Waltman. The problem is posed on [0, 10] and there is one lag, τ = 1. The solution is continuous throughout the interval, but there are discontinuous changes in the differential equation at points known in advance. Indeed, the changes are qualitative because at first the equation does not depend on y(x − 1), hence is an ODE. We provide in dde23 for discontinuities at points known in advance. All the user need do is give the locations of potential discontinuities as the value of the option 'Jumps'. As with the ODE Suite, options for dde23 are set with an auxiliary function, ddeset. In the case of the Hoppensteadt–Waltman model, the three points where discontinuities occur are defined by
c = 1/sqrt(2);
options = ddeset('Jumps',[(1-c), 1, (2-c)]);

and conveyed to the solver by means of the optional argument options. Discontinuities at known points are not limited to the interval of integration. The Marchuk immunology model discussed in [6] is solved on [0, 60] and there is one lag, τ = 0.5. One component of the history function is V(t) = max(0, t + 10^{−6}). Clearly V(t) has a discontinuity in its first derivative at t = −10^{−6}. Hairer, Nørsett, and Wanner do not account for this discontinuity, even though it is as severe as the one at the initial point. With the user interface of dde23 it is easy to specify discontinuities at known points and they are accommodated in the algorithm simply by forming an array that includes these points as well as the initial point and then propagating them all as described already for the initial point. Some details of the solution of the Marchuk model are provided in § 8. When using a solver with the capabilities of dde23 it is easy and certainly better practice to specify discontinuities like those of these two examples, but a robust integrator may well be able to cope with them automatically.
Much more serious complications are introduced by continuation (restarts). As we discuss more fully in § 9, we provide for the location of events. This capability is used, for example, to deal with a coefficient ξ(m) in the Marchuk model that has a jump in its first derivative with respect to the state variable m(t). Restarting the integration when this happens assures us that the formulas are applied only to smooth functions. Retaining the solution in the form of a structure makes it possible to continue. For the present we note only that we must include in the solution structure the list of potential discontinuities. After all, we might have discarded points outside the original interval of integration that lie in the span of the current interval. Accordingly, on a restart this list, any jumps specified by the user in the current integration, and the initial point of the current integration are formed into an array. The entries are propagated and duplicates purged as described earlier.
An initial discontinuity in the first derivative appears in derivatives of successively higher order. When the discontinuities are in a derivative of sufficiently high order that they do not affect the order of the formulas, we can stop tracking them. The relatively low order of the BS(2,3) triple implies that the effects of discontinuities are rather more short-lived than with other popular formulas. Our basic assumption is that y(x) is continuous, so only four levels of propagation are needed. During the integration itself, the only time we permit a discontinuity in y(x) is at the initial point, which is indicated by providing the value of the solution as the value of the option 'InitialY'. The DDE is evaluated using this value, so both y and f are continuous to the right of the initial point. The facilities of dde23 allow us to deal with discontinuous changes in the solution at other times. If the discontinuity occurs during the integration, as for example at the time of occurrence of an event, the integration is to be restarted with the appropriate value of the solution supplied via 'InitialY'. Potential discontinuities of any order in the initial history are specified via the 'Jumps' option. If either kind of discontinuity in y(x) is possible, discontinuities are propagated to one higher level in the solver.
The Hoppensteadt–Waltman model involves the solution of ODEs as well as DDEs. The solver makes no assumption about whether terms with lags actually appear in the equations. This makes it possible to solve ODEs, though it is best then to set lags = [] because any delay specified affects the list of potential low-order discontinuities, hence the details of the integration.
Recall that the Marchuk model has a coefficient ξ(m) that has a jump in its first derivative with respect to the state variable m(t). Such discontinuities are qualitatively different from the discontinuities that we treat with the 'Jumps' option because they occur at unknown times. An event is said to occur at time t^{*} when a scalar function g(t, y(t), y(t − τ_{1}), …, y(t − τ_{k})), called an event function, vanishes at t = t^{*}. There may be many event functions and it may be important how the function vanishes, but we defer discussion of these complications. As with the propagated discontinuities, if we locate events and restart there, we integrate only with smooth functions. The theoretical and practical difficulties of event location are often not appreciated, but a theoretical analysis is possible with reasonable assumptions, e.g. [17], and a number of quality ODE solvers provide for the location of events. In particular, all the solvers of the MATLAB ODE Suite have a powerful event location capability that we have exploited in developing dde23. The capability is realized by means of the zero-finding function odezero. A nice exposition of its algorithm is found in C.B. Moler [9]. Although we did not change the basic algorithm, we modified the function to account for the circumstances of dde23.
The user interface for event location in dde23 is similar to that of ode23. There is no limit on the number of scalar event functions. They are all evaluated in a single function and the values returned as a vector. Using ddeset, the name of this function is passed to the solver as the value of the 'Events' option. The function must also return some information about the nature of the events. Sometimes a user wants to know when events occur and the solution then, e.g., where a solution component has its local maxima and its values there. This is quite different from the events of the Marchuk model which cause the integration to be terminated. The two situations are distinguished using a vector isterminal. If we want to terminate the integration when event function k vanishes, we set component k of isterminal to 1, and otherwise to 0. There is an annoying matter of some importance: Sometimes we want to start an integration with an event function that vanishes at the initial point. Imagine, for example, that we fire a model rocket into the air and we want to know when it hits the ground. It is natural to use the height of the rocket as a terminal event function, but it vanishes at the initial time as well as the final time. dde23 treats an event at the initial point in a special way. The solver locates such an event and reports it, but does not treat it as terminal, no matter how isterminal is set. It may be important how an event function vanishes. For example, to find local maxima of a solution component, we can locate zeros of the first derivative of this component. However, to distinguish maxima from minima, we want the solver to report a zero only when this function decreases through 0. This is done using the vector direction. If we are interested only in events for which event function k is increasing through 0, we set component k of direction to +1. 
Correspondingly, we set it to −1 if we are interested only in those events for which the event function is decreasing, and to 0 if we are interested in all events. As a concrete example, the event function for the Marchuk model can be coded as
function [value,isterminal,direction] = marchuke(t,y,Z,h6,state)
value = y(4) - 0.1;
isterminal = 1;
direction = 0;

We defer discussion of the input arguments until § 8. The solution component y(4) ≈ m(t), so the (single) event function is m(t) − 0.1. The value of isterminal indicates that the integration is to terminate when an event is located and the value of direction, that all zeros are to be reported.
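In outline, event location over a step amounts to scanning the step for sign changes of each event function and refining each one, honoring direction. The Python sketch below uses plain bisection and is only meant to illustrate the logic; dde23 itself relies on the odezero algorithm described by Moler [9], and the function name here is ours.

```python
import math

def locate_events(g, a, b, direction=0, nsub=64, tol=1e-12):
    """Locate zeros of a scalar event function g on [a, b] by scanning
    nsub subintervals for sign changes and refining each by bisection.

    direction: +1 reports only increasing crossings, -1 only decreasing,
    0 reports all crossings (as with dde23's direction vector).
    """
    events = []
    xs = [a + (b - a)*i/nsub for i in range(nsub + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        g0, g1 = g(x0), g(x1)
        if g0 == 0.0 or g0*g1 > 0:
            continue                      # no sign change in this subinterval
        if direction > 0 and not (g0 < 0 < g1):
            continue                      # not an increasing crossing
        if direction < 0 and not (g0 > 0 > g1):
            continue                      # not a decreasing crossing
        lo, hi, glo = x0, x1, g0
        while hi - lo > tol:
            mid = 0.5*(lo + hi)
            gm = g(mid)
            if gm == 0.0 or (glo < 0) == (gm < 0):
                lo, glo = mid, gm         # root lies in the right half
            else:
                hi = mid                  # root lies in the left half
        events.append(0.5*(lo + hi))
    return events

# Example: cos decreases through 0 at pi/2 and increases through 0 at 3*pi/2
evs = locate_events(math.cos, 0.0, 2*math.pi, direction=-1)
```

With direction = −1 only the crossing at π/2 is reported, which is how one would pick out local maxima of a solution component by applying the test to its derivative.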
It is hard to devise a good interface for event location because users may want to do quite different things when an event occurs. We have distinguished between terminal and nonterminal events. dde23 returns its solution in the form of a structure. It can be called anything, but to be specific, suppose it is called sol. Often a user will want to know when an event occurred, what the solution is there, and which event function vanished. This information is returned by means of fields that have fixed names. If there are events, they occur at the entries of the vector sol.xe. For each entry, the solution there is the corresponding column of the array sol.ye. Also, the corresponding entry of the vector sol.ie is the index of the event function that vanished.
Often an event is terminal because it is necessary to change the function defining the equations, as with the Marchuk model, and/or the value of the solution before continuing the integration. The user interface for event location in ode23 is powerful and reasonably convenient when solving ODEs because on a return from a terminal event, it is not difficult to make such changes, continue on as the solution of a new problem, and finally assemble the solutions on subintervals to obtain a solution on the whole interval of interest. The matter is much more difficult for DDEs because they involve previously given or computed quantities. On a restart, the solver may have to evaluate the given history function, which can be specified either as a function or a vector, as well as evaluate the solution computed prior to the restart. As mentioned in § 5, propagated discontinuities also present complications. We deal with restarts by allowing the history to be specified as a solution structure. This structure contains all the information the solver needs to recognize and deal with the various possibilities. The approach is easy for the user, indeed, easier than when solving ODEs because the solution structure always contains the solution from the initial point to the last point reached in the integration.
As stated earlier, discontinuities in the solution are allowed only at the initial point of an integration. The solver is told of this and given the value of the solution there by providing it as the value of the option 'InitialY'. Discontinuities in the solution are not common, and when they do occur, they invariably do so when a suitably defined event function vanishes. By making the event terminal, we can use the 'InitialY' option to give the solution the correct initial value for continuation.
The ``natural'' step size is the largest one that yields the specified accuracy. Codes for the solution of DDEs have difficulties when the delays are both long and short compared to the natural step size. Long delays present no special difficulty for dde23, so after a brief discussion we turn to the troublesome matter of short delays.
Many solvers are written in FORTRAN 77, which lacks the facilities for dynamic memory management found in Fortran 90 and, more conveniently, in MATLAB. On reaching a point x_{n} in the integration the solver will need to evaluate the solution at previous points x_{n} − τ_{j}, hence must retain the information it needs to do this. When there is a delay long compared to the natural step size, this implies that the solution must be retained over many steps. The difficulty is that in an environment like FORTRAN 77, which requires allocation of storage at the beginning of a run, there is a possibility of exceeding the storage provided. This is not an issue for us because we have followed the traditional design of ODE solvers in MATLAB of returning to the user the solution at all mesh points. Virtual storage and dynamic memory management increase the class of problems that can be solved, but we assume that long delays will not prevent solution of the problem given. We remark that MATLAB will automatically handle allocation of additional storage, but we follow the ODE Suite in explicitly allocating additional storage as needed because it reduces the run time.
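The storage pattern is simple to sketch; the chunk size and function name below are illustrative. Rather than enlarging the solution arrays by one entry per accepted step, storage is grown a chunk at a time and trimmed on return.

```python
def integrate_mesh(step_count, chunk=100):
    """Illustrative storage pattern: preallocate a buffer, grow it a chunk
    at a time as mesh points are accepted, and trim the unused tail,
    rather than enlarging the array by one entry per step.
    """
    buf = [0.0]*chunk                 # initial allocation
    n = 0
    x = 0.0
    for _ in range(step_count):
        x += 0.1                      # an accepted mesh point (illustrative)
        if n == len(buf):
            buf = buf + [0.0]*chunk   # allocate an additional chunk
        buf[n] = x
        n += 1
    return buf[:n]                    # trim unused storage on return

mesh = integrate_mesh(250)
```

The same pattern applies to the arrays of solution values and slopes retained for evaluating the lagged terms.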
A short delay is a more troublesome matter that is related to stiffness. A problem is stiff when the natural step size must be reduced greatly and often. Although the reduction necessary for the stability of explicit formulas gets the most attention, [14] points out reductions arising in the evaluation of implicit formulas by simple iteration and in output that might be just as severe. Some DDE solvers require that H < τ so as to have the simplicity of working with explicit formulas. For such solvers a short delay causes stiffness when the natural step size must be reduced severely for this reason.
dde23 uses simple iteration to evaluate the implicit formulas that may arise when the step size is bigger than τ. The solver is not permitted to step over discontinuities, so a short delay sometimes causes the natural step size to be reduced greatly. However, eventually the propagated discontinuities smooth out to the point that they can be ignored. This happens sooner in dde23 than in many solvers because of the relatively low order formulas used. In a moment we consider restrictions on the step size when the solution is smooth. Before doing so, we note that not stepping over discontinuities is different from limiting the step size to the shortest delay. To appreciate the distinction, suppose there are two delays τ_{1} ≪ τ_{2}. The first few steps of the integration are limited to τ_{1} because there are potential discontinuities of low order at a + τ_{1}, a + 2τ_{1}, …. The other delay makes its appearance first at a + τ_{2}. Once the discontinuity at a that is propagating at spacing τ_{1} has smoothed out, the solver can use step sizes much bigger than τ_{1} in the rest of the integration to a + τ_{2}. This all repeats at a + τ_{2} because the discontinuity there propagates at a spacing of τ_{1} for a few steps and the second delay has its next effect at a + 2τ_{2}.
We saw in § 3 that we can take steps bigger than the
smallest delay τ provided that the maximum step size H
satisfies (5) and (7) so that simple
iteration converges. We found that many of our examples from the
literature were solved with step sizes for which the formulas are
explicit. Some did involve steps big enough to make the formulas
implicit, but permitting such steps was not as advantageous as we
had hoped. This led us to investigate more fully the effects of
short delays. The effects of delays as seen in propagated
discontinuities disappear soon enough, but delays represent a
natural measure of scale for a DDE and it is not obvious that they
will not affect the numerical solution in other ways. For one
thing, the sufficient condition for convergence of simple
iteration does not reveal how this restriction on the step size is
related to τ. For another, we have not yet discussed the
stability of the formulas and how the largest stable step size is
related to τ. In the following subsections we investigate
these issues for the scalar model problem

y'(x) = L y(x) + M y(x − τ).  (8)
For the model DDE (8), we use Maple [7] to
work out an analytical expression for the iterates when evaluating
the BS(2,3) triple by simple iteration. We prove that if the step
size h satisfies
 (9) 
The number of implicit stages depends on how big h is compared
to τ. We are interested in the possibility of h ≫ τ,
so we may assume that the BS(2,3) triple has its maximal number of
implicit stages. The continuous extension of this triple is cubic
Hermite interpolation over the span of the step, so the solver
retains at x_{n} the slope f_{n} in addition to the solution value
y_{n}. In a step to x_{n} + h the values y_{n}, f_{n}, S(x_{n} − τ) are
fixed during the iteration. The continuous extension is determined
by these fixed quantities and the vector v = (y_{n+1}, f_{n+1})^{T}. If v^{(m)} denotes the current approximation to
this vector, it is not difficult using Maple to work out the
iteration matrix J in v^{(m+1)} = J v^{(m)} + c. It is found
that the entries of J are cubic polynomials in τ. Let J_{0}
be this matrix evaluated at τ = 0. If we let
D = diag{1, h}, it is found that


Now we investigate the solution of (8) with constant step size and ask whether the numerical method is stable when the problem is. The model DDE (8) includes the test equation y' = L y used when investigating stability of ODE solvers, so matters are necessarily more complicated for DDEs. In particular, the test equation is stable when Re(L) ≤ 0, but it is not so clear when the more general equation (8) is stable. Hairer et alia [6] discuss this when L and M are real. A sufficient condition for stability of the DDE is that L < 0 and |M| ≤ |L|. Although they show that a delay (M ≠ 0) can stabilize an ODE that is unstable because L > 0, the sufficient condition is reasonably close to the necessary condition. It is only suggestive, but this sufficient condition led us to study how the stability of the BS(2,3) triple when solving (8) is related to its stability when solving the test equation.
A good investigation of the stability of methods for DDEs based on Runge–Kutta formulas for ODEs is that of Paul and Baker [12], but it does not illuminate the present situation because it assumes that h is small enough that the formulas are explicit. Later in [13] they considered methods for singular DDEs, and in this context it was natural to discuss the implicit formulas that arise when τ < h. Although they formulate the algebraic equations that allow one to decide whether the computation is stable with a given h, they do not draw any conclusions. Considering the generality of the methods they considered, this is not surprising. In contrast, we study only a specific triple of relatively few stages and focus our attention on the limit τ → 0. It is this limit and the exceptionally smooth continuous extension of the BS(2,3) triple that allow us to relate the stability of the method when solving (8) to its stability when solving the test equation. Specifically, we prove that if the method is stable in a strong sense for the test equation, then for all sufficiently small τ and M, it is stable for the DDE (8). Note that if L and M are real, the sufficient condition implies that the DDE itself is stable, but our analysis allows complex coefficients and the stability of the DDE is then murky. As with the convergence of simple iteration, we find that it is possible to integrate stably with step sizes arbitrarily bigger than the delay.
Key to our analysis is the observation that as τ → 0, the stability behavior of the BS(2,3) triple is that of a one-step method. Suppose the integration has reached x_{n} and we take a step of size h. For all sufficiently small τ, the only argument that does not lie in the span of the current step is x_{n} − τ. It lies in [x_{n} − h, x_{n}], where S(x) is the cubic Hermite interpolant to y_{n−1}, f_{n−1}, y_{n}, f_{n}. As the form of the interpolant suggests and Maple confirms,
 (10) 
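The leading terms here are forced by the construction: since S matches the value y_{n} and the slope f_{n} at x_{n}, we have S(x_{n} − τ) = y_{n} − τ f_{n} + O(τ^{2}). This is easy to confirm numerically; the Python sketch below (ours, with hypothetical data sampled from y = e^{x}) builds the cubic Hermite interpolant and checks that the defect shrinks like τ^{2}: halving τ reduces it by about a factor of four.

```python
import math

def hermite(x0, x1, y0, f0, y1, f1, x):
    # Cubic Hermite interpolant on [x0, x1] matching values y and slopes f.
    h = x1 - x0
    s = (x - x0) / h
    return (y0 * (2*s**3 - 3*s**2 + 1) + h * f0 * (s**3 - 2*s**2 + s)
            + y1 * (-2*s**3 + 3*s**2) + h * f1 * (s**3 - s**2))

# Data sampled from y = exp(x) on a step [0, 1]; any smooth y would do.
x0, x1 = 0.0, 1.0
y0 = f0 = math.exp(x0)
y1 = f1 = math.exp(x1)

def defect(tau):
    # |S(x_n - tau) - (y_n - tau*f_n)|, which should behave like C*tau^2.
    return abs(hermite(x0, x1, y0, f0, y1, f1, x1 - tau) - (y1 - tau * f1))

print(round(defect(1e-2) / defect(5e-3)))   # 4: halving tau quarters the defect
```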
In what follows we work only with terms through O(τ^{2}). The quantity f_{n} is defined implicitly because f_{n} = L y_{n} + M S(x_{n} − τ) and S(x_{n} − τ) depends on f_{n}. A little calculation using (10) shows that

Insight and a check on the computations is provided by supposing that M = 0, hence Z = 0. This corresponds to solving an ODE, so y_{n+1} should be, and is, a third-order approximation to the solution of the test equation with y_{n} = 1, namely y_{n+1} = P(z) = 1 + z + z^{2}/2 + z^{3}/6. The polynomial P(z) appearing here is the stability polynomial of the method for ODEs. When Re(z) < 0 the differential equation is stable and the method is stable if |P(z)| ≤ 1. We shall say that it is ``damped'' if |P(z)| < 1.
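The finite stability region this implies is easy to probe. In the Python sketch below (ours, for illustration only), z = −1 gives |P(z)| = 1/3 < 1, so the method is damped there, while z = −3 gives |P(z)| = 2 > 1, so the method is unstable even though the test equation itself is stable for all z on the negative real axis.

```python
def P(z):
    # Stability polynomial of the underlying third order formula; z = h*L.
    return 1 + z + z**2 / 2 + z**3 / 6

print(abs(P(-1.0)) < 1.0)   # True: damped at z = -1
print(abs(P(-3.0)) > 1.0)   # True: unstable at z = -3, though Re(z) < 0
```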
So far we have been considering an expression for y_{n+1} that
is useful for small τ. Let us now suppose that Z is also
small. A straightforward computation using Maple shows that

In this section we consider a few examples that show how easily even rather complicated DDEs can be solved with dde23.
A Kermack-McKendrick model of an infectious disease with periodic outbreak is discussed in [6], where Figure 15.6 shows the solution of
y_{1}′(t) = − y_{1}(t) y_{2}(t − 1) + y_{2}(t − 10),
y_{2}′(t) = y_{1}(t) y_{2}(t − 1) − y_{2}(t),
y_{3}′(t) = y_{2}(t) − y_{2}(t − 10).
Solving this model illustrates the simplest use of dde23. In general a call to the solver with all options set by default has the form
sol = dde23(ddefile,lags,history,tspan);

The call list could scarcely be shorter. The interval of integration, tspan, is here [0, 40]. history can be specified in several ways. In the common case that the solution is constant prior to the initial point, the vector itself can be supplied as history. For the example it is [5; 0.1; 1]. The initial value of the solution is not an input argument for dde23 because most problems have initial values that are the same as those given by the history function evaluated at the initial point. The constant delays are supplied as an array lags, here [1, 10]. There is no limit on the number of delays, but they must be distinct. ddefile is the name of the function for evaluating the DDEs. For the example this function can be coded as
function v = kmf(t,y,Z)
ylag1 = Z(:,1);
ylag2 = Z(:,2);
v = zeros(3,1);
v(1) = - y(1)*ylag1(2) + ylag2(2);
v(2) =   y(1)*ylag1(2) - y(2);
v(3) =   y(2) - ylag2(2);

Here the vector y approximates y(t) and column j of the array Z approximates y(t − τ_{j}) for j = 1,…,k. The delays can be specified in any order, but in defining the DDEs, τ_{j} is lags(j). It is not necessary to create a local vector ylag1 approximating y(t − τ_{1}) as we have done here, but we find that it often makes the equations easier to read.
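For readers who want to experiment outside MATLAB, the right-hand side of kmf transcribes directly; the only convention to keep straight is that column j of Z holds the solution evaluated at t − lags(j). This Python version is our transcription for illustration, not anything used by dde23.

```python
def kmf(t, y, Z):
    # Z is a list of rows; Z[i][j] approximates component i of y(t - lags[j]).
    ylag1 = [row[0] for row in Z]   # y(t - 1)
    ylag2 = [row[1] for row in Z]   # y(t - 10)
    return [-y[0] * ylag1[1] + ylag2[1],
             y[0] * ylag1[1] - y[1],
             y[1] - ylag2[1]]

# With the constant history [5, 0.1, 1], every lagged value equals y itself:
hist = [5.0, 0.1, 1.0]
Z0 = [[v, v] for v in hist]
print(kmf(0.0, hist, Z0))   # [-0.4, 0.4, 0.0]
```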
Having coded the DDEs, we now solve the initial value problem with the command
sol = dde23('kmf',[1, 10],[5; 0.1; 1],[0, 40]);

The input arguments of dde23 are much like those of ode23, but the output differs formally in that it is one structure, here called sol, rather than several arrays
[t,y,...] = ode23(...

The field sol.x corresponds to the array t of values of the independent variable returned by ode23 and the field sol.y, to the array y of solution values. Figure 15.6 of [6] is reproduced by
plot(sol.x,sol.y);

In summary, a complete program for solving the Kermack-McKendrick model consists of the file kmf.m defining the DDEs, the one command invoking dde23, and the one plot command.
Output from dde23 is not just formally different from that of ode23. The method of dde23 approximates y(t) by a piecewise-polynomial function S(t) ∈ C^{1}[a,b]. The solver places in sol the information necessary to evaluate this function with ddeval. Values of S(t) and optionally S′(t) are obtained at an array of arguments t by
[S,Sp] = ddeval(sol,t);

With this form of output, we solve a DDE just once and then obtain inexpensively as many solution values as we like, anywhere we like. Plotting the solution at the mesh points of sol.x is generally satisfactory, but when the graph is not sufficiently smooth, we can always get a smooth graph by evaluating it at enough points with ddeval.
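The idea behind this kind of dense output can be sketched in a few lines: find the mesh interval containing t and evaluate the cubic Hermite interpolant there from the stored values and slopes. The Python sketch below is ours, with made-up argument names, and glosses over everything ddeval actually does beyond this (vectorization, the optional derivative output S′).

```python
from bisect import bisect_right

def ddeval_sketch(xs, ys, fs, t):
    # xs: mesh points; ys, fs: solution values and slopes stored there.
    # Evaluates the C^1 piecewise cubic Hermite interpolant S at t.
    i = min(max(bisect_right(xs, t) - 1, 0), len(xs) - 2)
    h = xs[i+1] - xs[i]
    s = (t - xs[i]) / h
    return (ys[i] * (2*s**3 - 3*s**2 + 1) + h * fs[i] * (s**3 - 2*s**2 + s)
            + ys[i+1] * (-2*s**3 + 3*s**2) + h * fs[i+1] * (s**3 - s**2))

# Cubic Hermite interpolation reproduces cubics exactly, so data sampled
# from y = t^3 gives a convenient check at an off-mesh point:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [t**3 for t in xs]
fs = [3 * t**2 for t in xs]
print(ddeval_sketch(xs, ys, fs, 1.5))   # 3.375
```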
To further illustrate the advantages of output in this form, we explain how to reproduce a figure in J.D. Farmer [5]. A DDE with a single lag of 14 is solved there on [0, 300]. Fig. 2a plots y(t − 14) against y(t), but starts with t = 50 so as to allow transient behavior to die out. The problem is solved easily with dde23, but we cannot make this plot using results at the mesh points sol.x alone. This is because they are not equally spaced: if t^{*} appears in sol.x so that we have y(t^{*}) available in sol.y, it is generally not the case that t^{*} − 14 appears in sol.x, so we do not have y(t^{*} − 14). However, using ddeval we can reproduce the figure easily with
t = linspace(50,300,1000);
y = ddeval(sol,t);
ylag = ddeval(sol,t - 14);
plot(y,ylag);

We specify 1000 points because generally a smooth graph in the phase plane requires a good many. Because ddeval is vectorized and exploits fast built-in functions, there is no need to be concerned about evaluating the solution at so many points.
As noted in § 5, dde23 does not require that the equations actually involve the delays specified. Specifying an additional, short artificial delay affects how the solution is computed, but the solution and the natural step size remain the same. Solving the Kermack-McKendrick model with the option 'Stats' set to 'on', we find that the problem is solved in 133 successful steps, with 17 failed attempts and a total of 451 function evaluations. Adding an artificial delay of 10^{−4} by changing lags to [1, 10, 1e-4] increased the number of successful steps to 164; the number of failed attempts dropped to 12, and the total number of function evaluations rose to 1027. If the step size had been restricted to the shortest delay, the integration would have required at least 300 × 10^{4} steps!
A careful solution of the Marchuk immunology model discussed briefly in § 5 is relatively complicated. It serves to make a number of points about how to use dde23 and illustrates some interesting capabilities. The listing that follows shows the pertinent details, but it has been abbreviated by deleting the details of scaling and plotting the solution components, which are standard for MATLAB. Also, we do not discuss here some aspects of the computation that were taken up in preceding sections.
The Marchuk problem depends on a parameter h6. Figure 15.8 of [6] presents plots for two values that we reproduce here. In § 9 we list an event function, marchuke, for locating where the solution component m(t) has value 0.1. To communicate to the ddefile the information it needs to evaluate the DDEs properly, we introduce a parameter state that is +1 if m(t) ≤ 0.1 and −1 otherwise. As is common in the functions of MATLAB, parameters can be added at the end of the list of input variables. When we listed marchuke in § 9, we deferred discussion of the input arguments h6 and state. Now we understand that they are parameters that are being passed to this function by the solver. As it happens, they are not actually used by this function, but when parameters are passed into dde23, they must be added to the ends of the call lists of the ddefile and, if present, the history function and the event function. We do not list the history function, marchukh, nor the ddefile, marchukf, for this example because their coding is straightforward.
As stated earlier, options are set by means of the auxiliary function ddeset exactly as with odeset when using ode23. In addition to specifying a jump in the history and the presence of an event function, we specify relative and absolute error tolerances to show how this is done. The parameter state is initialized according to the value of m(0). Events are terminal, and so cause a return to the main program, where we reset state and restart with the current solution structure sol as the history function. Because we do not know how many events will occur, restarts are coded in a while loop. The mesh {x_{n}} used by the solver is returned as the field sol.x. The last entry in this vector, sol.x(end), is the last point reached in the integration, so we compare it to 60 to see whether we have reached the end of the interval.
opts = ddeset('RelTol',1e-5,'AbsTol',1e-8,...
              'Jumps',-1e-6,'Events','marchuke');
for h6 = [10 300]
   state = +1;
   sol = dde23('marchukf',0.5,'marchukh',[0,60],opts,h6,state);
   while sol.x(end) < 60
      state = - state;
      sol = dde23('marchukf',0.5,sol,[sol.x(end),60],opts,h6,state);
   end
   % Scaling and plot commands go here.
end

Printing out the event locations returned in sol.xe shows that there are no events for h6 = 10 and three for h6 = 300.
A two-wheeled suitcase may begin to rock from side to side as it
is pulled. When this happens, the person pulling it attempts to
return it to the vertical by applying a restoring moment to the
handle. There is a delay in this response that can affect
significantly the stability of the motion. This is modeled by
Suherman et al. [18] with the DDE
 (12) 
A wheel hits the ground (the suitcase is vertical) when y_{1}(t) = 0, and the suitcase has fallen over when |y_{1}(t)| = π/2. All these events are terminal and all are to be reported. The event function can be coded as
function [value,isterminal,direction] = scasee(t,y,Z,state)
value = [y(1); abs(y(1)) - pi/2];
isterminal = [1; 1];
direction = [0; 0];
The parameter state seen in the event function is introduced in the main program much as in the preceding example so as to evaluate properly the discontinuous coefficient sign(y_{1}(t)) in (12). When a wheel hits the ground, the integration is to be restarted with y_{1}(t) = 0 and y_{2}(t) multiplied by the coefficient of restitution 0.913. The 'InitialY' option is used for this purpose. The solution at all the mesh points is available as the field sol.y and in particular, the solution at the time of a terminal event is the last column of this array, sol.y(:,end). If the suitcase falls over, the run is terminated, so we must check which event occurred. Recall that this information is provided in sol.ie and it is the last event that we must check. The following program for the suitcase problem has much in common with the preceding example, but also illustrates other aspects of using the solver. Note, for example, how ddeset is used to add an option and to change the value of an option previously set.
state = +1;
opts = ddeset('RelTol',1e-5,'AbsTol',1e-5,'Events','scasee');
sol = dde23('scasef',0.1,[0; 0],[0, 12],opts,state);
while sol.x(end) < 12
   if sol.ie(end) == 1       % A wheel hit the ground.
      state = - state;
      opts = ddeset(opts,'InitialY',[0; 0.913*sol.y(2,end)]);
      sol = dde23('scasef',0.1,sol,[sol.x(end), 12],opts,state);
   else                      % The suitcase fell over.
      break;
   end
end
This program reproduces Figure 3 of [18] except that we do not include the standard commands that plot the figures. The event locations computed by dde23 are compared in Table 1 to reference values computed with the FORTRAN 77 code DKLAG5 [10] used in [18] and verified with its successor DKLAG6 [3]. Having written the three solvers, we can fairly say that it is very much easier to solve this problem in MATLAB with dde23.
                              dde23      reference
A wheel hit the ground at     4.5168     4.516757
A wheel hit the ground at     9.7511     9.751053
The suitcase fell over at    11.6704    11.670393
After running this program, sol.xe is
0.0000    4.5168    4.5168    9.7511    9.7511   11.6704

This does not seem to agree with the event locations of Table 1. For instance, why is there an event at 0? That is because one of the event functions is y_{1} and this component of the solution has initial value 0. As explained in § 9, dde23 locates this event, but does not terminate the integration because the terminal event occurs at the initial point. The first integration terminates at the first point after the initial point where y_{1}(t^{*}) = 0, namely t^{*} = 4.5168. The second appearance of 4.5168 in sol.xe is the same event at the initial point of the second integration. The same thing happens at 9.7511, and finally the event at 11.6704 tells us that the suitcase fell over and we are finished.
We have prepared a tutorial on the use of dde23 that includes a brief discussion of DDEs, complete solutions of many examples from the literature that illustrate how to use the solver, and some exercises (with solutions). The solver and its auxiliary functions, the tutorial in various formats, and all the example programs are available at http://www.runet.edu/~thompson/webddes/. The programs run in version 5 of MATLAB.