This habilitation thesis was presented on January 22nd 1997.
Source: Irene Saverino (firstname.lastname@example.org)
Supervisor: Professor Alistair I. Mees.
Text versions of the abstract and summary, and a PostScript file of the entire thesis, are available at http://maths.uwa.edu.au/~gary/.
We consider discrete, differentiable dynamical systems $T:M\rightarrow M$, where $M$ is a smooth $d$-dimensional manifold embedded in Euclidean space, and shall be concerned with ergodic averages of real-valued functions $g:M\rightarrow\mathbb{R}$. Such averages may be calculated by arithmetically averaging $g$ along a single infinitely long orbit (time averaging) or by integrating $g$ with respect to an ergodic invariant measure (space averaging). We are particularly interested in the situation where these two methods yield identical answers for a large number of orbits, as in this situation the invariant measure has some physical significance.
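The two kinds of average can be compared directly on a map whose invariant density is known in closed form. The sketch below (not from the thesis; the logistic map and the observable $g(x)=x$ are chosen purely for illustration) computes a time average along one long orbit and a space average against the invariant density $\rho(x)=1/(\pi\sqrt{x(1-x)})$, and the two agree:

```python
import numpy as np

# Logistic map T(x) = 4x(1-x): its ergodic invariant density on [0,1]
# is known in closed form, rho(x) = 1 / (pi * sqrt(x(1-x))).
def T(x):
    return 4.0 * x * (1.0 - x)

g = lambda x: x  # observable to average

# Time average: arithmetic mean of g along a single long orbit.
x, total, n_iter = 0.2, 0.0, 1_000_000
for _ in range(n_iter):
    total += g(x)
    x = T(x)
time_avg = total / n_iter

# Space average: integrate g against the invariant density
# (midpoint rule; the endpoint singularities are integrable).
xs = (np.arange(100_000) + 0.5) / 100_000
rho = 1.0 / (np.pi * np.sqrt(xs * (1.0 - xs)))
space_avg = np.sum(g(xs) * rho) / np.sum(rho)

print(time_avg, space_avg)  # both close to 1/2
```

Both averages come out near $1/2$, the integral of $g$ against $\rho$; this agreement for typical initial points is exactly what marks the measure as physical.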
Dynamical indicators that arise as ergodic averages are the Lyapunov exponents of T. These quantities describe asymptotic rates of local stretching (or contraction) of phase space under T. Chapter 1 of this thesis describes in detail a new method of computing Lyapunov exponents from either an experimental set of data or a known map T, using a spatial average rather than the conventional time average. Our approach involves calculating the Lyapunov exponents of a related Markov chain, with the unique invariant density of this random system providing us with an estimate of the physical invariant measure of T. Numerically computing the estimates of both the Lyapunov exponents and the physical invariant measure is a matter of solving two eigenvalue problems. A detailed application of the technique is given for the two-dimensional Hénon system.
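A minimal one-dimensional sketch of this spatial-average approach (an illustrative stand-in, not the thesis's Hénon computation): partition the interval, build a transition matrix from where sample points in each partition set land under $T$, take its invariant density as the leading eigenvector, and average $\log|T'|$ against that density. For the logistic map the Lyapunov exponent is known to be $\log 2$:

```python
import numpy as np

# Logistic map T(x) = 4x(1-x); its Lyapunov exponent is log 2.
def T(x):
    return 4.0 * x * (1.0 - x)

n = 200                      # number of partition intervals on [0,1]
k = 1000                     # sample points per interval
P = np.zeros((n, n))
for i in range(n):
    # k equally spaced points inside interval i, mapped one step forward
    pts = (i + (np.arange(k) + 0.5) / k) / n
    idx = np.minimum((T(pts) * n).astype(int), n - 1)
    np.add.at(P, (i, idx), 1.0 / k)   # row-stochastic transition matrix

# Invariant density of the induced Markov chain: leading left
# eigenvector, found here by power iteration (an eigenvalue problem).
p = np.full(n, 1.0 / n)
for _ in range(2000):
    p = p @ P
p /= p.sum()

# Lyapunov exponent as a *spatial* average of log|T'| against p.
centers = (np.arange(n) + 0.5) / n
lyap = np.sum(p * np.log(np.abs(4.0 - 8.0 * centers)))
print(lyap)  # close to log 2 ≈ 0.693
```

The two eigenvalue problems mentioned above collapse here into one because in one dimension the single Lyapunov exponent is recovered directly from the invariant density.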
In Part II of this thesis we consider the question of whether the density of our induced Markov chain is indeed approximating the "physical" invariant measure of our deterministic system. Following an idea of Ulam, the transition matrix governing our Markov chain is constructed from the one-step interactions of sets in a finite partition of phase space. In Chapter 2 it is shown that this Markov chain may be viewed as a small random perturbation of T, and that as the magnitude of these perturbations goes to zero, the limiting zero-noise measure is an invariant measure of T. Our argument in Chapter 1 was that since our approximation arises as a zero-noise limit of random perturbations of T, this limiting measure is in some sense robust with respect to small perturbations and is therefore of physical significance. We examine our invariant measure approximation in more detail, and include encouraging numerical examples for the Hénon system and a nonlinear torus map. It is then shown via counterexamples that not all limits of randomly perturbed systems are physical measures; in light of this it seems that our particular perturbation has some special properties not enjoyed by other perturbations.
Chapter 3 shows that this is indeed the case, with a proof that our approximation converges to the physical invariant measure of d-dimensional expanding maps and two-dimensional Anosov systems, provided that the partition used to generate our transition matrix is a Markov partition for T. By using Markov partition sets, the entries of the transition matrix governing our induced Markov chain take on a special meaning concerning the rate of local stretching of T. To my knowledge, this result represents the first proof that Ulam's approximation may be applied to Anosov systems to approximate physical measures.
The requirement in Chapter 3 that a Markov partition be used is rather restrictive for computer implementation of the approximation. Chapter 4 attempts to extend the result of the previous chapter to more general partitions using simple observations concerning the structure of the transition matrix. It is noted that our transition matrix is close to a special transition matrix whose invariant density produces the physical invariant measure in the limit as our partition is refined. The problem then reduces to the question of how sensitive the invariant density of this special transition matrix is to perturbations of its entries. For two classes of maps we prove that this special transition matrix is sufficiently insensitive to guarantee convergence of our approximate invariant measures to the physical invariant measure. For more general maps, we present numerical results to support our conjecture on robustness of the transition matrices.
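The sensitivity question can be made concrete on a toy example (a hypothetical 3-state matrix, not one from the thesis): perturb the entries of a stochastic matrix by a small amount, keeping the row sums at 1, and measure how far the invariant density moves. For a quickly mixing chain the movement is of the same order as the perturbation:

```python
import numpy as np

# Invariant density of a row-stochastic matrix: the left eigenvector
# for eigenvalue 1, normalised to sum to 1.
def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
eps = 0.01
E = eps * np.array([[ 1.0, -1.0,  0.0],
                    [ 0.0,  1.0, -1.0],
                    [-1.0,  0.0,  1.0]])   # each row sums to 0
pi, pi_pert = stationary(P), stationary(P + E)
print(np.abs(pi - pi_pert).sum())  # O(eps) for this well-mixed chain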
The sensitivity of the special transition matrix depends on how quickly the Markov chain approaches equilibrium; in other words, its rate of mixing. In Chapter 5 we conjecture that maps that display certain mixing properties produce transition matrices that are also strongly mixing. A comparison of the mixing rates of various model maps (and flavours of mixing) with the mixing rates of the corresponding transition matrices is made in an effort to find which particular mixing property of the map controls the mixing properties of the induced Markov chains. We conclude with a technique for linking the induced Markov chains constructed from a partition and its refinement, and put forward arguments as to why our extension of Ulam's approximation may be considered to be the best possible finite approximation of the dynamics of T.
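For a finite-state chain the rate of approach to equilibrium is governed by the second-largest eigenvalue of the transition matrix. A two-state illustration (a hypothetical chain, not one of the thesis's models) makes this exact: the total-variation distance to the stationary density shrinks by a factor $\lambda_2$ at every step:

```python
import numpy as np

# 2-state chain with eigenvalues 1 and lambda_2 = 0.5.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
pi = np.array([0.4, 0.6])            # stationary density: pi @ P == pi

lam2 = np.sort(np.real(np.linalg.eigvals(P)))[0]   # second eigenvalue

p = np.array([1.0, 0.0])             # start far from equilibrium
dists = []
for _ in range(15):
    dists.append(0.5 * np.abs(p - pi).sum())   # total-variation distance
    p = p @ P
ratios = [dists[i + 1] / dists[i] for i in range(14)]
print(lam2, ratios[:3])  # each step shrinks the distance by lambda_2
```

A large spectral gap $1-|\lambda_2|$ thus means fast mixing, which is what makes the invariant density insensitive to small perturbations of the matrix entries.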
Source: Gary Froyland (email@example.com).
The thesis is available by anonymous ftp from: ftp://mrb.niddk.nih.gov/pub/crook/diss.ps.
There has been growing interest in the mechanisms underlying the oscillatory properties of the mammalian cerebral cortex. It is now apparent that oscillations in all regions of the cortex have remarkably common temporal characteristics. However, there is much more variability in the spatial domain. For example, some cortical oscillations tend to be synchronous while others produce traveling waves of oscillations or other phase shifts which may be computationally significant. We use coupled oscillator models to examine several mechanisms which affect the dynamics of cortical oscillations. Such models use a single phase variable to approximate the voltage oscillation of each neuron during repetitive firing; the behaviour of a pair of coupled oscillators then depends critically on the interaction function chosen to represent the coupling.
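A minimal sketch of such a phase model, with $H(\phi)=\sin\phi$ as a generic stand-in for the interaction functions derived in the thesis: each neuron is reduced to a single phase variable, and the pair synchronises when the odd part of $H$ drives the phase difference to zero.

```python
import numpy as np

# Pair of phase oscillators: dtheta_i/dt = omega + K * H(theta_j - theta_i).
# H = sin is an assumed, illustrative interaction function.
omega, K, dt = 1.0, 0.5, 0.01
H = np.sin

th1, th2 = 0.0, 1.5            # initial phases
for _ in range(3000):          # Euler integration to t = 30
    d1 = omega + K * H(th2 - th1)
    d2 = omega + K * H(th1 - th2)
    th1, th2 = th1 + dt * d1, th2 + dt * d2

phi = (th2 - th1 + np.pi) % (2 * np.pi) - np.pi   # wrapped phase difference
print(phi)  # near 0: synchrony is stable for this choice of H
```

The phase difference obeys $\dot\phi = K(H(-\phi)-H(\phi))$, so for this $H$ the synchronous state $\phi=0$ is stable and the antiphase state $\phi=\pi$ is unstable; a different interaction function (e.g. one incorporating adaptation or delays, as in the thesis) can reverse this.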
We derive interaction functions that describe the coupling for pairs of pyramidal cells from olfactory cortex under a variety of conditions and show that spike frequency adaptation plays a subtle but important role in establishing synchrony in cortical networks. Then we introduce a network model consisting of a continuum of these oscillators which includes the effects of spatially decaying coupling and axonal delay. This system can undergo a bifurcation from synchronous oscillations to waves due to the axonal delay which causes distant connections to encourage a phase shift between cells. We also examine the effects of dendritic delay using a system of two coupled somatic oscillators where each is attached to a dendritic cable and the coupling occurs through synaptic connections at the ends of the cables. We derive the associated interaction function and show that the cable length affects the stability of phase-locked solutions. The synchronous solution can change from a stable solution to an unstable one as the cable lengthens and the synaptic position moves further from the soma. We address the same issue using a compartmental adapting pyramidal cell model where we find that proximal connections encourage synchrony, but distal connections and broad synaptic
Source: Sharon Crook (firstname.lastname@example.org)