 Research
 Open Access
Attractor and saddle node dynamics in heterogeneous neural fields
EPJ Nonlinear Biomedical Physics volume 2, Article number: 4 (2014)
Abstract
Background
We present analytical and numerical studies on the linear stability of spatially nonconstant stationary states in heterogeneous neural fields for specific synaptic interaction kernels.
Methods
The work presents the linear stability analysis of stationary states and the construction of nonlinear heteroclinic orbits.
Results
We find that the stationary state obeys the Hammerstein equation and that the neural field dynamics may undergo a saddle-node bifurcation. Moreover, building on this finding, our work shows how to construct heteroclinic orbits from sequences of saddle nodes on multiple hierarchical levels on the basis of Lotka-Volterra population dynamics.
Conclusions
The work represents the basis for future implementation of metastable attractor dynamics observed experimentally in neural population activity, such as Local Field Potentials and EEG.
Background
Neural field models, such as [1, 2], are continuum limits of large-scale neural networks. Typically, their dynamic variables describe either the mean voltage [2] or the mean firing rate [1, 3] of a population element of neural tissue (see [4, 5] for recent reviews).
The present article considers the paradigmatic Amari equation [2] describing the spatiotemporal dynamics of the mean potential V(x, t) over a cortical d-dimensional manifold $\Omega \subset {\mathbb{R}}^{d}$:

$$\frac{\partial V(x,t)}{\partial t} = -V(x,t) + \int_{\Omega} K(x,y)\, S[V(y,t)]\, \mathrm{d}y , \qquad (1)$$
where K(x, y) is the spatial synaptic connectivity between site y ∈ Ω and site x ∈ Ω, and S is a nonlinear, typically sigmoidal, transfer function. This model neglects external inputs for simplicity but without constraining the generality of the subsequent results. Possible synaptic time scales are supposed to be included in the kernel function K and can be introduced by a simple scaling of time.
In general, the connectivity kernel K(x, y) fully depends on both sites x ∈ Ω and y ∈ Ω, which case is referred to as spatial heterogeneity. If the connectivity solely depends on the difference between x and y, i.e. K(x, y) = K(x − y), the kernel is called spatially homogeneous [2]. Furthermore, if the connectivity depends on the distance between x and y only, i.e. K(x, y) = K(‖x − y‖), with ‖·‖ as some norm in Ω, the kernel is spatially homogeneous and isotropic [6].
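To make this classification concrete, here is a minimal sketch; the specific kernel shapes are illustrative examples only, not the kernels used later in the paper:

```python
import numpy as np

def k_heterogeneous(x, y):
    """Fully heterogeneous: depends on both sites x and y separately."""
    return np.exp(-x * y / 10.0)

def k_homogeneous(x, y):
    """Homogeneous: depends only on the difference x - y (lateral inhibition shape)."""
    d = x - y
    return np.exp(-d**2 / 2.0) - 0.5 * np.exp(-d**2 / 8.0)

def k_isotropic(x, y):
    """Homogeneous and isotropic: depends only on the distance |x - y|."""
    return np.exp(-np.abs(x - y))

# A homogeneous kernel is translation invariant; an isotropic one is also symmetric.
print(np.isclose(k_homogeneous(1.0, 2.0), k_homogeneous(4.0, 5.0)))  # True
print(np.isclose(k_isotropic(1.0, 3.0), k_isotropic(3.0, 1.0)))      # True
```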
Spatially homogeneous (respectively isotropic) kernels have been intensively studied in the literature due to their nice analytical properties. In this case, the evolution equations have exact solutions such as bumps [2, 7], breathers [8–10] or traveling waves [7, 11]. Moreover, such kernels allow the application of the technique of Green’s functions for deriving partial neural wave equations [7, 12, 13].
The present work focuses on spatially heterogeneous neural fields, which have been discussed to a much lesser extent than homogeneous neural fields in previous studies [14–22]. This study takes up these attempts by investigating stationary states of the Amari equation (1) with heterogeneous kernels and their stability. Such a theory is essential for modeling transient neurodynamics as is characteristic, e.g., of human cognitive phenomena [23], human early evoked potentials [24] or, among many other phenomena, bird songs [25].
The article is structured in the following way. In the “Results” section we present new analytical results on stationary solutions of the Amari equation (1) and their stability in the presence of heterogeneous connectivity kernels. Moreover, we present numerical simulation results for the kernel construction and its stability analysis. The “Methods” section is devoted to the construction of such kernels through dyadic products of desired stationary states (cf. the previous work of Veltz and Faugeras [26]). A subsequent linear stability analysis reveals that these stationary solutions can be either attractors or saddles, depending on the chosen parametrization. Finally, we present a way to connect such saddle state solutions via heteroclinic sequences [27, 28] in order to construct transient processes.
Results
In this section we present the main results of our study on heterogeneous neural fields.
Stationary states and their stability
Analytical study
The Amari equation (1) has a trivial solution V_{0}(x) = 0 and nontrivial solutions V_{0}(x) ≠ 0 that obey the Hammerstein integral equation [29]

$$V_{0}(x) = \int_{\Omega} K(x,y)\, S[V_{0}(y)]\, \mathrm{d}y . \qquad (2)$$
Inspired by Hebbian learning rules for the synaptic connectivity kernel K(x, y), which have found successful applications, e.g., in bidirectional associative memory [30], we consider symmetric spatially heterogeneous kernels K(x, y) = K(y, x) that can be constructed from dyadic products of the system’s nontrivial stationary states,

$$K(x,y) = V_{0}(x)\, V_{0}(y) . \qquad (3)$$
Together with Eq. (2), this choice yields the additional condition for nontrivial stationary states

$$\int_{\Omega} V_{0}(y)\, S[V_{0}(y)]\, \mathrm{d}y = 1 , \qquad (4)$$
which is a nonlinear integral equation of Fredholm type. Since 0 < S(x) < 1 for a logistic transfer function, a necessary condition for nontrivial stationary states is

$$\int_{\Omega} V_{0}(y)\, \mathrm{d}y > 1 , \qquad (5)$$
which immediately suggests a method to find a nontrivial solution numerically, as shown below.
Small deviations u(x, t) = V(x, t) − V_{0}(x) from a nontrivial stationary state V_{0}(x) obey the linear integro-differential equation

$$\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} L(x,y)\, u(y,t)\, \mathrm{d}y , \qquad (6)$$
where $L(x,y)=K(x,y){S}^{\prime}\left(y\right)$ and ${S}^{\prime}\left(y\right)=\mathrm{d}S\left[\phantom{\rule{0.3em}{0ex}}{V}_{0}\right(y\left)\right]/\mathrm{d}{V}_{0}\left(y\right)$. A linear stability analysis of (6), carried out in the next section, shows that a nontrivial stationary state V_{0}(x) is either a fixed point attractor or (neglecting a singular case) a saddle with a one-dimensional unstable manifold. Such saddles can be connected to form stable heteroclinic sequences.
Numerical study
To gain deeper insight into possible stationary solutions of the Amari equation (1) and their stability, the subsequent section presents the numerical integration of equation (1) in one spatial dimension for a specific spatial synaptic connectivity kernel.
Since previous experimental studies [31] have revealed Gaussian distributed probability densities of neuron interactions in the visual cortex of rats, it is reasonable to look for spatially discretized stationary states in the family of Gaussian functions

$$V_{0}(n\Delta x) = W_{0}\, \mathrm{e}^{-(n\Delta x)^{2}/(2\sigma^{2})} + \kappa\, \eta_{n} , \qquad (7)$$
parameterized by the amplitude W_{0}, the variance σ^{2}, the noise level κ and the spatial discretization interval Δ x.
By virtue of this parametrization of the discrete Hammerstein equation, it suffices to fit the model parameters in such a way that the Hammerstein equation holds. Figure 1(a) illustrates a noisy kernel K and (b) shows the corresponding stationary state V_{0}(x) for certain parameters.
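As an illustration of such a fit: for a noise-free dyadic product kernel built from a Gaussian profile, the discrete Hammerstein equation reduces to a scalar condition on the amplitude W_0, which bisection can solve. The domain size, sigmoid parameters and Gaussian width below are hypothetical choices, not the paper's fitted values:

```python
import numpy as np

# Discretized domain and a noise-free Gaussian profile (kappa = 0); all parameter
# values below are illustrative assumptions.
L, n = 10.0, 200
dx = L / n
x = np.arange(n) * dx
profile = np.exp(-(x - L / 2) ** 2 / (2 * 1.0 ** 2))   # unit-amplitude Gaussian, sigma = 1

def S(V, alpha=10.0, theta=0.5):
    """Logistic transfer function S(V) = 1 / (1 + exp(-alpha (V - theta)))."""
    return 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

def hammerstein_residual(W0):
    """For a dyadic kernel K(x,y) = V0(x) V0(y), stationarity reduces to the
    scalar condition dx * sum(V0 * S(V0)) = 1; return the deviation from it."""
    V0 = W0 * profile
    return dx * np.sum(V0 * S(V0)) - 1.0

# Bisection for the amplitude W0 at which the residual vanishes; the residual is
# monotone in W0, negative at W0 = 0 and positive for large W0.
a, b = 0.0, 50.0
for _ in range(80):
    m = 0.5 * (a + b)
    if hammerstein_residual(m) > 0:
        b = m
    else:
        a = m
W0 = 0.5 * (a + b)
print(W0, hammerstein_residual(W0))
```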
For each stationary state, one obtains a kernel L(x, y) of the linear stability analysis whose spectrum characterizes the stability of the system in the vicinity of the stationary state. If the eigenvalue ε_{1} with maximum real part satisfies ε_{1} > 1, then the stationary state V_{0}(x) is exponentially unstable, whereas ε_{1} < 1 guarantees exponential stability. Figure 1(b) shows the eigenmode e_{1}(x) corresponding to the eigenvalue with maximum real part, which has a shape similar to the stationary state.
Moreover, Figure 2 presents parameters for which V_{0}(x) fulfills the Hammerstein equation (2), i.e. for which V_{0}(x) is a stationary solution. We observe that some parameter sets exhibit a change of stability, i.e. the eigenvalue with maximum real part may satisfy ε_{1} > 1 or ε_{1} < 1 for certain parameter subsets.
Taking a closer look at the stability of V_{0}(x), the computation of the eigenvalues ε_{ k }, k = 1, …, n reveals a dramatic gap in the spectrum: the eigenvalue ε_{1} with maximum real part is well isolated from the rest of the spectrum $\left\{{\epsilon}_{k>1}\right\}$ with |ε_{k>1}| < 10^{-14}. This is in accordance with the discussion of Eq. (17) on the linear spectrum.
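This spectral gap follows directly from the dyadic structure: the discretized kernel L_{nm} = K_{nm} S′(V_m) is a rank-one matrix, so all but one of its eigenvalues vanish to machine precision. A minimal numerical sketch (the stationary profile and parameters are illustrative, not the paper's fitted values):

```python
import numpy as np

# Illustrative discretization and Gaussian stationary state.
n, dx = 200, 0.05
x = np.arange(n) * dx
V0 = 0.8 * np.exp(-(x - 5.0) ** 2 / 2.0)

alpha, theta = 10.0, 0.5
S = lambda V: 1.0 / (1.0 + np.exp(-alpha * (V - theta)))
dS = lambda V: alpha * S(V) * (1.0 - S(V))          # derivative of the logistic function

K = dx * np.outer(V0, V0)                           # discretized dyadic product kernel
Lmat = K * dS(V0)[np.newaxis, :]                    # L(x,y) = K(x,y) S'(y), rank one

eig = np.sort(np.abs(np.linalg.eigvals(Lmat)))[::-1]
print(eig[0])        # the single isolated eigenvalue epsilon_1
print(eig[1])        # remaining spectrum: essentially zero (machine precision)
```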
Figure 3 presents the spatiotemporal evolution of the heterogeneous neural field starting close to the stable stationary state V_{0}(x), see point (1) in Figure 2. As expected, the field activity remains in the vicinity of the stable state.
In contrast, for the system starting close to an unstable stationary state, cf. point (2) in Figure 2, the field activity moves away from V_{0}(x) and approaches a new stationary state close to but different from V_{0}(x), cf. Figure 4. This new stationary state obeys the Hammerstein equation (2).
Recalling the presence of the trivial stable solution V = 0, the activity shown in Figure 4 indicates the bistability of the system for the given parameter set.
Figure 5 further supports this bistability for the same parameter set but different initial conditions, presenting the jump from the unstable stationary state V_{0}(x) to the trivial stable stationary state V = 0. Whether the system approaches the upper or the lower stable stationary state depends on the initial condition of the simulation and is random for the random initial conditions implemented in Figures 3, 4 and 5. Hence, this example reveals the existence of a saddle-node bifurcation in heterogeneous neural fields.
Finally, we would like to stress that the analysis presented above does not depend on the smoothness of the kernel and stationary state. For a strong noise level in the synaptic interaction kernel K, the analytical discussion above still describes the stationary state and the linear stability quite well as shown in Figure 6 for a stable stationary state V_{0}(x) close to the stability threshold.
Heteroclinic orbits
The previous section has shown that heterogeneous neural fields may exhibit various stationary states with different stability properties. In particular, we found that stationary states can be saddles with one-dimensional unstable manifolds, which can be connected to stable heteroclinic sequences (SHS: [27, 28]), as supported by experimental evidence [24, 32, 33]. In the following paragraphs we present our main findings on heterogeneous neural fields exhibiting heteroclinic sequences and hierarchies of such sequences.
One-level heteroclinic sequence
It is possible to expand the integral in the Amari equation (1) into a power series, yielding

$$\frac{\partial V(x,t)}{\partial t} = -V(x,t) + \int_{\Omega} K_{1}(x,y)\, V(y,t)\, \mathrm{d}y + \int_{\Omega}\int_{\Omega} K_{2}(x,y,z)\, V(y,t)\, V(z,t)\, \mathrm{d}y\, \mathrm{d}z \qquad (8)$$

with kernels

$$K_{1}(x,y) = \sum_{k=1}^{n} (1+\sigma_{k})\, V_{k}(x)\, V_{k}^{+}(y), \qquad K_{2}(x,y,z) = -\sum_{i,k=1}^{n} \rho_{ik}\, V_{i}(x)\, V_{i}^{+}(y)\, V_{k}^{+}(z) . \qquad (9)$$
The solution of Eq. (8) represents a heteroclinic sequence that connects saddle points {V_{ k }(x)} along their respective stable and unstable manifolds. Its transient evolution is described as winnerless competition in a Lotka-Volterra population dynamics governed by interaction weights ρ_{ i k } between neural populations k and i and their respective growth rates σ_{ i }.
In Eq. (9) the $\left\{{V}_{k}^{+}\right(x\left)\right\}$ comprise a biorthogonal system of the saddles {V_{ k }(x)}. Therefore, the kernel K_{1}(x, y) describes a Hebbian synapse between sites y and x that has been trained with the pattern sequence V_{ k }. This finding confirms the previous result of [18]. Moreover, the three-point kernel K_{2}(x, y, z) further generalizes Hebbian learning to interactions between three sites x, y, z ∈ Ω. Note that the kernels K_{ i } are linear combinations of dyadic product kernels, similar to those introduced in (3). Thus, our construction of heteroclinic orbits straightforwardly results in Pincherle-Goursat kernels as used in [26].
Multilevel hierarchy of heteroclinic sequences
Now we assume that the neural field supports a hierarchy of stable heteroclinic sequences in the sense of [25, 34]. For the general case, one has to construct integral kernels for a much wider class of neural field equations, which can be written as

$$V(x,t) = \int_{-\infty}^{t}\int_{\Omega} G(t,\tau)\, K(x,y)\, S[V(y,\tau)]\, \mathrm{d}y\, \mathrm{d}\tau , \qquad (10)$$
where the new temporal kernel G describing synaptic-dendritic filtering is usually the Green’s function of a linear differential operator Q, such that G(t, τ) = G(t − τ) and the temporal integration in Eq. (10) is a temporal convolution. Equation (10) can be simplified by condensing space x and time t into spacetime s = (x, t). Then, (10) becomes

$$V(s) = \int_{M} H(s,s^{\prime})\, S[V(s^{\prime})]\, \mathrm{d}s^{\prime} \qquad (11)$$
with a tensor product kernel $H(s,{s}^{\prime})=(K\otimes G)(s,{s}^{\prime})$ and integration domain $M=\Omega \times ]-\infty ,t]$.
For only two levels in such a hierarchy, we obtain
with kernels
Here, the ${V}_{{k}_{\nu}}^{\left(\nu \right)}\left(x\right)$ denote the k_{ ν }-th saddle in the ν-th level of the hierarchy (containing n_{ ν } stationary states). Saddles are chosen again in such a way that they form a system of biorthogonal modes ${V}_{{k}_{\nu}}^{{\left(\nu \right)}^{+}}\left(x\right)$ whose behavior is determined by Lotka-Volterra dynamics with growth rates ${\sigma}_{{k}_{\nu}}^{\left(\nu \right)}>0$ and time-dependent interaction weights ${\rho}_{{k}_{\nu}{j}_{\nu}}^{\left(\nu \right)}\left(t\right)>0$, ${\rho}_{{k}_{\nu}{k}_{\nu}}^{\left(\nu \right)}\left(t\right)=1$, that are given by linear superpositions of templates ${r}_{{k}_{\nu}{j}_{\nu}{l}_{\nu +1}}^{\left(\nu \right)}$. Additionally, the τ^{(ν)} are the characteristic time scales of level ν. Levels are temporally well separated through τ^{(ν)} ≫ τ^{(ν + 1)} [35].
Interestingly, these kernels are time-independent. Since neural field equations can be written in the same form as Eq. (12), this result shows that hierarchies of Lotka-Volterra systems are included in the neural field description. Again we point out that the resulting kernels are linear combinations of dyadic products as introduced in Eq. (3); see also the work of Veltz and Faugeras [26].
Discussion
This study considers spatially heterogeneous neural fields describing the mean potential in neural populations according to the Amari equation [2]. To the best of our knowledge, this work is among the first to derive the implicit condition for stationary solutions, namely the Hammerstein integral equation, and to derive conditions for the linear stability of such stationary solutions, i.e. an analytical stability criterion subject to the properties of the heterogeneous synaptic interactions. The analytical results are complemented by numerical simulations illustrating the stability and instability of heterogeneous stationary states. We point out that the results obtained extend previous studies on both homogeneous and heterogeneous neural fields, e.g. [21].
By virtue of the heterogeneity of the model, it is possible to consider more complex spatiotemporal dynamical behavior than the dynamics close to a single stationary state. We show how to construct hierarchical heteroclinic sequences built of multiple stationary states, each exhibiting saddle-node dynamics. The work demonstrates in detail how to construct such sequences given the stationary states involved and their saddle-node dynamics. Motivated by a previous study on hierarchical heteroclinic sequences [25, 34], we constructed such sequences in heterogeneous neural fields of the Amari type. Our results indicate that such a hierarchy may be present in a single heterogeneous neural field, whereas previous studies [25, 34] considered the presence of several neural populations to describe heteroclinic sequences. The kernels obtained from heteroclinic saddle-node dynamics are linear superpositions of tensor product kernels, known as Pincherle-Goursat kernels in the literature [26].
Conclusion
The present work is strongly related to the literature hypothesizing the presence of chaotic itinerant neural activity, cf. previous work in [36, 37]. The concept of chaotic itinerancy is attractive but still lacks a realistic neural model. We admit that the present work represents just a first step toward the model analysis of sequential neural activity in heterogeneous neural systems. It opens up an avenue of future research and promises to close the gap between the rather abstract concept of sequential, i.e. temporally transient, neural activity and corresponding mathematical neural models.
Methods
Stationary states and their stability
In order to learn more about the spatiotemporal dynamics of a neural field, in general it is reasonable to determine stationary states and to study their linear stability. This already gives some insight into the nonlinear dynamics of the system. Since the dynamics depends strongly on the interaction kernel and the corresponding stationary states, it is necessary to work out conditions for the existence and uniqueness of stationary states and their stability.
Analytical study
For stationary solutions, the left hand side of Eq. (1) vanishes and we obtain the Hammerstein equation (2) [29]. It has a trivial solution V_{0}(x) = 0 and further nontrivial solutions V_{0}(x) ≠ 0 under certain conditions. The existence and number of solutions of Eq. (2) depend mainly on the operator $\mathcal{A}=\mathcal{I}-\mathcal{K}\mathcal{F}$ [38] and its monotonicity [39], with the identity operator $\mathcal{I}$, the linear integral operator

$$(\mathcal{K}u)(x) = \int_{\Omega} K(x,y)\, u(y)\, \mathrm{d}y ,$$
and the Nemytskij operator $\mathcal{Fu}\left(x\right)=S\left(u\right(x\left)\right)$. For instance, a simple criterion for the existence of at least one nontrivial solution is the symmetry and positive definiteness of the kernel K(x, y) together with the condition S(u) ≤ C_{1}u + C_{2} [29]. Moreover, previous studies have proposed analytical [40] and numerical [41] methods for finding solutions of the Hammerstein equation (2).
To illustrate the nonuniqueness of solutions of the Hammerstein equation, let us expand the stationary solution and the kernel into a set of biorthogonal spatial modes {ϕ_{ n }(x)}, {ψ_{ n }(x)},

$$V_{0}(x) = \sum_{n} V_{n}\, \phi_{n}(x), \qquad K(x,y) = \sum_{n} U_{n}\, \phi_{n}(x)\, \psi_{n}(y),$$

with constant coefficients V_{ n }, U_{ n } and

$$\int_{\Omega} \psi_{n}(x)\, \phi_{m}(x)\, \mathrm{d}x = \delta_{n,m}$$

with the Kronecker symbol δ_{n,m}. Then the Hammerstein equation recasts to

$$V_{m} = U_{m}\, f_{m}(V_{1},V_{2},\ldots) \qquad (13)$$

with

$$f_{m} = \int_{\Omega} \psi_{m}(y)\, S\!\left[\sum_{n} V_{n}\, \phi_{n}(y)\right] \mathrm{d}y .$$
Since f_{ m } is a nonlinear function of V_{ n }, Eq. (13) has multiple solutions {V_{ n }} for a given biorthogonal basis. Hence, V_{0}(x) may not be unique.
Considering small deviations u(x, t) = V(x, t)  V_{0}(x) from a nontrivial stationary state V_{0}(x), these deviations obey the linear equation (6) with $L(x,y)=K(x,y){S}^{\prime}\left(y\right)$ and ${S}^{\prime}\left(y\right)=\mathrm{d}S\left[\phantom{\rule{0.3em}{0ex}}{V}_{0}\right(y\left)\right]/\mathrm{d}{V}_{0}\left(y\right)$.
A solution of (6) is then $u(x,t)=\exp(\lambda t)\, e(x)$, $\lambda \in \mathbb{C}$, with mode e(x). Inserting this solution into Eq. (6) yields the continuous spectrum of the corresponding linear operator L(x, y), determined implicitly by the eigenvalue equation

$$(\lambda + 1)\, e(x) = \int_{\Omega} L(x,y)\, e(y)\, \mathrm{d}y .$$
Under the above assumption of a dyadic product kernel (3), the eigenfunctions {e_{ k }(x)} of the kernel L(x, y), defined through

$$\int_{\Omega} L(x,y)\, e_{k}(y)\, \mathrm{d}y = \epsilon_{k}\, e_{k}(x)$$
with eigenvalues ${\epsilon}_{k}={\lambda}_{k}+1\in \mathbb{C}$, form an orthonormal system with respect to the scalar product

$$\langle u, v \rangle_{S} = \int_{\Omega} u(x)\, v(x)\, S^{\prime}(x)\, \mathrm{d}x$$
with weight ${S}^{\prime}\left(y\right)$. This follows from the dyadic product kernel (3) through

$$\epsilon_{k}\, e_{k}(x) = \int_{\Omega} V_{0}(x)\, V_{0}(y)\, S^{\prime}(y)\, e_{k}(y)\, \mathrm{d}y = V_{0}(x)\, \langle V_{0}, e_{k} \rangle_{S} . \qquad (17)$$
From (17) we deduce two important results:

1. a certain eigenmode ${e}_{{k}_{0}}\left(x\right)$ is proportional to the stationary state V_{0}(x) with scaling factor $\u3008{V}_{0},{e}_{{k}_{0}}{\u3009}_{S}/{\epsilon}_{{k}_{0}}$, and

2. all other eigenmodes ${e}_{k\ne {k}_{0}}\left(x\right)$ are orthogonal to V_{0}(x), yielding 〈V_{0},e_{ k }〉_{ S } = ε_{ k } = 0.
Hence, for dyadic product kernels (3), the spectrum of L includes one eigenmode with eigenvalue ε_{1} ≠ 0, i.e. λ_{1} ≠ −1, while all other eigenmodes are stable with ε_{k≠1} = 0 and thus λ_{k≠1} = −1. Therefore, a nontrivial stationary state is either an asymptotically stable fixed point, i.e. an attractor, for ε_{ n } < 1 for all n, or a saddle with a one-dimensional unstable manifold for ε_{1} > 1 and ε_{n≠1} < 1 (neglecting the singular case ε_{1} = 1).
Finally, we have to justify the necessary condition (5). Inserting the dyadic product kernel (3) into the Hammerstein equation (2) and using 0 < S(x) < 1 yields

$$1 = \int_{\Omega} V_{0}(y)\, S[V_{0}(y)]\, \mathrm{d}y < \int_{\Omega} V_{0}(y)\, \mathrm{d}y ,$$

which is condition (5).
Numerical study
In order to investigate stationary states of the Amari equation numerically, we choose a finite one-dimensional spatial domain of length L and discretize it into a regular grid of n intervals with grid interval length Δ x = L / n. Then the kernel function is K(x = n Δ x, y = m Δ x) = K_{ n m } / Δ x and Eq. (1) reads

$$\frac{\mathrm{d}V_{n}(t)}{\mathrm{d}t} = -V_{n}(t) + \sum_{m} K_{nm}\, S[V_{m}(t)] ,$$
where V_{ n }(t) = V(n Δ x, t). The corresponding Hammerstein equation (2) is given by

$$V_{n} = \sum_{m} K_{nm}\, S[V_{m}] .$$
Taking into account the insight from the discussion of Eq. (3) and its consequences for the eigenmodes, we have chosen the spatial kernel as the discretized dyadic product K_{ n m } = Δ x V_{0}(n Δ x) V_{0}(m Δ x).
We employ an Euler-forward integration scheme for the temporal evolution with discrete time step Δ t = 0.05. We render the stationary state random by adding noise terms η_{ i }, which are random numbers drawn from a normal distribution with zero mean and unit variance. We point out that we choose the η_{ i } such that $\left|\sum _{i=1}^{n}{\eta}_{i}\right|<0.05$, so that the noise merely modulates the dynamics without increasing their amplitude. The sigmoid function is chosen as $S\left(V\right)=1/(1+\exp(-\alpha (V-\theta )))$, parameterized by the slope parameter α and the mean threshold θ.
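The integration scheme just described can be sketched as follows (Δt = 0.05 as in the text; the grid, sigmoid and noise values are illustrative, and the dyadic kernel is normalized so that the chosen noisy profile satisfies the discrete Hammerstein equation exactly, a shortcut standing in for the parameter fit described earlier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial grid and parameters (illustrative values except dt, which follows the text).
L, n = 10.0, 100
dx = L / n
x = np.arange(n) * dx
dt = 0.05

alpha, theta = 10.0, 0.5
S = lambda V: 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

# Noisy Gaussian stationary profile; subtracting the mean keeps |sum(eta)| < 0.05.
eta = rng.standard_normal(n)
eta -= eta.mean()
V0 = np.exp(-(x - L / 2) ** 2 / 2.0) + 0.02 * eta

# Dyadic kernel, normalized so that V0 satisfies the discrete Hammerstein equation
# V0 = W @ S(V0) exactly (an illustrative shortcut to a fitted kernel).
W = np.outer(V0, V0) / np.dot(V0, S(V0))

V = V0.copy()
for _ in range(2000):
    V = V + dt * (-V + W @ S(V))     # Euler-forward step of the discretized Amari equation

print(np.max(np.abs(V - V0)))        # the field stays at the stationary state
```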
Heteroclinic orbits
The general neural field equation (10) supplies the Amari equation (1) as a special case for G(t) = e^{−t} Θ(t), with Θ(t) as Heaviside’s step function. Then Q = ∂_{ t } + 1 is the Amari operator corresponding to the Green’s function G(t). For second-order synaptic dynamics [42] and for the filter properties of complex dendritic trees [43], more complicated kernels or differential operators, respectively, have to be taken into account.
As a first step toward constructing heteroclinic sequences for (1) or (10), we expand the nonlinear transfer function S in Eq. (11) into a power series about a certain state $\stackrel{\u0304}{V}\left(s\right)$ [15],

$$S[V(s)] = \sum_{m=0}^{\infty} \frac{S^{(m)}[\bar{V}(s)]}{m!}\, \bigl(V(s) - \bar{V}(s)\bigr)^{m} .$$
Inserting V(s) from Eq. (11) yields
By eliminating the nonlinearities S[ V(s_{ j })] using the power series above again, we get
Inserting this expression into Eq. (11) leads to a generalized Volterra series

$$V(s) = H_{0}(s) + \int_{M} H_{1}(s,s_{1})\, V(s_{1})\, \mathrm{d}s_{1} + \int_{M}\int_{M} H_{2}(s,s_{1},s_{2})\, V(s_{1})\, V(s_{2})\, \mathrm{d}s_{1}\, \mathrm{d}s_{2} + \cdots \qquad (21)$$
with a sequence of integral kernels H_{ m } that can be read off after some further tedious rearrangements [44].
One-level hierarchy
In a previous work, we have derived a one-level hierarchy of stable heteroclinic sequences [44], which we briefly recapitulate here. We assume a family of n stationary states V_{ k }(x), 1 ≤ k ≤ n, that are to be connected along a heteroclinic sequence. Each state is assumed to be realized by a population in the neural field governed by (10) that is characterized by a population activity α_{ k }(t) ∈ [0, 1]. Then the overall field quantity is obtained through an order parameter expansion [15, 45]

$$V(x,t) = \sum_{k=1}^{n} \alpha_{k}(t)\, V_{k}(x) . \qquad (22)$$
These population amplitudes result from a winnerless competition in a generalized Lotka-Volterra system approximating a Wilson-Cowan model [46, 47],

$$\dot{\alpha}_{k} = \alpha_{k}\left(\sigma_{k} - \sum_{j=1}^{n} \rho_{kj}\, \alpha_{j}\right) , \qquad (23)$$
with growth rates σ_{ k } > 0, and interaction weights ρ_{ k j } > 0, ρ_{ k k } = 1, that are trained by the algorithm of [27] and [28] for the desired sequence of transitions.
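As an illustration, such winnerless competition can be simulated directly. The growth rates and the May-Leonard-type interaction weights below are hypothetical example values satisfying ρ_{kk} = 1, not weights trained by the algorithms of [27, 28]:

```python
import numpy as np

# Generalized Lotka-Volterra system: d(alpha_k)/dt = alpha_k (sigma_k - sum_j rho_kj alpha_j).
sigma = np.array([1.0, 1.0, 1.0])          # growth rates sigma_k > 0
rho = np.array([[1.0, 2.0, 0.5],           # interaction weights, rho_kk = 1; this
                [0.5, 1.0, 2.0],           # asymmetric (May-Leonard-type) choice
                [2.0, 0.5, 1.0]])          # produces the sequence 1 -> 2 -> 3

dt, T = 0.01, 80.0
alpha = np.array([1.0, 0.01, 0.001])       # start near the first saddle
winners = []
for _ in range(int(T / dt)):
    alpha = alpha + dt * alpha * (sigma - rho @ alpha)   # Euler step
    winners.append(int(np.argmax(alpha)))

# The dominant population switches sequentially through all saddles.
print(sorted(set(winners)))                # [0, 1, 2]
```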
For the following construction, we also assume that the system of stationary states {V_{ k }(x)} is linearly independent, such that there is a biorthogonal system $\left\{{V}_{k}^{+}\right(x\left)\right\}$, obeying

$$\int_{\Omega} V_{k}^{+}(x)\, V_{j}(x)\, \mathrm{d}x = \delta_{kj} . \qquad (24)$$
Then, we obtain from (22)

$$\alpha_{k}(t) = \int_{\Omega} V_{k}^{+}(x)\, V(x,t)\, \mathrm{d}x \qquad (25)$$
and hence
Next, we take the derivative of (22) with respect to time t, exploiting (23)
Adding (22), we obtain
from which we eliminate all occurrences of ξ by means of (25). This yields
Moreover, the nonlinear spatial integral transformation in the Amari equation (1) may be written as a generalized Volterra series (21)
Comparison of the first three terms with (28) yields the result (9).
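For discretized modes, the biorthogonal system assumed throughout this derivation can be computed numerically from the Moore-Penrose pseudoinverse of the mode matrix; the Gaussian modes below are illustrative examples:

```python
import numpy as np

# Discretized domain and three linearly independent (illustrative) Gaussian modes V_k.
n, dx = 200, 0.05
x = np.arange(n) * dx
centers = [2.5, 5.0, 7.5]
V = np.stack([np.exp(-(x - c) ** 2 / (2 * 0.6 ** 2)) for c in centers])  # shape (3, n)

# Biorthogonal system V_k^+ with dx * sum_x V_k^+(x) V_j(x) = delta_kj,
# obtained from the pseudoinverse of the mode matrix.
Vplus = np.linalg.pinv(V.T) / dx               # shape (3, n)

gram = dx * Vplus @ V.T                        # should be the 3x3 identity
print(np.allclose(gram, np.eye(3), atol=1e-8)) # True
```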
Multilevel hierarchy of heteroclinic sequences
Here we assume that the SHS are given by (possibly infinitely many) generalized Lotka-Volterra systems

$$\tau^{(\nu)}\, \dot{\alpha}_{k_{\nu}}^{(\nu)} = \alpha_{k_{\nu}}^{(\nu)}\left(\sigma_{k_{\nu}}^{(\nu)} - \sum_{j_{\nu}=1}^{n_{\nu}} \rho_{k_{\nu}j_{\nu}}^{(\nu)}(t)\, \alpha_{j_{\nu}}^{(\nu)}\right) \qquad (30)$$
with growth rates ${\sigma}_{{k}_{\nu}}^{\left(\nu \right)}>0$ and time-dependent interaction weights ${\rho}_{{k}_{\nu}{j}_{\nu}}^{\left(\nu \right)}\left(t\right)>0$, ${\rho}_{{k}_{\nu}{k}_{\nu}}^{\left(\nu \right)}\left(t\right)=1$. Again, $\nu \in \mathbb{N}$ indicates the level within the hierarchy, n_{ ν } is the number of stationary states, and τ^{(ν)} represents the characteristic time scale of that level. Levels are temporally well separated through τ^{(ν)} ≫ τ^{(ν + 1)} [35].
Following [34], the population amplitudes ${\alpha}_{{k}_{\nu}}^{\left(\nu \right)}\left(t\right)$ of level ν prescribe the control parameters of the higher level ν − 1 by virtue of

$$\rho_{k_{\nu}j_{\nu}}^{(\nu)}(t) = \sum_{l_{\nu+1}=1}^{n_{\nu+1}} r_{k_{\nu}j_{\nu}l_{\nu+1}}^{(\nu)}\, \alpha_{l_{\nu+1}}^{(\nu+1)}(t) , \qquad (31)$$
where the constants ${r}_{{k}_{\nu}{j}_{\nu}{l}_{\nu +1}}^{\left(\nu \right)}$ serve as parameter templates.
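A toy sketch of such a hierarchy, in which the faster level's amplitudes mix hypothetical weight templates for the slower level; all numerical values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Two-level Lotka-Volterra hierarchy (illustrative sketch): the fast level runs a
# winnerless-competition cycle, and its amplitudes mix weight templates for the
# slow level, as described in the text.

rho_fast = np.array([[1.0, 2.0, 0.5],
                     [0.5, 1.0, 2.0],
                     [2.0, 0.5, 1.0]])     # fast level: May-Leonard-type cycle

# One slow-level weight template per fast population; diagonals are 1 so that the
# mixed weights keep rho_kk = 1.
templates = [np.array([[1.0, 2.0, 0.5], [0.5, 1.0, 2.0], [2.0, 0.5, 1.0]]),
             np.array([[1.0, 0.5, 2.0], [2.0, 1.0, 0.5], [0.5, 2.0, 1.0]]),
             np.array([[1.0, 1.5, 1.5], [1.5, 1.0, 1.5], [1.5, 1.5, 1.0]])]

tau_slow, tau_fast = 20.0, 1.0             # time-scale separation tau_slow >> tau_fast
dt, steps = 0.01, 20000
a_fast = np.array([1.0, 0.01, 0.001])
a_slow = np.array([1.0, 0.01, 0.001])
fast_winners = set()

for _ in range(steps):
    a_fast = a_fast + (dt / tau_fast) * a_fast * (1.0 - rho_fast @ a_fast)
    fast_winners.add(int(np.argmax(a_fast)))
    # Slow-level weights: fast-amplitude-weighted mixture of the templates.
    rho_slow = sum(a * r for a, r in zip(a_fast, templates)) / max(a_fast.sum(), 1e-12)
    a_slow = a_slow + (dt / tau_slow) * a_slow * (1.0 - rho_slow @ a_slow)

print(sorted(fast_winners))                # the fast level visits all three saddles
```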
Finally, the amplitudes ${\alpha}_{{k}_{\nu}}^{\left(\nu \right)}\left(t\right)$ recruit a hierarchy of modes ${V}_{{k}_{\nu}}^{\left(\nu \right)}\left(x\right)$ in the neural field such that

$$V(x,t) = \sum_{\nu} \sum_{k_{\nu}=1}^{n_{\nu}} \alpha_{k_{\nu}}^{(\nu)}(t)\, V_{k_{\nu}}^{(\nu)}(x) . \qquad (32)$$
Assuming a system of biorthogonal modes ${V}_{{k}_{\nu}}^{{\left(\nu \right)}^{+}}\left(x\right)$ with

$$\int_{\Omega} V_{k_{\nu}}^{(\nu)+}(x)\, V_{j_{\mu}}^{(\mu)}(x)\, \mathrm{d}x = \delta_{\nu\mu}\, \delta_{k_{\nu}j_{\mu}} , \qquad (33)$$
we obtain from (32)

$$\alpha_{k_{\nu}}^{(\nu)}(t) = \int_{\Omega} V_{k_{\nu}}^{(\nu)+}(x)\, V(x,t)\, \mathrm{d}x . \qquad (34)$$
Next we convert the LotkaVolterra differential equations (30) into their corresponding integral equations by formally integrating
and obtain a recursion equation after inserting (31)
Eventually, we insert all results of the recursion of (36) into the field equation (32) and eliminate the amplitudes ${\alpha}_{{k}_{\nu}}^{\left(\nu \right)}\left(t\right)$ by means of (34) in order to get a series expansion of V(s) in terms of spatiotemporal integrals. This series must equal the generalized Volterra series (21), such that the kernel H(s,s^{′}) can be reconstructed to solve the neural field inverse problem [44]. For more mathematical details on the derivation of the two-level hierarchy, see Additional file 1.
References
 1.
Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13: 55–80. 10.1007/BF00288786
 2.
Amari SI: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27: 77–87. 10.1007/BF00337259
 3.
Jancke D, Erlhagen W, Dinse HR, Akhavan AC, Giese M, Steinhage A, Schöner G: Parametric population representation of retinal location: neuronal interaction dynamics in cat primary visual cortex. J Neurosci 1999, 19(20):9016–9028. [http://www.jneurosci.org/content/19/20/9016.abstract]
 4.
Bressloff PC: Spatiotemporal dynamics of continuum neural fields. J Phys A 2012, 45(3):033001. [http://stacks.iop.org/1751–8121/45/i=3/a=033001] 10.1088/1751-8113/45/3/033001
 5.
Coombes S: Large-scale neural dynamics: simple and complex. NeuroImage 2010, 52(3):731–739. 10.1016/j.neuroimage.2010.01.045
 6.
Coombes S, Venkov NA, Shiau L, Bojak I, Liley DTJ, Laing CR: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys Rev E 2007, 76(5):051901.
 7.
Coombes S, Lord G, Owen M: Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Physica D 2003, 178: 219–241. 10.1016/S0167-2789(03)00002-2
 8.
Folias S, Bressloff P: Stimulus-locked waves and breathers in an excitatory neural network. SIAM J Appl Math 2005, 65: 2067–2092. 10.1137/040615171
 9.
Hutt A, Rougier N: Activity spread and breathers induced by finite transmission speeds in two-dimensional neural fields. Phys Rev E 2010, 82: R055701.
 10.
Coombes S, Owen M: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett 2005, 94: 148102.
 11.
Ermentrout GB, McLeod JB: Existence and uniqueness of travelling waves for a neural network. Proc R Soc E 1993, 123A: 461–478.
 12.
Jirsa VK, Haken H: Field theory of electromagnetic brain activity. Phys Rev Lett 1996, 77(5):960–963. 10.1103/PhysRevLett.77.960
 13.
Hutt A: Generalization of the reaction-diffusion, Swift-Hohenberg, and Kuramoto-Sivashinsky equations and effects of finite propagation speeds. Phys Rev E 2007, 75: 026214.
 14.
Bressloff PC: Traveling fronts and wave propagation failure in an inhomogeneous neural network. Physica D 2001, 155: 83–100. 10.1016/S0167-2789(01)00266-4
 15.
Jirsa VK, Kelso JAS: Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Phys Rev E 2000, 62(6):8462–8465. 10.1103/PhysRevE.62.8462
 16.
Kilpatrick ZP, Folias SE, Bressloff PC: Traveling pulses and wave propagation failure in inhomogeneous neural media. SIAM J Appl Dynamical Syst 2008, 7: 161–185. 10.1137/070699214
 17.
Schmidt H, Hutt A, SchimanskyGeier L: Wave fronts in inhomogeneous neural field models. Physica D 2009, 238(14):1101–1112. 10.1016/j.physd.2009.02.017
 18.
Potthast R, beim Graben P: Inverse problems in neural field theory. SIAM J Appl Dynamical Syst 2009, 8(4):1405–1433. 10.1137/080731220
 19.
Potthast R, beim Graben P: Existence and properties of solutions for neural field equations. Math Methods Appl Sci 2010, 33(8):935–949.
 20.
Coombes S, Laing C, Schmidt H, Svanstedt N, Wyller J: Waves in random neural media. Discrete Contin Dyn Syst A 2012, 32: 2951–2970.
 21.
Coombes S, Laing C: Pulsating fronts in periodically modulated neural field models. Phys Rev E 2011, 83: 011912.
 22.
Brackley C, Turner M: Persistent fluctuations of activity in undriven continuum neural field models with power-law connections. Phys Rev E 2009, 79: 011918.
 23.
beim Graben P, Potthast R: Inverse problems in dynamic cognitive modeling. Chaos 2009, 19: 015103. 10.1063/1.3097067
 24.
Hutt A, Riedel H: Analysis and modeling of quasi-stationary multivariate time series and their application to middle latency auditory evoked potentials. Physica D 2003, 177(1–4):203–232.
 25.
Yildiz I, Kiebel SJ: A hierarchical neuronal model for generation and online recognition of birdsongs. PLoS Comput Biol 2011, 7(12):e1002303. 10.1371/journal.pcbi.1002303
 26.
Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dynamical Syst 2010, 9: 954–998. 10.1137/090773611
 27.
Afraimovich VS, Zhigulin VP, Rabinovich MI: On the origin of reproducible sequential activity in neural circuits. Chaos 2004, 14(4):1123–1129. 10.1063/1.1819625
 28.
Rabinovich MI, Huerta R, Varona P, Afraimovich VS: Transient cognitive dynamics, metastability, and decision making. PLoS Comput Biol 2008, 4(5):e1000072. 10.1371/journal.pcbi.1000072
 29.
Hammerstein A: Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math 1930, 54: 117–176. 10.1007/BF02547519
 30.
Kosko B: Bidirectional associative memories. IEEE Trans Syst Man Cybernet 1988, 18: 49–60. 10.1109/21.87054
 31.
Hellwig B: A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybernet 2000, 82: 111–121.
 32.
Mazor O, Laurent G: Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron 2005, 48(4):661–673. 10.1016/j.neuron.2005.09.032
 33.
Rabinovich MI, Huerta R, Laurent G: Transient dynamics for neural processing. Science 2008, 321(5885):48–50. 10.1126/science.1155564
 34.
Kiebel SJ, von Kriegstein K, Daunizeau J, Friston KJ: Recognizing sequences of sequences. PLoS Comput Biol 2009, 5(8):e1000464. 10.1371/journal.pcbi.1000464
 35.
Desroches M, Guckenheimer J, Krauskopf B, Kuehn C, Osinga H, Wechselberger M: Mixed-mode oscillations with multiple time scales. SIAM Rev 2012, 54(2):211–288. [http://epubs.siam.org/doi/abs/10.1137/100791233] 10.1137/100791233
 36.
Tsuda I: Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 2001, 24(5):793–847. 10.1017/S0140525X01000097
 37.
Freeman W: Evidence from human scalp EEG of global chaotic itinerancy. Chaos 2003, 13(3):1069.
 38.
Appell J, Chen CJ: How to solve Hammerstein equations. J Integr Equat Appl 2006, 18(3):287–296. 10.1216/jiea/1181075392
 39.
Banas J: Integrable solutions of Hammerstein and Urysohn integral equations. J Austral Math Soc (Series A) 1989, 46: 61–68. 10.1017/S1446788700030378
 40.
Lakestani M, Razzaghi M, Dehghan M: Solution of nonlinear Fredholm-Hammerstein integral equations by using semi-orthogonal spline wavelets. Math Problems Eng 2005, 113–121.
 41.
Djitte N, Sene M: An iterative algorithm for approximating solutions of Hammerstein integral equations. Numerical Funct Anal Optimization 2013, 34(12):1299–1316. 10.1080/01630563.2013.812111
 42.
Hutt A, Longtin A: Effects of the anesthetic agent propofol on neural populations. Cogn Neurodyn 2010, 4: 37–59.
 43.
Bressloff PC, Coombes S: Physics of the extended neuron. Int J Mod Phys B 1997, 11(20):2343–2392.
 44.
beim Graben P, Potthast R: A dynamic field account to languagerelated brain potentials. In Principles of Brain Dynamics: Global State Interactions. Edited by: Rabinovich M, Friston K, Varona P. Cambridge (MA): MIT Press; 2012:93–112.
 45.
Haken H: Synergetics. An Introduction. Volume 1 of Springer Series in Synergetics. Berlin: Springer; 1983. [1st edition 1977]
 46.
Fukai T, Tanaka S: A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comp 1997, 9: 77–97. [http://www.mitpressjournals.org/doi/abs/10.1162/neco.1997.9.1.77] 10.1162/neco.1997.9.1.77
 47.
Wilson H, Cowan J: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 1972, 12: 1–24.
Acknowledgments
This research has been supported by the European Union’s Seventh Framework Programme (FP7/2007-2013) ERC grant agreement No. 257253 awarded to AH, hosting PbG during fall 2013 in Nancy, and by a Heisenberg fellowship (GR 3711/1-2) of the German Research Foundation (DFG) awarded to PbG.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors have developed the approach of the work. AH has worked out the analytical and numerical stability analysis part, while PbG has developed analytically the hierarchy of heteroclinic sequences. Both authors have written the manuscript. Both authors read and approved the final manuscript.
Electronic supplementary material
Cite this article
Graben, P.b., Hutt, A. Attractor and saddle node dynamics in heterogeneous neural fields. EPJ Nonlinear Biomed Phys 2, 4 (2014). https://doi.org/10.1140/epjnbp17
Keywords
 Chaotic itinerancy
 Linear stability
 Heteroclinic orbits
 Lotka-Volterra model