 Research
 Open Access
Attractor and saddle node dynamics in heterogeneous neural fields
 Peter beim Graben^{1, 2} and
 Axel Hutt^{3}
https://doi.org/10.1140/epjnbp17
© beim Graben and Hutt; licensee Springer. 2014
 Received: 21 November 2013
 Accepted: 26 March 2014
 Published: 9 May 2014
Abstract
Background
We present analytical and numerical studies on the linear stability of spatially nonconstant stationary states in heterogeneous neural fields for specific synaptic interaction kernels.
Methods
The work presents the linear stability analysis of stationary states and the implementation of a nonlinear heteroclinic orbit.
Results
We find that the stationary state obeys the Hammerstein equation and that the neural field dynamics may undergo a saddle-node bifurcation. Moreover, our work takes up this finding and shows how to construct heteroclinic orbits built on a sequence of saddle nodes on multiple hierarchical levels on the basis of a Lotka-Volterra population dynamics.
Conclusions
The work represents the basis for future implementation of metastable attractor dynamics observed experimentally in neural population activity, such as Local Field Potentials and EEG.
Keywords
 Chaotic itinerancy
 Linear stability
 Heteroclinic orbits
 Lotka-Volterra model
Background
Neural field models, such as [1, 2], are continuum limits of large-scale neural networks. Typically, their dynamic variables describe either the mean voltage [2] or the mean firing rate [1, 3] of a population element of neural tissue (see [4, 5] for recent reviews).
$\frac{\partial V(x,t)}{\partial t}=-V(x,t)+{\int}_{\Omega}K(x,y)\phantom{\rule{0.3em}{0ex}}S\left(V(y,t)\right)\phantom{\rule{0.3em}{0ex}}\mathrm{d}y\phantom{\rule{2em}{0ex}}(1)$

where K(x, y) is the spatial synaptic connectivity between site y ∈ Ω and site x ∈ Ω, and S is a nonlinear, typically sigmoidal, transfer function. This model neglects external inputs for simplicity but without constraining the generality of the subsequent results. Possible synaptic time scales are supposed to be included in the kernel function K and can be introduced by a simple scaling of time.
In general, the connectivity kernel K(x, y) fully depends on both sites x and y; this case is referred to as spatial heterogeneity. If the connectivity solely depends on the difference between x and y, i.e. K(x, y) = K(x − y), the kernel is called spatially homogeneous [2]. Furthermore, if the connectivity depends on the distance between x and y only, i.e. K(x, y) = K(‖x − y‖), with ‖·‖ some norm on Ω, the kernel is spatially homogeneous and isotropic [6].
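This distinction is easy to check on a discretized one-dimensional domain. A minimal sketch (grid size, kernel width and the profiles V0, W are illustrative choices, not taken from the paper):

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)              # discretized domain Omega = [0, 1]
X, Y = np.meshgrid(x, x, indexing="ij")

# Homogeneous (and isotropic) kernel: depends only on the distance |x - y|
K_hom = np.exp(-(X - Y) ** 2 / 0.02)

# Heterogeneous kernel: full dependence on both sites, here a dyadic
# product K(x, y) = V0(x) W(y) built from two illustrative profiles
V0 = np.sin(np.pi * x)
W = np.cos(np.pi * x)
K_het = np.outer(V0, W)

# A homogeneous kernel matrix is constant along each diagonal x - y = const
print(np.allclose(np.diag(K_hom, 1), K_hom[1, 2]))  # True
print(np.allclose(np.diag(K_het, 1), K_het[1, 2]))  # False
```

On a uniform grid, homogeneity shows up as a Toeplitz structure of the kernel matrix, while a heterogeneous (here dyadic) kernel has no such symmetry.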
Spatially homogeneous (respectively isotropic) kernels have been intensively studied in the literature due to their nice analytical properties. In this case, the evolution equations have exact solutions such as bumps [2, 7], breathers [8–10] or traveling waves [7, 11]. Moreover, such kernels allow the application of the technique of Green’s functions for deriving partial neural wave equations [7, 12, 13].
The present work focuses on spatially heterogeneous neural fields, which have received much less attention in previous studies than homogeneous neural fields [14–22]. This study takes up these attempts by investigating stationary states of the Amari equation (1) with heterogeneous kernels and their stability. Such a theory is mandatory for modeling transient neurodynamics as is characteristic, e.g., of human cognitive phenomena [23], human early evoked potentials [24] or, among many other phenomena, of bird songs [25].
The article is structured in the following way. In the “Results” section we present new analytical results on stationary solutions of the Amari equation (1) and their stability in the presence of heterogeneous connectivity kernels, together with numerical simulation results for the kernel construction and its stability analysis. The “Methods” section is devoted to the construction of such kernels through dyadic products of desired stationary states (cf. the previous work of Veltz and Faugeras [26]). A subsequent linear stability analysis reveals that these stationary solutions can be either attractors or saddles, depending on the chosen parametrization. Finally, we present a way to connect such saddle solutions via heteroclinic sequences [27, 28] in order to construct transient processes.
Results
In this section we present the main results of our study on heterogeneous neural fields.
Stationary states and their stability
Analytical study
which indicates immediately a method to find a nontrivial solution numerically as shown below.
$\frac{\partial u(x,t)}{\partial t}=-u(x,t)+{\int}_{\Omega}L(x,y)\phantom{\rule{0.3em}{0ex}}u(y,t)\phantom{\rule{0.3em}{0ex}}\mathrm{d}y\phantom{\rule{2em}{0ex}}(6)$

where $L(x,y)=K(x,y)S^{\prime}(y)$ and $S^{\prime}(y)=\mathrm{d}S[V_{0}(y)]/\mathrm{d}V_{0}(y)$. A linear stability analysis of (6), carried out in the next section, shows that a nontrivial stationary state V_{0}(x) is either a fixed point attractor or (neglecting a singular case) a saddle with a one-dimensional unstable manifold. Such saddles can be connected to form stable heteroclinic sequences.
Numerical study
To gain deeper insight into possible stationary solutions of the Amari equation (1) and their stability, the subsequent section presents the numerical integration of equation (1) in one spatial dimension for a specific spatial synaptic connectivity kernel.
parameterized by the amplitude W_{0}, the variance σ^{2}, the noise level κ and the spatial discretization interval Δ x.
For each stationary state, one obtains a kernel L(x, y) of the linear stability analysis whose spectrum characterizes the stability of the system in the vicinity of the stationary state. If the eigenvalue with maximum real part satisfies ε_{1} > 1, the stationary state V_{0}(x) is exponentially unstable, whereas ε_{1} < 1 guarantees exponential stability. Figure 1(b) shows the eigenmode e_{1}(x) corresponding to the eigenvalue with maximum real part; its shape is similar to that of the stationary state.
Taking a closer look at the stability of V_{0}(x), the computation of the eigenvalues ε_{ k }, k = 1, …, n reveals a dramatic gap in the spectrum: the eigenvalue with maximum real part ε_{1} is well isolated from the rest of the spectrum $\{\epsilon_{k>1}\}$ with ε_{k>1} < 10^{−14}. This is in accordance with the discussion of Eq. (17) on the linear spectrum.
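The spectral gap can be reproduced for any dyadic product kernel. The following sketch (kernel, sigmoid parameters and grid are assumptions for illustration, not the paper's values) discretizes L(x, y) = K(x, y)S′(y) and computes its spectrum:

```python
import numpy as np

n = 200
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
alpha, theta = 4.0, 0.5

def S(V):
    """Sigmoidal transfer function."""
    return 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

def dS(V):
    """Its derivative S'(V)."""
    s = S(V)
    return alpha * s * (1.0 - s)

# Illustrative dyadic kernel K(x, y) = V0(x) W(y) and stationary-like state V0
V0 = np.exp(-(x - 5.0) ** 2)
W = np.exp(-(x - 5.0) ** 2)
K = np.outer(V0, W)

# Discretized kernel of the linearization, L(x, y) = K(x, y) S'(V0(y));
# dx acts as the quadrature weight of the integral operator
L = K * dS(V0)[None, :] * dx

eps = np.linalg.eigvals(L)
eps = eps[np.argsort(-eps.real)]
print(eps[0].real)               # one isolated nonzero eigenvalue
print(np.max(np.abs(eps[1:])))   # the rest vanish: L is a rank-1 operator
```

Because the dyadic kernel makes L a rank-1 operator, only one eigenvalue is nonzero up to numerical round-off, which is exactly the isolated-eigenvalue structure described above.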
Recalling the presence of the trivial stable solution V = 0, the activity shown in Figure 4 indicates the bistability of the system for the given parameter set.
Heteroclinic orbits
The previous section has shown that heterogeneous neural fields may exhibit various stationary states with different stability properties. In particular, we found that stationary states can be saddles with one-dimensional unstable manifolds that can be connected to form stable heteroclinic sequences (SHS; [27, 28]), which is supported by experimental evidence [24, 32, 33]. In the following paragraphs we present our main findings on heterogeneous neural fields exhibiting heteroclinic sequences and also hierarchies of such sequences.
One-level heteroclinic sequence
The solution of Eq. (8) represents a heteroclinic sequence that connects saddle points {V_{ k }(x)} along their respective stable and unstable manifolds. Its transient evolution is described as winnerless competition in a Lotka-Volterra population dynamics governed by interaction weights ρ_{ i k } between neural populations k and i and their respective growth rates σ_{ i }.
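A minimal simulation of such a winnerless competition (growth rates and interaction weights below are illustrative, not the trained values of [27, 28]):

```python
import numpy as np

# Generalized Lotka-Volterra dynamics da_k/dt = a_k (sigma_k - sum_j rho_kj a_j)
# with rho_kk = 1; the asymmetric weights below produce winnerless competition:
# the state visits the saddles one after another.
sigma = np.array([1.0, 1.0, 1.0])
rho = np.array([[1.0, 0.55, 2.0],
                [2.0, 1.0, 0.55],
                [0.55, 2.0, 1.0]])

a = np.array([0.9, 0.05, 0.05])   # start close to the first saddle
dt, T = 0.01, 100.0
traj = [a.copy()]
for _ in range(int(T / dt)):
    a = a + dt * a * (sigma - rho @ a)
    traj.append(a.copy())
traj = np.array(traj)

# The momentarily dominating population changes sequentially over time
winners = np.argmax(traj, axis=1)
switches = [int(w) for i, w in enumerate(winners) if i == 0 or winners[i - 1] != w]
print(switches)
```

The asymmetry of ρ (each population suppresses one neighbor strongly, ρ > 1, and the other weakly, ρ < 1) is what turns the saddles into a sequence rather than a winner-take-all equilibrium.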
In Eq. (9) the $\{V_{k}^{+}(x)\}$ comprise a biorthogonal system of the saddles {V_{ k }(x)}. Therefore, the kernel K_{1}(x, y) describes a Hebbian synapse between sites y and x that has been trained with the pattern sequence V_{ k }. This finding confirms the previous result of [18]. Moreover, the three-point kernel K_{2}(x, y, z) further generalizes Hebbian learning to interactions between three sites x, y, z ∈ Ω. Note that the kernels K_{ i } are linear combinations of dyadic product kernels, similar to those introduced in (3). Thus, our construction of heteroclinic orbits straightforwardly results in Pincherle-Goursat kernels as used in [26].
Multilevel hierarchy of heteroclinic sequences
with a tensor product kernel $H(s,{s}^{\prime})=(K\otimes G)(s,{s}^{\prime})$ and integration domain $M=\Omega \times\; ]-\infty ,t]$.
Here, the ${V}_{{k}_{\nu}}^{(\nu)}(x)$ denote the k_{ ν }th saddle in the νth level of the hierarchy (containing n_{ ν } stationary states). Saddles are again chosen in such a way that they form a system of biorthogonal modes ${V}_{{k}_{\nu}}^{{(\nu)}^{+}}(x)$, whose behavior is determined by Lotka-Volterra dynamics with growth rates ${\sigma}_{{k}_{\nu}}^{(\nu)}>0$ and time-dependent interaction weights ${\rho}_{{k}_{\nu}{j}_{\nu}}^{(\nu)}(t)>0$, ${\rho}_{{k}_{\nu}{k}_{\nu}}^{(\nu)}(t)=1$, that are given by linear superpositions of templates ${r}_{{k}_{\nu}{j}_{\nu}{l}_{\nu +1}}^{(\nu)}$. Additionally, τ^{(ν)} are the characteristic time scales of level ν. Levels are temporally well separated through τ^{(ν)} ≫ τ^{(ν + 1)} [35].
Interestingly, these kernels are time-independent. Since neural field equations can be written in the same form as Eq. (12), this result shows that hierarchies of Lotka-Volterra systems are included in the neural field description. Again we point out that the resulting kernels are linear combinations of dyadic products as introduced in Eq. (3); see also the work of Veltz and Faugeras [26].
Discussion
This study considers spatially heterogeneous neural fields describing the mean potential in neural populations according to the Amari equation [2]. To the best of our knowledge, this work is among the first to derive the implicit conditions for stationary solutions, identifying the corresponding stationarity condition as the Hammerstein integral equation, and to derive an analytical expression for the linear stability of such stationary solutions subject to the properties of the heterogeneous synaptic interaction. The analytical results are complemented by numerical simulations illustrating the stability and instability of heterogeneous stationary states. We point out that the results obtained extend previous studies on both homogeneous and heterogeneous neural fields [21].
By virtue of the heterogeneity of the model, it is possible to consider more complex spatiotemporal dynamical behavior than the dynamics close to a single stationary state. We show how to construct hierarchical heteroclinic sequences built from multiple stationary states, each exhibiting saddle-node dynamics, and demonstrate in detail how to construct such sequences given the stationary states and their saddle-node dynamics involved. Motivated by a previous study on hierarchical heteroclinic sequences [25, 34], we constructed such sequences in heterogeneous neural fields of the Amari type. Our results indicate that such a hierarchy may be present in a single heterogeneous neural field, whereas previous studies [25, 34] required several neural populations to describe heteroclinic sequences. The kernels obtained from the heteroclinic saddle-node dynamics are linear superpositions of tensor product kernels, known as Pincherle-Goursat kernels in the literature [26].
Conclusion
The present work is strongly related to the literature hypothesizing the presence of chaotic itinerant neural activity, cf. previous work by [36, 37]. The concept of chaotic itinerancy is attractive but still lacks a realistic neural model. The present work represents a first step toward further model analysis of sequential neural activity in heterogeneous neural systems. It opens up an avenue of future research and promises to close the gap between the rather abstract concept of sequential, i.e. temporally transient, neural activity and corresponding mathematical neural models.
Methods
Stationary states and their stability
In order to learn more about the spatiotemporal dynamics of a neural field, in general it is reasonable to determine stationary states and to study their linear stability. This already gives some insight into the nonlinear dynamics of the system. Since the dynamics depends strongly on the interaction kernel and the corresponding stationary states, it is necessary to work out conditions for the existence and uniqueness of stationary states and their stability.
Analytical study
and the Nemytskij operator $\mathcal{F}u(x)=S(u(x))$. For instance, a simple criterion for the existence of at least one nontrivial solution is the symmetry and positive definiteness of the kernel K(x, y) together with the condition S(u) ≤ C_{1}u + C_{2} [29]. Moreover, previous studies have proposed analytical [40] and numerical [41] methods to find solutions of the Hammerstein equation (2).
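One of the simplest numerical approaches, a Picard (fixed-point) iteration on the Hammerstein equation, can be sketched as follows; the dyadic kernel, its amplitude and the sigmoid parameters are assumed for illustration only:

```python
import numpy as np

n = 200
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
alpha, theta = 8.0, 0.4

def S(V):
    return 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

# Illustrative dyadic kernel K(x, y) = W0 psi(x) psi(y) with assumed W0 = 2
psi = np.exp(-(x - 5.0) ** 2)
K = 2.0 * np.outer(psi, psi)

# Picard iteration for the Hammerstein equation V(x) = int K(x, y) S(V(y)) dy,
# started from a small bump
V = 0.5 * psi
for _ in range(200):
    V_new = K @ S(V) * dx
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

residual = np.max(np.abs(V - K @ S(V) * dx))
print(V.max(), residual)   # nontrivial solution with a small residual
```

Since the right-hand side maps every iterate into the span of psi, the iteration reduces to a bounded scalar map and converges quickly here; more robust schemes for general kernels are discussed in [40, 41].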
Since f_{ m } is a nonlinear function of V_{ n }, Eq. (13) has multiple solutions {V_{ n }} for a given biorthogonal basis. Hence, V_{0}(x) may not be unique.
Considering small deviations u(x, t) = V(x, t) − V_{0}(x) from a nontrivial stationary state V_{0}(x), these deviations obey the linear equation (6) with $L(x,y)=K(x,y)S^{\prime}(y)$ and $S^{\prime}(y)=\mathrm{d}S[V_{0}(y)]/\mathrm{d}V_{0}(y)$.
From (17) we deduce two important results:

1. a certain eigenmode $e_{k_0}(x)$ is proportional to the stationary state V_{0}(x) with scaling factor $\langle V_0,e_{k_0}\rangle_S/\epsilon_{k_0}$, and

2. the other eigenmodes $e_{k\ne k_0}(x)$ in the eigenbasis are orthogonal to V_{0}(x), yielding $\langle V_0,e_k\rangle_S=\epsilon_k=0$.
Hence for dyadic product kernels (3) the spectrum of L includes one eigenmode with eigenvalue ε_{1} ≠ 0, i.e. λ_{1} ≠ −1, while all other eigenmodes are stable with ε_{k≠1} = 0 and thus λ_{k≠1} = −1. Therefore, a nontrivial stationary state can become either an asymptotically stable fixed point, i.e. an attractor, if ε_{ n } < 1 for all n, or a saddle with a one-dimensional unstable manifold if ε_{1} > 1 and ε_{n≠1} < 1 (neglecting the singular case ε_{1} = 1).
inserted into the Hammerstein equation (2) for a dyadic product kernel (3).
Numerical study
We employ an Euler-forward integration scheme for the temporal evolution with discrete time step Δt = 0.05.
We render the stationary state random by adding noise terms η_{ i }, which are random numbers drawn from a normal distribution with zero mean and unit variance. We point out that we choose the η_{ i } such that $\left|\sum_{i=1}^{n}\eta_i\right|<0.05$ in order not to permit a noise-induced increase of the amplitude in the dynamics, but merely a modulation. The sigmoid function is chosen as $S(V)=1/(1+\exp(-\alpha(V-\theta)))$, parameterized by the slope parameter α and the mean threshold θ.
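The scheme can be sketched as follows. The kernel amplitude, bump profile and sigmoid parameters are illustrative stand-ins for the paper's values, and the zero-sum noise constraint is enforced here by simply centring the noise, one possible choice:

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt = 100, 0.05                         # Euler-forward time step as stated
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
alpha, theta = 10.0, 0.5                  # slope and mean threshold (assumed)

def S(V):
    return 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

psi = np.exp(-(x - 5.0) ** 2 / 0.5)       # illustrative bump profile
K = 3.0 * np.outer(psi, psi)              # dyadic kernel with assumed amplitude

# Zero-mean modulation noise; centring makes |sum_i eta_i| < 0.05 hold exactly
eta = rng.standard_normal(n)
eta -= eta.mean()

V = 0.8 * psi + 0.01 * eta                # noisy initial state
for _ in range(4000):                     # integrate up to t = 200
    V = V + dt * (-V + K @ S(V) * dx)

residual = np.max(np.abs(-V + K @ S(V) * dx))
print(V.max(), residual)                  # relaxed to a nontrivial bump state
```

With these parameters the field relaxes from the perturbed initial condition onto a nontrivial stationary bump; starting from small amplitudes instead, it falls back to the trivial state V = 0, in line with the bistability discussed above.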
Heteroclinic orbits
The general neural field equation (10) supplies the Amari equation (1) as a special case for G(t) = e^{−t}Θ(t) (with Θ(t) Heaviside's step function). Then Q = ∂_{ t } + 1 is the Amari operator corresponding to the Green's function G(t). For second-order synaptic dynamics [42] and for the filter properties of complex dendritic trees [43], more complicated kernels or differential operators, respectively, have to be taken into account.
with a sequence of integral kernels H_{ m } that can be read off after some further tedious rearrangements [44].
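That G(t) = e^{−t}Θ(t) is the Green's function of the Amari operator Q = ∂_{t} + 1 can be verified numerically: convolving an input with G must reproduce the solution of the first-order dynamics. A small self-contained check (the input f is an arbitrary illustrative choice):

```python
import numpy as np

# Check that G(t) = exp(-t) Theta(t) is the Green's function of Q = d/dt + 1:
# the convolution V = G * f then solves dV/dt + V = f with V(0) = 0.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
f = np.sin(t)                              # arbitrary illustrative input

G = np.exp(-t)                             # Theta(t) = 1 on the grid t >= 0
V_conv = np.convolve(f, G)[: len(t)] * dt  # Riemann-sum convolution

V_ode = np.zeros_like(t)                   # direct Euler solution of dV/dt = -V + f
for i in range(len(t) - 1):
    V_ode[i + 1] = V_ode[i] + dt * (-V_ode[i] + f[i])

print(np.max(np.abs(V_conv - V_ode)))      # small: both agree up to O(dt)
```

The two solutions coincide up to the O(dt) discretization error, confirming the stated correspondence between the integral form (10) and the differential Amari form (1).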
One-level hierarchy
with growth rates σ_{ k } > 0, and interaction weights ρ_{ k j } > 0, ρ_{ k k } = 1, that are trained by the algorithm of [27] and [28] for the desired sequence of transitions.
Comparison of the first three terms with (28) yields the result (9).
Multilevel hierarchy of heteroclinic sequences
with growth rates ${\sigma}_{{k}_{\nu}}^{(\nu)}>0$ and time-dependent interaction weights ${\rho}_{{k}_{\nu}{j}_{\nu}}^{(\nu)}(t)>0$, ${\rho}_{{k}_{\nu}{k}_{\nu}}^{(\nu)}(t)=1$. Again, $\nu \in \mathbb{N}$ indicates the level within the hierarchy, n_{ ν } is the number of stationary states, and τ^{(ν)} represents the characteristic time scale of that level. Levels are temporally well separated through τ^{(ν)} ≫ τ^{(ν + 1)} [35].
where the constants ${r}_{{k}_{\nu}{j}_{\nu}{l}_{\nu +1}}^{\left(\nu \right)}$ serve as parameter templates.
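A compact numerical sketch of such a two-level hierarchy (all rates, templates and time scales are illustrative assumptions): a slow Lotka-Volterra level gates the time-dependent weights ρ^{(2)}(t) of a fast level through a linear superposition of templates r, respecting τ^{(1)} ≫ τ^{(2)}:

```python
import numpy as np

# Two-level Lotka-Volterra hierarchy: the slow level (time scale tau1) gates
# the time-dependent weights rho2(t) of the fast level (tau2) as a linear
# superposition of templates r_l, with tau1 >> tau2.
tau1, tau2, dt, T = 20.0, 1.0, 0.01, 300.0

sigma = np.array([1.0, 1.0, 1.0])
rho1 = np.array([[1.0, 0.55, 2.0],
                 [2.0, 1.0, 0.55],
                 [0.55, 2.0, 1.0]])       # winnerless competition on the slow level

# One interaction template per slow state: forward cycle, reversed cycle, mixed
r = np.array([rho1, rho1.T, 0.5 * (rho1 + rho1.T)])

a1 = np.array([0.9, 0.05, 0.05])          # slow amplitudes
a2 = np.array([0.8, 0.1, 0.1])            # fast amplitudes
for _ in range(int(T / dt)):
    w = a1 / a1.sum()                     # slow amplitudes weight the templates
    rho2 = np.tensordot(w, r, axes=1)     # rho2(t) = sum_l w_l r_l, diag = 1
    a1 = a1 + (dt / tau1) * a1 * (sigma - rho1 @ a1)
    a2 = a2 + (dt / tau2) * a2 * (sigma - rho2 @ a2)

print(a1.round(3), a2.round(3))           # both levels stay positive and bounded
```

The fast level thus switches between its saddles many times while the slow level drifts, slowly reshaping which fast sequence is expressed, which is the time-scale separation stated above in miniature.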
Eventually, we insert all results of the recursion of (36) into the field equation (32) and eliminate the amplitudes ${\alpha}_{{k}_{\nu}}^{(\nu)}(t)$ by means of (34) in order to obtain a series expansion of V(s) in terms of spatiotemporal integrals. This series must equal the generalized Volterra series (21), such that the kernel H(s,s^{′}) can be reconstructed to solve the neural field inverse problem [44]. For more mathematical details on the derivation of the two-level hierarchy, see Additional file 1.
Declarations
Acknowledgments
This research has been supported by the European Union's Seventh Framework Programme (FP7/2007-2013), ERC grant agreement No. 257253 awarded to AH, hosting PbG during fall 2013 in Nancy, and by a Heisenberg fellowship (GR 3711/12) of the German Research Foundation (DFG) awarded to PbG.
References
 Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13: 55–80. 10.1007/BF00288786
 Amari SI: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27: 77–87. 10.1007/BF00337259
 Jancke D, Erlhagen W, Dinse HR, Akhavan AC, Giese M, Steinhage A, Schöner G: Parametric population representation of retinal location: neuronal interaction dynamics in cat primary visual cortex. J Neurosci 1999, 19(20):9016–9028.
 Bressloff PC: Spatiotemporal dynamics of continuum neural fields. J Phys A 2012, 45(3):033001. 10.1088/1751-8113/45/3/033001
 Coombes S: Large-scale neural dynamics: simple and complex. NeuroImage 2010, 52(3):731–739. 10.1016/j.neuroimage.2010.01.045
 Coombes S, Venkov NA, Shiau L, Bojak I, Liley DTJ, Laing CR: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys Rev E 2007, 76(5):051901.
 Coombes S, Lord G, Owen M: Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Physica D 2003, 178: 219–241. 10.1016/S0167-2789(03)00002-2
 Folias S, Bressloff P: Stimulus-locked waves and breathers in an excitatory neural network. SIAM J Appl Math 2005, 65: 2067–2092. 10.1137/040615171
 Hutt A, Rougier N: Activity spread and breathers induced by finite transmission speeds in two-dimensional neural fields. Phys Rev E 2010, 82: R055701.
 Coombes S, Owen M: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett 2005, 94: 148102.
 Ermentrout GB, McLeod JB: Existence and uniqueness of travelling waves for a neural network. Proc R Soc E 1993, 123A: 461–478.
 Jirsa VK, Haken H: Field theory of electromagnetic brain activity. Phys Rev Lett 1996, 77(5):960–963. 10.1103/PhysRevLett.77.960
 Hutt A: Generalization of the reaction-diffusion, Swift-Hohenberg, and Kuramoto-Sivashinsky equations and effects of finite propagation speeds. Phys Rev E 2007, 75: 026214.
 Bressloff PC: Traveling fronts and wave propagation failure in an inhomogeneous neural network. Physica D 2001, 155: 83–100. 10.1016/S0167-2789(01)00266-4
 Jirsa VK, Kelso JAS: Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Phys Rev E 2000, 62(6):8462–8465. 10.1103/PhysRevE.62.8462
 Kilpatrick ZP, Folias SE, Bressloff PC: Traveling pulses and wave propagation failure in inhomogeneous neural media. SIAM J Appl Dynamical Syst 2008, 7: 161–185. 10.1137/070699214
 Schmidt H, Hutt A, Schimansky-Geier L: Wave fronts in inhomogeneous neural field models. Physica D 2009, 238(14):1101–1112. 10.1016/j.physd.2009.02.017
 Potthast R, beim Graben P: Inverse problems in neural field theory. SIAM J Appl Dynamical Syst 2009, 8(4):1405–1433. 10.1137/080731220
 Potthast R, beim Graben P: Existence and properties of solutions for neural field equations. Math Methods Appl Sci 2010, 33(8):935–949.
 Coombes S, Laing C, Schmidt H, Svanstedt N, Wyller J: Waves in random neural media. Discrete Contin Dyn Syst A 2012, 32: 2951–2970.
 Coombes S, Laing C: Pulsating fronts in periodically modulated neural field models. Phys Rev E 2011, 83: 011912.
 Brackley C, Turner M: Persistent fluctuations of activity in undriven continuum neural field models with power-law connections. Phys Rev E 2009, 79: 011918.
 beim Graben P, Potthast R: Inverse problems in dynamic cognitive modeling. Chaos 2009, 19: 015103. 10.1063/1.3097067
 Hutt A, Riedel H: Analysis and modeling of quasi-stationary multivariate time series and their application to middle latency auditory evoked potentials. Physica D 2003, 177(1–4):203–232.
 Yildiz I, Kiebel SJ: A hierarchical neuronal model for generation and online recognition of birdsongs. PLoS Comput Biol 2011, 7(12):e1002303. 10.1371/journal.pcbi.1002303
 Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dynamical Syst 2010, 9: 954–998. 10.1137/090773611
 Afraimovich VS, Zhigulin VP, Rabinovich MI: On the origin of reproducible sequential activity in neural circuits. Chaos 2004, 14(4):1123–1129. 10.1063/1.1819625
 Rabinovich MI, Huerta R, Varona P, Afraimovich VS: Transient cognitive dynamics, metastability, and decision making. PLoS Comput Biol 2008, 4(5):e1000072. 10.1371/journal.pcbi.1000072
 Hammerstein A: Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math 1930, 54: 117–176. 10.1007/BF02547519
 Kosko B: Bidirectional associative memories. IEEE Trans Syst Man Cybernet 1988, 18: 49–60. 10.1109/21.87054
 Hellwig B: A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybernet 2000, 82: 111–121.
 Mazor O, Laurent G: Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron 2005, 48(4):661–673. 10.1016/j.neuron.2005.09.032
 Rabinovich MI, Huerta R, Laurent G: Transient dynamics for neural processing. Science 2008, 321(5885):48–50. 10.1126/science.1155564
 Kiebel SJ, von Kriegstein K, Daunizeau J, Friston KJ: Recognizing sequences of sequences. PLoS Comput Biol 2009, 5(8):e1000464. 10.1371/journal.pcbi.1000464
 Desroches M, Guckenheimer J, Krauskopf B, Kuehn C, Osinga H, Wechselberger M: Mixed-mode oscillations with multiple time scales. SIAM Rev 2012, 54(2):211–288. 10.1137/100791233
 Tsuda I: Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav Brain Sci 2001, 24(5):793–847. 10.1017/S0140525X01000097
 Freeman W: Evidence from human scalp EEG of global chaotic itinerancy. Chaos 2003, 13(3):1069.
 Appell J, Chen CJ: How to solve Hammerstein equations. J Integr Equat Appl 2006, 18(3):287–296. 10.1216/jiea/1181075392
 Banas J: Integrable solutions of Hammerstein and Urysohn integral equations. J Austral Math Soc (Series A) 1989, 46: 61–68. 10.1017/S1446788700030378
 Lakestani M, Razzaghi M, Dehghan M: Solution of nonlinear Fredholm-Hammerstein integral equations by using semiorthogonal spline wavelets. Math Problems Eng 2005, 113–121.
 Djitte N, Sene M: An iterative algorithm for approximating solutions of Hammerstein integral equations. Numerical Funct Anal Optimization 2013, 34(12):1299–1316. 10.1080/01630563.2013.812111
 Hutt A, Longtin A: Effects of the anesthetic agent propofol on neural populations. Cogn Neurodyn 2010, 4: 37–59.
 Bressloff PC, Coombes S: Physics of the extended neuron. Int J Mod Phys B 1997, 11(20):2343–2392.
 beim Graben P, Potthast R: A dynamic field account to language-related brain potentials. In Principles of Brain Dynamics: Global State Interactions. Edited by: Rabinovich M, Friston K, Varona P. Cambridge (MA): MIT Press; 2012:93–112.
 Haken H: Synergetics. An Introduction. Volume 1 of Springer Series in Synergetics. Berlin: Springer; 1983. [1st edition 1977]
 Fukai T, Tanaka S: A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comp 1997, 9: 77–97. 10.1162/neco.1997.9.1.77
 Wilson H, Cowan J: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 1972, 12: 1–24.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.