# Extracting novel information from neuroimaging data using neural fields

- Dimitris A Pinotsis^{1} (Email author)
- Karl J Friston^{1}

*EPJ Nonlinear Biomedical Physics* **2**:5

https://doi.org/10.1140/epjnbp18

© Pinotsis and Friston; licensee Springer. 2014

**Received: **8 October 2013

**Accepted: **31 March 2014

**Published: **9 May 2014

## Abstract

We showcase three case studies that illustrate how neural fields can be useful in the analysis of neuroimaging data. In particular, we argue that neural fields allow one to: (i) compare the evidence for alternative hypotheses regarding neurobiological determinants of stimulus-specific response variability; (ii) make inferences about between-subject variability in cortical function and microstructure using non-invasive data and (iii) estimate spatial parameters describing cortical sources, even without spatially resolved data.

### Keywords

Neural field theory, Dynamic causal modelling, Attention, Connectivity, Gamma oscillations, V1, Electrocorticography, Visual cortex, Electrophysiology

## Introduction

This paper reviews some recent applications of neural field models in the analysis of neuroimaging data. Neural fields model current fluxes as continuous processes on the cortical sheet, using partial differential equations (PDEs). The key advance that neural field models offer, over other population models (like neural masses), is that they embody spatial parameters (like the density and extent of lateral connections). This allows one to model responses not just in time but also over space. Consequently, these models are particularly useful for explaining observed cortical responses over different spatial scales; for example, with high-density recordings, at the epidural or intracortical level. However, the impact of spatially extensive dynamics is not restricted to expression over space but can also have profound effects on temporal (e.g., spectral) responses at one point (or averaged locally over the cortical surface) [1]. This means that neural field models may also play a key role in the modelling of non-invasive electrophysiological data that do not resolve spatial activity directly. In what follows, we attempt to shed light on these different uses of neural fields and put forward three reasons why these models can be useful in the analysis of neuroimaging data. Each of these motivations is demonstrated by analysing a particular dataset obtained using three different modalities: electrocorticography (ECoG), magnetoencephalography (MEG) and local field potential recordings (LFPs). We argue that neural fields allow one to: (i) compare the evidence for alternative hypotheses regarding the important neurobiological determinants of stimulus-specific response variability; (ii) make inferences about between-subject variability in cortical function and microstructure using non-invasive data and (iii) obtain estimates of spatial parameters describing cortical sources in the absence of spatially resolved data.
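To make the idea of spatial parameters concrete, the following is a minimal illustrative sketch (not the DCM implementation in SPM) of a one-dimensional rate-based neural field on a periodic lattice. The Gaussian lateral-connectivity kernel has a dispersion `sigma` of exactly the kind that neural field models add over neural mass models; all parameter values here are assumptions chosen for illustration:

```python
import math

def sigmoid(v, r=0.56):
    """Map depolarization to firing rate (illustrative slope r)."""
    return 1.0 / (1.0 + math.exp(-r * v))

def simulate_field(n=32, dx=0.5, sigma=2.0, w0=1.5, tau=10.0,
                   dt=0.1, steps=1000, drive=0.5):
    """Euler integration of tau*du/dt = -u + (K * f(u)) + drive on a
    periodic 1D lattice, where K is a Gaussian lateral kernel whose
    width sigma is the spatial parameter neural fields introduce."""
    # Precompute kernel weights as a function of lattice offset
    w = [w0 * math.exp(-(min(d, n - d) * dx) ** 2 / (2 * sigma ** 2)) * dx
         for d in range(n)]
    u = [0.0] * n
    for _ in range(steps):
        f = [sigmoid(ui) for ui in u]
        # Lateral input at each site is a circular convolution of K with f(u)
        u = [u[i] + (dt / tau) * (-u[i]
             + sum(w[(i - j) % n] * f[j] for j in range(n)) + drive)
             for i in range(n)]
    return u

field = simulate_field()
```

Because the drive here is homogeneous, the field relaxes to a spatially uniform fixed point; spatially structured input, or a Mexican-hat kernel, would produce patterned activity instead.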
Our analyses exploit dynamic causal modelling [2] and include model space explorations that embody different hypotheses about the generation of observed responses in relation to model evidence - obtained using Variational Bayes [3]. This model comparison uses a variational free-energy bound to furnish optimized models in a manner similar to fitting empirical spectra with AR and ARMA models, see e.g. [4]. The advantage this approach has over other optimization criteria is that it provides an optimal balance between model fit and complexity; yielding models that are both parsimonious and accurate. The analyses presented here showcase particular instances where neural field models serve as a mathematical microscope, allowing us to extract information that is hidden in electrophysiological data.
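The balance between fit and complexity can be illustrated with a toy sketch, assuming univariate Gaussian priors and posteriors (the real scheme in [3] works with multivariate densities under the Laplace approximation):

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL divergence KL[q || p] between univariate Gaussians: the
    'complexity' cost of moving the posterior away from the prior."""
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def free_energy(expected_log_likelihood, mu_q, var_q, mu_p, var_p):
    """Free energy = accuracy (expected log-likelihood under q)
    minus complexity (KL from prior to posterior)."""
    return expected_log_likelihood - gaussian_kl(mu_q, var_q, mu_p, var_p)

# Two models with identical fit: the one whose posterior stays close
# to its prior (lower complexity) gets the higher free energy.
f_simple = free_energy(-10.0, 0.1, 1.0, 0.0, 1.0)
f_complex = free_energy(-10.0, 3.0, 1.0, 0.0, 1.0)
```

This is why free-energy optimization yields models that are both parsimonious and accurate: a model can only earn its extra parameters if the improvement in accuracy outweighs the KL penalty.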

## Review

In the following, we present analyses of three different datasets obtained with ECoG, MEG and LFP recordings respectively; these illustrate how neural fields combined with dynamic causal modelling can disclose important neurobiological properties of cortical microcircuitry to which we do not have direct access through conventional analysis techniques.

### Comparing alternative hypotheses about the determinants of stimulus-specific gamma peak variability

**Prior expectations of model parameters**

| Parameter | Physiological interpretation | Prior mean |
|---|---|---|
| | Postsynaptic rate constants | 1/2, 1/35, 1/35, 1/2 (ms) |
| | | 108, 45, 1.8 |
| | Amplitude of intrinsic connectivity kernels | 9, 162, 18, 45 |
| | (× 10) | 36, 18, 9 |
| | Maximum postsynaptic depolarization | 8, 32 (mV) |

Population depolarizations are passed through a sigmoid function *σ*(⋅) to provide inputs to other populations. These inputs are weighted by connection strengths. In Figure 3, electrophysiological responses and model predictions are shown as dashed and solid lines respectively. The real and imaginary parts are shown in the left and right panels respectively. In this figure, lines of different colours correspond to different contrast conditions, where the peak frequency ranges between 48 and 60 Hz and contrast levels vary between 0 and 82% of some maximum value. There are 9 contrast conditions; in the first two, there is no prominent gamma peak. It is known that gamma peak frequency is highly contrast-dependent, see e.g. [10, 11]. This is in accordance with what we observe here, namely a monotonic relation between peak frequency and stimulus contrast.

Bipolar differences were extracted from ECoG sensors covering a large part of the primate brain. From the modeller’s vantage point, the important issue here is that a single source in the visual cortex is sampled by several sensors. In our approach we have allowed for the optimization of both the parameters governing topographic aspects of cortical activity and the deployment of the lead fields (their dispersion and location). In brief, we endowed our observation model with sufficient degrees of freedom to characterize unknown aspects of sensor sensitivity.

The first set of parameters comprised the gains of neural populations that are thought to encode precision errors. These gain parameters correspond to the precision (inverse variance or reliability) of prediction errors in predictive coding. This fits comfortably with neurobiological theories of attention under predictive coding and the hypothesis that contrast manipulation amounts to changes in the precision of sensory input, as reviewed in [12]. These changes affect hierarchical processing in the brain and control the interaction between bottom up sensory information and top down modulation: here, we focus on the sensory level of the visual hierarchy.

The second set of parameters comprised intrinsic connection strengths among pyramidal and spiny stellate cells and inhibitory interneurons. This speaks to variations in cortical excitability, which modulates the contributions of different neuronal populations under varying contrast conditions – and a fixed dispersion of lateral connections. This hypothesis fits comfortably with studies focusing on the activation of reciprocally connected networks of excitatory and inhibitory neurons [13–15].

The last set included the spatial extent of excitatory and inhibitory connections. From single cell recordings, it is known that the boundary between classical and non-classical receptive fields depends crucially upon contrast [16]. As stimulus contrast is decreased, the excitatory region becomes larger and the inhibitory flanks become smaller. The hypothesis here rests on a modulation of the effective spatial extent of lateral connections; effective extent refers to the extent of neuronal populations that subserve stimulus integration - that are differentially engaged depending on stimulus properties (as opposed to the anatomical extent of connections).

The free energy *F*(*G*, *q*) ≈ ln *p*(*G*|*m*) approximates the log-evidence (marginal likelihood) of the data. In other words, the free energy reports the probability of obtaining the data under any given model. Variational free energy was introduced by Richard Feynman in the context of path integral formulations of quantum mechanics and has been used extensively in machine learning to finesse the difficult problem of exact Bayesian inference. The models we compared allowed only a subset of parameters to vary with contrast level, where each model corresponds to a hypothesis about contrast-specific effects on cortical responses. We found that the model involving modulations of all candidate parameters had the highest evidence (with a relative log-evidence difference of 16 with respect to the model that allows for modulations of all but the extent parameters – model 6). This simply means that we can be confident that all three candidate mechanisms contribute to the modulation of spectral responses, which is consistent with a plethora of studies that have considered visual contrast effects in terms of adaptive (extraclassical) gain control mechanisms and predictive coding in the visual cortex.
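A log-evidence difference translates directly into posterior model probabilities; a minimal sketch, assuming a uniform prior over models (so the posterior is a softmax of the approximate log-evidences):

```python
import math

def posterior_probs(log_evidences):
    """Posterior model probabilities from approximate log-evidences,
    assuming a uniform prior over models (a softmax). Subtracting the
    maximum first keeps the exponentials numerically stable."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]
    s = sum(w)
    return [x / s for x in w]

# A relative log-evidence of 16 (as for the winning model here) is
# overwhelming: the winner takes essentially all the posterior mass.
p_win, p_other = posterior_probs([16.0, 0.0])
```

By contrast, a log-evidence difference of about 3 is the conventional threshold for "strong" evidence, corresponding to a posterior probability of roughly 95%.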

### Explaining inter-subject variability of peak gamma frequency of visually induced oscillations

Our second case study also involved spectral responses obtained from the visual cortex during a perception experiment [8]. This study, however, was based on non-invasive MEG data. These data were obtained during a task whose focus was on understanding how activity in the gamma band is related to visual perception [18]. In earlier work, the surface area of the primary visual cortex was estimated via retinotopic mapping and was found to correlate with individual gamma peak frequencies. A similar visual MEG experiment found a correlation between gamma peak frequency and resting GABA concentration, as measured with MR spectroscopy [19].

Our focus here was on obtaining mechanistic explanations for this intersubject variability in peak gamma frequency, observed during visually induced oscillations. In particular, we wanted to disclose the origins of individual gamma peak variability and understand whether this variability can be attributed to cortical structure or function (the level of cortical inhibition as expressed by resting GABA concentration). The generative model we used in this study was similar to that of the study described above (but allowing for patchy horizontal connectivity, see [1, 20]). This neural field DCM was particularly useful to discern the roles of cortical anatomy and function as it parameterises both structure and functional excitation-inhibition balance. Our model included parameters describing the dispersion of lateral or horizontal connections which we associate with columnar width and kinetic parameters describing the synaptic drive that various populations are exposed to. We focussed on estimates of the parameters describing columnar width and the excitatory drive to inhibitory interneurons. We then looked at the three-way relation between these estimates, the size of V1 and peak gamma frequency.

### Estimating the spatial properties of sources when there is no explicit spatial information

A common theme in the examples discussed so far has been the modelling of lateral interactions in sensory cortices. Both studies considered above exploited spatially resolved data (either invasive or non-invasive). However, it is possible to obtain estimates of parameters that describe the topographic properties of cortical sources, like the extent of lateral connections, even when using spatially unresolved data, such as data from a single LFP electrode. This is precisely what we did in this final case study [21]. This study considered LFP data obtained from the rat auditory cortex under anaesthesia. Local field potentials were recorded from primary (A1) auditory cortex in the Lister hooded rat, following the application of the anaesthetic agent isoflurane; 1.4 mg (see [22] for details). Ten minutes of recordings were used to obtain frequency domain data-features **g**_{Y}(*ω*) using a vector autoregression model [23].
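The frequency-domain data features can be illustrated with the standard parametric spectrum of an autoregressive model. This sketch is not the multivariate scheme of [23]; it uses an illustrative univariate AR(2) process whose coefficients are chosen to place a spectral resonance near 50 Hz:

```python
import math
import cmath

def ar_spectrum(a, noise_var, freqs, fs=1000.0):
    """Parametric power spectrum of an AR(p) model:
       S(f) = noise_var / |1 - sum_k a[k] exp(-i 2 pi f (k+1) / fs)|^2
    evaluated at the frequencies in `freqs` (Hz), sampling rate fs."""
    out = []
    for f in freqs:
        w = 2.0 * math.pi * f / fs
        denom = 1.0 - sum(ak * cmath.exp(-1j * w * (k + 1))
                          for k, ak in enumerate(a))
        out.append(noise_var / abs(denom) ** 2)
    return out

# AR(2) with complex poles at radius 0.95 and angle 2*pi*50/fs,
# giving a sharp spectral peak close to 50 Hz
r, f0, fs = 0.95, 50.0, 1000.0
a1 = 2.0 * r * math.cos(2.0 * math.pi * f0 / fs)
a2 = -r * r
freqs = list(range(1, 201))
spec = ar_spectrum([a1, a2], 1.0, freqs)
peak = freqs[spec.index(max(spec))]
```

Fitting such coefficients to the recorded time series (by least squares or, as in [23], Variational Bayes) and evaluating the model's spectrum yields smooth data features for the DCM to predict.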

In this scheme, variational free energy provides a bound on the log-evidence of a model *m*. This bound is optimized with respect to a variational density *q*(*θ*) over unknown model parameters. By construction, the free energy bound ensures that when the variational density maximizes free energy, it approximates the true posterior density over parameters, *q*(*θ*) ≈ *p*(*θ*|*G*, *m*). The (approximate) conditional density and (approximate) log-evidence are used for inference on parameters and models respectively (see also Figure 8).

Finally, we used the empirical LFP data described above to obtain conditional estimates of the (log-scaling of the) parameters. Figure 9 (right panel) presents these estimates in terms of their conditional means and 95% confidence intervals. Most confidence intervals include a log-scaling of zero, with the exception of inverse intrinsic conduction speed *v* and neuronal gain *g*, which increased by 160% and 120% respectively with respect to their prior values. This is not surprising, as the priors were selected to reproduce spectra that are typical of this experimental setup. The key thing to notice here is that we were able to obtain fairly precise estimates of topographic parameters, like the extent of intrinsic connections *c*, despite the fact that we are only using data from a single electrode.
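The log-scaling parameterisation used here is easy to unpack: a parameter is its prior mean multiplied by the exponential of the estimated log-scaling, so a conditional mean of zero means "no change from the prior". A small illustrative sketch:

```python
import math

def scaled(prior_mean, log_scaling):
    """A log-scaled parameter: prior mean times exp(log-scaling).
    This keeps biophysical quantities positive by construction."""
    return prior_mean * math.exp(log_scaling)

def percent_change(log_scaling):
    """Convert a log-scaling into a percentage change from the prior."""
    return (math.exp(log_scaling) - 1.0) * 100.0

# A 160% increase (as reported for inverse conduction speed here)
# corresponds to a log-scaling of ln(2.6), roughly 0.96
ls = math.log(2.6)
```

The width of the 95% confidence interval on the log-scaling then tells us directly how precisely a parameter has been estimated relative to its prior.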

## Conclusions

This paper reviews some recent results regarding the modelling of spatially extended neuronal dynamics using neural field models. We have focused on fitting these models to empirical data and entertained questions about cortical structure and function. Addressing these questions relies on exploiting neuroimaging data to invert neural field DCMs. Optimizing neural field models can be a hard task due to their intrinsic nonlinearities. We have here shown that using dynamic causal modelling it is possible to validate neural field models in relation to empirical data. Our approach comprises three important components: first, a careful selection of priors that conform to known neurobiology; where parameters about which we have little prior knowledge are given flat priors – and the parameters of the observation model are optimized concurrently. Second, an appropriate cost function: here a variational free energy bound on log model evidence. Third, we call on linearity and ergodicity assumptions that allow one to summarize cortical activity efficiently, in terms of frequency domain responses.

In our approach, neural fields serve as generative models that allow us to define a probabilistic mapping from free parameters to observed cross spectra. This assumes that the measured signal is a mixture of predicted spectra, channel noise and observation noise, and can furnish predictions for conventional measures of linear systems; like coherence, phase delay or cross correlation functions, as detailed in [25]. In brief, there is a mapping between model parameters (effective connectivity) and spectral characterizations (functional connectivity) that provides a useful link between the generative modelling of biophysical time series and dynamical systems theory. Neural field models prescribe a likelihood function; this function, taken together with the priors, specifies a dynamic causal model that can be inverted using standard variational procedures [3]. This Variational Laplace scheme approximates model evidence with a variational free energy. The (approximate) posterior density and (approximate) log-evidence are used for inference on parameters and models respectively. In other words, one can compare different models (e.g., neural field and mass models) using their log-evidence and also make inferences on parameters, under the model selected.
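The mapping from cross-spectra to conventional functional-connectivity measures is straightforward; a sketch with illustrative values (a 40 Hz cross-spectrum with a 2.5 ms delay between channels):

```python
import math
import cmath

def coherence(sxy, sxx, syy):
    """Magnitude-squared coherence from the cross-spectrum sxy and the
    auto-spectra sxx, syy at one frequency; ranges from 0 to 1."""
    return abs(sxy) ** 2 / (sxx * syy)

def phase_delay(sxy, f):
    """Phase of the cross-spectrum converted to a delay in seconds
    at frequency f (Hz)."""
    return cmath.phase(sxy) / (2.0 * math.pi * f)

# Illustrative 40 Hz cross-spectrum: perfectly coherent channels with
# auto-spectra 2.0 and 1.0 and a 2.5 ms lag of one channel behind the other
sxy = math.sqrt(2.0) * cmath.exp(1j * 2.0 * math.pi * 40.0 * 0.0025)
c = coherence(sxy, 2.0, 1.0)
d = phase_delay(sxy, 40.0)
```

In DCM, these quantities are not estimated directly from data; they are predicted from the model's effective connectivity, which is what gives the link between the two characterizations.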

In brief, our approach is based on a combination of neural fields with appropriate observation models that are optimised in relation to observed data and are – crucially – compared in terms of their evidence. This provides a principled way to adjudicate among different models or hypotheses about functional brain architectures and the physiological correlates of neuronal computations.

### Software note

The procedures described in this review can be accessed as part of the SPM academic freeware (in the DCM toolbox: http://www.fil.ion.ucl.ac.uk/spm/). This code has been written in a modular way that allows people to select from a suite of neural mass and field models to analyse their data – or indeed analyse simulated data that can be generated by the routines. It has also been written in a way that allows people to specify their own models – and connectivity architectures – in terms of Matlab routines (using an equation of motion and a nonlinear mapping from hidden states to observed measurements).

## References

1. Pinotsis DA, Friston KJ: Neural fields, spectral responses and lateral connections. *Neuroimage* 2011, 55:39–48. doi:10.1016/j.neuroimage.2010.11.081
2. Friston KJ, Harrison L, Penny W: Dynamic causal modelling. *Neuroimage* 2003, 19:1273–1302. doi:10.1016/S1053-8119(03)00202-7
3. Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W: Variational free energy and the Laplace approximation. *Neuroimage* 2007, 34:220–234. doi:10.1016/j.neuroimage.2006.08.035
4. Hannan EJ: *Multiple Time Series*. New York: Wiley; 2009:38.
5. Pinotsis DA, Brunet N, Bastos A, Bosman CA, Litvak V, Fries P, Friston KJ: Contrast gain-control and horizontal interactions in V1: a DCM study. *Neuroimage* 2014, 92:143–155. doi:10.1016/j.neuroimage.2014.01.047
6. Rubehn B, Bosman C, Oostenveld R, Fries P, Stieglitz T: A MEMS-based flexible multichannel ECoG-electrode array. *J Neural Eng* 2009, 6:036003. doi:10.1088/1741-2560/6/3/036003
7. Bosman CA, Schoffelen J-M, Brunet N, Oostenveld R, Bastos AM, Womelsdorf T, Rubehn B, Stieglitz T, De Weerd P, Fries P: Attentional stimulus selection through selective synchronization between monkey visual areas. *Neuron* 2012, 75:875–888. doi:10.1016/j.neuron.2012.06.037
8. Pinotsis DA, Schwarzkopf DS, Litvak V, Rees G, Barnes G, Friston KJ: Dynamic causal modelling of lateral interactions in the visual cortex. *Neuroimage* 2013, 66:563–576.
9. Bastos AM, Usrey WM, Adams RA, Mangun GR, Fries P, Friston KJ: Canonical microcircuits for predictive coding. *Neuron* 2012, 76:695–711. doi:10.1016/j.neuron.2012.10.038
10. Sceniak MP, Hawken MJ, Shapley R: Visual spatial characterization of macaque V1 neurons. *J Neurophysiol* 2001, 85:1873–1887.
11. Sceniak MP, Chatterjee S, Callaway EM: Visual spatial summation in macaque geniculocortical afferents. *J Neurophysiol* 2006, 96:3474–3484. doi:10.1152/jn.00734.2006
12. Feldman H, Friston KJ: Attention, uncertainty, and free-energy. *Front Hum Neurosci* 2010, 4:215. doi:10.3389/fnhum.2010.00215
13. Kang K, Shelley M, Henrie JA, Shapley R: LFP spectral peaks in V1 cortex: network resonance and cortico-cortical feedback. *J Comput Neurosci* 2010, 29:495–507. doi:10.1007/s10827-009-0190-2
14. Brunel N, Wang X-J: What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. *J Neurophysiol* 2003, 90:415–430. doi:10.1152/jn.01095.2002
15. Traub RD, Jefferys JG, Whittington MA: Simulation of gamma rhythms in networks of interneurons and pyramidal cells. *J Comput Neurosci* 1997, 4:141–150. doi:10.1023/A:1008839312043
16. Kapadia MK, Westheimer G, Gilbert CD: Dynamics of spatial summation in primary visual cortex of alert monkeys. *Proc Natl Acad Sci* 1999, 96:12073–12078. doi:10.1073/pnas.96.21.12073
17. Penny WD, Stephan KE, Daunizeau J, Rosa MJ, Friston KJ, Schofield TM, Leff AP: Comparing families of dynamic causal models. *PLoS Comput Biol* 2010, 6:e1000709. doi:10.1371/journal.pcbi.1000709
18. Schwarzkopf DS, Robertson DJ, Song C, Barnes GR, Rees G: The frequency of visually induced gamma-band oscillations depends on the size of early human visual cortex. *J Neurosci* 2012, 32:1507–1512. doi:10.1523/JNEUROSCI.4771-11.2012
19. Muthukumaraswamy SD, Edden RA, Jones DK, Swettenham JB, Singh KD: Resting GABA concentration predicts peak gamma frequency and fMRI amplitude in response to visual stimulation in humans. *Proc Natl Acad Sci* 2009, 106:8356. doi:10.1073/pnas.0900728106
20. Grindrod P, Pinotsis DA: On the spectra of certain integro-differential-delay problems with applications in neurodynamics. *Physica D* 2011, 240:13–20. doi:10.1016/j.physd.2010.08.002
21. Pinotsis DA, Moran RJ, Friston KJ: Dynamic causal modeling with neural fields. *Neuroimage* 2012, 59:1261–1274. doi:10.1016/j.neuroimage.2011.08.020
22. Moran RJ, Jung F, Kumagai T, Endepols H, Graf R, Dolan RJ, Friston KJ, Stephan KE, Tittgemeyer M: Dynamic causal models and physiological inference: a validation study using isoflurane anaesthesia in rodents. *PLoS One* 2011, 6:e22790. doi:10.1371/journal.pone.0022790
23. Roberts SJ, Penny WD: Variational Bayes for generalized autoregressive models. *IEEE Trans Signal Process* 2002, 50:2245–2257. doi:10.1109/TSP.2002.801921
24. Jansen BH, Rit VG: Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. *Biol Cybern* 1995, 73:357–366. doi:10.1007/BF00199471
25. Friston KJ, Bastos A, Litvak V, Stephan KE, Fries P, Moran RJ: DCM for complex-valued data: cross-spectra, coherence and phase-delays. *Neuroimage* 2012, 59:439–455. doi:10.1016/j.neuroimage.2011.07.048

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.