Sebastiano Stramaglia
Role
Associate Professor
Organization
Università degli Studi di Bari Aldo Moro
Department
DIPARTIMENTO INTERATENEO DI FISICA
Scientific Area
AREA 02 - Physical Sciences
Scientific Disciplinary Sector
FIS/07 - Applied Physics (to Cultural Heritage, the Environment, Biology and Medicine)
ERC Sector, Level 1
Not Available
ERC Sector, Level 2
Not Available
ERC Sector, Level 3
Not Available
Effective connectivity analysis, in which the flow of information between even remote brain regions is inferred from the parameters of a predictive dynamical model, can greatly improve the insight into brain function that we can obtain from fMRI data. As opposed to biologically inspired models, techniques such as Granger causality (GC) are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach can suffer from two main limitations when applied to BOLD fMRI data: the confounding effect of the hemodynamic response function (HRF) and conditioning on a large number of variables in the presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI, on the other hand, the absence of explicit inputs makes this task more difficult, unless one relies on specific prior physiological hypotheses. To overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for the BOLD-fMRI signal. A recent study proposed that relevant information in resting-state fMRI can be obtained by inspecting the discrete events that result in relatively large-amplitude BOLD signal peaks. Following this idea, we consider resting fMRI as 'spontaneous event-related' data: we identify point processes corresponding to signal fluctuations with a given signature, extract a region-specific HRF and, after an alignment procedure, use it in deconvolution. Coming to the second limitation, fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting, and conceptual issues arise in the presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed.
Combining these two improvements, we compare the effective networks obtained at the BOLD level and at the deconvolved-BOLD level, and draw some conclusions.
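The deconvolution pipeline sketched above (threshold-crossing events, peri-event averaging to estimate a region-specific HRF, then deconvolution) can be illustrated on synthetic data. The following is a minimal numpy sketch, not the authors' exact procedure: the 1-SD threshold, the toy Gaussian HRF and the Wiener regularization constant are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic BOLD: sparse neural events convolved with a toy HRF + noise ---
n = 2000
true_hrf = np.exp(-0.5 * ((np.arange(20) - 5.0) / 2.0) ** 2)  # illustrative kernel
events = (rng.random(n) < 0.02).astype(float)                 # spontaneous point process
bold = np.convolve(events, true_hrf)[:n] + 0.05 * rng.standard_normal(n)

# --- step 1: extract the point process as upward 1-SD threshold crossings ---
z = (bold - bold.mean()) / bold.std()
onsets = np.where((z[1:] > 1.0) & (z[:-1] <= 1.0))[0] + 1

# --- step 2: align peri-event segments and average -> region-specific HRF ---
L, pre = 20, 3                 # segment length and samples kept before the crossing
segs = [bold[t - pre:t - pre + L] for t in onsets if pre <= t <= n - L]
hrf_est = np.mean(segs, axis=0)
hrf_est = hrf_est - hrf_est[0]                                # baseline correction

# --- step 3: Wiener deconvolution of the BOLD signal with the estimated HRF ---
H = np.fft.rfft(hrf_est, n)
B = np.fft.rfft(bold)
lam = 0.1 * np.max(np.abs(H)) ** 2             # regularization (assumed constant)
neural_est = np.fft.irfft(B * np.conj(H) / (np.abs(H) ** 2 + lam), n)
```

In the paper the events are defined on real per-region BOLD series; here a single synthetic series stands in for one region.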
Migraine is a cyclic disorder, in which functional and morphological brain changes fluctuate over time, culminating periodically in an attack. In the migrainous brain, temporal processing of external stimuli and sequential recruitment of neuronal networks are often dysfunctional. These changes reflect complex CNS dysfunction patterns. Assessment of multimodal evoked potentials and nociceptive reflex responses can reveal altered patterns of the brain's electrophysiological activity, thereby aiding our understanding of the pathophysiology of migraine. In this Review, we summarize the most important findings on temporal processing of evoked and reflex responses in migraine. Considering these data, we propose that thalamocortical dysrhythmia may be responsible for the altered synchronicity in migraine. To test this hypothesis in future research, electrophysiological recordings should be combined with neuroimaging studies so that the temporal patterns of sensory processing in patients with migraine can be correlated with the accompanying anatomical and functional changes.
When evaluating the causal influence from one time series to another in a multivariate data set, it is necessary to take into account the conditioning effect of the other variables. In the presence of many variables, and possibly of a reduced number of samples, full conditioning can lead to computational and numerical problems. In this paper, we address the problem of partial conditioning on a limited subset of variables, in the framework of information theory. The proposed approach is tested on simulated data sets and on an example of intracranial EEG recording from an epileptic subject. We show that, in many instances, conditioning on a small number of variables, chosen as the most informative ones for the driver node, leads to results very close to those obtained with a fully multivariate analysis, and even better ones when the number of samples is small. This is particularly relevant when the pattern of causalities is sparse.
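The selection of the conditioning subset can be sketched, under Gaussianity, as a greedy maximization of the mutual information between the driver and the candidate set. This is a schematic illustration of the selection step only; the toy data and variable names are assumptions.

```python
import numpy as np

def gaussian_mi(x, Z):
    """Mutual information between 1-D x and the columns of Z, assuming
    joint Gaussianity: I = 0.5 * log( var(x) * det(C_ZZ) / det(C_joint) )."""
    C = np.cov(np.column_stack([x, Z]), rowvar=False)
    return 0.5 * np.log(C[0, 0] * np.linalg.det(C[1:, 1:]) / np.linalg.det(C))

def select_informative(X, driver, k):
    """Greedily pick the k variables most informative about the driver column."""
    candidates = [j for j in range(X.shape[1]) if j != driver]
    chosen = []
    for _ in range(k):
        best = max(candidates,
                   key=lambda j: gaussian_mi(X[:, driver], X[:, chosen + [j]]))
        chosen.append(best)
        candidates.remove(best)
    return chosen

# toy data: variable 0 is a noisy sum of variables 1 and 2; 3 and 4 are noise
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 5))
X[:, 0] = X[:, 1] + X[:, 2] + 0.1 * rng.standard_normal(5000)
subset = select_informative(X, driver=0, k=2)
```

On this toy data the greedy search picks variables 1 and 2, the only ones carrying information about the driver; conditioning would then be restricted to this subset.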
We propose a formal expansion of the transfer entropy to address the problem of partial conditioning when evaluating information flow in multivariate data sets. This approach is then adapted to highlight irreducible sets of variables that provide information about the future state of each assigned target. Multiplets characterized by a high value are associated with informational circuits present in the system, whose informational character (synergetic or redundant) is indicated by the sign of the contribution. These methods are then applied to the analysis of fMRI data.
We propose a formal expansion of the transfer entropy to highlight irreducible sets of variables that provide information about the future state of each assigned target. Multiplets characterized by a high value are associated with informational circuits present in the system, whose informational character (synergetic or redundant) is indicated by the sign of the contribution. We also present preliminary results on fMRI and EEG data sets.
We propose a formal expansion of the transfer entropy to highlight irreducible sets of variables that provide information about the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with the informational circuits present in the system, whose informational character is indicated by the sign of the contribution. To limit the computational complexity, we adopt the assumption of Gaussianity and use the corresponding exact formula for the conditional mutual information. We report the application of the proposed methodology to two electroencephalography (EEG) data sets.
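Under the Gaussian assumption, the conditional mutual information terms reduce to log-ratios of residual variances from linear regressions. A minimal sketch of the bivariate case (transfer entropy from y to x at one lag; the toy AR data and the lag choice are assumptions):

```python
import numpy as np

def gaussian_te(x, y, lag=1):
    """Transfer entropy y -> x for jointly Gaussian series:
    TE = 0.5 * log( var(x_t | x_past) / var(x_t | x_past, y_past) )."""
    xt, xp, yp = x[lag:], x[:-lag, None], y[:-lag, None]

    def res_var(target, regs):
        A = np.column_stack([np.ones(len(target)), regs])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)

    return 0.5 * np.log(res_var(xt, xp) / res_var(xt, np.column_stack([xp, yp])))

# toy system: y drives x, not vice versa
rng = np.random.default_rng(0)
T = 5000
x, y = np.zeros(T), rng.standard_normal(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.5 * rng.standard_normal()

te_yx = gaussian_te(x, y)   # clearly positive: y Granger-causes x
te_xy = gaussian_te(y, x)   # near zero: no coupling in this direction
```

For jointly Gaussian variables this quantity equals half the linear Granger causality, which is why the exact Gaussian formula makes the expansion computationally cheap.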
The inference of the couplings of an Ising model with given means and correlations is called the inverse Ising problem. This approach has received a lot of attention as a tool to analyze neural data. We show that autoregressive methods may be used to learn the couplings of an Ising model, also in the case of asymmetric connections and for multispin interactions. We find that, for each link, the linear Granger causality is twice the corresponding transfer entropy (i.e., the information flow on that link) in the weak-coupling limit. For sparse connections and a low number of samples, the ℓ1-regularized least squares method is used to detect the interacting pairs of spins. Nonlinear Granger causality is related to multispin interactions.
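The autoregressive route to the inverse problem can be sketched on a toy kinetic Ising model: simulate parallel Glauber dynamics with one asymmetric coupling, then regress s(t+1) on s(t). Since each spin is +/-1, the regression coefficient on a single driving spin is exactly tanh(J), which can be inverted. The network size, coupling value and plain least squares (instead of the ℓ1-regularized version mentioned in the abstract) are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

# asymmetric couplings: spin 0 drives spin 1, no other interactions
N, T, J01 = 3, 20000, 0.4
J = np.zeros((N, N))
J[1, 0] = J01

# kinetic Ising with parallel updates: P(s_i(t+1) = +1) = (1 + tanh(h_i)) / 2
s = np.empty((T, N))
s[0] = rng.choice([-1.0, 1.0], N)
for t in range(T - 1):
    h = J @ s[t]
    s[t + 1] = np.where(rng.random(N) < 0.5 * (1.0 + np.tanh(h)), 1.0, -1.0)

# autoregressive recovery: E[s_i(t+1) | s(t)] = tanh(J_i0 s_0) = tanh(J_i0) s_0
# for a single +/-1 driver, so least squares recovers tanh(J) entrywise here
A, *_ = np.linalg.lstsq(s[:-1], s[1:], rcond=None)
J_est = np.arctanh(np.clip(A.T, -0.999, 0.999))
```

The asymmetry is recovered: the estimated matrix has a single sizeable entry at (1, 0) and near-zero entries elsewhere, including the reverse link (0, 1).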
A novel approach is proposed to group redundant time series in the framework of causality. It assumes that (i) the dynamics of the system can be described using just a small number of characteristic modes, and that (ii) a pairwise measure of redundancy is sufficient to elicit the presence of correlated degrees of freedom. We show the application of the proposed approach to fMRI data from a resting human brain and to gene expression profiles from a HeLa cell culture.
Neural systems are composed of interacting units, and relevant information regarding their function or malfunction can be inferred by analyzing the statistical dependencies between the activity of each unit. While correlations and mutual information are commonly used to characterize these dependencies, our objective here is to extend interactions to triplets of variables to better detect and characterize dynamic information transfer.
We analyze simple dynamical network models which describe the limited capacity of nodes to process the input information. For a proper range of their parameters, the information flow pattern in these models is characterized by exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, as a signature of the law of diminishing marginal returns. We apply this analysis to effective connectivity networks from human EEG signals, obtained by Granger Causality, which has recently been given an interpretation in the framework of information theory. From the distributions of the incoming versus the outgoing values of the information flow it is evident that the incoming information is exponentially distributed whilst the outgoing information shows a fat tail. This suggests that overall brain effective connectivity networks may also be considered in the light of the law of diminishing marginal returns. Interestingly, this pattern is reproduced locally but with a clear modulation: a topographic analysis has also been made considering the distribution of incoming and outgoing values at each electrode, suggesting a functional role for this phenomenon.
The resting brain dynamics self-organize into a finite number of correlated patterns known as resting-state networks (RSNs). It is well known that techniques such as independent component analysis can separate the brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting-state magnetic resonance imaging. After blind hemodynamic response function deconvolution of all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce the dimensionality in each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs, systematically increasing k (the number of principal components used in the calculation). When k=1, this method is equivalent to computing the IF using the average of all voxel activities in each RSN. For k≥1, our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension dependent, increasing from k=1 (i.e., the average voxel activity) up to a maximum occurring at k=5 and finally decaying to zero for k≥10. This suggests that a small number of components (close to five) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we have calculated the inter-RSN IF in a data set of Alzheimer's disease (AD) and found that the most significant differences between AD and controls occurred for k=2, with AD showing increased IF with respect to controls.
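The core of the pipeline (PCA within each RSN, then multivariate Gaussian transfer entropy between the retained components) can be sketched on synthetic voxel data. Everything below is an illustrative assumption: two toy 'RSNs' with one latent signal each, Gaussian estimators, and k fixed to 2.

```python
import numpy as np

rng = np.random.default_rng(2)

def pca(X, k):
    """First k principal component scores of X (time x voxels)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def mvar_te(X, Y, lag=1):
    """Gaussian multivariate transfer entropy Y -> X, via the ratio of
    generalized variances (determinants of the residual covariances)."""
    Xt, Xp, Yp = X[lag:], X[:-lag], Y[:-lag]

    def gen_var(target, regs):
        A = np.column_stack([np.ones(len(target)), regs])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.linalg.det(np.cov(target - A @ beta, rowvar=False))

    return 0.5 * np.log(gen_var(Xt, Xp) / gen_var(Xt, np.column_stack([Xp, Yp])))

# two toy RSNs of 30 voxels each; a latent signal in A drives a latent in B
T = 3000
a = rng.standard_normal(T)
b = np.zeros(T)
for t in range(1, T):
    b[t] = 0.8 * a[t - 1] + 0.6 * rng.standard_normal()
rsn_A = np.outer(a, rng.random(30)) + 0.5 * rng.standard_normal((T, 30))
rsn_B = np.outer(b, rng.random(30)) + 0.5 * rng.standard_normal((T, 30))

te_ab = mvar_te(pca(rsn_B, 2), pca(rsn_A, 2))   # IF from A into B: dominant
te_ba = mvar_te(pca(rsn_A, 2), pca(rsn_B, 2))   # reverse direction: near zero
```

In the paper this computation is repeated for k = 1, 2, ... to trace how the inter-RSN IF depends on the number of components; here a single k is shown.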
We analyze the information flow in the Ising model, with Glauber dynamics, on two real networks describing the brain at the mesoscale. We find that the critical state is characterized by the maximal amount of information flow in the system, and that this does not happen when the Ising model is implemented on the two-dimensional regular grid. At criticality the system shows signatures of the law of diminishing marginal returns, with some nodes showing a disparity between incoming and outgoing information. We also implement the Ising model with conserved dynamics and show that there are regions of the system exhibiting anticorrelation, in spite of the fact that all couplings are positive; this phenomenon may be connected with some evidence from real brains (the default mode network is characterized by anticorrelated components).
We implement the Ising model on a structural connectivity matrix describing the brain at two different resolutions. Tuning the model temperature to its critical value, i.e. at the susceptibility peak, we find a maximal amount of total information transfer between the spin variables. At this point the amount of information that can be redistributed by some nodes reaches a limit, and the net dynamics exhibits the signature of the law of diminishing marginal returns, a fundamental principle connected to saturated levels of production. Our results extend the recent analysis of dynamical oscillator models on the connectome structure, taking into account lagged and directional influences and focusing only on the nodes that are more prone to become bottlenecks of information. The ratio between the outgoing and the incoming information at each node is related to the sum of the weights to that node and to the average time between consecutive spin flips. The results for the connectome of 66 nodes and for that of 998 nodes are similar, thus suggesting that these properties are scale-independent. Finally, we also find that the brain dynamics at criticality is maximally organized as a rich club with respect to the network of information flows.
Measuring directed interactions in the brain in terms of information flow is a promising approach, mathematically treatable and amenable to encompassing several methods. In this chapter we propose some approaches rooted in this framework for the analysis of neuroimaging data. First we explore how the transfer of information depends on the network structure, showing how for hierarchical networks the information flow pattern is characterized by an exponential distribution of the incoming information and a fat-tailed distribution of the outgoing information, as a signature of the law of diminishing marginal returns. This was reported to be true also for effective connectivity networks from human EEG data. Then we address the problem of partial conditioning on a limited subset of variables, chosen as the most informative ones for the driver node. We then propose a formal expansion of the transfer entropy to highlight irreducible sets of variables that provide information about the future state of each assigned target. Multiplets characterized by a large contribution to the expansion are associated with informational circuits present in the system, whose informational character (synergetic or redundant) is indicated by the sign of the contribution. Applications are reported for EEG and fMRI data.
Factor analysis is a well known statistical method to describe the variability among observed variables in terms of a smaller number of unobserved latent variables called factors. While dealing with multivariate time series, the temporal correlation structure of the data may be modeled by including correlations in the latent factors, but a crucial choice is the covariance function to be implemented. We show that analyzing multivariate time series in terms of latent Gaussian processes, which are mutually independent but with each of them being characterized by exponentially decaying temporal correlations, leads to an efficient implementation of the expectation–maximization algorithm for the maximum likelihood estimation of parameters, due to the properties of block-tridiagonal matrices. The proposed approach solves an ambiguity known as the identifiability problem, which renders the solution of factor analysis determined only up to an orthogonal transformation. Samples with just two temporal points are sufficient for the parameter estimation: hence the proposed approach may be applied even in the absence of prior information about the correlation structure of latent variables, by fitting the model to pairs of points with varying time delay. Our modeling allows one to make predictions of the future values of time series, and we illustrate our method by applying it to an analysis of published gene expression data from a HeLa cell culture.
The study aimed to test the modulation induced by 1 Hz repetitive transcranial magnetic stimulation (rTMS) of the occipital cortex on alpha phase synchronization under repetitive flash stimuli in 15 migraine-without-aura patients compared to 10 controls. The EEG was recorded from 7 channels, while flash stimuli were delivered at 9, 18, 21 and 24 Hz in basal, rTMS (15 min of 1 Hz stimulation of the occipital cortex) and sham conditions. Migraine patients displayed increased alpha-band phase synchronization under visual stimulation, while an overall desynchronizing effect was evident in controls. The rTMS resulted in a slight increase of the synchronization index in migraine patients, which did not cause significant differences with respect to the basal and sham conditions. The synchronizing-desynchronizing changes of the alpha rhythm under repetitive flash stimulation seem independent of the state of occipital cortex excitability. Other mechanisms beyond cortical excitability may contribute to explaining migraine pathogenesis.
Brain Functional Connectivity (FC) quantifies statistical dependencies between areas of the brain. FC has been widely used to address altered function of brain circuits in control conditions compared to different pathological states, including epilepsy, a major neurological disorder. However, FC also has the as yet unexplored potential to help us understand the pathological transformation of the brain circuitry. Our hypothesis is that FC can differentiate global brain interactions across a time-scale of days. To this end, we present a case report study based on a mouse model of epilepsy and analyze longitudinal intracranial electroencephalography data to calculate FC across three stages: 1, the initial insult (status epilepticus); 2, the latent period, when epileptogenic networks emerge; and 3, chronic epilepsy, when unprovoked seizures occur as spontaneous events. We found that the overall network FC at low frequency bands decreased immediately after status epilepticus was provoked, and increased monotonically later on during the latent period. Overall, our results demonstrate the capacity of FC to address longitudinal variations of brain connectivity across the establishment of pathological states.
A network approach to brain dynamics opens new perspectives towards understanding its function. The functional connectivity from functional MRI recordings in humans is widely explored at large scale, and recently also at the voxel level. The networks of dynamical directed connections are far less investigated, in particular at the voxel level. To reconstruct the full-brain effective connectivity network and study its topological organization, we present a novel approach to multivariate Granger causality that integrates information theory and the architecture of the dynamical network to efficiently select a limited number of variables. The proposed method aggregates conditional information sets according to the community organization, allowing Granger causality analysis to be performed while avoiding redundancy and overfitting even for high-dimensional and short data sets, such as time series from individual voxels in fMRI. We depict, for the first time, the voxel-wise hubs of incoming and outgoing information, called Granger causality density (GCD), as a complement to the previous repertoire of functional and anatomical connectomes. Analogies with these networks are present in most of the default mode network, while discrepancies point to differences in the specific measure of centrality. Our findings could open the way to a new description of the global organization and informational influence of brain function. With this approach it is thus feasible to study the architecture of directed networks at the voxel level, identifying hubs through the investigation of degree, betweenness and clustering coefficient.
The communication among neuronal populations, reflected by transient synchronous activity, is the mechanism underlying the information processing in the brain. Although it is widely assumed that the interactions among those populations (i.e. functional connectivity) are highly nonlinear, the amount of nonlinear information transmission and its functional roles are not clear. The state of the art for understanding the communication between brain systems are dynamic causal modeling (DCM) and Granger causality. While DCM models nonlinear couplings, Granger causality, which constitutes a major tool to reveal effective connectivity and is widely used to analyze EEG/MEG data as well as fMRI signals, is usually applied in its linear version. In order to capture nonlinear interactions between even short and noisy time series, a few approaches have been proposed. We review them and focus on a recently proposed flexible approach, the kernel version of Granger causality. We show the application of this approach to EEG signals and fMRI data.
The communication among neuronal populations, reflected by transient synchronous activity, is the mechanism underlying the information processing in the brain. Although it is widely assumed that the interactions among those populations (i.e. functional connectivity) are highly nonlinear, the amount of nonlinear information transmission and its functional roles are not clear. Granger causality constitutes a major tool to reveal effective connectivity, and it is widely used to analyze EEG/MEG data as well as fMRI signals in its linear version. In order to capture nonlinear interactions between even short and noisy time series, a kernel version of Granger causality has been recently proposed. We review kernel Granger causality and show the application of this approach on EEG signals.
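A simplified sketch of the idea behind kernel Granger causality: compare the in-sample residual variance of a kernel (RBF) ridge regression of the target's present on its own past against the regression on the joint past. The published method additionally filters the significant eigenvalues of the Gram matrix to control overfitting; here a plain ridge penalty stands in for that step, and the bandwidth, penalty and quadratic toy coupling are assumptions.

```python
import numpy as np

def rbf_resvar(X, y, sigma=1.0, lam=0.5):
    """Residual variance of an RBF kernel ridge regression of y on rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return np.var(y - K @ alpha)

def kernel_gc(x, y, lag=1):
    """Nonlinear GC y -> x: log-ratio of restricted vs full residual variance."""
    xt = x[lag:]
    Xp = x[:-lag, None]
    XYp = np.column_stack([x[:-lag], y[:-lag]])
    return np.log(rbf_resvar(Xp, xt) / rbf_resvar(XYp, xt))

# toy nonlinear coupling: y -> x through a squared term, invisible to linear GC
rng = np.random.default_rng(3)
T = 400
y = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + 0.5 * y[t - 1] ** 2 + 0.2 * rng.standard_normal()

gc_yx = kernel_gc(x, y)   # y -> x: clearly positive despite zero linear coupling
gc_xy = kernel_gc(y, x)   # x -> y: much smaller
```

Because the coupling enters through y squared (zero-mean driver), a linear GC test would see almost nothing in either direction; the kernel version recovers the y to x link.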
Cortical spreading depression, a depolarization wave originating in the visual cortex and traveling towards the frontal lobe, is commonly accepted as a correlate of the migraine visual aura. To date, little is known about the mechanisms that can trigger or stop such a phenomenon. However, the complex and highly individual characteristics of the brain cortex suggest that its geometry might have a significant impact in supporting or hindering the propagation of cortical spreading depression. Accurate patient-specific computational models are fundamental to cope with the high variability in cortical geometries among individuals, but also with the conduction anisotropy induced in a given cortex by the complex neuronal organisation in the grey matter. In this paper, we integrate a distributed model for the extracellular potassium concentration with patient-specific diffusivity tensors derived locally from diffusion tensor imaging data.
Recovering directed pathways of information transfer between brain areas is an important issue in neuroscience and helps to shed light on brain function in several physiological and cognitive states. Granger causality (GC) analysis is a valuable tool to detect directed dynamical connectivity, and it is being increasingly used. Unfortunately, this approach encounters some limitations, in particular when applied to neuroimaging data sets, which often consist of short and noisy data and for which redundancy plays an important role. In this article, we address one of these limitations, namely the computational and conceptual problems arising when conditional GC, necessary to disambiguate direct and mediated influences, is used on short and noisy data sets of many variables, as is typically the case in some electroencephalography (EEG) protocols and in functional magnetic resonance imaging (fMRI). We show that by considering GC in the framework of information theory we can restrict the conditioning to a limited number of variables chosen as the most informative, obtaining more stable and reliable results in both EEG and fMRI data.
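Why conditioning is needed at all can be seen in a three-variable toy chain x -> y -> z: pairwise GC reports a spurious x -> z influence (mediated by y), while conditioning on y removes it. A minimal numpy sketch, with illustrative AR coefficients and a single lag:

```python
import numpy as np

rng = np.random.default_rng(4)

# linear chain x -> y -> z
T = 5000
x, y, z = np.zeros(T), np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.8 * x[t - 1] + rng.standard_normal()
    z[t] = 0.8 * y[t - 1] + rng.standard_normal()

def gc(target, driver, cond=()):
    """Linear Granger causality driver -> target, conditioned on `cond`."""
    tgt = target[1:]
    base = [target[:-1]] + [c[:-1] for c in cond]

    def res_var(regs):
        A = np.column_stack([np.ones(len(tgt))] + regs)
        beta, *_ = np.linalg.lstsq(A, tgt, rcond=None)
        return np.var(tgt - A @ beta)

    return np.log(res_var(base) / res_var(base + [driver[:-1]]))

gc_pairwise = gc(z, x)           # spurious: picks up the mediated x -> y -> z path
gc_conditioned = gc(z, x, (y,))  # conditioning on y removes the indirect link
```

With many variables the conditioning set grows too large for short recordings, which is exactly where restricting it to the most informative variables, as described above, becomes useful.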
We analyze, by means of Granger causality (GC), the effect of synergy and redundancy in the inference (from time series data) of the information flow between subsystems of a complex network. While we show that fully conditioned GC (CGC) is not affected by synergy, the pairwise analysis fails to detect synergetic effects. In cases where the number of samples is low, thus making the fully conditioned approach unfeasible, we show that partially conditioned GC (PCGC) is an effective approach if the set of conditioning variables is properly chosen. Here we consider two different strategies for PCGC (based either on the informational content for the candidate driver or on selecting the variables with the highest pairwise influences) and show that, depending on the data structure, either one or the other might be equally valid. On the other hand, we observe that fully conditioned approaches do not work well in the presence of redundancy; this suggests separating the pairwise links into two subsets: those corresponding to indirect connections of the CGC (which should thus be excluded) and links that can be ascribed to redundancy effects, which, together with the results from the fully conditioned approach, provide a better description of the causality pattern in the presence of redundancy. Finally, we apply these methods to two different real data sets. First, analyzing electrophysiological data from an epileptic brain, we show that synergetic effects are dominant just before seizure occurrences. Second, our analysis applied to gene expression time series from a HeLa culture shows that the underlying regulatory networks are characterized by both redundancy and synergy.
Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability of cognitive electroencephalography (EEG) responses. As a consequence, the shape and the peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show relevant variability. To date, the majority of methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody's algorithm.
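Woody's algorithm alternates two steps: estimate each trial's latency as the lag maximizing its cross-correlation with the current template (initially the plain average), then realign the trials and recompute the template. A minimal sketch on synthetic jittered trials, where the template shape, jitter range and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic single trials: a Gaussian-shaped ERP with random latency jitter + noise
n_trials, L = 60, 200
t = np.arange(L)
template = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)
true_lat = rng.integers(-15, 16, n_trials)
trials = (np.array([np.roll(template, k) for k in true_lat])
          + 0.3 * rng.standard_normal((n_trials, L)))

def realigned_mean(trials, lats):
    """Average of the trials after shifting each one back by its latency."""
    return np.array([np.roll(tr, -k) for tr, k in zip(trials, lats)]).mean(0)

# Woody iteration: latency = lag of maximal cross-correlation with the template
est = np.zeros(n_trials, dtype=int)
for _ in range(5):
    avg = realigned_mean(trials, est)
    for i in range(n_trials):
        xc = [np.dot(np.roll(trials[i], -k), avg) for k in range(-20, 21)]
        est[i] = np.argmax(xc) - 20

smeared_peak = trials.mean(0).max()               # plain average: smeared peak
aligned_peak = realigned_mean(trials, est).max()  # realigned average: restored peak
```

The realigned average recovers the sharp peak that the plain average smears out, which is exactly the distortion described in the abstract; the estimated latencies track the true ones up to a common offset.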