The Computational Structure of Spike Trains
Robert Haslinger,(1,2) Kristina Lisa Klinkner,(3) and Cosma Rohilla Shalizi(3,4)
(1) Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown MA
(2) Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge MA
(3) Department of Statistics, Carnegie Mellon University, Pittsburgh PA
(4) Santa Fe Institute, Santa Fe NM
(Dated: September 2008; January 2009)
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically-identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically, (2) the randomness (internal entropy rate) of the minimal spike-generating process, and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred non-parametrically from the data, making only mild regularity assumptions, via the Causal State Splitting Reconstruction (CSSR) algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
I. INTRODUCTION
The recognition that neurons are computational devices is one of the foundations of modern neuroscience (McCulloch & Pitts, 1943). However, determining the functional form of such computation is extremely difficult, if only because while one often knows the output (the spikes), the input (synaptic activity) is almost always unknown. Often, therefore, scientists must draw inferences about the computation from its results, namely the output spike trains and their statistics. In this vein, many researchers have used information theory to determine, via calculation of the entropy rate, a neuron's channel capacity, i.e., how much information the neuron could conceivably transmit, given the distribution of observed spikes (Rieke et al., 1997). However, entropy quantifies randomness, and says little about how much structure a spike train has, or the amount and type of computation which must have, at a minimum, taken place to produce this structure. Here, and throughout this paper, we mean "computational structure" information-theoretically, i.e., the most compact effective description of a process capable of statistically reproducing the observed spike trains. The complexity of this structure is the number of bits needed to describe it. This is different from the algorithmic information content of a spike train, which is the number of bits needed to reproduce the latter exactly, describing not only its regularities, but also its accidental, noisy details.
Our goal is to develop rigorous yet practical methods for determining the minimal computational structure necessary and sufficient to generate neural spike trains. We are able to do this through non-parametric analysis of the directly-observable spike trains, without resorting to a priori assumptions about what kind of structure they have. We do this by identifying the minimal hidden Markov model (HMM) which can statistically predict the future of the spike train without loss of information. This HMM also generates spike trains with the same statistics as the observed train. It thus defines a program which describes the spike train's computational structure, letting us quantify, in bits, the structure's complexity.
From multiple directions, several groups, including our own, have shown that minimal generative models of time series can be discovered by clustering histories into "states", based on their conditional distributions over future events (Crutchfield & Young, 1989; Grassberger, 1986; Jaeger, 2000; Knight, 1975; Littman et al., 2002; Shalizi & Crutchfield, 2001). The observed time series need not be Markovian (few spike trains are), but the construction always yields the minimal HMM capable of generating and predicting the original process. Following Shalizi (2001); Shalizi & Crutchfield (2001), we will call such a HMM a "Causal State Model" (CSM). Within this framework, the model discovery algorithm called Causal State Splitting Reconstruction, or CSSR (Shalizi & Klinkner, 2004), is an adaptive non-parametric method which consistently estimates a system's CSM from time-series data. In this paper we adapt CSSR for use in spike train analysis.
CSSR provides us with non-parametric estimates of the time- and history-dependent spiking probabilities found by more familiar parametric analyses. Unlike those analyses, it is also capable, in the limit of infinite data, of capturing all the information about the computational structure of the spike-generating process contained in the spikes themselves. In particular, the CSM quantifies the complexity of the spike-generating process by showing how much information about the history of the spikes is relevant to their future, i.e., how much information is needed to reproduce the spike train statistically. This is equivalent to the log of the effective number of statistically-distinct states of the process (Crutchfield & Young, 1989; Grassberger, 1986; Shalizi & Crutchfield, 2001). While this is not the same as the algorithmic information content, we show that CSMs can also approximate the average algorithmic information content, splitting it into three parts: (1) the generative process's complexity in our sense; (2) the internal entropy rate of the generative process, the extra information needed to describe the exact state transitions undergone while generating the spike train; and (3) the residual randomness in the spikes, unconstrained by the generative process. The first of these quantifies the spike train's structure, the last two its randomness.
Below, we give precise definitions of these quantities, both their ensemble averages (§II.C) and their functional dependence on time (§II.D). The time-dependent versions allow us to determine when the neuron is traversing states requiring complex descriptions. Our methods put hard numerical lower bounds on the amount of computational structure which must be present to generate the observed spikes. They also quantify, in bits, the extent to which the neuron is driven by external forces. We demonstrate our approach using both simulated and experimentally recorded single-neuron spike trains. We discuss the interpretation of our measures, and how they add to our understanding of neuronal computation.
II. THEORY AND METHODS
Throughout this paper we treat spike trains as stochastic binary time series, with time divided into discrete, equal-duration bins (typically at one millisecond resolution); "1" corresponds to a spike and "0" to no spike. Our aim is to find a minimal description of the computational structure present in such a time series. Heuristically, the structure present in a spike train can be described by a "program" which can reproduce the spikes statistically. The information needed to describe this program (loosely speaking, the program length) quantifies the structure's complexity. Our approach uses minimal, optimally predictive HMMs, or Causal State Models (CSMs), reconstructed from the data, to describe the program. (We clarify our use of "minimal" below.) The CSMs are then used to calculate various measures of the computational structure, such as its complexity.
The states are chosen so that they are optimal predictors of the spike train's future, using only the information available from the train's history. (We discuss the limitations of this below.) Specifically, the states S_t are defined by grouping the histories of past spiking activity X^t_{-∞} which occur in the spike train into equivalence classes, where all members of a given equivalence class are statistically equivalent in terms of predicting the future spiking X^∞_{t+1}. (X^t_{t_0} denotes the sequence of random observables, i.e., spikes or their absence, between t_0 and t > t_0, while X_t denotes the random observable at time t. The notation is similar for the states.) This construction ensures that the causal states are Markovian, even if the spike train is not (Shalizi & Crutchfield, 2001, Lemma 6, p. 839). Therefore, at all times t the system and its possible future evolution(s) can be specified by the state S_t. Like all HMMs, a CSM can be represented pictorially by a directed graph, with nodes standing for the process's hidden states and directed edges for the possible transitions between these states. Each edge is labeled with the observable/symbol emitted during the corresponding transition ("1" for a spike and "0" for no spike), and the probability of traversing that edge given that the system started in that state. The CSM also specifies the time-averaged probability of occupying any state (via the ergodic theorem for Markov chains).
The theory is described in more detail below, but at this point examples may clarify the ideas. Figures 1 A and B show two simple CSMs. Both are built from simulated 40 Hz spike trains 200 seconds in length (1 msec time bins, p = 0.04 IID at each time when spiking is possible). However, spike trains generated from the CSM in Figure 1 B have a 5 msec refractory period after each spike (when p = 0), while the spiking rate in non-refractory periods is still 40 Hz (p = 0.04). The refractory period is additional structure, represented by the extra states. State A represents the status of the neuron during 40 Hz spiking, outside of the refractory periods. While in this state, the neuron either emits no spike (X_{t+1} = 0), staying in state A, or emits a spike (X_{t+1} = 1) with probability p = 0.04 and moves to state B. The equivalence class of past spiking histories defining state A therefore includes all past spiking histories for which the most recent five symbols are 0, symbolically {00000}. State B is the neuron's state during the first msec of the refractory period. It is defined by the set of spiking histories {1}. No spike can be emitted during a refractory period, so the transition to state C is certain and the symbol emitted is always "0". In this manner the neuron proceeds through states C to F and back to state A, whereupon it is possible to spike again.
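As a concrete illustration, the process of Figure 1 B can be simulated in a few lines. The following Python sketch is ours and illustrative only (the function name and parameters are not from the original analysis): IID Bernoulli spiking at p = 0.04 per 1 msec bin, with a hard 5 msec refractory period after each spike.

    import numpy as np

    def simulate_refractory_bernoulli(n_steps, p=0.04, refractory=5, seed=0):
        """Spike train of Figure 1 B: Bernoulli(p) spiking in 1 msec bins,
        with a hard refractory period of `refractory` bins after every spike."""
        rng = np.random.default_rng(seed)
        spikes = np.zeros(n_steps, dtype=int)
        t = 0
        while t < n_steps:
            if rng.random() < p:        # spike with probability p outside the refractory period
                spikes[t] = 1
                t += refractory + 1     # the next `refractory` bins stay silent
            else:
                t += 1
        return spikes

    x = simulate_refractory_bernoulli(200_000)   # 200 seconds at 1 msec resolution
    print(x.mean() * 1000)                       # mean rate, close to 1000/(5 + 1/0.04) = 33.3 Hz

A train generated this way is the kind of input from which the six-state CSM of Figure 1 B is reconstructed.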
The rest of this section is divided into four subsections. First, we briefly review the formal theory behind CSMs (for details, see Shalizi (2001); Shalizi & Crutchfield (2001)) and discuss why they can be considered a good choice for understanding the structural content of spike trains. Second, we describe the Causal State Splitting Reconstruction (CSSR) algorithm used to reconstruct CSMs from observed spike trains (Shalizi & Klinkner, 2004). We emphasize that CSSR requires no a priori knowledge of the structure of the CSM, which is discovered from the spike train. Third, we discuss two different notions of spike train structure, namely statistical complexity and algorithmic information content. These two measures can be interpreted as different aspects of a spike train's computational structure, and each can be related to the reconstructed CSM. Fourth and finally, we show how the reconstructed CSM can be used to predict spiking, measure the neural response and detect the influence of external stimuli.
A. Causal State Models
The foundation of the theory of causal states is the concept of a predictively sufficient statistic. A statistic, η, on one random variable, X, is sufficient for predicting another random variable, Y, when η(X) and X have the same information[1] about Y, I[X;Y] = I[η(X);Y]. This holds if and only if X and Y are conditionally independent given η(X): P(Y|X, η(X)) = P(Y|η(X)). This is a close relative of the familiar idea of parametric sufficiency; in Bayesian statistics, where parameters are random variables, parametric sufficiency is a special case of predictive sufficiency (Bernardo & Smith, 1994). Predictive sufficiency shares all of parametric sufficiency's optimality properties (Bernardo & Smith, 1994). However, a statistic's predictive sufficiency depends only on the actual joint distribution of X and Y, not on any parametric model of that distribution. Again as in the parametric case, a minimal predictively sufficient statistic ε is one which is a function of every other sufficient statistic, i.e., ε(X) = h(η(X)) for some h. Minimal sufficient statistics are the most compact summaries of the data which retain all the predictively-relevant information. A basic result is that a minimal sufficient statistic always exists and is (essentially) unique, up to isomorphism (Bernardo & Smith, 1994; Shalizi & Crutchfield, 2001).
In the context of stochastic processes, such as spike trains, ε is the minimal sufficient statistic of the history X^t_{-∞} for predicting the future of the process, X^∞_{t+1}. This statistic is the optimal predictor of the observations. The sequence of values of the minimal sufficient statistic, S_t = ε(X^t_{-∞}), is another stochastic process. This process is always a homogeneous Markov chain, whether or not the X_t process is (Knight, 1975; Shalizi & Crutchfield, 2001). Turned around, this means that the original X_t process is always a random function of a homogeneous Markov chain, whose latent states, named the causal states by Crutchfield & Young (1989), are optimal, minimal predictors of the future of the time series.
A causal state model or causal state machine is a stochastic automaton or HMM constructed so that its Markov states are minimal sufficient statistics for predicting the future of the spike train, and consequently can generate spike trains statistically identical to those observed.[2] Causal state reconstruction means inferring the causal states from the observed spike train. Following Crutchfield & Young (1989); Shalizi & Crutchfield (2001), the causal states can be seen as equivalence classes of spike-train histories X^t_{-∞} which maximize the mutual information between the state(s) and the future of the spike train X^∞_{t+1}. Because they are sufficient, they predict the future of the spike train as well as it can be predicted from its history alone. Because they are minimal, the number of states or equivalence classes is as small as it can be without discarding predictive power.[3]
Formally, two histories, x and y, are equivalent when P(X^∞_{t+1} | X^t_{-∞} = x) = P(X^∞_{t+1} | X^t_{-∞} = y). The equivalence class of x is [x]. Define the function ε which maps histories to their equivalence classes:

ε(x) ≡ [x] = { y : P(X^∞_{t+1} | X^t_{-∞} = y) = P(X^∞_{t+1} | X^t_{-∞} = x) }
[1] See Cover & Thomas (1991) for information-theoretic definitions and notation.
[2] Some authors use "hidden Markov model" only for models where the current observation is independent of all other variables given the current state, and call the broader class which includes CSMs "partially observable Markov models".
[3] There may exist more compact representations, but then the states, or their equivalents, can never be empirically identified; see Shalizi & Crutchfield (2001, thm. 3, p. 846), or Löhr & Ay (2009).
The causal states are the possible values of ε, i.e., the equivalence classes; each corresponds to a distinct distribution for the future. The state at time t is S_t = ε(X^t_{-∞}). Clearly, ε(x) is a sufficient statistic. It is also minimal, since if η is sufficient, then η(x) = η(y) implies ε(x) = ε(y). One can further show (Shalizi & Crutchfield, 2001, Theorem 3) that ε is the unique minimal sufficient statistic, meaning that any other must be isomorphic to it.
In addition to being minimal sufficient statistics, the causal states have some other important properties which make them ideal for quantifying structure (Shalizi & Crutchfield, 2001). (1) As mentioned, {S_t} is a Markov process, and one can write the observed process X as a random function of the causal state process, i.e., X has a natural hidden-Markov-model representation. (2) The causal states are recursively calculable; there is a function T such that S_{t+1} = T(S_t, X_{t+1}); see Appendix A. (3) CSMs are closely related to the "predictive state representations" of controlled dynamical systems (Littman et al., 2002); see Appendix C.
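Because the transitions are deterministic given the emitted symbol, a CSM can be stored as little more than a lookup table. The following Python sketch is an illustrative representation of our own (not the CSSR implementation): it encodes a CSM as a dictionary of labeled edges, shows the recursive update of property (2), and writes out the six-state model of Figure 1 B as an example.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class CSM:
        """Minimal causal-state-model container (an illustrative sketch).
        trans[(s, x)] = (next_state, P(emit x | state s)); pi[s] = P(S_t = s)."""
        trans: Dict[Tuple[str, int], Tuple[str, float]]
        pi: Dict[str, float]

        def step(self, state: str, symbol: int) -> str:
            """Recursive update S_{t+1} = T(S_t, X_{t+1})."""
            return self.trans[(state, symbol)][0]

        def filter(self, start: str, spikes) -> list:
            """Run the deterministic filter along an observed binary spike train."""
            states = [start]
            for x in spikes:
                states.append(self.step(states[-1], x))
            return states

    # Figure 1 B: state A spikes with p = 0.04; B..F form the 5 msec refractory chain.
    chain = ["B", "C", "D", "E", "F", "A"]
    trans = {("A", 0): ("A", 0.96), ("A", 1): ("B", 0.04)}
    for s, nxt in zip(chain[:-1], chain[1:]):
        trans[(s, 0)] = (nxt, 1.0)        # no spike can occur during the refractory period
    pi = {"A": 25 / 30, **{s: 1 / 30 for s in chain[:-1]}}   # time-averaged occupation probabilities
    fig1b = CSM(trans, pi)

The sketches in the following subsections take the same trans and pi dictionaries as arguments.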
B. Causal State Splitting Reconstruction
Our goal is to find a minimal sufficient statistic for the spike train, which will form a hidden Markov model. As stated previously, the states of this model are equivalence classes of spiking histories X^t_{-∞}. In practice, we need an algorithm which can both cluster histories into groups which preserve their conditional distribution of futures, and find the history length Λ at which the past may be truncated while preserving the computational structure of the spike train. The former is accomplished by the CSSR algorithm (Shalizi & Klinkner, 2004) for inferring causal states from data by building a recursive next-step-sufficient statistic.[4] We do the latter by minimizing Schwarz's Bayesian Information Criterion (BIC) over Λ.
To save space, we just sketch the CSSR algorithm here.[5] CSSR starts by treating the process as an independent, identically-distributed sequence, with one causal state. It adds states when statistical tests show that the current set of states is not sufficient. Suppose we have a sequence x^N_1 = x_1, x_2, ..., x_N of length N from a finite alphabet A of size k. We wish to derive from this an estimate ε̂ of the minimal sufficient statistic ε. We do this by finding a set Σ of states, each of which will be a set of strings, or finite-length histories. The function ε̂ will then map a history x to whichever state contains a suffix of x (taking "suffix" in the usual string-manipulation sense). Although each state can contain multiple suffixes, one can check (Shalizi & Klinkner, 2004) that the mapping ε̂ will never be ambiguous.
The null hypothesis is that the process is Markovian on the basis of the states in Σ,

P(X_t | X^{t-1}_{t-L} = a x^{t-1}_{t-L+1}) = P(X_t | Ŝ = ε̂(x^{t-1}_{t-L+1}))    (1)

for all a ∈ A. In words, adding an extra piece of history does not change the conditional distribution for the next observation. We can check this with standard statistical tests, such as χ² or Kolmogorov-Smirnov. In this paper, we used a KS test of size α = 0.01.[6] If we reject this hypothesis, we fall back on a restricted alternative hypothesis, that we have the right set of conditional distributions, but have matched them with the wrong histories. That is,

P(X_t | X^{t-1}_{t-L} = a x^{t-1}_{t-L+1}) = P(X_t | Ŝ = s)    (2)

for some s ∈ Σ, but s ≠ ε̂(x^{t-1}_{t-L+1}). If this hypothesis passes a test of size α, then s is the state to which we assign the history.[7] Only if (2) is itself rejected do we create a new state, with the suffix a x^{t-1}_{t-L+1}.[8]
[4] A next-step-sufficient statistic contains all the information needed for optimal one-step-ahead prediction, I[X_{t+1}; η(X^t_{-∞})] = I[X_{t+1}; X^t_{-∞}], but not necessarily for longer predictions. CSSR relies on the theorem that if η is next-step sufficient, and it is recursively calculable, then η is sufficient for the whole of the future (Shalizi & Crutchfield, 2001, pp. 842-843). CSSR first finds a next-step sufficient statistic, and then refines it to be recursive.
[5] In addition to Shalizi & Klinkner (2004), which gives pseudocode, some details of convergence and applications to process classification are treated in Klinkner & Shalizi (2009); Shalizi et al. (2009). An open-source C++ implementation is available at http://bactra.org/CSSR/ . The CSMs generated by CSSR can be displayed graphically, as we do in this paper, with the open-source program dot (http://www.graphviz.org/).
[6] For finite N, decreasing α tends to yield simpler CSMs with fewer states. In a sense, it is a sort of regularization coefficient. The influence of this regularization diminishes as N increases. For the data used in the Results section of this paper, varying α in the range 0.001 < α < 0.1 made little difference.
[7] If more than one such state s exists, we chose the one for which P̂(X_t | Ŝ = s) differs least, in total variation distance, from P̂(X_t | X^{t-1}_{t-L} = a x^{t-1}_{t-L+1}), which is plausible and convenient. However, which state we choose is irrelevant in the limit N → ∞, so long as the difference between the distributions is not statistically significant.
[8] The conceptually similar algorithm of Kennel & Mees (2002) in effect always creates a new state, which leads to more complex models, sometimes infinitely more complex ones; see Shalizi & Klinkner (2004).
The algorithm itself has three phases. Phase I initializes Σ to a single state, which contains only the null suffix ∅. (That is, ∅ is a suffix of any string.) The length of the longest suffix in Σ is L; this starts at 0. Phase II iteratively tests the successive versions of the null hypothesis, Eq. 1, and L increases by one each iteration, until we reach some maximum length Λ. At the end of II, ε̂ is (approximately) next-step sufficient. Phase III makes ε̂ recursively calculable, by splitting the states until they have deterministic transitions. Under mild technical conditions (a finite true number of states, etc.), CSSR converges in probability on the correct CSM as N → ∞, provided only that Λ is long enough to discriminate all of the states. The error of the predicted distributions of futures P(X^∞_{t+1} | X^t_{-∞}), measured by total variation distance, decays as N^{-1/2}. Section 4 of Shalizi & Klinkner (2004) details CSSR's convergence properties. Comparisons of CSSR's performance with that of more traditional expectation-maximization based approaches can also be found in Shalizi & Klinkner (2004), as can time complexity bounds for the algorithm. Depending upon the machine used, CSSR can process an N = 10^6 time series in under a minute.
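The statistical core of Phase II is the comparison of next-symbol distributions in Eq. 1. The Python sketch below is a simplification of our own, not the CSSR code: it is restricted to a binary alphabet, omits the suffix bookkeeping and Phase III, and uses a two-sample KS test on the 0/1 next-symbol samples to stand in for the tests described above.

    import numpy as np
    from scipy.stats import ks_2samp

    def next_symbols(x, suffix):
        """All next-step observations following occurrences of `suffix` in the binary train x."""
        L = len(suffix)
        return np.array([x[t] for t in range(L, len(x)) if tuple(x[t - L:t]) == suffix])

    def consistent_with_state(x, child_suffix, state_suffixes, alpha=0.01):
        """Null hypothesis of Eq. 1: extending a state's histories by one more symbol
        (child_suffix) does not change the next-symbol distribution.
        Assumes each suffix actually occurs in the data."""
        child = next_symbols(x, child_suffix)
        parent = np.concatenate([next_symbols(x, s) for s in state_suffixes])
        return ks_2samp(child, parent).pvalue > alpha

When this returns False, CSSR would first try to reassign the history to some other existing state whose distribution it does match (the restricted alternative, Eq. 2), and only failing that create a new state.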
1. Choosing Λ
CSSR requires no a priori knowledge of the CSM's structure, but does need a choice of Λ; here we pick it by minimizing the BIC of the reconstructed models over Λ, i.e.,

BIC ≡ -2 log L + d log N    (3)

where L is the likelihood, N is the data length and d is the number of model parameters, in our case the number of predictive states.[9] BIC's logarithmic-with-N penalty term helps keep the number of causal states from growing too quickly with increased data size, which is why we use it instead of the Akaike Information Criterion (AIC). Also, BIC is known to be consistent for selecting the order of Markov chains and variable-length Markov models (Csiszár & Talata, 2006), both of which are sub-classes of CSMs.
Writing the observed spike train as x^N_1, and the state sequence as s^N_0, the total likelihood of the spike train is

L = Σ_{s^N_0 ∈ Σ^{N+1}} P(X^N_1 = x^N_1 | S^N_0 = s^N_0) P(S^N_0 = s^N_0),    (4)

the sum over all possible causal state sequences of the joint probability of the spike train and the state sequence. Since the states update recursively, s_{t+1} = T(s_t, x_{t+1}), the starting state s_0 and the spike train x^N_1 fix the entire state sequence s^N_0. Thus the sum over state sequences can be replaced by a sum over initial states

L = Σ_{s_i ∈ Σ} P(X^N_1 = x^N_1 | S_0 = s_i) P(S_0 = s_i)    (5)

with the state probabilities P(S_0 = s_i) coming from the CSM. By the Markov property,

P(X^N_1 = x^N_1 | S_0 = s_i) = Π_{j=1}^{N} P(X_j = x_j | S_{j-1} = s_{j-1})    (6)
Selecting Λ is now straightforward: for each value of Λ, we build the CSM from the spike train, calculate the likelihood using Eqs. 5 and 6, and pick the value, and CSM, minimizing Eq. 3. We try all values of Λ up to a model-independent upper bound. For a wide range of stochastic processes, Marton & Shields (1994) showed that the length m of subsequences for which probabilities can be consistently and non-parametrically estimated can grow as fast as log N / h, where h is the entropy rate, but no faster. CSSR estimates the distribution of the next symbol given the previous Λ symbols, which is equivalent to estimating joint probabilities of blocks of length m = Λ + 1. Thus Marton and Shields's result limits the usable values of Λ:

Λ ≤ (log N / h) - 1    (7)

Using Eq. 7 requires the entropy rate h. The latter can either be upper bounded as the log of the alphabet size (here, log 2 = 1), or by some other, less pessimistic, estimator of the entropy rate (such as the output of CSSR with Λ = 1). Use of an upper bound on h results in a conservative maximum value for Λ. For example, a 30 minute experiment with 1 msec time bins lets us use a Λ of at least 20 by the most pessimistic estimate of h = 1; the actual maximum value of Λ may be much larger. We use Λ ≤ 25 in this paper but see no indication that this can't be extended further, if need be.

[9] The number of independent parameters d involved in describing the CSM will be (number of states) × (number of symbols - 1), since the sum of the outgoing probabilities for each state is constrained to be 1. Thus, for a binary alphabet, d = number of states.
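As an illustration, the Python sketch below (our own) computes the log-likelihood of Eqs. 5-6 by running the deterministic filter from each candidate start state, and then the BIC of Eq. 3. It assumes the dictionary representation trans[(s, x)] = (next_state, P(x | s)), pi[s] = P(S_0 = s) used in the earlier sketch; the CSMs themselves would come from running CSSR at each Λ.

    import numpy as np
    from scipy.special import logsumexp

    def log_likelihood(trans, pi, spikes):
        """Eqs. 5-6: log L = log sum_{s0} P(S0 = s0) prod_t P(x_t | s_{t-1}),
        with the state sequence fixed by the recursive update s_t = T(s_{t-1}, x_t)."""
        logps = []
        for s0, p0 in pi.items():
            logp, s = np.log(p0), s0
            for x in spikes:
                if (s, x) not in trans:      # this start state cannot generate the train
                    logp = -np.inf
                    break
                s, prob = trans[(s, x)]
                logp += np.log(prob)
            logps.append(logp)
        return logsumexp(logps)

    def bic(trans, pi, spikes):
        """Eq. 3, with d = number of states for a binary alphabet (footnote 9)."""
        return -2.0 * log_likelihood(trans, pi, spikes) + len(pi) * np.log(len(spikes))

One would run CSSR for each Λ up to the bound of Eq. 7 and keep the Λ (and CSM) with the smallest BIC.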
2. Condensing the CSM
For real neural data, the number of causal states can be very large: hundreds or more. This creates an interpretation problem, if only because it is hard to fit such a CSM on a single page for inspection. We thus developed a way to reduce the full CSM while still accounting for most of the spike train's structure. Our "state culling" technique found the least-probable states and selectively removed them, appropriately redirecting state transitions and reassigning state occupation probabilities. By keeping the most probable states, we focus on the ones which contribute the most to the spike train's structure and complexity. Again, we used BIC as our model selection criterion.
First, we sorted the states by probability, finding the least probable state (the "remove" state) with a single incoming edge from a state (its "ancestor") with outgoing transitions to two different states, the remove state and a second, "keep" state. We redirected both of the ancestor's outgoing edges to the keep state. Second, we reassigned the remove state's outgoing transitions to the keep state. If the outgoing transitions from the keep state were still deterministic (at most a single 0-emitting edge and a single 1-emitting edge), we stopped. If the transitions were non-deterministic, we merged states reached by emitting 0s with each other (likewise those reached by 1s), repeating this until termination. Third, we checked that there existed a state sequence of the new model which could generate the observed spikes. If there was, we accepted the new CSM. If not, we rejected the new CSM and chose the next lowest probability state from the original CSM to remove.
This culling was iterated until removing any state made it impossible for the CSM to generate the spike train. At each iteration, we calculated BIC (as described in the previous section), and ultimately chose the culled CSM with the minimum BIC. This gave a culled CSM for each value of Λ; the final one we used was chosen after also minimizing BIC over Λ. The CSMs shown below in the Results section result from this minimization of BIC over Λ and state culling.
3. ISI Bootstrapping
While we do model selection with BIC, we also want to do model checking or adequacy-testing. For the most part, we do this by using the CSM to bootstrap point-wise confidence bounds on the interspike interval (ISI) distribution, and checking their coverage of the empirical ISI distribution. Because this distribution is not used by CSSR in reconstructing the CSM, it provides a check on the latter's ability to accurately describe the spike train's statistics. Specifically, we generated confidence bounds as follows. To simulate one spike train, we picked a random starting state according to the CSM's inferred state-occupation probabilities, and then ran the CSM forward for N time-steps, N being the length of the original spike train. This gives a binary time series, where a "1" stands for a spike and a "0" for no spike, and gave us a sample of inter-spike intervals from the CSM. This in turn gave an "empirical" ISI distribution. Repeating this over 10^4 independent runs of the CSM, and taking the 0.005 and 0.995 quantiles of the distributions at each ISI length, gives 99% pointwise confidence bounds. (Pointwise bounds are necessary because the ISI distribution often modulates rapidly with ISI length.) If the CSM is correct, the empirical ISI will, by chance, lie outside the bounds at 1% of the ISI lengths.
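A Python sketch of this bootstrap (ours, again assuming the dictionary CSM representation; the defaults mirror the 10^4 runs and the 0.005/0.995 quantiles described above):

    import numpy as np

    def simulate_csm(trans, pi, n_steps, rng):
        """Generate one binary spike train of length n_steps from the CSM,
        starting from a state drawn from the occupation probabilities pi."""
        states, probs = zip(*pi.items())
        s = rng.choice(states, p=probs)
        x = np.zeros(n_steps, dtype=int)
        for t in range(n_steps):
            p_spike = trans[(s, 1)][1] if (s, 1) in trans else 0.0
            x[t] = int(rng.random() < p_spike)
            s = trans[(s, x[t])][0]        # deterministic transition on the emitted symbol
        return x

    def isi_bounds(trans, pi, n_steps, n_boot=10_000, max_isi=100, q=(0.005, 0.995), seed=0):
        """Pointwise 99% bootstrap bounds on the ISI distribution."""
        rng = np.random.default_rng(seed)
        hists = np.zeros((n_boot, max_isi))
        for b in range(n_boot):
            spike_times = np.flatnonzero(simulate_csm(trans, pi, n_steps, rng))
            isis = np.diff(spike_times)
            hists[b] = np.bincount(isis[isis < max_isi], minlength=max_isi) / max(len(isis), 1)
        return np.quantile(hists, q, axis=0)   # lower and upper bound at each ISI length

The empirical ISI histogram of the recorded train, binned the same way, is then checked against these bounds.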
If we split the data into training and validation sets, a CSM reconstructed from the training set can be used to bootstrap ISI confidence bounds, which can be compared to the ISI distribution of the test set. We discuss this sort of cross-validation, as well as an additional test based on the time-rescaling theorem, in Appendix B.
C. Complexity and Algorithmic Information Content
The algorithmic information content K(x^n_1) of a sequence x^n_1 is the length of the shortest complete (input-free) computer program which will output x^n_1 exactly and then halt (Cover & Thomas, 1991).[10] In general, K(x^n_1) is strictly uncomputable, but when x^n_1 is the realization of a stochastic process X^n_1, the ensemble-averaged algorithmic information essentially coincides with the Shannon entropy ("Brudno's theorem"; see Badii & Politi (1997)), reflecting the fact that both are maximized for completely random sequences (Cover & Thomas, 1991). Both the algorithmic information and the Shannon entropy can be conveniently written in terms of a minimal sufficient statistic Q:

E[K(X^n_1)] = H[X^n_1] + o(n)
            = H[Q] + H[X^n_1 | Q] + o(n)    (8)

The equality H[X^n_1] = H[Q] + H[X^n_1 | Q] holds because Q is a function of X^n_1, so H[Q | X^n_1] = 0.
The key to determining a spike train's expected algorithmic information is thus to find a minimal sufficient statistic. By construction, causal state models provide exactly this; a minimal sufficient statistic for x^n_1 is the state sequence s^n_0 = s_0, s_1, ..., s_n (Shalizi & Crutchfield, 2001). Thus the ensemble-averaged algorithmic information content, dropping terms o(n) and smaller, is

E[K(X^n_1)] = H[S^n_0] + H[X^n_1 | S^n_0]
            = H[S_0] + Σ_{i=1}^{n} H[S_i | S_{i-1}] + Σ_{i=1}^{n} H[X_i | S_i, S_{i-1}]    (9)

Going from the first to the second line uses the causal states' Markov property. Assuming stationarity, Eq. 9 becomes

E[K(X^n_1)] = H[S_t] + n (H[S_t | S_{t-1}] + H[X_t | S_t, S_{t-1}])
            = C + n (J + R)    (10)
This separates terms representing structure from those representing randomness.
The first term in Eq. 10 is the complexity, C, of the spike-generating process (Crutchfield & Young, 1989; Grassberger, 1986; Shalizi et al., 2004).

C = H[S_t] = -E[log P(S_t)]    (11)

C is the entropy of the causal states, quantifying the structure present in the observed spikes. This is distinct from the entropy of the spikes themselves, which quantifies not their structure but their randomness (and is approximated by the other two terms). Intuitively, C is the (time-averaged) amount of information about the past of the system which is relevant to predicting its future. For example, consider again the IID 40 Hz Bernoulli process of Figure 1 A. With p = 0.04, this has an entropy of 0.24 bits/msec, but because it can be described by a single state, the complexity is zero. (That state emits either a "0" or a "1", with respective probabilities 0.96 and 0.04, but either way the state transitions back to itself.) In contrast, adding a 5 msec refractory period to the process means six states are needed to describe the spike trains (Figure 1 B). The new structure of the refractory period is quantified by the higher complexity, C = 1.05 bits.
The second and third terms in Eq. 10 both describe randomness, but of distinct kinds. The second term, the internal entropy rate J, quantifies the randomness in the state transitions; it is the entropy of the next state given the current state.

J = H[S_{t+1} | S_t] = -E[log P(S_{t+1} | S_t)]    (12)

This is the average number of bits per time-step needed to describe the sequence of states the process moved through (beyond those given by C). The last term in Eq. 10 accounts for any residual randomness in the spikes which is not captured by the state transitions.

R = H[X_{t+1} | S_t, S_{t+1}] = -E[log P(X_{t+1} | S_t, S_{t+1})]    (13)

[10] The algorithmic information content is also called the Kolmogorov complexity. We do not use this term, to avoid confusion with our "complexity" C, the information needed to reproduce the spike train statistically rather than exactly (Eq. 11). See Badii & Politi (1997) for a detailed comparison of complexity measures.
For long trains, the entropy of the spikes, H[X^n_1], is approximately the sum of these two terms, H[X^n_1] ≈ n(J + R). Computationally, C represents the fixed generating structure of the process, which needs to be described once, at the beginning of the time series, and n(J + R) represents the growing list of details which pick out a particular time series from the ensemble which could be generated; this needs, on average, J + R extra bits per time-step. (Cf. the "sophistication" of Gács et al. (2001).)
Consider again the 40 Hz Bernoulli process. As there is only one state, the process always stays in that state. Thus the entropy of the next state J = 0. However, the state sequence yields no information about the emitted symbols (the process is IID), so the residual randomness R = 0.24 bits/msec, as it must be, since the total entropy rate is 0.24 bits/msec. In contrast, the states of the 5 msec refractory process are informative about the process's future. The internal entropy rate J = 0.20 bits/msec and the residual randomness R = 0. All of the randomness is in the state transitions, because they uniquely define the output spike train. The randomness in the state transitions is confined to state A, where the process "decides" whether it will stay in A, emitting no spike, or emit a spike and go to B. The decision needs, or gives, 0.24 bits of information. The transitions from B through F and back to A are fixed and contribute 0 bits, reducing the expected J.
The important point is that the structure present in the refractory period makes the spike train less random, lowering its entropy. Averaged over time, the mean firing rate of the process is p = 0.0333. Were the spikes IID, the entropy rate would be 0.21 bits/msec, but in fact J + R = 0.20 bits/msec. This is because, in a minimal description of a long sequence X_{t_1} ... X_{t_N} = X^{t_N}_{t_1}, the generating process only needs to be described once (C), while the internal entropy rate and residual randomness must be accounted for at each time step (n(J + R)). Simply put, a complex, structured spike train can be exactly described in fewer bits than one which is entirely random. The CSM lets us calculate this reduction in algorithmic information, and quantify the structure by means of the complexity.
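These quantities are simple functions of the CSM's stationary state probabilities and labeled transition probabilities. The sketch below is our own (dictionary representation as before, logarithms base 2 so that the results are in bits); it computes C, J and R and, as a check, reproduces the numbers quoted for the IID 40 Hz example.

    import numpy as np

    def complexity_and_entropy_rates(trans, pi):
        """Eqs. 11-13: C = H[S_t], J = H[S_{t+1} | S_t], R = H[X_{t+1} | S_t, S_{t+1}], in bits."""
        C = -sum(p * np.log2(p) for p in pi.values() if p > 0)
        J = R = 0.0
        for (s, x), (s_next, p_emit) in trans.items():
            joint = pi[s] * p_emit                    # P(S_t = s, X_{t+1} = x)
            if joint == 0:
                continue
            # P(S_{t+1} = s_next | S_t = s): sum the emission probabilities over all
            # symbols from s whose (deterministic) transition leads to s_next
            p_next = sum(p for (s2, _), (nxt, p) in trans.items() if s2 == s and nxt == s_next)
            J -= joint * np.log2(p_next)
            R -= joint * np.log2(p_emit / p_next)     # P(x | s, s_next) = P(x | s) / P(s_next | s)
        return C, J, R

    # IID 40 Hz process of Figure 1 A: one state, C = 0, J = 0, R = 0.24 bits/msec
    iid = {("A", 0): ("A", 0.96), ("A", 1): ("A", 0.04)}
    print(complexity_and_entropy_rates(iid, {"A": 1.0}))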
D. Time-Varying Complexity and Entropies
The complexity and entropy are ensemble-averaged quantities. In the previous section the ensemble was the entire time series, and the averaged complexity and entropies were analogous to a mean firing rate. The time-varying complexity and entropies are also of interest, for example their variation after stimuli. A peri-stimulus time histogram (PSTH) shows how the firing probability varies with time; the same idea works for the complexity and entropy.
Since the states form a Markov chain, and any one spike train stays within a single ergodic component, we can invoke the ergodic theorem (Gray, 1988), and (almost surely) assert that

Σ_{S_t, S_{t+1}, X_{t+1}} P(S_t, S_{t+1}, X_{t+1}) f(S_t, S_{t+1}, X_{t+1}) = lim_{N→∞} (1/N) Σ_{t=1}^{N} f(S_t, S_{t+1}, X_{t+1})
                                                                            = lim_{N→∞} ⟨f(S_t, S_{t+1}, X_{t+1})⟩_N    (14)

for arbitrary integrable functions f(S_t, S_{t+1}, X_{t+1}).
In the case of the mean firing rate, the function to time-average is l(t) ≡ X_{t+1}. For the time-averaged complexity, internal entropy and residual randomness, the functions (respectively c, j and r) are

c(t) = -log P(S_t)
j(t) = -log P(S_{t+1} | S_t)
r(t) = -log P(X_{t+1} | S_t, S_{t+1})    (15)

and the time-varying entropy is h(t) = j(t) + r(t).
The PSTH averages over an ensemble of stimulus presentations, rather than time:

PSTH(t) = (1/M) Σ_{i=1}^{M} l_i(t) = (1/M) Σ_{i=1}^{M} X_{t+1,i}    (16)

with M being the number of stimulus presentations, and t re-set to zero at each presentation. Analogously, the "PSTH" of the complexity is

C_PSTH(t) = (1/M) Σ_{i=1}^{M} c_i(t) = -(1/M) Σ_{i=1}^{M} log P(S_{t,i})    (17)

For the entropies, replace c with j, r or h as appropriate. Similar calculations can be made with any well-defined ensemble of reference times, not just stimulus presentations; we will also calculate c and the entropies as functions of the time since the latest spike.
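A sketch of the complexity PSTH (ours; it assumes the spike trains have already been segmented into an M × T array of trials aligned on stimulus onset and, for simplicity, a known state at the alignment time, whereas in practice the filter is run from the start of the recording):

    import numpy as np

    def state_sequence(trans, start, spikes):
        """Recursive filter s_t = T(s_{t-1}, x_t) along one observed binary train."""
        states = [start]
        for x in spikes:
            states.append(trans[(states[-1], x)][0])
        return states

    def complexity_psth(trans, pi, trials, start):
        """Eq. 17: C_PSTH(t) = (1/M) sum_i c_i(t) with c_i(t) = -log2 P(S_{t,i});
        returns the trial average and its standard error in each time bin."""
        M, T = trials.shape
        c = np.zeros((M, T))
        for i in range(M):
            s = state_sequence(trans, start, trials[i])
            c[i] = [-np.log2(pi[st]) for st in s[:-1]]   # the state occupied as bin t is entered
        return c.mean(axis=0), c.std(axis=0, ddof=1) / np.sqrt(M)

The analogous averages of j_i(t) and r_i(t) use the transition and emission probabilities along the same filtered state sequences.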
We can estimate the error in these time-dependent quantities as the standard error of the mean as a function of time, SE_t = s_t / √M, where s_t is the sample standard deviation in each time bin t and M is the number of trials. The probabilities appearing in the definitions of c(t), j(t) and r(t) also have some estimation errors, either because of sampling noise or, more interestingly, because the ensemble is being distorted by outside influences. The latter creates a gap between their averages (over time or stimuli) and what the CSM predicts for those averages. In the next section, we explain how to use this to measure the influence of external drivers.
E. The Influence of External Forces
If we know that S_t = s, the CSM predicts that the firing probability is λ(t) = P(X_{t+1} = 1 | S_t = s). By means of the CSM's recursive filtering property (Appendix A), once a transient regime has passed, the state is always known with certainty. Thereafter, the CSM predicts what the firing probability should be at all times, incorporating the effects of the spike train's history. As we show in the next section, these predictions give good matches to the actual response function in simulations where the spiking probability depends only on the spike history. But real neurons' spiking rates generally also depend on external processes, e.g., stimuli. As currently formulated, the CSM is (or, rather, converges on) the optimal predictor of the future of the process given its own past. Such an "output only" model does not represent the (possible) effects of other processes, and so ignores external covariates and stimuli. Presently, determining the precise form of spike trains' responses to external forces is best left to parametric models.
However, we can use output-only CSMs to learn something about the computation: the PSTH-calculated entropy rate H_PSTH(t) = J_PSTH(t) + R_PSTH(t) quantifies the extent to which external processes drive the neuron. (The PSTH subscript is henceforth suppressed.) Suppose we know the true firing probability λ_true(t). At each time step, the CSM predicts the firing probability λ_CSM(t). If λ_CSM(t) = λ_true(t), then the CSM correctly describes the spiking and the PSTH entropy rate is

H_CSM(t) = -λ_CSM(t) log[λ_CSM(t)] - (1 - λ_CSM(t)) log[1 - λ_CSM(t)]    (18)

However, if λ_CSM(t) ≠ λ_true(t), then the CSM mis-describes the spiking, because it neglects the influence of external processes. Simply put, the CSM has no way of knowing when the stimuli happen. The PSTH entropy rate calculated using the CSM becomes

H_CSM(t) = -λ_true(t) log[λ_CSM(t)] - (1 - λ_true(t)) log[1 - λ_CSM(t)]    (19)

Solving for λ_true(t),

λ_true(t) = (H_CSM(t) + log[1 - λ_CSM(t)]) / (log[1 - λ_CSM(t)] - log[λ_CSM(t)])    (20)

The discrepancy between λ_CSM(t) and λ_true(t) indicates how much of the apparent randomness in the entropy rate is actually due to external driving. The true PSTH entropy rate H_true(t) is

H_true(t) = -λ_true(t) log[λ_true(t)] - (1 - λ_true(t)) log[1 - λ_true(t)]    (21)

The difference between H_CSM(t) and H_true(t) quantifies, in bits, the driving by external forces as a function of the time since stimulus presentation.

ΔH = H_CSM(t) - H_true(t)
   = λ_true(t) log[λ_true(t) / λ_CSM(t)] + (1 - λ_true(t)) log[(1 - λ_true(t)) / (1 - λ_CSM(t))]    (22)

This stimulus-driven entropy ΔH is the relative entropy or Kullback-Leibler divergence D(X_true ‖ X_CSM) between the true distribution of symbol emissions and that predicted by the CSM. Information-theoretically, this relative entropy is the error in our prediction of the next state due to assuming the neuron is running autonomously when it's actually externally driven. Since every state corresponds to a distinct distribution over future behavior, this is our error in predicting the future due to ignorance of the stimulus.[11]
[11] Cf. the informational coherence introduced by Klinkner et al. (2006) to measure information-sharing between neurons, by quantifying the error in predicting the distribution of the future of one neuron due to ignoring its coupling with another.
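A short Python sketch of this calculation (ours; λ_true would come from a PSTH over stimulus presentations and λ_CSM from averaging the CSM's filtered firing predictions over the same presentations; logarithms base 2 give bits):

    import numpy as np

    def stimulus_driven_entropy(lam_true, lam_csm, eps=1e-12):
        """Eq. 22: DeltaH(t), the KL divergence (in bits) between the true Bernoulli
        firing distribution and the CSM's prediction, at each post-stimulus time bin."""
        lt = np.clip(np.asarray(lam_true, float), eps, 1 - eps)
        lc = np.clip(np.asarray(lam_csm, float), eps, 1 - eps)
        return lt * np.log2(lt / lc) + (1 - lt) * np.log2((1 - lt) / (1 - lc))

    def lam_true_from_entropy(H_csm, lam_csm, eps=1e-12):
        """Eq. 20: recover lambda_true(t) from the PSTH entropy rate computed with the
        CSM's predicted probabilities (H_csm, in bits) and lambda_csm(t)."""
        lc = np.clip(np.asarray(lam_csm, float), eps, 1 - eps)
        return (np.asarray(H_csm, float) + np.log2(1 - lc)) / (np.log2(1 - lc) - np.log2(lc))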
III. RESULTS
We now present a few examples. (All of them use a time-step of 1 millisecond.) We begin with idealized model neurons to illustrate our technique. We recover CSMs for the model neurons using only the simulated spike trains as input to our algorithms. From the CSM we calculate the complexity, entropies, and, when appropriate, stimulus-driven entropy (the Kullback-Leibler divergence between the true and CSM-predicted firing probabilities) of each model neuron. We then analyze spikes recorded in vivo from a neuron in layer II/III of rat SI (barrel) cortex. We use spike trains recorded both with and without external stimulation of the rat's whiskers. See Andermann & Moore (2006) for experimental details.
A. Model neuron with a "soft" refractory period and bursting
We begin with a refractory, bursting model neuron, whose spiking rate depends only on the time since the last spike. The baseline rate is 40 Hz. Every spike is followed by a 2 msec "hard" refractory period, during which spikes never occur. The spiking rate then rebounds to twice its baseline, to which it slowly decays. (See the dashed line in the first panel of Figure 3 B.) This history dependence mimics that of a bursting neuron, and is, intuitively, more complex than the simple refractory period of the model in Figure 1.
Figure 2 shows the 17-state CSM reconstructed from a 200 second spike train (at 1 msec resolution) generated by this model. It has a complexity of C = 3.16 bits (higher than that of the model in Figure 1, as anticipated), an internal entropy rate of J = 0.25 bits/msec and a residual randomness of R = 0 bits/msec. The CSM was obtained with Λ = 17 (selected by BIC). Figure 3 A shows how the 99% ISI bounds bootstrapped from the CSM enclose the empirical ISI distribution, with the exception of one short segment.
The CSM is easily interpreted. State A is the baseline state. When it emits a spike, the CSM moves to state B. There are then two deterministic transitions, to C and then D, which never emit spikes; this is the hard 2 msec refractory period. Once in D it is possible to spike again, and if that happens, the transition is back to state B. However, if no spike is emitted, the transition is to state E. This is repeated, with varying firing probabilities, as states E through Q are traversed. Eventually, the process returns to A and so to baseline.
Figure 3 B plots the firing rate, complexity, and internal entropies as functions of the time since the last spike, conditional on no subsequent spike emission. This lets us compare the firing rate predicted by the CSM (solid line with squares) to the specification of the model which generated the spike train (dashed line) and a PSTH calculated by triggering on the last spike (solid line). Except at 16 and 17 msec post spike, the CSM-predicted firing rate agrees with both the generating model and the PSTH. The discrepancy arises because the CSM only discerns the structure in the data, and most of the ISIs are shorter than 16 msec. There is much closer agreement between the CSM and the PSTH if firing rates are plotted as a function of time since a spike without conditioning on no subsequent spike emission (not shown).
The second and third panels of Figure 3 plot the time-dependent complexity and entropies. The complexity is much higher after the emission of a spike than during baseline, because the states traversed (B-Q) are less probable, and represent the additional structures of refractoriness and bursting. The time-dependent entropies (third panel) show that just after a spike, the refractory period imposes temporary determinism on the spike train, but burstiness increases the randomness before the dynamics return to the baseline state.
B. Model neuron under periodic stimulation
Figure 4 shows the CSM for a periodically-stimulated model neuron. This CSM was reconstructed from 200 seconds of spikes with a baseline firing rate of 40 Hz (p = 0.04). Each second, the firing rate rose over the course of 5 msec to p = 0.54 spikes/msec, falling slowly back to baseline over the next 50 msec. This mimics the periodic presentation of a strong external stimulus. (The exact inhomogeneous firing rate used was λ(t) = 0.93[e^{-t/10} - e^{-t/2}] + 0.04 with t in msec. See Figure 5 B, first panel, dashed line.) In this model, the firing rate does not directly depend on the spike train's history, but there is a sort of history dependence in the stimulus time-course, and this is what CSSR discovers.
BIC selected Λ = 7, giving a 16-state CSM with C = 0.89 bits, J = 0.27 bits/msec and R = 0.0007 bits/msec. The baseline is again state A, and if no spike is emitted then the process stays in A. Spikes are either spontaneous and random, or stimulus-driven. Because the stimulus is external, it is not immediately clear which of these two causes produced a given spike. Thus, if a spike is emitted, the CSM traverses states B through F, deciding, so to speak, whether or not the spike is due to a stimulus. If two spikes happen within 3 msec of each other, then the CSM decides that it is being stimulated and goes to one of states G, H or M. States G through P represent the response to the stimulus. The CSM moves between these states until no spike is emitted for 3 msec, when it returns to the baseline, A.
The ISI distribution from the CSM matches that from the model (Figure 5 A). However, because the stimulus doesn't depend on the spike train's history, the CSM makes inaccurate predictions during stimulation. The first panel of Figure 5 B plots the firing rate as a function of time since stimulus presentation, comparing the model (dashed line) and the PSTH (solid line) with the CSM's prediction (line with squares). The discrepancy between these is due to the CSM having no way of knowing that an external stimulus has been applied until several spikes in a row have been emitted (represented, as we just said, by states B-F).[12] Despite this, c(t) shows that something more complex than simple random firing is happening (second panel of Figure 5 B), as do j(t) and r(t) (third panel). Further, something is clearly wrong with the entropy rate, because it should be upper-bounded by h = 1 bit/msec (when p = 0.5). The fact that h(t) exceeds this bound indicates that an external force, not fully captured by the CSM, is at work.
As discussed in Methods (§II.E), drive from the stimulus can be quantified with a relative entropy (Figure 5 C). Stimuli are presented at t = 1 msec, where ΔH(t) > 1 bit. It is not until 25 msec post-stimulus that ΔH(t) ≈ 0 and the CSM once again correctly describes the internal entropy rate. Thus, as expected, the stimulus strongly influences neuronal dynamics immediately after its presentation. The true internal entropy rate H_true(t) is slightly less than 1 bit/msec shortly after stimulation, when the true spiking rate has a maximum of p_max = 0.54. The fact that the CSM gives an inaccurate value for J actually lets us find the number of bits of information gain supplied by the stimulus, e.g., ΔH > 1 bit immediately after the stimulus is presented.
C. Spontaneously Spiking Barrel Cortex Neuron
We reconstructed a CSM from 90 seconds of spontaneous (no vibrissa deflection) spiking recorded from a layer II/III FSU barrel cortex neuron. CSSR, using Λ = 21, discovered a CSM with 315 states, a complexity of C = 1.78 bits, and an internal entropy rate of J = 0.013 bits/msec. After state culling (§II.B.2), the reduced CSM, plotted in Figure 6, has 14 states, C = 1.02 bits, J = 0.10 bits/msec, and a residual randomness of R = 0.005 bits/msec. We focus on the reduced CSM from this point onwards.
This CSM resembles that of the spontaneously-firing model neuron of §III.A and Fig. 2. The complexity and entropies are lower than those of our model neuron because the mean spike rate is much lower, and so simple descriptions suffice most of the time. (Barrel cortex neurons exhibit notoriously low spike rates, especially during anesthesia.) There is a baseline state A which emits a spike with probability p = 0.01, i.e., 10 Hz. When a spike is emitted, the CSM moves to state B and then on through the chain of states C through N, returning to A if no spike is subsequently emitted. However, the CSM can emit a second or even third spike after the first, and indeed this neuron displays spike doublets and triplets. In general, emitting a spike moves the CSM to B, with some exceptions that show the structure to be more intricate than the model neuron's.
Figure 7 A shows the CSM's 99% confidence bounds almost completely enclosing the empirical ISI distribution. The first panel of Figure 7 B plots the history-dependent firing probability as a function of the time since the latest spike, according to both the PSTH and the CSM's prediction. They are highly similar in the first 13 msec post-spike, indicating that the CSM gets the spiking statistics right in this epoch. The CSM and PSTH then diverge after this, for two reasons. First, as with the model neuron, there are few ISIs of this length. Most of the ISIs are either shorter, due to the neuron's burstiness, or much longer, due to the low baseline firing rate. Secondly, 90 seconds is not very much data. We show in Figure 10 that a CSM reconstructed from a longer spike train does capture all of the structure. We present the results of this shorter spike train to emphasize that, as a non-parametric method, CSSR only uncovers the statistical structure in the data, no more, no less.
Finally, the second and third panels of Figure 7 B show, respectively, the complexity and entropies as functions of the time since the latest spike. As with the model of §III.A, the structure in the process occurs after spiking, during the refractory and bursting periods. This is when the complexity is largest, and also when the entropies vary most.
[12] In effect, this part of the CSM implements Bayes's rule, balancing the increased likelihood of a spike after a stimulus against the low a priori probability, or base rate, of stimulation.
D. Periodically Stimulated Barrel Cortex Neuron
We reconstructed CSMs from 335 seconds of spike trains taken from the same neuron used above, but recorded while it was being periodically stimulated by vibrissa deflection. BIC selected Λ = 25, giving the 29-state CSM shown in Figure 8. (Before state culling, the original CSM had 1916 states, C = 2.55 bits and J = 0.11 bits/msec.) The reduced CSM has a complexity of C = 1.97 bits, an internal entropy rate of J = 0.10 bits/msec, and a residual randomness of R = 0.005 bits/msec. Note that C is higher when the neuron is being stimulated than when it is spontaneously firing, indicating more structure in the spike train.
While at first the CSM may seem to represent only history-dependent refractoriness and bursting, ignoring the external stimulus, this is not quite true. Once again, there is a baseline state A, and most of the other states (B-X) comprise a refractory/bursting chain, like the one this neuron has during spontaneous firing. However, the transition upon A emitting a spike is not back to B and then down the chain again, but to either state C1, and subsequently C2, or more often to state ZZ. These three states represent the structure induced by the external stimulus, as we saw with the model stimulated neuron of §III.B and Figure 4. (The state ZZ is comparable to the state M of the model stimulated neuron: both loop back to themselves if they emit a spike.) Three states are enough because, in this experiment, barrel cortex neurons spike extremely sparsely, 0.1-0.2 spikes per stimulus presentation.
Figure 9 A plots the ISI distribution, nicely enclosed by the bootstrapped confidence bounds. Figure 9 B shows the firing rate, complexity and entropies as functions of the time since stimulus presentation (averaged over all presentations). These plots look much like those in Figure 7 B. However, there is a clear indication that something more complex takes place after stimulation: the CSM's firing-rate predictions are wrong. The stimulus-driven entropy ΔH turns out to be as large as 0.02 bits within 5-15 msec post-stimulus. This agrees with the known 5-10 msec stimulus propagation time between vibrissae and barrel cortex (Andermann & Moore, 2006). The reason that ΔH is so much smaller for the real neuron than for the stimulated model neuron of §III.B is that the former's firing rate is much lower. Although the firing rate post-stimulus can be almost twice as large as the CSM's prediction, the actual rate is still low, max λ(t) ≈ 0.04 spikes/msec. Most of the time the neuron does not spike, even when stimulated, so on average, the stimulus provides little information per presentation. For completeness, Figure 10 shows the spike probability, complexity and entropies as functions of the time since the latest spike. Averaged over this ensemble, the CSM's predictions are highly accurate.
IV. DISCUSSION
The goal of this paper was to present methods for determining the structural content of spike trains while making minimal a priori assumptions as to the form which that structure takes. We use the CSSR algorithm to build minimal, optimally predictive hidden Markov models (CSMs) from spike trains, Schwarz's Bayesian Information Criterion to find the optimal history length Λ of the CSSR algorithm, and bootstrapped confidence bounds on the ISI distribution from the CSM to check goodness-of-fit. We demonstrated how CSMs can estimate a spike train's complexity, thus quantifying its structure, and its mean algorithmic information content, quantifying the minimal computation necessary to generate the spike train. Finally we showed how to quantify, in bits, the influence of external stimuli upon the spike-generating process. We applied these methods both to simulated spike trains, for which the resulting CSMs agreed with intuition, and to real spike trains recorded from a layer II/III rat barrel cortex neuron, demonstrating increased structure, as measured by the complexity, when the neuron was being stimulated.
We are unaware of any other practical techniques for quantifying the complexity and computational structure of a spike train as we define them. Intuitively, neither random (Poisson), nor highly ordered (e.g., strictly periodic, as in Olufsen et al. (2003)) spike trains should be thought of as complex, since they do not possess structure requiring a sophisticated program to generate. Instead, complexity lies between order and disorder (Badii & Politi, 1997), in the non-random variation of the spikes. Higher complexity means a greater degree of organization in neural activity than would be implied by random spiking. It is the reconstruction of the CSM through CSSR which allows us to calculate the complexity.
Our definition of complexity stands in stark contrast to other complexity measures which assign high values to highly disordered systems. Some of these, such as Lempel-Ziv complexity (Amigo et al., 2002, 2004; Jimenez-Montano et al., 2002; Szczepanski et al., 2004) and context-free grammar complexity (Rapp et al., 1994), have been applied to spike trains. However, both of these are measures of the amount of information required to reproduce the spike train exactly, and take on very high values for completely random sequences. These "complexity" measures are therefore much more similar to the total algorithmic information content, and even to the entropy rate, than to our sort of complexity.
Our measure of complexity is the entropy of the distribution of causal states. This has the desired property of being maximized for structured, rather than ordered or disordered, systems, because the causal states are defined statistically, as equivalence classes of histories conditioned on future events. Other researchers have also calculated complexity measures which are entropies of state distributions, but have defined their states differently. Amigo et al. (2002) use the observables (symbol strings) present in the spike train to define a k-th order Markov process and call each individual length-k string which appears in the spike train a state. Gorse and Taylor (1990) similarly use single suffix symbol strings to define the states of a Markov process. In both cases, IID Bernoulli sequences could exhibit up to 2^k states (in long enough sequences), and possess an extremely high "complexity". However, all of these states make the same prediction for the future of the process. The minimal representation is a single causal state, a CSM with a complexity of zero.
It should be noted that there are also many works which model spike trains using HMMs, but in which the hidden states represent macro-states of the system (awake/asleep, Up/Down, etc.), and spiking rates are modeled separately in each macro-state (Abeles et al., 1995; Achtman et al., 2007; Chen et al., 2008; Danoczy & Hahnloser, 2005; Jones et al., 2007). Although the graphical representation of such HMMs may look like those of CSMs, the two kinds of states have very different meanings. Finally, there are also state-space methods which model the dynamical state of the system as a continuous hidden variable, the best known of which is the linear Gaussian model with Kalman filtering. These have been extensively applied to neural encoding and decoding problems (Eden et al., 2007; Smith et al., 2004; Srinivasan et al., 2007). Interestingly, for a univariate Gaussian ARMA model in state-space form, the Kalman filter's one-step-ahead prediction and mean-squared prediction error are, jointly, minimal-sufficient for next-step prediction, and since they can be updated recursively they in fact constitute the minimal sufficient statistic, and hence the causal state, in this special case.
Neurons are driven by their afferent synapses. Although, as discussed in Appendix C, there is a parallel "transducer"
formalism for generating CSMs which take external influences into account, it is not yet computationally imple-
mented, and our current approach reconstructs CSMs only from the spike train. Since the history of the neuron under
study is typically connected with the history of the network in which it is located, this CSM will, in general, reflect
more than a neuron's internal biophysical properties. Nonetheless, in both our model neurons and in the real barrel
cortex neuron, states not interpretable as simple refractoriness or bursting appeared when a stimulus was present,
showing that we can detect stimulus-driven complexity. Further, we showed that the CSM can be used to determine the
extent (in bits) to which a neuron is driven by external stimuli.
The methods presented here complement more established modes of spike-train analysis, which have different goals.
Parametric methods, such as PSTHs or maximum likelihood estimation (Brown et al., 2004; Truccolo et al., 2005),
generally focus on determining a neuron's firing rate (mean, instantaneous, or history-dependent), and on how known
external covariates modulate that rate. They have the advantage of requiring less data than non-parametric methods
such as CSSR, but the disadvantage, for our purposes, of imposing the structure of the model at the outset. When
the experimenter wants to know how a neuron encodes a particular aspect of a covariate, e.g., how neurons in the
sensory periphery or primary sensory cortices encode stimuli, parametric methods have proved highly illuminating.
However, in many cases the identity or even existence of relevant external covariates is uncertain. For example, one
could envision using CSMs to analyze recordings in pre-frontal cortex during different cognitive tasks, or to
compare spiking structure during different attentional states. In both cases, the relevant external covariates are not
at all clear, but CSMs could still be used to quantify changes in computational structure, for single neurons or for
groups of them. For neural populations, one can envision generating distributions (over the population) of complexities
and examining how these distributions change in different cortical macro-states. This would be entirely analogous to
analyzing distributions of firing rates or tuning curves.
In addition to calculations of the complexity, the whole array of mutual-information analyses can be applied to
CSMs, but instead of calculating mutual information between the spikes and the covariates (which could include other
spike trains), one can calculate the mutual information between the covariates and the causal states. The advantage
is that the causal states represent the behavioral patterns of the spike-generating process, and so are closer to the
actual state of the system than the spikes (the output observables) are themselves. Results on calculating the mutual
information between the causal states of different neurons (informational coherence) in a large simulated network
show that synchronous neuronal dynamics are revealed more effectively than when the same quantities are calculated
directly from the spikes (Klinkner et al., 2006).
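As a concrete illustration, here is a minimal plug-in estimator of that mutual information (our own sketch, not the authors' code); the causal-state sequence and the discretized covariate sequence are assumed to be time-aligned and to have been produced elsewhere, e.g., by filtering the spike train through a reconstructed CSM.

```python
import numpy as np
from collections import Counter

def mutual_information(states, covariate):
    """Plug-in estimate (in bits) of I(S; Y) between a causal-state sequence S
    and a time-aligned, discretized covariate sequence Y (hypothetical inputs)."""
    n = len(states)
    joint = Counter(zip(states, covariate))
    p_s = Counter(states)
    p_y = Counter(covariate)
    mi = 0.0
    for (s, y), c in joint.items():
        p_sy = c / n
        mi += p_sy * np.log2(p_sy / ((p_s[s] / n) * (p_y[y] / n)))
    return mi
```

The same estimator applied to pairs of state sequences from different neurons gives the informational coherence mentioned above.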
In closing, our methods provide a way to understand structure in spike trains, and should be considered as comple-
ments to traditional analysis methods. We rigorously define structure, and show how to discover it from the data itself.
Our methods go beyond those which seek to describe the observed variation in the spiking rates by also describing
the underlying computational process (in the form of a CSM) needed to generate that variation. A CSM can show
not only that the spike rate has changed, but also exactly how it has changed.
Acknowledgments The authors thank Mark Andermann and Christopher Moore for the use of their data. RH thanks
Emery Brown, Anna Dreyer and Christopher Moore for valuable discussions. CRS thanks Anthony Brockwell, Dave
Feldman, Chris Genovese, Rob Kass and Alessandro Rinaldo for valuable discussions.
APPENDIX A: Filtering with CSMs
A common difficulty with hidden Markov models is that predictions can only be made from knowledge of the state,
which must itself be guessed at from the time series, since it is, after all, hidden. This creates the state-estimation
or filtering problem. Under strong assumptions (linear Gaussian stochastic dynamics, linearly observed through IID
additive Gaussian noise), the Kalman filter is an optimal yet tractable solution. For non-linear processes, however,
optimal filtering essentially amounts to maintaining a posterior distribution over the states and updating it via Bayes's
rule (Ahmed, 1998). (This distribution is sometimes called the process's "information state".)
One convenient and important feature of CSMs is that this whole machinery of filtering is unnecessary, because of
their recursive-updating property. Given the state at time $t$, $S_t$, and the observation at time $t+1$, $X_{t+1}$, the state at
time $t+1$ is fixed: $S_{t+1} = T(S_t, X_{t+1})$ for some transition function $T$. Clearly, if the state is known with certainty
at any time, it will remain known. However, the same recursive-updating property also allows us to show that the
state does become certain, i.e., that after some finite (but possibly random) time $\tau$, $P(S_\tau = s \mid X_1^\tau)$ is either 0 or 1 for
all states $s$. For Markov chains of order $k$, clearly $\tau \le k$; under more general circumstances $P(\tau > t)$ goes to zero
exponentially or faster.
Thus, after a transient period, the state is completely unambiguous. This will be useful to us in multiple places,
including understanding the computational structure of the process and predicting the firing rate of the neuron. It also
leads to considerable numerical simplifications, compared to approaches which demand conventional filtering. Further,
recursive filtering is easily applied to a new spike train, not merely the one from which the CSM was reconstructed.
This helps in cross-validating CSMs, as discussed in the next appendix.
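A minimal sketch of this recursive filtering, in Python, is given below. The transition map and the initial set of candidate states are hypothetical stand-ins for a reconstructed CSM; this illustrates the recursive-updating and synchronization properties, and is not the CSSR implementation.

```python
def filter_states(transition, spikes, start_states):
    """Recursively track the causal state along a binary spike train.

    transition   : dict mapping (state, symbol) -> next state, read off a CSM
    spikes       : iterable of 0/1 symbols
    start_states : set of states considered possible before synchronization

    Returns the sequence of possible-state sets; once a set has a single
    element the state is known exactly and stays known thereafter.
    """
    possible = set(start_states)
    trajectory = []
    for x in spikes:
        possible = {transition[(s, x)] for s in possible if (s, x) in transition}
        trajectory.append(possible)
    return trajectory

# Toy two-state refractory CSM (loosely modeled on Figure 1B):
# "A" = ready to spike, "B" = refractory, during which no spike is emitted.
T = {("A", 0): "A", ("A", 1): "B", ("B", 0): "A"}
print(filter_states(T, [0, 0, 1, 0, 1, 0], {"A", "B"}))
# -> [{'A'}, {'A'}, {'B'}, {'A'}, {'B'}, {'A'}]  (synchronized after one symbol)
```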
APPENDIX B: Cross-Validation
It is often desirable to cross-validate a statistical model by splitting one's data set in two, using one part (generally
the larger) as a training set for the model and the other part to validate the model by some statistical test. In the
case of CSMs it is particularly important to check the validity of the BIC used to regularize the choice of the history-length control setting.
One possible test is the ISI bootstrapping of §II.B.3. A second, somewhat stronger, goodness-of-fit test is based
on the time-rescaling theorem of Brown et al. (2002). This test rescales the interspike intervals as a function of the
integrated history-dependent spiking rate over the ISI:

$$ z_k = 1 - e^{-\int_{t_k}^{t_{k+1}} \lambda(t)\,dt} \qquad \mathrm{(B1)} $$

where the $\{t_k\}$ are the spike times and $\lambda(t)$ is the history-dependent spiking rate from the CSM. If the CSM describes
the data well, then the rescaled ISIs $\{z_k\}$ should follow a uniform distribution. This can be tested using either a
Kolmogorov-Smirnov test or by plotting the empirical CDF of the rescaled times against the CDF of the uniform
distribution (Kolmogorov-Smirnov or "KS" plot) (Brown et al., 2002).
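The following is a sketch of how this test can be carried out in practice, assuming the history-dependent rate λ(t) has already been read off the CSM and binned at the spike train's temporal resolution; the function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy import stats

def time_rescaling_ks(spike_times, rate, dt=1.0):
    """Time-rescaling goodness-of-fit test in the spirit of Brown et al. (2002).

    spike_times : sorted array of spike times (msec)
    rate        : array of history-dependent rates lambda(t), one per dt-wide bin
                  (e.g., produced by filtering the train through a CSM)
    Returns the rescaled ISIs z_k and the KS statistic / p-value against the
    uniform distribution on [0, 1].
    """
    spike_bins = (np.asarray(spike_times) / dt).astype(int)
    z = []
    for k in range(len(spike_bins) - 1):
        # Integrate lambda(t) over the ISI with a simple Riemann sum.
        integral = rate[spike_bins[k]:spike_bins[k + 1]].sum() * dt
        z.append(1.0 - np.exp(-integral))
    z = np.array(z)
    ks_stat, p_value = stats.kstest(z, "uniform")
    return z, ks_stat, p_value
```

Sorting the rescaled ISIs and plotting them against the uniform quantiles gives the KS plots shown in Figure 11.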
Figure 11 gives cross-validation results for the rat barrel cortex neuron, during both spontaneous firing and periodic
vibrissae deflection. 90 seconds of spontaneously firing spikes were split into a 75 second training set and a 15 second
validation set. The 335 seconds of stimulus-evoked firing were split into a 270 second training set and a 65 second
validation set. Panels A and B show the ISI bootstrapping results for the spontaneous and stimulus-evoked firing
respectively. The dashed lines are 99% confidence bounds from a CSM reconstructed from the training set, and the
solid line is the ISI distribution of the validation set. The ISI distribution largely falls within these bounds for both
the spontaneous and stimulus-evoked data.

Panels C-F display the time-rescaling test. Panels C and D show the time-rescaling plots for the spontaneous and
stimulus-evoked training data respectively. The dashed lines are 95% confidence bounds. The spontaneous KS plot
largely falls within the bounds. The stimulus-evoked plot does not, but this is expected because, as discussed, the CSM
does not completely capture the imposition of the external stimulus. (The jagged "steps" in both plots result from
the 1 msec temporal discretization.) Panels E and F show the time-rescaling plots for, respectively, the spontaneous
and stimulus-evoked validation data. The fits here are somewhat worse. In the stimulated case, this is not surprising.
In the spontaneous case the cause is likely non-stationarity in the data, a problem shared with other spike-train
analysis techniques, such as the Generalized Linear Model approaches described in the next Appendix. It should be
emphasized that the point of reconstructing CSMs is not to obtain perfect fits to the data, but instead to estimate
the structure inherent in the spike train, and the cross-validation results should be viewed in this light.
APPENDIX C: Causal State Transducers and Predictive State Representations
Mathematically, CSMs can be expanded to include the influence of external stimuli on the process, yielding causal
state transducers, which are optimal representations of the history-dependent mapping from inputs to outputs (Shalizi,
2001, ch. 7). Such causal state transducers are a type of partially-observable Markov decision process, closely related
to predictive state representations (PSRs) (Littman et al., 2002). In both formalisms, the right notion of "state"
is a statistic, a measurable function of the observable past of the process. Causal states represent this through an
equivalence relation on the space of observable histories. For PSRs, the representation is through "tests", i.e., a
distinguished set of input/output sequence pairs; the idea is that states can be uniquely characterized by their probabilities
of producing the output sequences conditional on the input sequences.
An algorithm for reconstructing causal state transducers would begin by estimating probability distributions of
future histories conditioned on both the history of the spikes and the history of an external covariate $Y$, e.g.,
$P(X_{t+1}^{\infty} \mid X_1^{t}, Y_1^{t})$, and would otherwise be entirely parallel to CSSR. This has not yet been implemented.
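To indicate what the first step of such an algorithm might look like, here is a purely illustrative sketch (ours; as noted above, no transducer version of CSSR has actually been implemented) of estimating empirical next-symbol distributions conditioned on joint spike and covariate histories of bounded length.

```python
from collections import defaultdict, Counter

def joint_history_distributions(spikes, covariate, max_history=3):
    """First step of a hypothetical causal-state-transducer reconstruction:
    empirical next-symbol distributions conditioned on the joint history of the
    spike train X and a discretized covariate Y.  The subsequent splitting of
    these histories into transducer states, parallel to CSSR, is not sketched."""
    counts = defaultdict(Counter)
    for t in range(max_history, len(spikes)):
        history = (tuple(spikes[t - max_history:t]),
                   tuple(covariate[t - max_history:t]))
        counts[history][spikes[t]] += 1
    # Normalize counts into conditional probability estimates.
    return {h: {x: c / sum(ctr.values()) for x, c in ctr.items()}
            for h, ctr in counts.items()}
```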
References
Abeles, M., Bergman, H., Gat, I., Meilijson, I., Seidemann, E., Tishby, N., & Vaadia, E. (1995). Cortical activity flips among quasi-stationary states. Proc. Natl. Acad. Sci. USA, 92, 8616–8620.
Achtman, N., Afshar, A., Santhanam, G., Yu, B. M., Ryu, S. I., & Shenoy, K. V. (2007). Free-paced high-performance brain-computer interfaces. Journal of Neural Engineering, 4, 336–347.
Ahmed, N. U. (1998). Linear and nonlinear filtering for scientists and engineers. Singapore: World Scientific.
Amigo, J. M., Szczepanski, J., Wajnryb, E., & Sanchez-Vives, M. V. (2002). On the number of states of the neuronal sources. Biosystems, 68, 57–66.
Amigo, J. M., Szczepanski, J., Wajnryb, E., & Sanchez-Vives, M. V. (2004). Estimating the entropy rate of spike trains via Lempel-Ziv complexity. Neural Computation, 16, 717–736.
Andermann, M. L. & Moore, C. I. (2006). A sub-columnar direction map in rat barrel cortex. Nature Neuroscience, 9, 543–551.
Badii, R. & Politi, A. (1997). Complexity: Hierarchical structures and scaling in physics. Cambridge, England: Cambridge University Press.
Bernardo, J. M. & Smith, A. F. M. (1994). Bayesian Theory. New York: Wiley.
Brown, E. N., Barbieri, R., Ventura, V., Kass, R. E., & Frank, L. M. (2002). The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation, 14, 325–346.
Brown, E. N., Kass, R. E., & Mitra, P. P. (2004). Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience, 7, 456–461.
Chen, Z., Vijayan, S., Barbieri, R., Wilson, M. A., & Brown, E. N. (2008). Discrete- and continuous-time probabilistic models and inference algorithms for neuronal decoding of Up and Down states. In review at Neural Computation.
Cover, T. M. & Thomas, J. A. (1991). Elements of Information Theory. New York: Wiley.
Crutchfield, J. P. & Young, K. (1989). Inferring statistical complexity. Physical Review Letters, 63, 105–108.
Csiszár, I. & Talata, Z. (2006). Context tree estimation for not necessarily finite memory processes, via BIC and MDL. IEEE Transactions on Information Theory, 52, 1007–1016.
Danoczy, M. G. & Hahnloser, R. H. R. (2005). Efficient estimation of hidden state dynamics. Advances in Neural Information Processing Systems (NIPS 2005). Cambridge, Massachusetts: MIT Press.
Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation, 16, 971–998.
Gács, P., Tromp, J. T., & Vitanyi, P. M. B. (2001). Algorithmic statistics. IEEE Transactions on Information Theory, 47, 2443–2463.
Gorse, D. & Taylor, J. G. (1990). A general model of stochastic neural processing. Biological Cybernetics, 63, 299–306.
Grassberger, P. (1986). Toward a quantitative theory of self-generated complexity. International Journal of Theoretical Physics, 25, 907–938.
Gray, R. M. (1988). Probability, random processes, and ergodic properties. New York: Springer-Verlag.
Jaeger, H. (2000). Observable operator models for discrete stochastic time series. Neural Computation, 12, 1371–1398.
Jimenez-Montano, M. A., Ebeling, W., Pohl, T., & Rapp, P. E. (2002). Entropy and complexity of finite sequences and fluctuating quantities. Biosystems, 64, 23–32.
Jones, L. M., Fontanini, A., Sadacca, B. F., & Katz, D. B. (2007). Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proc. Natl. Acad. Sci. USA, 104, 18772–18777.
Kennel, M. B. & Mees, A. I. (2002). Context-tree modeling of observed symbolic dynamics. Physical Review E, 66, 056209.
Klinkner, K. L. & Shalizi, C. R. (2009). CSSR: A nonparametric algorithm for predicting and classifying time series. Manuscript in preparation.
Klinkner, K. L., Shalizi, C. R., & Camperi, M. F. (2006). Measuring shared information and coordinated activity in neuronal networks. In Weiss, Y., Schölkopf, B., & Platt, J. C. (Eds.), Advances in Neural Information Processing Systems 18 (NIPS 2005), (pp. 667–674). Cambridge, Massachusetts: MIT Press.
Knight, F. B. (1975). A predictive view of continuous time processes. Annals of Probability, 3, 573–596.
Littman, M. L., Sutton, R. S., & Singh, S. (2002). Predictive representations of state. In Dietterich, T. G., Becker, S., & Ghahramani, Z. (Eds.), Advances in Neural Information Processing Systems 14 (NIPS 2001), (pp. 1555–1561). Cambridge, Massachusetts: MIT Press.
Lohr, W. & Ay, N. (2009). On the generative nature of prediction. Advances in Complex Systems, forthcoming.
Marton, K. & Shields, P. C. (1994). Entropy and the consistent estimation of joint distributions. Annals of Probability, 22, 960–977. Correction, Annals of Probability, 24 (1996): 541–545.
McCulloch, W. S. & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.
Olufsen, M. S., Whittington, M. A., Camperi, M., & Kopell, N. (2003). New roles for the gamma rhythm: Population tuning and processing for the beta rhythm. Journal of Computational Neuroscience, 14, 33–54.
Rapp, P. E., Zimmerman, I. D., Vining, E. P., Cohen, N., Albano, A. M., & Jimenez-Montano, M. A. (1994). The algorithmic complexity of neural spike trains increases during focal seizures. Journal of Neuroscience, 14, 4731–4739.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997). Spikes: Exploring the neural code. Cambridge, Massachusetts: MIT Press.
Shalizi, C. R. (2001). Causal architecture, complexity and self-organization in time series and cellular automata. PhD thesis, University of Wisconsin-Madison.
Shalizi, C. R. & Crutchfield, J. P. (2001). Computational mechanics: Pattern and prediction, structure and simplicity. Journal of Statistical Physics, 104, 817–879.
Shalizi, C. R. & Klinkner, K. L. (2004). Blind construction of optimal nonlinear recursive predictors for discrete sequences. In Chickering, M. & Halpern, J. Y. (Eds.), Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference (UAI 2004), (pp. 504–511). Arlington, Virginia: AUAI Press.
Shalizi, C. R., Klinkner, K. L., & Haslinger, R. (2004). Quantifying self-organization with optimal predictors. Physical Review Letters, 93, 118701.
Shalizi, C. R., Rinaldo, A., & Klinkner, K. L. (2009). Adaptive nonparametric prediction and bootstrapping of discrete time series. Manuscript in preparation.
Singh, S., Littman, M. L., Jong, N. K., Pardoe, D., & Stone, P. (2003). Learning predictive state representations. In Fawcett, T. & Mishra, N. (Eds.), Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), (pp. 712–719). AAAI Press.
Smith, A. C., Frank, L. M., Wirth, S., Yanike, M., Hu, D., Kubota, Y., Graybiel, A. M., Suzuki, W. A., & Brown, E. N. (2004). Dynamic analysis of learning in behavioral experiments. Journal of Neuroscience, 24, 447–461.
Srinivasan, L., Eden, U. T., Mitter, S. K., & Brown, E. N. (2007). General purpose filter design for neural prosthetic devices. Journal of Neurophysiology, 98, 2456–2475.
Szczepanski, J., Amigo, J. M., Wajnryb, E., & Sanchez-Vives, M. V. (2004). Characterizing spike trains with Lempel-Ziv complexity. Neurocomputing, 58, 79–84.
Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble and covariate effects. Journal of Neurophysiology, 93, 1074–1089.
FIG. 1 Two simple CSMs reconstructed from 200 sec of simulated spikes using CSSR. States are represented as the nodes of a directed graph. The transitions between states are labeled with the symbol emitted during the transition (1 = spike, 0 = no spike) and the probability of the transition given the origin state. (A) The CSM for a 40 Hz Bernoulli spiking process consists of a single state A which always transitions back to itself, emitting a spike with probability p = 0.04 per msec. (B) CSM for a 40 Hz Bernoulli spiking process with a 5 msec refractory period imposed after each spike. State A again spikes with probability p = 0.04. Upon spiking the CSM transitions through a deterministic chain of states B–F (squares) which represent the refractory period. The increased structure of the refractory period requires a more complex representation.
FIG. 2 CSM reconstructed from a 200 sec simulated spike train with a "soft" refractory/bursting structure. C = 3.16, J = 0.25, R = 0. State A (circle) is the baseline 40 Hz spiking state. Upon emitting a spike the transition is to state B. States B and C (squares) are "hard" refractory states from which no spike may be emitted. States D through Q (hexagons) comprise a refractory/bursting chain from which, if a spike is emitted, the transition is back to state B. Upon exiting the chain the CSM returns to the baseline state A.
FIG. 3 "Soft" refractory and bursting model: ISI distribution and time-dependent firing probability, complexity, and entropies. (A) ISI distribution and 99% confidence bounds bootstrapped from the CSM. (B) First panel: firing probability (spikes/msec) as a function of time since the most recent spike; line with squares = firing probability predicted by the CSM (λ_CSM(t)), solid line = firing probability deduced from the PSTH (λ_PSTH(t)), dashed line = model firing rate used to generate the spikes (λ_model(t)). Second panel: complexity C(t) (bits) as a function of time since the most recent spike. Third panel: entropies (bits) as a function of time since the most recent spike; squares = internal entropy rate J(t), circles = residual randomness R(t), solid line = entropy rate H(t) (overlaps the squares).
FIG. 4 16-state CSM reconstructed from 200 sec of simulation of periodically-stimulated spiking. C = 0.89, J = 0.27, R = 0.0007. State A is the baseline state. States B through F (octagons) are "decision" states in which the CSM evaluates whether a spike indicates a stimulus or was spontaneous. Two spikes within 3 msec cause the CSM to transition to states G through P, which represent the structure imposed by the stimulus. If no spikes are emitted within 5 (often fewer) sequential msec, the CSM goes back to the baseline state A.
FIG. 5 Stimulus model ISI distribution and time-dependent complexity and entropies. (A) ISI distribution and 99% confidence bounds. (B) First panel: firing probability (λ_CSM(t), λ_PSTH(t), λ_model(t)) as a function of time since stimulus presentation. Second panel: time-dependent complexity C(t). Third panel: time-dependent entropies J(t), R(t), and H(t). (C) The stimulus-driven entropy ΔH(t) is greater than 1 bit, indicating strong external drive. See text for discussion.
FIG. 6 14-state CSM reconstructed from 90 sec of spiking recorded from a spontaneously spiking (no stimulus) neuron located in layer II/III of rat barrel cortex. C = 1.02, J = 0.10, R = 0.005. State A (circle) is baseline 10 Hz spiking. States B through N comprise a refractory/bursting chain similar to, but with a somewhat more intricate structure than, that of the model neuron in Figure 2.
FIG. 7 Spontaneously spiking barrel cortex neuron. (A) ISI distribution and 99% bootstrapped confidence bounds. (B) First panel: time-dependent firing probability (λ_CSM(t), λ_PSTH(t)) as a function of time since the most recent spike; see text for explanation of the discrepancy between the CSM and PSTH spike probabilities. Second panel: complexity C(t) as a function of time since the most recent spike. Third panel: entropy rates J(t), R(t), and H(t) as a function of time since the most recent spike.
FIG. 8 29-state CSM reconstructed from 335 seconds of spikes recorded from a layer II/III barrel cortex neuron undergoing periodic (125 msec inter-stimulus interval) stimulation via vibrissa deflection. C = 1.97, J = 0.11, R = 0.004. Most of the states are devoted to refractory/bursting behavior; however, states "C1", "C2" and "ZZ" represent the structure imposed by the external stimulus. See text for discussion.
FIG. 9 Stimulated barrel cortex neuron ISI distribution and time-dependent complexity and entropies. (A) ISI distribution and 99% confidence bounds. (B) First panel: firing probability (λ_CSM(t), λ_PSTH(t)) as a function of time since stimulus presentation. Second panel: time-dependent complexity C(t). Third panel: time-dependent entropies J(t), R(t), and H(t). (C) The stimulus-driven entropy ΔH(t) (maximum of approximately 0.02 bits/msec) is low because the number of spikes per stimulus (approximately 0.1–0.2) is very low, and hence the stimulus does not supply much information.
FIG. 10 Firing probability (λ_CSM(t), λ_PSTH(t)), complexity C(t), and entropies J(t), R(t), and H(t) of the stimulated barrel cortex neuron as a function of time since the most recent spike.
FIG. 11 Cross-validation of CSMs reconstructed from spontaneously firing and stimulus-evoked rat barrel cortex data on independent validation sets. (A, B) ISI distributions of the spontaneously firing and stimulus-evoked validation sets, with 99% confidence bounds bootstrapped from the CSM. (C, D) Time-rescaling (KS) plots of the training data sets for spontaneously firing and stimulus-evoked firing, respectively. Dashed lines are 95% confidence bounds and the solid curve is the empirical CDF of the rescaled ISIs; the straight line along the diagonal is for visual comparison to an ideal fit. (E, F) Similar time-rescaling plots for the validation data sets.