Record 951 — journal.pcbi.1004999 (2016): Identification of Conserved Moieties in Metabolic Networks by Graph Theoretical Analysis of Atom Transition Networks

Conserved moieties give rise to pools of metabolites with constant total concentration and dependent individual concentrations. These constant metabolite pools often consist of highly connected cofactors that are distributed throughout a metabolic network. Representative examples from energy metabolism include the AMP and NAD moieties [1, 2]. Changes in concentration ratios within these cofactor pools affect thermodynamic and mass action kinetic driving forces for all reactions they participate in. Moiety conservation therefore imposes a purely physicochemical form of regulation on metabolism that is mediated through changes in concentration ratios within constant metabolite pools. Reich and Sel'kov likened conserved moieties to turning wheels that are "geared into a clockwork" [2]. They described the thermodynamic state of energy metabolism as "open flow through a system closed by moiety conservation". Identification of conserved moieties in metabolic networks has helped elucidate complex metabolic phenomena, including synchronisation of glycolytic oscillations in yeast cell populations [3] and the function of glycosomes in the African sleeping sickness parasite Trypanosoma brucei [4]. It has also been shown to be relevant for drug development [4, 5].

Identification of conserved moieties has been of interest to the metabolic modelling community for several decades [6, 7]. It is particularly important for dynamic modelling [8] and metabolic control analysis [9], where metabolite concentrations are explicitly modelled. Moiety conservation relations provide a sparse, physically meaningful description of concentration dependencies in a metabolic network. They can be used to eliminate redundant metabolite concentrations, as the latter can be derived from the set of independently varying metabolite concentrations. Doing so facilitates simulation of metabolic networks and is in fact required for many computational modelling methods [6, 7].

Mathematically, moiety conservation gives rise to a stoichiometric matrix with linearly dependent rows. The left null space of the stoichiometric matrix therefore has nonzero dimension (see Theoretical Framework, Section Moiety vectors). Vectors in the left null space, hereafter referred to as conservation vectors, can be divided into several interrelated sets based on their numerical properties and biochemical meaning (Fig 1). Moiety vectors constitute a subset of conservation vectors with a distinct biochemical interpretation. Each moiety vector represents conservation of a particular metabolic moiety. Elements of a moiety vector correspond to the number of instances of a conserved moiety in metabolites of a metabolic network. As moieties are discrete quantities, moiety vectors are necessarily nonnegative integer vectors.

Methods exist to compute conservation vectors based only on the stoichiometric matrix of a metabolic network. These methods compute different types of bases for the left null space of the stoichiometric matrix (see S1 Appendix for mathematical definitions). Each method draws basis vectors from a particular set of conservation vectors (Fig 1). There is a tradeoff between the computational complexity of these methods and the biochemical interpretability of the basis vectors they return. At the low end of the computational complexity spectrum are linear algebraic methods such as singular value decomposition. Other methods, such as Householder QR factorisation [7] or sparse LU factorisation [10], are more efficient for large stoichiometric matrices. These methods construct a linear basis for the left null space from real-valued conservation vectors. Though readily computed, these vectors are also the most difficult to interpret, as they generally contain negative and noninteger elements.

Schuster and Höfer [11] introduced the use of vertex enumeration algorithms to compute the extreme rays of the positive orthant of the left null space. They referred to these extreme rays as "extreme semipositive conservation relations". Famili and Palsson [12] later referred to them as "metabolic pools" and to the set of all extreme rays as "a convex basis for the left null space". Like moiety vectors, extreme rays are nonnegative integer vectors. They are therefore readily interpreted in terms of constant metabolite pools. However, extreme rays can currently only be computed for relatively small metabolic networks, due to the computational complexity of vertex enumeration algorithms [13]. Moreover, the set of extreme rays is not identical to the set of moiety vectors (Fig 1). Schuster and Hilgetag [14] presented examples of extreme rays that did not represent moiety conservation relations, as well as moiety vectors that were not extreme rays. Moiety vectors are a property of a metabolic network, while extreme rays are a property of its stoichiometric matrix. Multiple metabolic networks could in theory have the same stoichiometric matrix, despite consisting of different sets of metabolites and reactions. These networks would all have the same set of extreme rays, but could have different sets of moiety vectors. Schuster and Hilgetag [14] published an extension of the vertex enumeration algorithm in [11] to compute the set of all nondecomposable nonnegative integer vectors in the left null space of a stoichiometric matrix. This set is guaranteed to contain all nondecomposable moiety vectors for a particular metabolic network as a subset (Fig 1). However, it is impossible to identify the subset of moiety vectors without information about the atomic structure of metabolites. Alternatives to vertex enumeration have been proposed to speed up computation of biochemically meaningful conservation vectors, e.g., [15–17].
Most recently, De Martino et al. [17] published a novel method to compute a nonnegative integer basis for the left null space of a stoichiometric matrix. This method [17] relies on stochastic algorithms without guaranteed convergence, but these were empirically shown to perform well even on large networks. Like extreme rays, the nonnegative integer vectors computed with this method are not necessarily moiety vectors (Fig 1). In general, methods to analyse stoichiometric matrices are not suited to specifically compute moiety vectors.

Computation of moiety vectors requires information about the atomic composition of metabolites. To our knowledge, only one method has previously been published to specifically compute moiety vectors for metabolic networks [18]. This method was based on nonnegative integer factorisation of the elemental matrix, a numerical representation of metabolite formulas. Nonnegative integer factorisation of a matrix is at least NP-hard [19] and no polynomial time algorithm is known to exist for this problem. Moreover, only the chemical formula, but not the atomic identities, of the conserved moieties can be derived from this approach. Identifying the atoms that belong to each moiety requires additional information about the fate of atoms in metabolic reactions. This information is not contained in a stoichiometric matrix.

Here, we propose a novel method to identify conserved moieties in metabolic networks. Our method is based on the premise that atoms within the same conserved moiety follow identical paths through a metabolic network. Given data on which substrate atoms map to which product atoms in each metabolic reaction, the paths of individual atoms through a metabolic network can be encoded in an atom transition network. Until recently, the necessary data were difficult to obtain, but relatively efficient algorithms have now become available to predict atom mappings in metabolic reactions [20–22]. These algorithms have made it possible to construct atom transition networks for large metabolic networks. Unlike metabolic networks, atom transition networks are amenable to analysis with efficient graph theory algorithms. Here, we take advantage of this fact to identify conserved moieties in metabolic networks in polynomial time. Furthermore, starting from atom transition networks allows us to associate each conserved moiety with a specific group of atoms in a subset of metabolites in a metabolic network.

This work combines elements of biochemistry, linear algebra and graph theory. We have made an effort to accommodate readers from all fields. The main text consists of informal descriptions of our methods and results, accompanied by illustrative examples and a limited number of mathematical equations. Formal definitions of italicised terms are given in supporting file S1 Appendix. We precede our results with a section on the theoretical framework for this work, where we introduce key concepts and notation used in the remainder of the text.

A metabolic network consists of a set of metabolites that interconvert via a set of metabolic reactions. Metabolic networks in living beings are open systems that exchange mass and energy with their environment. For modelling purposes, the boundary between system and environment can be defined by introducing a set of metabolite sources and sinks, collectively known as exchange reactions. Unlike internal reactions, exchange reactions are artificial constructs that do not conserve mass or charge.
The topology of a metabolic network can be represented in several ways. Here, we use metabolic maps and stoichiometric matrices. A metabolic map for a small example metabolic network is shown in Fig 2. This example will be used throughout this section to demonstrate key concepts relevant to this work. A stoichiometric matrix for an open metabolic network with m metabolites and n reactions is denoted by S ∈ R^{m×n}. Each row of S represents a metabolite and each column a reaction, such that element S_{ij} is the stoichiometric coefficient of metabolite i in reaction j. Coefficients are negative for substrates and positive for products. Substrates and products in reversible reactions are defined by designating one direction as forward. The stoichiometric matrix can be written as

    S = [N, B],    (1)

where N ∈ Z^{m×u} consists of columns representing internal (mass balanced) reactions and B ∈ R^{m×(n−u)} consists of columns representing exchange reactions (mass imbalanced). Note that N represents a metabolic network that is closed to the environment. In what follows we will refer to N as the internal stoichiometric matrix, B as the exchange stoichiometric matrix, and S as the total stoichiometric matrix. The total stoichiometric matrix for the example metabolic network in Fig 2 is given in Table 1.

Stoichiometric matrices are incidence matrices for generalised graphs known as hypergraphs [24]. Hypergraphs contain hyperedges that can connect more than two nodes. The metabolic map in Fig 2 is a planar visualisation of a hypergraph with one hyperedge connecting four metabolites. A graph edge that only connects two nodes is a special instance of a hyperedge. Apart from the occasional isomerisation reaction, metabolic reactions involve more than two metabolites. As a result, they cannot be represented as graph edges without loss of information. Metabolic networks are therefore represented as hypergraphs, where nodes represent metabolites and hyperedges represent reactions. Since reactions have a designated forward direction, they are directed hypergraphs. Representing metabolic networks as hypergraphs has the advantage of conserving basic structure and functional relationships. The disadvantage is that many graph theoretical algorithms are not applicable to hypergraphs [24].

An internal stoichiometric matrix N ∈ Z^{m×u} for a closed metabolic network is always row-rank deficient, i.e., rank(N) < m [11]. The left null space of N, denoted by N(N^T), therefore has finite dimension, given by dim(N(N^T)) = m − rank(N). The left null space holds all conservation vectors for a stoichiometric matrix [8]. The number of linearly independent conservation vectors for a closed metabolic network is dim(N(N^T)). The total stoichiometric matrix S for an open metabolic network has a greater rank than the internal stoichiometric matrix N for the corresponding closed metabolic network (e.g., Table 1), i.e., rank(N) < rank(S). Consequently, dim(N(S^T)) < dim(N(N^T)), meaning that there are fewer linearly independent conservation vectors for an open metabolic network than for the corresponding closed network. This is consistent with physical reality, since mass can flow into and out of open networks but is conserved within closed networks. All quantities that are conserved in an open metabolic network are also conserved in the corresponding closed network. That is, if z is a conservation vector for an open metabolic network S, such that S^T z = 0, then z is also a conservation vector for the corresponding closed network N, and N^T z = 0, since S = [N, B]. The set of conservation relations for an open network is therefore a subset of all conservation relations for the corresponding closed network, i.e., N(S^T) ⊆ N(N^T). In what follows we will mainly be concerned with the larger set of conservation relations for a closed metabolic network.

Schuster and Hilgetag [14] defined a moiety vector l_1 as a nonnegative integer vector in the left null space of a stoichiometric matrix, i.e.,

    N^T l_1 = 0,    (2)
    l_1 ∈ N_0^m.    (3)

In addition, they defined l_1 to be a maximal moiety vector if it cannot be decomposed into two other vectors l_2 and l_3 that satisfy Eqs 2 and 3, i.e., if

    l_1 ≠ α_2 l_2 + α_3 l_3,    (4)

where α_2, α_3 ∈ N_+. We propose a more specific definition. The properties above define increasingly small sets of conservation vectors (Fig 1). Eq 2 defines the set of all conservation vectors. Addition of Eq 3 defines the set of nonnegative integer conservation vectors, and addition of Eq 4 defines the set of nonnegative integer conservation vectors that are nondecomposable. Although this set includes all nondecomposable moiety vectors as a subset, it is not equivalent (Fig 1). To define the set of moiety vectors we require a fourth property. We define l_1 to be a moiety vector if it satisfies Eqs 2 and 3 and represents conservation of a specific metabolic moiety, i.e., an identifiable group of atoms in network metabolites. Element l_{1,i} should correspond to the number of instances of the conserved moiety in metabolite i.
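For readers who want to experiment with these definitions, the sketch below computes a basis for the left null space N(N^T) by singular value decomposition and checks Eqs 2 and 3 for a candidate vector. It is a minimal illustration in base R on a hypothetical three-metabolite toy network, not the networks or code from this paper.

```r
# Basis for the left null space N(N^T) of an internal stoichiometric matrix N,
# taken from the left singular vectors associated with zero singular values.
left_null_basis <- function(N, tol = 1e-10) {
  sv <- svd(N, nu = nrow(N))        # full set of left singular vectors
  r  <- sum(sv$d > tol)             # numerical rank of N
  sv$u[, seq_len(nrow(N) - r) + r, drop = FALSE]
}

# Hypothetical toy network, closed to the environment: A -> B -> C.
N <- matrix(c(-1, 1, 0,
               0, -1, 1), nrow = 3)
Z <- left_null_basis(N)
ncol(Z) == nrow(N) - qr(N)$rank     # TRUE: dim(N(N^T)) = m - rank(N)

# Check Eqs 2 and 3 for a candidate moiety vector (total mass of A, B, C).
l1 <- c(1, 1, 1)
all(abs(t(N) %*% l1) < 1e-10) && all(l1 >= 0) && all(l1 == round(l1))  # TRUE
```

In this toy network the single left null space dimension corresponds to conservation of total mass, consistent with the statement above that mass is conserved within closed networks.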
We define l_1 to be a nondecomposable moiety vector if it satisfies condition 4, and a composite moiety vector if it does not. Nondecomposable moiety vectors for the DOPA decarboxylase reaction from the example metabolic network in Fig 2 are given in Table 2a. For comparison, conservation vectors computed with existing methods for conservation analysis of metabolic networks are given in Table 2b–2d. In general, these vectors do not represent moiety conservation.

Metabolic reactions conserve mass and chemical elements. Therefore, there must exist a mapping from each atom in a reactant metabolite to a single atom of the same element in a product metabolite. An atom transition is a single mapping from a substrate to a product atom. An atom transition network contains information about all atom transitions in a metabolic network. It is a mathematical structure that enables one to trace the paths of each individual atom through a metabolic network. An atom transition network can be generated automatically from a stoichiometric matrix for a metabolic network and atom mappings for internal reactions. The atom transition network for the DOPA decarboxylase reaction from the example metabolic network in Fig 2 is shown in Fig 3. Unlike metabolic networks, atom transition networks are graphs, since every atom transition (edge) connects exactly two atoms (nodes). They are directed graphs, since every atom transition has a designated direction that is determined by the directionality of the parent metabolic reaction, i.e., the designation of substrates and products. Because atom transition networks are graphs, they are amenable to analysis with efficient graph algorithms that are not generally applicable to metabolic networks due to the presence of hyperedges [24].

We will demonstrate our method by identifying conserved moieties in the simple dopamine synthesis network DAS in Fig 4. This network consists of 11 metabolites, four internal reactions and seven exchange reactions. The total stoichiometric matrix S = [N, B] is given in Table 3. The internal stoichiometric matrix N is row-rank deficient, with rank(N) = 4.
The dimension of the left null space is therefore dim(N(N^T)) = 7, meaning that there are seven linearly independent conservation vectors for the closed metabolic network. Our analysis of an atom transition network for DAS will conclude with the computation of seven linearly independent moiety vectors that span N(N^T). To compute these vectors we require the internal stoichiometric matrix in Table 3 and atom mappings for the four internal reactions. Here, we used algorithmically predicted atom mappings [20]. These data are required to generate an atom transition network for DAS (see Methods, Section Generation of atom transition networks). By graph theoretical analysis of this atom transition network we derive the first of two alternative representations of moiety conservation relations, which we term moiety graphs. Nodes in a moiety graph represent separate instances of a conserved moiety. Each node is associated with a specific set of atoms in a particular metabolite. The second representation of moiety conservation relations are the moiety vectors, which can be derived from moiety graphs in a straightforward manner. Moiety vectors computed with our method are therefore associated with specific atoms via moiety graphs.

To identify all conserved moieties in DAS we require an atom transition network for all atoms regardless of element, but for demonstration purposes we will initially focus only on carbon atoms. A carbon atom transition network for DAS is shown in Fig 5a. Our working definition of a conserved moiety is a group of atoms that follow identical paths through a metabolic network. To identify conserved moieties, we therefore need to trace the paths of individual atoms and determine which paths are identical. The paths of individual atoms through the carbon atom transition network for DAS can be traced by visual inspection of Fig 5a. For example, we can trace a path from C1 in L-phenylalanine to C7 in dopamine, via C3 in L-tyrosine and C8 in levodopa. This path is made up of atom transitions in reactions R1, R2 and R3 from Fig 4. In graph theory terms, these four carbon atoms and the atom transitions that connect them constitute a connected component [25] or, simply, a component of the directed graph representing the carbon atom transition network for DAS. A directed graph is said to be connected if a path exists between any pair of nodes when edge directions are ignored. A component of a directed graph is a maximal connected subgraph. In total, the carbon atom transition network for DAS in Fig 5a consists of 18 components.

The paths of the first eight carbon atoms (C1–C8) in L-phenylalanine are identical in the sense that they include the same number of atoms in each metabolite and the same number of atom transitions in each reaction. In graph theory terms, the components containing C1–C8 in L-phenylalanine are isomorphic. An isomorphism between two graphs is a structure-preserving vertex bijection [25]. The definition of isomorphism varies for different types of graphs, as they have different structural elements that need to be preserved. An isomorphism between two simple graphs is a vertex bijection that preserves the adjacency and nonadjacency of every node, i.e., its connectivity. An isomorphism between two simple directed graphs must also preserve edge directions. We define an isomorphism between two components of an atom transition network as a vertex bijection that preserves the metabolic identity of every node. The nature of chemical reactions ensures that all other structural elements are preserved along with metabolic identities, including the connectivity of atoms and the number, directions and reaction identities of atom transitions.

The 18 components of the carbon atom transition network for DAS in Fig 5a can be divided into three sets, where every pair of components within each set is isomorphic. An isomorphism between two components of an atom transition network is a one-to-one mapping between atoms in the two components. For example, the isomorphism between the two left-most components in Fig 5a maps between C1 and C2 in L-phenylalanine, C3 and C2 in L-tyrosine, C8 and C7 in L-DOPA, and C7 and C8 in dopamine. We say that two atoms are equivalent if an isomorphism maps between them. We note that our definition of isomorphism only allows mappings between atoms with the same metabolic identity. Two atoms can therefore only be equivalent if they are in the same metabolite. Equivalent atoms follow identical paths through a metabolic network and therefore belong to the same conserved moiety. In general, we define a conserved moiety to be a maximal set of equivalent atoms in an atom transition network. To identify conserved moieties, we must therefore determine isomorphisms between components of an atom transition network to identify maximal sets of equivalent atoms.

The first eight carbon atoms (C1–C8) in L-phenylalanine are equivalent. They are therefore part of the same conserved moiety, which we denote λ1. The last eight carbon atoms (C2–C9) in L-tyrosine are likewise part of the same conserved moiety. They make up another instance of the λ1 moiety. The λ1 moiety is conserved between L-phenylalanine and L-tyrosine in reaction R1, between L-tyrosine and levodopa in reaction R2, and between levodopa and dopamine in reaction R3. Each of the four metabolites contains one instance of the λ1 moiety. The path of this moiety through DAS defines its conservation relation. This brings us to our first representation of moiety conservation relations, which we term moiety graphs. Moiety graphs are obtained from atom transition networks by merging a set of isomorphic components into a single graph. Moiety graphs for the three carbon atom moieties in DAS are shown in Fig 5b. Four additional moieties were identified by analysis of an atom transition network for DAS that included all atoms regardless of element. All seven moiety graphs are shown in Fig 6. Atoms belonging to each node in the moiety graphs are highlighted in Fig 4.
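To make the graph operations concrete, the sketch below uses the igraph package in R to extract the weakly connected components of a small directed graph and to test two components for isomorphism. The four-transition edge list is hypothetical, and the plain isomorphic() call checks only graph structure; the isomorphism defined in this paper additionally requires that mapped atoms share the same metabolic identity, which a full implementation would have to enforce.

```r
library(igraph)

# Hypothetical atom transitions: two carbon atoms (C1, C2), each traced
# through metabolites A -> B -> C, giving two parallel paths.
g <- make_graph(c("A_C1", "B_C1",  "B_C1", "C_C1",
                  "A_C2", "B_C2",  "B_C2", "C_C2"),
                directed = TRUE)

comp <- components(g, mode = "weak")  # connectivity ignores edge directions
comp$no                               # 2 components, one per traced atom

sub1 <- induced_subgraph(g, which(comp$membership == 1))
sub2 <- induced_subgraph(g, which(comp$membership == 2))
isomorphic(sub1, sub2)                # TRUE: candidates for the same moiety
```

Components that pass both the structural test and the metabolite-identity check would then be merged into a single moiety graph, as described above.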
The second way to represent moiety conservation relations is as moiety vectors. Above, we defined a moiety vector as a conservation vector l_k where element l_{k,i} corresponds to the number of instances of moiety k in metabolite i of a metabolic network (see Section Moiety vectors in Theoretical Framework). We can now make this definition exact by relating moiety vectors to moiety graphs. Each instance of a conserved moiety is represented as a node in its moiety graph. Element l_{k,i} of a moiety vector therefore corresponds to the number of nodes in moiety graph λk that represent moieties in metabolite i. Moiety vectors are readily derived from moiety graphs by counting the number of nodes in each metabolite. Moiety vectors for DAS were derived from the moiety graphs in Fig 6. The seven moiety vectors are given as columns of the moiety matrix L ∈ Z^{11×7} in Table 4. These seven vectors are linearly independent and therefore span all seven dimensions of N(N^T). The moiety matrix L is therefore a moiety basis for the left null space.

Atom transition networks are generated from atom mappings for internal reactions of metabolic networks. However, atom mappings for metabolic reactions are not necessarily unique. Computationally predicted atom mappings, as used here, are always associated with some uncertainty. In addition, there can be biochemical variability in atom mappings, in particular for metabolites containing symmetric atoms. All reactions of the O2 molecule, for example, have at least two biochemically equivalent atom mappings, since the two symmetric oxygen atoms map with equal probability to connected atoms. Different atom mappings give rise to different atom transition networks that may contain different moiety conservation relations. For the most part, we found that varying the set of input atom mappings did not affect the number of computed moiety conservation relations, only their atomic structure. An important exception was when atom mappings between the same pair of metabolites varied between reactions in the same metabolic network.

The same pair of metabolites often exchange atoms in multiple reactions throughout the same metabolic network. Common cofactors such as ATP and ADP, for example, exchange atoms in hundreds of reactions in large metabolic networks [26]. In the dopamine synthesis network DAS in Fig 4, O2 and H2O exchange an oxygen atom in two reactions, R1 and R2. Since the two oxygen atoms of O2 are symmetric, there are four possible combinations of oxygen atom mappings for these two reactions. Each combination gives rise to a different oxygen transition network, as shown in Fig 7. Two of these oxygen transition networks, shown in Fig 7a and 7b, contain two moiety conservation relations each, λ6 and λ7, which are shown in Fig 7c. The other two oxygen transition networks, shown in Fig 7d and 7e, contain only one moiety conservation relation each, λ8, which is shown in Fig 7f. The DAS atom transition network considered in the previous section was generated with the oxygen atom mappings in Fig 7a and thus contained the two moiety conservation relations λ6 and λ7 (see Fig 6). An atom transition network generated with the atom mappings in Fig 7d or 7e would contain the single moiety conservation relation λ8 instead of these two. What distinguishes the oxygen transition networks in Fig 7d and 7e is that the oxygen atom in O2 that maps to H2O varies between the two reactions R1 and R2. The atom transition network for DAS therefore contains one less moiety conservation relation if the atom mapping between this recurring metabolite pair varies between reactions. The moiety matrix for these alternative atom transition networks,

    L = [l_1, l_2, l_3, l_4, l_5, l_8],    (5)

only contains six linearly independent columns and is therefore not a basis for the seven-dimensional left null space of N. The vector representation of moiety graph λ8 is

    l_8^T = [0 1 2 2 0 0 0 0 2 1 0].    (6)

We note that l_8 = l_6 + l_7, where

    l_6^T = [0 1 2 2 0 0 0 0 1 0 0],    (7)
    l_7^T = [0 0 0 0 0 0 0 0 1 1 0],    (8)

from Table 4. The moiety vector l_8 therefore represents a composite moiety. It does not meet the definition of a nondecomposable moiety vector in Eq 4. This example shows that variable atom mappings between recurring metabolite pairs may cause multiple nondecomposable moiety conservation relations to be joined together into a single composite moiety conservation relation. We formulated an optimisation problem, described in Methods, Section Decomposition of moiety vectors, to decompose composite moiety vectors. Solving this problem for the composite moiety vector l_8 yields the two nondecomposable components l_6 and l_7.
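The composite relation in Eqs 6–8 can be checked directly; the few lines below simply verify in R that l_8 decomposes into l_6 and l_7 and that both components satisfy the nonnegativity condition of Eq 3. General decomposition requires the optimisation problem described in Methods, which is not reproduced here.

```r
l6 <- c(0, 1, 2, 2, 0, 0, 0, 0, 1, 0, 0)  # Eq 7
l7 <- c(0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0)  # Eq 8
l8 <- c(0, 1, 2, 2, 0, 0, 0, 0, 2, 1, 0)  # Eq 6

all(l8 == l6 + l7)       # TRUE: l8 is composite, with alpha2 = alpha3 = 1 in Eq 4
all(l6 >= 0, l7 >= 0)    # TRUE: both components satisfy Eq 3
```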
We applied our method to identify conserved moieties in three metabolic networks of increasing size. The networks, listed from smallest to largest, were the dopamine synthesis network DAS in Fig 4, the E. coli core metabolic network iCore [27], and an atom mapped subset of the generic human metabolic reconstruction Recon 2 [26], which we refer to here as subRecon. The dimensions of the three networks are given in Table 5a. Further descriptions are provided in Methods, Section Metabolic networks. There are seven linearly independent conservation relations for the closed DAS network, 11 for iCore, and 351 for subRecon. Atom transition networks were generated using algorithmically predicted atom mappings [20], as described in Methods, Section Generation of atom transition networks. Seven, ten and 345 moiety conservation relations were identified in the predicted atom transition networks for DAS, iCore and subRecon, respectively (Table 5b).

Characterisation of identified moieties revealed some trends (Fig 8). We found a roughly inverse relationship between the frequency of a moiety, defined as the number of instances, and the size of that moiety, defined as the number of atoms per instance. We also found a relationship between moiety size, frequency and classification. Internal moieties tended to be large and infrequent, occurring in a small number of closely related secondary metabolites, e.g., the 35-atom AMP moiety found in the three iCore metabolites AMP, ADP and ATP. Integrative moieties were usually small and frequent, while transitive moieties were intermediate in both size and frequency. The smallest moieties consisted of single atoms. These were often highly frequent, occurring in up to 62/72 iCore metabolites and 2,472/2,970 subRecon metabolites. These results indicate a remarkable interconnectivity between metabolites at the atomic level. Due to their frequency, single atom moieties accounted for a large portion of atoms in each metabolic network: nearly half (791/1,697) of all atoms in iCore, and approximately two thirds (104,268/153,298) of all atoms in subRecon.

Moiety matrices derived from the predicted atom transition networks for iCore and subRecon did not span the left null spaces of their respective stoichiometric matrices, indicating that they might contain composite moiety vectors. Using the method described in Methods, Section Decomposition of moiety vectors, we found two composite moiety vectors in the moiety matrix for iCore, and 10 in the one for subRecon. Decomposition of these vectors yielded three new nondecomposable moiety vectors for iCore and 18 for subRecon (Table 5b). The 11 nondecomposable moiety vectors for iCore were linearly independent. They therefore comprised a basis for the 11-dimensional left null space of N for iCore. The 353 nondecomposable moiety vectors for subRecon, on the other hand, were not linearly independent and only spanned 347 out of 351 dimensions in the left null space of S for subRecon. This indicated that there existed conservation relations for subRecon that were independent of atom conservation. Schuster and Höfer, citing earlier work by Aris [28] and Corio [29], noted the importance of considering electron conservation in addition to atom conservation [11]. Unfortunately, it is not as straightforward to map electrons as atoms, and no formalism currently exists for electron mappings. As a result, electron conservation relations cannot be computed with the current version of our algorithm. We therefore computed electron conservation relations for subRecon by decomposing the electron vector with the method described in Methods, Section Decomposition of moiety vectors. An electron vector for a metabolic network with m metabolites is a vector e ∈ N^m, where e_i is the total number of electrons in metabolite i. Decomposition of e for subRecon yielded 11 new conservation vectors. When combined, the 11 electron vectors and the 353 fully decomposed moiety vectors for subRecon (Table 5b) spanned the left null space of the subRecon stoichiometric matrix. Internal moieties define pools of metabolites with constant total concentration…

Headings: Introduction, Theoretical Framework, Results, Discussion, Methods

Abstract: Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry, but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. We also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.

Summary: Conserved moieties are transferred between metabolites in internal reactions of a metabolic network but are not synthesised, degraded or exchanged with the environment. The total amount of a conserved moiety in the metabolic network is therefore constant over time. Metabolites that share a conserved moiety have interdependent concentrations because their total amount is constant. Identification of conserved moieties results in a concise description of all concentration dependencies in a metabolic network. The problem of identifying conserved moieties has previously been formulated in terms of the stoichiometry of metabolic reactions. Methods based on this formulation are computationally intractable for large networks. We show that reaction stoichiometry alone gives insufficient information to identify conserved moieties. By first incorporating additional data on the fate of atoms in metabolic reactions, we developed and implemented a computationally tractable algorithm to identify conserved moieties and their atomic structure.

Keywords: oxygen, applied mathematics, metabolic networks, simulation and modeling, algorithms, mathematics, metabolites, algebra, network analysis, carbon, molecular biology techniques, research and analysis methods, cell labeling, computer and information sciences, vector spaces, chemistry, graph theory, molecular biology, biochemistry, metabolic labeling, linear algebra, biology and life sciences, physical sciences, metabolism, chemical elements
Record 2,439 — journal.pcbi.1002875 (2013): Significance Analysis of Prognostic Signatures

The identification of pathways that predict prognosis in cancer is important for enhancing our understanding of the biology of cancer progression and for identifying new therapeutic targets. There are three widely recognized breast cancer molecular subtypes, "luminal" (ER+/HER2−) [1–4], "HER2-enriched" (HER2+) [5, 6] and "basal-like" (ER−/HER2−) [6–9], and a considerable body of work has focused on defining prognostic signatures in these [10, 11]. Several groups have analyzed prognostic biological pathways across breast cancer molecular subtypes [12–14]; a tacit assumption is that if a gene signature is associated with prognosis, it is likely to encode a biological signature driving carcinogenesis. Recent work by Venet et al. has questioned the validity of this assumption by showing that most random gene sets are able to separate breast cancer cases into groups exhibiting significant survival differences [15]. This suggests that it is not valid to infer the biologic significance of a gene set in breast cancer based on its association with breast cancer prognosis and, further, that new rigorous statistical methods are needed to identify biologically informative prognostic pathways. To this end, we developed Significance Analysis of Prognostic Signatures (SAPS). The score derived from SAPS summarizes three distinct significance tests related to a candidate gene set's association with patient prognosis. The statistical significance of the SAPSscore is estimated using an empirical permutation-based procedure to estimate the proportion of random gene sets achieving at least as significant a SAPS score as the candidate prognostic gene set. We apply SAPS to a large breast cancer meta-dataset and identify prognostic gene sets in breast cancer overall, as well as within breast cancer molecular subtypes. Only a small subset of the gene sets that achieve statistical significance using standard statistical measures achieves significance using SAPS. Further, the gene sets identified by SAPS provide new insight into the mechanisms driving breast cancer development and progression. To assess the generalizability of SAPS, we apply it to a large ovarian cancer meta-dataset and identify significant prognostic gene sets. Lastly, we compare prognostic gene sets in breast and ovarian cancer molecular subtypes, identifying a core set of shared biological signatures driving prognosis in ER+ breast cancer molecular subtypes, a distinct core set of signatures associated with prognosis in ER− breast cancer and ovarian cancer molecular subtypes, and a set of signatures associated with improved prognosis across breast and ovarian cancer.

The assumption behind SAPS is that, for a prognostic association to indicate the biological significance of a gene set, the gene set should achieve three distinct and complementary objectives. First, the gene set should cluster patients into groups that show survival differences. Second, the gene set should perform significantly better than random gene sets at this task, and third, the gene set should be enriched for genes that show strong univariate associations with prognosis. To achieve this end, SAPS computes three P-values (Ppure, Prandom, and Penrichment) for a candidate prognostic gene set. These individual P-values are summarized in the SAPSscore. The statistical significance of the SAPSscore is estimated by permutation testing involving permuting the gene labels (Figure 1).

To compute the Ppure, we stratify patients into two groups by performing k-means clustering (k = 2) of an n×p data matrix consisting of the n patients in the dataset and the p genes in the candidate prognostic gene set. We then compute a log-rank P-value to indicate the probability that the two groups of patients show no survival difference (Figure 1A). Next, we assess the probability that a random gene set would perform as well as the candidate gene set in clustering cases into prognostically variable groups. This P-value is the Prandom. To compute the Prandom, we randomly sample genes to create random gene sets of similar size to the candidate gene set. We randomly sample r such gene sets, and for each random gene set we determine a Ppure using the procedure described above. The Prandom is the proportion of these random-set Ppure values at least as significant as the true observed Ppure for the candidate gene set (Figure 1B).
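As a rough illustration of these two tests (not the authors' code), the R sketch below computes a Ppure by 2-means clustering followed by a log-rank test, and a Prandom by repeating the procedure on size-matched random gene sets; expr (a patients × genes matrix), time, event and gene_set are assumed inputs.

```r
library(survival)

p_pure <- function(expr, time, event, gene_set) {
  cl <- kmeans(expr[, gene_set, drop = FALSE], centers = 2)$cluster
  lr <- survdiff(Surv(time, event) ~ cl)   # log-rank test on the two clusters
  1 - pchisq(lr$chisq, df = 1)
}

p_random <- function(expr, time, event, gene_set, r = 1000) {
  obs  <- p_pure(expr, time, event, gene_set)
  null <- replicate(r, p_pure(expr, time, event,
                              sample(colnames(expr), length(gene_set))))
  mean(null <= obs)   # proportion of random sets at least as significant
}
```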
Third, we compute the Penrichment to indicate whether a candidate gene set is enriched for prognostic genes. While the procedure to compute the Ppure uses the label determined by k-means clustering with a candidate gene set as a binary feature to correlate with survival, the procedure to compute the Penrichment uses the univariate prognostic association of genes within a candidate gene set to produce a gene set enrichment score indicating the degree to which the gene set is enriched for genes that show strong univariate associations with survival (Figure 1C). To compute the Penrichment, we first rank all the genes in our meta-dataset according to their concordance index, using the function concordance.index in the survcomp package in R [16]. The concordance index of a gene represents the probability that, for a pair of patients randomly selected in our dataset, the patient whose tumor expresses that gene at a higher level will experience the appearance of distant metastasis or death before the other patient. Based on this genome-wide ranking we perform a pre-ranked GSEA [17, 18] to identify the candidate gene sets that are significantly enriched in genes with either significantly low or high concordance indices. The GSEA procedure for SAPS has two basic steps. First, an enrichment score is computed to indicate the overrepresentation of a candidate gene set at the top or bottom extremes of the ranked list of concordance indices. This enrichment score is normalized to account for the candidate gene set's size. Second, the statistical significance of the normalized enrichment score is estimated by permuting the genes to generate the Penrichment (see Refs. [17, 18] for further description of the pre-ranked GSEA procedure), which indicates the probability that a similarly sized random gene set would achieve at least as extreme a normalized enrichment score as the candidate gene set (Figure 1C).

The SAPSscore for each candidate gene set is then computed as the negative log10 of the maximum of (Ppure, Prandom, Penrichment), multiplied by the direction of the association (positive or negative) (Figure 1D). For a given candidate gene set, the SAPSscore therefore specifies the direction of the prognostic association and reflects the raw P-values achieved on all three tests. Since we take the negative log10 of the maximum of (Ppure, Prandom, Penrichment), the larger the absolute value of the SAPSscore, the more significant the prognostic association on all three P-values. The statistical significance of the SAPSscore is determined by permuting genes, generating a null distribution for the SAPSscore, and computing the proportion of similarly sized gene sets from the null distribution achieving at least as large an absolute SAPSscore as that observed with the candidate gene set. When multiple candidate gene sets are evaluated, after generating each gene set's raw SAPS P-value by permutation testing, we account for multiple hypotheses and control the false discovery rate using the method of Benjamini and Hochberg [19] to generate the SAPS q-value (Figure 1E). In our experiments, we have required a minimum absolute SAPSscore greater than 1.3 and a maximum SAPS q-value less than 0.05 to consider a gene set prognostically significant. These thresholds ensure that a significant prognostic gene set will have achieved a raw P-value of at most 0.05 for each of Ppure, Prandom and Penrichment, and an overall SAPS q-value of at most 0.05.
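A minimal sketch of how the three P-values might be combined into a SAPSscore and then adjusted across gene sets, under the definitions above; the numeric example and variable names are illustrative, not taken from the paper.

```r
saps_score <- function(p_pure, p_random, p_enrichment, direction) {
  # direction is +1 (poor prognosis) or -1 (good prognosis)
  -log10(max(p_pure, p_random, p_enrichment)) * direction
}

saps_score(0.001, 0.02, 0.04, direction = +1)  # ~1.40, driven by the worst P-value

# After permutation P-values have been computed for every candidate gene set:
# saps_q <- p.adjust(saps_p, method = "BH")    # Benjamini-Hochberg FDR control
```

Note that the |SAPSscore| > 1.3 threshold corresponds to requiring max(Ppure, Prandom, Penrichment) ≤ 0.05, since −log10(0.05) ≈ 1.3.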
We chose two model systems to investigate the performance of SAPS. The first is a curated sample of breast cancer datasets previously described in Haibe-Kains et al. [20]. Our analysis focused on nineteen datasets with patient survival information (total n = 3832) (Table S1). The second was a compendium of twelve ovarian cancer datasets with survival data, as described in Bentink et al. [21], which includes data from 1735 ovarian cancer patients for whom overall survival data were available (Table S2). In breast cancer, we used SCMGENE [20], as implemented in the R/Bioconductor genefu package [22], to assign patients to one of four molecular subtypes: ER+/HER2− low proliferation, ER+/HER2− high proliferation, ER−/HER2− and HER2+. In ovarian cancer, we used the ovcAngiogenic model [21], as implemented in genefu, to classify patients as having disease of either angiogenic or non-angiogenic subtype.

One challenge in the analysis of large published datasets is the heterogeneity of the platforms used to collect data (see Table S1 and Table S2). To standardize the data, we used normalized log2(intensity) for single-channel platforms and log2(ratio) for dual-channel platforms. Hybridization probes were mapped to Entrez GeneIDs as described in Shi et al. [23], using RefSeq and Entrez whenever possible; otherwise mapping was performed using IDconverter (http://idconverter.bioinfo.cnio.es) [24]. When multiple probes mapped to the same Entrez GeneID, we used the one with the highest variance in the dataset under study.

To allow for simultaneous analysis of datasets from multiple institutions, we tested two data merging protocols. First, we scaled and centered each expression feature across all patients in each dataset (standard Z scores), and we merged the scaled data from the different datasets ("traditional scaling"). In a second scaling procedure, we first assigned each patient in each dataset to a breast or ovarian cancer molecular subtype, using the SCMGENE [20] and ovcAngiogenic [21] models, respectively. We then scaled and centered each expression feature separately within a specific molecular subtype within each dataset, so that each expression value was transformed into a Z score indicating the level of expression within patients of a specific molecular subtype within a dataset ("subtype-specific scaling").

After merging datasets, we removed genes with missing data in more than half of the samples, and we removed samples that were missing data on more than half of the genes or for which there was no information on distant metastasis free survival (for breast) or overall survival (for ovarian). The resulting breast cancer dataset contained 2731 cases with 13091 unique Entrez gene IDs, and the ovarian cancer dataset contained 1670 cases and 11247 unique Entrez gene IDs. For each of these reduced data matrices, we estimated missing values using the function impute.knn in the impute package in R [25].

Given that breast cancer is an extremely heterogeneous disease with well-defined disease subtypes, and a primary objective of our work is to identify subtype-specific prognostic pathways in breast cancer, we focus our subsequent analyses in breast cancer on the subtype-specific scaled data. Given that ovarian cancer subtypes are more subtle and less well defined than breast cancer molecular subtypes, we focus our subsequent analyses in ovarian cancer on the traditionally scaled data. SAPS scores in breast and ovarian cancer generated from the two different scaling procedures showed moderate to strong correlation across the breast and ovarian cancer molecular subtypes.
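A minimal sketch of the subtype-specific scaling protocol described above, assuming expr (patients × genes), dataset and subtype vectors as inputs; this illustrates the idea rather than reproducing the authors' pipeline.

```r
subtype_scale <- function(expr, dataset, subtype) {
  stratum <- interaction(dataset, subtype, drop = TRUE)
  out <- expr
  for (s in levels(stratum)) {
    idx <- stratum == s
    # center and scale each gene (column) within one dataset-subtype stratum
    out[idx, ] <- scale(expr[idx, , drop = FALSE])
  }
  out
}
```

Traditional scaling corresponds to stratifying by dataset alone; scaling within dataset–subtype strata instead removes subtype-level expression shifts, so downstream survival associations within a subtype are not driven by subtype membership itself.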
We downloaded gene sets from the Molecular Signatures Database (MSigDB) [17] (http://www.broadinstitute.org/gsea/msigdb/collections.jsp) ("molsigdb.v3.0.entrez.gmt"). MSigDB contains 5 major collections (positional gene sets, curated gene sets, motif gene sets, computational gene sets, and GO gene sets), comprising a total of 6769 gene sets. We limited our analysis to gene sets with at most 250 genes and valid data for genes included in the meta-datasets, resulting in 5320 gene sets in the breast cancer analysis and 5355 in the ovarian cancer analysis.

We first applied SAPS to the entire collection of breast cancer cases independent of subtype. Of the 5320 gene sets evaluated, 1510 (28%) achieved a raw P-value of 0.05 by Ppure, 1539 (29%) by Penrichment, 755 (14%) by Prandom, 581 (11%) by all 3 raw P-values, and 564 (11%) of these were significant at a SAPS q-value of 0.05 (Figure 2). The top-ranked gene sets identified by SAPS and associated with poor prognosis in all breast cancers independent of subtype contained gene sets previously found to be associated with poor prognosis in breast cancer (Table 1). Thus it is not surprising that these emerged as the most significant, and this result serves as a measure of validation. We note that the list of top gene sets associated with poor breast cancer prognosis identified in our overall analysis includes the gene set VANTVEER_BREAST_CANCER_METASTASIS_DN, which according to the Molecular Signatures Database website is defined as "Genes whose expression is significantly and negatively correlated with poor breast cancer clinical outcome (defined as developing distant metastases in less than 5 years)." Our analysis suggests that this set of genes is positively correlated with poor breast cancer clinical outcome. Comparison of the gene list to the published "poor prognosis" gene list from van't Veer et al. [26] confirms that the gene list is mislabeled in the Molecular Signatures Database and is in fact the set of genes positively associated with metastasis in van't Veer et al. [26]. The top-ranking gene sets associated with good prognosis were not originally identified in breast cancers and represent a range of biological processes. Several were from analyses of hematolymphoid cells, including: genes down-regulated in monocytes isolated from peripheral blood samples of patients with mycosis fungoides compared to those from normal healthy donors, genes associated with the IL-2 receptor beta chain in T cell activation, and genes down-regulated in B2264-19/3 cells (primary B lymphocytes) within 60–180 min after activation of LMP1 (an oncogene encoded by Epstein-Barr virus). These gene sets suggest that specific subsets of immune system activation are associated with improved breast cancer prognosis, consistent with reports that the presence of infiltrating lymphocytes is predictive of outcome in many cancers.

We then applied SAPS to the ER+/HER2− high proliferation subtype. Of the 5320 gene sets evaluated, 1503 (28%) achieved a raw P-value of 0.05 by Ppure, 1667 (31%) by Penrichment, 1079 (20%) by Prandom, 675 (13%) by all 3 raw P-values, and all 675 of these were significant at a SAPS q-value of 0.05. The top-ranking gene sets by SAPSscore are associated with cancer and proliferation. One of the top-ranking gene sets was associated with Ki67, a well-known prognostic marker in Luminal B breast cancers [27]. Overall, the patterns of significance are highly similar to those seen in breast cancer analyzed independent of subtype (Figure 3, Table 2).

Next, we used SAPS to analyze the ER+/HER2− low proliferation samples. Of the 5320 gene sets evaluated, 494 (9%) achieved a raw P-value of 0.05 by Ppure, 1113 (21%) by Penrichment, 939 (18%) by Prandom, 303 (6%) by all 3 raw P-values, and all 303 of these were significant at a SAPS q-value of 0.05.
The top-ranking ER+/HER2− low proliferation prognostic gene sets by SAPSscore are also highly enriched for genes involved in proliferation (Figure 4, Table 3). Top-ranking gene sets associated with good prognosis include those highly expressed in lobular breast carcinoma relative to ductal carcinoma and inflammation-associated genes up-regulated following infection with human cytomegalovirus.

Then, we applied SAPS to the HER2+ subset. Of the 5320 gene sets evaluated, 1247 (23%) achieved a raw P-value of 0.05 by Ppure, 1425 (27%) by Penrichment, 683 (13%) by Prandom, 439 (8%) by all 3 raw P-values, and 342 (6%) of these were significant at a SAPS q-value of 0.05. Most of the top-ranking prognostic pathways in the HER2+ group by SAPSscore are associated with better prognosis and include several gene sets associated with inflammatory response (Figure 5, Table 4). A gene set containing genes down-regulated in multiple myeloma cell lines treated with the hypomethylating agents decitabine and trichostatin A was significantly associated with improved prognosis in HER2+ breast cancer. The top-ranking gene set associated with decreased survival is a hypoxia-associated gene set. Hypoxia is a well-known prognostic factor in breast cancer [28, 29], and our analysis suggests it shows a very strong association with survival in the HER2+ breast cancer molecular subtype.

Finally, we used SAPS to analyze the poor-prognosis "basal-like" subtype, which was classified as being ER−/HER2−. Of the 5320 gene sets evaluated, 786 (15%) achieved a raw P-value of 0.05 by Ppure, 1208 (23%) by Penrichment, 304 (6%) by Prandom, 126 (2%) by all 3 raw P-values, and 25 (0.5%) of these were significant at a SAPS q-value of 0.05. Top-ranking gene sets associated with poor survival include genes up-regulated in MCF7 breast cancer cells treated with the hypoxia mimetic DMOG, genes down-regulated in MCF7 cells after knockdown of HIF1A and HIF2A, genes regulated by hypoxia based on literature searches, genes up-regulated in response to both hypoxia and overexpression of an active form of HIF1A, and genes down-regulated in fibroblasts with defective XPC (an important DNA damage response protein) in response to cisplatin (Figure 6, Table 5). This analysis suggests that hypoxia-associated gene sets are key drivers of poor prognosis in the HER2+ and ER−/HER2− breast cancer subtypes. Interestingly, cisplatin is an agent with activity in ER−/HER2− breast cancer, and it has been suggested that ER−/HER2− breast cancers with defective DNA repair may show increased susceptibility to cisplatin [30].

Our analysis for ovarian cancer was similar to that for breast cancer. We began by applying SAPS to the entire collection of ovarian cancer samples independent of subtype. Of the 5355 gene sets evaluated, 1190 (22%) achieved a raw P-value of 0.05 by Ppure, 1391 (26%) by Penrichment, 755 (14%) by Prandom, 497 (9%) by all 3 raw P-values (Figure 7, Table 6), and all 497 of these were significant at a SAPS q-value of 0.05. The top gene sets are involved in stem cell-related pathways and pathways related to epithelial-mesenchymal transition, including genes up-regulated in HMLE cells (immortalized non-transformed mammary epithelium) after E-cadherin (CDH1) knockdown by RNAi, genes down-regulated in adipose tissue mesenchymal stem cells vs. bone marrow mesenchymal stem cells, genes down-regulated in medullary breast cancer relative to ductal breast cancer, genes down-regulated in basal-like breast cancer cell lines as compared to mesenchymal-like cell lines, genes up-regulated in metaplastic carcinoma of the breast subclass 2 compared to the medullary carcinoma subclass 1, and genes down-regulated in invasive ductal carcinoma compared to invasive lobular carcinoma.

We then analyzed the angiogenic subtype. Of the 5355 gene sets evaluated, 1153 (22%) achieved a raw P-value of 0.05 by Ppure, 1377 (26%) by Penrichment, 624 (12%) by Prandom, 371 (7%) by all 3 raw P-values (Figure 7, Table 6), and all of these were significant at a SAPS q-value of 0.05. Top-ranking gene sets associated with poor prognosis in the angiogenic subtype include a set of targets of miR-33 (Figure 8, Table 7). This microRNA has not previously been implicated in ovarian carcinogenesis. Other top hits include several immune response gene sets, which were associated with improved prognosis.

Finally, we analyzed the non-angiogenic subtype of ovarian cancer. Of the 5355 gene sets evaluated, 981 (18%) achieved a raw P-value of 0.05 by Ppure, 957 (18%) by Penrichment, 658 (12%) by Prandom, 261 (5%) by all 3 raw P-values (Figure 7, Table 6), and of these, 254 (5%) were significant at a SAPS q-value of 0.05 (Figure 9, Table 8). The top-ranked pathways associated with improved survival are immune-related gene sets and a gene set found to be negatively associated with metastasis in head and neck cancers.

To assess similarities and differences in prognostic pathways across breast and ovarian cancer molecular subtypes, we performed hierarchical clustering of the disease subtypes using SAPSscores. Specifically, we identified the 1300 gene sets with SAPS q-value ≤ 0.05 and absolute SAPSscore ≥ 1.3 in at least one of the breast and ovarian cancer molecular subtypes. We clustered the gene sets and disease subtypes using hierarchical clustering with complete linkage and distance defined as one minus the Spearman rank correlation (Figure 10). This analysis shows two dominant clusters of disease subtypes, with one cluster containing the ER+/HER2− high proliferation and ER+/HER2− low proliferation breast cancer molecular subtypes, and the second cluster containing the ovarian cancer molecular subtypes and the ER−/HER2− and HER2+ breast cancer molecular subtypes. SAPSscores within ER+ breast cancer molecular subtypes, within ER−/HER2− and HER2+ breast cancer molecular subtypes, and within ovarian cancer molecular subtypes show high correlation (Spearman rho = 0.61, 0.68 and 0.51, respectively, all p < 2.2×10−16). Interestingly, the SAPSscores for the ER−/HER2− and HER2+ breast cancer subtypes show far greater correlation with the SAPSscores in the ovarian cancer molecular subtypes than with the SAPSscores in the ER+ molecular subtypes (median Spearman rho of 0.5 for correlation of the ER−/HER2− and HER2+ breast cancer molecular subtypes with the ovarian cancer molecular subtypes vs. 0.16 for the ER− molecular subtypes with the ER+ molecular subtypes) (Figure 10).
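For readers who want to reproduce this style of comparison, the sketch below clusters subtypes by their SAPSscore profiles using the distance described above (one minus the Spearman rank correlation) with complete linkage; saps, a gene sets × subtypes matrix of SAPSscores, is an assumed input.

```r
# saps: matrix of SAPSscores, rows = gene sets, columns = disease subtypes
d  <- as.dist(1 - cor(saps, method = "spearman"))  # pairwise subtype distances
hc <- hclust(d, method = "complete")               # complete-linkage clustering
plot(hc)                                           # dendrogram of subtypes
```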
Interestingly, the SAPSscores for the ER−/HER2− and HER2+ breast cancer subtypes show far greater correlation with the SAPSscores in the ovarian cancer molecular subtypes than with the SAPSscores in the ER+ molecular subtypes (median Spearman rho of 0.5 for correlation of the ER−/HER2− and HER2+ breast cancer molecular subtypes with the ovarian cancer molecular subtypes vs. 0.16 for ER− molecular subtypes with ER+ molecular subtypes; Figure 10). This analysis demonstrates the importance of performing subtype-specific analyses in breast cancer: breast cancer is an extremely heterogeneous disease, and prognostic pathways in the ER−/HER2− and HER2+ breast cancer subtypes are far more similar to prognostic pathways in ovarian cancer than to prognostic pathways in the ER+ breast cancer subtypes. Recently, the TCGA breast cancer analysis demonstrated that the "basal" subtype of breast cancer (ER−/HER2−) shows genomic alterations far more similar to ovarian cancer than to other breast cancer molecular subtypes [31]. Our findings show that ER−/HER2− breast cancers share not only genomic alterations but also prognostic pathways with ovarian cancer. Examining the clusters of gene sets with differential prognostic associations across breast and ovarian cancer molecular subtypes reveals three predominant clusters of gene sets. The first cluster is predominantly composed of proliferation-associated gene sets. The second cluster comprises a mixture of EMT-associated gene sets and gene sets associated with angiogenesis and with developmental processes. The third is composed predominantly of gene sets associated with inflammation. The proliferation cluster of gene sets is strongly associated with poor prognosis in breast cancer overall and in the ER+ breast cancer subtypes. This supports prior studies demonstrating that proliferation is the strongest factor associated with prognosis in breast cancer overall [15] and in its ER+ molecular subtypes [6]. Interestingly, the proliferation cluster of gene sets shows little association with survival in ER−/HER2− and HER2+ breast cancer or in ovarian cancer and its subtypes; instead, it is the EMT-, hypoxia-, angiogenesis-, and development-associated cluster of gene sets that is associated with poor prognosis in these diseases/subtypes, with these pathways showing little association with poor prognosis in ER+ breast cancer. The cluster of immune-related pathways tends to show an association with improved prognosis across breast and ovarian cancer and their subtypes (Figure 10). A significant body of work has focused on identifying prognostic signatures in breast cancer. Recently, Venet et al. showed that most random signatures are able to stratify patients into groups with significantly different survival [15]. This work suggests that more sophisticated and statistically rigorous methods are needed to identify biologically informative gene sets based on observed prognostic associations. Here we describe such a statistical and computational framework, Significance Analysis of Prognostic Signatures (SAPS), to allow robust and biologically informative prognostic gene sets to be identified in disease. The basic premise of SAPS is that in order for a candidate gene set's association with prognosis to imply its biological significance, the gene set must satisfy three conditions. First, the gene set should cluster patients into prognostically variable groups. The P value generated from this analysis is the standard Ppure, which has been frequently used in the literature to indicate a gene set's clinical and biological relevance for a particular disease. A key insight of the SAPS method (building on the work of Venet et al.
[15]) is that clinical utility and biological relevance of a gene set are two very different properties, necessitating distinct statistical tests. The Ppure assesses the statistical significance of survival differences observed between two groups of patients stratified using a candidate gene set, and thus provides insight into the potential clinical utility of a gene set for stratifying patients into prognostically variable groups; however, this test provides no information with which to compare the prognostic performance of the candidate gene set against randomly generated ("biologically null") gene sets. We believe that it is essential for a candidate prognostic gene set not only to stratify patients into prognostically variable groups, but to do so in a way that is significantly superior to a random gene set of similar size. Therefore, the second condition of the SAPS method is that a gene set must stratify patients significantly more effectively than a random gene set. This analysis produces the Prandom, which directly compares the prognostic association of a candidate gene set with the prognostic associations of "biologically null" random gene sets. Lastly, to avoid selecting a gene set that is linked to prognosis solely by the unsupervised k-means clustering procedure, the SAPS procedure additionally requires a prognostic gene set to be enriched for genes that show strong univariate associations with prognosis. Therefore, the third condition of the SAPS method is that a candidate gene set should achieve a statistically significant Penrichment, a measure of the statistical significance of a candidate gene set's enrichment for genes showing strong univariate prognostic associations. Our results in breast and ovarian cancer and their molecular subtypes demonstrate that the Penrichment shows only moderate overall correlation with the Ppure and Prandom (Spearman rho in the range 0.23–0.35, median 0.30), and there is only moderate overlap between gene sets identified at a raw p value of 0.
05 by Ppure, Prandom, and Penrichment (Figures 2A–9A). These data suggest that the Penrichment provides useful information additional to the Ppure and Prandom, and allows prioritization of gene sets that are enriched for genes showing strong univariate prognostic associations. Summarizing these three distinct statistical tests in a single score is a difficult task, as they were each generated using different methods and they test different hypotheses. We chose the maximum as the summary function (as opposed to, for example, a median or average) because the maximum is a conservative summary measure and is easily interpretable. It is important to note that the SAPS method provides users with the SAPSscore as well as all 3 component P values (and the 3 component q-values corrected for multiple hypotheses to control the FDR), so the user can choose to use the SAPSscore or to focus on a particular SAPS component, as desired for the specific experimental question being evaluated. Importantly, the SAPS method also performs a permutation test to estimate the statistical significance of a gene set's SAPSscore. To test the utility of SAPS in providing insight into prognostic pathways in cancer, we performed a systematic, comprehensive, and well-powered analysis of prognostic gene signatures in breast and ovarian cancers and their molecular subtypes. This represents the largest meta-analysis of subtype-specific prognostic pathways ever performed in these malignancies. The analysis identified new prognostic gene sets in breast and ovarian cancer molecular subtypes, and demonstrated significant variability in prognostic associations across the diseases and their subtypes. We find that proliferation drives prognosis in ER+ breast cancer, while pathways related to hypoxia, angiogenesis, development, and expression of extracellular matrix-associated proteins drive prognosis in ER−/HER2− and HER2+ breast cancer and in ovarian cancer. We see an association of immune-related pathways with improved prognosis across all subtypes of breast and ovarian cancer. Our analysis demonstrates that prognostic pathways in HER2+ and ER−/HER2− breast cancer are far more similar to prognostic pathways in angiogenic and non-angiogenic ovarian cancer than to prognostic pathways in ER+ breast cancer. This finding parallels the recent identification of similar genomic alterations in ovarian cancer and basal-like (ER−/HER2−) breast cancer [31]. These results demonstrate the importance of performing subtype-specific analyses to gain insight into the factors driving biology in cancer molecular subtypes. If molecular subtype is not accounted for, prognostic gene sets identified in breast cancer are strongly associated with proliferation [15]; however, when subtype is accounted for, significant and highly distinct pathways (showing no significant association with proliferation) are identified as driving prognosis in the ER− breast cancer subtypes. Overall, these data show the utility of performing subtype-specific analyses and of using SAPS to test the significance of prognostic pathways. Furthermore, our data suggest that ER− breast cancer subtypes and ovarian cancer may share common therapeutic targets, and future work should address this hypothesis. In summary, we believe SAPS will be widely useful for the identification of prognostic and predictive biomarkers from clinically annotated genomic data.
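The max-based summary described above is simple to sketch in R. The following is an illustration of the scoring logic only, not the authors' implementation: the three component P-value vectors and the direction vector are hypothetical inputs, and the actual SAPSscore additionally involves the permutation test mentioned in the text.

```r
# Sketch of the SAPS summary: the three component P values are combined via
# their maximum (a conservative choice), the score is signed by the direction
# of the prognostic association, and component q-values control the FDR.
saps_score <- function(ppure, prandom, penrichment, direction) {
  p_max <- pmax(ppure, prandom, penrichment)   # worst of the three tests
  sign(direction) * -log10(p_max)              # large |score| = more significant
}

# Example with made-up values for three candidate gene sets:
ppure       <- c(0.001, 0.04, 0.20)
prandom     <- c(0.010, 0.03, 0.50)
penrichment <- c(0.005, 0.30, 0.01)
direction   <- c(1, -1, 1)   # +1 = poor prognosis, -1 = good prognosis

saps_score(ppure, prandom, penrichment, direction)
p.adjust(ppure, method = "BH")   # Benjamini-Hochberg q-values for one component
```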
The method is not specific to gene expression data and can be directly applied to other genomic data types. In the future, we believe that prior to reporting a prognostic gene set, researchers should be encouraged (and perhaps required) to apply the SAPS (or a related) method to ensure that their candidate prognostic gene set is significantly enriched for prognostic genes and stratifies patients into prognostic groups significantly better than the stratification obtained with random gene sets. Datasets were provided as Supplemental Material in Haibe-Kains et al. [20]. Our analysis included 19 datasets with survival data (total n = 3832). Datasets were provided as Supplemental Material in Bentink et al. [21]. Our analysis included 1735 ovarian cancer patients for whom overall survival data were available (Table S2). For b | Introduction, Results, Discussion, Methods | A major goal in translational cancer research is to identify biological signatures driving cancer progression and metastasis. A common technique applied in genomics research is to cluster patients using gene expression data from a candidate prognostic gene set and, if the resulting clusters show statistically significant outcome stratification, to associate the gene set with prognosis, suggesting its biological and clinical importance. Recent work has questioned the validity of this approach by showing in several breast cancer data sets that "random" gene sets tend to cluster patients into prognostically variable subgroups. This work suggests that new rigorous statistical methods are needed to identify biologically informative prognostic gene sets. To address this problem, we developed Significance Analysis of Prognostic Signatures (SAPS), which integrates standard prognostic tests with a new prognostic significance test based on stratifying patients into prognostic subtypes with random gene sets. SAPS ensures that a significant gene set is not only able to stratify patients into prognostically variable groups, but is also enriched for genes showing strong univariate associations with patient prognosis, and performs significantly better than random gene sets. We use SAPS to perform a large meta-analysis (the largest completed to date) of prognostic pathways in breast and ovarian cancer and their molecular subtypes. Our analyses show that only a small subset of the gene sets found statistically significant using standard measures achieve significance by SAPS. We identify new prognostic signatures in breast and ovarian cancer and their corresponding molecular subtypes, and we show that prognostic signatures in ER negative breast cancer are more similar to prognostic signatures in ovarian cancer than to prognostic signatures in ER positive breast cancer. SAPS is a powerful new method for deriving robust prognostic biological signatures from clinically annotated genomic datasets.
| A major goal in biomedical research is to identify sets of genes (or "biological signatures") associated with patient survival, as these genes could be targeted to aid in diagnosing and treating disease. A major challenge in using prognostic associations to identify biologically informative signatures is that in some diseases, "random" gene sets are associated with prognosis. To address this problem, we developed a new method called "Significance Analysis of Prognostic Signatures" (or "SAPS") for the identification of biologically informative gene sets associated with patient survival. To test the effectiveness of SAPS, we use it to perform a subtype-specific meta-analysis of prognostic signatures in large breast and ovarian cancer meta-data sets. This analysis represents the largest of its kind ever performed. Our analyses show that only a small subset of the gene sets found statistically significant using standard measures achieve significance by SAPS. We identify new prognostic signatures in breast and ovarian cancer and their corresponding molecular subtypes, and we demonstrate a striking similarity between prognostic pathways in ER negative breast cancer and ovarian cancer, suggesting new shared therapeutic targets for these aggressive malignancies. SAPS is a powerful new method for deriving robust prognostic biological pathways from clinically annotated genomic datasets. | oncology, medicine, breast tumors, mathematics, gynecological tumors, statistics, biostatistics, cancers and neoplasms, statistical methods | null |
1,085
journal.pcbi.1002264 | 2011 | Stochastic Delay Accelerates Signaling in Gene Networks | Gene regulation forms a basis for cellular decision-making processes, and transcriptional signaling is one way in which cells can modulate gene expression patterns [1]. The intricate networks of transcription factors and their targets are of intense interest to theorists because it is hoped that topological similarities between networks will reveal functional parallels [2]. Models of gene regulatory networks have taken many forms, ranging from simplified Boolean networks [3], [4] to full-scale stochastic descriptions simulated using Gillespie's algorithm [5]. The majority of models, however, are systems of nonlinear ordinary differential equations (ODEs). Yet, because of the complexity of protein production, ODE models of transcriptional networks are at best heuristic reductions of the true system, and often fail to capture many aspects of network dynamics. Many ignored reactions, like oligomerization of transcription factors or enzyme-substrate binding, occur at much faster timescales than reactions such as transcription and degradation of proteins. Reduced models are frequently obtained by eliminating these fast reactions [6]–[9]. Unfortunately, even when such reductions are done correctly, problems might still exist. For instance, if within the reaction network there exists a linear (or approximately linear) sequence of reactions, the resulting dynamics can appear to be delayed. This type of behavior has long been known to exist in gene regulatory networks [10]. Delay differential equations (DDEs) have been used as an alternative to ODE models to address this problem. In protein production, one can think of delay as resulting from the sequential assembly of first mRNA and then protein [10]–[12]. Delay can qualitatively alter the local stability of genetic regulatory network models [13] as well as their dynamics, especially in those containing feedback. For instance, delay can lead to oscillations in models of transcriptional negative feedback [11], [14]–[18], and experimental evidence suggests that robust oscillations in simple synthetic networks are due to transcriptional delay [19], [20]. Protein production delay times are difficult to measure in live cells, though recent work has shown that the time it takes for transcription to occur in yeast can be on the order of minutes and is highly variable [21]. Still, transcriptional delay is thought to be important in a host of naturally occurring gene networks. For instance, mathematical models suggest that circadian oscillations are governed by delayed negative feedback systems [22], [23], and this was experimentally shown to be true in mammalian cells [24]. Delay appears to play a role in cell cycle control [25], [26], apoptosis induction by the p53 network [27], and the response of the network [15]. Delay can also affect the stochastic nature of gene expression, and the relation between the two can be subtle and complex [28]–[31]. In this study, we examine the consequences of randomly distributed delay on simple gene regulatory networks: we assume that the delay time for protein production, τ, is not constant but instead a random variable. If η denotes the probability density function (PDF) of τ, this situation can be described deterministically by an integro-delay differential equation [32] of the form

dx(t)/dt = f( x(t), ∫₀^∞ x(t − s) η(s) ds ),   (1)

where x is a positive definite state vector of protein concentrations and f is a vector function representing the production and degradation rates of the proteins.
Note that processes that do not require protein synthesis (like dilution and degradation) will depend on the instantaneous, rather than the delayed, state of the system. Therefore f is in general a function of both the present and the past state of the system. Equation (1) only holds in the limit of large protein numbers [32]. As protein numbers approach zero, the stochasticity associated with chemical interactions becomes non-negligible. Here, we address this issue by expanding on Eq. (1) using an exact stochastic algorithm that takes into account variability within the delay time [32]. We further use a queueing theory approach to examine how this variability affects timing in signaling cascades. We find that when the mean of the delay time is fixed, increased delay variability accelerates downstream signaling. Noise can thus increase signaling speed in gene networks. In addition, we find that in simple transcriptional networks containing feed-forward or feedback loops, the variability in the delay time nontrivially affects network dynamics. Queueing theory has recently been used to understand the behavior of genetic networks [33]–[35]. Here we are mainly interested in dynamical phenomena to which the theory of queues in equilibrium used in previous studies cannot be applied. As we explain below, gene networks can be modeled as thresholded queueing systems: proteins exiting one queue do not enter another queue, as would be the case in typical queueing networks. Rather, they modulate the rate at which transcription is initiated, and thus affect the rate at which proteins enter other queues. The transcription of genetic material into mRNA and its subsequent translation into protein involves potentially hundreds or thousands of biochemical reactions. Hence, detailed models of these processes are prohibitively complex. When simulating genetic circuits it is frequently assumed that gene expression instantaneously results in fully formed proteins. However, each step in the chain of reactions leading from transcription initiation to a folded protein takes time (Figure 1). Models that do not incorporate the resulting delay may not accurately capture the dynamical behavior of genetic circuits [17]. While earlier models have included either fixed or distributed delay [32], [36], [37], here we examine specifically the effects of delay variability on transcriptional signaling. In one recent study, Bel et al. studied completion time distributions associated with Markov chains modeling linear chemical reaction pathways [38]. Using rigorous analysis and numerical simulations they show that, if the number of reactions is large, completion time distributions for an idealized class of models exhibit a sharp transition in the coefficient of variation (CV, defined as the standard deviation divided by the mean of the distribution), going from near 0 (indicating a nearly deterministic completion time) to near 1 (indicating an exponentially distributed completion time) as system bias moves from forward to reverse. However, it is possible, and perhaps likely, that the limiting distributions described by Bel et al.
do not provide good approximations for protein production. For instance, when the number of rate-limiting reactions is small, but greater than one, the distribution of delay times can be more complex. Moreover, linear reaction pathways represent only one possible, and necessarily simplified, reaction scheme. Protein production involves many reaction types that are nonlinear and/or reversible, each of which is influenced by intrinsic and extrinsic noise [39], and these reactions may impact the delay time distribution in complicated ways. Therefore, we do not try to derive the actual shape of η, but examine the effects its statistical properties have on transcriptional signaling. To do this, we represent protein production as a delayed reaction of the form

D —β→ D + X (after a random delay τ),   (2)

where D is the gene, and transcription is initiated at rate β, which can depend explicitly on both time and protein number, X. After initiation, it takes a random time, τ, for a protein to be formed. Note that the presence of time delay implies that scheme (2) defines a non-Markovian process. Such processes can be simulated exactly using an extension of the Gillespie algorithm (see Methods and [28], [32]). If the biochemical reaction pathway that leads to functional protein is known and relatively simple, direct stochastic simulation of every step in the network is preferable to simulation based on scheme (2). From the point of view of multi-scale modeling, however, paradigm (2) is useful when the biochemical reaction network is either extremely complex or poorly mapped, since one needs to know only the statistical properties of τ. In the setting of scheme (2), first assume that β does not depend on X, and protein formation is initiated according to a memoryless process with rate β. A fully formed protein enters the population a random time τ after the initiation of protein formation. We assume that the molecules do not interact while forming; that is, the formation of one protein does not affect that of another. Each protein therefore emerges from an independent reaction channel after a random time. This process is equivalent to an M/G/∞ queue [40], where M indicates a memoryless source (transcription initiation), G a general service time distribution (delay time distribution), and ∞ refers to the number of service channels. In our model, the order in which initiation events enter a queue is not necessarily preserved. As Figure 1(B) illustrates, it is possible for the initiation order to be permuted upon exit [32]. The assumption that proteins can "skip ahead" complicates the analysis of transient dynamics of such queues, and is essential in much of the following. While there are steps where such skipping can occur (such as protein folding), there are others for which it cannot. For instance, it is unlikely that one RNA polymerase can skip ahead of another – and similarly for ribosomes during translation off of the same transcript. Therefore, protein skipping may be more relevant in eukaryotes, where transcription and translation must occur separately, than in prokaryotes, where they may occur simultaneously. However, if there is more than one copy of the gene (which is common for plasmid-based synthetic gene networks in E. coli), or more than one transcript, some skipping is likely to occur. Therefore it is likely that the full results that follow are more relevant for genes of copy number greater than one.
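The queueing representation suggests a direct way to simulate scheme (2) when β is constant. The following R sketch (with purely illustrative parameter values) draws Poisson initiation times, adds independent gamma delays, and recovers the exit process; sorting the completion times is exactly what permits proteins to "skip ahead".

```r
# Simulate the M/G/infinity picture of scheme (2): constant initiation rate
# beta and i.i.d. gamma-distributed delays (mean tau_mean, sd sigma).
set.seed(1)
beta <- 5; t_end <- 50
tau_mean <- 10; sigma <- 4
shape <- (tau_mean / sigma)^2; rate <- tau_mean / sigma^2

# Arrival (initiation) times: a Poisson process of rate beta on [0, t_end].
n_init <- rpois(1, beta * t_end)
t_init <- sort(runif(n_init, 0, t_end))

# Each initiation completes after its own independent delay; the exit order
# need not match the initiation order ("skipping ahead").
t_exit <- t_init + rgamma(n_init, shape = shape, rate = rate)

# Number of fully formed proteins by time t (the exit process X(t)):
X <- function(t) sum(t_exit <= t)
sapply(c(10, 20, 30), X)

# Sanity check against theory: E[X(t)] = beta * integral_0^t F_tau(s) ds.
t <- 30
beta * integrate(function(s) pgamma(s, shape, rate), 0, t)$value
```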
One purpose of transcription factors is to propagate signals to downstream target genes. Determining the dynamics and stochasticity of these signaling cascades is of both theoretical and experimental interest [41], [42]. Therefore, we first examine the impact that distributed delay has on simple downstream signaling. Consider the situation depicted in Figure 1(C), in which the product of the first gene regulates the transcription of a second gene. Using the same nomenclature as in scheme (2), we write

D₁ —β₁→ D₁ + X₁ (after a delay τ₁),   (3a)
D₂ —β₂(x₁)→ D₂ + X₂ (after a delay τ₂),   (3b)

where D₁ and D₂ denote the upstream and downstream genes (present in some copy number), x₁ and x₂ are the numbers of functional proteins of each type, and τ₁ and τ₂ are the random delay times of the two genes. The transcription rate of gene 2 depends on x₁ and is given by a Hill function. We consider the case in which X₁ activates X₂ (depicted in Figure 1) and the case in which X₁ represses X₂. We now ask: if x₁ starts at zero and gene 1 is suddenly turned on, how long does it take until the signal is detected by gene 2? In other words, assume β₁(t) = βH(t), where H is the Heaviside step function. At what time does x₁ reach a level that is detectable by gene 2? In order to make the problem tractable, we assume that the Hill function is steep and switch-like, so that we can make the approximation

β₂(x₁) ≈ β₂ H(θ − x₁)  (repression),   (4a)
β₂(x₁) ≈ β₂ H(x₁ − θ)  (activation),   (4b)

where β₂ is the maximum transcriptional initiation rate of gene 2 and θ is the threshold value of the Hill function, i.e. the number of molecules of X₁ needed for half repression (or half activation) of gene 2. The second gene therefore becomes repressed (or activated) at the time at which θ copies of protein X₁ have been fully formed. We first examine reaction (3a). Assume that at time t = 0 there are no proteins in the system. Let N(t) denote the number of transcription initiation events that have occurred by time t (the arrival process of the queueing system), Q(t) the number of proteins being formed at time t (the size of the queue at time t), and X(t) the number of functional proteins that have been completed by time t (the exit process of the queueing system). Since the arrival process is memoryless, N(t) is a Poisson process with constant rate β for t ≥ 0. Hence, the expected value of N(t) is βt. The exit process, i.e.
the number of fully functional proteins that have emerged from the queue, X(t), is a nonhomogeneous Poisson process with time-dependent rate βF_τ(t), where F_τ is the cumulative distribution function (CDF) of the delay time. It then follows that E[X(t)] = m(t), where m(t) = β ∫₀ᵗ F_τ(s) ds. Inactivation (or activation) of gene 2 occurs when enough protein has accumulated to trigger a transcriptional change, according to Eq. (4a) or (4b). In other words, the random time it takes for the signal to propagate, T, is given by the first time at which X(t) reaches θ. Trivially, T changes by an amount identical to a change in the mean of the delay distribution. To examine the effects of randomness in delay on the signaling time, we therefore keep the mean of the delay distribution fixed at τ̄ and vary the standard deviation σ. The probability density function of T is given by (see Methods)

f_T(t) = βF_τ(t) · m(t)^(θ−1) e^(−m(t)) / (θ−1)!.   (5)

Consequently, the mean and variance of the time it takes for the original signal to propagate to the downstream gene can be written as

E[T] = ∫₀^∞ P(T > t) dt = ∫₀^∞ e^(−m(t)) Σₖ₌₀^(θ−1) m(t)ᵏ/k! dt,   (6)
Var[T] = 2 ∫₀^∞ t P(T > t) dt − E[T]².   (7)

To gain insight into the behaviors of Eqs. (6) and (7), we first examine a representative, analytically tractable example. Assume that the delay time can take on the discrete values τ̄ − σ and τ̄ + σ with equal probability. In this case, E[T] has a closed-form expression (8) involving the upper incomplete gamma function. Expanding for small σ, we obtain (see Methods)

E[T] ≈ τ̄ + θ/β,   (9)

which is the deterministic limit. The first term is the mean delay time and the second is the average time to initiate θ proteins at rate β. A similar expansion for fixed θ and large σ gives (see panel (c) in Figure 2)

E[T] ≈ τ̄ − σ + 2θ/β.   (10)

It follows that for larger delay variability, the mean signaling time decreases with delay variability (see Figure 2(A)). Indeed, Eqs. (9) and (10) form the asymptotic boundaries for the mean signaling time. The intersection of the two asymptotes, at σ = θ/β, gives an estimate of when the behavior of the system changes from the deterministic limit (for σ < θ/β) to a regime in which increasing the variability decreases the mean signaling time (for σ > θ/β). It follows that the deterministic approximation given by Eq. (9) is valid over an increasing range as θ grows (see Figure 2(C)). Indeed, an asymptotic analysis of Eq. (8) shows that the corrections to Eq. (9) rapidly decrease with θ (see Methods). The bottom row of Figure 2 shows that these observations hold more generally: when τ is gamma distributed, the mean time to produce θ proteins, E[T], is very sensitive to randomness in the delay time, but only when θ is small to intermediate. As expected, the densities of the times to produce θ proteins are approximately normal and independent of the delay distribution when θ is large (middle panels of Figure 2). We therefore expect that for each fixed threshold, E[T] is a decreasing function of the standard deviation of the delay. We have proved this to be true for symmetric delay distributions (see Methods). Intuitively, this is due to the fact that the order in which proteins enter the queue is not the same as the order in which they exit. Proteins that enter the queue before the θ-th protein but exit after it increase T, while the opposite is true for proteins that enter the queue after the θ-th protein and exit before it. Since only finitely many proteins enter the queue before the θ-th protein, while infinitely many enter after it, the balance favors a decrease in the mean signaling time. Moreover, as delay variability increases, interchanges in exit order become more likely, and this effect becomes more pronounced.
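These asymptotics are easy to probe numerically. The sketch below (parameter values illustrative) estimates E[T] by Monte Carlo for the two-point delay distribution and compares it with the small-σ and large-σ asymptotes of Eqs. (9) and (10).

```r
# Monte Carlo estimate of the mean signaling time E[T] for the two-point delay
# tau in {tau_bar - sigma, tau_bar + sigma}, compared with the asymptotes
# E[T] ~ tau_bar + theta/beta (small sigma) and
# E[T] ~ tau_bar - sigma + 2*theta/beta (large sigma).
set.seed(2)
beta <- 1; theta <- 20; tau_bar <- 50; nrep <- 2000

mean_T <- function(sigma) {
  T <- replicate(nrep, {
    n <- 10 * theta                      # enough initiations to cover theta exits
    t_init <- cumsum(rexp(n, beta))      # Poisson initiation times
    tau <- tau_bar + sigma * sample(c(-1, 1), n, replace = TRUE)
    sort(t_init + tau)[theta]            # time of the theta-th completed protein
  })
  mean(T)
}

sigmas <- c(1, 5, 10, 20, 40)
est <- sapply(sigmas, mean_T)
rbind(sigma       = sigmas,
      monte_carlo = round(est, 1),
      small_sigma = tau_bar + theta / beta,
      large_sigma = tau_bar - sigmas + 2 * theta / beta)
```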
We outline the analytical argument: for each fixed time t, m(t) is an increasing function of σ, hence P(T > t) is a decreasing function of σ for all t. Referring to Eq. (6), this implies that E[T] is a decreasing function of σ in the symmetric case. In sum, mean signaling times decrease as delay variability increases (with fixed mean delay). This effect is most significant for small to moderate thresholds. We note that the decrease in mean signaling time depends on a sufficient number of proteins entering the queue. If transcription is only active long enough for fewer than θ proteins to be initiated, then the mean signaling time will actually increase as delay variability increases. This phenomenon is explained in the subsection of the Methods section that analyzes repressor switches. Using the above results, we now examine more complicated transcriptional signaling networks. In particular, we turn to two common feedforward loops – the type 1 coherent and the type 1 incoherent feedforward loops (FFL) [43], shown in Figure 3. Each of these networks is a transcriptional cascade resulting in the specific response of the output, gene 3. The coherent FFL generally acts as a delayed response network, while the incoherent FFL has various possible responses, such as pulsatile response [43], response time acceleration [44], and fold-change detection [45]. To examine the effect of distributed delay on these networks, we assume that at t = 0 gene 1 starts transcription of protein X₁ at rate β₁. The second gene starts transcription after x₁ reaches its threshold. For the coherent FFL, we assume that the promoter of gene 3 acts as an AND gate, so that transcription of X₃ requires both x₁ and x₂ to exceed their respective thresholds. We further assume that the promoter of gene 3 in the incoherent FFL is active only in the presence of X₁ and absence of X₂. The signaling time between any two nodes i and j within the network, i.e. the random time from the initiation of transcription of gene i to the formation of a total of θⱼ proteins, is denoted Tᵢⱼ. For each of the three pathways, the PDF of the signaling time is given by Eq. (5). In addition, because the random times T₁₂ and T₂₃ are additive (as are their variances), we can directly calculate the time at which x₂ reaches the threshold of gene 3 as T₁₂ + T₂₃. Therefore, the random time at which the coherent FFL turns on is simply given by the larger of T₁₃ and T₁₂ + T₂₃. Because both of these times are decreasing functions of the delay variability, it can be expected that the turn-on time is as well. In contrast to the coherent FFL, the dynamics of the pulse-generating incoherent FFL are less trivial. Since the repressor (X₂) overrides the activator (X₁), transcription of X₃ turns on at time T₁₃ and turns off at time T₁₂ + T₂₃, generating a pulse of duration ΔT = T₁₂ + T₂₃ − T₁₃. Note that ΔT can increase or decrease as a function of the standard deviation of the delay (see Figure 4, where σ was equal for all pathways). To see this, write the mean pulse duration as

E[ΔT] = E[T₁₂] + E[T₂₃] − E[T₁₃].   (11)

Each of the terms on the right side of Eq. (11) is the expected signaling time of a single gene (1→2, 2→3, and 1→3, respectively). Consequently, E[ΔT] depends on σ as a linear combination of expected signaling time curves of the type pictured in Figure 2.
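Eq. (11) can be explored with the same Monte Carlo machinery. The sketch below (with purely illustrative thresholds and rates) estimates each expected signaling time separately and assembles E[ΔT] as a function of the delay variability.

```r
# Mean pulse duration of the incoherent FFL via Eq. (11):
# E[dT] = E[T12] + E[T23] - E[T13], each term estimated by Monte Carlo.
set.seed(3)
nrep <- 2000
mean_signal_time <- function(beta, theta, tau_bar, sigma) {
  mean(replicate(nrep, {
    n <- 10 * theta
    t_init <- cumsum(rexp(n, beta))
    tau <- tau_bar + sigma * sample(c(-1, 1), n, replace = TRUE)
    sort(t_init + tau)[theta]
  }))
}

pulse_duration <- function(sigma) {
  # thresholds chosen for illustration; sigma is equal on all pathways
  mean_signal_time(beta = 1, theta = 10, tau_bar = 50, sigma) +  # 1 -> 2
  mean_signal_time(beta = 1, theta = 10, tau_bar = 50, sigma) -  # 2 -> 3
  mean_signal_time(beta = 1, theta = 30, tau_bar = 50, sigma)    # 1 -> 3
}

sapply(c(1, 10, 20, 40), pulse_duration)  # E[dT] as sigma grows
```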
The shapes of these signaling time curves determine the behavior of E[ΔT] as a function of σ. Figure 4 shows that the behavior of the duration of the transcriptional pulse as a function of the delay variability depends on the values of each threshold within the network. These observations can also be extended to networks with recurrent architectures. For instance, consider the transcriptional delayed negative feedback circuit [17], which can be described using an extension of scheme (2):

D —β(x)→ D + X (after a delay τ),   (12a)
X —γ→ ∅,   (12b)

where β(x) is a decreasing Hill function (i.e. X represses its own production) and γ is the degradation rate due to dilution and proteolysis. Mather et al. examined the oscillations produced by systems of the type described by scheme (12) when the delay is nonrandom (degrade-and-fire oscillators) [17]. Starting with no proteins, X is produced at a rate governed by the Hill function. When the level of X exceeds the midpoint of the Hill function, the gene effectively shuts down. The proteins remaining in the queue exit, producing a spike, after which degradation diminishes the protein level. When the protein level drops sufficiently, reaction (12a) reactivates and production of X resumes, commencing another oscillation cycle. Note that this circuit will not oscillate without delay. As a result, during each oscillation the gene is turned on until its own signal reaches itself, at which time the gene is turned off [17]. Therefore, the peak height of one oscillation is determined by the length of time the gene was in the "ON" state. Since that time is determined by the gene's signaling time, our theory predicts that the mean peak height of the oscillations will decrease as the variability in the delay time increases. Indeed, this is exactly what our stochastic simulations show in Figure 5. This is consistent with the fact that the negative feedback circuit is dynamically similar to the sub-circuit within the incoherent FFL. Here we explicitly used a gamma-distributed delay time with fixed mean τ̄ and varied standard deviation σ. We can use our theory to predict the change in the peak height of the oscillator as a function of σ. For a delay that is gamma distributed, the change in signaling time as a function of σ can be written as

Δ(σ) = E[T](0) − E[T](σ),   (13)

where E[T](σ) is given by Eq. (6) and Δ(σ) is the reduction in the expected signaling time. If we assume that the amount of time that protein is produced during a burst in the delayed negative feedback oscillator is also reduced by this amount, then it is possible to predict the change in the peak height accordingly. To a first approximation, if the promoter is in the "ON" state for a time that is shorter by Δ(σ), then a total of βΔ(σ) less protein will be produced. Therefore we can write the expected peak height of the oscillator as

E[peak] ≈ P₀ − βΔ(σ),   (14)

where P₀ denotes the peak height in the absence of delay variability. However, due to degradation, Eq. (14) overestimates the correction to the peak height. Because of exponential degradation, only a fraction of the lost protein would have made it through to the peak. Also, the duration of enzymatic decay is itself reduced by a time Δ(σ). Therefore, if we assume that the enzymatic decay reaction is saturated, we need to add a compensating term proportional to Δ(σ) to Eq. (14). This gives a more accurate prediction of the mean peak height, Eq. (15). Figure 5 shows that this approximation works well, even for a small Hill coefficient. The existence of delay in the production of protein has been known for some time. For many systems its presence does not seriously impact performance. For example, the existence of fixed points in simple downstream
regulatory networks without feedback is unaffected by delay. Delay is important if the timing of signal propagation impacts the function of the network. Delay can also change a network's dynamics. In networks with feedback, for instance, delay can result in bifurcations that are not present in the corresponding non-delayed system. The delayed negative feedback oscillator is a prime example [17]. Moreover, while the effect of delay in a single reaction may be small, it is cumulative and linearly additive in directed lines. The intrinsic stochasticity of the reactions that create mature protein makes some variation in delay time inevitable. However, we do not yet know the exact nature of this variability or the functional form of the probability density function. To further complicate matters, there may exist a substantial amount of extrinsic variability in the delay time – the statistics of the PDF may vary from cell to cell. We focused on the transient dynamics of queues in order to demonstrate the effects of distributed delay in a tractable setting. However, as mentioned earlier, such queues may not always be a good model for protein production. For genes with low copy number or few available transcripts, queues with a finite number of service channels (M/G/k queues) may provide a better description. For eukaryotic systems, models in which transcription and translation are decoupled into separate queues may also be relevant. In addition, as protein production rates are often coupled with extrinsic factors such as growth rate and cell cycle phase, β may depend on time and on the state of the system. The complexity of biochemical reaction networks suggests the use of networks of queues [46], and sources could be toggled on and off by other components of a reaction network. Even protein production from a single transcript may be more accurately described by a sequence of queues, with each codon as one in a chain of service stations. In such a model, ribosomes move from one codon station to the next and are not able to skip ahead. Such models will be considered in future studies. One further complication occurs if the burstiness of the promoter is large [47]. In the above analysis, we assumed that the initiation events of proteins were exponentially distributed in time. Since this is not necessarily the case, owing to the burstiness of promoters, some limits need to be put on the usefulness of the above results. Equations (9) and (10) suggest that the transition to accelerated behavior occurs when

σ ≳ θ/β.   (16)

One can think of θ/β as the average time it takes to initiate θ proteins at rate β, and rewrite the boundary accordingly. One can then assume that if the burstiness of the initiation events is not large, i.e.
that the mean burst size is less than the signal threshold θ, then it does not matter what the distribution of initiation events is. In other words, as long as approximately θ proteins are initiated in the time θ/β, and the variance of that number is not large, Eq. (16) still holds. Gillespie's stochastic simulation algorithm generates an exact stochastic realization for a system of species interacting through reactions. The state of the system is stored in a vector x, and each reaction i is characterized by a state change vector νᵢ and its propensity function. If the system is in state x and reaction i occurs, then the system state changes to x + νᵢ [5]. The idea behind extending Gillespie's SSA to model distributed delay is that if a reaction is to be delayed by some amount of time, then we temporarily store this reaction along with the time at which the event will occur, and we only apply this reaction at the given time. We used a version of the algorithm equivalent to those described in [32], [48]. Note that [48] also describes a more efficient version of the algorithm.
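The delayed-SSA extension sketched here is compact enough to illustrate directly. Below is a minimal R version for the single delayed reaction of scheme (2) with a constant initiation rate and gamma delays; all parameters are illustrative, and it omits the efficiency refinements of the cited algorithms.

```r
# Minimal delayed SSA for scheme (2): initiation fires at rate beta, and each
# fired reaction is stored and only applied (protein count +1) after its delay.
set.seed(4)
beta <- 5; t_end <- 100
tau_mean <- 10; sigma <- 4
shape <- (tau_mean / sigma)^2; rate <- tau_mean / sigma^2

t <- 0; x <- 0
pending <- numeric(0)                 # scheduled completion times
history <- data.frame(t = 0, x = 0)

while (t < t_end) {
  t_next_init <- t + rexp(1, beta)    # next initiation (memoryless)
  t_next_exit <- if (length(pending)) min(pending) else Inf
  if (t_next_exit < t_next_init) {    # a stored delayed reaction completes
    t <- t_next_exit
    pending <- pending[-which.min(pending)]
    x <- x + 1
  } else {                            # a new initiation is scheduled
    t <- t_next_init
    pending <- c(pending, t + rgamma(1, shape = shape, rate = rate))
  }
  history <- rbind(history, data.frame(t = t, x = x))
}
plot(history$t, history$x, type = "s", xlab = "time", ylab = "proteins")
```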
| Introduction, Results, Discussion, Methods | The creation of protein from DNA is a dynamic process consisting of numerous reactions, such as transcription, translation and protein folding. Each of these reactions further comprises hundreds or thousands of sub-steps that must be completed before a protein is fully mature. Consequently, the time it takes to create a single protein depends on the number of steps in the reaction chain and the nature of each step. One way to account for these reactions in models of gene regulatory networks is to incorporate dynamical delay. However, the stochastic nature of the reactions necessary to produce protein leads to a waiting time that is randomly distributed. Here, we use queueing theory to examine the effects of such distributed delay on the propagation of information through transcriptionally regulated genetic networks. In an analytically tractable model we find that increasing the randomness in protein production delay can increase signaling speed in transcriptional networks. The effect is confirmed in stochastic simulations, and we demonstrate its impact in several common transcriptional motifs. In particular, we show that in feedforward loops signaling time and magnitude are significantly affected by distributed delay. In addition, delay has previously been shown to cause stable oscillations in circuits with negative feedback. We show that the period and the amplitude of the oscillations monotonically decrease as the variability of the delay time increases. | Delay in gene regulatory networks often arises from the numerous sequential reactions necessary to create fully functional protein from DNA. While the molecular mechanisms behind protein production and maturation are known, it is still unknown to what extent the resulting delay affects signaling in transcriptional networks. In contrast to previous studies that have examined the consequences of fixed delay in gene networks, here we investigate how the variability of the delay time influences the resulting dynamics. The exact distribution of "transcriptional delay" is still unknown, and most likely depends greatly on both intrinsic and extrinsic factors. Nevertheless, we are able to deduce specific effects of distributed delay on transcriptional signaling that are independent of the underlying distribution. We find that the time it takes for a gene encoding a transcription factor to signal its downstream target decreases as the delay variability increases. We use queueing theory to derive a simple relationship describing this result, and use stochastic simulations to confirm it. The consequences of distributed delay for several common transcriptional motifs are also discussed. | systems biology, stochastic processes, mathematics, theoretical biology, regulatory networks, synthetic biology, biology, computational biology, signaling networks, molecular biology, genetics and genomics, probability theory | null |
1,036
journal.pcbi.1006245 | 2018 | Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database | Single-cell RNA-sequencing (scRNA-seq) has rapidly gained traction as an effective tool for interrogating the transcriptome at the resolution of individual cells. Since the first protocols were published in 2009 [1], the number of cells profiled in individual scRNA-seq experiments has increased exponentially, outstripping Moore's Law [2]. This new kind of transcriptomic data brings a demand for new analysis methods. Not only is the scale of scRNA-seq datasets much greater than that of bulk experiments, but there are also a variety of challenges unique to the single-cell context [3]. Specifically, scRNA-seq data is extremely sparse (no expression is measured for many genes in most cells), it can have technical artefacts such as low-quality cells or differences between sequencing batches, and the scientific questions of interest are often different from those asked of bulk RNA-seq datasets. For example, many bulk RNA-seq datasets are generated to discover differentially expressed genes through a designed experiment, while many scRNA-seq experiments aim to identify or classify cell types in complex tissues. The bioinformatics community has embraced this new type of data at an astonishing rate, designing a plethora of methods for the analysis of scRNA-seq data. Keeping up with the current state of scRNA-seq analysis is now a significant challenge, as the field is presented with a huge number of choices for analysing a dataset. Since September 2016 we have collated and categorised scRNA-seq analysis tools as they have become available. This database is being continually updated and is publicly available at www.scRNA-tools.org.
In order to help researchers navigate the vast ocean of analysis tools, we categorise tools in the database in the context of the typical phases of an scRNA-seq analysis. Through the analysis of this database we show trends not only in the analysis applications these methods address, but also in how they are published and licensed, and the platforms they use. Based on this database we gain insight into the current state of tools in this rapidly developing field. The scRNA-tools database contains information on software tools specifically designed for the analysis of scRNA-seq data. For a tool to be eligible for inclusion in the database it must be available for download and public use. This can be from a software package repository (such as Bioconductor [4], CRAN or PyPI), a code-sharing website such as GitHub, or directly from a private website. When new tools come to our attention they are added to the scRNA-tools database. DOIs and publication dates are recorded for any associated publications. As preprints may be frequently updated, they are marked as preprints instead of recording a date. The platform used to build the tool, links to code repositories, associated licenses and a short description are also recorded. Each tool is categorised according to the analysis tasks it can perform, receiving a true or false for each category based on what is described in the accompanying paper or documentation. We also record the date that each entry was added to the database and the date that it was last updated. Most tools are added after a preprint or publication becomes available, but some have been added after being mentioned on social media or in similar collections such as Sean Davis' awesome-single-cell page (https://github.com/seandavi/awesome-single-cell). To build the website we start with the table described above as a CSV file, which is processed using an R script. The lists of packages available in the CRAN, Bioconductor, PyPI and Anaconda software repositories are downloaded and matched with tools in the database. For tools with associated publications, the number of citations they have received is retrieved from the Crossref database (www.crossref.org) using the rcrossref package (v0.8.0) [5]. We also make use of the aRxiv package (v0.5.16) [6] to retrieve information about arXiv preprints. JSON files describing the complete table, tools and categories are produced and used to populate the website. The website consists of three main pages. The home page shows an interactive table with the ability to sort, filter and download the database. The second page shows an entry for each tool, giving the description, details of publications, details of the software code and license, and the associated software categories. Badges are added to tools to provide clearly visible details of any associated software or GitHub repositories. The final page describes the categories, providing easy access to the tools associated with them. Both the tools and categories pages can be sorted in a variety of ways, including by the number of associated publications or citations. An additional page shows a live and up-to-date version of some of the analysis presented here, with visualisations produced using ggplot2 (v2.2.1.9000) [7] and plotly (v4.7.1) [8]. We welcome contributions to the database from the wider community, via submitting an issue to the project GitHub page (https://github.com/Oshlack/scRNA-tools) or by filling in the submission form on the scRNA-tools website.
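As an illustration of this build step, here is a small R sketch that reads a tools table from CSV, matches tool names against a repository package list, and writes JSON for a website to consume. The file name, column names, category separator and matching rule are all assumptions for illustration, not the project's actual code.

```r
# Sketch of the website build step: load the curated table, flag tools that
# appear in a package repository, and emit JSON for the site to consume.
library(jsonlite)

tools <- read.csv("single_cell_tools.csv", stringsAsFactors = FALSE)

# Package names currently available from a repository (here, CRAN);
# in practice such lists are downloaded for each repository.
cran_pkgs <- rownames(available.packages(repos = "https://cran.r-project.org"))
tools$on_cran <- tolower(tools$Name) %in% tolower(cran_pkgs)

# Split a delimited category field into per-tool character vectors.
tools$Categories <- strsplit(tools$Categories, ";")

write_json(tools, "tools.json", pretty = TRUE, auto_unbox = TRUE)
```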
The most recent version of the scRNA-tools database, as of 6 June 2018, was used for the analysis presented in this paper. Data was manipulated in R (v3.5.0) using the dplyr package (v0.7.5) [9] and plots produced using the ggplot2 (v2.2.1.9000) and cowplot (v0.9.2) [10] packages. When the database was first constructed it contained 70 scRNA-seq analysis tools, representing the majority of work in the field during the three years from the publication of SAMstrt [11] in November 2013 up to September 2016. In the time since then, over 160 new tools have been added (Fig 1A). The almost tripling of the number of available tools in such a short time demonstrates the booming interest in scRNA-seq and its maturation from a technique requiring custom-built equipment with specialised protocols to a commercially available product. Single-cell RNA-sequencing is often used to explore complex mixtures of cell types in an unsupervised manner. As has been described in previous reviews, a standard scRNA-seq analysis in this setting consists of several tasks which can be completed using various tools [13]–[17]. In the scRNA-tools database we categorise tools based on the analysis tasks they perform. Here we group these tasks into four broad phases of analysis: data acquisition, data cleaning, cell assignment and gene identification (Fig 2). The data acquisition phase (Phase 1) takes the raw nucleotide sequences from the sequencing experiment and returns a matrix describing the expression of each gene in each cell. This phase consists of tasks common to bulk RNA-seq experiments, such as alignment to a reference genome or transcriptome and quantification of expression, but is often extended to handle Unique Molecular Identifiers (UMIs) [18]. Once an expression matrix has been obtained, it is vital to make sure the resulting data is of high enough quality. In the data cleaning phase (Phase 2), quality control of cells is performed, as well as filtering of uninformative genes. Additional tasks may be performed to normalise the data or impute missing values. Exploratory data analysis tasks are often performed in this phase, such as viewing the datasets in reduced dimensions to look for underlying structure. The high-quality expression matrix is the focus of the next phases of analysis. In Phase 3, cells are assigned, either to discrete groups via clustering or along a continuous trajectory from one cell type to another. As high-quality reference datasets become available, it will also become feasible to classify cells directly into different cell types. Once cells have been assigned, the focus of analysis turns to interpreting what those assignments mean. Identifying interesting genes (Phase 4), such as those that are differentially expressed across groups, marker genes expressed in a single group, or genes that change expression along a trajectory, is the typical way to do this. The biological significance of those genes can then be interpreted to give meaning to the experiment, either by investigating the genes themselves or by getting a higher-level view through techniques such as gene set testing. While there are other approaches that could be taken to analyse scRNA-seq data, these phases represent the most common path from raw sequencing reads to biological insight, applicable to many studies.
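Summaries like Fig 1A are simple to regenerate from a database export. The following R sketch, using dplyr and ggplot2 as the paper does, counts tools per broad analysis phase and plots cumulative growth over time; the `tools` data frame, its column names, and the category-to-phase lookup are assumed for illustration.

```r
# Count tools per analysis phase and plot cumulative growth over time.
library(dplyr)
library(ggplot2)

category_cols <- c("Quantification", "Normalisation", "Clustering",
                   "Ordering", "DifferentialExpression")   # illustrative subset
phase_of <- c(Quantification = "Phase 1", Normalisation = "Phase 2",
              Clustering = "Phase 3", Ordering = "Phase 3",
              DifferentialExpression = "Phase 4")

per_category <- sapply(category_cols, function(col) sum(tools[[col]]))
tapply(per_category, phase_of[category_cols], sum)   # tools per phase

tools %>%
  mutate(Added = as.Date(Added)) %>%
  arrange(Added) %>%
  mutate(Total = row_number()) %>%
  ggplot(aes(x = Added, y = Total)) +
  geom_step() +
  labs(x = "Date added", y = "Number of tools in database")
```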
An exception to this may be experiments designed to test a specific hypothesis, where cell populations may have been sorted or the interest lies in differences between experimental conditions rather than cell types. In this case Phase 3 may not be required, and slightly different tools or approaches may be used, but many of the same challenges will apply. In addition, as the field expands and develops, it is likely that data will be used in new ways to answer other biological questions, requiring new analysis techniques. Descriptions of the categories in the scRNA-tools database are given in Table 1, along with the associated analysis phases. The scRNA-tools database is publicly accessible via the website at www.scRNA-tools.org. Suggestions for additions, updates and improvements are warmly welcomed at the associated GitHub repository (https://github.com/Oshlack/scRNA-tools) or via the submission form on the website. The code and datasets used for the analysis in this paper are available from https://github.com/Oshlack/scRNAtools-paper. | Introduction, Design and implementation, Results, Availability and future directions | As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time. | In recent years, single-cell RNA-sequencing technologies have emerged that allow scientists to measure the activity of genes in thousands of individual cells simultaneously. This means we can start to look at what each cell in a sample is doing, instead of considering an average across all cells in a sample, as was the case with older technologies. However, while access to this kind of data presents a wealth of opportunities, it comes with a new set of challenges. Researchers across the world have developed new methods and software tools to make the most of these datasets, but the field is moving at such a rapid pace that it is difficult to keep up with what is currently available. To make this easier, we have developed the scRNA-tools database and website (www.scRNA-tools.org). Our database catalogues analysis tools, recording the tasks they can be used for, where they can be downloaded from, and the publications that describe how they work. By looking at this database we can see that developers have focused on methods specific to single-cell data and that they embrace an open-source approach with permissive licensing, sharing of code and release of preprint publications.
| data acquisition, engineering and technology, rna analysis, molecular biology techniques, research and analysis methods, computer and information sciences, gene expression, molecular biology, molecular biology assays and analysis techniques, programming languages, data visualization, source code, database and informatics methods, nucleic acid analysis, software engineering, genetics, biology and life sciences, software tools | null |
1,537
journal.pcbi.1000469 | 2009 | Red Queen Dynamics with Non-Standard Fitness Interactions | Host-parasite interactions have the potential to produce rapid co-evolutionary dynamics. If host genotypes are favoured that resist infection by the most common parasites, and parasite genotypes are favoured that thrive on frequent hosts, this will produce selection against common genotypes and hence may result in cyclically fluctuating genotype frequencies in both interacting species. Such 'Red Queen' dynamics have been the focus of several theoretical studies, e.g. [1]–[3], and are also documented empirically. For example, analysing 'archived' Daphnia hosts and their Pasteuria parasites in a pond sediment, Decaestecker et al. [4] observed rapid co-evolutionary change over time and temporal adaptation of parasites to hosts. Based on Red Queen dynamics is the Red Queen Hypothesis (RQH) for the maintenance of sexual reproduction and recombination [5], reviewed in [6]. Despite being costly in many important respects, sexual reproduction is very widespread and common among eukaryotes, and many hypotheses have been put forward to explain this pattern through a selective advantage of recombination [7]–[9]. The RQH states that an advantage to sexual reproduction arises because Red Queen dynamics lead to deleterious statistical associations (linkage disequilibria, or LD) between alleles in the hosts that are involved in defence against parasites. According to the RQH, recombination is then favoured because it breaks up these associations (i.e., reduces LD), and a modifier allele that increases the recombination rate can spread through the population by hitchhiking with disproportionately fit genotypes. Previous theoretical work has established several key results regarding the conditions under which the RQH works, as well as the underlying mechanisms. It has been demonstrated that selection on loci modifying recombination rates can be partitioned into a long-term and a short-term effect [10], [11]. The long-term effect arises from increasing the additive genetic variance for fitness, so that selection operates more efficiently. The short-term effect is determined by the relative fitness of the combinations of alleles generated through recombination. A characteristic of the RQH is that the short-term effect can be positive, and it has recently been shown that it can be responsible for a substantial part of the selection for recombination in the Red Queen [12]. Rapid fluctuations in epistasis are a necessary condition for selection for increased recombination through the short-term effect. In particular, Barton [10] showed that epistasis needs to change its sign every 2–5 generations if high recombination rates are to evolve. To produce such rapid fluctuations in epistasis, selection on either the host or the parasite must be strong [13], a requirement that is in accord with the predictions of a number of different Red Queen models [14]–[17]. One of the most important factors influencing both the coevolutionary dynamics and selection for recombination is the type of interaction model that defines fitness values for hosts and parasites [16]. One of the most widely used interaction models is the matching allele (MA) model and derivations thereof, e.g.
, 3 , 5 , 11 , 14 , 17 , 18 ., In the MA model , it is assumed that parasites can infect the host if all alleles at a number of parasite interaction loci match the alleles at corresponding loci in the host ., In this case , the parasite fitness is maximal and the host fitness is reduced by a certain amount that corresponds to the virulence of the parasite ., Conversely , if none of the parasite alleles matches the host alleles , the parasite cannot invade and has its fitness reduced , and the host fitness is maximal ., If only a subset of alleles match , fitness is affected in a variety of ways in different versions of the MA model , and the fitness values for these semi-matching interactions are crucial for whether recombination is favoured or disfavoured 3 , 14 , 16 ., Interaction models other than MA models include the gene-for-gene ( GFG ) model 19 , 20 and the Nee model 21 ., A common feature of all interaction models that have been used to date is that they are defined by only a few parameters ., For example , interactions in the simplest case of a two-locus/two-allele system are in general described by two 4×4 matrices that give the fitness for each host genotype when interacting with each parasite and vice versa ., Nevertheless , even the most general matching allele models utilise at most three parameters to fill these 32 matrix entries e . g . , 14 ., As a consequence , the interaction models that have been used previously are simplistic in several ways , usually assuming , for instance , equal fitness effects at the two loci involved ., Although these standard interaction models have been invaluable in assessing the plausibility of the RQH and identifying the population genetic forces that are at work , they explore but a very limited and probably unrealistic set of possible host-parasite interactions in general ., Agrawal & Lively 22 addressed this problem by investigating models that lie on a continuum between MA and GFG models ., Here , we go a step further and study interactions in two-locus/two-allele models in their most general form ., We construct large numbers of randomly generated interaction models and analyse the resulting dynamics ., Specifically , we investigate how properties of the fitness matrices affect the co-evolutionary dynamics , and how the dynamics in turn influence selection for or against recombination ., One important property of interaction matrices that we identify is the ‘antagonicity’ of the interaction , which we define as A = −r ( WH , WP ) , the negative of the Pearson correlation between the corresponding host and parasite fitness entries ( see Methods ) ., Our results indicate that whilst some of the previous results on the RQH appear to be fairly robust with respect to interaction models ( including the requirement for strong selection on hosts or parasites ) , other predictions – in particular those concerning LD fluctuations – need to be qualified based on the results with our generalised interaction models ., In a strict sense , extinction of genotypes cannot occur in our model , because the population is of infinite size and recurrent mutation will lead to continuous replenishment of genotypes even if these are under strong negative selection ., For the following results , we call a genotype ‘extinct’ if the frequency of this genotype does not exceed 10^−4 during the 10 , 000 generations that follow the burn-in phase ., For comparison , this threshold is approximately reached under mutation-selection balance with a mutation rate of μ = 10^−5 ( as in most of our simulations ) and a selection coefficient of s = −0 . 1 .
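A minimal sketch of how such random interaction matrices and their antagonicity can be generated , assuming the Methods construction described later ( entries drawn uniformly from ( 1 − s , 1 ) ) and the reconstructed definition A = −r ; function names are ours :

```python
import numpy as np

def random_interaction_matrices(s_h, s_p, rng):
    # Entries drawn uniformly from (1 - s, 1), as in the Methods construction.
    return (rng.uniform(1.0 - s_h, 1.0, (4, 4)),   # host fitness WH
            rng.uniform(1.0 - s_p, 1.0, (4, 4)))   # parasite fitness WP

def antagonicity(w_h, w_p):
    # A = -r(WH, WP): inversely related host/parasite fitness gives A near +1.
    # The minus sign is our reconstruction from the interpretation in Methods.
    return -np.corrcoef(w_h.ravel(), w_p.ravel())[0, 1]

rng = np.random.default_rng(0)
w_h, w_p = random_interaction_matrices(0.5, 0.5, rng)
print(antagonicity(w_h, w_p))
```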
An inverse relation between host and parasite fitness – corresponding to high antagonicity , A , in our terminology – is one of the key assumptions of Red Queen models ( see Methods for the definition of A ) ., Therefore , we have tested how antagonicity affects extinction patterns by creating sets of matrices with a different range of A values and comparing simulation results ( Figure 1 ) ., As expected , we observe fewer extinction events as A increases ., This makes intuitive sense , because when co-evolution between hosts and parasites becomes less antagonistic ( low values of A ) , increases in host fitness will often also lead to increased parasite fitness and vice versa ., Therefore , such interaction matrices often lead to a state where fitness is optimal for both hosts and parasites , in which case all but one genotype become extinct ., Aside from antagonicity , genotype extinction is likely to be influenced by the strength of selection acting on hosts and parasites ., We therefore compared host allele extinction patterns for sets of interaction matrices that differ in the range of values from which the random fitness entries were drawn ., The resulting proportions of fitness matrices for which extinction of at least one allele occurred are given in Table 1 ., These numbers indicate that extinction becomes more likely when selection pressure on the hosts is high , whereas for small fitness differences ( all fitness values between 0 . 9 and 1 ) , no allele extinctions were observed ., The impact of the strength of selection acting on the parasites is weaker and does not show a clear-cut pattern ., Thus , even though parasite allele extinction becomes more frequent with increasing strength of selection on parasites ( in line with the symmetry of the model with respect to the two interacting species ) , these extinction patterns in parasites do not seem to translate in a simple way into extinction patterns of host alleles ., Although the primary objective of this study is the impact of interaction matrices on host-parasite coevolutionary dynamics , it is also important to assess how the other parameters of the model influence these dynamics ., Figure 2 shows some results regarding the impact of the number of parasite generations per host generation ( nPG ) , the recombination rate and the mutation rate ., Increasing nPG increases the proportion of simulations where one or two host genotypes become extinct , but this effect is rather weak ., With a recombination rate of rH = 0 . 1 compared to no recombination , the proportion of simulations where one or two host genotypes become extinct is substantially decreased ., This makes sense as genotypes that become extinct in the absence of recombination may be continuously produced by recombination if the constituting alleles are present in the population .
Interestingly , a high recombination rate of rH = 0 . 5 leads to a greater rate of extinction of three host genotypes , suggesting that recombination may also decrease genetic variation in the population ., Finally , low mutation rates or absence of mutation appears to boost extinction of host genotypes ., Comparison of genotype dynamics in individual simulations ( not shown ) suggests the following explanation for this phenomenon ., Mutation maintains a certain minimum of genotype frequencies even if these genotypes are selectively disfavoured ., As a result , when the composition of the parasite population changes , selection for these low frequency host genotypes results in a relatively quick response , which keeps the cyclic dynamics of the system going ., By contrast , if mutation is absent or occurs at a very low rate only , genotype frequencies may become so low due to selection that the cyclic dynamics break down and host genotypes become extinct ., Since the only effect of recombination is to break down linkage disequilibria ( LD ) , the LD dynamics that result from host-parasite co-evolution are at the core of the RQH ., Figure 3 shows the distribution of mean LD and variance in LD , as well as the distribution of minimum and maximum LD for a particular set of interaction matrices ., As we have not built in any systematic asymmetry in constructing the random interaction matrices , the distribution is symmetric around a mean LD of zero ( Fig . 3A ) ., The stem of the ‘mushroom’ shaped distribution , where mean LD is approximately zero and variance in LD is very low , usually corresponds to extinction or near extinction of one or two alleles ., Interestingly , as variance in LD increases , simulations with mean LD close to zero become rarer ., Rather , most simulations with high variance in LD show moderate to high absolute values of mean LD ., Finally , there are also some simulations with strongly positive or negative means ( close to the maximum value of ±0 . 25 ) and low variance ., Surprisingly , we observed that the sign of LD did not change during the 10 , 000 generations of recorded coevolution in the majority of our simulations , i . e . , LD was either always positive or always negative ( compare also the width of the bars in Fig . 4B ) ., In Fig . 3B , such instances of LD with constant sign are represented by data points with either positive minimum LD or negative maximum LD ., A similarly high incidence of LD dynamics with constant sign was also found in simulations with all other sets of interaction matrices that we tested ( Table 2 ) ., The relevance of these observations stems from the intuition that rapid changes in the sign of LD are a prerequisite for selection for increased recombination ., As will be demonstrated in the following section , this intuition is misguided .
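For reference , the LD statistics summarised above can be computed from a recorded genotype-frequency trajectory as in the following sketch ( function names are ours ; D is the standard two-locus linkage disequilibrium for haploid genotypes ) :

```python
import numpy as np

def linkage_disequilibrium(freqs):
    # freqs: genotype frequencies in the order (ab, Ab, aB, AB).
    f_ab, f_Ab, f_aB, f_AB = freqs
    # For haploid two-locus genotypes, D = f(AB)*f(ab) - f(Ab)*f(aB);
    # its maximum absolute value is 0.25, matching the bound cited above.
    return f_AB * f_ab - f_Ab * f_aB

def ld_summary(trajectory):
    # trajectory: array of shape (generations, 4) with genotype frequencies.
    d = np.array([linkage_disequilibrium(f) for f in trajectory])
    sign_changes = np.count_nonzero(np.diff(np.sign(d[d != 0.0])))
    return {"mean": d.mean(), "var": d.var(), "min": d.min(),
            "max": d.max(), "sign_changes": sign_changes}
```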
An increase in frequency of the recombination modifier allele M ( i . e . , selection for increased recombination ) was observed with many of our interaction matrices ( Table 3 ) ., Figure 4 shows , for a particular set of interaction matrices , how various properties of the dynamics before introduction of M relate to selection for or against M ., Extinction of host genotypes has a strong impact on selection acting on M ( Fig . 4A ) ., The highest proportion of simulations where M was under positive selection was observed when no genotype became extinct , but M was also selected for in about 20% of simulations when one genotype went extinct ., On the other hand , if two genotypes became extinct , M always either decreased in frequency or was selectively neutral ., As expected , M was always neutral when one of the alleles became extinct ., The proportion of simulations where M increased in frequency was substantially higher when the dynamics exhibited changes in the sign of LD than when LD was of constant sign ( Figure 4B ) ., However , even among the simulations where LD was of constant sign we observed selection for recombination in about 20% of the simulations ., Figure 5 provides a more detailed picture of how LD statistics relate to the fate of the recombination modifier M ., A high variance in LD generally favours selection for M , but LD does not need to fluctuate around a mean of zero for this to happen ( Fig . 5A ) ., In the majority of simulations where LD did change its sign and both the minimum and the maximum of LD were substantially different from zero , M was under positive selection ( Fig . 5B ) ., Conversely , when LD was always strongly negative or always strongly positive , M was usually disfavoured ., However , in many simulations either the minimum or the maximum of LD was close to zero , in which case no trend with respect to selection on M was apparent ., Examples of the dynamics with selection for or against M in the presence or absence of changes in LD sign are shown in Figure 6 ., The strength of selection acting on the two interaction loci is another decisive factor for selection on M ( Fig . 4C ) ., With very weak selection on the interaction loci – corresponding largely to extinction of alleles – M is selectively neutral ., With increasing strength of selection on the interaction loci , the proportion of simulations where M was under positive selection increases continuously , reaching a maximum of more than 70% of simulations ., On the other hand , disregarding the simulations with very weak ( <10^−4 ) selection on the interaction loci , the proportion of simulations where selection against M was observed remained more or less constant with increasing strength of selection ., These results on the impact of measured selection intensity on the interaction loci are mirrored in the results comparing selection for M with different sets of interaction matrices ( Table 3 ) ., We also examined the product of epistasis and LD ( E·D ) in hosts as an indicator for selection for increased recombination ( Fig . 4D ) ., This quantity is of interest because if epistasis and LD are of opposite sign ( i . e . , E·D < 0 ) , an immediate benefit to recombination is expected ( because disproportionately fit individuals are underrepresented in the population ) ., Among the simulations where E·D was negative over most of the 10 , 000 generations prior to introduction of M , M increased in frequency in more than 80% of simulations ., When the median of E·D was close to zero , M was largely neutral , and increasingly positive values of the median are associated with an increasing proportion of simulations where selection against M was observed ., Interestingly , however , even when E·D was mainly positive , M was under positive selection in many simulations ., Similar results are obtained when the mean of E·D rather than the median is considered ( results not shown ) .
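A sketch of the E·D summary used here ; note that this excerpt does not show the paper's exact definition of epistasis , so the multiplicative convention below is our assumption :

```python
import numpy as np

def multiplicative_epistasis(w):
    # Assumed convention (the paper's exact definition is not shown in this
    # excerpt): e = w_AB * w_ab - w_Ab * w_aB over the four marginal host
    # fitness values, ordered (ab, Ab, aB, AB).
    w_ab, w_Ab, w_aB, w_AB = w
    return w_AB * w_ab - w_Ab * w_aB

def median_e_times_d(host_fitness_traj, d_traj):
    # Median of E*D over the recorded generations, the summary used in Fig 4D.
    e = np.array([multiplicative_epistasis(w) for w in host_fitness_traj])
    return np.median(e * np.asarray(d_traj))
```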
In many of our simulations where M was selectively favoured , we observed that M did not become fixed in the population ., Rather , M often remained polymorphic even following the 10 , 000 generations of simulation , with periods of increase and decrease in its frequency ., This observation led us to ask whether there exists an evolutionarily stable ( ES ) recombination rate for a particular pair of interaction matrices , i . e . , an allele m coding for a recombination rate r that cannot be invaded by alleles coding for other recombination rates ., Previous studies have demonstrated the existence of an ES recombination rate 11 , 13 , but it is not clear if this result can be generalized to arbitrary fitness interaction models ., To study this question , we screened the entire range of resident recombination alleles m and modifier recombination alleles M for particular pairs of interaction matrices ( Figure 7 ) ., In plots 7A and 7B , it appears that there is indeed an allele m associated with a certain recombination rate r>0 which is stable against invasion by all alleles M ( intersections of the white ‘lines’ ) ., Plot ( C ) shows a case where recombination is disfavoured ., Plot ( D ) gives an example of more irregular patterns of selection on the mutant modifier M , exhibiting bands of neutrality even when the resident recombination rate is much higher than the optimum ., An interesting feature of the plots in Figure 7 is that selection for the optimal recombination rate is much stronger when the resident recombination allele codes for a suboptimal recombination rate than when it codes for a superoptimal recombination rate ., These results suggest that an in-depth future investigation of ES recombination rates in Red Queen models with arbitrary fitness interactions might be worthwhile ., For this study , we have created large sets of interaction matrices determining host and parasite fitness in specific genotype-genotype interactions ., We would like to stress that these randomly generated interaction matrices are by no means intended to represent the distribution of naturally occurring interactions between hosts and parasites , and results like the proportion of matrices for which we find selection for increased recombination are therefore , in themselves , biologically meaningless ., Rather , our aim was to investigate to what extent previous results regarding Red Queen dynamics and the RQH depend on the niceties of particular interaction models , to identify informative properties of interaction matrices , and to discover interesting dynamical behaviours that differ qualitatively from the dynamics that arise in standard interaction models ., The ‘true’ spectrum of host-parasite interactions found in natural populations is far from being understood .
To date , fitness components for interactions between various host and parasite genotypes have been studied for only a few systems e . g . , 23–27 , and even then the underlying genetics are usually poorly understood ., The data that are available , however , suggest that fitness interactions are much more complicated in general than in the standard interaction models that have been assumed in previous Red Queen models e . g . , 23 ., One of the most basic questions concerning host-parasite co-evolution is whether and how much polymorphism is maintained at the interaction loci ., Different standard interaction models produce both extremes in that respect: extinction of all but one parasite genotype in the simplest ( cost-free ) version of the gene-for-gene model 1 , and generally complete maintenance of all host and parasite genotypes in the various matching allele models and in the Nee model 21 ., In the present study , different randomly generated interaction matrices also led to both complete annihilation and preservation of polymorphism , as well as intermediate outcomes ( e . g . , extinction of only one allele ) ., We demonstrated that this is determined to a large extent by the level of antagonicity between host and parasite interactions , with decreasing antagonicity leading on average to decreasing polymorphism in the populations ., However , even with highly antagonistic interactions , extinction of one or more alleles occurred frequently ., This latter result is perhaps not surprising given that the gene-for-gene model is also completely antagonistic ( i . e . , A = 1 ) according to our definition of this term ., We also found that moderate selection coefficients favour the maintenance of polymorphism ., Based on these results , we predict that polymorphic loci involved in host-parasite interactions observed in natural systems will tend to be characterized by strong antagonicity , but moderate selection coefficients ., We would like to caution at this point that ‘antagonicity’ as defined here is only loosely related to the virulence of the parasite or the nature of the species-species relationship in general ., Rather , it is a measure of the specific genetic interactions under study ., As an example to illustrate this difference , consider a parasite that is highly virulent , i . e . , that leads to strong fitness reduction in infected hosts ., This relationship would therefore be described as ‘highly antagonistic’ in the common sense ., Let us assume that there are two genotypes of this parasite , P1 that induces optimal levels of virulence ( from the parasite's point of view ) in the host , and P2 that is slightly more virulent than the optimum ., ( A classic result in evolutionary epidemiology is that if there are trade-offs between virulence and transmission , intermediate levels of virulence are expected to be evolutionarily stable reviewed in 28 . ) ., Everything else being equal , P1 then has a higher fitness than P2 , and hosts infected with P1 will have a higher fitness than hosts infected with P2 ., Thus , antagonicity for this simple 1×2 interaction matrix would be A = −1 , i . e . , the genotypic interaction would be characterized as ‘synergistic’: a mutation from P2 to P1 benefits both parasite and host .
Similarly , different host genotypes are conceivable for which fitness differences go in the same direction in hosts and parasites ., This shows that there may be genotype-genotype interactions that are not or only slightly antagonistic , even though the host-parasite relationship as a whole is very antagonistic ., Whereas our definition of antagonicity refers to interactions between different host and parasite genotypes , antagonicity in terms of the interacting species per se refers to infection versus no infection ., In many of our simulations , we observed selection for or against modifier alleles that increase the recombination rate between the interaction loci ., As has been reported previously for different interaction models e . g . , 14 , 16 , strong selection on either hosts or parasites is conducive to selection for higher recombination in the hosts , although strong selection on the hosts appears to be more important ., This result holds both for comparisons between different sets of interaction matrices ( where average selection coefficients differ , see Table 3 ) and within single sets of interaction matrices ( where the strength of selection was measured directly , see Fig . 4C ) ., Strong selection on the host implies highly virulent parasites , but this is not the only aspect that is important: the parasites must also be very abundant ( if only a few hosts in a population are infected , selection to resist parasite infection will be low ) , there must be high levels of genetic variation in hosts to resist the parasites , and resistance must not be too costly ., It is important therefore to keep in mind that the fitness values in population genetic models like the one presented here combine all fitness components ., To our knowledge there is currently no study that has measured all relevant components of lifetime reproductive success in different host and parasite genotypes , making it impossible to parameterize our models based on real data ., Another quantity that appears to be important in determining whether recombination is favoured or disfavoured is the product of epistasis and LD ., Negative median values of this quantity usually lead to selection for recombination , whereas sufficiently high , positive values led to selection against increased recombination in the majority of simulations ., These results indicate that immediate effects of the recombination modifier ( i . e . , the production of disproportionately fit offspring through recombination ) may have been responsible for selection for the modifier in many of our simulations ., However , there are also simulations in which there is selection for recombination despite E·D being mainly positive ., We even found instances where the sign of both LD and epistasis was constantly the same ( i . e . , E·D was always positive ) and where recombination was nevertheless favoured ., Hence , recombination is sometimes favoured despite an immediate disadvantage of producing disproportionately unfit offspring , indicating that delayed short-term effects and/or long-term effects are also important ( for a classification and analysis of these effects , see Ref . 12 ) .
A rather unexpected outcome of our simulations was the distribution of LD statistics and their impact on selection for or against recombination ( see Figures 3 and 5 ) ., With most of our random interaction matrices , no change in the sign of LD occurred following the burn-in phase ., We suspect that LD fluctuations around a mean of zero that are usually observed with standard interaction models are a result of the intrinsic symmetry of these models ., Importantly , constant sign of LD does not imply absence of selection for recombination ., LD dynamics appear to be informative about selection for recombination in three extreme cases ., First , if LD is constantly zero ( as happened in many simulations because of quasi-extinction of alleles ) , any recombination modifier is selectively neutral ., Second , when LD is more or less constant but different from zero , the recombination modifier decreased in frequency ., This situation is similar to that of so-called high complementarity equilibria , which have been observed in the multiplicative matching allele model 29 and which are expected from the reduction principle 30 to disfavour recombination ., ( According to the reduction principle , in populations at equilibrium in which genotypes of suboptimal fitness are constantly produced through imperfect transmission – e . g . , mutation or recombination – modifier alleles that decrease this imperfect transmission can always spread in the population . ) ., Finally , when LD fluctuates very strongly around zero , recombination is usually favoured ., We would like to stress that in many simulations the LD dynamics could not be assigned to any of these three classes of outcomes , so that the fate of a recombination modifier could not be predicted from LD ., We also note that extremely fast fluctuations of either LD or epistasis with sign changes every two to five generations ( the so-called Barton zone ) were never observed in our simulations ., Although such dynamics have been predicted to be necessary for fluctuating epistasis to favour high recombination rates ( near 0 . 5; see Ref . 10 ) , our results indicate that at least for the moderately high recombination rates ( 0 . 1 ) that we assumed , this may not be an important requirement for the RQH to work 13 , 17 ., A general conclusion from our results is that it is very difficult to predict from empirical data whether recombination is favoured ., Even when the dynamics of allele frequencies , LD , epistasis etc . are completely recorded over a long time span and without sampling error , these data do not allow us in general to make accurate predictions with respect to selection acting on a recombination modifier ., Given that natural systems will be much more complex in terms of genotypic architecture and population dynamics than our simple , deterministic two-locus model , these conclusions are somewhat dispiriting ., Further theoretical investigations into the population genetic mechanism of the RQH and novel , more general theoretical predictions as to when recombination should be favoured or disfavoured in Red Queen models would therefore seem desirable ., We constructed a deterministic discrete time model that is similar to previous models of Red-Queen dynamics e . g . , 14 , 16 ., Both hosts and parasites are haploid and have two interaction loci A and B with two alleles a/A and b/B , respectively , at each locus ., In addition , hosts have a third locus M ( recombination modifier ) with two alleles m and M .
At each time step , three processes occur in the following order for both hosts and parasites: ( 1 ) reproduction , ( 2 ) selection , and ( 3 ) mutation ., A number nPG of parasite life cycles are completed during a single host life cycle , and updating of host and parasite frequencies occurred simultaneously ., The three steps of the life-cycle are defined as follows ., First , during reproduction , hosts mate and recombine ., The order of loci is ABM ., Recombination between the two interaction loci A and B is determined by the alleles at the M locus , with recombination rates denoted by rmm , rMm and rMM ., Recombination between the B and the M locus takes place at a rate R ., Parasites are assumed to reproduce asexually ., Second , selection acting on hosts and parasites is determined by a pair of 4×4 interaction matrices , WH and WP ., Here , WHij is the fitness of a host with genotype i ( i = ab , Ab , aB or AB ) that interacts with a parasite of genotype j ( j = ab , Ab , aB or AB ) ., Likewise , WPij is the fitness of a parasite with genotype j that interacts with a host of genotype i ., Interactions between host and parasite genotypes occur in proportion to their relative frequencies ( mass-action assumption ) ., Note that WH and WP may represent or combine various fitness components of the hosts ( e . g . , parasite virulence , overall parasite prevalence or costs of resistance alleles ) and parasites ( e . g . , infectivity or within-host growth ) ., Denoting by pi the frequency of hosts with genotype i and by qj the respective parasite frequencies , the host frequencies following selection are given by pi′ = pi ( Σj WHij qj ) / ( Σk pk Σj WHkj qj ) ( 1 ) ., The term Σj WHij qj in the numerator of equation ( 1 ) can be interpreted as the relative fitness of host i with the present composition vector q of genotypes in the parasite population , and the denominator is the average fitness in the host population ., The parasite frequencies following selection are determined analogously , based on host frequencies and WP ., Finally , mutation takes place at host and parasite interaction loci ., The mutation rate μ is the same for hosts and parasites , for the two interaction loci , and for both directions of mutation ., We assume that no mutations occur at the M locus ., Host genes involved in defence against parasites as well as parasite genes involved in host invasion are expected to show antagonistic fitness effects ., In order to construct random interaction matrices that emulate host-parasite relationships , we therefore defined an ‘antagonicity’ A of a pair of interaction matrices as A = −r ( WH , WP ) , the negative of the Pearson product-moment correlation coefficient between the corresponding entries of WH and WP ., A is a measure of how changes in host fitness relate to changes in parasite fitness ., High values of A ( close to 1 ) indicate that in interactions between host and parasite genotypes , a high host fitness implies a low parasite fitness and vice versa ., We then constructed the interaction matrices by first filling each entry of the two matrices with a random number drawn from a uniform distribution ranging from ( 1-sH ) or ( 1-sP ) to 1 ., Thus , sH and sP determine the maximum strength of selection acting on hosts and parasites .
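A compact sketch of the selection ( equation 1 ) and mutation steps of this recursion ( host recombination is omitted for brevity , and all function names are ours ) :

```python
import numpy as np

def select(p, q, w_h):
    # Equation (1): multiply each host frequency by its marginal fitness
    # sum_j WH[i,j] * q[j], then normalise by the population mean fitness.
    w_marg = w_h @ q
    return p * w_marg / (p @ w_marg)

def mutate(x, mu):
    # Symmetric one-step mutation at both interaction loci (none at M).
    # Genotype order: ab, Ab, aB, AB.
    ab, Ab, aB, AB = x
    return np.array([ab * (1 - 2 * mu) + mu * (Ab + aB),
                     Ab * (1 - 2 * mu) + mu * (ab + AB),
                     aB * (1 - 2 * mu) + mu * (ab + AB),
                     AB * (1 - 2 * mu) + mu * (Ab + aB)])

def host_generation(p, q, w_h, w_p, mu=1e-5, n_pg=1):
    # One host life cycle containing n_pg parasite life cycles.
    for _ in range(n_pg):
        q = mutate(select(q, p, w_p.T), mu)  # parasite fitness sums over hosts
    return mutate(select(p, q, w_h), mu), q
```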
| Introduction, Results, Discussion, Methods | Antagonistic coevolution between hosts and parasites can involve rapid fluctuations of genotype frequencies that are known as Red Queen dynamics ., Under such dynamics , recombination in the hosts may be advantageous because genetic shuffling can quickly produce disproportionately fit offspring ( the Red Queen hypothesis ) ., Previous models investigating these dynamics have assumed rather simple models of genetic interactions between hosts and parasites ., Here , we assess the robustness of earlier theoretical predictions about the Red Queen with respect to the underlying host-parasite interactions ., To this end , we created large numbers of random interaction matrices , analysed the resulting dynamics through simulation , and ascertained whether recombination was favoured or disfavoured ., We observed Red Queen dynamics in many of our simulations provided the interaction matrices exhibited sufficient ‘antagonicity’ ., In agreement with previous studies , strong selection on either hosts or parasites favours selection for increased recombination ., However , fast changes in the sign of linkage disequilibrium or epistasis were only infrequently observed and do not appear to be a necessary condition for the Red Queen hypothesis to work ., Indeed , recombination was often favoured even though the linkage disequilibrium remained of constant sign throughout the simulations ., We conclude that Red Queen-type dynamics involving persistent fluctuations in host and parasite genotype frequencies appear not to be an artefact of specific assumptions about host-parasite fitness interactions , but emerge readily with the general interactions studied here ., Our results also indicate that although recombination is often favoured , some of the factors previously thought to be important in this process such as linkage disequilibrium fluctuations need to be reassessed when fitness interactions between hosts and parasites are complex . | The Red Queen has become an eponym for rapid and perpetual evolutionary arms races between hosts and parasites ., The Red Queen also lends her name to the idea that such arms races are at the core of the question of why sexual reproduction is so widespread among higher-level organisms ., According to this view , recombination provides the hosts with an advantage that allows faster adaptation to the parasite population ., To date , mathematical models trying to quantify Red Queen dynamics and the Red Queen hypothesis for the evolution of sex have generally made several simplifying assumptions about how host and parasite genotypes interact with each other ( i . e . , how they influence each other's fitness ) ., In this article we present a model that allows for arbitrary patterns of fitness interactions between both parties ., We demonstrate that the degree of ‘antagonicity’ in these interactions is decisive for whether Red Queen dynamics are observed , and assess the robustness of various previous results concerning the Red Queen hypothesis with respect to fitness interactions ., Our results also make clear how difficult predictions of coevolutionary dynamics and selection for recombination are likely to be in real host-parasite systems . | computational biology/population genetics, evolutionary biology/animal genetics, ecology/evolutionary ecology, evolutionary biology/evolutionary ecology, ecology, evolutionary biology, computational biology/evolutionary modeling, evolutionary biology/plant genetics and gene expression, ecology/theoretical ecology, computational biology/ecosystem modeling, genetics and genomics/population genetics | null |
418 | journal.pgen.1006601 | 2,017 | Excess of genomic defects in a woolly mammoth on Wrangel island | Woolly mammoths ( Mammuthus primigenius ) were among the most populous large herbivores in North America , Siberia , and Beringia during the Pleistocene and early Holocene 1 ., However , warming climates and human predation led to extinction on the mainland roughly 10 , 000 years ago 2 ., Isolated island populations persisted out of human reach until roughly 3 , 700 years ago when the species finally went extinct 3 ., Recently , two complete high-quality high-coverage genomes were produced for two woolly mammoths 4 ., One specimen is derived from the Siberian mainland at Oimyakon , dated to 45 , 000 years ago 4 ., This sample comes from a time when mammoth populations were plentiful , with an estimated effective population size of Ne = 13 , 000 individuals 4 ., The second specimen is from Wrangel Island off the north Siberian coast 4 ., This sample from 4 , 300 years ago represents one of the last known mammoth specimens ., This individual comes from a small population estimated to contain roughly 300 individuals 4 ., These two specimens offer the rare chance to explore the ways the genome responds to pre-extinction population dynamics ., Nearly neutral theories of genome evolution predict that small population sizes will lead to an accumulation of detrimental variation in the genome 5 ., Such explanations have previously been invoked to explain genome content and genome size differences across multiple species 6 ., Yet , within-species comparisons of how genomes are changed by small effective population sizes remain necessarily rare ., These mammoth specimens offer the unique opportunity for within-species comparative genomics under a 43-fold reduction in population size ., This comparison offers a major advantage as it will be free from confounding biological variables that are present in cross-species comparisons ., If nearly neutral dynamics lead to an excess of detrimental variation , we should observe an excess of harmful mutations in pre-extinction mammoths from Wrangel Island ., We use these two ancient DNA sequences to identify retrogenes , deletions , premature stop codons , and point mutations found in the Wrangel Island and Oimyakon mammoths ., We identify an excess of putatively detrimental mutations , with an excess of stop codons , an excess of deletions , an increase in the proportion of deletions affecting gene sequences , an increase in non-synonymous substitutions relative to synonymous substitutions , and an excess of retrogenes , reflecting increased transposable element activity ., These data bear the signature of genomic meltdown in small populations , consistent with nearly-neutral genome evolution ., They furthermore suggest large numbers of detrimental variants collecting in pre-extinction genomes , a warning for continued efforts to protect current endangered species with small population sizes ., We identified all SNPs in each mammoth genome as well as one Indian elephant specimen , Maya , using GATK 7 ., We identified all non-synonymous and synonymous changes relative to the L . africana reference genome ( https://www.broadinstitute.org/scientific-community/science/projects/mammals-models/elephant/elephant-genome-project ) using r3 . 7 annotations lifted over to L . africana 4 . 0 genome sequences .
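Tests of the kind reported in the next sentences , comparing non-synonymous against synonymous counts between the two genomes , can be run as a standard 2×2 contingency test; a minimal sketch with scipy , using placeholder counts rather than the paper's values :

```python
from scipy.stats import chi2_contingency

# Placeholder counts (not the paper's data): rows = mammoths, columns =
# [heterozygous non-synonymous, heterozygous synonymous] variant counts.
table = [[11_500, 31_000],   # Oimyakon
         [12_400, 28_500]]   # Wrangel Island
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, P = {p:.3g}")
```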
We observe a significant increase in the number of heterozygous non-synonymous changes relative to synonymous changes in the Wrangel island genome compared with Oimyakon ( χ2 = 68 . 799 , df = 1 , P < 2 . 2 × 10^−16; S1 Table ) ., There is also a significant increase in the number of homozygous mutations at non-synonymous sites relative to synonymous sites ( χ2 = 9 . 96 , df = 1 , P < 0 . 0016; S1 Table ) ., We further observe an excess of premature stop codons in the genome of the Wrangel Island mammoth , with 1 . 8X as many genes affected ., There are 503 premature stop codons in the Oimyakon genome ( adjusting for a 30% false negative rate at heterozygous sites ) compared with 819 in the Wrangel island genome ( Fig 1 , Table 1 ) ., There are 318 genes that have premature stop codons that are shared across the two mammoths , and 357 genes that are truncated in both mammoths , including mutations that form at independent sites ., A total of 120 of these genes have stop codons in the two mammoths as well as in Maya the Indian elephant , suggesting read-through in the L . africana reference ., Among truncated genes , there is a significant excess of olfactory genes and odorant binding receptors that appear to be pseudogenized , with an EASE enrichment score of 9 . 1 ( S2 Table ) 8 , 9 ., We observe 85 truncated olfactory receptors and 3 vomeronasal receptors as well as multiple signal transduction peptides compared with 44 olfactory receptors and 2 vomeronasal receptors pseudogenized in the mainland mammoth ., It is possible that DNA damage in the archaic specimens could contribute to a portion of the observed stop codons ., When we exclude A/G and C/T mutations , there is still a gross excess of premature stop codons , with 645 genes truncated in the Wrangel Island mammoth compared with 377 in the Oimyakon mammoth ., Hence , the patterns are not explained solely by differential DNA damage in the two mammoths ., Maya , the Indian Elephant specimen , shows 450 premature stop codons , but 401 when A/G and T/C mutations are excluded ., When putative damage to ancient DNA is excluded , Maya appears to house an intermediate number of premature stop codons , with a 6% increase compared to the Oimyakon mammoth .
We identify 27228 deletions over 1 kb long in the Wrangel island genome , and 21346 ( correcting for a 0 . 5% false negative rate at heterozygous sites ) in the Oimyakon genome ( Table 1 ) ., There are 6147 deletions ( 23% ) identified in the Wrangel Island mammoth that are homozygous ( ≤ 10% coverage ) compared with 5035 ( 24% ) in the Oimyakon mammoth ( S3 Table ) ., A total of 13 , 459 deletions are identified in both mammoth genomes ( S4 Table ) ., Some 4813 deletions in the Wrangel Island mammoth and 4598 in the Oimyakon mammoth appear hemizygous but have stretches of zero coverage for at least 50% of their length ., These sites may represent multiple independent homozygous deletions that cannot be differentiated via change point statistics ., Alternatively , they might indicate smaller secondary deletions that appear on hemizygous haplotypes ., Such secondary deletions are common when large loop mismatch repair attacks unpaired , hemizygous stretches of DNA 10 , 11 ., The Wrangel Island Mammoth has sharply increased heterozygosity for deletions in comparison with the Oimyakon mammoth ( S3 Table ) ., Some portion of the inflated heterozygosity for deletions in the Wrangel Island mammoth could be due to this difficulty in inferring genotypes in a high-throughput setting ., Alternatively , the effective mutation rate may have increased as fewer deletions were removed from the population via purifying selection , inflating θdel ., It is also possible that there was an increase in the rate of deletions in the Wrangel Island lineage due to defective DNA repair mechanisms ., An increase in non-homologous end joining after DNA breaks rather than double-stranded break repair could putatively induce such a change in the deletion rate ., Maya , the Indian elephant , shows a larger number of deletions than the Oimyakon mammoth , but with different character from the Wrangel Island mammoth ., The bulk of these are derived from 22 , 954 hemizygous deletions ( S3 Table ) ., Maya houses only 5141 homozygous deletions , similar to the mainland mammoth ( S3 Table ) ., There is an increase in the number of hemizygous deletions that affect gene sequences , but only a modest increase in the number of homozygous deletions that affect gene sequences ( S3 Table ) ., Competing pressures of higher Ne , longer time frames to accumulate mutations toward equilibrium frequencies , differences in mutation rates between the mammoths and elephants , differences in selective pressures , differences in the distribution of selective coefficients for deletions , different effective mutation rates due to different selective constraints , or differences in dominance coefficients might all contribute to differences in the number of deletions observed in elephants and mammoths ., Additional samples would be necessary to determine the extent to which genetic declines may be influencing the diversity of deletions in modern Indian elephants ., We currently have no basis for conclusions given this single sample , with no prior comparison ., There is a significant difference in the size distribution of deletions identified in the two mammoth samples , with a mean of 1707 bp in Oimyakon and 1606 bp in the Wrangel mammoth ( Wilcoxon W = 304430000 , P < 2 . 2e−16; Fig 2 ) ., This difference could reflect either differences in DNA replication or repair mechanisms in the two mammoths , or altered selective constraints for different types of deletions .
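The rank-sum comparison above ( reported as a Wilcoxon W ) corresponds to a two-sided Mann-Whitney U test; a sketch with placeholder size distributions ( the real inputs are the per-deletion lengths from each genome ) :

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Placeholder length distributions (bp); substitute the observed deletions.
sizes_oimyakon = rng.lognormal(mean=7.2, sigma=0.4, size=1000) + 1000
sizes_wrangel = rng.lognormal(mean=7.1, sigma=0.4, size=1000) + 1000
stat, p = mannwhitneyu(sizes_oimyakon, sizes_wrangel, alternative="two-sided")
print(stat, p)
```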
No significant difference is observed between the full and downsampled Wrangel Island mammoth sequence data ( W = 2004400 , P = 0 . 3917 ) , suggesting that the observed decrease in size is not due to differences in coverage ., Some 1628 genes have deleted exons in the Wrangel Island mammoth compared to 1110 in Oimyakon ( Table 1 ) , a significant excess of genes deleted compared to expectations based on the number of deletions ( χ2 = 12 . 717 , df = 1 , P = 0 . 0003623 ) ., Among these deleted genes , 112 in the mainland mammoth are homozygous compared to 133 homozygous exon deletions in the Wrangel Island Mammoth ., Gene functions for affected genes in the Oimyakon mammoth include synapse functions , PHD domains , zinc fingers , aldo-keto metabolism , calcium-dependent membrane targeting , DNA repair , transcription regulation , and development ( S5 Table ) ., Gene functions overrepresented among deletions in the Wrangel Island mammoth include major urinary proteins , lipocalins , and pheromones , pleckstrins , transcription regulation , cell transport , DNA repair , chromatin regulation , hox domains , and development ( S5 Table ) ., Among the genes deleted in the Wrangel Island mammoth , several have phenotypes of interest in other organisms ., We observe a hemizygous deletion in the riboflavin kinase gene RFK in the Wrangel Island mammoth , but normal coverage in the Oimyakon mainland mammoth ( S1 Fig ) ., Homozygous knockouts of riboflavin kinase , essential for B2 utilization/FAD synthesis , are embryonic lethal in mice 12 ., Finally , we identify a hemizygous deletion in the Wrangel island mammoth that would remove the entire gene sequence at the FOXQ1 locus ( S2 Fig ) ., The alternative haplotype carries a frameshift mutation that disrupts the FOXQ1 functional domain ., FOXQ1 knock-outs in mice are associated with the satin coat phenotype , which results in translucent fur but normal pigmentation due to abnormal development of the inner medulla of hairs 13 , with two independent mutations producing this phenotype 13 ., FOXQ1 also regulates mucin secretion in the GI tract , a case of pleiotropic functions from a single gene 14 ., If the phenotype in elephantids matches the phenotype exhibited in mice , this mammoth would have translucent hairs and a shiny satin coat , caused by two independently formed knock-out alleles at the same locus ., These genes each have functions that are conserved across mammals , though there is no guarantee that they would produce identical phenotypes in other species ., Retrogene formation can serve as a proxy for retrotransposon activity ., We identify retrogenes that display exon-exon junction reads in genomic DNA .
We observe 1 . 3X more retrogenes formed in the Wrangel island mammoth ., The Wrangel Island mammoth has 2853 candidate retrogenes , in comparison with 2130 in the Oimyakon mammoth and 1575 in Maya ( Table 1 ) ., There are 436 retrogenes that are shared between the two mammoths , though some of these could arise via independent mutations ., This excess of retrogenes is consistent with increased retroelement activity in the Wrangel Island lineage ., During retrogene formation , highly expressed genes , especially those expressed in the germline , are expected to contribute to new retrogenes ., To determine the types of loci that had been copied by retrotransposons , we performed a gene ontology analysis using DAVID 8 , 9 ., Functional categories overrepresented among candidate retrogenes include genes involved in transcription , translation , cell division/cytoskeleton , post-translational modification , ubiquitination , and chaperones for protein folding ( S6 and S7 Tables ) ., All of these are expected to be highly expressed during cell divisions or constitutively expressed , consistent with expectations that highly expressed genes will be overrepresented ., Gene ontologies represented are similar for both mammoths ( S6 and S7 Tables ) ., Although these retrogenes are unlikely to be detrimental in and of themselves , they may point to a burst of transposable element activity in the lineage that led to the Wrangel island individual ., Such a burst of TE activity would be expected to have detrimental consequences , additionally contributing to genomic decline ., Under the nearly-neutral theory of genome evolution , detrimental mutations should accumulate in small populations as selection becomes less efficient 5 ., This increase in non-neutral amino acid changes and premature stop codons is consistent with reduced efficacy of selection in small populations ., We attempted to determine whether the data are consistent with this nearly-neutral theory at silent and amino acid replacement substitutions whose mutation rates and selection coefficients are well estimated in the literature ., Under nearly neutral theory , population-level variation for non-synonymous amino acid changes should accelerate toward parity with population-level variation at synonymous sites ., Given the decreased population size on Wrangel Island , we expect to observe an accumulation of detrimental changes that would increase heterozygosity at non-synonymous sites ( HN ) relative to synonymous sites ( HS ) in the island mammoth ., Heterozygosity depends directly on effective population sizes ., We observe HS = 0 . 00130 ± 0 . 00002 in the Wrangel Island mammoth , which is 80% of HS = 0 . 00161 ± 0 . 00002 observed in the Oimyakon mammoth ( Table 2 ) ., The HS values of these two mammoths lie 28 standard deviations apart , suggesting that these two mammoths could not have come from populations with the same effective population sizes ., The specimens are well beyond the limits of expected segregating variation for a single population ., To determine whether such results are consistent with theory , we fitted a model using PSMC-inferred population sizes 42 for the Wrangel island mammoth , based on the decay of heterozygosity Ht = ( 1 − 1/ ( 2N ) ) ^t H0 ., The observed reduction in heterozygosity is directly consistent with theoretical expectations that decreased effective population sizes would lower heterozygosity to HS = 0 . 00131 .
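A sketch of this decay calculation , applying Ht = ( 1 − 1/ ( 2N ) ) ^t H0 piecewise over constant-N epochs ( the epoch values below are illustrative , not the PSMC estimates from the paper ) :

```python
def decay_heterozygosity(h0, epochs):
    # H_t = (1 - 1/(2N))^t * H_0, applied piecewise over epochs of constant N.
    # epochs: list of (N_e, generations) pairs, e.g. taken from PSMC output.
    h = h0
    for n_e, gens in epochs:
        h *= (1.0 - 1.0 / (2.0 * n_e)) ** gens
    return h

# Illustrative epochs only (assumed values, not the paper's PSMC trajectory):
print(decay_heterozygosity(0.00161, [(13_000, 1_000), (300, 500)]))
```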
At non-synonymous sites , however , there are no closed-form solutions for how HN would decay under reduced population sizes ., We observe HN = 0 . 000490 in the Wrangel Island Mammoth , 95% of HN = 0 . 000506 in the Oimyakon mammoth ( Table 2 ) ., To determine whether such results could be caused by accumulation of nearly-neutral variation , we simulated population trajectories estimated using PSMC 42 ., This trajectory shows ancient populations with Ne ≈ 10^4 , followed by a population decline prior to extinction ., These numbers are slightly lower than previous estimates of ancestral Ne based on mitochondrial DNA 43 ., We were able to qualitatively confirm results that population trajectories from PSMC with previously described mutation rates and selection coefficients can lead to an accumulation of detrimental alleles in populations ., However , the magnitude of the effects is difficult to fit precisely ., The simulations show a mean HS = 0 . 00148 and HN = 0 . 000339 in Oimyakon and HS = 0 . 00126 and HN = 0 . 000295 for the Wrangel Island Mammoth ( S3 Fig ) ., In simulations , we estimate HN/HS = 0 . 229 both for the Oimyakon mammoth and directly after the bottleneck , but HN/HS = 0 . 233 at the time of the Wrangel Island mammoth ., These numbers are less than the empirical observation of HN/HS = 0 . 370 ( Table 2 ) ., Several possibilities might explain the observed disparity between precise estimates from simulations versus the data ., The simulations may be particularly sensitive to perturbations from PSMC population levels or time intervals ., Similarly , selection coefficients that differ from the gamma distribution previously estimated for humans might lead to greater or lesser changes in small populations ., Additionally , an acceleration in generation time on Wrangel Island is conceivable , especially given the reduced size of Wrangel Island mammoths 15 ., Finally , positive selection altering nucleotide variation on the island or the mainland could influence diversity levels ., Founder effects during island invasion sometimes alter genetic diversity in populations ., However , it is unlikely that a bottleneck alone could cause an increase in HN/HS ., There is no evidence in effective population sizes inferred using PSMC to suggest a strong bottleneck during island colonization 4 ., The power of such genetic analyses may be limited , but these results are in agreement with paleontological evidence showing no phenotypic differentiation from the mainland around 12 , 000 years ago followed by island dwarfism much later 15 ., During glacial maxima , the island was fully connected to the mainland , becoming cut off as ice melted and sea levels rose ., The timing of separation between the island and mainland lies between 10 , 000 years and 14 , 000 years before present 3 , 15–17 , but strontium isotope data for mammoth fossils suggest that full isolation of island populations was not complete until 10 , 000-10 , 500 years ago 18 ., Forward simulations suggest that hundreds of generations at small Ne are required for detrimental mutations to appear and accumulate in the population ., These results are consistent with recent theory suggesting extended bottlenecks are required to diminish population fitness 19 ., Thus , we suggest that a bottleneck alone could not produce the accumulation of HN/HS that we observe .
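A toy forward simulation in the spirit of the analysis described above , assuming a gamma-distributed DFE and an inflated mutation rate so that a short run segregates variation ( all parameters are illustrative only , not the paper's ) :

```python
import numpy as np

rng = np.random.default_rng(2)

def wright_fisher(q, n_e, s, mu, generations):
    # Independent biallelic sites: haploid genic selection, symmetric
    # mutation, then binomial drift among 2*n_e chromosomes.
    for _ in range(generations):
        q = q * (1.0 + s) / (1.0 + q * s)
        q = q * (1.0 - mu) + (1.0 - q) * mu
        q = rng.binomial(2 * n_e, q) / (2.0 * n_e)
    return q

# Toy parameters: mu is inflated so the short burn-in segregates variants.
n_sites, mu = 20_000, 1e-5
s_syn = np.zeros(n_sites)                               # synonymous: neutral
s_non = -np.clip(rng.gamma(0.2, 0.1, n_sites), 0, 0.9)  # assumed gamma DFE
q_syn = wright_fisher(np.zeros(n_sites), 13_000, s_syn, mu, 5_000)
q_non = wright_fisher(np.zeros(n_sites), 13_000, s_non, mu, 5_000)
q_syn = wright_fisher(q_syn, 300, s_syn, mu, 500)       # island crash
q_non = wright_fisher(q_non, 300, s_non, mu, 500)
hn_hs = np.mean(2 * q_non * (1 - q_non)) / np.mean(2 * q_syn * (1 - q_syn))
print(hn_hs)  # HN/HS; expected to drift upward as selection loses efficacy
```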
The E . maximus indicus specimen , Maya , shows an independent population decline in the past 100 , 000 years , with current estimates of Ne = 1000 individuals ( S4 Fig ) ., This specimen shows a parallel case of declining population sizes in a similar species of elephantid ., Maya houses hemizygous deletions in numbers similar to the Wrangel Island Mammoth ., However , the number of stop codons and homozygous deletions is intermediate in comparison with the Oimyakon and Wrangel mammoths ( Table 1 ) ., It is possible that Indian elephants , with their recently reduced population sizes , may be subject to similar accumulation of detrimental mutations , a prospect that would need to be more fully addressed in the future using population genomic samples for multiple individuals or timepoints and more thorough analyses ., Nearly-neutral theories of genome evolution have attempted to explain the accumulation of genome architecture changes across taxa 5 ., Under such models , mutations with selection coefficients less than the nearly neutral threshold will accumulate in genomes over time ., Here , we test this hypothesis using data from a woolly mammoth sample from just prior to extinction ., We observe an excess of retrogenes , deletions , amino acid substitutions , and premature stop codons in woolly mammoths on Wrangel Island ., Given the long period of isolation and extreme population sizes observed in pre-extinction mammoths on Wrangel Island , it is expected that genomes would deteriorate over time ., These results offer genetic support for the nearly-neutral theory of genome evolution: under small effective population sizes , detrimental mutations can accumulate in genomes ., Independent analysis supporting a reduction in nucleotide diversity across multiple individuals at MHC loci suggests a loss of balancing selection , further supporting the hypothesis that detrimental variants accumulated in small populations 20 ., We observe two independent loss-of-function mutations in the Wrangel Island mammoth at the FOXQ1 locus ., One mutation removes the entire gene sequence via a deletion , while the other produces a frameshift in the CDS ., Based on phenotypes observed in mouse models , these two independent mutations would result in a satin fur coat , as well as gastric irritation 14 ., Many phenotypic screens search for homozygous mutations as causative genetic variants that could produce disease ., More recently , it has been proposed that the causative genetic variation for disease phenotypes may be heterozygous non-complementing detrimental mutations 21 ., These data offer one case study of two independent non-functionalizing mutations at a single locus in a single individual ., Woolly mammoth outer hairs house multiple medullae , creating a stiff outer coat that may have protected animals from cold climates 22 ( though see 23 for alternative interpretations ) ., Putative loss of these medullae through loss of FOXQ1 could compromise this adaptation , leading to lower fitness ., One of the two specimens comes from Wrangel Island , off the northern coast of Siberia ., This mammoth population had been separated from the mainland population for at least 6000 years after all mainland mammoths had died off ., Prior to extinction , some level of geographic differentiation combined with differing selective pressures led to phenotypic differentiation on Wrangel island 15 .
Island mammoths had diminished size , but not until 12 , 000 years ago when mainland populations had reduced and ice sheets melted 15 ., One possible explanation for the poor fit of simulations is that generation time may have decreased ., Previous work suggested a very high mutation rate for woolly mammoths based on comparisons between island and mainland mammoths ., It is possible that an acceleration in generation times could cause the accumulation of more mutations over time , and that the real mutation rate is similar to humans ( 1–2 × 10^−8 24 rather than 3 . 8 × 10^−8 4 ) ., Such changes would be consistent with island dwarfism being correlated with shorter generation times , and would explain the unusually high mutation rate estimate for mammoths based on branch shortening observed in 4 ., We observe large numbers of pseudogenized olfactory receptors in the island mammoth ., Olfactory receptors evolve rapidly in many mammals , with high rates of gain and loss 25 ., The Wrangel island mammoth has a massive excess even compared to the mainland mammoth ., Wrangel island had different flora compared to the mainland , with peat and sedges rather than the grasslands that characterized the mainland 17 ., The island also lacked large predators present on the mainland ., It is possible that island habitats created new selective pressures that resulted in selection against some olfactory receptors ., Such evolutionary change would echo the gain and loss of olfactory receptors in island Drosophila 26 ., In parallel , we observe a large number of deletions in major urinary proteins in the island mammoth ., In Indian elephants E . maximus indicus , urinary proteins and pheromones elicit behavioral responses including mate choice and social status 27 ., It is possible that coevolution between urinary proteins , olfactory receptors , and vomeronasal receptors led to a feedback loop , allowing for rapid loss in these related genes ., It is equally possible that urinary peptides and olfactory receptors are not essential and as such they are more likely to fall within the nearly neutral range 25 ., Either of these hypotheses could explain the current data ., Many factors contributed to the demise of woolly mammoths in prehistoric times ., Climate change led to receding grasslands as forests grew in Beringia and North America , and human predation placed a strain on already struggling populations 2 ., Unlike many cases of island invasion , Wrangel Island mammoths would not have continuous migration to replenish variation after mainland populations went extinct ., Under such circumstances , detrimental variation would quickly accumulate on the island ., The putatively detrimental variation observed in these island mammoths , with the excess of deletions , especially recessive lethals , may also have limited survival of these struggling pre-extinction populations ., Climate change created major limitations for mammoths on other islands 28 , and these mammoths may have struggled to overcome similar selective pressures ., Many modern-day species , including elephants , are threatened or endangered ., Asiatic cheetahs are estimated to have fewer than 100 individuals in the wild 29 ., Pandas are estimated to have 1600 individuals living in highly fragmented territories 30 ., Mountain Gorilla population census sizes have been estimated as roughly 300 individuals , similar to effective population sizes for pre-extinction mammoths 31 ., If nearly neutral dynamics of genome evolution affect contemporary endangered species , detrimental variation would be expected in these genomes .
single nucleotide changes, recovered populations can purge detrimental variation in hundreds to thousands of generations, returning to normal genetic loads [19]. However, with deletions that become fixed in populations, it is difficult to see how genomes could recover quickly. The scope for back mutations to reproduce deleted gene sequences will be limited or nonexistent. Although compensatory mutations might conceivably correct for some detrimental mutations, with small effective population sizes, adaptation through both new mutation and standing variation may be severely limited [32]. Thus we might expect genomes affected by genomic meltdown to show lasting repercussions that will impede population recovery. All sequences are taken from publicly available sequence data in the ENA or SRA. Indian elephant specimens for previously published sequence data were handled by the San Diego Zoo. We used previously aligned bam files from ERR852028 (Oimyakon, 11X) and ERR855944 (Wrangel, 17X) (S8 Table) [4], aligned against the L. africana 4.0 reference genome (available on request from the Broad Institute—vertebrategenomes@broadinstitute.org; https://www.broadinstitute.org/scientific-community/science/projects/mammals-models/elephant/elephant-genome-project). We also aligned 33X coverage of sequencing reads for one modern E. maximus indicus genome, Maya (previously described as “Uno”), using bwa 0.7.12-r1044 [33], with parameters set according to [4]: bwa aln -l 16500 -o 2 -n 0.01. The E. maximus indicus sample, previously labeled in the SRA as “Uno”, is from Maya, a former resident of the San Diego Zoo wild-born in Assam, India, North American Studbook Number 223, Local ID #141002 (O. Ryder, personal communication). We were not able to use two other publicly available mammoth sequences, M4 and M25 from Lynch et al. [34]. These sequences display abnormal PSMC results (S4 Fig), high heterozygosity (S5 Fig), and many SNPs with asymmetrical read support (S6 Fig). The unrealistically high heterozygosity as well as the abnormal heterozygote calls raise concerns with respect to sequence quality. For further description, please see the Supporting Information. We used the GATK pipeline [7] v3.4-0-g7e26428 to identify SNPs in the aligned sequence files for the Oimyakon and Wrangel Island mammoths. We identified and realigned all indel-spanning reads according to the standard GATK pipeline. We then identified all SNPs using the Unified Genotyper, with output mode set to emit all sites. We used all CDS annotations from the L. africana r3.7 cDNA annotations, with liftover coordinates provided for L. africana 4.0, to identify SNPs within coding sequences. We identified all stop codons, synonymous substitutions, and non-synonymous substitutions for the Wrangel Island and Oimyakon mammoths at heterozygous and homozygous sites. We aligned all reads from the mammoth genome sequencing projects ERR852028 (Oimyakon) and ERR855944 (Wrangel) (S8 Table) against elephant cDNA annotations from L. africana r3.7. Sequences were aligned using bwa 0.7.12-r1044 [33], with parameters set according to [4] (bwa aln -l 16500 -o 2 -n 0.01)
in order to account for alignments of damaged ancient DNA. We then collected all reads that map to exon-exon boundaries with at least 10 bp of overhang. Reads were then filtered against the aligned genomic bam files produced by Palkopoulou et al [4], discarding all exon-exon junction reads that have an equal or better alignment in the genomic DNA file. We then retained all putative retrogenes that showed signs of loss of two or more introns, using only cases with 3 or more exon-exon junction reads. We calculated coverage depth using samtools [35] with a quality cutoff of -q 20. We then implemented change point analysis [36] in 20 kb windows. Change point methods have been commonly used to analyze microarray data and single-read data for CNVs [37–39]. The method compares the difference in the log of the sum of squares of the residuals with one regression line vs. two regression lines [36]. The test statistic follows a chi-squared distribution with a number of degrees of freedom determined by the number of change-points in the data, in this case df = 1. We required significance at a Bonferroni-corrected p-value of 0.05 or less. We allowed for a maximum of one CNV tract per window, with a minimum of 1 kb and a maximum of 10 kb (half the window size), with a 100 bp step size. We did not attempt to identify deletions smaller than 1 kb due to general concerns about ancient DNA sequence quality, limitations in assessing small deletions in the face of stochastic coverage variation, and concerns that genotype calls for smaller deletions might not be as robust to differences in coverage between the two mammoths. Sequences with 'N's in the reference genome did not contribute to change point detection. We excluded all deletions that were identified as homozygous mutations in both mammoths and in the E. maximus indicus specimen Maya, as these suggest insertion in the L. africana reference rather than deletion in the other elephantids. To determine the effects that coverage differences would have on deletions, we downsampled the sequence file for the Wrangel Island mammoth to 11X coverage using samtools, using chromosome 1 as a test set. We observe a reduction in the number of deletions for chromosome 1 from 1035 deletions to 999 deletions, resulting in an estimated false negative rate of 0.5% at reduced coverage for deletions greater than 1 kb. Highly diverged haplotypes with greater than 2% divergence might prevent read mapping and mimic the effects of deletions, but this would require divergence times within a species that are greater than the divergence between mammoths and L. africana. Mutations were considered homozygous if mean coverage for the region was less than 10% of the background coverage level; otherwise they were considered heterozygous. These methods are high-throughput, and it is possible that multiple small homozygous deletions interspersed with full-coverage sequence might mimic heterozygote calls. Whether such mutations would meet the conditions for significant change-point detection would depend on the deletion length, placement, and background coverage level.
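To make the change-point procedure concrete, the following is a minimal Python sketch of the one-vs-two-regression test described above. The window, tract, and step sizes follow the text, but the function names, the constant-mean form of the regression fit, the likelihood-ratio statistic n·log(RSS₁/RSS₂), and the final zygosity call are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def rss(y):
    # residual sum of squares around a single constant regression line (the mean)
    return float(np.sum((y - y.mean()) ** 2)) if len(y) else 0.0

def scan_window(cov, min_len=1000, max_len=10000, step=100, alpha=0.05, n_windows=1):
    """Test one 20 kb coverage window (float array) for at most one 1-10 kb CNV tract."""
    n, rss_one, best = len(cov), rss(cov), None
    for L in range(min_len, max_len + 1, step):
        for s in range(0, n - L + 1, step):
            # two regression lines: one inside the candidate tract, one outside
            rss_two = rss(cov[s:s + L]) + rss(np.concatenate([cov[:s], cov[s + L:]]))
            stat = n * np.log(rss_one / max(rss_two, 1e-12))
            if best is None or stat > best[0]:
                best = (stat, s, L)
    stat, s, L = best
    p = stats.chi2.sf(stat, df=1)            # chi-squared test with df = 1
    if p < alpha / n_windows:                # Bonferroni correction across windows
        zygosity = "hom" if cov[s:s + L].mean() < 0.1 * cov.mean() else "het"
        return s, L, p, zygosity
    return None
```

Calling scan_window on each 20 kb slice of the samtools depth track, with n_windows set to the genome-wide number of tests, would reproduce the Bonferroni-corrected scan; the final line applies the 10%-of-background rule used above to call a detected deletion homozygous or heterozygous.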
We identified SNPs that differentiate mammoth genomes from the reference using samtools mpileup (options -C50 -q30 -Q30) and bcf | Introduction, Results, Discussion, Materials and methods | Woolly mammoths (Mammuthus primigenius) populated Siberia, Beringia, and North America during the Pleistocene and early Holocene. Recent breakthroughs in ancient DNA sequencing have allowed for complete genome sequencing of two specimens of woolly mammoths (Palkopoulou et al. 2015). One mammoth specimen is from a mainland population 45,000 years ago, when mammoths were plentiful. The second, a 4300-yr-old specimen, is derived from an isolated population on Wrangel Island where mammoths subsisted with a small effective population size more than 43-fold lower than previous populations. These extreme differences in effective population size offer a rare opportunity to test nearly neutral models of genome architecture evolution within a single species. Using these previously published mammoth sequences, we identify deletions, retrogenes, and non-functionalizing point mutations. In the Wrangel Island mammoth, we identify a greater number of deletions, a larger proportion of deletions affecting gene sequences, a greater number of candidate retrogenes, and an increased number of premature stop codons. This accumulation of detrimental mutations is consistent with genomic meltdown in response to low effective population sizes in the dwindling mammoth population on Wrangel Island. In addition, we observe high rates of loss of olfactory receptors and urinary proteins, either because these loci are non-essential or because they were favored by divergent selective pressures in island environments. Finally, at the locus of FOXQ1 we observe two independent loss-of-function mutations, which would confer a satin coat phenotype in this island woolly mammoth. | We observe an excess of detrimental mutations, consistent with genomic meltdown in woolly mammoths on Wrangel Island just prior to extinction. We observe an excess of deletions, an increase in the proportion of deletions affecting gene sequences, and an excess of premature stop codons in response to evolution under low effective population sizes. Large numbers of olfactory receptors appear to have loss-of-function mutations in the island mammoth. These results offer genetic support within a single species for nearly-neutral theories of genome evolution. We also observe two independent loss-of-function mutations at the FOXQ1 locus, likely conferring a satin coat in this unusual woolly mammoth. | heterozygosity, deletion mutation, genome evolution, population genetics, vertebrates, animals, mammals, mutation, effective population size, mammalian genomics, population biology, molecular evolution, comparative genomics, animal genomics, elephants, population metrics, population size, heredity, genetics, biology and life sciences, genomics, evolutionary biology, amniotes, computational biology, organisms | null |
1,098 | journal.pcbi.1002943 | 2,013 | Embedding Responses in Spontaneous Neural Activity Shaped through Sequential Learning | The way in which neural processing of sensory inputs leads to cognitive functions is one of the most important issues in neuroscience. Neural activity in the presence of sensory stimuli [1–4] and during the execution of cognitive tasks in response to sensory inputs has been measured experimentally [5, 6], and neural network models that exhibit the requested responses to the inputs have been investigated theoretically [7–12]. Learning algorithms have also been proposed to memorize several input/output (I/O) mappings [12–16]. The response activity has been the main focus both in modeling studies and in experiments, while pre-stimulus, i.e., spontaneous, activity has been dismissed simply as background noise. However, spontaneous activity has recently been garnering more attention, since experimental measurements have revealed that it is not random noise and that it shows characteristic spatiotemporal patterns [17–19]. Furthermore, many observations have revealed that the response activities to external stimuli [20, 21] or cognitive tasks depend on the spontaneous activity [22, 23]. Evoked responses are generated not only by external inputs but also through the interplay of the spontaneous activity and external stimuli. Thus, to establish a neural basis for cognition and computation in a neural system, it is important to understand the nature of this interplay. Spontaneous activity has been analyzed theoretically over the last few decades using neural network models of rate-coding or spiking neurons with random, designed, or biologically realistic connections [24–28]. However, apart from a few publications [29, 30], the relationship between the spontaneous activity and the response to external input has rarely been investigated. Furthermore, how learning shapes the spontaneous activity and its response to an input is still an open question, but recent experimental studies suggest that learning and developmental processes modify and shape the spontaneous activity [31, 32]. In the present paper, we analyze how the spontaneous activity is formed when I/O mappings are memorized. We do this by introducing a simple learning rule to the neural dynamics in order to study the interplay between the spontaneous activity and the input-evoked response. To analyze the formation of the spontaneous activity and its response to the memorized input through the learning of I/O mappings, we previously proposed a novel view on memory in [33, 34], which we called “memories as bifurcations” in contrast to the traditional theoretical viewpoint of “memories as attractors.”
According to the memories-as-attractors viewpoint, each memory is embedded in one of the attractors of a unique neural dynamical system [11]. An input specifies an initial condition of the dynamical system, and from that initial state the neural activity reaches an attractor that matches the target corresponding to the given input. Thus, the initial states are determined by the given inputs, but the neural activity in the absence of inputs is not examined. In contrast, according to the memories-as-bifurcations viewpoint, an input modifies the neural dynamics as a parameter, and the flow structure of the neural activity is changed from that without an input. In the absence of input, the neural activity evolves and corresponds to spontaneous activity. In the presence of a learned input, the flow structure in the neural dynamics changes and an attractor that matches the requested target corresponding to the applied input emerges. With an increase in the input strength, the flow structure changes via a sequence of bifurcations in terms of dynamical systems theory. Here, the flow structure can be changed substantially by applying different memorized inputs. Thus, in this viewpoint, memories are embedded in the flow structure of the neural dynamics such that they enable the appropriate bifurcations to appear upon input application. Previously, we designed a neural-network connection matrix through correlations among memorized inputs and targets so that an output that matches a target is generated, as a result of bifurcations from the spontaneous activity, by applying the corresponding input [34]. In that model, similarity between the spontaneous and evoked activities was demonstrated, which is consistent with recent observations in experimental studies [32, 35–37]. Although the simplicity of the model is an advantage for analyzing the relationship between spontaneous and evoked neural activities, it remains unclear whether the simplistic structure of the designed network in [34] is the only way to store associative memories or whether there exists a variety of networks that show similar behavior and generate a sufficient memory capacity. Also, how such network structures for memorizing I/O mappings are formed by learning through a widely accepted synaptic plasticity rule, such as the Hebbian rule, is still open for debate. In the present study, we introduce a sequential learning model with a simple Hebbian-type learning rule that changes the synaptic strength according to the activities of the pre- and postsynaptic neurons. From extensive numerical simulations, we have confirmed that through this learning the networks memorize mappings (where is the number of elements) satisfying the memories-as-bifurcations viewpoint. Here, the spontaneous activity shows chaotic behavior with approaches to memorized output patterns. By applying each memorized input, this activity is transformed (after a sequence of bifurcations) into different attractors that generate the target pattern corresponding to the applied input. In spite of the sequential learning scheme, the neural network does not lose the memory it learned earlier; it has a capacity of up to . This capacity is not so small, and interestingly it is not attainable in conventional sequential learning models, in which the learning of a new I/O mapping easily pushes out previous memories. As long as the memorized targets are attractors in the same dynamical system, the formation of a new attraction to
a novel attractor will easily destroy the attraction to earlier target patterns. Our model differs in that the different targets are attractors in the presence of the corresponding input, i.e., they are embedded in different neural dynamical systems, so that attractors for earlier targets are not destroyed. Here, the spontaneous activity is flexible; it is possible to apply an input so that a new target is embedded in the network structure without destroying the information of the previous targets. Remarkably, the network generated through the learning process to obtain a high memory capacity is found to have a structure similar to the designed network [18]. Although the learning process can generate a huge variety of networks that are not similar to the designed network, a common structure is generated by the learning. A simple learning rule for synaptic change is sufficient for generating such a network. We first select two random binary patterns, and , as the input and target patterns, respectively. The neural activity evolves in the presence of , whose strength is constant during the learning process for . The synaptic connection also evolves according to Eq (2), where is a learning parameter that is the inverse of the time-scale ratio of the synaptic to neural dynamics. The above synaptic dynamics are determined by correlations between the activities of the pre- and postsynaptic neurons. This learning rule takes a form similar to the perceptron learning rule, in which the synaptic connection is changed by correlations between the activities of elements in the input and output layers [16]. Here, although the validity of this learning rule is not mathematically proven, in contrast to the perceptron, it is expected from the following argument. According to Eq (1), the change in the neural activity during , with the connection modified by the learning, is given by Eqs (3) and (4). Following the synaptic dynamics in Eq (2), the change in the neural activity due to is given by Eq (5), where is a positive value determined by and a differential coefficient. Thus, when is larger (smaller) than , increases (decreases), respectively. Hence, the change in the synapses will drive the successive activity toward the target. Note, however, that the distance between the neural activity and the target is not necessarily guaranteed to decrease monotonically through the learning, because the total change in the neural activity also depends on . The learning process stops automatically when the neural activity matches the target, since in this case ; otherwise, the learning process continues. Here we impose several I/O mappings to be learned successively: after learning the preceding mapping, another input pattern with the same strength as in the previous learning is applied while giving a new target pattern. The learning process for each single I/O mapping is called a learning step in what follows. In this learning algorithm, which belongs to a class of palimpsest learning models [38–40], each mapping is learned sequentially and previously learned mappings are overwritten by the latest mapping. Thus, it is possible that older mappings are forgotten through the learning process. During the learning process, the double (neural and synaptic) dynamics run concurrently, and the neural and synaptic states have to be set as initial states: the neural and synaptic states are randomly selected from with uniform probability and from a binary ensemble of with equal probability, respectively.
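The equations themselves did not survive extraction, so as a reading aid here is a minimal Python sketch of one learning step under stated assumptions: rate dynamics x ← x + dt·(tanh(J·x + γη) − x) for the neural state, and a Hebbian-type update dJ ∝ (ξ − x)xᵀ whose drive vanishes once the activity x matches the target ξ, consistent with the statement that learning stops automatically at that point. All parameter values and functional forms below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, gamma, dt = 64, 0.01, 0.3, 0.1
J = rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(N)   # random initial synapses
np.fill_diagonal(J, 0.0)                                # no self-connections
eta = rng.choice([-1.0, 1.0], size=N)                   # input pattern
xi = rng.choice([-1.0, 1.0], size=N)                    # target pattern
x = rng.uniform(-1.0, 1.0, size=N)                      # neural state

def overlap(a, b):
    return float(a @ b) / len(a)

for step in range(100_000):
    x += dt * (np.tanh(J @ x + gamma * eta) - x)        # neural dynamics (assumed form)
    J += dt * eps * np.outer(xi - x, x) / N             # Hebbian-type rule (assumed form)
    np.fill_diagonal(J, 0.0)
    if overlap(x, xi) > 0.99:                           # activity matches the target,
        break                                           # so learning stops automatically
```

Sequential learning repeats this loop with fresh (eta, xi) pairs while keeping the learned J, which is what lets later steps overwrite earlier ones only partially.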
In this model, fully connected networks without self-connections are used. Through different learning processes, different sets of mappings are learned, so the generated networks are also different. For a statistical analysis, we take an average over many networks shaped through different learning processes. As our purpose in this study is to analyze the relationship between the spontaneous and evoked dynamics, we analyze the neural dynamical system in the absence and presence of input after learning. After the learning is completed, the synaptic connections are fixed and only the neural activities evolve. Note that there is no need for the input strengths for learning and memory recall to be identical: we can set the input strength used during the recall process after the learning independently of the input strength used during the learning process. For example, after learning with , we can analyze the evoked dynamics by applying the input with . To distinguish the two clearly, the input strength used in the learning process is denoted by and that used in the analysis of the neural activities after learning is denoted by . The spontaneous and evoked dynamics are given by and , respectively. As recall and memory for the memories-as-bifurcations viewpoint are defined differently from those for the memories-as-attractors viewpoint, we outline the definitions of recall and then memory here. A network succeeds in recalling a target for an input of if, on application of the input for = , the overlap of the evoked activity with the target is higher than the overlap with any other pattern. Here, is the transposed vector of and the inner product is given by . Considering the case in which the evoked attractor is not a fixed-point attractor, the temporal average overlap is taken as this criterion. Denoting the temporal average overlap with the target as , the criterion for successful recall of the target corresponding to the applied input is given by Eq (6), where we measure the averaged overlaps in the presence of the input and is the pattern that has the largest overlap with the activity among the other targets and inputs, as well as other random patterns. Memory is defined as the ability of a network to recall a target for most initial states. The condition for whether a network memorizes an I/O ( / ) mapping is Eq (7), where represents the average over the initial states of this network. By extending this criterion, we adopt a condition for determining whether networks memorize the I/O mapping for a certain parameter, as in Eq (8), where denotes the average over different networks. Fig 1 exhibits a learning process, shown as a raster plot together with the time series of the overlap with the target for . After wandering over many neural activity patterns, the neural activity reaches the target pattern and the learning process is completed. The learning process does not stop by becoming trapped in a local minimum, nor does it continue to wander over the neural patterns. We confirmed that in all trials with parameters , the learning was completed. During a learning process, the flow structures of the spontaneous and evoked activities change. Hence, the recall process also changes through the learning process.
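Since the criterion symbols in Eqs (6)–(8) were lost in extraction, here is one assumed concrete reading in Python, reusing J, eta, xi, gamma, and rng from the learning sketch above: simulate the evoked dynamics at a recall strength gamma_R, time-average the overlaps after a burn-in, and declare recall successful when the target's average overlap beats every competitor pattern's.

```python
def run(J, eta, gamma_R, x0, steps=20_000, burn=5_000, dt=0.1):
    """Simulate the fixed-J neural dynamics and return the post-burn-in states."""
    x, states = x0.copy(), []
    for t in range(steps):
        x += dt * (np.tanh(J @ x + gamma_R * eta) - x)
        if t >= burn:
            states.append(x.copy())
    return np.array(states)

traj = run(J, eta, gamma_R=gamma, x0=rng.uniform(-1.0, 1.0, N))
m_target = float(np.mean(traj @ xi)) / N                 # temporal average overlap
competitors = [eta] + [rng.choice([-1.0, 1.0], size=N) for _ in range(50)]
m_best_other = max(abs(float(np.mean(traj @ p))) / N for p in competitors)
recalled = m_target > m_best_other                       # Eq (6), as assumed here
```

Averaging the indicator `recalled` over many random initial states x0 then gives the memory condition of Eq (7), and a further average over independently trained networks gives Eq (8).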
Fig 2 shows a recall process before and after learning for and . Before learning, an attractor matching the applied input pattern is generated when that input is applied (Fig 2A), but the overlap with the required target is not high and the network thus fails to recall the target. After learning, two types of neural dynamics are generated depending on the parameter values ( ) (see also Table 1). We now analyze the spontaneous and evoked neural dynamics in these two regimes. First, to reveal the dependence of the evoked dynamics on the parameters, , as a function of and , is shown in Fig 3A. In the R regime, for larger and smaller values, only the target attractor exists and the average overlap is equal to one, while in the NR regime, both the target and reverse-target attractors exist and the average overlap is lower than in the R regime. As decreases or increases, the volume of the reverse-target attractor basin increases and that of the target attractor decreases, so that the average overlap with the target also decreases. The dotted line in Fig 3A represents the boundary between the R and NR regimes computed from the spontaneous activity, as discussed below. To analyze the spontaneous dynamics, we note that, due to the symmetry, the mean overlap with each target over time is generally zero, because the orbit can approach both the target and the reverse-target with equal probability. Thus, we measure the standard deviation (SD) of the overlap to quantify the approach to each target. The SD ( ) of the overlap with the -th target over time is computed as . If this SD is much larger than that for the overlap with a random pattern, then the spontaneous activity selectively approaches the target (and its reverse). A numerical computation of the SD as a function of and is plotted in Fig 3B. In the R regime, chaotic behavior appears and the SD takes a finite positive value, while in the NR regime, fixed-point attractors exist and so the SD is zero. Interestingly, a band with a higher SD appears in the figure, stretching from (2.6, 0.001) to (16, 1), whose ridge divides the R and NR regimes. In Fig 3B, the ridge is shown as the dotted line, which is also plotted as a reference in Fig 3A. Around the ridge, the SD of the spontaneous activity is much higher than in other areas, and the chaotic spontaneous activity shows switching behavior between the target and the reverse target. While the target and reverse-target attractors are unstable, their ruins still exist and the neural dynamics intermittently visit them. In Figs 3A and 3B, the boundary defined by the SD might be slightly ambiguous because of the finite-size effect. However, by extrapolating the result to larger system sizes (to be discussed later), it is expected that, in the absence of inputs, all the networks in the NR regime show fixed-point behavior and those in the R regime show chaotic behavior in the thermodynamic limit. By increasing or decreasing , the minimum distance between the activity and the target (or the reverse-target) increases in the R regime. Thus, in this limit, the SD in the NR regime is zero. It suddenly increases to nearly one at the transition point, and then gradually decreases in the R regime. The ridge of the SD thus indicates the transition between the NR and R regimes well.
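A sketch of the SD diagnostic just described, again reusing run(), J, xi, eta, N, and rng from the sketches above: because symmetry makes the mean overlap vanish, the standard deviation of the overlap time series is compared against the O(1/√N) baseline obtained with a random pattern. The factor-of-3 threshold is an arbitrary illustration, not the paper's criterion.

```python
spont = run(J, eta, gamma_R=0.0, x0=rng.uniform(-1.0, 1.0, N))   # no input applied
m_t = spont @ xi / N                        # overlap time series with the target
sd_target = float(np.std(m_t))              # selective approach to target/reverse
sd_random = float(np.std(spont @ rng.choice([-1.0, 1.0], size=N) / N))
selective = sd_target > 3.0 * sd_random     # illustrative threshold (assumption)
```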
The area with the average overlap taking nearly one above the dotted line in Fig 3A is expected to remain even in the thermodynamic limit. However, this area is included in the NR regime, since, according to the analysis of the neural dynamics after multiple learning steps to be discussed later, no more than a single pattern is recalled, as in the rest of the NR regime. We also show how spontaneous activity changes into evoked activity with an increase in in each regime, as shown in Fig 3C. In the R regime, by increasing from zero, the neural activity shows successive bifurcations such that the overlap with the target increases to approach unity at . The fixed-point attractor matching the target appears at . In the NR regime, the target and reverse-target attractors do not change on application of the input, but the basin volumes of the attractors increase. We analyze the connection matrix that is shaped through the learning process, in the R and NR regimes, by measuring the elements of the matrix C projected onto and , as defined by Eq (9), where = . Note that, for a given binary pattern , if the system has a large matrix element , then pattern is more stable in the absence of inputs for the neural dynamics in Eq (1). Similarly, when is larger, is less stable. Fig 4 shows the time series of the elements , and for the NR and R regimes. In the NR regime, only the element is much larger than the others after learning, while in the R regime, both and take salient positive values and and take salient negative values. The result that dominates in the NR regime means that the generated connection matrix takes a form similar to that of the Mattis model in a spin system [41], which corresponds to the Hopfield network with only one memorized pattern. In a network where is larger and the other elements are much smaller, the target and reverse-target patterns remain highly stable. This is consistent with the above analysis of the NR regime. In the R regime, in contrast, the connection matrix shows a form distinct from those of the Mattis and Hopfield-type networks. Remarkably, the matrix takes a form similar to that of the model in [34], where was adopted. Indeed, the behaviors of the spontaneous and evoked activities in this regime agree with those observed in that model [34]. In general, the behaviors are strongly dependent on the matrix elements.
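The projection in Eq (9) presumably measures the learned connectivity along the stored patterns; a hedged Python reading (continuing the sketches above) is the bilinear form c(a, b) = aᵀ J b / N², evaluated for the four combinations of target xi and input eta. The normalization and sign conventions here are assumptions.

```python
def projected_element(J, a, b):
    """Bilinear projection of the connection matrix onto binary patterns a and b."""
    return float(a @ J @ b) / len(a) ** 2

c_xx = projected_element(J, xi, xi)    # target-target element (Mattis-like term)
c_xe = projected_element(J, xi, eta)   # target-input cross element
c_ex = projected_element(J, eta, xi)
c_ee = projected_element(J, eta, eta)  # input-input element
```

In an NR-like network only c_xx would stand out, while in an R-like network the cross terms also take salient values of either sign.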
In Fig 5, the elements as a function of are plotted. For , all of the elements deviate saliently from zero, and as decreases, the elements , and decrease rapidly, while does not change. The regime changes from the R to the NR regime as this occurs. We now analyze why such connection matrices are formed through the learning process. The evolution of the matrix element is also determined by Eq (2), as in Eqs (10) and (11). Although also evolves temporally, we treat it as a constant value, because the relative scale of the elements is what is relevant for understanding the behavior. In the same way, the evolution of the other elements is determined by Eqs (12)–(14). In both regimes, the activity approaches a target and thus is greater than zero (and smaller than ) for most of the learning process. Thus, is positive for most of the learning process, and then takes a large positive value. In contrast, the change in the other elements is distinct between the two regimes, which is explained by the initial behavior of the learning process. In the R regime, the overlap with the input increases in the early stage of the learning process as is directed toward by the input, as shown in Fig 4A(ii). It is estimated that is and positive, which is much larger than . Thus, and are negative in the R regime, while is positive. These estimates of the signs of the elements are consistent with the matrix elements in Fig 4B. In the NR regime, in which is smaller and is larger, the increase in the overlap with the input in the early stage is much smaller than in the R regime; if is small, the neural activity does not respond strongly to the input, whereas if is large, the learning is completed before the overlap with the input increases. Thus, the temporal changes in , and are much smaller. Hence, only takes a large value, and thus the Mattis-type network is generated. Neural activities shaped through multiple learning steps are analyzed for I/O mappings that are learned sequentially, as shown in Fig 6. In the presence of each input (as indicated by the colored bars above the plot), the neural activity converges to the target to be memorized, in the same way as in the learning process of a single mapping (shown in Fig 1). Note that, although the learning process changes the synaptic connections and the flow structure of the neural activity, some of the structure generated in earlier learning steps is preserved, because the change in the flow structure at each learning step occurs in the presence of a different input pattern. We mainly present results after the learning of 40 mappings and analyze the behaviors of the spontaneous and evoked activities for the latest 30 mappings in the following analysis. (We chose 30 mappings because the memory capacity is almost 20, as shown later. The numbers 30 and 40 can be arbitrary, as long as they are chosen to be larger than the memory capacity.)
Corresponding to each phase in the one-step learning, we also found two distinct behaviors in the multiple-step learning: (i) the neural activity responds to multiple inputs so that an attractor that matches each learned pattern is generated upon each respective input, and thus multiple mappings are successfully memorized; (ii) the neural activity does not respond to any input; the two attractors that match the latest learned target and its reverse pattern exist in both the absence and presence of the input, and recall in response to an input is not observed. We call these the R and NR regimes, respectively, in the same manner as in the analysis of the one-step learning. In Fig 7, we plot the neural dynamics in the presence and absence of inputs after 40 learning steps for in the R regime. The recall processes of the 1st, 5th, and 30th targets are shown by the overlaps with the targets for 1, 5, and 30 in the absence and presence of the 1st, 5th, and 30th input, respectively. From here on, the index (1, 5, and 30 in this case) denotes the order of the I/O mapping beginning with the most recent, i.e., the 1st mapping is the latest learned one, while the 5th is that learned 5 steps earlier, and so forth. In the R regime, on applying an input, the overlap with the required target increases and takes on the highest value of all overlaps. In particular, in the presence of the latest input, the overlap with the latest target takes a much higher value, of nearly one, and an attractor that matches the latest target is generated. Thus, the latest target is successfully recalled by applying the corresponding input. In the presence of earlier inputs, the overlaps with the requested targets take smaller values than that with the latest target, but they are still larger than the overlaps with other patterns (see Fig S1), as long as the retrieved mapping is not one that was learned much earlier (as shown below). (The overlaps with the applied inputs also take higher values than the overlaps with other patterns, as do the overlaps with the required targets. Thus, we compare the overlaps with the targets against those with the inputs in what follows.) For example, the overlap with the 5th target is the highest among the overlaps, in particular higher than that with the 5th input (Fig 7B). Thus, the 5th target is also recalled according to Eq (4). From almost all initial values, the neural activity evolves to an attractor that gives the corresponding target pattern upon application of the appropriate input. Thus, the 1st and 5th targets are always recalled. According to the definition of memory in Eq (6), the 1st and 5th mappings are memorized in this network. In contrast, the overlap with the 30th target, which was learned much earlier, takes a much smaller value and is lower than the overlap with the 30th input. Thus the network cannot recall the 30th target, i.e., the target has not been memorized. Hence the memory capacity of the present network lies between 5 and 30. To examine the memory capacity, we compute the average overlaps with the targets in the presence of each earlier input, as well as the average overlap with the input itself, as shown in Fig 7B.
The overlap with an earlier target upon application of the corresponding input gradually decreases with an increase in , while the overlap with the applied input increases. The difference between the average overlaps with the -th target and input under the -th input, = , decreases with an increase in . Here, eventually crosses 0 at around 20. According to the definition of memory in Eq (8), the system in this regime succeeds in recalling the target by applying the corresponding input for up to 20 I/O mappings. To reduce the artifact on the memory capacity arising from fluctuations of the overlap due to the finite-size effect, we modify the definition of the memory capacity slightly, as in Eq (15). Here we set ; however, as long as the value is small, there is no essential change in the memory capacity. According to this modified definition, is computed to be 19. We also analyze the spontaneous neural dynamics that underlie the responses to the learned inputs analyzed above in the R regime. The spontaneous neural activity shows noisy behavior, and no fixed pattern is stable, as shown in Fig 7A. Irrespective of the noisy behavior, the overlaps with the memorized targets show high values from time to time. We compute the distributions over time of these overlaps and present them in Fig 7C. The overlap distribution with the latest target is much broader than that with a random pattern, and thus the neural activity selectively gets closer to the latest target from time to time, even in the absence of input. The distributions of the overlaps with earlier targets are also broader than that with a random pattern, even though the magnitude is smaller than for the overlap with the latest target. Following the analysis introduced for the single-step learning, we measure the SDs of the distributions of the overlaps with all the targets, represented by dots in Fig 7D. We also compute the SD averaged over networks, shown in Fig 7D as the light blue line. As shown, the SDs of the later targets decrease as increases. The major source of the decrease in the SD is a decrease in the amplitude of the overlap. Therefore, the spontaneous activity approaches the learned targets from time to time, and the closeness to the target during the spontaneous dynamics decreases with . The SD decreases approximately as a power law , with . This decay rate roughly agrees with that of the evoked activity, which is approximated by with . Both exponents are computed from fits of the overlap and the averaged SD to and , respectively, using the least-squares method. We will analyze the dependence of the decay rates on the parameters and below.
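A sketch of the capacity measurement in the spirit of the modified Eq (15), continuing the Python sketches above. It assumes stored (input, target) pairs ordered most recent first, and counts how far back the averaged target-minus-input overlap difference D stays above a small threshold theta; theta = 0.05 is an arbitrary stand-in for the unspecified value in the text.

```python
def memory_capacity(J, pairs, gamma_R, theta=0.05):
    """pairs: list of (eta_mu, xi_mu) with mu = 1 the most recently learned."""
    M = 0
    for mu, (eta_mu, xi_mu) in enumerate(pairs, start=1):
        traj = run(J, eta_mu, gamma_R, rng.uniform(-1.0, 1.0, N))
        D = float(np.mean(traj @ xi_mu) - np.mean(traj @ eta_mu)) / N
        if D <= theta:            # overlap with the input overtakes the target
            break
        M = mu
    return M
```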
In the NR regime, in contrast, the latest target and its reverse pattern exist as attractors in both the absence and presence of inputs for (see Fig S2). This is identical to the NR-regime behavior after one learning step, for which was nearly zero. Due to the stability of the latest target attractor, the neural activity does not respond to an earlier input ( ) either, so that is also nearly zero. According to the definition of memory, Eq (15), . By decreasing or increasing , the reverse-target attractor becomes less stable in the presence of the latest input and loses stability at some parameter values, while this attractor is still stable in the absence of the input. In this region, is equal to one, while there is still no response to an earlier input, and thus in this region = 1. So far, we have analyzed the spontaneous neural activity with 0 and the evoked activity with . We now examine how the spontaneous activity is transformed into the evoked activity as is increased. This change with changing is regarded as a bifurcation, or a sequence of bifurcations, in terms of dynamical systems theory. The bifurcations of the neural activity, revealed by increasing for the 1st, 5th, and 30th inputs for the network given in Fig 7, are shown in Fig 8. In the R regime, the overlap with the 1st (i.e., latest) target increases monotonically and continuously with increasing strength of the 1st input. Finally, the fixed point that matches the 1st target is generated, not only for the network used in the figure but also for most of the networks in the R regime. The change to a fixed point is understood as a low-dimensional bifurcation, while the whole sequence of neural-activity changes involves higher-dimensional dynamics. For the 5th and 30th inputs, the overlap with the corresponding input increases continuously with an increase in the input strength, in a manner similar to the bifurcation diagram for the 1st input. In contrast to the latest input, however, the attractor is not a fixed-point attractor even for , where the evoked activity still shows chaotic behavior. Apart from the change to a fixed-point attractor, the bifurcation sequences involve a large number of degrees of freedom in a high-dimensional ( ) space. Hence, plotting a few macroscopic variables, i.e., the overlaps of the neural activity with a few targets, is not sufficient to capture the entire bifurcation sequence. Therefore, to characterize the chaotic dynamics, we measured the Lyapunov spectrum of the neural activity dynamics. With an increase in the input strength, the number of positive Lyapunov exponents decreases, implying the existence of successive bifurcations from a high-dimensional attractor to a lower-dimensional attractor (see Fig 8). Accordingly, the dimension of the neural-activity attractor also decreases. No positive Lyapunov exponents exist once the fixed-point attractor is reached for the input that was just learned, while on application of an earlier input a decrease in the number of positive exponents is observed, but the number does not reach zero. In the NR regime, the latest target and reverse-target fixed-point attractors exist with . Even on increasing the input strength, these attractors remain stable and no bifurcation occurs.
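A sketch of the bifurcation scan just described, continuing the earlier Python sketches: sweep the recall strength gamma_R, record the long-time mean and SD of the target overlap, and read off where the activity collapses onto the target fixed point. A full Lyapunov spectrum would additionally require propagating tangent vectors through the Jacobian with repeated QR re-orthonormalization, which is omitted here.

```python
branch = []
for g in np.linspace(0.0, 2.0, 41):          # sweep the recall input strength
    traj = run(J, eta, gamma_R=g, x0=rng.uniform(-1.0, 1.0, N))
    m = traj @ xi / N
    branch.append((g, float(np.mean(m)), float(np.std(m))))
# an overlap near 1 with near-zero SD signals collapse onto the target fixed point;
# a positive SD at large g signals a residual chaotic evoked attractor
```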
The dependence of the spontaneous and evoked activities on the two parameters, and , is analyzed through the capacity and the SD. The dependence of the evoked activity is explored by measuring the capacity according to Eq (15), with the results shown in Fig 9A. In the R regime, with a larger and smaller , a high capacity is observed, while in the NR regime, with a smaller and larger , the capacity is zero or one. Over the entire parameter space, the overlap with the requested target in the presence of an earlier input decreases, i.e., decreases as increases, while that with the corresponding input increases. However, the decay rate of the overlap with the target as a function of and the growth rate of the overlap with the input depend on and . For a large and small , e.g., as shown in Fig 7B | Introduction, Model, Results, Discussion | Recent experimental measurements have demonstrated that spontaneous neural activity in the absence of explicit external stimuli has remarkable spatiotemporal structure. This spontaneous activity has also been shown to play a key role in the response to external stimuli. To better understand this role, we proposed a viewpoint, “memories-as-bifurcations,” that differs from the traditional “memories-as-attractors” viewpoint. Memory recall from the memories-as-bifurcations viewpoint occurs when the spontaneous neural activity is changed to an appropriate output activity upon application of an input, known as a bifurcation in dynamical systems theory, wherein the input modifies the flow structure of the neural dynamics. Learning, then, is a process that helps create neural dynamical systems such that a target output pattern is generated as an attractor upon a given input. Based on this novel viewpoint, we introduce in this paper an associative memory model with a sequential learning process. Using simple Hebbian-type learning, the model is able to memorize a large number of input/output mappings. The neural dynamics shaped through the learning exhibit different bifurcations to make the requested targets stable upon an increase in the input, and the neural activity in the absence of input shows chaotic dynamics with occasional approaches to the memorized target patterns. These results suggest that these dynamics facilitate the bifurcations to each target attractor upon application of the corresponding input, which thus increases the capacity for learning. This theoretical finding about the behavior of the spontaneous neural activity is consistent with recent experimental observations in which the neural activity without stimuli wanders among patterns evoked by previously applied signals. In addition, the neural networks shaped by learning properly reflect the correlations of input and target-output patterns in a manner similar to those designed in our previous study.
| The neural activity without explicit stimuli shows highly structured patterns in space and time, known as spontaneous activity. This spontaneous activity plays a key role in the behavior of the response to external stimuli, which is generated by the interplay between the spontaneous activity and the external input. Studying this interplay and how it is shaped by learning is an essential step toward understanding the principles of neural processing. To address this, we proposed a novel viewpoint, memories-as-bifurcations, in which the appropriate changes in the activity upon the input are embedded through learning. Based on this viewpoint, we introduce here an associative memory model with sequential learning by a simple Hebbian-type rule. In spite of its simplicity, the model memorizes the input/output mappings successively, as long as the input is sufficiently large and the synaptic change is slow. The spontaneous neural activity shaped after learning is shown to itinerate over the memorized targets, in remarkable agreement with experimental reports. These dynamics may prepare for and facilitate the generation of the learned response to the input. Our results suggest that this is a possible functional role of the spontaneous neural activity, while the uncovered network structure inspires a design principle for memories-as-bifurcations. | physics, mathematics, biology, computational biology, nonlinear dynamics, biophysics, neuroscience | null |
2,415 | journal.pcbi.1001059 | 2,011 | A Computational and Experimental Study of the Regulatory Mechanisms of the Complement System | The complement system is pivotal to defending against invading microorganisms. The complement proteins recognize conserved pathogen-associated molecular patterns (PAMPs) on the surface of invading pathogens [1] to initiate the innate immune response. Complement activity also enhances adaptive immunity [2, 3] and participates in the clearance of apoptotic cells [4] as well as of damaged and altered self tissue. The complement proteins in the blood normally circulate as inactive zymogens. Upon stimulation, proteases in the system cleave the zymogens to release active fragments and initiate an amplifying cascade of further cleavages. There are three major complement activation routes: the classical, the lectin, and the alternative pathways [5]. Regardless of how these pathways are initiated, complement activity leads to proteolytic activation and deposition of the major complement proteins C4 and C3, which induces phagocytosis and the subsequent assembly of the membrane attack complex that lyses the invading microbes. However, complement is a double-edged sword: adequate complement activation is necessary for killing bacteria and removing apoptotic cells, while excessive complement activation can harm the host by generating inflammation and exacerbating tissue injury. Dysregulation of the balance between complement activation and inhibition can lead to rheumatoid arthritis [6], systemic lupus erythematosus [7], Alzheimer's disease [8], and age-related macular degeneration [9]. Since the final outcome of complement-related diseases may be attributable to the imbalance between activation and inhibition [10], manipulation of this balance using drugs represents an interesting therapeutic opportunity awaiting further investigation. In light of this potential, complement inhibitors such as factor H and C4b-binding protein (C4BP) are critical, since they play important roles in tightly controlling the proteolytic cascade of complement and avoiding excessive activation. Therefore, a systems-level understanding of activation and inhibition, as well as of the roles of inhibitors, will contribute toward the development of complement-based immunomodulation therapies. Complement is usually initiated by the interaction of several pattern-recognition receptors with the surface of pathogens. C-reactive protein (CRP) [11] and ficolins are two initiators of the classical and lectin pathways, which boost immune responses by recognizing phosphorylcholine (PC) or N-acetylglucosamine (GlcNAc), respectively, displayed on the surface of invading bacteria [12–14]. Recently, it was discovered that under local infection-inflammation conditions, as reflected by pH and calcium levels, the conformations of CRP and L-ficolin change, which leads to a strong interaction between them [15]. This interaction triggers crosstalk between the classical and lectin pathways and induces new amplification mechanisms, which in turn reinforce the overall antibacterial activity and bacterial clearance. On the other hand, C4BP, a major complement inhibitor, is synthesized and secreted by the liver. The estimated plasma concentration of C4BP is 260 nM under normal physiological conditions [16], but its plasma level can be elevated up to four-fold during inflammation [17, 18]. Through its α-chain [19, 20], C4BP modulates complement pathways by controlling
C4b-mediated reactions in multiple ways [21–23]. Further, C4BP has been proposed as a therapeutic agent for complement-related autoimmune diseases on the premise that mouse models supplemented with human C4BP showed attenuated progression of arthritis [24]. Therefore, it is important to understand the systemic effect and the underlying inhibitory mechanism of C4BP. With this background, we constructed a detailed computational model of the complement network consisting of a system of ordinary differential equations (ODEs). The large model size and the many unknown kinetic rate parameters lead to significant computational challenges. Using the technique developed in [25], we approximated the ODE dynamics as a dynamic Bayesian network [26] and used it to estimate the model parameters. After constructing the model, we investigated the enhancement mechanism induced by local inflammation and its interplay with the inhibition mechanism induced by C4BP. Our studies confirmed and further elucidated the previous experimental findings [15]. Specifically, using our model we established a detailed relationship between the antimicrobial response and the strength of the crosstalk between CRP and L-ficolin, as determined by various combinations of pH and calcium levels. We also found that C4BP prevents complement over-activation and restores homeostasis, but that it achieves this in two distinct ways depending on whether the complement activity was initiated by PC or GlcNAc. Finally, the computational model suggested that the major inhibitory effect of C4BP is to potentiate the natural decay of the C3 convertase (C4bC2a). These findings regarding the role of C4BP were experimentally validated. An earlier mathematical study [27] of the complement system focused on the classical pathway. This study assumed the dynamics to be linear, which is a severe restriction. A later study by Korotaevskiy et al [28] more realistically assumed the dynamics to be non-linear. It also included the alternative pathway. Its main focus was to derive quantitative conclusions regarding the lag time of the immune response as the initial concentrations of the constituent proteins were varied. Relative to [28], our model additionally includes the lectin pathway and the recently identified amplification pathways induced by the crosstalk between CRP and L-ficolin [15]. On the other hand, given our focus on the up- and down-regulation mechanisms of complement, we do not model the alternative pathway in detail, since its role is to maintain a basal level of complement activation. Instead, this basal activity and the effects of other mechanisms such as C2 bypass [29] are implicitly captured by the kinetic parameters in our model. Given our focus on the amplification and down-regulation mechanisms of complement, we included in our model only the key proteins of the classical and lectin pathways. The basal activity maintained by the alternative pathway and other mechanisms is implicitly captured by the kinetic parameters in our model. A schematic representation of the model structure is shown in Figure 1A. The cascade of events captured by the model can be described as follows. The classical pathway is initiated by the binding of antibodies or CRP to antigens or PAMPs. In our model, in order to decouple the involvement of the adaptive immune response, the classical pathway is triggered by the binding of CRP to PC, which is a ligand often displayed on the surface of invading
bacteria [30, 31]. Deposited CRP then binds to the C1-complex (formed by C1q, two molecules of C1r, and two molecules of C1s), which is thereby activated. The activated C1-complex recruits C4, leading to the cleavage of C4 into its fragments, C4b and C4a. After the binding of C2 to C4b, the same protease complexes are responsible for generating the fragments C2a and C2b by cleaving C2. C2a and C4b then form the C4bC2a complex, which is an active C3 convertase, cleaving C3 into C3a and C3b. The formation of C3b exposes a previously hidden thioester group that covalently binds to patches of hydroxyl and amino groups on the bacterial surface [32]. The surface-deposited C3b plays a central role in all subsequent steps of the complement cascade: (1) it acts as an opsonin that enhances binding and leads to the elimination of bacteria by phagocytes; (2) it induces the formation of the membrane attack complex, leading to the lysis of bacteria. Since the concentration of deposited C3 reflects the antibacterial activity of complement, we terminated our model at this step to simplify the network. On the other hand, the lectin pathway is initiated by the binding of mannose-binding lectin (MBL) or ficolins to PAMPs on the pathogen surface. In our model, we focused on the lectin pathway initiated by L-ficolin, as it can interact with CRP and induce crosstalk between the classical and lectin pathways. L-ficolin recognizes various PAMPs on the bacterial surface via the acetyl group on the GlcNAc moiety [33, 34]. Therefore, in our model the lectin pathway was triggered by the binding of L-ficolin to GlcNAc on the bacterial surface. Subsequently, a protease zymogen called MASP-2 is recruited and activated. Activated MASP-2 cleaves C4 and C2 to form C4bC2a, which is the C3 convertase. At this point, the classical pathway and the lectin pathway merge at the cleavage step of the central complement protein, C3, which hence constitutes the endpoint of our model. As discovered in [15], infection-induced local inflammation conditions (slight acidosis and hypocalcaemia) provoke a strong crosstalk between CRP and L-ficolin [15]. This elicits two new complement-amplification pathways, which reinforce the classical and lectin pathways. Since we aimed to study complement activation and modulation under pathophysiological conditions, we included these two amplification pathways (Figure 1A, purple) in our model. Infection by bacteria displaying PC will induce the CRP:L-ficolin-mediated amplification pathway PC→CRP:L-ficolin→MASP2→C4→C2→C3. On the other hand, infection by bacteria displaying GlcNAc will induce the CRP:L-ficolin-mediated amplification pathway GlcNAc→CRP:L-ficolin→C1→C4→C2→C3. The complement system allows a rapid attack on intruding bacteria while at the same time protecting host cells from over-activation. C4BP, a major inhibitor of complement activation, was reported to either accelerate the decay of the convertases or aid the proteolytic inactivation of key players in the pathway into inactive forms, like factor H [32], but the systemic effect of C4BP has remained unclear. Hence, in our model, we included this major multifunctional inhibitor. Upstream in the complement cascade, C4BP competes with C1 for the immobilized CRP [23]. Downstream of this, C4BP binds to C4b and serves as a cofactor to the plasma serine protease factor I in the cleavage of C4b, both in the fluid phase and when C4b is deposited on bacterial surfaces [21].
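To illustrate the rate laws the model combines (mass action for association, Michaelis-Menten for cleavage), the following is a deliberately tiny Python/SciPy sketch of one slice of the cascade — C4 cleavage, C4bC2a assembly, C4BP-potentiated convertase decay, and C3 cleavage. This is not the authors' 42-species model; every rate constant, pool, and initial concentration here is an illustrative placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative constants (placeholders, not the paper's fitted parameters)
KCAT1, KM1, KON, KCAT3, KM3 = 1.0, 0.5, 0.8, 2.0, 1.0
KDECAY, KC4BP = 0.01, 0.2           # natural and C4BP-potentiated convertase decay
C1S, C2A, C4BP = 0.1, 0.3, 0.26     # treated as constant pools for simplicity

def rhs(t, y):
    C4, C4b, C4bC2a, C3, C3b = y
    v_cleave_C4 = KCAT1 * C1S * C4 / (KM1 + C4)     # Michaelis-Menten cleavage
    v_assemble = KON * C4b * C2A                    # mass-action association
    v_decay = (KDECAY + KC4BP * C4BP) * C4bC2a      # decay, potentiated by C4BP
    v_cleave_C3 = KCAT3 * C4bC2a * C3 / (KM3 + C3)
    return [-v_cleave_C4,
            v_cleave_C4 - v_assemble,
            v_assemble - v_decay,
            -v_cleave_C3,
            v_cleave_C3]

sol = solve_ivp(rhs, (0.0, 3600.0), [3.0, 0.0, 0.0, 7.0, 0.0], max_step=5.0)
c3_deposited = sol.y[4]             # read-out used below as the antibacterial proxy
```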
In addition, C4BP is able to prevent the assembly of the C3 convertase and to accelerate the natural decay of the complex [35]. All of the above effects of C4BP are considered in our model, and the relevant components are depicted as red bars in Figure 1A. The reaction network diagram of the model is shown in Figure 1B. Processes such as protein association, degradation, and translocation are modeled with mass-action kinetics, and processes such as cleavage, activation, and inhibition with Michaelis-Menten kinetics. The resulting ODE model consists of 42 species, 45 reactions, and 85 kinetic parameters, 71 of which are unknown. The details can be found in the supporting information (Text S1). Due to the large model size and the many unknown kinetic parameters, tasks such as parameter estimation and sensitivity analysis became very challenging. Hence, we applied the probabilistic approximation technique developed by Liu et al [25] to derive a simpler model based on the standard probabilistic graphical formalism of Dynamic Bayesian Networks (DBNs) [26]. Briefly, this approximation scheme consists of the following steps: (i) discretize the value space of each variable and parameter into a finite set of intervals; (ii) discretize the time domain into a finite number of discrete time points; (iii) sample the initial states of the system according to an assumed uniform distribution over certain intervals of values of the variables and parameters; (iv) generate a trajectory for each sampled initial state and view the resulting set of trajectories as an approximation of the dynamics defined by the ODE system; (v) store the generated set of trajectories compactly as a dynamic Bayesian network and use Bayesian inference techniques to perform analysis. A more detailed description of this construction can be found in the Methods section, while we explain in the Discussion section how we fixed the number of trajectories to be generated and the maximum time point up to which the trajectories are to be constructed. In the ODE model, the PC-initiated and GlcNAc-initiated complement cascades are merged for convenience. By suppressing these two cascades one at a time (by setting the corresponding expressions in the reaction equations to zero), we constructed two dynamic Bayesian networks: one for the PC-initiated complement cascade and the other for the GlcNAc-initiated complement cascade. The range of each variable and parameter was discretized into 6 non-equal-size intervals and 5 equal-size intervals, respectively. The time points of interest were set to {0, 100, 200, …, 12600} (seconds). Each of the resulting DBN approximations encoded trajectories generated by sampling the initial values of the variables and the parameters from the prior, which was assumed to be a uniform distribution over certain intervals. The quality of the approximations relative to the original ODE dynamics was sufficiently high, and the details can be found in the supporting information (Figure S1).
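A sketch of steps (iii)–(v), continuing the SciPy sketch above: sample initial conditions from a uniform prior, integrate the toy ODEs, discretize each trajectory at the chosen time points, and tabulate empirical transition counts whose row-normalization gives the DBN's conditional probability tables. The interval edges, sample counts, and tracked species are placeholders.

```python
rng = np.random.default_rng(1)
edges = np.linspace(0.0, 8.0, 6)[1:-1]          # interval boundaries for one species
tpoints = np.arange(0.0, 3601.0, 100.0)         # discretized time domain
counts = {}                                     # (slice, prev_interval, next_interval) -> count

for _ in range(2000):                           # sampled trajectories
    y0 = [3.0 * rng.random(), 0.0, 0.0, 7.0 * rng.random(), 0.0]
    sol = solve_ivp(rhs, (0.0, 3600.0), y0, t_eval=tpoints, max_step=5.0)
    ivals = np.searchsorted(edges, sol.y[4])    # deposited C3 -> interval indices
    for k in range(len(ivals) - 1):
        key = (k, int(ivals[k]), int(ivals[k + 1]))
        counts[key] = counts.get(key, 0) + 1
# normalizing counts within each (k, prev_interval) yields P(next | prev) per time slice
```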
The values of initial concentrations and 14 kinetic parameters were obtained from literature data ( Table S1 and Table S2 ) ., To estimate the remaining 71 kinetic parameters , we generated test data by incubating human blood under normal and infection-inflammation conditions with beads coated with PC or GlcNAc , followed by immunodetection of the deposited CRP , C4 , C3 and C4BP in time series ., For PC-beads , the concentration levels of deposited CRP , C4 , C3 and C4BP were measured at 8 time points from 0 to 3 . 5 h ( Figure 2A , B , red dots ) ., For GlcNAc-beads , the concentration levels of deposited MASP-2 , C4 , C3 and C4BP were also measured at 8 time points from 0 to 3 . 5 h ( Figure 2C , D , red dots ) ., To estimate the unknown kinetic parameters , a two-stage DBN-based method 25 was deployed ., In the first stage , probabilistic inference applied to the discretized DBN approximation was used to find the combination of intervals of the unknown parameters that has the maximal likelihood , given the evidence consisting of the test data ., As mentioned above , each unknown parameter's value space was divided into 5 equal intervals and the inference method called the factored frontier algorithm 35 was used to infer the marginal distributions of the species at different time points in the DBN ., We then computed the mean of each marginal distribution and compared it with the time course experimental data ., To train the model by iteratively improving fitness to data , we modified the tool libSRES 36 and used its stochastic ranking evolutionary strategy ( SRES ) to search in the discretized parameter space consisting of 5^71 combinations of interval values of the unknown parameters ., The result of this first stage was a maximum likelihood estimate of a combination of intervals of parameter values ., In the second stage we then searched within this combination of intervals having maximal likelihood ., Consequently , the size of the search space for the second stage was just 1/5^71 of the original search space ., We used the SRES search method and the parameter values thus estimated are shown in Table S2 ., In principle , given the noisy and limited experimental data and the high dimensionality of the system , one could stop with the first stage 37 and try to work with an interval of values for each parameter rather than a point value ., However , in our setting we wanted to use the ODE model too for conducting in silico experiments such as varying initial concentrations , including the down- and over-expression of C4BP ., This would have been difficult to achieve by working solely with our current DBN approximation ., We address this point again in the Discussion section ., Figure 2A–2D shows the comparison of the experimental time course training data ( red dots ) with the model simulation profiles generated using the estimated parameters ( blue lines ) ., The model predictions fit the training data well for most of the cases ., In some cases , the simulations were only able to reproduce the trends of the data ., This may be due to the simplifications assumed by our model and further refinement is probably necessary ., We next validated the model using previously published experimental observations 15 ., In particular , the normalized concentration level of deposited C3 was used to predict the antibacterial activity since C3 deposition initiated the opsonization process and the lysis of bacteria ., We first simulated the concentration level of deposited C3 at 1 h under different conditions ., We next normalized the results so that the maximum value among them equals 95% , which is the maximum bacterial killing rate reported in the experimental observations 15 ., The normalized values were then treated as predicted bacterial killing rates ., The simulation results are shown in Figure 2E and 2F as black bars ., Consistent with the experimental data ( Figure 2E , grey bars ) , our simulation showed that under the infection-inflammation conditions , the P .
aeruginosa , a clinically challenging pathogen , can be efficiently killed ( 95% bacterial killing rate ) by complement whereas under the normal condition , only 28% of the bacteria succumbed ( Figure 2E , black bars ) ., Consistent with experimental data , our simulation results show that in the patient serum , depletion of CRP or ficolin induced a significant drop in the killing rate from 95% to 33% or 25% respectively , indicating that the synergistic action of CRP and L-ficolin accounted for around 40% of the enhanced killing effect ., However , in the normal serum , depletion of CRP or ficolin only resulted in a slight drop in the killing rate from 28% to 18% or 10% respectively ., Furthermore , simulating a high CRP level ( such as in the case of cardiovascular disease ) under the normal healthy condition did not further increase the bacterial killing rate ., As shown in Figure 2F , the simulation results matched the experimental data ., Thus , our model was able to reproduce the published experimental observations shown in both Figure 2E and 2F with less than 10% error ., This not only validated our model thus promoting its use for generating predictions , but also yielded positive evidence in support of the hypothesized amplification pathways induced by infection-inflammation condition ., It also suggested that the antibacterial activity can be simulated efficiently by the level of deposited C3 and this was used to generate model predictions described in later sections ., We performed local and global sensitivity analysis of the model to identify species and reactions that control complement activation during infection , and to evaluate the relative importance of initial concentrations and kinetic parameters for the model output ., To identify critical species , we first calculated the scaled absolute local sensitivity coefficients 38 for initial concentrations of major species using the COPASI tool 39 ., The model outputs were defined as the peak amplitude ( maximum activation ) and integrated response ( area under the activation curve that reflects the overall antibacterial activity ) of C3 deposition ., The results are shown in Figure 3A ., Both the peak amplitude and integrative response were strongly influenced by initial concentrations of C2 and C3 , and were mildly influenced by initial concentrations of C4BP , C1 and C4 ., In contrast , the low sensitivities of CRP , MASP-2 and L-ficolin indicate that over-expression of these proteins is unlikely to increase the antibacterial activity ., Interestingly , it was observed that the integrative response was more sensitive than the peak amplitude to the changes in the initial concentration of PC ., Since the concentration of PC is correlated to the amount of invading bacteria , this result implies that the maximum complement response level may not increase as the amount of bacteria increases but the overall response ( i . e . 
the area under the curve obtained by integrating the response level over time ) will be enhanced to combat the increased number of bacteria ., In order to identify critical reactions , we next computed global sensitivities for kinetic parameters ., To reduce complexity , we used the DBN approximations ., Multi-parametric sensitivity analysis ( MPSA ) 40 was performed on the DBN for PC-initiated complement cascade ( the details are presented in the Materials and Methods section ) ., The results are shown in Figure 3B ., Strong controls over the whole system are distributed among the parameters associated with the immobilisation of C3b with the surface , interaction between CRP and L-ficolin , cleavage of C2 and C4 , and the decay of C3 convertase ( see Figure 1B , reactions labeled in red ) ., The sensitivity of reactions associated with C3 , C2 and C4 is consistent with the local sensitivity analysis , which highlighted the significant role of major complement components ., The high sensitivity of interaction of CRP and L-ficolin confirms that the overall antibacterial response depends on the strength of the crosstalk between the classical and lectin pathways ., In addition , since the decay of C3 convertase is one of the regulatory targets of C4BP , the sensitivity of the system to a change in the rate of decay of C3 convertase suggested that the regulatory mechanism by C4BP plays an important role in complement ., Since the critical reactions identified are common in PC- and GlcNAc-initiated complement cascades , MPSA results using the other DBN will produce similar results and hence this analysis was not performed ., We next focused our investigation on the enhancement mechanism by the crosstalk and the regulatory mechanism by C4BP ., Under infection-inflammation conditions where PC-CRP:L-ficolin or GlcNAc-L-ficolin:CRP complex is formed , the amplification pathways are triggered ., Model simulation showed that if C1 and L-ficolin or CRP and MASP-2 competed against each other , the antibacterial activity of the classical pathway or lectin pathway might be deprived of the amplification pathways ( see Figure S2 ) ., Therefore , in order to achieve a stable enhancement , C1 and L-ficolin ( or CRP and MASP-2 ) must simultaneously bind to CRP ( or L-ficolin ) ., Further , the abilities of CRP and L-ficolin to trigger subsequent complement cascade were not affected by the formation of this complex ., This is consistent with the previous experimental observation that two amplification pathways co-exist with the classical and lectin pathways 15 ., According to 15 , slight acidosis and mild hypocalcaemia ( pH 6 . 5 , 2 mM calcium ) prevailing at the vicinity of the infection-inflammation triggers a 100-fold stronger interaction between CRP and L-ficolin compared to the normal condition ( pH 7 . 4 , 2 . 
5 mM calcium ) ., This can be explained by the fact that the pH value and calcium level influence the conformations of CRP and L-ficolin , which in turn govern their binding affinities ., Therefore , the overall antibacterial response , which is influenced by the binding affinity of CRP and L-ficolin , will be sensitive to the pH value and calcium level ., To confirm this and further investigate the effects of pH and calcium on the antibacterial response , we simulated the complement system under different pH and calcium conditions ., Based on the previous biochemical analysis 15 , we first estimated functions using polynomial regression to predict the binding affinity of CRP and L-ficolin for different pH values and calcium levels ( Figure 4A , B , right panel ) ., In the right panels of Figure 4A and 4B , the reported binding affinities 15 were normalized and are shown as dots ., By curve fitting the dots , we estimated polynomial functions that can be used to predict the binding affinity ., The curves of these functions are shown in red ., We then simulated the C3 deposition dynamics using the predicted binding affinities at pH ranging from 5 . 5 to 7 . 4 in the presence of 2 mM and 2 . 5 mM calcium ., The simulation time was chosen to be 3 . 5 h , which is the time frame of the response peaks ., The results are shown in Figure 4A and 4B ., Under both 2 mM and 2 . 5 mM calcium conditions , decreasing pH not only increases the peak amplitude ( maximum activation ) but also hastens the peak time ( time of maximum activation ) ., To further compare the effects of the two calcium levels , the dose-response curves were generated as shown in Figure 4C ., The antibacterial response was predicted by simulating the system for 1 . 5 h ., At 2 mM calcium ( blue curve ) , the antibacterial response was clearly greater than at 2 . 5 mM calcium ( pink curve ) , indicating that slight hypocalcaemia enhanced the antibacterial activity in a stable manner ., In addition , the pH-responses were reaching saturation levels when pH was near 5 . 5 ( Figure 4C ) , implying that undesirable complement-enhancement by extremely low pH conditions can be avoided .
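As a concrete illustration of this curve-fitting step , a polynomial can be fitted to the normalized affinities with a few lines of numpy . The pH values , affinity values and polynomial degree below are placeholders , not the published measurements from 15 .

```python
import numpy as np

# Placeholder data: normalized CRP:L-ficolin binding affinity vs pH at 2 mM calcium
ph = np.array([5.5, 6.0, 6.5, 7.0, 7.4])
affinity = np.array([1.00, 0.85, 0.55, 0.20, 0.05])  # hypothetical values

coeffs = np.polyfit(ph, affinity, deg=3)   # polynomial regression (degree assumed)
predict_affinity = np.poly1d(coeffs)

print(predict_affinity(6.2))  # predicted binding affinity at an unmeasured pH
```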
This also suggests that the saturation of the pH-response was influenced by the calcium level in the milieu ., We next investigated the complement regulation by the major inhibitor , C4BP , under infection-inflammation conditions ., We varied the initial concentration of C4BP and simulated the PC- and GlcNAc-initiated complement under infection-inflammation conditions ., The simulation time was chosen to be 5 h , which is slightly beyond the largest time point of our training experimental data ., The predicted effects of the initial concentration of C4BP on the antibacterial response in terms of C3 deposition are shown in Figure 5A and 5B ., For PC-initiated complement activation , when the starting amount of C4BP was perturbed around the normal level of 260 nM 16 , increasing the C4BP level only delayed the peak time but did not decrease the peak amplitude significantly ., In contrast , reducing the initial C4BP level clearly hastened the complement activation and maximized the activity ., Interestingly , the GlcNAc-initiated complement activation ( Figure 5B ) behaved differently from the PC-mediated complement activation ( Figure 5A ) ., Around the normal level of 260 nM , perturbing the initial C4BP changed the maximum activity but did not affect the peak time , suggesting that C4BP plays distinct roles in regulating the classical and lectin pathways ., To experimentally verify the model predictions , we perturbed the initial amount of C4BP in the patient sera by ( i ) spiking with purified C4BP ( high C4BP ) and ( ii ) reducing it by immunoprecipitation ( low C4BP ) ., The resulting C4BP levels in the normal and patient sera are shown in Figure S3 ., The sera were then incubated with PC- or GlcNAc-beads to initiate complement ., Unaltered serum served as the normal control ., The time profiles of the deposited C4BP level were measured over 4 h using Western blot ( Figure 5C ) ., Comparing the kinetic profiles of the C4BP deposition initiated by both PC and GlcNAc , we observed the following order of peak time : high C4BP > normal C4BP > low C4BP , indicating that the pre-existing initial level of C4BP was indeed the driving force controlling the deposition of complement components onto the simulated bacterial surface ., We then measured the time profiles of deposited C3 ., Figure 5D shows that with PC-beads , high C4BP sera induced an early peak and low C4BP delayed the peak of C3 deposition ., The peak amplitude for all three conditions was at a similar level ., These observations are consistent with the simulation results shown in Figure 5A ., With GlcNAc-beads , reducing C4BP led to a slight increase in the peak height although the peak coincided with the normal condition ., In contrast , spiking the sera ( high C4BP ) delayed and lowered the peak amplitude of C3 deposition ., Thus the experimental results broadly agree with our model predictions presented in Figure 5B ., We next investigated how C4BP mediates its inhibitory function ., As shown in Figure 1A , the inhibitory effects of C4BP target different sites in complement : ( a ) binding to CRP and blocking C1 , ( b ) preventing the formation of C4bC2a by binding to C4b , ( c ) acting as a cofactor for factor I in the proteolytic inactivation of C4b , and ( d ) accelerating the natural decay of the C4bC2a complex , which prevents the formation of C4bC2a and disrupts already formed convertase ., To identify the
dominant mechanism , we employed in silico knockout of the reactions involved for each mechanism and performed simulations ., Figure 6A–6D shows the model predictions ., Among the four inhibitory mechanisms , only the knockout of reaction ( d ) significantly enhanced the complement activation , suggesting that facilitating the natural decay of C4bC2a ( C3 convertase ) is the most important inhibitory function of C4BP ., This is consistent with our previous observations derived from sensitivity analysis , which identified the decay of C3 convertase as a critical reaction ., In addition , as the inhibitory effect of reaction ( d ) is stronger than the others , knocking out reactions ( a ) and ( b ) can even reduce the complement activity , which is counter-intuitive and emphasizes the significance of a systems-level understanding ., To confirm our hypothesis that the major inhibitory role of C4BP relies on accelerating the decay of C3 convertase , we measured the C4 cleavage at different time points ., Figure 6E ( black triangles ) indicates the inactive C4b fragments present from the time points of 20 , 30 and 90 min under high , normal , and low C4BP conditions , suggesting that C4BP aided cleavage and inactivation of C4b , and thereby caused the natural decay of the C4bC2a ., Here , we developed an ODE-based dynamic model for the complement system accompanied by DBN-based approximations of the ODE dynamics to understand how the complement activity is boosted under local inflammation conditions while a tight surveillance is established to attain homeostasis ., Previously published models of the complement system have focused on the classical and alternative pathways 27 , 28 ., Our model includes the lectin pathway and , more interestingly , the recently identified amplification pathways induced by local inflammation conditions 15 ., It also encompasses the regulatory effects of C4BP in the presence of enhanced complement activity ., The ODE model incorporated both the PC-initiated and GlcNAc-initiated complement together for convenience ., By setting the corresponding expressions to zero one at a time , two DBN approximations were then derived; one for the PC-initiated complement cascade and the other for the GlcNAc-initiated complement cascade ., For constructing the DBN approximation from an ODE model , one needs to fix two quantities : the maximal time point up to which each trajectory is to be explored , and the number of trajectories to be generated ., The maximal time point is set to be suitably beyond the largest time point for which experimental data is available ., In the present study , 3 . 5 h is the largest time point of our training experimental data ., Based on this we set the maximal time point to 5 h ., After constructing the model , we simulated the system up to 10 h and found no relevant dynamics after 3 . 5 h .
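This check that nothing of interest happens beyond the chosen horizon can be made explicit as a small post-simulation test . The sketch below is our own : it assumes a solution array `sol` over a time grid `t` in seconds , as produced by a scipy ODE integration , and the 1% tolerance is an arbitrary choice .

```python
import numpy as np

def dynamics_settled(t, sol, t_cut=3.5 * 3600, rel_tol=0.01):
    """True if no species changes by more than rel_tol of its
    overall range after t_cut seconds (i.e. no relevant dynamics)."""
    late = sol[t >= t_cut]
    span = sol.max(axis=0) - sol.min(axis=0) + 1e-12  # guard against flat species
    return bool(np.all((late.max(axis=0) - late.min(axis=0)) / span < rel_tol))
```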
As for the choice of the number of trajectories , ideally one would like to specify an acceptable amount of error between the actual and the approximated dynamics and use this error bound to determine the required number of trajectories ., This is however difficult to achieve due to the following : The dynamic Bayesian network we construct is a factored Markov chain ., It approximates the idealized Markov chain induced by the ODE dynamics ., This i | Introduction, Results, Discussion | The complement system is key to innate immunity and its activation is necessary for the clearance of bacteria and apoptotic cells ., However , insufficient or excessive complement activation will lead to immune-related diseases ., It is so far unknown how the complement activity is up- or down- regulated and what the associated pathophysiological mechanisms are ., To quantitatively understand the modulatory mechanisms of the complement system , we built a computational model involving the enhancement and suppression mechanisms that regulate complement activity ., Our model consists of a large system of Ordinary Differential Equations ( ODEs ) accompanied by a dynamic Bayesian network as a probabilistic approximation of the ODE dynamics ., Applying Bayesian inference techniques , this approximation was used to perform parameter estimation and sensitivity analysis ., Our combined computational and experimental study showed that the antimicrobial response is sensitive to changes in pH and calcium levels , which determines the strength of the crosstalk between CRP and L-ficolin ., Our study also revealed differential regulatory effects of C4BP ., While C4BP delays but does not decrease the classical complement activation , it attenuates but does not significantly delay the lectin pathway activation ., We also found that the major inhibitory role of C4BP is to facilitate the decay of C3 convertase ., In summary , the present work elucidates the regulatory mechanisms of the complement system and demonstrates how the bio-pathway machinery maintains the balance between activation and inhibition ., The insights we have gained could contribute to the development of therapies targeting the complement system .
| The complement system , which is the frontline immune defense , constitutes proteins that flow freely in the blood ., It quickly detects invading microbes and alerts the host by sending signals into immune responsive cells to eliminate the hostile substances ., Inadequate or excessive complement activities harm the host and may lead to immune-related diseases ., Thus , it is crucial to understand how the host boosts the complement activity to protect itself and simultaneously establishes tight surveillance to attain homeostasis ., Towards this goal , we developed a detailed computational model of the human complement system ., To overcome the challenges resulting from the large model size , we applied probabilistic approximation and inference techniques to train the model on experimental data and explored the key network features of the model ., Our model-based study highlights the importance of infection-mediated microenvironmental perturbations , which alter the pH and calcium levels ., It also reveals that the inhibitor , C4BP induces differential inhibition on the classical and lectin complement pathways and acts mainly by facilitating the decay of the C3 convertase ., These predictions were validated empirically ., Thus , our results help to elucidate the regulatory mechanisms of the complement system and potentially contribute to the development of complement-based immunomodulation therapies . | computational biology/systems biology, immunology/innate immunity | null |
2,176 | journal.pntd.0007739 | 2,019 | Rabies-induced behavioural changes are key to rabies persistence in dog populations: Investigation using a network-based model | Canine rabies is an ancient disease that has persisted in dog populations for millennia–well before urbanisation 1 ., Increased understanding of rabies spread in communities with relatively small populations of dogs–such as those in rural and remote areas–could give insights about rabies persistence in non-urban areas , as well as inform prevention and control strategies in such regions ., Rabies virus is neurotropic and clinical manifestations of canine rabies can be broadly classified as the dumb form ( characterised by progressive paralysis ) and the furious form ( characterised by agitated and aggressive behaviour; 2–4 ) ., Although the mechanisms of rabies-induced behavioural signs are poorly understood 5 , pathogen-influenced changes in host behaviour can optimise pathogen survival or transmission 6 ., We hypothesise that rabies-induced behavioural changes promote rabies transmission in dog populations by influencing social network structure to increase the probability of effective contact ., If so , this would enable rabies to spread in rural and remote regions ., Since 2008 , rabies has spread to previously free areas of southeast Asia ., Islands in the eastern archipelago of Indonesia , as well as Malaysia are now infected 7–10 ., Much of this regional spread of canine rabies has occurred in rural and remote areas ., Oceania is one of the few regions in the world in which all countries are rabies free ., Recent risk assessments demonstrate that Western Province , Papua New Guinea ( PNG ) and northern Australia , are at relatively high risk of a rabies incursion 11 , 12 ., Dogs in communities in these regions are owned and roam freely ., Population estimates in such communities are often low; for example , median 41 dogs ( range 10–127 ) in Torres Strait communities ( pers comm: annual surveys conducted by Queensland Health , and Brookes et al . 13 ) and median 100 dogs ( range 30–1000 ) in Western Province Treaty Villages ( pers comm: annual surveys conducted by the Australian Commonwealth Department of Agriculture ) ., Canine rabies might have a low probability of maintenance in domestic dogs in these communities due to their small population sizes , but if continued transmission occurs–particularly over a long duration–then spread to other communities or regional centres and regional endemicity might occur ., GPS telemetry data from small populations of dogs ( < 50 dogs ) in the Torres Strait have recently been collected 13 ., Such data has been used to describe contact heterogeneity in animal populations , and has been used in models to provide insights about disease spread and potential control strategies 14–16 ., The effect of contact heterogeneity on disease spread is well-researched and models can provide useful insights about disease control strategies in heterogeneously mixing populations 17–19 ., Most recently in the context of rabies , Laager et al . 20 developed a network-based model of rabies spread using GPS telemetry data from dogs in urban N’Djamena , Chad ., Other models of rabies-spread in which parameters that describe contact heterogeneity were derived from telemetry data include canine 21 and raccoon models 22 , 23 ., Patterns of contacts are likely to be altered by the behavioural effects of clinical rabies ., Although Hirsch et al . 
22 demonstrated that seasonal patterns of rabies incidence in raccoons could be explained by changes in social structure due to normal seasonal behavioural change of the hosts , the influence of rabies-induced behavioural changes on social structure has neither been researched nor explicitly incorporated in simulation models in any species ., Here , our objective was to investigate the probability , size and duration of rabies outbreaks and the influence of rabies-induced behavioural changes on rabies persistence in small populations of free-roaming dogs , such as those found in rural communities in PNG and northern Australia ., We also investigated the effect of pre-emptive vaccination on rabies spread in such populations ., We developed an agent-based , stochastic , mechanistic model to simulate social networks of free-roaming domestic dogs and the subsequent transmission of rabies between individual dogs within these networks following the latent infection of a single , randomly-assigned dog ( Fig 1 ) ., The structure of the social networks was based on three empirically-derived networks of spatio-temporal associations between free-roaming domestic dogs in three Torres Strait Island communities ( Table 1 ) ; Kubin , Warraber and Saibai 13 ., The progression of rabies infection in a susceptible dog was simulated in daily time-steps and followed an SEI_1I_2R process ( rabies infection status : susceptible S , latent E , pre-clinical infectious I_1 , clinical I_2 and dead R ) ., Rabies virus transmission from an individual infectious ( j ) to an individual susceptible ( i ) dog is described by Eq 1 , in which the daily probability of contact between a pair of such dogs was calculated based on the edge-weight between the pair ( E_ij ) , which is the proportion of a 24 hour period during which that pair of dogs is spatio-temporally associated ( in the event of no network connection , E_ij = 0 ) ., Transmission of rabies further depends on the probability of a bite ( P_j ) by the infected dog conditional on its infection status ( I_1 or I_2 ) , and the probability of subsequent infection of the susceptible dog ( T_j ) ., Generation of the social network and estimation of the parameters associated with the dog population dynamics and rabies epidemiology are described below , and parameter values are shown in Table 2 ., Maximum iteration duration was 3 years ., Model outputs included distributions of the predicted duration of outbreaks ( defined as the number of days from the introduction of one latently infected dog to the day on which infected dogs were no longer present ) , the total number of rabies-infected dogs during the outbreak and the effective reproductive number , Re , during the first month following incursion ( mean number of dogs infected by all dogs that were infected during the first month ) ., Initially , rabies was simulated in each of the three community networks and the predicted outputs from each model were compared with each other ., Statistical tests were used to determine the number of iterations required to achieve convergence of output summary statistics ( described below ) ., Global sensitivity analysis using the Sobol’ method ( described below ) was used to investigate the relative influence of all input parameters on model outputs ., To observe the influence of rabies-induced behavioural changes , model outputs from simulations of rabies spread in each of the three community networks with and without parameters associated with rabies-induced behavioural changes were compared .
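Eq 1 itself did not survive the extraction of this text . From the definitions given above , a plausible reconstruction of the daily probability that infectious dog $j$ transmits rabies to susceptible dog $i$ is

$$ P_{ij} \;=\; E_{ij}\,P_j\,T_j\,, \qquad E_{ij} = 0 \ \text{if } i \text{ and } j \text{ share no network connection,} $$

i.e. the product of the pairwise contact probability , the status-dependent bite probability and the infection probability . This form is inferred from the surrounding prose rather than quoted from the published equation .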
Finally , the impact of pre-emptive vaccination was investigated by randomly assigning rabies immunity to a proportion of the population ( 10–90%; tested in 10% increments ) prior to incursion of the rabies-infected dog in each iteration ., Prior to each iteration , a modified Watts Strogatz algorithm generated a connected , undirected small-world network of 50–90 dogs with network characteristics that reflected the empirical networks of the dog populations in Saibai , Warraber and Kubin communities , as follows 13 , 24–26 ., Consistent with the terminology used in our previous description of these networks 13 , dogs are nodes , connections between dogs are edges , the proportion of spatio-temporal association ( within 5m for at least 30s ) between a pair of connected dogs in each 24 hour period is represented as edge-weight , and degree refers to the number of network connections for an individual dog ., Re-wiring refers to re-assignment of an individual dog's connections in the network ., A regular ring lattice was constructed with N nodes , in which N was randomly selected from a uniform distribution of 50–90 ., Each node ( n_i ) was assigned a degree K_i , which was randomly selected from the respective empirical degree distribution of the community represented by the simulation ., Each node was connected to K_i/2 ( rounded to the nearest integer ) nearest neighbours in the ring lattice in a forward direction , then to successive nearest neighbours in a backward direction until K_i was achieved ., Existing edges were then re-wired ( the edge was disconnected from the nearest neighbour and reconnected to a randomly selected node ) following a Bernoulli process ( probability ρ ) to achieve the average shortest path-length expected in an equivalent-sized Erdős–Rényi graph in which nodes are connected randomly , whilst maintaining the empirical degree distribution of the community represented by the simulation 27 ., Edges were then weighted according to the mean expected duration of association between pairs of dogs as a proportion of daily time , with weights randomly selected from the respective empirical edge-weight distribution of the community represented by the simulation ., Parameters that describe the empirical networks and their derivation are presented in Brookes et al . 13 .
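A minimal sketch of this modified generator is given below . The choice of networkx and numpy is ours ; `degree_samples` , `weight_samples` and the rewiring probability `rho` are assumed inputs ( empirical samples and a tuned value , respectively ) , and , unlike the published algorithm , this simplified version does not strictly preserve the empirical degree distribution during rewiring .

```python
import networkx as nx
import numpy as np

def simulate_network(degree_samples, weight_samples, rho, rng):
    n = int(rng.integers(50, 91))                 # N ~ Uniform(50, 90)
    degrees = rng.choice(degree_samples, size=n)  # empirical degree distribution
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):                            # ring lattice with K_i neighbours
        fwd = int(round(degrees[i] / 2))
        for step in range(1, fwd + 1):            # forward nearest neighbours
            g.add_edge(i, (i + step) % n)
        for step in range(1, int(degrees[i]) - fwd + 1):  # backward until K_i reached
            g.add_edge(i, (i - step) % n)
    for u, v in list(g.edges()):                  # Bernoulli(rho) rewiring
        if rng.random() < rho:
            candidates = [w for w in range(n) if w != u and not g.has_edge(u, w)]
            if candidates:
                g.remove_edge(u, v)
                g.add_edge(u, int(rng.choice(candidates)))
    for u, v in g.edges():                        # empirical edge weights
        g[u][v]["weight"] = float(rng.choice(weight_samples))
    return g

# Example (rho value hypothetical):
# g = simulate_network(deg_obs, w_obs, rho=0.2, rng=np.random.default_rng(1))
```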
Networks simulated with the modified Watts Strogatz algorithm were tested for similarity to the empirical networks prior to use of the algorithm in the model ( Table 1 ) ., Degree and edge-weight distributions were compared to those of the empirical networks using the Mann-Whitney U and Kolmogorov–Smirnov tests , to assess similarity of the median and shape of the simulated distributions , respectively ., Mean small-world indices were calculated according to Eq 2 , S = ( C_s / C_r ) / ( L_s / L_r ) , in which C is the global clustering coefficient , L is the average shortest path length , s denotes a simulated network and r denotes an Erdős–Rényi random network of equivalent mean degree 28 ., A small-world index >1 indicates local clustering , consistent with the empirical network structures ., Network similarity tests were conducted on 1000 simulated networks for each community ., Parameters that were used to describe the dog populations and rabies epidemiology in the model are listed in Table 2 ., Variance-based GSA using the Saltelli method was used to determine which parameters most influenced the variance of outputs and was implemented in this study using the SALib module in Python 41 ., The sequence of events was : parameter sampling to create a matrix of parameter sets for each iteration ( parameter ranges are listed in Table 2 ) , simulation using the parameter sets to obtain model output ( duration of outbreaks , the total number of rabies-infected dogs and the mean monthly effective reproductive number , Re ) , and estimation of sensitivity indices ( SIs ) to apportion output variance to each parameter ., Mean monthly Re was used as the output of interest in relation to R for the Sobol’ analysis , to remove the strong influence of incubation period on Re in the first month ., To separate the influence of stochasticity from the variation associated with each parameter , the random seed was also included in the Sobol’ analysis 42 ., The seed value for each iteration was selected from the parameter set ( uniform distribution , 1–100 ) ., First-order and total-effect SIs were estimated for each parameter , representing predicted output variance attributable to each parameter without and with considering interactions with other inputs , respectively ., SIs were normalised by total output variance and plotted as centipede plots with intervals representing SI variance ., Model output variance is most sensitive to inputs with the highest indices ., The number of iterations required to achieve sufficient convergence of summary measures was estimated using the following method ., Key output measures–the number of rabid dogs and the duration of outbreaks ( days ) –were recorded from 9 , 999 iterations of the model divided equally between all three communities ., Ten sets of simulated outputs of an increasing number of iterations ( 1–5000 ) were sampled; for example , ten sets of outputs from 1 iteration , 10 sets of outputs from 2 iterations , 10 sets of outputs from 3 iterations , and so on ., The mean number of rabies-infected dogs and outbreak duration was calculated for the samples in each set ., The coefficient of variation ( CV; standard deviation/mean ) of these sample means was then calculated for each set ., With increasing iterations , the variation in sample mean between sets decreases and the CV approaches zero ., The number of iterations was considered sufficient to indicate model output stability when 95% of the CVs for the previous 100 iteration sizes were < 0 . 025 .
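The Saltelli sampling and Sobol' index estimation described above can be reproduced with SALib along the following lines . The three-variable problem and the stand-in model are placeholders for the full parameter set of Table 2 ; only the seed range ( uniform , 1–100 ) is taken directly from the text , and the other bounds are invented for illustration .

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,  # placeholder subset of the model's parameters
    "names": ["incubation_period", "bite_prob_furious", "seed"],
    "bounds": [[7.0, 120.0], [0.1, 0.9], [1.0, 100.0]],  # first two assumed
}
X = saltelli.sample(problem, 1024)   # matrix of parameter sets, one row per run

def run_model(theta):
    # Stand-in for one stochastic outbreak simulation returning,
    # e.g., the mean monthly effective reproductive number Re
    return theta[0] * theta[1] + 0.01 * theta[2]

Y = np.array([run_model(theta) for theta in X])
Si = sobol.analyze(problem, Y)       # Si["S1"]: first-order, Si["ST"]: total-effect
```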
Each community simulation comprised 10 , 000 iterations ( more than sufficient to achieve convergence of summary output statistics without limiting computational time; S1 Fig ) ., Predicted outputs are shown in Table 3 ., The proportion of iterations in which a second dog became infected was greater than 50% in Kubin and Warraber communities , and 43% in Saibai ., In these iterations , predicted median and upper 95% duration of outbreaks were longest in Warraber and shortest in Saibai ( median : 140 and 78 days; 95% upper range 473 and 360 days , respectively ) ., In the Warraber simulations , 0 . 001% of iterations reached the model duration limit of 1095 days ., The number of infected dogs was reflected in the Re estimates in the first month : 1 . 73 ( 95% range 0–6 . 0 ) , 2 . 50 ( 95% range 1 . 0–7 . 0 ) and 3 . 23 ( 95% range 1 . 0–8 . 0 ) in Saibai , Kubin and Warraber communities , respectively ., The rate of cases during these outbreaks was 2 . 4 cases/month ( 95% range 0 . 6–7 . 6 ) , 2 . 0 cases/month ( 95% range 0 . 4–6 . 5 ) and 2 . 6 cases/month ( 95% range 0 . 5–8 . 0 ) in Saibai , Kubin and Warraber communities , respectively ., Fig 2 shows plots of the Sobol’ total-effect sensitivity indices ( SI ) of parameters for outbreak duration , number of infected dogs and the monthly effective reproductive ratio Re ., S2 Fig shows Sobol’ first-order effect SIs , which are low relative to the total-effect SIs for all outcomes ., This indicates that interactions between parameters are highly influential on output variance in this model and therefore , we focus on the influence of parameters through their total effects ., As expected , the total-effect SI of the seed was highest–it was associated with > 50% of the variance for all outcomes–because it determines the random value selected in the Bernoulli processes that provide stochasticity to all parameters ., The influence of the seed is not presented further in these results ., Incubation period , the size of the dog population and the degree of connectivity were highly influential on outbreak duration ( total-effect SI 0 . 51 , 0 . 55 and 0 . 51 , respectively ) ., All parameters were influential on the predicted number of rabid dogs ( total-effect SIs > 0 . 1 ) ., The size of the dog population , incubation and clinical periods , and degree had the greatest influence ( total-effect SIs > 0 . 5 ) ., Dog population size and degree of association were most influential on predicted mean monthly Re ( total-effect SI 0 . 74 and 0 . 40 , respectively ) ., Of the community-specific parameters ( population size , degree and edge-weight distributions , birth and death rates , and initial probability of re-wiring ) , dog population size and the degree consistently had the greatest influence on each predicted output’s variance ., Of network parameters other than degree , the probability of wandering ( ‘re-wiring’ ) during the clinical phase ( furious form ) was markedly less influential on predicted mean monthly Re than initial ‘re-wiring’ ( total-effect SIs 0 . 051 and 0 . 19 , respectively ) or either parameter associated with spatio-temporal association ( edge-weight; both total-effect SIs > 0 . 15 ) ., The influence of the increased probability of a bite by a dog in the clinical period ( furious form ) on predicted mean monthly Re was greater compared to the pre-clinical or clinical ( dumb-form ) bite probability ( total-effect SI 0 . 19 relative to 0 .
11 ) ., The size of the relative influence of these parameters on outbreak duration or number of rabies-infected dogs was reversed and less marked ., Birth and death rate consistently had a moderate influence on all outputs ( total-effect SI 0 . 20–0 . 24 ) ., The proportion of outbreaks in which > 1 dog became infected , and the duration , number of infected dogs and Re in the first month following incursion in simulations without all or with combinations of parameters for rabies-induced behavioural changes , are shown in Fig 3 ., Outputs from the simulation in each community with all parameters ( increased bite probability furious form , increased spatio-temporal association edge-weight; dumb form , wandering ‘re-wiring’; furious form ) are included for comparison ., The simulation without parameters for rabies-induced behavioural changes ( Fig 3; ‘None’ ) propagated following < 10% of incursions in all communities ., In 95% of these predicted outbreaks , rabies spread to ≤ 3 other dogs during a median of ≤ 60 days ., This was reflected in the low Re estimate in the first month of these incursions ( ≤ 0 . 75 ) ., Inclusion of one parameter associated with rabies-induced behavioural changes was still insufficient for sustained predicted outbreaks ., Overall , < 20% incursions in these simulations resulted in rabies spread to ≤ 6 other dogs over a median duration of ≤ 56 days ., Re in the first month of these incursions indicated that increased spatio-temporal association , followed by an increased probability of bite were more likely to result in rabies spread than ‘re-wiring’ to increase network contacts in these simulations ., This pattern was reflected in the upper 95% range of dogs infected , which was greatest when increased spatio-temporal association was included , and least when ‘re-wiring’ was included ., When combinations of rabies-induced behavioural changes were included , increased bite probability and spatio-temporal association together were sufficient to achieve similar proportions of predicted outbreaks in which > 1 dog was infected ( 40–60% of incursions ) as the simulation with all parameters included ( Fig 3 ‘Full’ ) ., Predicted impacts and Re in the first month following incursion were also similar ., Re was greater than the sum of Re from scenarios with increased bite probability and spatio-temporal association alone ., With combined spatio-temporal association and ‘re-wiring’ , the 95% range of the number of infected dogs was greater than simulations in which only one parameter was included ( up to 11 other dogs ) but Re in the first month following incursion was close to 1 in all communities , reflecting overall limited rabies spread ., In the combined increased bite probability and ‘re-wiring’ simulation , propagation did not occur to > 4 dogs , reflecting the Re of ≤ 0 . 
8 ., Due to the similarity between median outputs from each community and greatest variation in outputs from Warraber , only vaccination simulations using the Warraber network were included in this section ., Initially , all parameters were included in these vaccination simulations ( births and deaths were included ) ., Vaccination simulations were then run without population turnover ( births and deaths were excluded ) ., Fig 4 shows all outputs ., In all simulations , the proportion of outbreaks in which > 1 dog was infected fell as the proportion of pre-emptively vaccinated dogs increased–a greater reduction was observed in the simulations without population turnover–and was < 40% when at least 70% of the population were vaccinated ., The proportion of outbreaks in which more than one dog was infected was still 17% and 12% when 90% of the population were vaccinated in simulations with and without births and deaths , respectively ., In outbreaks in which > 1 dog was infected , the duration of outbreaks decreased as vaccination proportion increased ( although the 95% range was always predicted > 195 days in all simulations ) ., The median number of infected dogs was ≤ 3 once at least 60% of dogs were vaccinated in all simulations , but the 95% range was not consistently < 10 dogs until 80% and 70% of the population was vaccinated in simulations with and without births and deaths , respectively ., The median case rate was 1 . 6 cases/month ( 95% range 0 . 4–4 . 6 cases/month ) when 70% of the population was vaccinated in simulations with births and deaths , with a median duration of 68 days ( 95% range 16–276 days ) ., In simulations without births and deaths , the case rate was 1 . 4 cases/month ( 95% range 0 . 4–4 . 3 cases/month ) when 70% of the population was vaccinated , with a median duration of 64 days ( 95% range 16–248 days ) ., Re estimated in the first month following incursion reflected these outputs ., At ≥ 70% pre-emptive vaccination , Re was approximately 1 or less when births and deaths were excluded ., However , in the simulations with births and deaths Re did not fall below 1 until > 80% of the population were pre-emptively vaccinated ., Our study is unique in that we modelled rabies spread in small populations of free-roaming dogs and incorporated the effect of rabies-induced behavioural changes ., Key findings included the long duration of rabies persistence at low incidence in these populations , and the potential for outbreaks even with high levels of pre-emptive vaccination ., This has implications for canine rabies surveillance , elimination and incursion prevention strategies , not only in rural areas with small communities , but also for elimination programs in urban areas ., We discuss our findings and their implications below ., Without behavioural change , we could not achieve rabies propagation in the social networks in the current study; disruption of social contacts appears to be key for rabies maintenance in small populations of dogs ., Social network studies have shown that dogs form contact-dense clusters 13 , 20 ., Increased bite probability and spatio-temporal association between contacts ( edge-weight in the model ) were most influential on rabies propagation in our model , but it is possible that ‘re-wiring’ of dogs is also influential in larger populations in which there is a greater probability that a dog would ‘re-wire’ to a completely new set of dogs in another cluster , thus increasing total contacts and enhancing spread ( degree was also found 
to be highly influential on rabies spread ) ., Ranges for these parameters were wide to reflect uncertainty , which in turn reflects the difficulty of acquiring accurate field information about the behaviour of rabies-infected dogs ., It is not ethical to allow dogs that have been identified in the clinical stages of rabies infection to continue to pose a threat to other animals and humans so that field data about contact behaviour can be collected ., However , whilst these parameters were important for spread to occur , their wide range was not as influential on output variance relative to other parameters for which data were more certain ., In the model , limiting types of behavioural change to each rabies form was a simplification that allowed us to differentiate the effects of types of network disruption ., In reality , the association between rabies forms and behavioural changes is likely to be less distinct 33 and thus , rabies spread in small populations could be further enhanced if dogs display a range of behavioural changes ., Incubation period strongly influenced outbreak size and duration , and together with rabies-induced behavioural changes that enabled transmission , is likely to have resulted in the ‘slow-burn’ style of outbreaks ( low incidence over long duration ) that were predicted by this model ., Within iterations in which propagation occurred , case rate was generally < 3 cases/month without vaccination , and 1 . 5 cases/month when 70% of dogs were pre-emptively vaccinated ., At such low incidence , we believe that canine rabies is likely to have a low probability of detection in communities where there is high population turnover and aggressive free-roaming dogs can be normal 29 , 43 ., In these populations , dog deaths and fights between dogs are common ., Undetected , slow-burn outbreaks in previously free regions are a great risk to humans because rabies awareness is likely to be low ., They also provide more opportunity for latently infected dogs to travel between communities either by themselves , or with people , which could result in regional endemicity ., Townsend et al . ( 34 ) suggest a case detection rate of at least 5% ( preferably 10% ) is required to assess rabies freedom following control measures; surveillance capacity in rabies-free regions such as Oceania should be evaluated and enhanced if required ., Pre-emptive vaccination is another option to protect rabies-free regions; for example , an ‘immune-belt’ , an area in which dogs must be vaccinated , was established in the 1950s in northern Malaysia along the Thai border 44 ., The World Health Organization recommends repeated mass parenteral vaccination of 70% of dog populations to achieve herd immunity 45 ., Whilst the origin of this recommendation is unclear , it has been accepted for decades–for example , legislation allowed free-roaming of dogs in designated areas if at least 70% of the dog population was vaccinated in New York State in the 1940s 46–and previous modelling studies of pre-emptive vaccination support this threshold 20 , 47–49 ., We found that vaccination with 70% coverage is expected to result in outbreaks that are self-limiting ., Therefore , if inter-community dog movements are unlikely , the probability of regional spread is low ., However , given predicted upper 95% ranges of 8–14 rabies-infected dogs for at least 8 months at 70% coverage , we recommend at least 90% coverage to reduce the effective monthly reproductive ratio < 1 , limit human exposure , and provide a more certain
barrier to regional spread , particularly in regions where dogs are socially and culturally connected to people and consequently , movement of dogs is likely ., In places in which movements are not easily restricted–such as urban centres in which dog populations are contiguous–our study indicates that comprehensive vaccination coverage is crucial and that reducing population turnover ( for example , by increasing veterinary care to improve dog health ) might not have a substantial effect on reducing the vaccination coverage required ., The political and operational challenges of rabies elimination are well-documented 50 , and lack of elimination or subsequent re-emergence is attributed to insufficient vaccination coverage ( < 70% dog population overall , patchy coverage or insufficient duration 49 , 51 , 52 ) and re-introduction of infected dogs 48 , 53 ., Pockets of unvaccinated dogs within well-vaccinated , urban areas could maintain rabies at a low incidence sufficient to re-introduce rabies as surrounding herd immunity wanes ., It is also possible that with comprehensive , homogenous 70% coverage , a low incidence of rabies–such as appears possible at 70% vaccination in our study–is sufficient for endemicity in larger populations but is practically undetectable , giving the appearance of elimination ., A higher proportion of vaccinated dogs might be required for elimination , and further modelling studies incorporating behavioural change in larger empirical networks are required to test this hypothesis ., Validation of a canine-rabies spread model is challenging , not only because variation between model outputs and observed data can arise from many sources , but because rabies surveillance is passive and case ascertainment is notoriously challenging 52 , thus limiting the fitting of mathematical models and undermining comparison of predicted outputs to observed data ., Mechanistic models are therefore a valuable tool to describe possible spread and develop hypotheses about rabies persistence , surveillance and control by using plausible , generalisable disease data ( in the current study , the epidemiology of rabies ) and context specific , ecological data ( in the current study , empirical network data from small populations of dogs to provide contact rates ) ., Although opportunity for validation is limited because outbreak data from small populations of dogs is scarce ( and non-existent in our study area ) , observed patterns of disease spread ( low incidence and long duration of outbreaks ) are consistent with those predicted by the current study 37 , 54 ., Global sensitivity analysis indicated that population size ( a parameter of reasonable certainty ) and degree of connectivity had the greatest influence on duration , size and initial spread; this makes intuitive sense , and as expected , the largest and longest outbreaks were predicted in the Warraber network which had the highest median degree ., Of the parameters that most influenced model outputs , parameterisation of the degree of connectivity was most likely to influence generalisability of our study findings because data are limited and social connectivity might vary between populations of free-roaming dogs ., However , a study in N’Djaména , Chad , found that the average degree was 9 and 15 ( maximum 20 and 64 , respectively ) in two populations of size 272 and 237 dogs , respectively 20 , which is not dissimilar to the degree distribution of the small Torres Strait dog populations ., Reassuringly , input parameters 
about which there was more uncertainty–for example , bite probabilities–were less influential on variation in outputs ., By exploring rabies epidemiology in small populations of free-roaming dogs–in which contact heterogeneity was determined in part by their social networks and in part by the disease–our study provides insights into how rabies-induced behavioural changes are important for endemicity of rabies in rural and remote areas ., We found that rabies induced behavioural change is crucial for the disease to spread in these populations and enables a low incidence of rabies cases over a long duration ., Without movement restrictions , we predict that substantially greater than the recommended 70% vaccination coverage is required to prevent rabies emergence in currently free areas . | Introduction, Methods, Results, Discussion | Canine rabies was endemic pre-urbanisation , yet little is known about how it persists in small populations of dogs typically seen in rural and remote regions ., By simulating rabies outbreaks in such populations ( 50–90 dogs ) using a network-based model , our objective was to determine if rabies-induced behavioural changes influence disease persistence ., Behavioural changes–increased bite frequency and increased number or duration of contacts ( disease-induced roaming or paralysis , respectively ) –were found to be essential for disease propagation ., Spread occurred in approximately 50% of model simulations and in these , very low case rates ( 2 . 0–2 . 6 cases/month ) over long durations ( 95% range 20–473 days ) were observed ., Consequently , disease detection is a challenge , risking human infection and spread to other communities via dog movements ., Even with 70% pre-emptive vaccination , spread occurred in >30% of model simulations ( in these , median case rate was 1 . 5/month with 95% range of 15–275 days duration ) ., We conclude that the social disruption caused by rabies-induced behavioural change is the key to explaining how rabies persists in small populations of dogs ., Results suggest that vaccination of substantially greater than the recommended 70% of dog populations is required to prevent rabies emergence in currently free rural areas . | We investigated rabies spread in populations of 50–90 dogs using a simulation model in which dogs’ contacts were based on the social networks of three populations of free-roaming domestic dogs in the Torres Strait , Australia ., Rabies spread would not occur unless we included rabies-induced behavioural changes ( increased bite frequency and either roaming or paralysis that increased the number or duration of contacts , respectively ) ., The model predicted very low case rates over long durations which would make detection challenging in regions in which there is already a high population turnover , increasing the risk of human infection and spread to other communities via dog movements ., Spread also occurred in >30% of model simulations at low incidence for up to 200 days when 70% of the population was pre-emptively vaccinated , suggesting that higher vaccination coverage will be required to prevent rabies emergence in currently free rural areas , especially those in which dogs readily travel between communities . 
| animal types, medicine and health sciences, immunology, tropical diseases, sociology, vertebrates, social sciences, pets and companion animals, dogs, animals, mammals, simulation and modeling, preventive medicine, rabies, animal behavior, network analysis, social networks, neglected tropical diseases, vaccination and immunization, zoology, research and analysis methods, public and occupational health, infectious diseases, computer and information sciences, zoonoses, behavior, epidemiology, psychology, eukaryota, biology and life sciences, viral diseases, amniotes, organisms | null |
807 | journal.pcbi.1003684 | 2,014 | Modeling Higher-Order Correlations within Cortical Microcolumns | Electrophysiology is rapidly moving towards high density recording techniques capable of capturing the simultaneous activity of large populations of neurons ., This raises the challenge of understanding how networks encode and process information in ways that go beyond tuning properties or feedforward receptive field models ., Modeling the distribution of states in a network provides a way to discover communication patterns between neurons or functional groupings such as cell assemblies which may exhibit a more direct relation to stimulus or behavioral variables ., The Ising model , originally developed in the 1920s to describe magnetic interactions 1 , has been used to statistically characterize electrophysiological data , particularly in the retina 2 , and more recently for cortical recordings 3 , 4 ., This model treats spikes from a population of neurons binned in time as binary vectors and captures dependencies between cells with the maximum entropy distribution for pairwise dependencies ., This has been shown to provide a good model for small groups of cells in the retina 5 , though it is unable to capture dependencies higher than second-order ., In this work , we apply maximum entropy models to neural population recordings from the visual cortex ., Cortical networks have proven more challenging to model than the retina : the magnitude and importance of pairwise correlations between cortical cells is controversial 6 , 7 and higher-order correlations , i . e . correlations which cannot be captured by a pairwise maximum entropy model , play a more important role 8–10 ., One of the challenges with current recording technologies is that we can record simultaneously only a tiny fraction of the cells that make up a cortical circuit ., Sparse sampling together with the complexity of the circuit mean that the majority of a cell's input will be from cells outside the recorded population ., In adult cat visual cortex , direct synaptic connections have been reported to occur between 11%–30% of nearby pairs of excitatory neurons in layer IV 11 , while a larger fraction of cell pairs show “polysynaptic” couplings 12 , defined by a broad peak in the cross-correlation between two cells ., This type of coupling can be due to common inputs ( either from a different cortical area or lateral connections ) or a chain of monosynaptic connections ., A combination of these is believed to give rise to most of the statistical interactions between recorded pairs of cells ., The Ising model , which assumes only pairwise couplings , is well suited to model direct ( and symmetric ) synaptic coupling , but cannot capture interactions involving more than two cells ., We propose a new approach that addresses both incomplete sampling and common inputs from other cell assemblies , by extending the Ising model with a layer of hidden units or latent variables ., The resulting model is a semi-Restricted Boltzmann Machine ( sRBM ) , which combines pairwise connections between visible units with an additional set of connections to hidden units ., Estimating the parameters of energy-based models , to which Ising models and Boltzmann machines belong , is computationally hard because these models cannot be normalized in closed form ., For both Ising models and Boltzmann machines with hidden units , the normalization constant is intractable to compute , consisting of a sum over the exponential number of states of the system .
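To fix notation ( the symbols here are ours , written in the conventions usual for such models , not quoted from the paper ) , the Ising model over binary spike words $\mathbf{x} \in \{0,1\}^N$ and its intractable normalization constant are

$$ P(\mathbf{x}) \;=\; \frac{1}{Z}\exp\Big(\sum_i b_i x_i + \sum_{i<j} J_{ij} x_i x_j\Big) , \qquad Z \;=\; \sum_{\mathbf{x} \in \{0,1\}^N} \exp\Big(\sum_i b_i x_i + \sum_{i<j} J_{ij} x_i x_j\Big) , $$

where the sum defining $Z$ runs over all $2^N$ states of the population .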
., This makes exact maximum likelihood estimation impossible for all but the smallest systems and necessitates approximate or computationally expensive estimation methods ., In this work , we use Minimum Probability Flow ( MPF 13 , 14 , in the context of neural decoding see 4 , 15 ) to estimate parameters efficiently without computing the intractable partition function ., It provides a straightforward way to estimate the parameters of Ising models and Boltzmann machines for high-dimensional data ., Another challenge in using energy-based models is the evaluation of their likelihood after fitting to the data , which is again made difficult due to the partition function ., To compute probabilities and compare the likelihood of different models , annealed importance sampling ( AIS ) 16 was used to estimate the partition function ., Combining these two methods for model estimation and evaluation , we show that with hidden units , Boltzmann machines can capture the distribution of states in a microcolumn of cat visual cortex significantly better than an Ising model without hidden units ., The higher-order structure discovered by the model is spatially organized and specific to cortical layers , indicating that common input or recurrent connectivity within individual layers of a microcolumn are the dominant source of correlations ., Applied to spatiotemporal patterns of activity , the model captures temporal structure in addition to dependencies across different cells , allowing us to predict spiking activity based on the history of the network ., We estimated Ising , RBM and sRBM models for populations of cortical cells simultaneously recorded across all cortical layers in a microcolumn of cat V1 in response to long , continuous natural movies presented at a frame rate of 150 Hz ., Code for the model estimation is available for download at http://github . com/ursk/srbm ., Fig . 1a ) shows an example frame from one of the movies ., Model parameters were estimated using MPF with an regularization penalty on the model parameters to prevent overfitting ., To compute and compare likelihoods , the models were normalized using AIS ., Here we present data from two animals , one with 22 single units ( B4 ) , another with 36 units ( T6 ) , as well as a multiunit recording with 26 units ( B4M ) ., Fig . 1b ) shows spiking data from session B4 in 20 ms bins , with black squares indicating a spike in a bin ., Spatiotemporal datasets were constructed by concatenating spikes from consecutive time bins ., Pairs of cells show weak positive correlations , shown in Fig . 1c ) , and noise correlations computed from 60 repetitions of a 30s stimulus are similarly small and positive ., For all recordings , the population was verified to be visually responsive and the majority of cells were orientation selective simple or complex cells ., As recordings were performed from a single cortical column , receptive fields shared the same retinotopic location and have similar orientation selectivity , differing mostly in size , spatial frequency and phase selectivity ., See 17 for a receptive field analysis performed on the same raw data ., The estimated model parameters for the three different types of models ( Ising , RBM and sRBM ) are shown in Fig . 
2 for session B4 ., The sparseness penalty , chosen to optimize likelihood on a validation dataset , results in many of the parameters being zero ., For the Ising model, ( a ) we show the coupling as a matrix plot , with lines indicating anatomical layer boundaries ., The diagonal contains the bias terms , which are negative since all cells are off the majority of the time ., The matrix has many small positive weights that encourage positive pairwise correlations ., In, ( b ) we show the hidden units of the RBM as individual bar plots , with the bars representing connection strengths to visible units ., The topmost bar corresponds to the hidden bias of the unit , and hidden units are ordered from highest to lowest variance ., The units are highly selective in connectivity: The first unit almost exclusively connects to cells in the deep ( granular and subgranular ) cortical layers ., The second unit captures correlations between cells in the superficial ( supergranular ) layers ., The correlations are of high order , with 10 and more cells receiving input from a hidden unit ., The remaining units connect fewer cells , but still tend to be location-specific ., Only the hidden units that have non-zero couplings are shown ., Additional hidden units are turned off by the sparseness penalty , which was chosen to maximize likelihood on the cross-validation dataset ., The interpretation of hidden units is quite similar to the pairwise terms of the Ising model: positive coupling to a group of visible units encourages these units to become active simultaneously , as the energy of the system is lowered if both the hidden unit and the cells it connects to are active ., Thus the hidden units become switched on when cells they connect to are firing ( activation of hidden units not shown ) ., The sRBM combines both pairwise and hidden connections and hence is visualized with a pairwise coupling matrix and bar plots for hidden units ., With the larger number of parameters , the best model is even more sparse in the number of nonzero parameters ., The remaining pairwise terms predominantly encode negative interactions , and much of the positive coupling has been explained away by the hidden units ., These give rise to strong positive couplings within either superficial ( II/III ) or intermediate ( IV ) and deep ( V/VI ) layers , which explain the majority of structure in the data ., The more succinct explanation for dependencies between recorded neurons is via connections to shared hidden units , rather than direct couplings between visible units ., The RBM and sRBM in this comparison were both estimated with 22 hidden units , but we show only units that did not turn off entirely due to the sparseness penalty ., In this example , a sparseness penalty of was found to be optimal for all three models ., In order to ascertain to what degree the stimulus driven component of activity accounts for the learned higher-order correlations , we augmented the above models with a dynamic bias term that consists of the log of the average instantaneous firing probability of each cell over repeated presentations of the same stimulus ., In the case that all trained parameters were zero , this model would assign a firing probability to all neurons identical to that in the peri-stimulus time histogram ( PSTH ) ., In Fig . 
2d ) the couplings for the Ising model with stimulus terms are shown ., As the pairwise couplings now only capture additional structure beyond correlations explained by the stimulus , they tend to be weaker than in the Ising model without stimulus terms ., In particular the bias terms on the diagonal are almost completely explained away by the dynamic bias ., The same reasoning applies to the RBM with PSTH terms , which is shown in e ) ., Although the couplings are weaker than for the pure RBM , the basic structure remains , with the first two hidden units explaining correlations within superficial and deep groups of cells , respectively ., This shows that the learned coupling structure cannot be explained purely from higher-order stimulus correlations and receptive field overlap ., Even when stimulus-induced correlations are fully accounted for , the correlation structure captured by the RBM remains similar and higher-order correlations are the dominant driver of correlated firing ., For a quantitative comparison between models , we computed normalized likelihoods using Annealed Importance Sampling ( AIS ) to estimate the partition function ., For each model , we generated 500 samples through a chain of annealing steps ., To ensure convergence of the chain , we use a series of chains varying the number of annealing steps and verify that the estimate of the partition function stabilizes to within at least 0 . 02 bits/s ( see Fig . S1 ) ., For models of size 20 and smaller we furthermore computed the partition function exactly to verify the AIS estimate ., Fig . 3a ) shows a comparison of excess log likelihood for the three different models on all three datasets ., Excess log likelihood , which we define as the gain in likelihood over the independent firing rate model , is computed in units of bits/spike for the full population ., Both higher-order models significantly outperform the Ising model in fitting the datasets ., Error bars are standard deviations computed from 10 models with different random subsets of the data used for learning and validation , and different random seeds for the parameters and the AIS sampling runs ., Fig . 3b ) shows the excess log likelihood for the models with stimulus terms ., Due to the additional computational complexity , these models were only estimated for the small B4 data set ., The left of the two bar plots shows that including the stimulus information through the PSTH greatly increases the likelihood; even the PSTH-only model without coupling terms outperforms the Ising and RBM models by about 0 . 7 bits/s ., Including coupling terms still increases the likelihood , which is particularly visible on the right bar plot , which shows the log likelihood gain relative to the PSTH model ., Including higher-order coupling terms still provides a significant gain over the pairwise model , confirming that there are higher-order correlations in the data beyond those induced by the stimulus ., Each of the models was estimated for a range of sparseness parameters bracketing the optimal value using 4-fold cross-validation on a holdout set , and the results are shown for the optimal choice for each model .
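To make the AIS computation concrete , the following minimal sketch estimates the log partition function of a small Ising model over binary states by annealing from the uniform base distribution ., The linear temperature schedule , single-site Gibbs transitions , and all variable names are illustrative assumptions rather than the exact settings used for the reported estimates:

    import numpy as np

    def ising_energy(x, J, h):
        # E(x) = -0.5 * x.J.x - h.x for a binary state vector x in {0,1}^N
        return -0.5 * x @ J @ x - h @ x

    def ais_log_z(J, h, n_samples=500, n_betas=1000, seed=0):
        # Anneal from the zero-coupling base model (log Z0 = N*log 2) to the
        # full model, accumulating log importance weights along the way.
        rng = np.random.default_rng(seed)
        N = len(h)
        betas = np.linspace(0.0, 1.0, n_betas)
        log_w = np.zeros(n_samples)
        x = rng.integers(0, 2, size=(n_samples, N)).astype(float)
        for b0, b1 in zip(betas[:-1], betas[1:]):
            E = np.array([ising_energy(xi, J, h) for xi in x])
            log_w += (b0 - b1) * E  # weight update for the new temperature
            for i in range(N):      # one sweep of Gibbs sampling at beta = b1
                x[:, i] = 1.0
                e1 = np.array([ising_energy(xi, J, h) for xi in x])
                x[:, i] = 0.0
                e0 = np.array([ising_energy(xi, J, h) for xi in x])
                p1 = 1.0 / (1.0 + np.exp(b1 * (e1 - e0)))
                x[:, i] = (rng.random(n_samples) < p1).astype(float)
        return N * np.log(2.0) + np.logaddexp.reduce(log_w) - np.log(n_samples)

The per-flip energy recomputation is transparent but inefficient; for models of size 20 and smaller the result can be checked against exact enumeration of all 2^N states , as described above .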
Additional insight into the relative performance of the models can be gained by comparing model probabilities to empirical probabilities for the various types of patterns ., Fig . 4 shows scatter plots of model probabilities under the different models against pattern frequencies in the data ., Patterns with a single active cell , two simultaneously active cells , etc . are distinguished by different symbols ., As expected from the positive correlations , the independent model ( yellow ) shown in ( a ) consistently overestimates the probabilities of cells being active individually , so these patterns fall above the identity line , while all other patterns are underestimated ., For comparison , the Ising model is shown in the same plot ( blue ) , and does significantly better , indicated by the points moving closer to the identity line ., It still tends to fail in a similar way though , with many of the triplet patterns being underestimated as the model cannot capture triplet correlations ., In ( b ) , this model is directly compared against the RBM ( green ) ., Except for very rare patterns , most points are now very close to the identity line , as the model can fully capture higher-order dependencies ., Hidden units describe global dependencies that greatly increase the frequency of high-order patterns compared to individually active cells ., The 5% and 95% confidence intervals for the counting noise expected in the empirical frequency of states are shown as dashed lines ., The solid line is the identity ., Inserts in both models show the distribution of synchrony , P ( K ) , where K is the number of cells simultaneously firing in one time bin ., This metric has been used for example in 18 to show how pairwise models fail to capture higher-order dependencies ., In the case of the T6 data set with 36 cells shown here , the Ising model and RBM both provide a good fit to the distribution of synchrony in the observed data ., Note that any error in estimating the partition function of the models would lead to a vertical offset of all points ., Thus visually checking the alignment of the data cloud around the identity line provides a visual verification that there are no catastrophic errors in the estimation of the partition function ., Unfortunately we cannot use this alignment as a shortcut to compute the partition function without sampling , e . g . by defining the partition function Z such that the all zeros state has the correct frequency , as this assumes a perfect model fit ., For instance , regularization tends to reduce model probabilities of the most frequent states , so this estimate of Z would systematically overestimate the likelihood of regularized models ., We note , however , that for higher-order models with no regularization this estimate does indeed agree well with the AIS estimate ., The same models can be used to capture spatiotemporal patterns by treating previous time steps as additional cells ., Consecutive network states binned at 6 . 7 ms were concatenated in blocks of up to 13 time steps , for a total network dimensionality of 130 with 10 cells ., These models were cross-validated and the sparseness parameters optimized in the same way as for the instantaneous model ., This allows us to learn kernels that describe the temporal structure of interactions between cells .
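The construction of these spatiotemporal state vectors amounts to a simple stacking operation ., A minimal sketch , assuming the spikes are stored as a binary time-by-cells array and that consecutive windows are allowed to overlap ( a detail the text does not specify ) :

    import numpy as np

    def concat_time_steps(spikes, n_steps):
        # A (T, n_cells) binary array becomes (T - n_steps + 1, n_cells * n_steps),
        # one long state vector per window of consecutive time bins.
        T, n_cells = spikes.shape
        return np.asarray([spikes[t:t + n_steps].ravel()
                           for t in range(T - n_steps + 1)])

    # e.g. 10 cells and 13 bins of 6.7 ms give 130-dimensional state vectors
    rng = np.random.default_rng(1)
    states = concat_time_steps(rng.integers(0, 2, (1000, 10)), 13)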
In Fig . 5 we compare the relative performance of spatiotemporal Ising and higher-order models as a function of the number of time steps included in the model ., To create the datasets , we picked a subset of 10 cells with the highest firing rates from the B4 dataset ( 4 cells from subgranular , 2 from granular and 4 from supergranular layers ) and concatenated blocks of up to 13 subsequent data vectors ., This way models of any dimensionality divisible by 10 can be estimated ., The number of parameters of the RBM and Ising model was kept the same by fixing the number of hidden units in the RBM to be equal to the number of visible units; the sRBM was also estimated with a square weight matrix for the hidden layer ., As before , the higher-order models consistently outperform the Ising model ., The likelihood per spike increases with the network size for all models , as additional information from network interactions leads to an improvement in the predictive power of the model ., The curve for the Ising model levels off after a dimensionality of about 30 is reached , as higher-order structure that is not well captured by pairwise coupling becomes increasingly important ., However , the likelihood of higher-order models continues to increase through the entire experimental range ., The insert in the figure shows the entropy of the models , normalized by the data dimensionality by dividing by the number of frames and neurons ., The entropy was computed as H = ⟨E⟩ + log Z , where Z is the partition function and the expectation of the energy ⟨E⟩ was estimated by initializing 100 , 000 samples using the holdout data set , and then running 2000 steps of Gibbs sampling ., Due to temporal dependencies additional frames carry less entropy , but we do not reach the point of extensivity where the additional entropy per frame reaches a constant value ., As the RBM is better able to explain additional structure in new frames , the additional entropy for new frames is much less than for the Ising model .
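The entropy estimate rests on the identity H = ⟨E⟩ + log Z , which holds for any Boltzmann distribution p ( x ) = exp ( −E ( x ) ) /Z ., A minimal sketch , assuming Gibbs samples from the model and a log partition function estimate ( e . g . from AIS ) are already available; the names are illustrative:

    import numpy as np

    def boltzmann_entropy(samples, energy_fn, log_z):
        # H = <E> + log Z ; <E> is a Monte Carlo average over model samples
        mean_energy = np.mean([energy_fn(x) for x in samples])
        return mean_energy + log_z  # entropy in nats; divide by np.log(2) for bits

    # normalization per frame and neuron, as in the figure insert, would then be
    # boltzmann_entropy(samples, E, logZ) / (n_frames * n_neurons)   (hypothetical names)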
A similar observation has been made in 5 , where Ising and higher-order models for 100 retinal ganglion cells were compared to models for 10 time steps of 10 cells ., It is noteworthy that temporal dependencies are similar to dependencies between different cells , in that there are strong higher-order correlations not well described by pairwise couplings ., These dependencies extend surprisingly far across time ( at least 87 ms , corresponding to the largest models estimated here ) and are of such a form that including pairwise couplings to these states does not increase the likelihood of the model ., This has implications e . g . for GLMs that are typically estimated with linear spike coupling kernels , which will likely miss these interactions ., To predict spiking based on the network history , we can compute the conditional distribution of single units given the state of the rest of the network ., This is illustrated for a network with 15 time steps for a dimensionality of 150 ., This model is not included in the above likelihood comparison , as the AIS normalization becomes very expensive for this model size ., Fig . 6a ) shows the learned weights of 18 randomly chosen nonzero hidden units for a spatiotemporal RBM model with 150 hidden units ., Each subplot corresponds to one hidden unit , which connects to 10 neurons ( vertical axis ) across 15 time steps or 100 ms ( horizontal axis ) ., Some units specialize in spatial coupling across different cells at a constant time lag ., As the model has no explicit notion of time , the time lag of these spatial couplings is not unique and the model learns multiple copies of the same spatial pattern ., Thus while there are 55 nonzero hidden units , the number of unique patterns is much smaller , so that the effective representation is quite sparse ., The remaining units describe smooth , long-range temporal dependencies , typically for small groups of cells ., Both of these subpopulations capture higher-order structure connecting many neurons that cannot be well approximated with pairwise couplings ., By conditioning the probability of one cell at one time bin on the state of the remaining network , we can compute how much information about a cell is captured by the model over a naive prediction based on the firing rate of the cell ., This conditional likelihood for each cell is plotted in Fig . 6b ) in a similar way to excess log likelihood for the entire population in Fig . 5 , except in units of bits per second rather than bits per spike ., While the result here reflects our previous observation that Boltzmann machines with hidden units outperform Ising models , we note that the conditional probabilities are easily normalized in closed form since they describe a one-dimensional state space ., Thus we can ensure that the likelihood gain holds independent of the estimation of the partition function Z and is not due to systematic errors in sampling from the high-dimensional models .
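This one-dimensional normalization can be written down directly ., A sketch for an sRBM with the binary hidden units marginalized out; the parameter names ( pairwise couplings J , visible biases b , visible-to-hidden weights W , hidden biases c ) are illustrative assumptions , not the estimated values:

    import numpy as np

    def free_energy(v, J, b, W, c):
        # F(v) = -b.v - 0.5*v.J.v - sum_j log(1 + exp(c_j + (v.W)_j)),
        # the sRBM energy with the hidden units summed out
        return -(b @ v) - 0.5 * v @ J @ v - np.sum(np.logaddexp(0.0, c + v @ W))

    def p_spike_given_rest(v, i, J, b, W, c):
        # Conditional probability that unit i is active given all other units;
        # the normalization runs over just two states, so no Z is needed.
        v1, v0 = v.copy(), v.copy()
        v1[i], v0[i] = 1.0, 0.0
        dF = free_energy(v1, J, b, W, c) - free_energy(v0, J, b, W, c)
        return 1.0 / (1.0 + np.exp(dF))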
Fig . 6c ) provides a more intuitive look at the prediction ., For 1 s of data from one cell , where 5 spikes occur , we show the conditional firing probabilities for the three models given 100 ms of history of itself and the other cells ., Qualitatively , the models perform well in predicting spiking probabilities , suggesting they might compare favorably to prediction based on GLMs or Ising models 19 ., While there has been a resurgence of interest in Ising-like maximum entropy models for describing neural data , progress has been hampered mainly by two problems ., First , estimation of energy-based models is difficult since these models cannot be normalized in closed form ., Evaluating the likelihood of the model thus requires approximations or a numerical integral over the exponential number of states of the model , making maximum likelihood estimation computationally intractable ., Even the pairwise Ising model is typically intractable to estimate , and various approximations are required to overcome this problem ., Second , the number of model parameters to be estimated grows very rapidly with neural population size ., If correlations up to order k are considered , the number of parameters is proportional to N^k for a population of N neurons ., In general , fully describing the distribution over states requires a number of parameters which is exponential in the number of neurons ., This can be dealt with by cutting off dependencies at some low order , by estimating only a small number of higher-order coupling terms , or by imposing some specific form on the dependencies ., We attempted to address both of these problems here ., Parameter estimation was made tractable using MPF , and latent variables were shown to be an effective way of capturing high-order dependencies ., This addresses several shortcomings that have been identified with the Ising model ., As argued in 20 , models with direct ( pairwise ) couplings are not well suited to model data recorded from cortical networks ., Since only a tiny fraction of the neurons making up the circuit are recorded , most input is likely to be common input to many of the recorded cells rather than direct synapses between them ., While this work compares Generalized Linear Models ( GLMs ) such as models of the retina 21 and for LGN 22 to linear dynamical systems ( LDS ) models , the argument applies equally for the models presented here ., Another shortcoming of the Ising model and some previous extensions is that the number of parameters to be estimated does not scale favorably with the dimensionality of the network ., The number of pairwise coupling terms in GLM and Ising models scales with the square of the number of neurons , so with the amounts of data typically collected in electrophysiological experiments it is only possible to identify the parameters for small networks with a few tens of cells ., This problem is aggravated by including higher-order couplings: for example the number of third order coupling parameters scales with the cube of the data dimensionality ., Therefore attempting to estimate these coupling parameters directly is a daunting task that usually requires approximations and strong regularization .
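The scaling argument can be checked by counting coupling terms directly; a small sketch for a population of N neurons:

    from math import comb

    def n_params(N, k):
        # number of coupling terms of orders 1..k: a sum of binomial coefficients
        return sum(comb(N, m) for m in range(1, k + 1))

    # pairwise terms grow roughly as N^2/2 and triplets as N^3/6, e.g.
    # n_params(100, 2) = 5050 and n_params(100, 3) = 166750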
Early attempts at modeling higher-order structure side-stepped these technical issues by focussing on structure in very small networks ., Ohiorhenuan noted that Ising models fail to explain structure in cat visual cortex 8 and was able to model triplet correlations 9 by considering very small populations of no more than 6 neurons ., Similarly , Yu et al . 3 , 10 show that over the scale of adjacent cortical columns of anesthetized cat visual cortex , small subnetworks of 10 cells are better characterized with a dichotomized Gaussian model than the pairwise maximum entropy distribution ., While the dichotomized Gaussian 23 is estimated only from pairwise statistics , it carries higher-order correlations that can be interpreted as common Gaussian inputs 24 ., However , these correlations are implicit in the structure of the model and not directly estimated from the data as with the RBM , so it is not clear that the model would perform as well on different datasets ., Given that higher-order correlations are important to include in statistical models of neural activity , the question turns to how these models can be estimated for larger data sets ., In this section , we focus on two approaches that are complementary to our model using hidden units ., The increasing role of higher-order correlations in larger networks was first observed in 25 , where Ising models were fit via MCMC methods to the same 40 cell retina dataset that was analyzed in terms of subsets of 10 cells in 2 ., This point is further emphasized by Schneidman and Ganmor in 5 , who caution that trying to model small subsets ( 10 cells ) of a larger network to infer properties of the full network may lead to incorrect conclusions , and show that for retinal networks higher-order correlations start to dominate only once a certain network size is reached ., Therefore they address the same question as the present paper , i . e . how to capture higher-order correlations without the accompanying growth in the number of free parameters in a larger network ., In their proposed Reliable Interaction Model ( RIM ) , they exploit the sparseness of the neural firing patterns to argue that it is possible to explicitly include third- , fourth- and higher-order terms in the distribution , as most higher-order coupling terms will be zero ., Therefore the true distribution can be well approximated from a small number of these terms , which can be calculated using a simple recursive scheme ., In practice , the main caveat is that only patterns that appear in the data many times are used to calculate the coupling terms ., While the model by construction assigns correct relative probabilities to observed patterns , the probability assigned to unobserved patterns is unconstrained , and most of the RIM's probability mass may thus be assigned to states which never occur in the data ., The second alternative to the RBM with hidden units is to include additional low-dimensional constraints in an Ising model ., In the “K-pairwise” model 18 , 26 , in addition to constraining pairwise correlations , a term is introduced to constrain the probability of K neurons being active simultaneously ., This adds very little model complexity , but significantly improves the model of the data .
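The K-pairwise construction adds only a single lookup table to the Ising energy ., A sketch of the energy function it implies; the parameter names , and the treatment of V as a learned penalty for each possible population count , are assumptions consistent with the description above:

    import numpy as np

    def k_pairwise_energy(x, h, J, V):
        # E(x) = -h.x - 0.5*x.J.x - V[K(x)], where K(x) counts the active
        # neurons and V has one learned entry for each count 0..N
        K = int(x.sum())
        return -(h @ x) - 0.5 * x @ J @ x - V[K]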
This is shown , for example , by computing conditional predictions in a similar fashion to that shown in Fig . 6c ) , where the K-pairwise model for a population of 100 retinal ganglion cells has an 80% correlation with the repeated trial PSTH ., In contrast to the RBM , however , this model is not structured in a way that can be easily interpreted in terms of functional connectivity ., To estimate these models for large numbers of neurons , the authors leverage the sampling algorithm described in 27 , a regularized histogram Monte Carlo method ., In addition to proposing a faster ( though slower than MPF ) parameter estimation method for this class of models , Tkačik and colleagues address the difficulty in sampling from the model and computing the partition function ., In our experiments the overall limiting factor is the Gibbs sampler in the AIS partition function estimation ., Tkačik et al . use a more efficient sampling algorithm ( Wang-Landau ) to compute partition functions and entropy of their models ., As an even simpler approach to the partition function problem , they suggest that it can be obtained in closed form if the empirical probability of at least one pattern in the data is accurately known ., A case in point is the all zeros pattern that is typically frequent for recordings with sparsely firing neurons ., Unfortunately , this approach is limited in that it assumes that the probability the model assigns to this pattern is identical to its empirical probability ., In the case that the model has not been perfectly fit to the data , or in the case that the data does not belong to the model class , this will lead to an incorrect estimate of the partition function ., Since the activity we are modeling is in response to a specific stimulus , one may rightfully question whether the observed higher-order correlations in neural activity are simply due to higher-order structure contained in the stimulus , as opposed to being an emergent property of cortical networks ., In an attempt to tease apart the contribution of the stimulus , we included a nonparametric PSTH term in the model ., However , this can capture arbitrarily complex stimulus transformations using the trial-averaged response to predict the response to a new repetition of the same stimulus ., As an "oracle model" , it not only captures the part of the response that could be attributed to a feed-forward receptive field , but also captures contextual modulation effects mediated by surrounding columns and feedback from higher brain areas , essentially making it "too good" as a stimulus model ., The RBM and Ising models are then relegated to merely explain the trial-to-trial variability in our experiments ., Not including stimulus terms and finding the best model to explain the correlations present in the data , irrespective of whether they are due to stimulus or correlated variability , seems to be an equally valid approach to discover functional connectivity in the population ., GLMs 21 can be used to model each cell conditioned on the rest of the population ., While mostly used for stimulus response models including stimulus terms , they are easily extended with terms for cross-spike coupling , which capture interactions between cells ., GLMs have been successfully augmented with latent variables 20 , for instance to model the effect of common noisy inputs on synchronized firing at fast timescales 28 ., A major limitation of GLMs is that current implementations can only be estimated efficiently if they are linear in the stimulus features and network coupling terms , so they are not easily generalized to
higher-order interactions ., Two approaches have been used to overcome this limitation for stimulus terms ., The GLM can be extended with additional nonlinearities , preserving convexity on subproblems 22 ., Alternatively , the stimulus terms can be packaged into nonlinear features which are computed in preprocessing and usually come with the penalty of a large increase in the dimensionality of the problem 29 ., However , we are not aware of any work applying either of these ideas to spike history rather than stimulus terms ., Another noteworthy drawback of GLMs is that instantaneous coupling terms cannot be included 20 , so instantaneous correlations cannot be modeled | Introduction, Results, Discussion, Materials and Methods | We statistically characterize the population spiking activity obtained from simultaneous recordings of neurons across all layers of a cortical microcolumn ., Three types of models are compared: an Ising model which captures pairwise correlations between units , a Restricted Boltzmann Machine ( RBM ) which allows for modeling of higher-order correlations , and a semi-Restricted Boltzmann Machine which is a combination of Ising and RBM models ., Model parameters were estimated in a fast and efficient manner using minimum probability flow , and log likelihoods were compared using annealed importance sampling ., The higher-order models reveal localized activity patterns which reflect the laminar organization of neurons within a cortical column ., The higher-order models also outperformed the Ising model in log-likelihood: On populations of 20 cells , the RBM had 10% higher log-likelihood ( relative to an independent model ) than a pairwise model , increasing to 45% gain in a larger network with 100 spatiotemporal elements , consisting of 10 neurons over 10 time steps ., We further removed the need to model stimulus-induced correlations by incorporating a peri-stimulus time histogram term , in which case the higher order models continued to perform best ., These results demonstrate the importance of higher-order interactions to describe the structure of correlated activity in cortical networks ., Boltzmann Machines with hidden units provide a succinct and effective way to capture these dependencies without increasing the difficulty of model estimation and evaluation . | Communication between neurons underlies all perception and cognition ., Hence , to understand how the brains sensory systems such as the visual cortex work , we need to model how neurons encode and communicate information about the world ., To this end , we simultaneously recorded the activity of many neurons in a cortical column , a fundamental building block of information processing in the brain ., This allows us to discover statistical structure in their activity , a first step to uncovering communication pathways and coding principles ., To capture the statistical structure of firing patterns , we fit models that assign a probability to each observed pattern ., Fitting probability distributions is generally difficult because the model probabilities of all possible states have to sum to one , and enumerating all possible states in a large system is not possible ., Making use of recent advances in parameter estimation , we are able to fit models and test the quality of the fit to the data ., The resulting model parameters can be interpreted as the effective connectivity between groups of cells , thus revealing patterns of interaction between neurons in a cortical circuit . 
| computational neuroscience, neuroscience, biology and life sciences, computational biology | null |
2,378 | journal.pcbi.1001096 | 2,011 | Flexible Cognitive Strategies during Motor Learning | When learning a new motor skill , verbal instruction often proves useful to hasten the learning process ., For example , a new driver is instructed on the sequence of steps required to change gears when using a standard transmission ., As the skill becomes consolidated , the driver no longer requires explicit reference to these instructions ., Operating a vehicle with a stiffer or looser clutch does not generally require further instruction , but rather entails a subtle recalibration , or adaptation of the previously learned skill ., Indeed , the use of an explicit strategy may even lead to degradation in the experts performance ., Consideration of these contradictory issues brings into question the role of instructions or explicit strategies in sensorimotor learning ., The type of motor task and nature of the instruction can have varying effects on motor execution and learning 1–3 ., In the serial reaction time task ( SRT ) , participants produce a sequence of cued button presses ., If the participant is informed of the underlying sequence , learning occurs much more rapidly compared to when sequential learning arises from repeated performance 4 ., However , learning in the SRT task entails the linkage of a series of discrete actions ., Explicit instructions of the sequence structure may be viewed as a way to create a working memory representation of the series ., Many skills lack such a clear elemental partition and , as such , participants cannot easily verbalize what a successful movement entails ., For example , the pattern of forces required to move the hand in a straight line in a novel force field 5–7 would be hard to verbalize ., Various studies have examined the role of explicit strategies in tasks involving sensorimotor adaptation 8–11 ., The benefits of an explicit strategy may be illusory with adaptive processes arising from automatic and incremental updating of a motor system that is impenetrable to conscious intervention 12–16 ., However , performance measures indicate that adaptation may differ between conditions in which participants are either aware or unaware of the changes in the environment 17 ., For example , a large visuomotor rotation can be introduced abruptly , in which case , awareness is likely , or introduced incrementally such that participants are unaware of the rotation ., The abrupt onset of large unexpected errors may promote the use of cognitive strategies 18–20 ., Participants who gain explicit knowledge of an imposed visuomotor rotation show better performance during learning than participants who report little or no awareness of the rotation 10 ., Moreover , the rate of learning , at least in the early phase of adaptation , correlates positively with spatial working memory span 21 , suggesting that strategic compensation may be dependent on working memory capacity ., Studies of sensorimotor adaptation during aging also indicate that the rate of learning is slower in older adults compared to young adults , despite similar aftereffects 22–24 ., This cost is absent in older adults who report awareness of the rotation 25 ., In many of the studies cited above , the assumption has been that the development of awareness can lead to the utilization of compensatory strategies ., However , few studies have directly sought to manipulate strategic control during sensorimotor adaptation ., One striking exception is a study by Mazzoni and Krakauer 9 ., Participants viewed a 
display of eight small circles , or visual landmarks , that were evenly spaced by 45° to form a large , implicit ring ., The target location was specified by presenting a bullseye within one of the eight circles ., After an initial training phase in which the visuomotor mapping was unaltered , a 45° rotation in the counterclockwise direction ( CCW ) was introduced ., In the standard condition in which no instructions were provided , participants gradually reduced endpoint error by altering their movement heading in the clockwise direction ( CW ) ., In the strategy condition , participants were given explicit instructions to move to the circle located 45° clockwise to the target ., This strategy enabled these participants to immediately eliminate all error ., However , as training continued , the participants progressively increased their movement heading in the clockwise direction ., As such , the endpoint location of the feedback cursor drifted further from the actual target location and , thus , performance showed an increase in error over training , a rather counterintuitive result 26 ., Mazzoni and Krakauer 9 proposed that this drift arises from the implicit adaptation of an internal forward model ., Importantly , the error signal for this learning process is not based on difference between the observed visual feedback and target location ., Rather , it is based on the difference between the observed visual feedback and strategic aiming location ., Even though participants aim to a clockwise location of the target ( as instructed ) , the motor system experiences a mismatch between the predicted state and the visual feedback ., This mismatch defines an error signal that is used to recalibrate the internal model ., Reducing the mismatch results in an adjustment of the internal model such that the next movement will be even further in the clockwise direction ., Thus , the operation of an implicit learning process that is impervious to the strategy produces the paradoxical deterioration in performance over time ., In the present paper , we start by asking how this hypothesis could be formalized in a computational model of motor learning ., State space modeling techniques have successfully described adaptation and generalization during motor learning 27–29 ., These models focus on how learning mechanisms minimize error from trial to trial ., Variants of these models postulate multiple learning mechanisms that operate at different time scales 28 ., Within this framework , strategic factors might be associated with fast learning processes that rapidly reduce error ., However , such models are unable to account for the drift that arises following the deployment of a strategy ., To address these issues , we developed a series of setpoint state-space models of adaptation to quantitatively explore how strategic control and implicit adaptation interact ., Assuming a fixed strategy , adaptation should continue to occur until the error signal , the difference between the feedback location and the aiming location is zero; that is , the visual feedback matches the intended aim of the reach ., As such , drift arising from implicit adaptation should continue to rise until it offsets the adopted strategy ., To test this prediction , we increased the length of the adaptation phase ., Moreover , we manipulated the salience of the visual landmarks used to support the strategy ., We hypothesized that these landmarks served as a proxy for the aiming location ., If this assumption is correct , then elimination 
of the visual landmarks should weaken the error signal , given uncertainty concerning the aiming location , and drift should be attenuated ., We test this prediction by comparing performance with and without visual landmarks ., When informed of an appropriate strategy that will compensate for the rotation , participants immediately counteract the rotation and show on-target accuracy ., The standard model as formulated above does not provide a mechanism to implement an explicit strategy ., To allow immediate implementation of the strategy , we postulate that there is direct feedthrough of the strategy ( s ) to the target error equation ( equation 1 ) : e ( n ) = r + x ( n ) + s ( n ) ( 3 ) where e ( n ) is the target error on trial n , r is the imposed rotation , x ( n ) is the internal model's estimate of the rotation , and s ( n ) is the implemented strategy ., Direct feedthrough allows the strategy to contribute to the target error equation without directly influencing the updating of the internal model ., If the strategy operated through the internal model , then the impact of the strategy would take time to evolve , assuming there is substantial memory of the internal model's estimation of the rotation ( i . e . , A has a high value in Eq . 2 ) ., With direct feedthrough , the implementation of an appropriate strategy can immediately compensate for the rotation ., In the current arrangement , the appropriate strategy is fixed at 45° in the CW direction from the cued target ., Once the strategy is implemented , performance should remain stable since the error term is small ., Indeed , a model based on Eq . 3 immediately compensates for the rotation ., The target error , the difference between the feedback location and target location , is essentially zero on the first trial with the strategy , and remains so throughout the rotation block ( Figure 1B – green line ) ., However , this model fails to match the empirical results observed by Mazzoni and Krakauer 9: performance drifts over time with an increase in errors in the direction of the strategy ., This phenomenon led the authors to suggest that the prediction error signal to the internal model is not based on target error ., Instead , the error signal should be defined by the difference between the feedback location and aiming location ( see Figure 2E ) : ê ( n ) = r + x ( n ) + s ( n ) − s_d ( n ) ( 4 ) where s_d ( n ) is the desired ( instructed ) aiming location ., The formulation of the prediction error term in Eq . 4 is akin to a setpoint or reference signal from engineering control theory 30 ., In typical motor learning studies , the setpoint is to reach to the target ., When there is no strategy ( s = 0 ) , the target error in Eq . 1 is the same as the error term in Eq . 4 ., However , when a strategy with direct feedthrough is used ( s≠0 ) , the strategy terms may cancel out if the actual implemented strategy is similar to the desired strategy ., The input error to update the internal model's estimate of the rotation becomes: ê ( n ) = r + x ( n ) ( 5 ) This model shows immediate compensation for the visuomotor rotation , and more importantly , produces a gradual deterioration in performance over the course of continued training with the reaching error drifting in the direction of the strategy ( Figure 1A – red line ) , consistent with the results reported by Mazzoni and Krakauer 9 ., It is important to emphasize that the error signal for sensorimotor recalibration in Eq . 4 is not based on the difference between the feedback location and target location ( target error ) ., Rather , the error signal is defined by the difference between the feedback location and aiming location , or what we will refer to as aiming error .
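To make the behavior of Eqs . 3–5 concrete , the following minimal simulation uses the notation reconstructed above with illustrative parameter values ( not fitted ones ) ., With a perfectly retained internal model ( A = 1 ) , the target error starts at zero and drifts toward the full 45° in the direction of the strategy:

    import numpy as np

    def simulate_setpoint(r=-45.0, s=45.0, s_d=45.0, A=1.0, B=0.02, n_trials=320):
        # Fixed-strategy setpoint model: target error e = r + x + s (Eq. 3),
        # aiming error ehat = e - s_d (Eq. 4), internal model driven by ehat.
        x, err = 0.0, []
        for n in range(n_trials):
            e = r + x + s          # zero on the first strategy trial
            ehat = e - s_d         # initially -45 deg, despite e = 0
            x = A * x - B * ehat   # recalibration (cf. Eq. 2)
            err.append(e)
        return np.array(err)

    drift = simulate_setpoint()  # target error drifts CW toward +45 deg

Varying B in this sketch changes how quickly the drift approaches its asymptote , which is the manipulation considered next .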
When a fixed strategy is adopted throughout training ( Figure 1B – blue line ) , the aiming error is ( initially ) quite large given that the predicted hand location is far from the location of visual feedback , even though the feedback cursor may be close to the actual target ., In its simplest form , the setpoint model predicts that , as the internal model minimizes this error ( Figure 1B – red line ) , drift will continue until the observed feedback of the hand matches the aiming location ., That is , the magnitude of the drift should equal the size of the strategic adjustment ., In the Mazzoni and Krakauer set-up 9 , the drift would eventually reach 45° in the CW direction ( Figure 1A – red line ) ., A second prediction can be derived by considering that the error signal in Eq . 4 relies on an accurate estimate of the strategic aiming location ., We assume that a visual landmark in the display can be used as a reference point for strategy implementation ( e . g . , the blue circle adjacent to the target ) ., This landmark can serve as a proxy for the aiming location ., The salience of this landmark provides an accurate estimate of the aiming location and , from Eq . 4 , drift should be pronounced ., However , if these landmarks are not available , then the estimate of the aiming location will be less certain ., Previous studies have shown that adaptation is attenuated when sensory feedback is noisy 31 , 32 ., One approach for modeling the effect of changing the availability or certainty of the ( strategy defined ) aiming location would be to vary the adaptation rate ( B ) ., For example , B could be smaller if there is a decrease in certainty of the aiming location , and correspondingly , a decrease in the certainty of the aiming error ., This model predicts that the rate of drift is directly related to B: if B is lower due to decreased certainty of the aiming location , then the rate of drift will be attenuated ( Figure 1A – cyan line ) ., To evaluate the predictions of this setpoint model , participants were tested in an extended visuomotor rotation task in which we varied the visual displays used to define the target and strategic landmarks ( see methods ) ., The target was defined as a green circle , appearing at one of eight possible locations , separated by 45° ( Figure 2 , only three shown here ) ., By encouraging the participants to make movements that "sliced" through the target , and only providing feedback at the point of intersection with the virtual target ring , we were able to train the participants to move quickly with relatively low trial-to-trial variability ., We assume that participants mostly relied on feedforward control given the ballistic nature of the movements and absence of continuous online feedback ., Participants were assigned to one of three experimental groups ( n = 10 per group ) , with the groups defined by our manipulation of the blue landmarks in the visual displays ., For the aiming-target group ( AT ) , the blue circles were always visible , similar to the method used by Mazzoni and Krakauer ., For the disappearing aiming-target group ( DAT ) , the blue circles were visible at the start of the trial and disappeared when the movement was initiated ., For the no aiming-target group ( NoAT ) , the blue landmarks were not included in the display ., The participants were initially required to reach to the green target ( Figure 2A ) ., Movement duration , measured when the hand crossed the target ring , averaged 275±50 . 8 ms with no significant difference between groups ( F2 , 27 = 1 . 02 , p = 0 . 37 ) .
Following the initial familiarization block , participants were trained to use a strategy of moving 45° in the CW direction from the green target location ( Figure 2B ) ., This location corresponded to the position of the neighboring blue circle ., Feedback was veridical in this phase ( i . e . , corresponded to hand position ) ., To help participants in the NoAT group learn to move at 45° , the blue circles were also presented on half of the trials for this group ( in this phase only ) ., The mean angular shifts , relative to the green target , were 43 . 4±1 . 6° and 42 . 9±1 . 2° for the AT and DAT groups , respectively ( Figure 3 - orange ) ., For the NoAT group , the mean angular shift was 43 . 5±0 . 9° when the aiming target was present and 40 . 1±7 . 1° when the aiming target was absent ., While the variance was considerably larger for trials without the aiming target , the means were not significantly different ( t18 = 0 . 95 , p = 0 . 38 ) ., Practicing the 45° CW strategy did not produce interference on a subsequent baseline block in which participants were again instructed to reach to the cued , green target ( Figure 3 – black ) ., Over the last 10 movements of the familiarization block , participants across all groups had an average target error of −1 . 5±0 . 7° ., Over the first 10 movements of the baseline block , this value was −0 . 5±0 . 6° , confirming that the strategy-only block did not produce a substantial bias ., Without warning , the CCW rotation was introduced ( Figure 2C ) ., As expected , the introduction of the CCW rotation induced a large target error ., Averaged over the two rotation probe trials , the mean values were −41 . 6±3 . 3° , −43 . 8±1 . 1° , and −43 . 5±3 . 2° for the AT , DAT , and NoAT groups , respectively ( Figure 3 – "x" ) ., After the participants were instructed to use the clockwise strategy ( Figure 2D ) , the target error was reduced immediately to 3 . 5±4 . 4° , 1 . 0±4 . 3° , and −2 . 5±6 . 6° , values that were not significantly different from each other ( F2 , 27 = 1 . 96 , p = 0 . 16 ) ., The participants were then instructed to use the strategy and required to produce a total of 320 reaching movements under the CCW rotation ., This extended phase allowed us to a ) verify that error increased over time , drifting in the direction of the strategy , and b ) determine if the magnitude of the drift would approximate the magnitude of the rotation , a prediction of the simplest form of the setpoint model ., Consistent with the results of Mazzoni and Krakauer 9 , error increased in the direction of the strategy over the initial phase of the rotation block ., However , the extent of the drift fell far short of the magnitude of the rotation ., To quantify the peak drift , each participant's time series of endpoint errors was averaged over 10 movements and we identified the bin with the largest error ., Based on this estimate of peak drift , a significant difference was observed between groups ( F2 , 27 = 21 . 9 , p<0 . 001; Figure 4A ) .
This is consistent with the prediction of the model that the salience of the aiming targets would influence the estimation of the aiming location ., Drift was largest when the aiming targets were always visible , and progressively less for the DAT and NoAT groups ., Drift was not isolated to particular target locations ( Figure 4B ) ., Our rotation plus strategy block lasted 320 trials , nearly four times the number of trials used by Mazzoni and Krakauer 9 ., This larger window provides an interesting probe on learning given that the participants become progressively worse in performance with respect to the target over the drift phase ., While the AT group had the largest drift , they eventually showed a change in performance such that the heading angle at the end of the rotation block was close to 45° CW from the green target ( Figure 3A ) ., By the end of training , their target error was only 0 . 3±3 . 9° , which was not significantly different from zero ( t9 = 0 . 17 , p = 0 . 85 ) ., We did not observe a consistent pattern in how these participants counteracted the drift ( Figure 5 ) ., Two participants showed clear evidence of an abrupt change in their performance , suggesting a discrete change in their aiming strategy ., For the other eight AT participants , the changes in performance were more gradual ., The drift persisted over the 320 trials of the rotation block for participants in the DAT group ( Figure 3C ) ., The average drift was 5 . 9±4 . 8° at the end of training , a value that was significantly greater than zero ( t9 = 2 . 40 , p = 0 . 04 ) ., Given that the NoAT group showed minimal drift , we did not observe any consistent changes in performance over the block ., At the end of training , the mean target error was only 1 . 0±1 . 9° , a value which is not significantly different from zero ( t9 = 1 . 01 , p = 0 . 33 ) ., The availability or certainty in the estimate of the aiming location was manipulated by altering the presence of the aiming target across the groups ., As predicted by the setpoint model , the degree of drift was attenuated as the availability of the aiming targets decreased ., In the current implementation of our model , this decrease in drift rate is captured by a decrease in the adaptation rate ( B ) : with greater uncertainty , the weight given to the error term for updating the internal model is reduced ., However , one prediction of this model is at odds with the empirical results ., Variation in the adaptation rate not only predicts a change in drift rate , but also predicts a change in the washout period ., Specifically , decreasing the adaptation rate should produce a slower washout , or extended aftereffect ( Figure 1B – cyan ) ., This prediction was not supported ., The washout rates are similar across the three groups ( bootstrap , p>0 . 11 between all groups ) .
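The mismatch can be seen directly in simulation ., Extending the earlier sketch with a washout phase ( rotation off , no strategy ) shows that lowering B slows both the drift and the washout , whereas the data show group differences in drift with matched washout; parameter values are again illustrative:

    import numpy as np

    def drift_and_washout(B, A=1.0, r=-45.0, s=45.0, s_d=45.0,
                          n_rot=320, n_wash=80):
        x, err = 0.0, []
        for n in range(n_rot):           # rotation block with a fixed strategy
            e = r + x + s
            x = A * x - B * (e - s_d)    # adaptation driven by the aiming error
            err.append(e)
        for n in range(n_wash):          # washout: r = 0, s = 0, s_d = 0
            e = x                        # the aftereffect is the residual x
            x = A * x - B * e
            err.append(e)
        return np.array(err)

    slow, fast = drift_and_washout(B=0.005), drift_and_washout(B=0.02)
    # the low-B curve both drifts and washes out more slowly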
One could hypothesize different adaptation rates during the rotation and washout phases , with the effect of target certainty only relevant for the former ., However , a post hoc hypothesis along these lines is hard to justify ., Alternatively , it is possible that the adaptation rate ( B ) is similar for the three groups and that the variation in drift rate arises from another process ., One possibility is that the manipulation of the availability of the aiming targets influences the certainty of the desired strategy term in Equation 4 , and correspondingly , modifies the aiming error term: ê ( n ) = r + x ( n ) + s ( n ) − K·s_d ( n ) ( 6 ) A value of K that is less than 1 will attenuate drift ( Figure 1A – magenta line; simulated with K = 0 . 5 ) because the strategy output ( Eq . 3 ) and the desired strategy ( Eq . 6 ) do not completely cancel out ., Consequently , the error used to adjust the internal model will be smaller and produce attenuated drift ( Figure 1B – magenta line ) ., Moreover , because the strategy is no longer used during the washout phase , the K term is no longer relevant ., Thus , the washout rates should be identical across the three groups , assuming a constant value of B ., In sum , while variation in B or K can capture the group differences in drift rate , only the latter accounts for the similar rates of washout observed across groups ., When the availability of the aiming targets is reduced , either by flashing them briefly or eliminating them entirely , the participants' certainty of the aiming location is attenuated ., This hypothesis is consistent with the notion that the aiming locations serve as a proxy for the predicted aiming location ., As noted above , none of the participants showed drift approaching 45° ., Even those exhibiting the largest drift eventually reversed direction such that they became more accurate over time in terms of reducing endpoint error with respect to the target location ., To capture this feature of the results , we considered how participants might vary their strategy over time as performance deteriorates ., It is reasonable to assume that the participant may recognize that the adopted strategy should be modified to offset the rising error ., One salient signal that could be used to adjust the strategy is the target error , the difference between the target location and the visual feedback ., To capture this idea , we modified the setpoint model , setting the strategy as a function of target error ( Figure 2E ) : s ( n+1 ) = E·s ( n ) − F·e ( n ) ( 7 ) where E defines the retention of the state of the strategy and F defines the rate of strategic adjustment ., As target error grows ( i . e . , drift ) , the strategy will be adjusted to minimize this error ( Figure 2E ) ., In our initial implementation of the setpoint model , the strategy term was fixed at 45° ., Equation 7 allows the strategy term to vary , taking on any value between 0° and 360° ., The availability of the aiming targets , captured by K in Eq . 6 , influences the magnitude of the drift ., Greater drift occurs when the aiming error , that between the feedback location and aiming location , is salient ( Figure 1B – red line; K = 1 ) ., However , when the target error grows too large , adjustments to the strategy begin to gain momentum and performance becomes more accurate with respect to the target given the change in strategy ( Figure 1C – blue line; simulated with E = 1 and F = 0 . 01 ) .
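Coupling the strategy update of Eq . 7 to the aiming error of Eq . 6 yields a two-state model whose error rises , peaks , and returns toward zero ., A sketch under the additional assumption that the desired aim s_d ( n ) tracks the current strategy state s ( n ) ; all values are illustrative:

    import numpy as np

    def flexible_strategy(K=1.0, E=1.0, F=0.01, A=1.0, B=0.02,
                          r=-45.0, n_trials=320):
        x, s, err = 0.0, 45.0, []         # strategy initialized at the instruction
        for n in range(n_trials):
            e = r + x + s                 # target error (Eq. 8)
            ehat = r + x + (1.0 - K) * s  # aiming error with s_d(n) = s(n) (Eq. 9)
            x = A * x - B * ehat          # implicit recalibration (Eq. 10)
            s = E * s - F * e             # strategic re-aiming (Eq. 11)
            err.append(e)
        return np.array(err)

    err = flexible_strategy()  # drift emerges, peaks, then is offset by re-aiming

In this sketch the internal model still converges toward the full rotation while the strategy is gradually abandoned , so the aftereffect exceeds the peak drift , consistent with the comparison reported below .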
More emphasis on target errors rather than the aiming error results in less drift ( Figure 1C – orange line; simulated with F = 0 . 05 ) ., Thus , the relative values of K and F determine the degree of performance error that is tolerated before strategic adjustments compensate to offset the drift ( Figure 1C ) ., This setpoint model ( Eqs . 8–11 ) was fit by bootstrapping ( see methods ) each group's time series of target errors: e ( n ) = r + x ( n ) + s ( n ) ( 8 ) ê ( n ) = r + x ( n ) + s ( n ) − K·s_d ( n ) ( 9 ) x ( n+1 ) = A·x ( n ) − B·ê ( n ) ( 10 ) s ( n+1 ) = E·s ( n ) − F·e ( n ) ( 11 ) The fits ( Figure 6A–C and Table 1 ) show that K is the greatest for the AT group and progressively less for the DAT group and the NoAT group ( AT vs DAT group: p = 0 . 003; AT vs NoAT groups: p<0 . 001; DAT vs . NoAT groups: p<0 . 001 ) ., When the aiming targets remain visible , the aiming error signal is readily available , and the weight given to the strategic aiming location , K , is larger ., Conversely , the weight given to the target error , F , is significantly greater for the NoAT group compared to the AT and DAT groups ( NoAT vs AT group: p = 0 . 005; NoAT vs DAT group: p = 0 . 001 ) ., These results are consistent with the hypothesis that participants in the NoAT group rely more on target errors because the absence of the aiming targets removes a reference point for generating a reliable aiming error ( Eq . 9 ) ., The dynamics of the recalibration process and strategy state ( Eqs . 10 and 11 ) are plotted in Figure 6D ., These parameters , along with the other parameters that represent the memory of the internal model ( A ) , the adaptation rate ( B ) , and the memory of the strategy ( E ) , are listed in Table 1 ., Following the rotation block , we instructed the participants that the rotation would be turned off and they should reach to the cued green target ., For the first eight trials , no endpoint feedback was presented ., This provided a measure of the degree of sensorimotor recalibration in the absence of learning ( Figure 4C – triangles ) ., Aftereffects were observed in all three groups ., The average error was significantly different from zero in the CW direction from the green target for all three groups ( one sample t-test for each group , p<0 . 001 ) ., In comparisons between the groups , the AT group showed the largest aftereffect of 19 . 2±3 . 7° ( t18 = 3 . 5 , p = 0 . 003 and t18 = 5 . 61 , p<0 . 001 compared to the DAT and NoAT groups , respectively ) ., The mean aftereffects for the DAT and NoAT groups were 10 . 4±3 . 2° and 6 . 8±2 . 2° , values that were not significantly different .
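The model fits reported above can be sketched as a nonlinear least-squares problem over a group's target-error time series; the optimizer , bounds , and starting values below are illustrative choices , and resampling participants before refitting would yield the bootstrap distributions described in the text:

    import numpy as np
    from scipy.optimize import least_squares

    def simulate(params, n_trials, r=-45.0):
        A, B, E, F, K = params
        x, s, err = 0.0, 45.0, []
        for n in range(n_trials):
            e = r + x + s
            ehat = r + x + (1.0 - K) * s   # assumes s_d(n) = s(n), as above
            x, s = A * x - B * ehat, E * s - F * e
            err.append(e)
        return np.array(err)

    def fit(observed):
        # observed: mean target error per trial during the rotation block
        res = least_squares(lambda p: simulate(p, len(observed)) - observed,
                            x0=[1.0, 0.02, 1.0, 0.01, 0.5],
                            bounds=([0.8, 0.0, 0.8, 0.0, 0.0],
                                    [1.0, 0.2, 1.0, 0.2, 1.0]))
        return res.x  # A, B, E, F, K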
Following the rotation block , we instructed the participants that the rotation would be turned off and they should reach to the cued green target ., For the first eight trials , no endpoint feedback was presented ., This provided a measure of the degree of sensorimotor recalibration in the absence of learning ( Figure 4C – triangles ) ., Aftereffects were observed in all three groups ., The average error was significantly different from zero in the CW direction from the green target for all three groups ( one-sample t-test for each group , p < 0.001 ) ., In comparisons between the groups , the AT group showed the largest aftereffect of 19.2 ± 3.7° ( t18 = 3.5 , p = 0.003 and t18 = 5.61 , p < 0.001 compared to the DAT and NoAT groups , respectively ) ., The mean aftereffects for the DAT and NoAT groups were 10.4 ± 3.2° and 6.8 ± 2.2° , values that were not significantly different ., When endpoint feedback was again provided , the size of the aftereffect diminished over the course of the washout block ( Figure 4C – squares ) ., In the setpoint model , the internal model will continue to adapt even in the face of strategic adjustments adopted to improve endpoint accuracy ., As such , the model predicts that the size of the aftereffect should be larger than the degree of drift ., To test this prediction , we compared the peak drift during the rotation block to the aftereffect ., In the preceding analysis , we had estimated peak drift for each participant by averaging over 10 movements and identifying the bin with the largest error ., However , a few errant movements could easily bias the estimate of drift within a 10-movement bin ., As an alternative , we used a bootstrapping procedure to identify the bin with the largest angular error for each group ., This method should decrease the effect of noise because the estimate of peak drift is selected from an averaged sample of the participants' data ., Moreover , any bias in the estimate of the magnitude of the peak should be uniform across the three groups of participants ., For consistency , we estimated the aftereffect ( the first 8 trials without feedback ) using the same bootstrap procedure ., For the AT group , the peak drift was 14.8 ± 2.5° in the CW direction , occurring 64 ± 30 movements into the rotation block ., For the DAT group , the peak drift was 10.0 ± 1.8° , occurring at a later point in the rotation block ( 130 ± 106 ) ., For the NoAT group , peak drift was only 3.2 ± 2.7° and occurred after 145 ± 131 movements ., As predicted by the model , the aftereffect was significantly larger than peak drift for the AT and NoAT groups ( Figure 4D; bootstrap: p = 0.002 and p < 0.001 , respectively ) .
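A minimal sketch of how such a bootstrap estimate of peak drift could be implemented is shown below. The 10-movement bin size follows the text; resampling participants with replacement and the synthetic input data are assumptions made for illustration.

```python
import numpy as np

def bootstrap_peak_drift(errors, bin_size=10, n_boot=1000, rng=None):
    """Bootstrap estimate of peak drift from an (n_participants x n_trials)
    array of angular errors. Participants are resampled with replacement
    (an assumption about the procedure), the group-mean time series is
    binned into bin_size-trial bins, and the bin with the largest mean
    absolute error is taken as the peak."""
    rng = np.random.default_rng(rng)
    n_sub, n_trials = errors.shape
    n_bins = n_trials // bin_size
    peaks = np.empty(n_boot)
    peak_bins = np.empty(n_boot, dtype=int)
    for b in range(n_boot):
        sample = errors[rng.integers(0, n_sub, n_sub)]        # resample participants
        mean_ts = sample.mean(axis=0)[: n_bins * bin_size]
        binned = mean_ts.reshape(n_bins, bin_size).mean(axis=1)
        peak_bins[b] = np.argmax(np.abs(binned))              # bin with largest error
        peaks[b] = binned[peak_bins[b]]
    return peaks.mean(), peaks.std(), peak_bins.mean() * bin_size

# Example with synthetic data: 19 participants, 400 rotation trials.
print(bootstrap_peak_drift(np.random.default_rng(0).normal(5, 8, (19, 400))))
```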
The difference between the degree of peak drift and aftereffect in the DAT group was not reliable ., It is important to emphasize that estimates of the time of peak drift should be viewed cautiously , especially in terms of comparisons between the three groups ., These estimates have lower variance for the AT group because it was easier to detect the point of peak drift in this group compared to the DAT and NoAT groups ., Visuomotor rotation tasks are well-suited to explore how explicit cognitive strategies influence sensorimotor adaptation ., Following the approach introduced by Mazzoni and Krakauer 9 , we instructed participants to aim 45° CW in order to offset a −45° rotation ., Between groups , we manipulated the information available to support the strategy by either constantly providing an aiming target , blanking the aiming target at movement initiation , or never providing an aiming target ., In all groups , the strategy was initially effective , resulting in the rapid elimination of the rotation-induced endpoint error ., However , when the aiming target was present , participants showed a drift in the direction of the strategy , replicating the behavior observed in Mazzoni and Krakauer 9 ., This effect was markedly attenuated when the aiming target was not present , suggesting that an accurate estimate of the strategic aiming location is responsible for causing the drift ., In addition , when the drift became quite large ( as in the AT group ) , participants began to adjust their strategy to offset the implicit drift ., Mathematical models of sensorimotor adaptation have not explicitly addressed how a strategy influences learning and performance ., By formalizing the effect of strategy usage into the standard state-space model of motor learning , we can begin to evaluate qualitative hypotheses that have been offered to account for the influence of strategies on motor learning ., Mazzoni and Krakauer 9 suggested that drift reflects the interaction of the independent contribution of strategic and implicit learning processes in movement execution ., Current models of adaptation cannot be readily modified to account for this interaction ., Rather , we had to consider more substantive architectural changes ., Borrowing from engineering control theory , we used a setpoint model in which the internal model can be recalibrated around any given reach location ., The idea of a setpoint is generally implicit in most models of learning , but this component does not come into play since the regression is around zero ., However , simply making the setpoint explicit is not sufficient to capture the drift phenomenon ., The strategy must have direct feedthrough to the output equation in order to implement the explicit strategy while allowing for an internal model to implicitly learn the visuomotor rotation ., This simple setpoint model was capable of completely eliminating error on the first trial and of capturing the deterioration of performance with increased training ., Drift arises because the error signal is driven by the difference between the internal model's prediction of the aiming location and the actual endpoint feedback ., The idea that an aiming error signal is the source of drift is consistent with the conjecture of Mazzoni and Krakauer 9 ., An important observation in the current study is that , given uncertainty in the prediction of the aiming location , participants use external cues as a proxy in generating this prediction ., This hypothesis accounts for the
observation that drift was largest when the aiming target was always visible , intermediate when the aiming target was only visible at the start of the trial , and negligible when the aiming target was never visible ., The aiming target , when present , served as a proxy for predicted hand position , and helped define the error between the feedback cursor and aiming location in visual coordinates ., When the aiming target was not present , the aiming location was less well-defined in visual coordinates , and thus , the relationship between the aiming location and feedback cursor was less certain ., Under this condition , the participants' certainty of the error was reduced and adaptation based on this signal was attenuated ., Quantitatively , progress | Introduction, Results, Discussion, Methods | Visuomotor rotation tasks have proven to be a powerful tool to study adaptation of the motor system ., While adaptation in such tasks is seemingly automatic and incremental , participants may gain knowledge of the perturbation and invoke a compensatory strategy ., When provided with an explicit strategy to counteract a rotation , participants are initially very accurate , even without on-line feedback ., Surprisingly , with further testing , the angle of their reaching movements drifts in the direction of the strategy , producing an increase in endpoint errors ., This drift is attributed to the gradual adaptation of an internal model that operates independently from the strategy , even at the cost of task accuracy ., Here we identify constraints that influence this process , allowing us to explore models of the interaction between strategic and implicit changes during visuomotor adaptation ., When the adaptation phase was extended , participants eventually modified their strategy to offset the rise in endpoint errors ., Moreover , when we removed visual markers that provided external landmarks to support a strategy , the degree of drift was sharply attenuated ., These effects are accounted for by a setpoint state-space model in which a strategy is flexibly adjusted to offset performance errors arising from the implicit adaptation of an internal model ., More generally , these results suggest that strategic processes may operate in many studies of visuomotor adaptation , with participants arriving at a synergy between a strategic plan and the effects of sensorimotor adaptation .
| Motor learning has been modeled as an implicit process in which an error , signaling the difference between the predicted and actual outcome , is used to modify a model of the actor-environment interaction ., This process is assumed to operate automatically and implicitly ., However , people can employ cognitive strategies to improve performance ., It has recently been shown that when implicit and explicit processes are put in opposition , the operation of motor learning mechanisms will offset the advantages conferred by a strategy and , eventually , performance deteriorates ., We present a computational model of the interplay of these processes ., A key insight of the model is that implicit and explicit learning mechanisms operate on different error signals ., Consistent with previous models of sensorimotor adaptation , implicit learning is driven by an error reflecting the difference between the predicted and actual feedback for that movement ., In contrast , explicit learning is driven by an error based on the difference between the feedback and target location of the movement , a signal that directly reflects task performance ., Empirically , we demonstrate constraints on these two error signals ., Taken together , the modeling and empirical results suggest that the benefits of a cognitive strategy may lie hidden in many motor learning tasks . | neuroscience/behavioral neuroscience, neuroscience/cognitive neuroscience, neuroscience/motor systems, computational biology/computational neuroscience, neuroscience/experimental psychology | null
2 | journal.pcbi.1006283 | 2,018 | Unsupervised clustering of temporal patterns in high-dimensional neuronal ensembles using a novel dissimilarity measure | Precisely timed spike patterns spanning multiple neurons are a ubiquitous feature of both spontaneous and stimulus-evoked brain network activity ., Remarkably , not all patterns are generated with equal probability ., Synaptic connectivity , shaped by development and experience , favors certain spike sequences over others , limiting the portion of the network’s “state space” that is effectively visited 1 , 2 ., The structure of this permissible state space is of the greatest interest for our understanding of neural network function ., Multi-neuron temporal sequences encode information about stimulus variables 3–8 , in some cases “unrolling” non-temporally organized stimuli , such as odors , into temporal sequences 9 ., Recurrent neuronal networks can generate precise temporal sequences 10–14 , which are required for example for the generation of complex vocalization patterns like bird songs 15 ., Temporal spiking patterns may also encode sequences of occurrences or actions , as they take place , or are planned , projected , or “replayed” for memory consolidation in the hippocampus and other structures 16–25 ., Timing information between spikes of different neurons is critical for memory function , as it regulates spike timing dependent plasticity ( STDP ) of synapses , with firing of a post-synaptic neuron following the firing of a pre-synaptic neuron typically inducing synaptic potentiation , and firing in the reverse order typically inducing depotentiation 26–28 ., Thus , the consolidation of memories may rely on recurring temporal patterns of neural activity , which stabilize and modify the synaptic connections among neurons 16–22 , 29–35 ., Storing memories as sequences has the advantage that a very large number of patterns is possible , because the number of possible spike orderings grows exponentially , and different sequences can efficiently be associated to different memory items , as proposed by for instance the reservoir computing theory 36–40 ., Detecting these temporal patterns represents a major methodological challenge ., With recent advances in neuro-technology , it is now possible to record from thousands of neurons simultaneously 41 , and this number is expected to show an exponential growth in the coming years 42 ., The high dimensionality of population activity , combined with the sparsity and stochasticity of neuronal output , as well as the limited amount of time one can record from a given neuron , makes the detection of recurring temporal sequences an extremely difficult computational problem ., Many approaches to this problem are supervised , that is , they take patterns occurring concurrently with a known event , such as the delivery of a stimulus for sensory neurons or the traversal of a running track for hippocampal place fields , as a “template” and then search for repetitions of the same template in spiking activity 20 , 43 , 44 ., Other approaches construct a template by measuring latencies of each neuron’s spiking from a known event , such as the beginning of a cortical UP state 5 , 45 ., While this enables rigorous , relatively easy statistical treatment , it risks neglecting much of the structure in the spiking data , which may contain representations of other items ( e . g . remote memories , presentations of different stimuli , etc . 
) ., A more complete picture of network activity may be provided by unsupervised methods , detecting regularities , for example in the form of spiking patterns recurring more often than predicted by chance ., Unsupervised methods proposed so far typically use linear approaches , such as Principal Component Analysis ( PCA ) 24 , 46 , 47 , and cannot account for different patterns arising from permutations of spike orderings ., While approaches like frequent itemset mining and related methods 48–51 can find more patterns than the number of neurons and provide a rigorous statistical framework , they require that exact matches of the same pattern occur , which becomes less and less probable as the number of neurons grows or as the time bins become smaller ( problem of combinatorial explosion ) ., To address this problem , 52 , 53 proposed another promising unsupervised method based on spin glass Ising models that allows for approximate pattern matching while not being linearly limited in the number of patterns; this method however requires binning , and rather provides a method for classifying the binary network state vector in a small temporal neighborhood , while not dissociating rate patterns from temporal patterns ., In this paper we introduce a novel spike pattern detection method called SPOTDisClust ( Fig 1 ) ., We start from the idea that the similarities of two neural patterns can be defined by the trace that they may leave on the synaptic matrix , which in turn is determined by the pairwise cross-correlations between neural activities 26 , 27 ., The algorithm is based on constructing an epoch-to-epoch dissimilarity matrix , in which dissimilarity is defined as SPOTDis , making use of techniques from the mathematical theory of optimal transport to define , and efficiently compute , a dissimilarity between two spiking patterns 54–57 ., We then perform unsupervised clustering on the pairwise SPOTDis matrix ., SPOTDis measures the similarity of two spike patterns ( in two different epochs ) by determining the minimum transport cost of transforming their corresponding cross-correlation matrices into each other ., This amounts to computing the Earth Mover’s Distance ( EMD ) for all pairs of neurons and all pairs of epochs ( see Methods ) ., Through ground-truth simulations , we show that SPOTDisClust has many desirable properties: It can detect many more patterns than the number of neurons ( Fig 2 ) ; it can detect complex patterns that would be invisible with latency-based methods ( Figs 3 and 4 ) ; it is highly robust to noise , i . e . 
to the ‘insertion’ of noisy spikes , spike timing jitter , or fluctuations in the firing rate , and its performance grows with the inclusion of more neurons given a constant signal-to-noise ratio ( Fig 5 ) ; it can detect sequences in the presence of sparse firing ( Fig 6 ) ; and finally it is insensitive to a global or patterned scaling of the firing rates ( Fig 7 ) ., We apply SPOTDisClust to V1 Utah array data from the awake macaque monkey , and identify different visual stimulus directions using unsupervised clustering with SPOTDisClust ( Fig 8 ) ., Suppose we perform spiking measurements from an ensemble of N neurons , and we observe the spiking output of this ensemble in M separate epochs of length T samples ( in units of the time bin length ) ., Suppose that there are P distinct activity patterns that tend to reoccur in some of the M epochs ., Each pattern generates a set of normalized ( to unit mass ) cross-correlation histograms among all neurons ., Instantiations of the same pattern are different because of noise , but will have the same expectation for the cross-correlation histogram ., The normalized cross-correlation histogram is defined as $s'_{ij}(\tau) \equiv s_{ij}(\tau) \big/ \sum_{\tau=-T}^{T} s_{ij}(\tau)$ ( 1 ) if $\sum_{\tau=-T}^{T} s_{ij}(\tau) > 0$ , and $s'_{ij}(\tau) = 0$ otherwise ., Here , $s_{ij}(\tau) \equiv \sum_{t} s_i(t)\,s_j(t+\tau)$ is the cross-correlation function ( or cross-covariance ) , and $s_i(t)$ and $s_j(t)$ are the spike trains of neurons i and j ., In other words , the normalized cross-correlation histogram is simply the histogram of coincidence counts at different delays τ , normalized to unit mass ., We take the N × N × ( 2T + 1 ) matrix of $s'_{ij}(\tau)$ values as a full representation of a pattern , that is , we consider two patterns to have the same temporal structure when all neuron pairs have the same expected value of $s'_{ij}(\tau)$ for each τ ., For simplicity and clarity of presentation , we have written the cross-correlation function as a discrete ( histogram ) function of time ., However , because the SPOTDis , which is introduced below , is a cross-bins dissimilarity measure and requires only that the precise delays τ at which $s'_{ij}(\tau)$ is non-zero be stored , the sampling rate can be made infinitely large ( see Methods ) ., In other words , the SPOTDis computation does not entail any loss of timing precision beyond the sampling rate at which the spikes are recorded ., The SPOTDisClust method contains two steps ( Fig 1 ) , which are illustrated for five example patterns ( Fig 1A and 1B ) ., The first step is to construct the SPOTDis dissimilarity measure between all pairs of epochs on the matrix of cross-correlations among all neuron pairs ., The second step is to perform clustering on the SPOTDis dissimilarity measure using an unsupervised clustering algorithm that operates on a dissimilarity matrix ., Many algorithms are available for unsupervised clustering on pairwise dissimilarity matrices ., One family of unsupervised clustering methods comprises so-called density clustering algorithms , including DBSCAN , HDBSCAN or density peak clustering ., Here , we use the HDBSCAN unsupervised clustering method 58–61 ( see Methods ) ., To examine the separability of the clusters in a low dimensional 2-D embedding , we employ the t-SNE projection method 62 , 63 ( see Methods ) ., The SPOTDis measure is thus constructed as follows: for each pair of epochs , the EMD is computed between the normalized cross-correlation histograms of every neuron pair that fired in both epochs , and the SPOTDis is the average of these EMDs over neuron pairs ( Eq 2 ) .
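As an illustration of the pairwise computation just outlined, the following Python sketch averages the one-dimensional EMD of cross-correlation delays over neuron pairs, using SciPy's Wasserstein distance as the binless EMD. It is a simplified reading of Eq 2, not the authors' implementation; in particular, any normalization of delays by the epoch length is omitted here.

```python
import numpy as np
from itertools import combinations
from scipy.stats import wasserstein_distance

def pair_delays(spikes_i, spikes_j):
    """All pairwise delays t_j - t_i between two spike trains (in samples)."""
    if len(spikes_i) == 0 or len(spikes_j) == 0:
        return None
    return (spikes_j[None, :] - spikes_i[:, None]).ravel()

def spotdis(epoch_a, epoch_b):
    """SPOTDis between two epochs, each a list of per-neuron spike-time arrays.
    The 1-D EMD between the (implicitly unit-mass) delay distributions of each
    neuron pair is averaged over all pairs for which both epochs contain
    spikes for both neurons -- a sketch of Eq 2."""
    emds, n_pairs = 0.0, 0
    for i, j in combinations(range(len(epoch_a)), 2):
        d_a = pair_delays(epoch_a[i], epoch_a[j])
        d_b = pair_delays(epoch_b[i], epoch_b[j])
        if d_a is None or d_b is None:
            continue  # a neuron is silent in one epoch: this pair is skipped
        emds += wasserstein_distance(d_a, d_b)   # binless 1-D EMD
        n_pairs += 1
    return emds / n_pairs if n_pairs else np.nan

# Toy example: two epochs, three neurons, spike times in samples.
ep1 = [np.array([10, 50]), np.array([60, 70]), np.array([120])]
ep2 = [np.array([15, 55]), np.array([65]), np.array([125, 130])]
print(spotdis(ep1, ep2))
```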
To test the SPOTDisClust method for cases in which the ground truth is known , we generated P input patterns in epochs of length T = Tepoch = 300 samples , defined by the instantaneous rate of inhomogeneous Poisson processes , and then generated spiking outputs according to these ( Fig 1A and 1B ) ( see Methods ) ., Because the SPOTDis is a binless measure , in the sense that it does not require any binning beyond the sampling frequency , the epochs could for example represent spike series of 3 s with a sampling rate of 100 Hz , or spike series of 300 ms with a sampling rate of 1000 Hz ., Each input pattern was constructed such that it had a baseline firing rate and a pulse activation firing rate , defined as the expected number of spikes per sample ., The pulse activation period ( with duration Tpulse samples ) is the period in the epoch in which the neuron is more active than during the baseline , and the positions of the pulses across neurons define the pattern ., For each neuron and pattern , the position of the pulse activation period was randomly chosen ., We generated M/ ( 2P ) realizations for each of the P patterns , and a matching number of M/2 noise epochs ( i . e . 50 percent of epochs were noise epochs ) ., We performed simulations for two types of noise epochs ( S3 Fig ) ., First , noise was generated with random firing according to a homogeneous Poisson process with a constant rate ( see Fig 1 ) ., We refer to this noise , throughout the text , as “homogeneous noise” ., For the second type of noise , each noise epoch comprised a single instantiation of a unique pattern , with randomly chosen positions of the pulse activation periods ., We refer to this noise as “patterned noise” ., For both types of noise patterns , the expected number of spikes in the noise epoch was the same as during an epoch in which one of the P patterns was realized ., The second type of noise also had the same inter-spike interval statistics for each neuron as the patterns ., Importantly , because SPOTDisClust uses only the relative timing of spiking among neurons , rather than the timing of spiking relative to the epoch onset , the exact onset of the epoch does not have to be known with SPOTDis; even though the exact onset of the pattern is known in the simulations presented here , this knowledge was not used in any way for the clustering .
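A minimal sketch of how such ground-truth epochs might be generated is given below. The baseline and pulse rates are illustrative choices, and drawing at most one spike per sample (Bernoulli) is an assumption that approximates an inhomogeneous Poisson process at these low rates.

```python
import numpy as np

def make_pattern(n_neurons=100, T=300, T_pulse=30, rng=None):
    """Random pulse positions defining one pattern: for each neuron, the
    epoch has a baseline rate plus an elevated rate inside a randomly
    placed activation pulse (the per-sample rates are illustrative)."""
    rng = np.random.default_rng(rng)
    starts = rng.integers(0, T - T_pulse, n_neurons)
    rate = np.full((n_neurons, T), 0.005)          # baseline rate per sample
    for n, s in enumerate(starts):
        rate[n, s:s + T_pulse] = 0.05              # pulse activation rate
    return rate

def realize(rate, rng=None):
    """One noisy realization: Bernoulli spikes per sample approximate an
    inhomogeneous Poisson process at these low rates."""
    rng = np.random.default_rng(rng)
    return rng.random(rate.shape) < rate

pattern = make_pattern(rng=0)
epochs = [realize(pattern, rng=k) for k in range(20)]  # 20 realizations of one pattern
print(epochs[0].sum(), epochs[1].sum())
```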
Fig 1 illustrates the different steps of the algorithm for an example of P = 5 patterns ., For the purpose of illustration , we start with an example comprising five patterns that are relatively easy to spot by eye; later in the manuscript we show examples with a very low signal-to-noise ratio ( Fig 5 ) or sparse firing ( Fig 6 ) ., We find that in the 2-D t-SNE embedding , the P = 5 different patterns form separate clusters ( Fig 1E ) , and that the HDBSCAN algorithm is able to correctly identify the separate clusters ( S4 Fig ) ., We also show a simulation with many more noise epochs than cluster epochs , which shows that even in such a case the t-SNE embedding is able to separate the different clusters ( S5 Fig ) ., In S3 Fig we compare clustering with homogeneous and patterned noise ., The homogeneous noise patterns have a consistently small SPOTDis dissimilarity to each other and are detected as a separate cluster , while the patterned noise epochs have large SPOTDis dissimilarities to each other and do not form a separate cluster , but spread out rather uniformly through the low-dimensional t-SNE embedding ( S3 Fig ) ., A key challenge for any pattern detection algorithm is to find a larger number of patterns than the number of measurement variables , assuming that each pattern is observed several times ., This is impossible to achieve with traditional linear methods like PCA ( Principal Component Analysis ) , which do not yield more components than the number of neurons ( or channels ) , although decomposition techniques using overcomplete base sets might in principle be able to do so ., Other approaches like frequent itemset mining and related methods 48–50 require that exact matches of the same pattern occur ., Because SPOTDisClust clusters patterns based on small SPOTDis dissimilarities , it does not require exact matches of the same pattern to occur , but only that the different instantiations of the same pattern are similar enough to one another , i . e . have SPOTDis values that are small enough to separate them from other clusters and the noise ., Fig 2 shows an example where the number of patterns exceeds the number of neurons by a factor of 10 ( 500 to 50 ) ., In the 2-D t-SNE embedding , the 500 patterns form separate clusters , with the emergence of a noise cluster that has higher variance ., Consistent with the low dimensional t-SNE embedding , the HDBSCAN algorithm is able to correctly identify the separate clusters ( S4 Fig ) ., When many patterns are detectable , the geometry of the low dimensional t-SNE embedding needs to be interpreted carefully: in this case , all 500 patterns are roughly equidistant to each other; however , there does not exist a 2-D projection in which all 500 clusters are equidistant to each other; this would only occur with a triangle for P = 3 patterns ., Thus , although the low dimensional t-SNE embedding demonstrates that the clusters are well separated from each other , in the 2-D embedding nearby clusters do not necessarily have smaller SPOTDis dissimilarities than distant clusters when P is large ., Temporal patterns in neuronal data may consist not only of ordered sequences of activation , but can also have a more complex character ., As explained above , a key advantage of the SPOTDis measure is that it computes averages over the EMD , which can distinguish complex patterns beyond patterns that differ only by a measure of central tendency ., Indeed , we will demonstrate that SPOTDisClust can detect a wide variety of patterns , for which traditional methods that are based on the relative activation order ( sequence ) of neurons may not be well equipped ., We first consider a case where the patterns consist of bimodal activations within the epoch ( Fig 3A ) ., These types of activation patterns might for example be expected when rodents navigate through a maze , such that entorhinal grid cells or CA1 cells with multiple place fields are activated at multiple locations and time points 64–66 ., A special case of a bimodal activation is one where neurons have a high baseline firing rate and are “deactivated” in a certain segment of the epoch ( Fig 3B ) ., These kinds of deactivations may be important , because e . g .
spatial information about an animal's position in the medial temporal lobe 67 or visual information in retinal ganglion cells is carried not only by neuronal activations , but also by neuronal deactivations ., We find that the different patterns form well separated clusters in the low dimensional t-SNE embedding based on SPOTDis ( Fig 3A and 3B ) , and that HDBSCAN correctly identifies them ( S4 Fig ) ., Next , we consider a case where there are two coarse patterns and two fine patterns embedded within each coarse pattern , resulting in a total number of four patterns ., This example might be relevant for sequences that result from cross-frequency theta-gamma coupling , or from the sequential activation of place fields that is accompanied by theta phase sequences on a faster time scale 68 , 69 ., These kinds of patterns would be challenging for methods that rely on binning , because distinguishing the coarse and fine patterns requires coarse and fine binning , respectively ., We find that the SPOTDis allows for a correct separation of the data into four clusters corresponding to the four patterns and one noise cluster ( Fig 4A ) , and that HDBSCAN identifies them ( S4 Fig ) ., As expected , we find that the two patterns that share the same coarse structure ( but contain a different fine structure ) have smaller dissimilarities to each other in the t-SNE embedding as compared to the patterns that have a different coarse structure ., Finally , we consider a set of patterns consisting of a synchronous ( i . e . without delays ) firing of a subset of cells , with a cross-correlation function that is symmetric around the delay τ = 0 ( i . e . , correlation without delays ) ., This type of activity may arise for example in a network in which all the coupling coefficients between neurons are symmetric ., Previous methods to identify the co-activation ( without consideration of time delays ) of different neuronal assemblies relied on PCA 24 , which has the key limitation that it can identify only a small number of patterns ( smaller than the number of neurons ) ., Furthermore , while yielding orthonormal , uncorrelated components that explain the most variance in the data , PCA components do not necessarily correspond to neuronal spike patterns that form distinct and separable clusters; e . g .
a multivariate Gaussian distribution can yield multiple PCA components that correspond to orthogonal axes explaining most of the data variance ., Fig 4B shows four patterns , in which a subset of cells exhibits a correlated activation without delays ., Separate clusters emerge in the t-SNE embedding based on SPOTDis ( Fig 4B ) and are identified by HDBSCAN ( S4 Fig ) ., This demonstrates that SPOTDisClust is not only a sequence detection method in the sense that it can detect specific temporal orderings of firing , but can also be used to identify patterns in which specific groups of cells are synchronously co-active without time delays ., A major challenge for the clustering of temporal spiking patterns is the stochasticity of neuronal firing ., That is , in neural data , it is extremely unlikely to encounter , in a high dimensional space , a copy of the same pattern exactly twice , or even two instantiations that differ by only a few insertions or deletions of spikes ., Furthermore , patterns might be distinct when they span a high-dimensional neural space , even when bivariate correlations among neurons are weak and when the firing of neurons in the activation period is only slightly higher than the baseline firing around it ( see further below ) ., The robustness of a sequence detection algorithm to noise is therefore critical ., We can dissociate different aspects of “noise” in temporal spiking patterns ., A first source of noise is the stochastic fluctuation in the number of spikes during the pulse activation period and baseline firing period ., In the ground-truth simulations presented here , this fluctuation is driven by the generation of spikes according to inhomogeneous Poisson processes ., This type of noise causes differences in SPOTDis values between epochs , because of differences in the amount of mass in the pulse activation and baseline period , in combination with the normalization of the cross-correlation histogram ., In the extreme case , some neurons may not fire in a given epoch , such that all information about the temporal structure of the pattern is lost ., Such a neural “silence” might be prevalent when we search for spiking patterns on a short time scale ., We note that fluctuations in the spike count are primarily detrimental to clustering performance because there is baseline firing around the pulse activation period , in other words because “noisy” spikes are inserted at random points in time around the pulse activations ., To see this , suppose that the probability that a neuron fires at least one spike during the pulse activation period is close to one for all M epochs and all N neurons , and that the firing rate during the baseline is zero ., In this case , because SPOTDis is based on computing optimal transport between normalized cross-correlation histograms ( Eq ( 5 ) ) , the fluctuation in the spike count due to Poisson firing would not drive differences in the SPOTDis ., A second source of noise is the jitter in spike timing ., Jitter in spike timing also gives rise to fluctuations in the SPOTDis , and in the ground-truth simulations presented here , spike timing jitter is a consequence of the generation of spikes according to Poisson processes ., As explained above , because the SPOTDisClust method does not require exact matches of the observed patterns , but is a “cross-bins” dissimilarity measure , it can handle jitter in spike timing well ., Again , we can distinguish jitter in spike timing during the baseline firing , and jitter in spike timing
during the pulse activation period ., The amount of perturbation caused by spike timing jitter during the pulse activation period is a function of the pulse period duration ., We will explore the consequences of these different noise sources , namely the amount of baseline firing , the sparsity of firing , and spike timing jitter , in Figs 5 and 6 ., We define the SNR ( signal-to-noise ratio ) as the ratio of the firing rate inside the activation pulse period over the firing rate outside the activation period ., This measure of SNR reflects both the amount of firing in the pulse activation period as compared to the baseline period ( first source of noise ) , and the pulse duration as compared to the epoch duration ( second source of noise ) ., We first consider an example of 100 neurons that have a relatively low SNR ( Fig 5A ) ., It can be appreciated that different realizations from the same pattern are difficult to identify by eye , and that exact matches for the same pattern , if one were to bin the spike trains , would be highly improbable , even for a single pair of neurons ( Fig 5A ) ., Yet , in the 2-D t-SNE embedding based on SPOTDis , the different clusters form well separated “islands” ( Fig 5A ) , and the HDBSCAN clustering algorithm captures them ( S4 Fig ) ., To systematically analyze the dependence of clustering performance on the SNR , we varied the SNR by changing the firing rate inside the activation pulse period , while leaving the firing rate outside the activation period as well as the duration of the activation ( pulse ) period constant ., Thus , we varied the first aspect of noise , which is driven by spike count fluctuations ., A measure of performance was then constructed by comparing the unsupervised cluster labels rendered by HDBSCAN with the ground-truth cluster labels , using the Adjusted Rand Index ( ARI ) measure ( see Methods ) ., As expected , we find that clustering performance increases with the firing rate SNR ( Fig 5B ) ., Importantly , as the number of neurons increases , we find that the same clustering performance can be achieved with a lower SNR ( Fig 5B ) ., Thus , SPOTDisClust does not suffer from the problem of combinatorial explosion as the number of neurons that constitute the patterns increases , and , moreover , its performance improves when the number of recorded neurons is higher ., The reason underlying this behavior is that each neuron contributes to the separability of the patterns , such that a larger sample of neurons allows each individual neuron to be noisier ., This means that , in the brain , very reliable temporal patterns may span high-dimensional neural spaces , even though the bivariate correlations might appear extremely noisy; absence of evidence for temporal coding in low dimensional multi-neuron ensembles should therefore not be taken as evidence for absence of temporal coding in high dimensional multi-neuron ensembles ., We also varied the SNR by changing the pulse duration while leaving the ratio of the expected number of spikes in the activation period relative to the baseline constant ., The latter was achieved by adjusting the firing rate inside the activation period , such that the product of pulse duration with firing rate in the activation period remained constant , i . e . Tpulse · λpulse = c .
Thus , we varied the second aspect of noise , namely the amount of spike timing jitter in the pulse activation period ., We find a similar dependence of clustering performance on the firing rate SNR and the number of neurons ( Fig 5B ) ., Hence , patterns that comprise brief activation pulses of very high firing yield , given a constant product Tpulse · λpulse , clusters that are better separated than patterns comprising longer activation pulses ., We performed further simulations to study , in a more simplified one-dimensional setting , how the SPOTDis depends quantitatively on the insertion of noise spikes outside of the activation pulse periods , which further demonstrates the robustness of the SPOTDis measure to noise ( S6 Fig ) ., In addition , we performed simulations to determine the influence of spike sorting errors on the clustering performance ., In general , spike sorting errors lead to a reduction in HDBSCAN clustering performance , the extent of which depends on the type of spike sorting error ( contamination or collision ) 70 and the structure of the spike pattern ( S7 and S8 Figs ) ., This result is consistent with the dependence of the HDBSCAN clustering performance on signal-to-noise ratio and pulse duration shown in Fig 5 , as well as with the notion that contamination mixes responses across neurons , such that the number of neurons that carries unique information decreases ., We further note that , in general , SPOTDisClust provides flexibility when detecting patterns using multiple tetrodes or channels of a laminar silicon probe: a common technique employed when analyzing pairwise correlations ( e . g . noise correlations ) is to ignore pairs of neurons that were measured from the same tetrode or from nearby channels ., When the number of channels is large , this will only ignore a relatively small fraction of neuron pairs ., Because the SPOTDis measure is defined over pairs of neurons , rather than the sequential ordering of firing defined over an entire neuronal ensemble , this can be easily implemented by , in Eq 2 , letting the sum run over neuron pairs from separate electrodes ., As explained above , an extreme case of noise driven by spike count fluctuations is the absence of firing during an epoch ., If many neurons remain “silent” in a given epoch , then we can only compute the EMD for a small subset of neuron pairs ( Eq ( 2 ) ) ., Such a sparse firing scenario might be particularly challenging to latency-based methods , because the latency of cells that do not fire is not defined ., We consider a case of sparse firing in Fig 6 where the expected number of spikes per epoch is only 0.48 .
Despite the firing sparsity , the low-dimensional t-SNE embedding based on SPOTDis shows separable clusters , and HDBSCAN correctly identifies the different clusters ( Fig 6 ) ., We also performed a simulation in which patterns consist of precise spike sequences , and examined the influence of temporal jitter of the precise spike sequence , as well as the amount of noise spikes surrounding these precise spike sequences ., Up to some levels of temporal jitter and signal-to-noise ratio , HDBSCAN shows a relatively good clustering performance ( S9 Fig ) ., In general , given sparse firing , a sufficient number of neurons is needed to correctly identify the P patterns , but , in addition , the patterns should be distinct on a sufficiently large fraction of neuron pairs ., Furthermore , the lower the signal-to-noise ratio , the more neurons are needed to separate the patterns from one another ., A key aim of the SPOTDisClust methodology is to identify temporal patterns that are based on consistent temporal relationships among neurons ., However , in addition to temporal patterns , neuronal populations can also exhibit fluctuations in the firing rate that can be driven by e . g . external input or behavioral state and are superimposed on temporal patterns ., A global scaling of the firing rate , or a scaling of the firing rate for a specific assembly , should not constitute a different temporal pattern if the temporal structure of the pattern remains unaltered , i . e . when the normalized cross-correlation function has the same expected value , and should not interfere with the clustering of temporal patterns ., This is an important point for practical applications , because it might occur for instance that in specific behavioral states rates are globally scaled 71 , 72 ., In Fig 7A , we show an example where there are three different global rate scalings , as well as two temporal patterns ., The temporal patterns are , for each epoch , randomly accompanied by one of the different global rate scaling factors ., The t-SNE embedding shows that the temporal patterns form separate clusters , but that the global rate scalings do not ( Fig 7A ) ., Furthermore , HDBSCAN correctly clusters the temporal patterns , but does not find separate clusters for the different rate scalings ( Fig 7A and S4 Fig ) ., This behavior can be understood from examination of the sorted dissimilarity matrix , in which we can see that epochs with a low rate have a higher SPOTDis not only to epochs with a high rate , but also to other epochs with a low rate , which prevents them from agglomerating into a separate cluster ( Fig 7A ) ; rather , the epochs with a low rate tend to cluster at the edges of the cluster , whereas the epochs with a high rate tend to form the core of the cluster ( Fig 7A ) ., Another example of a rate scaling is one that consists of a scaling of the firing rate for one half of the neurons ( Fig 7B ) ., Again , the t-SNE embedding and HDBSCAN clustering show that rate scalings do not form separate clusters , and do not interfere with the clustering of the temporal patterns ( Fig 7B and S4 Fig ) ., We conclude that the unsupervised clustering of different temporal patterns with SPOTDisClust is not compromised by the inclusion of global rate scalings , or the scaling of the rate in a specific subset of neurons .
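Before turning to the recorded data, the sketch below shows how the clustering and embedding steps can be run on a precomputed epoch-by-epoch dissimilarity matrix, using the hdbscan package and scikit-learn's t-SNE. The matrix here is a random placeholder standing in for a SPOTDis matrix, and the parameter settings are illustrative assumptions.

```python
import numpy as np
import hdbscan                      # pip install hdbscan
from sklearn.manifold import TSNE

# D: (M x M) symmetric dissimilarity matrix between epochs, e.g. built with
# spotdis() above; here a random placeholder for illustration only.
rng = np.random.default_rng(0)
D = rng.random((80, 80))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

# Density-based clustering directly on the precomputed dissimilarity matrix;
# the label -1 marks epochs assigned to noise.
clusterer = hdbscan.HDBSCAN(metric='precomputed', min_cluster_size=5)
labels = clusterer.fit_predict(D.astype(np.float64))

# 2-D embedding of the same dissimilarities for visualization.
embedding = TSNE(metric='precomputed', init='random', perplexity=30).fit_transform(D)
print(np.unique(labels), embedding.shape)
```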
We apply the SPOTDisClust method to data collected from monkey V1 ., Simultaneous recordings were performed from 64 V1 channels using a chronically implanted Utah array ( Blackrock ) ( see Methods ) ., We presented moving bar stimuli in four cardinal directions while monkeys performed a passive fixation task ., Each stimulus bar was presented 20 times ., We then pooled all 80 trials together , and added 80 trials containing spontaneous activity ., Our aim was then to recover the separate stimulus conditions using unsupervised clustering of multi-unit data with SPOTDisClust ., The low dimensional t-SNE embedding shows four dense regions that are well separated from each other and correspond to the four stimuli , and HDBSCAN identifies these four clusters ( Fig 8 ) ., Furthermore , when we performed clustering on the firing rate vectors , and constructed a t-SNE embedding on distances between population rate vectors , we were not able to extract the different stimulus directions from the cluster labels ., This shows that the temporal clustering results are not trivially explained by rate differences across stimulus directions , but also indicates that the temporal pattern of population activity m | Introduction, Results, Discussion, Methods | Temporally ordered multi-neuron patterns likely encode information in the brain ., We introduce an unsupervised method , SPOTDisClust ( Spike Pattern Optimal Transport Dissimilarity Clustering ) , for their detection from high-dimensional neural ensembles ., SPOTDisClust measures similarity between two ensemble spike patterns by determining the minimum transport cost of transforming their corresponding normalized cross-correlation matrices into each other ( SPOTDis ) ., Then , it performs density-based clustering based on the resulting inter-pattern dissimilarity matrix ., SPOTDisClust does not require binning and can detect complex patterns ( beyond sequential activation ) even when high levels of out-of-pattern “noise” spiking are present ., Our method efficiently handles the additional information from increasingly large neuronal ensembles and can detect a number of patterns that far exceeds the number of recorded neurons ., In an application to neural ensemble data from macaque monkey V1 cortex , SPOTDisClust can identify different moving stimulus directions on the sole basis of temporal spiking patterns .
| The brain encodes information by ensembles of neurons , and recent technological developments allow researchers to simultaneously record from thousands of neurons ., Neurons exhibit spontaneous activity patterns , which are constrained by experience and development , limiting the portion of state space that is effectively visited ., Patterns of spontaneous activity may contribute to shaping the synaptic connectivity matrix and contribute to memory consolidation , and synaptic plasticity formation depends crucially on the temporal spiking order among neurons ., Hence , the unsupervised detection of spike sequences is a sine qua non for understanding how spontaneous activity contributes to memory formation ., Yet , sequence detection presents major methodological challenges like the sparsity and stochasticity of neuronal output , and its high dimensionality ., We propose a dissimilarity measure between neuronal patterns based on optimal transport theory , determining their similarity from the pairwise cross-correlation matrix , which can be taken as a proxy of the “trace” that is left on the synaptic matrix ., We then perform unsupervised clustering and visualization of patterns using density clustering on the dissimilarity matrix and low-dimensional embedding techniques ., This method does not require binning of spike times , is robust to noise , jitter and rate fluctuations , and can detect more patterns than the number of neurons . | action potentials, medicine and health sciences, engineering and technology, signal processing, applied mathematics, membrane potential, vertebrates, electrophysiology, neuroscience, animals, jitter, mammals, simulation and modeling, algorithms, primates, probability distribution, mathematics, clustering algorithms, old world monkeys, research and analysis methods, monkeys, animal cells, probability theory, macaque, cellular neuroscience, eukaryota, cell biology, physiology, neurons, signal to noise ratio, biology and life sciences, cellular types, physical sciences, amniotes, neurophysiology, organisms, modulation | null
1,053 | journal.pcbi.1003710 | 2,014 | Analysis of Graph Invariants in Functional Neocortical Circuitry Reveals Generalized Features Common to Three Areas of Sensory Cortex | Transmission and processing of information in the brain is in large part determined by the connectivity between neurons 1 ., The neocortical microcircuit hypothesis states that the neocortex is composed of repeated elements of a generalized circuit that are tweaked for specialization in each area 2 ., Supporting this hypothesis , local synaptic connectivity in the neocortex is non-random and is at least partly determined by neuron location and class 2–12 ., These rules imply that there is a probabilistic or partially stereotyped wiring diagram ., The extent to which these rules generalize across the neocortex , however , is unclear ., Analysis of neocortical microcircuit spiking activity in different brain regions has revealed common dynamical features 12–15 , suggesting that circuits may share similarities between regions ., In this study , we use the spatiotemporal correlations of firing activity between neurons to generate functional wiring diagrams 15–19 ., Modeling studies have shown a clear relationship between connectivity and neural firing 15 , 20–24 ., This suggests that we can gain insight into the underlying structure and organization of cortical circuitry by analyzing the emergent dynamics of large populations of neocortical neurons ., Here we employed high speed two-photon calcium imaging 25 to densely sample the spiking activity of up to 1126 neurons within a 1.1 mm diameter field of view , spanning multiple columns and layers in three different areas of the sensory neocortex ., We then applied post-processing algorithms to detect spatiotemporal relationships between spiking neurons and modeled this activity as wiring diagrams , or graphs 15 ., Graph theory is a useful technique to quantify network dynamics , and has been increasingly applied in the neural context to understand brain connectivity patterns 21 , 26 , 27 ., One potential approach to identify invariant features of functional wiring diagrams within and between areas of cortex is to isolate graph isomorphisms ., For example , the unlabeled graphs G1 and G2 are isomorphic when there is a relabeling of the nodes under which any two nodes u and v are connected in G1 if and only if their counterparts are connected in G2 ., However , such an analysis currently remains intractable in graphs of the sizes analyzed here , as no polynomial-time algorithm for the problem is known 28 ., Perhaps more importantly , the organizational features of connectivity that have been described to date reflect probabilistic , rather than deterministic microcircuit architectures 5 , 8 , 9 , making it unlikely that connectivity patterns in the brain are formally isomorphic ., In order to test the postulate that the organization of functional circuitry generalizes across the neocortex , we instead applied functions that are invariant to labeling of the nodes of the graph ., In other words , if A is the adjacency matrix describing graph G , we wanted to describe a function f such that f ( A ) = f ( P A Pᵀ ) , where P is an N × N permutation matrix 29 ., In the context of our study , we aimed to identify features of a neuronal circuit wiring diagram that are invariant to the particular identities of the neurons ., Thus , we characterized each neuron only by the connections it had with other neurons ., While neurons and activation patterns between animals and regions may vary in their individual details , these abstract , global characteristics of circuit structure stay constant , even following the relabeling of the neurons .
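This defining property, f ( A ) = f ( P A Pᵀ ), can be checked directly for one such label-independent function, the graph eigenvalues. The sketch below uses an arbitrary random adjacency matrix purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)   # a random directed adjacency matrix

perm = rng.permutation(n)                      # relabel the neurons
P = np.eye(n)[perm]                            # permutation matrix
A_relabelled = P @ A @ P.T

# Eigenvalues are a graph invariant: identical up to ordering under relabeling.
ev1 = np.sort_complex(np.linalg.eigvals(A))
ev2 = np.sort_complex(np.linalg.eigvals(A_relabelled))
print(np.allclose(ev1, ev2))                   # expected: True
```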
By investigating label-independent features , called graph invariants , we hoped to disregard features of the functional circuit that may be susceptible to over-fitting , and focus on features that are stable across slices and areas of the neocortex ., Many graph invariants have been previously described , such as maximum degree and MAXCUT value 29 ., Some particularly useful invariants include the graph eigenvalues and eigenvectors 29 , 30 ., We apply these analyses to functional wiring diagrams generated from imaging data from three sensory neocortical areas to test the validity of a functional analogue to a generalized circuit architecture of the neocortex ., All procedures were performed in accordance with , and approved by , the Institutional Animal Care and Use Committee at the University of Chicago ., To foster reproducibility and fast development of future work based upon these results , we have published functional graph analysis tools under an open source , GPLv3 license , available here: https://github.com/ssgrn/GraphInvariantsNeocortex ., All statistical analyses were performed with MATLAB ( MathWorks ) ., Unless otherwise noted , data are presented as mean ± SD ., All values in the text are in reference to the Pearson correlation computed with the command corrcoef ., For nonparametric distribution comparison between the three sensory areas , the Kruskal-Wallis test ( KW-test ) was implemented via the kruskalwallis function ., The nonparametric Kolmogorov-Smirnov test ( KS-test ) , noted at use , was used to compare fitted distributions to data ., Kolmogorov-Smirnov tests were implemented using the command kstest2 ., For tests of significance , a fixed p-value was used as the cutoff ., Algebraic connectivity and eigenvector centrality were computed using the MIT Toolbox for Network Analysis ( http://strategic.mit.edu/downloads.php?page=matlab_networks ) ., Graph figures were generated using the open source Python graph visualization tool NetworkX ( http://networkx.github.io/ ) ., Circular variance was computed with the MATLAB Toolbox for Circular Statistics 32 ., We compared our data to two null models: random topologies and k-nearest neighbors topologies ., Each random topology was formed by preserving the locations of neurons in a corresponding functional topology and then assigning a 0.5 probability of forming a directed edge between every pair of neurons in the field of view ., Each k-nearest neighbors topology was formed by preserving the locations of neurons in a corresponding functional topology and then forming a directed edge from neuron A to neuron B if neuron B was one of the k nearest neighbors of neuron A ., In all analyses , we used the same value of k .
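A minimal sketch of how these two null models could be constructed, using NetworkX and SciPy, is given below. Because the value of k is elided in this copy, k = 10 is a placeholder, as are the neuron positions.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def random_topology(positions, p=0.5, rng=None):
    """Random null model: neuron locations preserved, each directed edge
    present with probability p between every pair in the field of view."""
    rng = np.random.default_rng(rng)
    n = len(positions)
    G = nx.gnp_random_graph(n, p, seed=int(rng.integers(1 << 31)), directed=True)
    nx.set_node_attributes(G, dict(enumerate(map(tuple, positions))), 'pos')
    return G

def knn_topology(positions, k=10):
    """k-nearest-neighbors null model: a directed edge from each neuron to
    each of its k nearest neighbors (k = 10 is a placeholder; the value
    used in the original analyses is not given here)."""
    tree = cKDTree(positions)
    _, idx = tree.query(positions, k=k + 1)     # first hit is the point itself
    G = nx.DiGraph()
    G.add_nodes_from(range(len(positions)))
    for i, neighbors in enumerate(idx):
        G.add_edges_from((i, int(j)) for j in neighbors[1:])
    return G

pos = np.random.default_rng(1).uniform(0, 1100, (200, 2))  # microns, 1.1 mm FOV
print(random_topology(pos).number_of_edges(), knn_topology(pos).number_of_edges())
```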
To determine whether A1 , S1 , and V1 functional circuit wiring diagrams exhibited invariant features , we monitored neuronal activity in 43 slices across three regions of the mouse neocortex ( 11 of A1 , 21 of S1 , and 11 of V1 ) using high speed multi-photon calcium imaging 15 , 25 , 31 ., Spontaneous circuit activity requires intact excitatory amino acid transmission 15 , 33 and sufficient oxygenation 34 , and corresponds to UP states within the single neurons that comprise the functional circuit 15 , 35 ., Previous reports have found that spontaneous activity delineates all of the possible multi-neuronal patterns within a sampled population and that a sensory input activates only a subset of these patterns 14 , 36 ., By monitoring spontaneous activity in the imaged field of view , we hoped to maximize the number of pairwise correlations within the imaged populations ., We imaged the flow of activity through large populations of neurons ( A1: 595 ± 101 cells , S1: 704 ± 157 cells , V1: 734 ± 129 cells ) at the mesoscale in a two-dimensional circular imaging plane with a diameter of 1.1 mm that comprised multiple layers and columns with single-cell resolution ( Figure 1A ) ., We confirmed activity was not biased to any one lamina and that our sampling was uniform across our field of view , since the amount of activity observed across all circuit events did not differ between layers ( KW-test; see Methods for explanation of laminar identification ) ., Because temporal resolution of multi-photon microscopy is compromised at these spatial scales , we used the heuristically optimized path scan technique 25 ( Figure 1B ) , which allowed us to achieve fast frame rates ( frame duration 86 ± 17.7 ms ) that did not differ between regions ( KW-test ) ., We deconvolved calcium fluorescence changes of each detected neuron into spike trains ( Figure 1C ) 31 and generated rasters of spiking activity for the entire imaged population of neurons ( Figure 1D ) ., All regions of the sensory neocortex showed a common capacity for emergent , multi-neuronal patterned activity , characterized by discrete periods ( >500 ms ) of correlated action potential generation within subsets of neurons ., Circuit events were separated by periods of quiescence , and we refer to these distinct , clustered epochs of spontaneous action potentials as individual circuit events ., The start and finish of a circuit event was easily resolvable because the field of view was either quiescent , corresponding to a DOWN state in a single neuron , or was active , corresponding to an UP state in a single neuron 35 ., One circuit event lasted 1203 ± 456 ms in A1 , 1568 ± 885 ms in S1 , and 1342 ± 698 ms in V1 ., We imaged 82 total circuit events in A1 , 268 total events in S1 , and 104 total events in V1 ., Using these data , we generated graphical abstractions , or circuit topologies , corresponding to functional activity over all circuit events observed in a single field of view ., Neurons were represented as nodes in each graph ., Edges between nodes were directional and formed according to the following rule: neuron A was considered functionally connected to neuron B if neuron B fired in the subsequent frame ( Figure 1E ) ., These edges were then weighted according to how many times this single-frame-lagged correlation occurred , normalized to the number of events in that field of view .
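The one-frame-lag rule lends itself to a compact implementation. The sketch below assumes a binary raster (neurons × frames) and a list of event windows; this is one plausible reading of the procedure rather than the authors' code.

```python
import numpy as np

def functional_graph(raster, events):
    """Weighted, directed functional connectivity from a binary raster
    (n_neurons x n_frames). neuron A -> neuron B gains one count whenever
    B fires in the frame immediately after A fires; counts are normalized
    by the number of circuit events, as described above."""
    n = raster.shape[0]
    W = np.zeros((n, n))
    for start, stop in events:                   # frame indices of each event
        seg = raster[:, start:stop]
        # W[a, b] += sum over t of seg[a, t] * seg[b, t + 1]
        W += seg[:, :-1] @ seg[:, 1:].T
    return W / len(events)

raster = (np.random.default_rng(2).random((50, 600)) < 0.05).astype(float)
events = [(0, 120), (200, 340), (450, 600)]
W = functional_graph(raster, events)
print(W.shape, W.max())
```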
Thus , stronger edge weights indicated reliable , correlated spiking , whereas weaker edge weights indicated unreliable , weakly correlated spiking ( Figure 1E ) ., The resultant graphs contained a large number of edges ( median: 3.4×10⁴ functional connections , range: 4.2×10⁵ functional connections ) ., Note that although a functional relationship between neurons increases the probability of them having a synaptic connection 18 , 33 , a linear relationship between each functional edge and a synaptic connection does not exist 16 ., Rather , given our method of inference , the functional connectivity measure captured the flow of activity through the network during a circuit event ., There is an ongoing debate on whether the cortical column , which is oriented perpendicular to pia , regulates and shapes the flow of information in sensory cortices 47 ., Coronal slices allowed us to image activity patterns with near simultaneity across all lamina ., Using these data , we assessed directional flow in functional graphs by computing the angle and distance between the source and destination of directed functional connections relative to the orientation of pia ., Flow maps are plots that capture direction of circuit flow with points scattered at a radius r and angle θ about the origin ., Here , r represents the distance of the functional connection from the source to the sink , and θ represents the angle between the source and the sink ., We measured the amount of angular clustering of activity flow in sensory areas by computing the circular variance of functional connections ., The clustering of points at a particular angle indicates stereotypy of functional flow across events in a neighborhood of the functional topology ., We calculated the amount of angular clustering by computing the circular variance of the set of points ., Circular variance is defined as $V = 1 - \left| \frac{1}{n} \sum_{j=1}^{n} e^{i\theta_j} \right|$ , where the θj are the angles of the n points ., The value of the circular variance varies from 0 to 1; the lower the value , the tighter the clustering of points about a single mean angle ., In functional circuit topologies from all three areas of the sensory neocortex , flow covered the entire angular space , regardless of the pairwise distance , or radius , spanned by the functional connection ( Figure 5A ) ., We found that the spread of circular variance increased for functional connections that spanned the largest distances , most likely due to boundaries imposed by pia , internal capsule , or field of view ( Figure 5B ) ., Thus , we did not find a canonical circuit flow in spontaneous cortical activity regardless of sensory area ., The highly distributed nature of functional topologies suggested that large fields of view are necessary to fully capture invariant features of functional topology ., We sought to confirm this hypothesis by examining the spatial dependency of connectedness in functional topologies ., Connectedness in the context of an imaged field of view can be described as an aperture problem: large interlinked networks look like disjoint groups of interacting cells if viewed only in small parts , while viewing the entire network at once reveals one giant component ., For efficient computation in our graph invariant framework , we examined this problem in the following way: disjoint modules of network activity could be characterized as a weakly connected functional topology with a small algebraic connectivity ., We explored how algebraic connectivity of the functional topology was modulated by two variables: minimum weight and field of view size ., Because edge weight corresponds to the reliability of an observation of a spike correlation , thresholding minimum weight in a functional topology pruned its weaker edges .
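A minimal sketch of this thresholding analysis is shown below, computing algebraic connectivity as the second-smallest eigenvalue of the graph Laplacian. Symmetrizing the directed weight matrix first is an assumption about how the quantity was computed, since algebraic connectivity is defined for undirected graphs.

```python
import numpy as np

def algebraic_connectivity(W_sym):
    """Second-smallest eigenvalue of the graph Laplacian L = D - W."""
    L = np.diag(W_sym.sum(axis=1)) - W_sym
    return np.sort(np.linalg.eigvalsh(L))[1]

def connectivity_vs_threshold(W, thresholds):
    """Algebraic connectivity of the functional topology after pruning edges
    below each minimum weight; the directed weight matrix is symmetrized
    first (an assumption about how the quantity was computed)."""
    S = (W + W.T) / 2
    return [algebraic_connectivity(np.where(S >= t, S, 0.0)) for t in thresholds]

rng = np.random.default_rng(3)
W = rng.random((60, 60)) * (rng.random((60, 60)) < 0.2)   # sparse weighted graph
# Stricter thresholds fragment the graph, driving the value toward zero,
# which illustrates the weak connectivity described in the text.
print(connectivity_vs_threshold(W, [0.0, 0.4, 0.8]))
```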
We defined field of view size as the maximum pairwise distance between any two neurons investigated. Together, these variables represented spatial and sampling bias during experiments. We found that the algebraic connectivity of functional topologies followed similar trajectories in all three sensory areas: smaller fields of view and the exclusion of the weakest functional connections resulted in weakly connected graphs (Figure 6A). Taken together, these data suggest that one must employ large fields of view and low edge-weight thresholds to capture an independent functional circuit. Interestingly, we found a field of view size in each sensory area at which the algebraic connectivity seemed to reach capacity, or asymptote; above this distance, larger fields of view did not result in significantly increased connectivity. This finding suggested that a subsample size of less than 1.1 square mm would capture a complete functional circuit topology. To further understand the interplay between experimental field of view and the topology of the functional circuits, we specified a general model of Field of View (FOV) Error, or how well a functional topology is captured as a function of field of view size (Figure 6B). FOV error varies with the distribution of functional connections inherent to each neocortical region (Figure 2, right column). Formally, let A_ij denote the existence of a functional connection between neurons i and j, and let d_ij denote the pairwise distance between i and j. Letting d* be a pairwise distance, we computed the average FOV error over all pairwise combinations of neurons in all sensory areas as a function of d*. To achieve less than 10 percent FOV error, we found that d* must be at least 676 microns in A1, 660 microns in S1, and 583 microns in V1 (Figure 6C). This corresponds to a minimum of 430 neurons in A1, 510 neurons in S1, and 478 neurons in V1, computed from a cumulative distribution of neuronal density based on the probability distributions of pairwise distances in our fields of view (Figure 6D). In contrast, we found that less than 10 percent FOV error was achieved with just 93 microns in k-nearest-neighbors topologies, and with 884 microns (almost the entire imaging field of view) in topologies with a uniform random spatial distribution of functional connectivity (Figure 6C). In the random graphs, error dropped linearly as field of view size was increased (R² = 0.9995).
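A minimal sketch of one plausible FOV-error definition follows: the error at scale d* is taken to be the fraction of functional connections spanning more than d*, i.e., connections that a field of view of that size would miss. This reading, and the notation A_ij and d_ij above, are reconstructions, not the paper's verbatim formula; the function and argument names are illustrative.

```python
import numpy as np

def fov_error(W: np.ndarray, dist: np.ndarray, d_star: float) -> float:
    """Assumed FOV-error definition: fraction of functional connections
    (W[i, j] > 0) whose pairwise distance dist[i, j] exceeds d_star,
    i.e. connections that would be missed in a field of view of size
    d_star. This is a plausible reading, not the original formula."""
    connected = W > 0
    missed = connected & (dist > d_star)
    return missed.sum() / max(connected.sum(), 1)
```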
Thus, it appears that large FOVs result in fewer errors about the underlying functional topology, and that FOV error is lessened by a skew in the likelihood of connection toward shorter distances. All regions of the sensory neocortex showed a common capacity for spontaneous circuit activations that emerged from the underlying local synaptic connectivity [15]. Using the statistical dependencies of spiking between pairs of neurons, we generated directed and weighted functional graphs. This approach revealed a scaling relationship between A1 and S1 [15], but was unable to delineate exactly what graph features were common to both regions. In this study, we conducted an analysis of graph invariance in functional circuit topologies generated from three regions of sensory neocortex in order to extend the graph-theoretic approach toward delineating generalized rules of connectivity. The graph invariant framework allowed us to examine how circuits are similar, by considering how graph properties independent of neuronal labeling are consistent between areas. This represents a top-down approach that extracted global features of functional connectivity from large, dense sampling of neuronal activity in the neocortex. This analysis revealed multiple graph invariants that are consistent across sensory areas. The structure of neocortical functional topologies was well characterized by non-random connectivity that was not merely dependent on spatial proximity, despite the fact that the probability of functional connection peaked proximally. In all areas, distal connections were required to achieve connected graphs, reminiscent of the daisy arrangement of dense local and patchy distal neocortical connections suggested by neuronal anatomy [2, 48]. We found that functional topologies of all areas were connected, and the degree of connectivity was statistically indistinguishable between areas. Moreover, functional connections were structured even within a local circuit of the functional topology. We found that eigenvector centrality, a measure of influence in local flow, is log-normally distributed in all sensory areas, and is highly correlated with out-degree and weakly correlated with in-degree (sketched below). The size of a functional topology does not scale with the number of neurons in the field of view, revealing that circuit activity comprises structured activations of subsets of neurons. Local circuit flow comprehensively covers angular space regardless of spatial scale, which is inconsistent with a canonical flow of spontaneous activity. Finally, our analysis revealed that, given a large imaged field of view, a minimal numerical sample size was necessary to minimize the error of falsely characterizing two neurons as being independent. In summary, the invariant features revealed by this study suggest the existence of a generalized functional circuit throughout the sensory neocortex, strengthening the argument that the neocortical microcircuit hypothesis should be framed as probabilistic rules of connectivity and organization. This is not to say that label-dependent features do not play a role in mediating the structure of functional topology. For example, although connectivity is strongly biased toward spatial proximity between neurons, the k-nearest-neighbors rule and random topologies poorly recapitulated the functional topologies in the data. This indicates that other connectivity rules that are not simply dependent on spatial proximity, such as those based on cell types [11, 49], likely play an important role.
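Returning to the eigenvector centrality metric mentioned above: the sketch below computes it by power iteration and correlates it with out-degree. Whether influence is taken along outgoing or incoming edges is a convention choice; the out-edge convention used here (which naturally ties centrality to out-degree) is an assumption, not a detail recoverable from the text.

```python
import numpy as np
from scipy.stats import spearmanr

def eigenvector_centrality(W: np.ndarray, iters: int = 500,
                           tol: float = 1e-10) -> np.ndarray:
    """Power iteration x <- W x, so a neuron scores highly when it sends
    edges onto other high-scoring neurons (out-edge convention; the
    alternative convention would iterate on W.T)."""
    x = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        x_new = W @ x
        nrm = np.linalg.norm(x_new)
        if nrm == 0:
            return x
        x_new /= nrm
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def centrality_degree_correlation(W: np.ndarray) -> float:
    """Rank correlation of eigenvector centrality with out-degree."""
    rho, _ = spearmanr(eigenvector_centrality(W), W.sum(axis=1))
    return rho
```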
As another example of such label-dependent variation, we found that the distribution of eigenvector centrality, which strongly correlates with out-degree in all areas, is highest in V1, and that the ratio of the number of open sequences to closed sequences, which stays constant as a function of path length in all areas, is also highest in V1. These analyses suggest that V1 may be more feedforward than A1 and S1, a result consistent with previous studies [37, 42]. The translation of the eigenvector centrality distribution seen in Figure 4A may represent a tweaking of a generalized rule (fitting to a log-normal distribution) to optimize the circuit for a particular function (feedforwardness). In general, it is possible that the specialization of the circuit to the overall function of the cortical area is label-dependent, or dependent on emergent properties of cell phenotypes. However, although label-dependent rules of connectivity are likely present, by investigating global features of functional circuit topology that are invariant to the details of individual neurons we are able to reveal abstract structural rules present in functional wiring in a computationally efficient manner. We emphasize that our functional approach does not necessarily identify causal connectivity, but rather pairwise correlative dynamics [50]. However, we also note that there is a relationship between structure and function [18]. This relationship is likely enhanced in this study, as the high sampling density employed here should dramatically increase the likelihood that a correlation reflects a causal connection, since the likelihood of a synaptic connection increases with spatial proximity [8]. We consider the slice preparation to be an isolated system that allows us to study the local connectivity that defines cortical microcircuitry while removing the potentially confounding influence of long modulatory and long afferent inputs. This approach allowed us to maximize the imaged field of view and the corresponding numerical sample of neurons. In addition, coronal slices allowed us to examine the potential influence of laminar boundaries on functional circuitry. We found that a field of view of approximately 640 µm is necessary to correctly establish functional dependence between two neurons in the sensory neocortex. This field of view results from having a minimal numerical sampling while retaining the sufficient distal functional connections that are necessary to generate a connected graph. The necessity of distal functional connections that extend beyond layers and columns may indicate that functional circuits represent information from multiple octaves in A1 [51], whiskers in S1 [52], or a natural visual scene in V1 [53]. Our data are consistent with anatomical studies that have revealed a patchy, distributed axonal structure that has been postulated to limit signal redundancy while enabling the potential for integration of information within local populations of neurons [48, 54]. For these hypotheses to be properly evaluated, future work toward understanding the role of connectivity in cortical dynamics and behavior will require a combination of research at the in vitro and in vivo levels. Interestingly, we found that the connectedness of the topology depended not only on the size of the field of view, but also on whether the most unreliable connections were considered.
In a previous study employing a network model, we similarly found that weak connections were necessary to recapitulate experimentally observed circuit dynamics [15]. In this study, functional topologies became sparse and modular as minimum thresholds on weight were increased, likely because fewer functional connections were reliable. When only the most reliable functional connections were considered, the topologies were sparsely connected regardless of sensory area. By investigating invariant metrics without setting thresholds on how reliably active the neurons were, we did not bias ourselves toward investigating only the most reliable connections. Such a bias may lead to subsampling errors, exactly parallel to the problems that arise from using small fields of view. Since circuit topologies become highly connected with the inclusion of weak functional connections, weak connections may be necessary to provide a large dynamic range, similar to a previous study of mouse V1 [37, 42]. These data and analyses suggest that the generalized features of functional circuitry identified in this study maximize the capacity of this system to represent the sensory environment. | Introduction, Methods, Results, Discussion | Correlations in local neocortical spiking activity can provide insight into the underlying organization of cortical microcircuitry. However, identifying structure in patterned multi-neuronal spiking remains a daunting task due to the high dimensionality of the activity. Using two-photon imaging, we monitored spontaneous circuit dynamics in large, densely sampled neuronal populations within slices of mouse primary auditory, somatosensory, and visual cortex. Using the lagged correlation of spiking activity between neurons, we generated functional wiring diagrams to gain insight into the underlying neocortical circuitry. By establishing the presence of graph invariants, which are label-independent characteristics common to all circuit topologies, our study revealed organizational features that generalized across functionally distinct cortical regions. Regardless of sensory area, random and k-nearest-neighbors null graphs failed to capture the structure of experimentally derived functional circuitry. These null models indicated that despite a bias in the data toward spatially proximal functional connections, functional circuit structure is best described by non-random and occasionally distal connections. Eigenvector centrality, which quantifies the importance of a neuron in the temporal flow of circuit activity, was highly related to feedforwardness in all functional circuits. The number of nodes participating in a functional circuit did not scale with the number of neurons imaged, regardless of sensory area, indicating that circuit size is not tied to the sampling of neocortex. Local circuit flow comprehensively covered angular space regardless of the spatial scale that we tested, demonstrating that circuitry itself does not bias activity flow toward pia. Finally, analysis revealed that a minimal numerical sample size of neurons was necessary to capture at least 90 percent of functional circuit topology. These data and analyses indicated that functional circuitry exhibited rules of organization which generalized across three areas of sensory neocortex.
| Information in the brain is represented and processed by populations of interconnected neurons. However, there is a lack of a clear understanding of the structure and organization of circuit wiring, particularly at the mesoscale, which spans multiple columns and layers. In this study, we sought to evaluate whether functional circuit architecture generalizes across the neocortex, testing the existence of a functional analogue to the neocortical microcircuit hypothesis. We analyzed the correlational structure of spontaneous circuit activations in primary auditory, somatosensory, and visual neocortex to generate functional topologies. In these graphs, neurons were represented as nodes, and time-lagged firing between neurons formed directed edges. Edge weights reflected how many times the lagged firing occurred and were synonymous with the strength of the functional connection between two neurons. The presence of label-independent features, identified by investigating functional circuit topologies under a graph invariant framework, suggests that functionally distinct areas of the neocortex carry features of a generalized functional cortical circuit. Furthermore, our analyses show that the simultaneous recording of large sections of cortical circuitry is necessary to recognize these features and avoid undersampling errors. | computer and information sciences, computational techniques, network analysis, biology and life sciences, graph theory, neuroscience, research and analysis methods | null |
2,225 | journal.pcbi.1006772 | 2,019 | A component overlapping attribute clustering (COAC) algorithm for single-cell RNA sequencing data analysis and potential pathobiological implications | Single cell ribonucleic acid sequencing (scRNA-seq) offers advantages for the characterization of cell types and cell-cell heterogeneity by accounting for the dynamic gene expression of each cell, across biomedical disciplines such as immunology and cancer research [1, 2]. Recent rapid technological advances have considerably expanded the single cell analysis community, for example through The Human Cell Atlas (THCA) [3]. Single cell sequencing technology offers high-resolution, cell-specific gene expression for potentially unraveling the mechanisms of individual cells. The THCA project aims to describe each human cell by the expression level of approximately 20,000 human protein-coding genes; the representation of each cell is therefore high dimensional, and the human body has trillions of cells. Furthermore, scRNA-seq technologies have suffered from several limitations, including low mean expression levels in most genes and higher frequencies of missing data than bulk sequencing technology [4]. The development of novel computational technologies for routine analysis of scRNA-seq data is urgently needed for advancing precision medicine [5]. Approaches for inferring gene-gene relationships (e.g., regulatory networks) from large-scale scRNA-seq profiles remain limited. Traditional approaches to gene co-expression network analysis are not suitable for scRNA-seq data due to a high degree of cell-cell variability. For example, LEAP (Lag-based Expression Association for Pseudotime-series) is an R package for constructing gene co-expression networks using different time points at the single cell level [6]. The partial information decomposition (PID) algorithm aims to predict gene-gene regulatory relationships [7]. Although these computational approaches are designed to infer gene co-expression networks from scRNA-seq data, they suffer from low resolution at the single-cell or single-gene levels. In this study, we introduce a network-based approach, termed Component Overlapping Attribute Clustering (COAC), to infer novel gene-gene subnetworks in individual components (subsets of the whole component set) representing multiple cell types and cell phases of scRNA-seq data. Each gene co-expression subnetwork represents a co-expression relationship occurring in certain cells. The scoring function identifies co-expression networks by quantifying uncoordinated gene expression changes across the population of single cells. We show that gene subnetworks identified by COAC from scRNA-seq profiles are highly correlated with the survival rate of melanoma patients and with drug responses in cancer cell lines, indicating a potential pathobiological application of COAC. If broadly applied, COAC can offer a powerful tool for identifying gene-gene networks from large-scale scRNA-seq profiles in multiple diseases in the ongoing development of precision medicine. In this study, we present a novel algorithm for inferring gene-gene networks from scRNA-seq data. Specifically, a gene-gene network represents the co-expression relationship of certain components (genes), which indicates localized (cell subpopulation) co-expression from large-scale scRNA-seq profiles (Fig 1). Specifically, each gene subnetwork is represented by one or multiple feature vectors, which are learned from the scRNA-seq profile of the training set.
For the test set, each gene expression profile can be transformed into a feature value by one or several feature vectors, which measure the degree of coordination of gene co-expression. Since the feature vectors are learned from the relative expression of each gene, batch effects can be eliminated by normalization of relatively co-expressed genes (see Methods). In addition to showing that COAC can be used for batch effect elimination, we further validated COAC by illustrating three potential pathobiological applications: (1) cell type identification in two large-scale human scRNA-seq datasets (43,099 and 43,745 cells, respectively; see Methods); (2) gene subnetworks identified from melanoma patient-derived scRNA-seq data showing high correlation with the survival of melanoma patients from The Cancer Genome Atlas (TCGA); and (3) gene subnetworks identified from scRNA-seq profiles that can be used to predict drug sensitivity/resistance in cancer cell lines. We collected scRNA-seq data generated with the 10x scRNA-seq protocol [7, 8]. In total, 14,032 cells extracted from peripheral blood mononuclear cells (PBMC) of systemic lupus erythematosus (SLE) patients were used as the case group and 29,067 cells were used as the control group (see Methods). For the case group, we used 12,277 cells for the training set and the remaining 1,755 cells for the validation set. For the control group, we used 25,433 cells for the training set and 3,634 for the validation set. After filtering with average correlation and average component ratio thresholds (see Methods), we obtained 93,951 co-expression subnetworks (gene clusters with components) by COAC. We transformed these co-expression gene clusters into feature vectors. Features whose variance distribution was significantly different in the case group versus the control group were kept (see Methods). Using the t-SNE algorithm implemented in the R package tsne [9], we found that the single cells from the case group, which were retrieved directly from the patients, could be separated from the control group cells more robustly (Fig 2B) than in the original data without applying COAC (Fig 2A). Thus, the t-SNE analysis reveals that batch effects can be significantly reduced by COAC (Fig 2). We next examined whether COAC can be used for cell type identification. We collected a scRNA-seq dataset of 14,448 single cells in an IFN-β-stimulated group and 14,621 single cells in the control group [8]. To remove factors caused by the stimulation conditions or experimental batch effects, we selected 13,003 cells from the IFN-β-stimulated group and 13,158 cells from the control group as the training set to obtain homogeneous feature vectors for each cell. The remaining scRNA-seq data were used as the validation set. We generated the gene subnetworks by COAC and transformed the subnetworks into feature vectors for individual cells (see Methods). We found that cells from the IFN-β-stimulated and control groups were separated significantly by t-SNE [9] (Fig 3A). However, without applying COAC, cells from the IFN-β-stimulated and control groups were uniformly distributed in the whole space (Fig 3B), suggesting that components which separate IFN-β-stimulated cells from control cells were eliminated from the feature vector identified by COAC. We further collected a scRNA-seq dataset including a total of 43,745 cells with well-defined cell types from a previous study [10].
We built a training set (21,873 cells) and a validation set (21,872 cells) of approximately equal size. In the training set, we generated co-expression subnetworks as the feature vector by COAC. For the validation set, we grouped the total cells into five main categories as described previously [10]. Fig 3C shows that COAC-inferred subnetworks can be used to distinguish five different cell types with high accuracy (cell types for 83.05% of cells were identified correctly) in the t-SNE analysis, indicating that COAC can identify cell types from heterogeneous scRNA-seq profiles. We next inspected potential pathobiological applications of COAC in identifying possible prognostic or pharmacogenomic biomarkers in cancer. We first asked whether COAC-inferred gene co-expression subnetworks can be used as potential prognostic biomarkers in clinical samples. We identified gene subnetworks from scRNA-seq data of melanoma patients [11]. Using a feature selection pipeline, we filtered the original subnetworks according to the difference of means and variances between two different groups (e.g., malignant cells versus control cells) to prioritize top gene co-expression subnetworks (S1A Fig). We collected bulk gene expression data and clinical data for 458 melanoma patients from the TCGA website [12]. Applying COAC, we identified two gene co-expression subnetworks with the highest co-expression correlation in malignant cells compared to control cells (S1B Fig). For each subnetwork, we then calculated the co-expression correlation in bulk RNA-seq profiles of melanoma patients. Using the rank of co-expression values of melanoma patients, the top 32 patients were selected as group 1 and the bottom 32 patients were selected as group 2. The log-rank test was employed to compare the survival rates of the two groups [13]. We found that gene subnetworks identified by COAC from melanoma patient-derived scRNA-seq data can predict patient survival rate (Fig 4A and Fig 4B). KRAS is an oncogene in multiple cancer types [14], including melanoma [15]. Here we found that co-expression among KRAS, HADHB, and PSTPIP1 can significantly predict patient survival rate (P = 4.09×10⁻⁵, log-rank test, Fig 4B). Thus, regulation of KRAS-HADHB-PSTPIP1 may offer a new pathobiological pathway and potential biomarkers for predicting patient survival in melanoma. We next focused on gene co-expression subnetworks in several known melanoma-related pathways, such as the MAPK, cell-cycle, DNA damage response, and cell death pathways [16], by comparing the differences in means and variances between T cells and other cells using COAC (see Methods). For each gene co-expression subnetwork identified by COAC, we selected 32 patients who had enriched co-expression correlation and 32 patients who had lost the co-expression pattern. We found that multiple COAC-inferred gene subnetworks significantly predicted melanoma patient survival rate (Fig 4C-4F). For example, we found that BRAF-PSMB3-SNRPD2 predicted survival significantly (P = 0.0058, log-rank test, Fig 4C), revealing new potential disease pathways for BRAF melanoma. CDKN2A, encoding cyclin-dependent kinase inhibitor 2A, plays important roles in melanoma [17]. Here we found a potential regulatory subnetwork, RBM6-CDKN2A-MRPL10-MARCKSL, that is highly correlated with melanoma patients' survival rate (P = 0.019, log-rank test, Fig 4F).
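The group comparison just described (rank patients by a subnetwork's co-expression value, then compare the extremes with a log-rank test) is straightforward to reproduce. The original analysis used the R survival package; the following is an analogous sketch in Python with the lifelines package, where the function name and arguments are illustrative.

```python
import numpy as np
from lifelines.statistics import logrank_test

def compare_extreme_groups(coexpr: np.ndarray, time: np.ndarray,
                           event: np.ndarray, k: int = 32) -> float:
    """Rank patients by a subnetwork's co-expression value and compare
    survival of the top-k vs bottom-k groups with a log-rank test,
    mirroring the 32-vs-32 split described above.

    coexpr: per-patient co-expression score; time: survival times;
    event: 1 if death observed, 0 if censored."""
    order = np.argsort(coexpr)
    bottom, top = order[:k], order[-k:]
    res = logrank_test(time[top], time[bottom],
                       event_observed_A=event[top],
                       event_observed_B=event[bottom])
    return res.p_value
```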
We also identified several new potential regulatory subnetworks for TP53, which are likewise highly correlated with patient survival rate (Fig 4D and 4E). Multiple novel COAC-inferred gene co-expression subnetworks that are significantly associated with patient survival rate are provided in S2 Fig. Altogether, gene regulatory subnetworks identified by COAC can shed light on new disease mechanisms, uncovering possible functional consequences of known melanoma genes, and offer potential prognostic biomarkers in melanoma. COAC-inferred prognostic subnetworks should be further validated in multiple independent cohorts before clinical application. To examine the potential pharmacogenomic application of COAC, we collected robust multi-array (RMA) gene expression profiles and drug response data (IC50, the half-maximal inhibitory concentration) across 1,065 cell lines from the Genomics of Drug Sensitivity in Cancer (GDSC) database [18]. We selected six drugs in this study based on two criteria: (i) the highest variances of IC50 among over 1,000 cell lines, and (ii) drug targets across diverse pathways: SNX-2112 (a selective Hsp90 inhibitor), BX-912 (a PDK1 inhibitor), bleomycin (induction of DNA strand breaks), PHA-793887 (a pan-CDK inhibitor), PI-103 (a PI3K and mTOR inhibitor), and WZ3105 (also named GSK-2126458 and omipalisib, a PI3K inhibitor). We first identified gene co-expression subnetworks from melanoma patients' scRNA-seq data [11] by COAC. The COAC-inferred subnetworks, together with RMA gene expression profiles of bulk cancer cell lines, were then transformed into a matrix: each column of this matrix represents a feature vector and each row represents a cancer cell line from the GDSC database [18]. We then trained an SVM regression model using the LIBSVM [19] R package with default parameters and a linear kernel (see Methods). We defined cell lines whose IC50 was higher than 10 μM as drug-resistant cell lines (no antitumor effect), and the rest as drug-sensitive cell lines (potential antitumor effect). As shown in Fig 5A-5F, the area under the receiver operating characteristic curve (AUC) ranges from 0.728 to 0.783 across the six drugs under 10-fold cross-validation, revealing high accuracy in the prediction of drug responses by COAC-inferred gene subnetworks.
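The original model was trained with the LIBSVM R package; the sketch below is an analogous Python pipeline with scikit-learn, assuming the feature matrix is arranged cell lines × features and using the 10 μM resistance threshold stated above. Scoring the ROC with the cross-validated IC50 predictions themselves is one reasonable design choice, not necessarily the paper's exact evaluation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def drug_response_auc(X: np.ndarray, ic50: np.ndarray,
                      resist_threshold: float = 10.0) -> float:
    """Linear-kernel support vector regression on subnetwork features X
    (cell lines x features) predicting IC50, scored by how well the
    10-fold cross-validated predictions separate resistant cell lines
    (IC50 > 10 uM) from sensitive ones."""
    pred = cross_val_predict(SVR(kernel="linear"), X, ic50, cv=10)
    resistant = (ic50 > resist_threshold).astype(int)
    # higher predicted IC50 should indicate resistance, so the prediction
    # itself serves as the ranking score for the ROC
    return roc_auc_score(resistant, pred)
```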
To illustrate the underlying drug resistance mechanisms, we show two subnetworks identified by COAC, for SNX-2112 (Fig 5G) and BX-912 (Fig 5H), respectively. SNX-2112, a selective Hsp90 (encoded by HSP90B1) inhibitor, has been reported to have potential antitumor effects in preclinical studies, including melanoma [20, 21]. We found that several HSP90B1 co-expressed genes (such as CDC123, LPXN, and GPX1) in the scRNA-seq data may be involved in the resistance pathways of SNX-2112 (Fig 5G). GPX1 [22] and LPXN [23] have been reported to play crucial roles in multiple cancer types, including melanoma. BX-912, a PDK1 inhibitor, has been shown to suppress tumor growth in vitro and in vivo [24]. Fig 5H shows that several PDK1 co-expressed genes (such as TEX264, NCOA5, ANP32B, and RWDD3) may mediate the underlying mechanisms of BX-912 responses in cancer cells. NCOA5 [25] and ANP32B [26] were reported previously in various cancer types. Collectively, COAC-inferred gene co-expression subnetworks from individual patients' scRNA-seq data point to potential underlying mechanisms and new biomarkers for the assessment of drug responses in cancer cells. In this study, we proposed a network-based approach to infer gene-gene relationships from large-scale scRNA-seq data. Specifically, COAC identifies novel gene-gene co-expression in certain individual components (subsets of the whole component set) representing multiple cell types and cell phases, which can overcome the high degree of cell-cell variability in scRNA-seq data. We found that COAC reduced batch effects (Fig 2) and identified specific cell types with high accuracy (83%, Fig 3C) in two large-scale human scRNA-seq datasets. More importantly, we showed that gene co-expression subnetworks identified by COAC from scRNA-seq data were highly correlated with patient survival rates in TCGA data and with drug responses in cancer cell lines. In summary, COAC offers a powerful computational tool for the identification of gene-gene regulatory networks from scRNA-seq data, suggesting potential applications in the development of precision medicine. There are several improvements in COAC compared to traditional gene co-expression network analysis approaches applied to RNA-seq data of bulk populations. Gene co-expression subnetwork identification by COAC is nearly unsupervised, and only a few parameters need to be determined. Since gene overlap among co-expression subnetworks is allowed, the number of co-expression subnetworks is of a higher order of magnitude than the number of genes. Gene co-expression subnetworks identified by COAC can capture the underlying information of cell states or cell types. In addition, gene subnetworks identified by COAC shed light on underlying disease pathways (Fig 4) and offer potential pharmacogenomic biomarkers with well-defined molecular mechanisms (Fig 5). We acknowledge several potential limitations of the current study. First, the number of predicted gene co-expression subnetworks is huge. It remains a daunting task to select a few biologically relevant subnetworks from the large number of COAC-predicted gene subnetworks. Second, as COAC is a gene co-expression network analysis approach, subnetworks identified by COAC are not entirely independent. Thus, the features used for computing similarities among cells are not strictly orthogonal.
In the future, we may improve the accuracy of COAC by integrating human protein-protein interactome networks and additional, already known, gene-gene networks, such as pathway information [27-29]. In addition, we could further improve COAC by applying deep learning approaches [30] for large-scale scRNA-seq data analysis. In summary, we report a novel network-based tool, COAC, for gene-gene network identification from large-scale scRNA-seq data. COAC accurately identifies cell types and offers potential diagnostic and pharmacogenomic biomarkers in cancer. If broadly applied, COAC would offer a powerful tool for identifying gene-gene regulatory networks from scRNA-seq data in immunology and human diseases in the development of precision medicine. In COAC, a subnetwork is represented by the eigenvectors of its adjacency correlation matrix. In practice, the gene regulatory relationships represented by each subnetwork are not always unique. Those that occur in each subnetwork represent a superposition of two or several regulatory relationships, where each has a weight in the gene subnetworks shown in S3A Fig. We thereby used multiple components (i.e., top eigenvectors with large eigenvalues) to represent the co-expression subnetworks. As shown in S3B Fig, a regulatory relationship between two genes can be captured in different co-expression subnetworks. Here, we integrated matrix factorization [31] into the workflow of closed frequent pattern mining [32]. Specifically, the set of closed frequent patterns contains the complete itemset information of the corresponding frequent patterns [32]. A closed frequent pattern is defined such that, if two itemsets appear in the same samples, only the super one is kept. For a general gene expression matrix, to obtain a sparse distribution of genes in each latent variable, a matrix factorization method such as sparse principal component analysis (PCA) [33] can be chosen. In this study, because the scRNA-seq data matrix is highly sparse, singular value decomposition (SVD) was chosen for matrix factorization (i.e., the SVD of A is given by $A = U\sigma V^*$). The robust rank r is defined in S1 Text. Components whose singular values exceed the robust-rank threshold are selected, and each attribute is then treated as a linearly weighted sum of components, $D_i = w_{i1}P_1 + w_{i2}P_2 + w_{i3}P_3 + \dots + w_{ir}P_r$. The projection of gene distribution i onto principal component j can be expressed as $\frac{D_i^t P_j}{\|D_i\|\,\|P_j\|}$, where $\|P_j\| = 1$. Then

$$D(i, j) = \frac{D_i^t P_j}{\|D_i\|\,\|P_j\|} = \frac{D_i^t P_j}{\|D_i\|} = \frac{w_{ij}}{\|D_i\|}, \qquad -1 < D(i, j) < 1.$$

The projection of each attribute distribution over each principal component distribution is illustrated in S4A Fig.
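A minimal numpy sketch of the projection D(i, j) is given below, assuming the expression matrix is arranged genes × cells and that the robust rank r has already been chosen; the function name is illustrative.

```python
import numpy as np

def gene_component_projections(X: np.ndarray, r: int) -> np.ndarray:
    """D(i, j): cosine of gene i's distribution against principal
    component j, for the top-r SVD components of the expression matrix X
    (genes x cells). Values lie in (-1, 1), matching the text above."""
    # rows of X are the gene distributions D_i
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:r]                           # top-r unit-norm components P_j
    W = X @ P.T                          # w_ij = D_i^t P_j
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return W / np.maximum(norms, 1e-12)  # D(i, j) = w_ij / ||D_i||
```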
In practice, single cell data are always sparse. For a component j, most elements in the collection $D(i,j)|_j$ are zero. Several thresholds are determined by the F-distribution. For a component j, let the mean and variance of the collection $D(i,j)|_j$ be m and s². Then the F-statistic with 1 and N−1 degrees of freedom (N is the number of attributes) is

$$F_{(1, N-1)}(x) = \frac{(x - m)^2}{s^2}. \quad (1)$$

The P-value for an element x in the collection $D(i,j)|_j$ is the extreme upper-tail probability of this F-distribution. By thresholding, the collection $D(i,j)|_j$ is divided into two groups; in one group, the P-value of every element must lie below a pre-defined threshold. The detailed process for obtaining the thresholds is described in S1 Text. Here, the P-value cutoff for the F-distribution ranges from 0.01 to 0.05. Subsequently, we defined the mapping rule using these thresholds:

$$\begin{cases} 1 & \text{if } \mathrm{threshold}_{P_j} < \frac{D_x^t P_j}{\|D_x\|} < 1 & \text{(Gain)} \\ 0 & \text{if } \mathrm{threshold}_{N_j} < \frac{D_x^t P_j}{\|D_x\|} < \mathrm{threshold}_{P_j} & \text{(Non-effect)} \\ -1 & \text{if } -1 < \frac{D_x^t P_j}{\|D_x\|} < \mathrm{threshold}_{N_j} & \text{(Loss)} \end{cases} \quad (2)$$

The pipeline is shown in S4B and S4C Fig. In the (1/0) sparse matrix, each row represents a component while each column represents an attribute (gene). An association rule consists of (i) an attribute (gene) collection and (ii) a component collection. The position in the binary distribution matrix of any pair in the Cartesian product of the two collections is always 1; this is shown in S4D and S4E Fig. For each association rule, the attribute collection should have a maximal component collection. For example, among the association rules {X Y Z} {M}, {X Y} {M}, and {X Y} {M N}, only the maximal {X Y} {M N} is allowed. The closed association rule states that, if two rules have the same component collection, only the maximal attribute collection is preserved. For the association rules {X Y Z} {M N}, {X Y} {M N}, {Y Z} {M N}, and {X Z} {M N}, which share the component collection {M, N}, only the maximal {X Y Z} {M N} is kept, whereas the others are removed. The process of efficiently enumerating all significant association rules (gene subnetworks) is described in S1 Text. The subnetwork and the gene distribution over selected components are obtained directly by applying the association rule, and the gene subnetwork is taken as the largest connected component (graph) of the co-expression network of scRNA-seq profiles. Finally, two metrics are introduced for filtering. The average correlation among genes in each subnetwork is a measure of the homogeneity of genes over the selected components. The average component ratio denotes the average of how much of the whole component space is occupied by the selected components:

$$\mathrm{AverageCorrelation} = \frac{1}{n(n-1)} \sum_{i, j \in \{X, Y, Z\},\; i \neq j} \mathrm{Correlation}(A_i, A_j)\big|_{M, N} \quad (3)$$

$$\mathrm{ComponentRatio\ of\ } A_i = \frac{\|A_i\|^2 \big|_{\mathrm{selected\ components}}}{\|A_i\|^2} \quad (4)$$

$$\mathrm{AverageComponentRatio} = \frac{1}{N} \sum_i \mathrm{ComponentRatio\ of\ } A_i \quad (5)$$

(where $A_i$ belongs to the attribute collection of a closed association rule). The processes for obtaining the average correlation and the average component ratio are provided in S1 Text. The final largest-connected-component subnetwork is represented by several eigenvectors with large eigenvalues, which are calculated from the correlation matrix. These eigenvectors are used to map each record of the gene expression profile into individual numerical values (feature vectors):

$$\mathrm{Featurevector} = \frac{S F^t}{\|S\|_2}, \quad \|F\|_2 = 1, \quad (6)$$

where S is the gene expression vector for each cell, and F is the first eigenvector of the component matrix.
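A minimal sketch of this feature mapping follows, covering Eq 6 together with the multi-component attenuation defined in Eq 7 just below; the helper name `feature_value` is illustrative, but the eigen-decomposition of the subnetwork correlation matrix and the σ_k/σ_1 attenuation follow directly from the text.

```python
import numpy as np

def feature_value(S: np.ndarray, corr: np.ndarray, n_comp: int = 1) -> float:
    """Map one cell's expression vector S over a subnetwork's genes to a
    scalar feature: projections onto the correlation-matrix eigenvectors,
    attenuated by eigenvalue ratios sigma_k / sigma_1 (Eqs 6 and 7).
    With n_comp = 1 this reduces to Eq 6."""
    vals, vecs = np.linalg.eigh(corr)        # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    Sn = S / max(np.linalg.norm(S), 1e-12)   # divide by ||S||_2
    return float(sum((vals[k] / vals[0]) * (Sn @ vecs[:, k])
                     for k in range(n_comp)))
```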
If several principal components exist, the feature value becomes the sum of the components multiplied by attenuation coefficients:

$$\mathrm{Featurevector} = \frac{S F_1^t}{\|S\|_2} + \frac{\sigma_2}{\sigma_1}\frac{S F_2^t}{\|S\|_2} + \frac{\sigma_3}{\sigma_1}\frac{S F_3^t}{\|S\|_2} + \cdots, \quad \|F_1\|_2 = \|F_2\|_2 = \cdots = 1, \quad (7)$$

where σ1, σ2, σ3, …, σv are the eigenvalues of the gene cluster (subnetwork) correlation matrix, and $F_1^t, F_2^t, \dots$ are its eigenvectors. The purpose of cell type alignment was to label the cell type of each cell under different conditions. Cell types with the same labels under each condition were then clustered. Subsequently, differential expression analyses were performed across the conditions of each cell type. Finally, surrogate variable analyses [34] were performed to remove batch effects. We used the limma [35] method (S5B Fig) for the differential expression analysis of cell types under different conditions. The scRNA-seq data (GEO accession ID: GSE96583) used to test batch effect elimination were collected from peripheral blood mononuclear cells (PBMC) of SLE patients [7, 8]. In total, 14,032 cells with 13 aligned PBMC subpopulations under resting and interferon-β (IFN-β)-stimulated conditions were collected [8]. In addition, we collected 29,067 cells from two controls as the control group [7]. For the training dataset, the variances of the feature vectors (COAC-identified subnetworks) between the case group and the control group were calculated and regarded as differential variances. The variances of the feature vectors of the merged case and control groups were regarded as background variances. For each feature, the ratio of the differential variance to the background variance was defined as the F-score, which measures how well the feature distinguishes cells in the case group from those in the control group. The F-score distribution for the 93,951 features is described in S6 Fig. Using a critical point of 2.4 as a threshold (S6 Fig), 8,331 features with F-scores above the threshold were kept.
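The F-score definition above leaves some room for interpretation; the sketch below takes the "differential variance" to be the between-group variance of a feature and the "background variance" to be its variance over the merged population, which is one plausible reading rather than the paper's exact formula. Names are illustrative.

```python
import numpy as np

def f_score(feat_case: np.ndarray, feat_ctrl: np.ndarray) -> float:
    """Assumed F-score: between-group variance of a feature relative to
    its variance over the merged case+control population. Inputs are 1D
    arrays holding one feature's value for each cell in a group."""
    merged = np.concatenate([feat_case, feat_ctrl])
    differential = np.var([feat_case.mean(), feat_ctrl.mean()])
    background = np.var(merged)
    return differential / max(background, 1e-12)

# keep features whose F-score exceeds the critical point, e.g. 2.4
```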
For comparison, we used the 2,657 genes previously employed as biomarkers as the feature vector [8]. The scRNA-seq data of mouse kidney with well-annotated cell types were collected from a previous study [10]. Following the stringent quality controls described previously [10], a total of 43,745 cells selected from the original 57,979 cells were used in this study. The entire dataset was randomly divided into a training set (21,873 cells) and a test set (21,872 cells). Details of the prediction model construction can be found in the cell type alignment pipeline (S5 Fig). For validation, cell type was predicted using the trained model. For each cell, scores for the cell types were calculated. All cells were then plotted with the t-SNE algorithm [9]. The results of cell type prediction were displayed in a confusion matrix. We collected the melanoma patients' scRNA-seq data with well-annotated cell types from a previous study [11]. The bulk RNA-seq data and clinical profiles for melanoma patients were collected from the TCGA website [13]. Gene expression values in the scRNA-seq dataset were transformed as log(TPM_ij + 1), where TPM_ij refers to the transcripts-per-million (TPM) of gene i in cell j. Gene expression values in the bulk RNA-seq dataset were transformed in the same way. The subnetwork list was obtained from the melanoma scRNA-seq dataset [11] by COAC. The subnetworks were then transformed into feature vectors. Two top subnetworks with the highest co-expression correlation in the melanoma cell type and one top subnetwork with the highest co-expression correlation in T cells were evaluated. The co-expression values were calculated with the RNA-seq gene expression of melanoma patients from TCGA [13]. Survival analysis was conducted using the R survival package [36]. We downloaded drug response data (defined by IC50 values) and bulk gene expression profiles of cancer cell lines from the GDSC database [18]. The component co-expression subnetworks were identified from the melanoma patients' scRNA-seq data with well-annotated cell types from a previous study [11]. For the scRNA-seq data, genes with a ratio of expressed cells less than 0.03 were removed. Here, we kept the top 0.1-0.01 percent of subnetworks with the highest correlation as feature vectors. We predicted each drug's IC50 value with the LIBSVM [19] R package using default parameters and a linear kernel. The ROC curves for the drug response results were plotted using an R package.
| Introduction, Results, Discussion, Methods and materials | Recent advances in next-generation sequencing and computational technologies have enabled routine analysis of large-scale single-cell ribonucleic acid sequencing (scRNA-seq) data. However, scRNA-seq technologies have suffered from several technical challenges, including low mean expression levels in most genes and higher frequencies of missing data than bulk population sequencing technologies. Identifying functional gene sets and their regulatory networks that link specific cell types to human diseases and therapeutics from scRNA-seq profiles is a daunting task. In this study, we developed a Component Overlapping Attribute Clustering (COAC) algorithm to perform localized (cell subpopulation) gene co-expression network analysis from large-scale scRNA-seq profiles. Gene subnetworks that represent specific gene co-expression patterns are inferred from the components of a decomposed matrix of scRNA-seq profiles. We showed that single-cell gene subnetworks identified by COAC from multiple time points within cell phases can be used for cell type identification with high accuracy (83%). In addition, COAC-inferred subnetworks from melanoma patients' scRNA-seq profiles are highly correlated with survival rate in The Cancer Genome Atlas (TCGA). Moreover, the localized gene subnetworks identified by COAC from individual patients' scRNA-seq data can be used as pharmacogenomic biomarkers to predict drug responses (the area under the receiver operating characteristic curve ranges from 0.728 to 0.783) in cancer cell lines from the Genomics of Drug Sensitivity in Cancer (GDSC) database. In summary, COAC offers a powerful tool to identify potential network-based diagnostic and pharmacogenomic biomarkers from large-scale scRNA-seq profiles. COAC is freely available at https://github.com/ChengF-Lab/COAC. | Single-cell RNA sequencing (scRNA-seq) can reveal complex and rare cell populations, uncover gene regulatory relationships, track the trajectories of distinct cell lineages in development, and identify cell-cell variability in human diseases and therapeutics. Although experimental methods for scRNA-seq are increasingly accessible, computational approaches for inferring gene regulatory networks from raw data remain limited. From a single-cell perspective, the stochastic features of a single cell must be properly embedded into gene regulatory networks. However, technical noise (e.g., low mean expression levels and missing data) is difficult to identify, and cell-cell variability remains poorly understood. In this study, we introduced a network-based approach, termed Component Overlapping Attribute Clustering (COAC), to infer novel gene-gene subnetworks in individual components (subsets of whole components) representing multiple cell types and phases of scRNA-seq data. We showed that COAC can reduce batch effects and identify specific cell types in two large-scale human scRNA-seq datasets. Importantly, we demonstrated that gene subnetworks identified by COAC from scRNA-seq profiles highly correlated with patients' survival and drug responses in cancer, offering a novel computational tool for advancing precision medicine.
| biotechnology, medicine and health sciences, clinical research design, engineering and technology, statistics, gene regulation, computational biology, cancers and neoplasms, biomarkers, oncology, research design, mathematics, network analysis, pharmacology, pharmacogenomics, research and analysis methods, bioengineering, computer and information sciences, mathematical and statistical techniques, gene expression, melanomas, survival analysis, biochemistry, gene regulatory networks, genetics, biology and life sciences, physical sciences, genomics, statistical methods, genomic medicine | null |
1,735 | journal.pcbi.1006029 | 2,018 | Inverse tissue mechanics of cell monolayer expansion | The body of multicellular organisms must be properly shaped in order to exert its functions, and this proper formation is based on the orchestration of cellular behaviors such as cell division, differentiation, migration, and others. One of the key processes in morphogenesis is the coordinated change in cell shapes and positions. This coordination depends on cell-generated mechanical forces that introduce stress, which induces multicellular deformation and flow [1]. Therefore, research on how the molecular components responsible for force generation and propagation, such as motor proteins and cell-cell adhesion molecules, are regulated in space and time during morphogenesis has recently attracted much attention [2]. In parallel, remarkable progress has been made in the development of technologies allowing the measurement of the generated forces and stress in living tissues [3], which represents a crucial step toward linking the underlying molecular activities with morphogenesis. Epithelial tissues represent important model systems for understanding force dynamics during morphogenesis, because their two-dimensional sheet structure facilitates the observation and analysis of the processes that occur in these tissues. In particular, many valuable insights have been obtained using cultured cell monolayers, i.e., one-cell-thick sheets of tightly connected epithelial cells [4-6]. The cells belonging to a monolayer collectively migrate to fill a cell-free surface, which replicates in vivo tissue remodeling, such as the wound repair that occurs during regeneration and the epiboly of embryonic development. When migrating, the cells exert forces on the underlying substrate to propel themselves forward; in unicellular motion, this force, known as the cell traction force, can be visualized by the displacement of fluorescent beads embedded in the substrate [7]. The simple flat-sheet structure of the monolayer allows us to apply the same technique to observe a spatio-temporal profile of the cell traction force in a wide field of view [8], and to determine where and how the force and stress are generated [9, 10]. To achieve a quantitative understanding of the resultant tissue morphogenesis, however, we also need to elucidate the other mechanical factors, i.e., the mechanical properties that describe the relation between deformation and forces.
Although several pioneering works exist [11-14], our access to these mechanical properties is still limited. The characterization of these properties often requires exogenous manipulation of the tissue to induce deformation, but the procedure itself perturbs cell physiology and interferes with tissue morphogenesis. Here, force measurement in a non-invasive manner offers a way to bypass this issue, and we can infer mechanical properties by associating spontaneous tissue deformation with the observed force dynamics. In this study, we propose a reverse-engineering method to identify the mechanical properties, based on the combination of tissue mechanics modeling and statistical machine learning. Our strategy is to represent a cell monolayer as a continuum-mechanical system [15], and to use passive and simultaneous observations of the deformation and traction force to compute the maximum likelihood estimate of the mechanical parameters. We formulated the inference as the inverse of the forward process in which the mechanical properties and the reaction force to the traction cause the tissue deformation. Our inference algorithm is based on sequential updates of estimates: using the current model state and parameters, the mechanical model predicts the traction force field, and error feedback based on the observation is then used to update the model state and parameters. Here, we applied our method to a cultured monolayer system to infer the elastic moduli from the collected tissue deformation and traction force data. To characterize the tissue deformation, we used the velocity field of tissue motion, hereafter called the tissue flow field. MDCK cells (strain II) were maintained in minimal essential medium (MEM; Invitrogen) supplemented with 10% fetal bovine serum (FBS; Equitech-Bio), GlutaMAX (Invitrogen), and 1 mM sodium pyruvate, in a 5% CO2 humidified incubator at 37 °C. Following a previously published protocol [9], 48 h before image acquisition a 3 μl drop of dense cell suspension (8 × 10⁶ cells/ml) was added to each dish containing the gel and 3 ml of medium. Then, 3 h before image acquisition, the medium was replaced by 3 ml of CO2-independent medium supplemented with 10% FBS and GlutaMAX. For myosin II inhibition, we added blebbistatin (Sigma Aldrich) at a final concentration of 25 μM after the replacement of the medium. Polyacrylamide gel substrates were prepared according to previously published protocols [8, 9]. Briefly, the gel solution was prepared with 3% acrylamide, 0.25% bisacrylamide, 0.8% ammonium persulfate, 0.08% TEMED (Bio-Rad products), and 0.01% red fluorescent carboxylate-modified beads (0.5 μm diameter, Invitrogen).
20 μl of this mixture was added to each dish and the samples were covered with glass cover slips 18 mm in diameter (Matsunami). After polymerization, the surface was coated with type I collagen (Purecol, Advanced BioMatrix) using 4 μM sulphosuccinimidyl-6-(4-azido-2-nitrophenylamino)hexanoate (Sulfo-SANPAH; Pierce). The Young's modulus of the gel was characterized by the conventional method using the Hertz equation [16], yielding E = 2500±600 Pa. Fourier-transform traction microscopy [8] was used to estimate traction force fields from bead displacement fields. Confocal imaging was conducted 48 h after seeding of the cells. We used an FV10i-LIV (Olympus) to simultaneously acquire phase contrast images of the cells and fluorescent images of the beads. The trial period lasted 6-10 h and the sampling rate was one frame per 5 min. After each trial, we removed the cells by trypsinization and imaged the strain-free pattern of the fluorescent beads. To increase the field of view, we stitched tiled images with the Grid/Collection stitching plugin in Fiji [17]. Following this, the images at different time points were aligned to match the bead configurations in a cell-free region [18]. To obtain velocity fields in the phase contrast images and bead displacement fields, we adopted an advanced optical flow technique, which tracks changes between two images by matching the patterns of intensity and its gradient [19]. S1 Movie shows a representative result of the image analysis. For the tissue flow, we used images from subsequent time points, while for the bead displacement, we compared the stress-free image with each fluorescent image. The image resolution was 0.61 μm/pixel and the grid spacing of the vector fields was 14.7 μm. Finally, the flow and force fields were down-sampled in space and time to Δx = 29.4 μm and Δt = 10 min.
We adopted a continuum model of the monolayer mechanics, in which the deformation of the tissue, or strain, determines the stress [20, 21]. The cell monolayer was represented as a two-dimensional sheet, and the stress was therefore represented as a symmetric matrix,

$$\sigma(x, y, t) \equiv \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{pmatrix} = \pi I + \tilde{\sigma}. \quad (1)$$

In the second equality, we applied the deviatoric decomposition, where the stress tensor is given as the sum of an isotropic (first term) and a distortional (second term) component. The strain tensor was also represented by a two-by-two matrix,

$$\epsilon_{ij} \equiv \frac{1}{2}\left(\partial_j u_i + \partial_i u_j\right), \quad (i, j = x, y), \quad (2)$$

where (u_x, u_y) is the displacement vector of the tissue from the stress-free state. In a linearly elastic material, the relationship between the stress and strain tensors is simply linear, meaning that stress accumulates in response to strain. However, the stress-free state of a living tissue can vary in time due to cell growth and death. Therefore, we adopted an alternative formulation using the strain rate tensor,

$$e(x, y, t) \equiv \dot{\epsilon} \quad\rightarrow\quad e_{ij} = \frac{1}{2}\left(\partial_j v_i + \partial_i v_j\right) \quad\rightarrow\quad e = \frac{1}{2}(\nabla \cdot v)\, I + \tilde{e}, \quad (3)$$

where v = (v_x, v_y) is the flow velocity vector in the tissue. In the last expression, we applied the deviatoric decomposition. Although previous works suggested that anisotropic cell division can contribute to tissue mechanics [22, 23], we modeled cell growth simply as isotropic and homogeneous expansion with rate D_g, which is partially supported by a previous report that cell division in the monolayer shows no particular orientation [24]. Since the observed total expansion of the tissue is the sum of the growth- and deformation-originated expansions, i.e., D_total = D_g + D_material, the subtraction D_total − D_g should appear in the stress-strain relation [15].
Taken together, our elastic model was written as

$$\dot{\pi}(x, y, t) = K\left(\nabla \cdot v(x, y, t) - D_g\right) + \xi(x, y, t)$$
$$\dot{\tilde{\sigma}}_{xx}(x, y, t) = 2G\,\tilde{e}_{xx}(x, y, t) + \xi_{xx}(x, y, t)$$
$$\dot{\tilde{\sigma}}_{xy}(x, y, t) = 2G\,\tilde{e}_{xy}(x, y, t) + \xi_{xy}(x, y, t), \quad (4)$$

where K and G are the in-plane bulk and shear elastic moduli, respectively (in S1 Text, we derive the relation of the in-plane moduli to the conventional three-dimensional moduli). The ξ terms are stochastic, representing Gaussian random variables that do not depend on space or time. D_g is associated with the cell division interval t_div as D_g = ln 2/t_div, and we adopted t_div = 1 division per day. We found that essentially the same results are obtained by increasing or decreasing the growth rate two-fold. The tissue stress tensor and traction force vector, in turn, were related through the force balance equation [25],

$$-T_x(x, y, t) = \frac{\partial \pi}{\partial x} + \frac{\partial \tilde{\sigma}_{xx}}{\partial x} + \frac{\partial \tilde{\sigma}_{xy}}{\partial y} + \eta_x$$
$$-T_y(x, y, t) = \frac{\partial \pi}{\partial y} - \frac{\partial \tilde{\sigma}_{xx}}{\partial y} + \frac{\partial \tilde{\sigma}_{xy}}{\partial x} + \eta_y, \quad (5)$$

where the η terms are noises in the force quantification, assumed to be normally distributed; we call them the observation noises. Here, we briefly describe the inference algorithm (the details of the derivation are given in S1 Text). Let Y and Λ represent the collected spatio-temporal fields of traction force and tissue flow, respectively, and let X represent the stress tensor field, discretized in space and time according to Y and Λ. Additionally, let θ represent the model parameters. Our aim is then to find the $\hat{\theta}$ that maximizes the log-likelihood:

$$\ln L \equiv \ln p(Y \mid \Lambda, \theta) = \ln \int p(Y \mid X, \theta)\, p(X \mid \Lambda, \theta)\, dX. \quad (6)$$

Note that p(X|Λ, θ) and p(Y|X, θ) correspond to the stress evolution and force balance equations, i.e., Eqs 4 and 5, respectively. Since the integration w.r.t. X is analytically intractable, we adopted the expectation-maximization (EM) algorithm, which maximizes a lower bound of the log-likelihood by executing the following E and M steps alternately [26]. (E-step) Estimate the stress fields by computing p(X|Y, Λ, θ*) with the Rauch-Tung-Striebel smoother [27], where θ* is the tentative estimate of the parameters. (M-step) Compute the expected complete-data log-likelihood,

$$Q(\theta) = \mathbb{E}_{p(X \mid Y, \Lambda, \theta^*)}\left[\ln p(X, Y \mid \Lambda, \theta)\right], \quad (7)$$

and update the parameters by maximizing Q(θ). Repeating the E and M steps offers a monotonic increase and convergence of the likelihood. After convergence, we obtain the maximum likelihood estimate of the parameters, $\hat{\theta}$. In order to collect the data on tissue deformation and force, we adopted a model system, the Madin-Darby canine kidney (MDCK) epithelial cell monolayer, and analyzed it using the colony expansion assay [9]. We performed phase contrast imaging to measure the flow of cells in the monolayer and, simultaneously, traction force microscopy to visualize the generated force using fluorescent beads embedded in the soft substrate. Our inference algorithm used a mechanical model of the cell monolayer, i.e., a spatio-temporal model of mechanical stress within the tissue.
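As a sketch of the E-step machinery, the following implements a generic Rauch-Tung-Striebel smoother for a linear-Gaussian state-space model. The matrices A, B, C, Q, R here are generic placeholders; in the paper's actual model the state x is the discretized stress field, the input u is the observed flow field (Eq 4), and C encodes the spatial derivatives of the force balance (Eq 5), none of which is reproduced here.

```python
import numpy as np

def rts_smoother(y, u, A, B, C, Q, R, x0, P0):
    """E-step for x_t = A x_{t-1} + B u_t + w_t (w ~ N(0, Q)),
    y_t = C x_t + v_t (v ~ N(0, R)); x0, P0 describe the state before
    the first prediction. Returns smoothed means and covariances."""
    T, n = len(y), len(x0)
    xf = np.zeros((T, n)); Pf = np.zeros((T, n, n))   # filtered
    xp = np.zeros((T, n)); Pp = np.zeros((T, n, n))   # predicted
    x, P = x0, P0
    for t in range(T):
        x = A @ x + B @ u[t]                 # predict
        P = A @ P @ A.T + Q
        xp[t], Pp[t] = x, P
        S = C @ P @ C.T + R                  # update with y[t]
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y[t] - C @ x)
        P = P - K @ C @ P
        xf[t], Pf[t] = x, P
    xs, Ps = xf.copy(), Pf.copy()            # backward smoothing pass
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps
```

The M-step would then re-estimate θ (e.g., K, G, and the noise variances) from the expected sufficient statistics of these smoothed stress fields, and the two steps would alternate until the likelihood converges.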
In order to collect the data on tissue deformation and force , we adopted a model system , Madin-Darby canine kidney ( MDCK ) epithelial cell monolayer , and analyzed it using the colony expansion assay 9 ., We performed phase contrast imaging to measure the flow of cells in the monolayer and , simultaneously , traction force microscopy , in order to visualize the generated force using fluorescent beads embedded into the soft substrate ., Our inference algorithm used a mechanical model of the cell monolayer , i . e . , a spatio-temporal model of mechanical stress within the tissue ., Our mechanical model of the cell monolayer , represented by Eq 4 ( see Materials and methods section ) , included two biophysical factors that are essential for the colony expansion: tissue elasticity and cell growth ., Elasticity is a basic property of a material , which resists the influence of an external force and shows a recoverable deformation ., Following previous studies 9 , 14 , we assumed linear elasticity , in which the deformation is proportional to the force ., Additionally , cellular growth supplies new cells into the tissue , and thereby promotes tissue expansion 28 ., Our mechanical model also included stochasticity , which represents other mechanical processes such as viscosity , plasticity , active contractile force , and others ., This model , represented by Eq 4 , had two mechanical parameters describing the elastic properties of the monolayer , the in-plane bulk modulus K and shear modulus G . The values of the bulk and shear moduli represent the resistance against area-changing and area-preserving deformation in the monolayer , respectively ., Additionally , the elastic model contains variance parameters for the strength of the stochastic effects in the stress dynamics ., Our inference algorithm , using the movie data showing tissue flow and traction force , estimated the values of these parameters ( Fig 1A and 1B and “Materials and methods” section ) ., We found that the flow speed of the tissue at the periphery was approximately 10-30 μm/h ( Fig 2A ) , the strength of the traction force was distributed around 10-100 Pa ( Fig 2B ) , and both flow speed and force strength decreased monotonically with the distance from the edge ( Fig 2A and 2B ) ., These results are consistent with the previous observations 4 , 8 , 29 ., We computed maximum likelihood estimates of the parameters in our elastic model from the collected data ., For this estimation , the model state , i . e .
, the tissue stress field , was corrected by the current traction force data at each time point in the movie sequence; following this , the model was numerically simulated , using the tissue flow data , in order to predict the traction force field at the following time point ., As a result , even though the model inference was based on the one-step prediction ( Δt = 10 min ) , we found that the estimated model can provide a long-term forecast ( >1 h ) without being corrected by the traction force data ., To quantitatively demonstrate this result , we divided each movie dataset on tissue flow and traction force into two parts in time: the earlier training data and the following test data ., Using the training data , the inference algorithm was used to estimate the model parameters and the stress field in the monolayer , and then this model was examined in terms of the forecast accuracy for future force fields by using the test data ., As a quantitative measure , we employed the correlation between the forecasted and observed force vector fields:

$$R = \frac{\langle T_{\mathrm{forecast}} \cdot T_{\mathrm{data}} \rangle}{\langle |T|_{\mathrm{forecast}} \cdot |T|_{\mathrm{data}} \rangle} , \quad (8)$$

where 〈⋅〉 represents an average over all spatial grid points ., The correlation plotted against time is represented in Fig 3A ., As shown , the forecast provided by the elastic model was highly correlated even 3 h after the initiation of the test part ., For comparison , we adopted a null-hypothetical , zero-elasticity model , where K = G = 0 ( Fig 3A ) ., In Fig 3B , the correlation in both models at the last time point in the test part , the long-term forecast accuracy , is shown ., These results demonstrate that our data-driven elastic model provided a better forecast , and its advantage is clearest for longer forecast horizons ., We also computed the difference of correlation between the models in a sample-wise manner ( Fig 3C ) , and confirmed statistically significant superiority of the elastic model compared with the zero-elasticity model ( p < 10−4 , Wilcoxon signed-rank test ) ., Representative forecasted and observed traction forces at the last time point are shown in Fig 3D ., Therefore , despite its simplicity , our data-driven elastic model captured the stress evolution in the tissue expansion through the estimated bulk ( K ) and shear ( G ) moduli .
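Given discretized force fields, the measure in Eq 8 is nearly a one-liner; a sketch (numpy), reading the denominator as the spatial average of the product of the two force magnitudes:

```python
import numpy as np

def force_correlation(t_forecast, t_data):
    """R of Eq 8 for vector fields of shape (n_grid_points, 2):
    R = <T_fc . T_obs> / <|T_fc| * |T_obs|>, where <.> is a spatial average."""
    num = np.mean(np.sum(t_forecast * t_data, axis=1))    # mean dot product
    den = np.mean(np.linalg.norm(t_forecast, axis=1)
                  * np.linalg.norm(t_data, axis=1))       # mean |T_fc|*|T_obs|
    return num / den
```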
Next , we examined if the elastic moduli are different in tissues treated with blebbistatin , a myosin inhibitor ., Previous studies showed that the inhibition of the molecular motors considerably reduces the traction force strength , which is expected ., However , this inhibition does not slow down the tissue expansion rate 6 , indicating alterations in tissue mechanical properties ., By comparing the elastic moduli estimated from a different dataset from blebbistatin-treated tissues with those estimated under the standard conditions ( Fig 4A , 4B and 4C ) , we obtained results consistent with those obtained in previous experiments ., We observed a significant decrease in the elastic moduli associated with the treatment ( Fig 4D and 4E ) ., In the standard experimental setting , both moduli were within the order of magnitude of ∼ 10^3 Pa · μm , while myosin inhibition induced a several-fold reduction in the elastic modulus values , i . e . , softening ( p < 0 . 01 , U-test ) ., Note that the estimated moduli are not guaranteed to have positive values because unmodeled monolayer mechanics , such as viscosity and anisotropic tissue growth , might affect the stress dynamics ., In fact , the estimated moduli from myosin-inhibited tissues frequently showed negative values , suggesting that the elasticity effect was no longer dominant over unmodeled effects due to the softening ., Finally , we assessed the spatial distribution of the elastic moduli ., Considering the differences in flow speed and force strength that depend on the distance from the tissue edge ( Fig 2 ) , we expected the mechanical properties to correlate with this distance ., However , as shown in Fig 5 , we found that the estimated moduli are homogeneous along the distance from the edge ., The framework we presented in this study would benefit from advances in force measurement ., For example , although traction force microscopy is applicable only to in vitro tissues , in vivo measurement techniques are being actively developed 38 , 39 ., We can apply our reverse-engineering method to such in vivo measurements by modifying the model of the observation process , Eq 5 in our case ., Additionally , another interesting direction would be to use a more elaborate model of tissue mechanics , in particular , by directly including cellular processes such as cell division 40 , 41 ., We hope that , with the advancements in the technology of force/stress measurement , our method may assist further understanding of the mechanics underlying tissue development and maintenance . | Introduction, Materials and methods, Results, Discussion | Living tissues undergo deformation during morphogenesis ., In this process , cells generate mechanical forces that drive the coordinated cell motion and shape changes ., Recent advances in experimental and theoretical techniques have enabled in situ measurement of the mechanical forces , but the characterization of mechanical properties that determine how these forces quantitatively affect tissue deformation remains challenging , and this represents a major obstacle for the complete understanding of morphogenesis ., Here , we proposed a non-invasive reverse-engineering approach for the estimation of the mechanical properties , by combining tissue mechanics modeling and statistical machine learning ., Our strategy is to model the tissue as a continuum mechanical system and to use passive observations of spontaneous tissue deformation and force fields to statistically estimate the model parameters ., This method was applied to the analysis of the collective migration of Madin-Darby canine kidney cells , and the tissue flow and force were simultaneously observed by phase contrast imaging and traction force microscopy ., We found that our monolayer elastic model , whose elastic moduli were reverse-engineered , enabled a long-term forecast of the traction force fields when given the tissue flow fields , indicating that the elasticity contributes to the evolution of the tissue stress ., Furthermore , we investigated tissues in which myosin was inhibited by blebbistatin treatment , and observed a several-fold reduction in the elastic moduli ., The obtained results validate our framework , which paves the way to the estimation of mechanical properties of living tissues during morphogenesis .
| In order to shape the body of a multicellular organism , cells generate mechanical forces and undergo deformation ., Although these forces are being increasingly determined , quantitative characterization of the relation between the deformation and forces at the tissue level remains challenging ., To estimate these properties , we developed a reverse-engineering method by combining tissue mechanics modeling and statistical machine learning , and then tested this method on a common model system , the expansion of cultured cell monolayer ., This statistically sound framework uses the passive observations of spontaneous deformation and force dynamics in tissues , and enables us to elucidate unperturbed mechanical processes underlying morphogenesis . | tissue mechanics, fluorescence imaging, mechanical properties, classical mechanics, cell cycle and cell division, cell processes, biomechanics, developmental biology, molecular motors, actin motors, materials science, damage mechanics, morphogenesis, motor proteins, research and analysis methods, contractile proteins, imaging techniques, proteins, deformation, biophysics, physics, biochemistry, cytoskeletal proteins, cell biology, myosins, biology and life sciences, physical sciences, material properties | null |
1,664 | journal.pgen.1007452 | 2,018 | Proper conditional analysis in the presence of missing data: Application to large scale meta-analysis of tobacco use phenotypes | Meta-analysis has become a critical tool for genetic association studies in human genetics ., Meta-analysis increases sample sizes , empowers association studies , and has led to many exciting discoveries in the past decade 1–5 ., Many of these genetic discoveries have informed new biology , provided novel clinical insights 6 , 7 , and led to novel therapeutic drug targets 8 , 9 ., Conditional meta-analysis has been a key component for these studies , which is useful to distinguish novel association signals from shadows of known association signals and to pinpoint causal variants ., Existing methods for conditional meta-analysis were proposed based upon the assumptions that summary association statistics from all variant sites are measured and shared ., Yet , in practice , the score statistics from contributing studies often contain missing values , possibly due to the use of different genotyping arrays , sequencing capture assays , or quality control filters by each participating cohort ., While genotype imputation is an effective approach to fill in missing genotype data for participating cohorts , many scenarios may preclude accurate genotype imputation ., For example , a targeted genotyping array/sequencing assay ( e . g . exome array ) may not provide sufficient genome-wide coverage for imputation ., In addition , it is challenging to impute low frequency variants even with the highest quality reference panels ., Imputed genotypes of low quality are often filtered out based upon the recommendations from the best practices 10 , since these variants are more prone to artefacts and can lead to inflated type I errors ., Therefore , missing data in meta-analysis of genetic association studies are unavoidable ., Some existing meta-analysis strategies can be highly biased in the presence of missing data ., First , a commonly used method for conditional analysis , COJO , can lead to biased results when contributed summary association statistics from participating studies contain missing values 11 ., The COJO method approximates the variance-covariance matrix between association statistics with the linkage disequilibrium ( LD ) information from a reference panel ., When the association statistics from contributed studies are missing at some variant sites , the correlation matrix of the meta-analysis statistics can differ greatly from the LD matrix ., Consider the simple example of a meta-analysis of two independent studies , where variant 1 is only measured in study 1 and variant 2 is only measured in study 2 ., The meta-analysis association statistics for the two variants are independent , which cannot be approximated by the LD ., COJO only uses meta-analysis results as input ., Therefore , it cannot distinguish the scenario where only study 1 measures both variants ( and study 2 measures none ) , and the scenario where study 1 only measures variant 1 and study 2 only measures variant 2 ., In the presence of missing data , COJO can be highly biased and lead to inflated type I errors ., Second , the strategy of imputing missing data from contributed association statistics and using imputed association statistics in meta-analysis can also lead to inflated type I errors in conditional analysis ., A simple imputation strategy for marginal ( or unconditional ) analysis is to replace missing summary statistics with zeros ( REPLACE0 
) , which are their expected value under the null hypothesis 2 , 3 ., This method yields valid type I errors for marginal association analysis ., Taking this simple approach for conditional analysis , however , is problematic ., The genetic variants at conditioned sites are likely to have non-zero effects ., Replacing missing summary data with zeros will bias the genetic effect estimates at conditioned variant sites , and can lead to highly inflated type I errors for conditional analysis ( see RESULTS ) ., Similarly , the methods that seek to impute missing summary statistics based upon LD ( e . g . impG 12 ) may introduce substantial biases to the effects of missing variants ., Plugging the imputed Z-score statistics into conditional analysis ( impG+meta ) can lead to inflated type I errors ., Finally , discarding studies with missing summary statistics ( DISCARD , or complete case analysis ) will give valid type I errors , but at the cost of reduced power ., In the statistics literature , synthesis methods have previously been developed to meta-analyze joint effects from different studies , where the participating studies measure different predictors 13 , 14 ., The scenario is similar to the meta-analysis of genetic association studies with missing data ., Yet , in genetic association analysis , usually only marginal effects are reported and joint effects have to be approximated from marginal effects ., The synthesis methods also lack an implementation for genetic association studies , which greatly limits their impact ., To explore the usefulness of synthesis methods , we proposed and implemented an extension of the synthesis methods termed SYN+ , which can be applied in genetic association meta-analysis ., To overcome these limitations of existing GWAS meta-analysis methods and improve power , we developed an improved conditional meta-analysis method called partial correlation based score statistic ( PCBS ) that borrows strength across multiple participating studies and consistently estimates the partial variance-covariance matrices between genotypes and phenotypes ., We conducted extensive simulations , and showed that our PCBS method has valid type I error and the highest power among all the methods ., On the other hand , COJO , impG+meta and REPLACE0 can lead to highly inflated type I errors in the presence of missing data ., SYN+ , while having valid type I errors , is consistently less powerful than PCBS , especially when the missingness is high or the conditioned variants have larger effects ., We also demonstrated the clear advantage of PCBS in the meta-analysis of the cigarettes per day phenotype ., PCBS identified many more independently associated variants from known loci , compared to alternative approaches ., We implemented the proposed methods in the open-source software tools RAREMETAL 15 and R package rareMETALS and made them publicly available ( https://genome.sph.umich.edu/wiki/Rare_Variant_Analysis_and_Meta-Analysis ) ., RAREMETAL and rareMETALS use marginal score statistics and exact variance-covariance matrices as input , which is suitable for rare variant association analysis ., We also implemented the same method in rareGWAMA
( https://github.com/dajiangliu/rareGWAMA ) , which conducts meta-analysis using an approximate covariance matrix from a reference panel ., These methods and tools have been applied and tested in a few large scale meta-analyses ., We expect these methods to play an important role in sequence-based genetic studies and lead to important genetic discoveries ., We denote the genotype for individual i at variant site j in study k as Gijk , which can take values of 0 , 1 or 2 , representing the number of the minor ( or alternative ) alleles in the locus ., When the genotypes are imputed or generated from low pass sequencing studies , genotype dosage can be used in association analysis ., In this case , Gijk will be the expected number of minor ( or alternative ) allele counts ., We denote the non-genotype covariates as Zik , which includes a vector of 1’s to incorporate the intercept in the model ., Single variant association can be analyzed in a regression model: Yk = Gjkβj + Zkγk + ek ., The score statistic for single variant association takes the form:

$$U_{jk} = \frac{1}{\hat{\sigma}_0^2} \sum_i G_{ijk}\left(Y_{ik} - \hat{y}_{ik}\right) , \quad (1)$$

where $\hat{y}_{ik} = Z_{ik}\hat{\gamma}_k$ , $\hat{\gamma}_k$ is the covariate effect , and $\hat{\sigma}_0$ is the standard deviation of the phenotype residuals estimated under the null model M0:

$$Y_k = Z_k\gamma_k + e_k , \quad e_k \sim \mathrm{MVN}\left(0 , \hat{\sigma}_0^2 I\right) . \quad (M_0)$$

Without the loss of generality , we assume that the phenotype residuals are standardized in each study , as is commonly done in practice ., So $\hat{\sigma}_0$ is often equal to 1 in practice ., We denote the vector of score statistics in a genetic region as Uk = ( U1k , … , UJk ) ., The variance-covariance matrix between score statistics is equal to

$$V_k = \frac{1}{\hat{\sigma}_0^2}\left(G_k^{\prime} G_k - G_k^{T} Z_k \left(Z_k^{T} Z_k\right)^{-1} Z_k^{T} G_k\right) . \quad (2)$$

For our illustration of the method , we focus on the analysis of continuous outcomes ., Yet , the meta-analysis and conditional meta-analysis methods work for both continuous outcomes and binary outcomes ., The meta-analysis score statistics and their covariance matrices are calculated using the Mantel-Haenszel method , i . e . U = ∑k Uk and V = ∑k Vk ., The meta-analysis statistics can be used to estimate the joint effects for variants 1 , … , J , i . e . $\hat{\beta} = V^{-1}U$ .
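For a single study with complete data, Eqs 1 and 2 can be computed directly from individual-level data; a sketch (numpy), assuming the phenotype residuals have been standardized so that sigma_0^2 = 1:

```python
import numpy as np

def score_statistics(G, Y, Z):
    """Score statistics U (Eq 1) and covariance matrix V (Eq 2) for one study.
    G: n x J genotype matrix, Y: length-n phenotype, Z: n x c covariates
    (first column all ones for the intercept)."""
    # Fit the null model M0 (covariates only) and residualize the phenotype.
    gamma_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ gamma_hat
    U = G.T @ resid                                   # Eq 1 (with sigma0^2 = 1)
    # Eq 2: G'G minus the part of G projected onto the covariate space.
    GtZ = G.T @ Z
    V = G.T @ G - GtZ @ np.linalg.solve(Z.T @ Z, GtZ.T)
    return U, V
```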
We denote the score statistics at candidate and conditioned variant sites as U = ( UG , UG* ) , where G and G* represent the genotypes from the candidate and conditioned variants respectively ., The variance-covariance matrix for U equals

$$V = \begin{pmatrix} V_G & V_{GG^*} \\ V_{G^*G} & V_{G^*} \end{pmatrix} .$$

The conditional score statistic can be calculated by

$$U_{G|G^*} = \left(U_G - V_{GG^*} V_{G^*}^{-1} U_{G^*}\right) \hat{\sigma}_0^2 / \hat{\sigma}_c^2 , \quad (3)$$

where $\hat{\sigma}_c^2$ is the residual variance estimated from the conditional analysis model

$$Y_k = G_k^* \beta_{G^*} + Z_k \gamma_k + e_k , \quad e_k \sim \mathrm{MVN}\left(0 , \hat{\sigma}_c^2 I\right) . \quad (M_c)$$

After conditioning on the genotypes G* , the residual variance equals $\hat{\sigma}_c^2 = \hat{\sigma}_0^2\left(1 - \frac{1}{N} U_{G^*}^{\prime} V_{G^*}^{-1} U_{G^*}\right)$ ., It is easy to verify that the variance of the conditional score statistics under Mc is equal to

$$V_{G|G^*} = \left(V_G - V_{GG^*} V_{G^*}^{-1} V_{G^*G}\right) \hat{\sigma}_0^2 / \hat{\sigma}_c^2 . \quad (4)$$

The single variant and gene-level tests in conditional analysis can be calculated based upon the conditional score statistics $U_{G|G^*}$ and the covariance matrix $V_{G|G^*}$ ., Details are provided in S1 Text ., Reviewing formulae ( 3 ) and ( 4 ) , we note that the conditional score statistics and their variances only depend on the partial variance-covariance matrix between the phenotypes and the genotypes after the adjustment of covariates ., The key idea underlying our approach is to derive a consistent estimator for the partial covariances in the presence of missing summary statistics and to use it for unbiased conditional analysis ., In statistics , to calculate the partial covariance between random variables Gjk and Yk adjusting for variable Zk , we first regress out covariate Zk from both Gjk and Yk , and then calculate the covariance between the residuals ., Specifically ,

$$\hat{\rho}_{G_{jk} Y_k | Z_k} = \frac{1}{N_{jk} \hat{\sigma}_0^2} G_{jk}^{\prime}\left(Y_k - Z_k \hat{\gamma}\right) . \quad (5)$$

For a given study , it is easy to check that the partial covariances are in fact scaled score statistics , i . e . , $\hat{\rho}_{G_{jk} Y_k | Z_k} = U_{jk} / N_{jk}$ and , analogously for pairs of variants , $\hat{\rho}_{G_{j_1 k} G_{j_2 k} | Z_k} = V_{j_1 j_2 k} / N_k$ ( Eqs 6 and 7 ) .
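Returning to Eqs 3 and 4 above, with U and V partitioned into candidate (G) and conditioned (G*) blocks the conditional statistics translate directly; a sketch under the same sigma_0^2 = 1 convention:

```python
import numpy as np

def conditional_score(U_g, U_c, V_g, V_gc, V_c, n, sigma0_sq=1.0):
    """Conditional score statistic (Eq 3) and its variance (Eq 4); the '_c'
    blocks refer to the conditioned variants G*."""
    Vc_inv = np.linalg.inv(V_c)
    # Residual variance under the conditional model Mc.
    sigmac_sq = sigma0_sq * (1.0 - (U_c @ Vc_inv @ U_c) / n)
    scale = sigma0_sq / sigmac_sq
    U_cond = (U_g - V_gc @ Vc_inv @ U_c) * scale          # Eq 3
    V_cond = (V_g - V_gc @ Vc_inv @ V_gc.T) * scale       # Eq 4
    return U_cond, V_cond
```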
Therefore , in meta-analysis , we propose to estimate the partial covariance between genotype Gij and phenotype Yi after adjusting the covariate effect Zi using all available summary statistics:

$$\hat{\rho}_{GY|Z , j} = \frac{\sum_{k \in \{k : M_{jk} = 1\}} U_{jk}}{\sum_{k \in \{k : M_{jk} = 1\}} N_{jk}} , \quad (8)$$

$$\hat{\rho}_{GG|Z , j_1 j_2} = \frac{\sum_{k \in \{k : M_{j_1 k} = M_{j_2 k} = 1\}} V_{j_1 j_2 k}}{\sum_{k \in \{k : M_{j_1 k} = M_{j_2 k} = 1\}} N_{jk}} . \quad (9)$$

Here Mjk is an indicator variable that takes the value of 1 when the summary statistic at variant site j is measured in study k ., For notational convenience , we define the matrices of partial covariance as $\hat{\rho}_{GY|Z} = \left(\hat{\rho}_{GY , j}\right)_{j = 1 , \ldots , J}$ and $\hat{\rho}_{GG|Z} = \left(\hat{\rho}_{GG|Z , j_1 j_2}\right)_{j_1 , j_2 = 1 , \ldots , J}$ ., Under the fixed effect model , we have $E\left(V_k^{-1} U_k\right) = \beta$ for all k ., We showed in S1 Text that $E\left(\hat{\rho}_{GG|Z}^{-1} \hat{\rho}_{GY|Z}\right) = \beta$ ., Therefore , the partial covariance matrices can be consistently estimated even in the presence of missing summary statistics ., We define partial correlation based score statistics as

$$\tilde{U}_{G|G^*} = \hat{\rho}_{GY|Z} - \hat{\rho}_{GG^*|Z}\, \hat{\rho}_{G^*G^*|Z}^{-1}\, \hat{\rho}_{G^*Y|Z} . \quad (10)$$

The covariances for $\tilde{U}_{G|G^*}$ are equal to

$$\begin{aligned} \tilde{V}_{G|G^*} = {} & \mathrm{cov}\left(\hat{\rho}_{GY|Z}\right) + \hat{\rho}_{GG^*|Z}\, \hat{\rho}_{G^*G^*|Z}^{-1}\, \mathrm{cov}\left(\hat{\rho}_{G^*Y|Z}\right) \hat{\rho}_{G^*G^*|Z}^{-1}\, \hat{\rho}_{G^*G|Z} \\ & - \hat{\rho}_{GG^*|Z}\, \hat{\rho}_{G^*G^*|Z}^{-1}\, \mathrm{cov}\left(\hat{\rho}_{G^*Y|Z} , \hat{\rho}_{GY|Z}\right) - \mathrm{cov}\left(\hat{\rho}_{GY|Z} , \hat{\rho}_{G^*Y|Z}\right) \hat{\rho}_{G^*G^*|Z}^{-1}\, \hat{\rho}_{G^*G|Z} . \end{aligned} \quad (11)$$

It is easy to verify that the conditional analysis using the estimator $\tilde{U}_{G|G^*}$ is equivalent to the standard score statistics when no missing data are present ., In the presence of missing data , the partial correlation based statistic $\tilde{U}_{G|G^*}$ remains consistent ., The conditional association analysis can be performed by replacing the standard score statistic with a partial correlation based score statistic ., Details for calculating single variant and gene-level conditional association statistics can be found in S1 Text ., When the contributed summary association statistics from participating studies contain missing values , a natural strategy is to replace the missing values using imputation ., Several imputation methods were previously developed ., One method is REPLACE0 , which is to replace the missing values by 0 ., We denote the resulting statistics as U0 and V0 ., To mathematically describe this method , we define an indicator variable Mjk , which takes value 1 if the summary statistic at site j in study k is measured and 0 if missing ., The meta-analysis score statistic is calculated by

$$U_j^0 = \sum_{k \in \{k : M_{jk} = 1\}} U_{jk} \quad \text{and} \quad V_{j_1 j_2}^0 = \sum_{k \in \{k : M_{j_1 k} = M_{j_2 k} = 1\}} V_{j_1 j_2 k} .$$

We proved in S1 Text that replacing missing summary association statistics with zero will bias the genetic effect estimate , i . e . $E\left(U_{G^*}^0\right) \neq V_{G^*}^0 \beta_{G^*}$ ., As a consequence , under the null hypothesis that the candidate variant is not associated with the phenotype , the expectation of the conditional score statistics is not equal to 0 , i . e . , $E\left(U_{G|G^*}\right) = V_{GG^*}\beta_{G^*} - V_{GG^*}^0\left(V_{G^*}^0\right)^{-1} E\left(U_{G^*}^0\right) \neq 0$ .
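The missingness-aware estimators of Eqs 8 and 9, which feed the PCBS statistic of Eq 10, only require, for each study, the score statistics, their covariance, the sample size, and a per-site missingness indicator; a sketch:

```python
import numpy as np

def pcbs_partial_cov(U_list, V_list, N_list, M_list):
    """Partial covariance estimates of Eqs 8-9. Per study k: U_k (length J),
    V_k (J x J), sample size N_k, and indicator M_k (1 = site measured).
    Assumes every site and every site pair is measured in at least one study;
    missing entries of U_k/V_k (e.g. NaN) never enter the sums."""
    J = len(M_list[0])
    num_u, den_u = np.zeros(J), np.zeros(J)
    num_v, den_v = np.zeros((J, J)), np.zeros((J, J))
    for U, V, N, M in zip(U_list, V_list, N_list, M_list):
        m = np.asarray(M, dtype=float)
        num_u += np.where(m == 1, U, 0.0)        # Eq 8 numerator
        den_u += N * m                           # Eq 8 denominator
        pair = np.outer(m, m)                    # 1 iff both sites measured
        num_v += np.where(pair == 1, V, 0.0)     # Eq 9 numerator
        den_v += N * pair                        # Eq 9 denominator
    return num_u / den_u, num_v / den_v          # rho_GY|Z , rho_GG|Z
```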
The type I error for conditional analysis can be highly inflated ., A more sophisticated set of methods is to impute missing summary statistics based upon LD information ., Yet , the genetic effect estimates based upon the imputed Z-score statistics are often biased , unless the following condition holds:

$$E\left[Z_{\mathrm{imp}}\right] = \Sigma_{\mathrm{imp , tag}}\, \Sigma_{\mathrm{tag}}^{-1}\, E\left[Z_{\mathrm{tag}}\right] ,$$

where Zimp and Ztag are Z-score statistics at the missing and tagSNP sites , and Σimp , tag and Σtag are genotype correlation matrices ., A special case for this condition is that both the tagSNP and missing variants have null effects ., Similar to REPLACE0 , applying the impG+meta method can lead to inflated type I errors ., We conducted extensive simulations to evaluate the performance of PCBS as well as 5 alternative approaches , including 1 ) impG+meta; 2 ) COJO; 3 ) REPLACE0; 4 ) DISCARD and 5 ) SYN+ using simulated data ., We simulated genetic data following a coalescent model that we previously used for evaluating rare variant association analysis methods 2 ., The model captures an ancient population bottleneck and recent explosive population growth ., Model parameters were tuned such that the site frequency spectrum and the fraction of singletons of the simulated data match those of large scale sequence datasets ., For quantitative traits , phenotype data from each cohort were simulated according to the linear model:

$$Y_i = \beta_0 + \sum_{j = 1}^{J} G_{ij}\beta_j + \sum_{j = 1}^{J} G_{ij}^* \gamma_j + \epsilon_i ,$$

where Gij and Gij* denote the candidate and conditioned variant genotypes , and βj and γj are their effects respectively ., The model assumes that the genetic variants have additive effects on the phenotype ., The genetic effects for candidate variants follow a mixture normal distribution , which accommodates the possibility that a genetic variant can be causal ( with probability c ) or non-causal ( with probability 1 − c ) : $\beta_j \sim (1 - c) \times I(0) + c \times N\left(0 , \tau_\beta^2\right)$ ., The genetic effects for the conditioned variants follow $\gamma_j \sim N\left(0 , \tau_\gamma^2\right)$ ., To evaluate the influence of missing data , we randomly chose a certain fraction ( 10% , 30% or 50% ) of the sites from each study and masked them as missing ., We then applied the new method PCBS , along with impG+meta , COJO , DISCARD , REPLACE0 and SYN+ to the data ., In our evaluations , we used the exact LD with COJO and impG+meta , in order to remove the influence of approximate LD and focus on the impact of missing summary statistics on the power and type I error ., We evaluated the type I errors and power for each approach under a variety of scenarios with different genetic effect sizes , fractions of causal variants in the gene region , and fractions of missing data ., To evaluate the effectiveness of the methods in real datasets , we applied our methods to a meta-analysis of seven cohorts with a cigarettes-per-day ( CPD ) phenotype , a key measurement for studying nicotine dependence ., Participating studies were the Minnesota Center for Twin and Family Research ( MCTFR ) 17–19 , SardiNIA 20 , METabolic Syndrome In Men ( METSIM ) 21 , Genes for Good 22 , COPDGene with samples of European ancestry 23 , Center for Antisocial Drug Dependence ( CADD ) 24 , and the full UK Biobank ., Genotypes were imputed using the Haplotype Reference Consortium panel 25 and the Michigan Imputation Server 26 ( with the exception of the UK Biobank dataset , which was imputed centrally by the UK Biobank team ) ., Summary association statistics from the seven cohorts were generated using RVTESTS 27 , and meta-analysis performed using rareMETALS
with the PCBS statistics and other alternative approaches ., Detailed descriptions of the cohorts are available in S1 Text section 4 , including the methods for association analyses and the adjusted covariates ., To ensure the validity of our association analysis results , we conducted extensive quality control for the imputed genotype data ., We filtered out variant sites with the imputation quality metric R2 < . 7 , and sites that showed large differences in allele frequencies from the imputation reference panel ., Imputation dosages were used in the association analysis ., For each sentinel SNP with genome-wide significance ( α = 5×10−8 ) , we defined the locus as the 1 MB window surrounding it ., We applied iterative single variant conditional analysis to identify independently associated variants in each locus ., We started by conditioning on the most significant variant from marginal association analysis ., After each round of the association analysis , if the top variant remained statistically significant , we added the top variant to the set of conditioned variants , and performed an additional round of association testing ., We applied the six methods to analyze the data , including the PCBS statistic , SYN+ , impG+meta , REPLACE0 , DISCARD and COJO ., In order to examine if the low frequency variants in aggregate can be explained by the identified independently associated variants , we also performed gene-level association analysis for rare variants with MAF<1% , conditional on the identified independently associated variants ., We evaluated the type I errors for the six conditional analysis methods PCBS , SYN+ , COJO ( with exact LD ) , impG+meta , REPLACE0 , and DISCARD ., Scenarios were considered for different combinations of the fractions of missing data , the genetic effects of the variants in the candidate gene , and the genetic effects of the conditioned variants ., First , we noted that PCBS , SYN+ and DISCARD are the only three methods that have controlled type I errors across all scenarios , consistent with our theoretical expectation ( Table 1 ) ., The type I error rate for the other three methods , i . e . impG+meta , REPLACE0 and COJO are inflated in a number of scenarios ., The inflation tends to increase with the effect of the conditioned variant ( s ) and the rate of missingness ., In many scenarios , the type I error can be >100X inflated over the significance threshold ( α = 5×10−8 ) ., For example , when the conditioned variant effect is . 04 , and the association statistics from 30% of the variant sites are missing , type I errors for impG+meta , COJO and REPLACE0 are . 015 , . 57 and . 74 under the significance threshold of α = 0 . 005 ., When the missing rate is 50% , and the conditioned variant effects is . 08 , the type I errors for the three methods become . 25 , . 65 , and . 60 ., Second , among the methods with the controlled type I error rates ( i . e . SYN+ , PCBS and DISCARD ) , PCBS is consistently the most powerful method ( Table 1 ) ., The power advantage of PCBS over the other two approaches increases when, 1 ) the conditioned variant ( s ) have larger effects or, 2 ) the fraction of missing summary association statistics is larger ., For example , when candidate variant effect is . 04 , the conditioned variant effect is . 08 , and the missing rate of score statistics is 30% , the power for PCBS is . 21 , which is 75% higher than the power for SYN+ ( . 12 ) ., When the candidate variant effect is . 08 , the conditioned variant effect is . 
08 , and score statistics from 50% of the variant sites in each participating study are missing , the power for PCBS and SYN+ is . 83 and . 74 , respectively ., Due to the obvious limitations of complete case analysis , the DISCARD method of discarding the studies with missing data can lead to considerable loss of power ( Table 1 ) ., The power for DISCARD is substantially lower than PCBS and SYN+ ., In some scenarios where the missingness is high , the power is barely larger than the significance threshold ., Interestingly , gene-level association tests are affected by two types of missing data with opposite consequences: missing values at causal variant sites reduce power but missing values at non-causal variant sites tend to reduce noise and thus improve power ( Table 2 ) ., When missingness is higher , the power of gene-level tests is lower , but the power loss is small ., For instance , when a causal variant in the candidate gene has effects sampled from N ( 0 , 0 . 2^2 ) , the conditioned variant has effect . 1 , and 30% of the contributed summary statistics in each study have missing values , the power for burden/SKAT/VT tests is 58%/58%/56% , which is only slightly reduced compared to the power of analyzing the complete datasets ( 60%/61%/60% ) ., On the other hand , the method that discards studies with missing data has much reduced power ( 0 . 011/0 . 011/8 . 8×10−3 ) ., Our method was developed for the fixed effect meta-analysis , where the genetic effects are assumed to be constant across different studies ., But since PCBS first aggregates association statistics from across studies and then performs conditional analysis , the impact of genetic effect heterogeneity does not invalidate the test and the type I error remains well controlled ., The power is slightly reduced , but the advantages over other methods remain ., To confirm this , we performed simulation analysis assuming that the genetic effects across studies are heterogeneous ( S1 Table , S2 Table ) ., In our simulations , the genetic effects for a given variant in different studies were simulated from a normal distribution $N\left(\mu_{\beta_{G^*}} , \left(\mu_{\beta_{G^*}}/2\right)^2\right)$ , allowing for substantial between-study heterogeneity ., The power comparison for different methods remains similar to the scenarios where the genetic effects are the same across studies ., We performed a meta-analysis of the CPD phenotype in 7 cohorts ., The locus CHRNA5-CHRNB4-CHRNA3 was previously identified as associated with CPD 28 ., After careful quality control , 42 , 669 , 770 variants were meta-analyzed ., A majority ( 32 , 796 , 258 ) of these variants had minor allele frequencies <1% ., It is important to note that even with high quality imputation panels , such as the haplotype reference consortium panel 25 , there was still considerable missing data in the imputed datasets ., A fraction of 76 . 1% of the variants were missing from at least one participating study post imputation , due to filtering on the imputation quality ( R2> . 7 ) ., Compared to common variants , rare variants were considerably more likely to be missing: 95 . 3% of the variants with MAF<1% were missing from at least one cohort , compared to the fraction of 20 . 1% for the common variants with MAF>1% ., The quantile-quantile plot for –log10 ( p-value ) is well calibrated ( S1 Fig ) ., The genomic control value is 1 . 14 for common variants with MAF>0 . 01 , and 1 . 00 for rare variants with MAF<0 . 01 .
The genomic control value is consistent with that of large scale GWAS for highly polygenic traits 29 , 30 ., The intercept for LD score regression 31 was 1 . 01 , which shows little influence from potential population structure ., The meta-analysis of 7 cohorts identifies 9 loci ( S2 Fig ) , including the well-known CPD associated loci , the nicotine receptor genes CHRNB2 , CHRNB3-CHRNA6 , CHRNA5-CHRNB4-CHRNA3 , the gene CYP2A6 that encodes a cytochrome P450 protein , the gene PDE1C that encodes Phosphodiesterase 1C , FAM163B-DBH , YTHDF3 and GRM4 ., Among these loci , CHRNB2 and FAM163B-DBH are associated with CPD at the genome-wide significance threshold for the first time ., While smoking behaviors are known to be heritable , only the CHRNA5-CHRNB4-CHRNA3 and CYP2A6 loci have been consistently implicated in human GWAS to date ., The other nicotine receptor gene CHRNB3-CHRNA6 was first identified with genome-wide significance in an isolated population for associations with nicotine dependence and nicotine use 32 ., CHRNB2 was implicated in the nicotine dependence trait , but not at genome-wide significance ., To our knowledge , there is no report that this gene is associated with CPD at genome-wide significance 33 ., In order to understand the allelic architecture of the CPD phenotype and compare different methods on real data , we performed sequential forward selection with the new PCBS method , and identified 5 independently associated variants for the CHRNA5-CHRNB4-CHRNA3 locus and 4 independently associated variants for the CYP2A6 locus at the genome-wide significance threshold ( with p-values < 5×10−8 ) ( Table 3 ) ., The other loci do not have additional independently associated variants besides the sentinel variant ., As a comparison , we also performed sequential forward selection using the five alternative approaches ( S3 Table ) ., Using the SYN+ method , fewer independently associated variants are identified ., At the CHRNA5-CHRNB4-CHRNA3 locus , 3 independently associated variants are identified , and at the CYP2A6 locus , only 3 independently associated variants are identified ., DISCARD also identifies a smaller number of independently associated SNPs ., The results from the real data analysis are consistent with our simulation study showing that PCBS has higher power than alternative approaches ., Among the approaches that have inflated type I errors in simulations , impG+meta identifies many SNPs with very significant p-values ., Many of these identified SNPs have substantial missingness among the participating cohorts ( e . g .
N<50 , 000 ) ., Given the inflated type I errors that we observed in simulations , as well as the small available sample sizes for the top variants , the validity of the results using impG+meta is of concern ., Most of the top variants identified by COJO and REPLACE0 have low missingness , so there are not many false positive results ., Yet , COJO and REPLACE0 identified fewer independently associated SNPs compared to PCBS and SYN+ ( Table 3 and S3 Table ) ., Together , the analysis of real data confirmed our simulation experiments ., We examined if our independently associated variants explained previously known association signals ., To do this , we looked up GWAS catalog 34 using key words “CPD” or “cigarettes per day” and found 11 associated variants in the loci that we identified ( S4 Table ) ., We first analyzed these 11 variants conditional on our independently associated variants ., All of these variants became insignificant , which indicated that our newly identified independently associated variants can explain previously known association signals ., We also performed conditional analysis in the opposite direction to examine if our identified association signal may be explained by the known variants ., We found that variants within the CPY2A6 locus remained highly significant and variants within the CHRNA5-CHRNB4-CHRNA3 locus remained marginally significant ., Together , our independently associated variants explained 1 . 1% of the phenotypic variance , which substantially improves the phenotypic variance ( . 64% ) explained by the 11 known signals ., Finally , in addition to single variant association , we investigated if rare variants within each of the 9 loci were independently associated with the CPD phenotype ( S5 Table ) ., 27 genes were analyzed using simple burden , SKAT and VT tests under a MAF threshold of 0 . 01 ., Only one gene ( CHRNA5 ) has gene-level p-values less than 0 . 05/27 , which is the Bonferroni threshold ., None of the genes have exome-wide significant gene-level association p-values ., We proposed a simple yet effective meta-analysis method to estimate joint and conditional effects of rare variants in the presence of missing summary statistics from contributing studies ., The method leads to the optimal use of shared summary association statistics ., It has well controlled type I error and much higher power than alternative approaches even when a large number of contributing studies contain missing summary statistics ., Several approaches were previously developed to combine genetic effects across studies when different studies may measure different genetic variants e . g . 
Verzilli et al 35 and Newcombe et al 36 ., These methods have some noticeable limitations ., The method by Verzilli et al requires the individual level genotype and phenotype data as input ., Also , that method focuses on random effects meta-analysis , while our approach focuses on fixed effect meta-analysis ., The method by Newcombe et al models the haplotype counts in cases and controls ., The method does not allow for the adjustment of covariates , which is a serious limitation ., Both methods use MCMC for fitting the model , which may not scale well for contemporary meta-analyses with tens of millions of variants and dozens of studies ., It is important to note that our method , PCBS , is developed for proper conditional and joint analysis when imputation fails to work ., As we showed in our meta-analysis of smoking phenotypes , even with the state-of-the-art imputation methods and high quality reference panels , there is still a considerable amount of association statistics filtered out from participating studies ., The rate of missingness is much higher for rare variant association statistics than for common variant association statistics ., PCBS will be particularly useful for the meta-analysis of sequence data , where the measured variants are predominantly low frequency or rare 37 ., Our method is not developed to replace genotype imputation ., Genotype imputation fills in missing genotypes with imputed values , and increases effective sample sizes and power ., Our method does not increase the effective sample size for tested variants ., In practice , imputation methods should first be applied in each participating cohort ., Our method should be applied at the meta-analysis stage for valid and powerful conditional meta-analysis , especially when contributed summary statistics from participating cohorts contain missing values ., Missing data will continue to be a persistent issue in the next generation of large-scale genetic studies ., Major biobanks have started to develop their own genotyping arrays and imputation reference panels to incorporate customized content ., Combining these newly genotyped studies with existing datasets will result in missing summary statistics ., Our method will continue to be useful when analyzing these newly generated datasets ., Another major application of the proposed method is in the meta-analysis of sequence data ., Given the use of targeted sequencing assays and variability in batch processing and quality control across studies , it would be difficult to impute missing genotype data or missing summary statistics ., One of the challenges in sequence-based meta-analysis is to properly represent monomorphic sites , as the polymorphic variant sites are not known a priori ., Neither un-called variant sites ( e . g . due to insufficient coverage or failed quality control ) nor monomorphic sites contribute to the single variant meta-analysis statistic ., Yet they should be treated differently in joint and conditional meta-analysis ., Summary statistics from monomorphic variants should be replaced by zeros ., On the other hand , summary statistics from un-called variants should be treated as missing | Introduction, Materials and methods, Results, Discussion | Meta-analysis of genetic association studies increases sample size and the power for mapping complex traits ., Existing methods are mostly developed for datasets without missing values , i . e .
the summary association statistics are measured for all variants in contributing studies ., In practice , genotype imputation is not always effective ., This may be the case when targeted genotyping/sequencing assays are used or when the un-typed genetic variant is rare ., Therefore , contributed summary statistics often contain missing values ., Existing methods for imputing missing summary association statistics and using imputed values in meta-analysis , approximate conditional analysis , or simple strategies such as complete case analysis all have theoretical limitations ., Applying these approaches can bias genetic effect estimates and lead to seriously inflated type-I or type-II errors in conditional analysis , which is a critical tool for identifying independently associated variants ., To address this challenge and complement imputation methods , we developed a method to combine summary statistics across participating studies and consistently estimate joint effects , even when the contributed summary statistics contain large amounts of missing values ., Based on this estimator , we proposed a score statistic called PCBS ( partial correlation based score statistic ) for conditional analysis of single-variant and gene-level associations ., Through extensive analysis of simulated and real data , we showed that the new method produces well-calibrated type-I errors and is substantially more powerful than existing approaches ., We applied the proposed approach to one of the largest meta-analyses to date for the cigarettes-per-day phenotype ., Using the new method , we identified multiple novel independently associated variants at known loci for tobacco use , which were otherwise missed by alternative methods ., Together , the phenotypic variance explained by these variants was 1 . 1% , improving that of previously reported associations by 71% ., These findings illustrate the extent of locus allelic heterogeneity and can help pinpoint causal variants . | It is of great interest to estimate the joint effects of multiple variants from large scale meta-analyses , in order to fine-map causal variants and understand the genetic architecture for complex traits ., The summary association statistics from participating studies in a meta-analysis often contain missing values at some variant sites , as the imputation methods may not work well and the variants with low imputation quality will be filtered out ., Missingness is especially likely when the underlying genetic variant is rare or the participating studies use targeted genotyping array that is not suitable for imputation ., Existing methods for conditional meta-analysis do not properly handle missing data , and can incorrectly estimate correlations between score statistics ., As a result , they can produce highly inflated type-I errors for conditional analysis , which will result in overestimated phenotypic variance explained and incorrect identification of causal variants ., We systematically evaluated this bias and proposed a novel partial correlation based score statistic ., The new statistic has valid type-I errors for conditional analysis and much higher power than the existing methods , even when the contributed summary statistics contain a large fraction of missing values ., We expect this method to be highly useful in the sequencing age for complex trait genetics . 
| genome-wide association studies, variant genotypes, random variables, covariance, genetic mapping, mathematics, statistics (mathematics), genome analysis, research and analysis methods, genome complexity, genomics, mathematical and statistical techniques, genetic loci, research assessment, probability theory, phenotypes, heredity, meta-analysis, genetics, biology and life sciences, physical sciences, computational biology, research errors, statistical methods, introns, human genetics | null |
1,865 | journal.pgen.1006711 | 2017 | Phenome-wide heritability analysis of the UK Biobank | The heritability of a trait refers to the proportion of phenotypic variance that is attributable to genetic variation among individuals ., Heritability is commonly measured as either the contribution of total genetic variation ( broad-sense heritability , $H^2$ ) , or the fraction due to additive genetic variation ( narrow-sense heritability , $h^2$ ) 1 ., A large body of evidence from twin studies has documented that essentially all human complex traits are heritable ., For example , a recent meta-analysis of virtually all twin studies published between 1958 and 2012 , encompassing 17 , 804 traits , reported that the overall narrow-sense heritability estimate across all human traits was 49% , although estimates varied widely across phenotypic domains 2 ., Over the past decade , the availability of genome-wide genotyping has enabled the direct estimation of additive heritability attributable to common genetic variation ( “SNP heritability” or $h^2_{\mathrm{SNP}}$ ) 3–5 ., These estimates do not capture non-additive genetic effects such as dominance or epistasis , and provide a lower bound for narrow-sense heritability because they also do not capture contributions ( e . g . , from rare variants ) that are not assayed by most genotyping microarrays and are not well tagged by genotyped variants ., Nevertheless , estimates of SNP heritability can provide important information about the genetic basis of complex traits such as the proportion of phenotypic variation that could be explained by common-variant genome-wide association studies ( GWAS ) ., However , heritability is not a fixed property of a phenotype but depends on the population in which it is estimated ., As a ratio of variances , it can vary with population-specific differences in both genetic background and environmental variation 1 ., For example , twin data have documented variations in the heritability of childhood IQ by socioeconomic status ( SES ) 6 , highlighting that different environments may have different relative contributions to the variance of a phenotype ., In addition , heritability estimates for a range of complex phenotypes have been shown to vary according to the sex and age distributions of the sampled populations 2 ., Identifying variables that may affect the heritability of complex traits has implications for the design of GWAS , highlighting subgroups and environmental conditions in which common-variant contributions may be diminished or magnified ., To date , however , studies of complex trait heritability and the effect of modifying variables have produced mixed results , likely due to sample size limitations and population-specific differences in genetic and environmental variance that may be operating in different cohorts ., The UK Biobank ( http://www.ukbiobank.ac.uk ) provides a unique opportunity to estimate the heritability of traits across a broad phenotypic spectrum in a single population sample ., The UK Biobank is a large prospective population-based cohort study that enrolled 500 , 000 participants aged 40–69 years between 2006 and 2010 7 ., The study has collected a wealth of phenotypic data from questionnaires , physical and biological measurements , and electronic health records as well as genome-wide genotype data ., However , this rich data source also presents analytic challenges ., For example , with the large sample size , existing heritability estimation methods such as genome-wide complex trait analysis ( GCTA ) 3–5 and LD ( linkage disequilibrium ) score regression 8 become computationally expensive and memory intensive , and thus can be difficult to apply .
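To illustrate the kind of computationally light, moment-matching estimator alluded to here (described in the Discussion as closely related to Haseman-Elston regression), a sketch for a quantitative trait: regress pairwise phenotype products on genetic relatedness. This naive version materializes all pairs and is only for exposition; the paper's actual implementation avoids this cost (see Methods and S1 Text).

```python
import numpy as np

def he_regression_h2(K, y):
    """Haseman-Elston-type estimate of SNP heritability: for standardized,
    covariate-adjusted y, E[y_i * y_j] = h2 * K_ij for i != j, so h2 is the
    least-squares slope of the pairwise products on the relatedness entries."""
    iu = np.triu_indices_from(K, k=1)   # unique pairs i < j
    k_off = K[iu]
    yy = np.outer(y, y)[iu]
    kc = k_off - k_off.mean()           # center the regressor
    return float(kc @ yy / (kc @ kc))   # slope = h2 estimate
```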
Here we implemented a computationally and memory efficient approach to estimate the heritability for 551 complex traits derived from the interim data release ( 152 , 736 subjects ) of the UK Biobank , comprising both quantitative phenotypes and disease categories ., We then examined how heritability estimates are modified by three major demographic variables: age , sex and socioeconomic status ( SES ) ., Our results underscore the importance of considering population characteristics in estimating and interpreting heritability , and may inform efforts to apply genetic risk prediction models for a broad range of human phenotypes ., We report the heritability for 551 traits that were made available to us through the interim data release of the UK Biobank ( downloaded on Mar 3 , 2016 ) and had sufficient sample sizes to achieve accurate heritability estimates ( standard error of the heritability estimate smaller than 0 . 1; 15 disease codes excluded ) using a computationally and memory efficient heritability estimation method ( see Methods and S1 Text ) ., The 551 traits can be classified into 11 general phenotypic domains as defined by the UK Biobank to group individual data fields into broadly related sets: cognitive function ( 5 traits ) , early life factors ( 7 traits ) , health and medical history ( 60 traits ) , hospital in-patient main diagnosis ICD-10 codes ( 194 traits ) , life style and environment ( 88 traits ) , physical measures ( 50 traits ) , psychosocial factors ( 40 traits ) , self-reported cancer codes ( 9 traits ) , self-reported non-cancer illness codes ( 79 traits ) , sex-specific factors ( 14 traits ) , and sociodemographics ( 5 traits ) ., ICD-10 ( the International Classification of Diseases , version-10 ) is a medical classification list published by the World Health Organization ( WHO ) , which contains thousands of diagnostic codes ., Fig 1 shows the percentage of each domain that makes up the 551 traits we analyzed ., Using the top-level categories and chapters of the self-reported disease and ICD-10 coding tree , we can further break down self-reported non-cancer illness codes and ICD-10 codes into different functional domains ( S1 Fig ) ., We note that since we only analyzed disease codes that had prevalence greater than 1% in the sample , the distribution of the disease traits across functional domains was skewed ., For example , we investigated a large number of gastrointestinal and musculoskeletal traits , while diseases that have low prevalence in the sampled population such as psychiatric disorders were not well represented ., Table 1 lists the top heritable traits in each domain ( the most heritable trait and traits with heritability estimates
greater than 0 . 30 ) ., S1 Table and S2 Table show the heritability estimates , standard error estimates , sample sizes , covariates adjusted , prevalence in the sample ( for binary traits ) and other relevant information for all the traits we analyzed ., Common genetic variants appear to have an influence on most traits we investigated , although heritability estimates showed heterogeneity within and across trait domains ., Complex traits that exhibited high SNP heritability ( larger than 0 . 40 ) included human height ( 0 . 685+/-0 . 004 ) , skin color ( very fair/fair vs . other , 0 . 556+/-0 . 008 ) , ease of skin tanning ( very/moderately tanned vs . mildly/occasionally/never tanned , 0 . 454+/-0 . 006 ) , comparative height at age 10 ( taller than average , 0 . 439+/-0 . 007; shorter than average , 0 . 405+/-0 . 008 ) , rheumatoid arthritis ( 0 . 821+/-0 . 046 ) , hypothyroidism/myxedema ( 0 . 814+/-0 . 017 ) , malignant neoplasm of prostate ( 0 . 426+/-0 . 093 ) , and diabetes diagnosed by doctor ( 0 . 414+/-0 . 016 ) , among others ., On the other end of the spectrum , traits such as duration of walks/moderate activity/vigorous activity , frequency of stair climbing , ever had stillbirth , spontaneous miscarriage or termination , painful gums , stomach disorder , fracture , injuries to the head/knee/leg , and pain in joint had zero or close to zero heritability estimates , indicating that their phenotypic variation is largely determined by environmental factors , or there is widespread heterogeneity or substantial measurement error in these phenotypes ., SNP heritability estimates for several phenotypes , including diseases with known immune-mediated pathogenesis ( rheumatoid arthritis , psoriasis , diabetes , hypothyroidism ) , were markedly reduced when the major histocompatibility complex ( MHC ) region was excluded from analysis ( S4 Table ) , and thus need to be interpreted with caution ( see Discussion ) ., A substantial fraction of the phenotypes we examined were based on self-report illness codes or diagnostic ( ICD-10 ) codes , which may be noisy and have low specificity ., However , the SNP heritability estimates for 14 pairs of self-reported illness and ICD-10 codes that represent the same or closely matched diseases were largely consistent and had a Pearson correlation of 0 . 78 ( Table 2 ) , indicating that both phenotypic approaches captured useful and comparable variations in these phenotypes ., Heritability analysis stratified by sex identified a number of traits whose heritability showed significant difference in males and females after multiple testing correction ( Fig 2 ) ., For example , the analyses of diastolic and systolic blood pressure , and self-reported hypertension and high blood pressure provided consistent evidence that the heritability of blood pressure related traits and diseases is significantly higher in females than in males ., A majority of physical measures showed decreasing heritability with age ( S3 Table ) ., More specifically , 33 out of 50 physical measures had a significant decreasing trend in heritability estimates after accounting for multiple testing correction ( mean slope of the 33 traits -0 . 0035 , i . e . , heritability estimates decrease by 3 . 
5 percent per decade ) ., The age-varying SNP heritability estimates and their standard errors for 12 traits that showed both significant slopes and significantly different heritability estimates between the first ( 40–49 years ) and last age range ( 64–73 years ) are shown in Fig 3A ., S2 Fig shows the mean and standard deviation of the 12 traits in each age range ., When we stratified heritability by the Townsend deprivation index , a proxy for SES , education ( has college or university degree or not ) was the only trait on which SES had a significant moderating effect after accounting for multiple testing correction ., Fig 3B shows that the heritability of education increases with increasing SES ., Estimating the heritability of complex , polygenic traits is an important component of defining the genetic basis of human phenotypes ., In addition , heritability estimates provide a theoretical upper bound for the utility of genetic risk prediction models 9 ., We calculated the common-variant heritability of 551 phenotypes derived from the interim data release of the UK Biobank , and confirmed that common genetic variation contributes to a broad array of quantitative traits and human diseases in the UK population ., Two aspects of our work are particularly notable ., First , we developed a computationally and memory efficient method that enabled us to calculate the most extensive population-based survey of SNP heritability to date ., Second , we found that the heritability for a number of phenotypes is moderated by major demographic variables , demonstrating the dependence of heritability on population characteristics ., We discuss each of these advances and the limitations of the biobank data and our analyses below ., Classical methods to estimate SNP heritability , such as GCTA ( also known as the GREML method ) , rely on the restricted maximum likelihood ( ReML ) algorithm 3–5 , which can give unbiased heritability estimates in quantitative trait analysis and non-ascertained case-control studies , and is statistically efficient when the trait is Gaussian distributed 10 ., However , ReML is an iterative optimization algorithm , which is computationally and memory intensive , and thus can be difficult to apply when analyzing data sets with hundreds of thousands of subjects ., An alternative and widely used SNP heritability estimation method is LD score regression , which is based on GWAS summary statistics and an external reference panel for the LD structure 8 ., The approach can thus be easily applied to complex traits on which large-scale GWAS results are available , and allows meta-analysis of heritability estimates from different studies ., Recently , LD score regression has been extended to partition heritability by functional annotation 11 , and to estimate the genetic correlation between two traits 12 , 13 ., However , when applying LD score regression to novel phenotypes in a large cohort , conducting GWAS is often time-consuming ., In the present study , we implemented a computationally and memory efficient moment-matching method for heritability estimation , which is closely related to the Haseman-Elston regression 14–16 and phenotype-correlation genetic-correlation ( PCGC ) regression 10 , and produces unbiased SNP heritability estimates for both continuous and binary traits ., The moment-matching method is theoretically less statistically efficient than the ReML algorithm ( i . e . 
The moment-matching method is also mathematically equivalent to LD score regression if the following conditions are satisfied: ( 1 ) the out-of-sample LD scores estimated from the reference panel and the in-sample LD scores estimated from individual-level genotype data are identical; ( 2 ) the intercept in the LD score regression model is constrained to 1 ( i . e . , assuming that there is no confound and population stratification in the data ) ; and ( 3 ) a particular weight is used in the LD score regression ( more specifically , the reciprocal of the LD score , which is close to the default setting in the LD score regression software ) 18 ., Here , since we have constrained our analysis to a white British ( Caucasian ) sample and have accounted for potential population stratification by including top PCs of the genotype data as covariates , the two methods should produce similar estimates ., See Box 1 for an empirical comparison between the moment-matching method , LD score regression and GCTA ., Using the moment-matching method , we found that a large number of traits we examined display significant heritability ., For traits whose heritability has been intensively studied , our estimates are generally in line with prior studies ., For example , twin and pedigree studies have estimated the heritability of human height and body mass index ( BMI ) to be approximately 80% and 40–60% ( see e . g . , 19–21 ) , respectively , although recent studies have shown that heritability may be overestimated in family studies due to , for instance , improper modeling of common environment , assortative mating in humans , genetic interactions , and suboptimal statistical methods 10 , 22–25 ., Using genome-wide SNP data from unrelated individuals , it has been shown that common SNPs explain a large proportion of the height and BMI variation in the population , although SNP heritability estimates are lower than twin estimates 4 , 5 , 26 ., Specifically , the first GCTA analysis estimated the SNP heritability of human height to be 0 . 45 using relatively sparse genotyping data ( approximately 300 , 000 SNPs ) and showed that the estimate could be higher if imperfect LD between SNPs and causal variants is corrected 4 ., A more recent study leveraging whole-genome sequencing data and imputed genetic variants concluded that narrow-sense heritability is likely to be 60–70% for height and 30–40% for BMI 27 ., Here , we estimated the SNP heritability of human height and BMI to be 0 . 685+/-0 . 004 and 0 . 274+/-0 . 004 , respectively , which are comparable to the expected range ., The SNP heritability estimates of other complex traits of interest , such as age at menarche in girls ( 0 . 239+/-0 . 007 ) , diastolic ( 0 . 184+/-0 . 004 ) and systolic ( 0 . 156+/-0 . 004 ) blood pressures , education ( has college or university degree or not , 0 . 294+/-0 . 007 ) , neuroticism ( 0 . 130+/-0 . 005 ) , smoking ( ever smoked or not , 0 . 174+/-0 . 006 ) , asthma ( 0 . 340+/-0 . 010 ) and hypertension ( 0 . 263+/-0 . 
007 ) were also more modest and lower than twin estimates , as expected 2 ., Heritability is , by definition , a ratio of variances , reflecting the proportion of phenotypic variance attributable to individual differences in genotypes ., Because the genetic architecture and non-genetic influences on a trait may differ depending on the population sampled , heritability itself may vary ., Examples of this have been reported in the twin literature ., In one well-known study , Turkheimer and colleagues 6 reported that the heritability of IQ is moderated by SES in a sample of 320 7-year-old twin pairs of mixed ancestry ., In that study , the heritability of IQ was essentially 0 at the lowest end of SES but substantial at the highest end ., Subsequent studies of the moderating effects of SES on the heritability of cognitive ability and development using twin designs have produced mixed results 28–33 ., In our analysis , using SNP data , we observed no moderating effect of SES ( as measured by the Townsend deprivation index ) on the heritability of cognitive traits ( including fluid intelligence ) , possibly due to the age range of participants in the UK Biobank ( middle and old age ) in contrast to many previous studies targeting childhood or early adulthood , and the cross-national differences in gene-by-SES interaction on intelligence as shown by a recent meta-analysis 34 ., In addition , the brief cognitive tests available in the UK Biobank may have had limited sensitivity for capturing individual differences in IQ ( see discussion below ) ., On the other hand , the heritability of education showed significant interactions with SES , with increasing heritability at higher SES levels ., Prior evidence has suggested that education has substantial genetic correlation with IQ and may be a suitable proxy phenotype for genetic analyses of cognitive performance 35; thus our results may indirectly support earlier studies of the SES moderation of IQ heritability ., With two exceptions , significant sex differences we observed indicated greater heritability for women compared to men ., Our results are consistent with findings from some twin studies but not others ., For example , we found that women exhibited significantly greater heritability for measured waist circumference and blood pressure ., Twin studies have also reported greater female heritability for waist circumference 36 but no substantial sex difference in heritability of blood pressure 2 , 37 ., A substantial difference between the heritability of rheumatoid arthritis ( RA ) in males compared to females was observed , although the MHC region has a large impact on the SNP heritability estimates of autoimmune diseases , and thus this finding needs to be interpreted with caution ( see discussion below ) ., While RA is known to be more common in women , a twin analysis found no sex difference in heritability among Finnish and UK twin pairs , though power was limited in that analysis 38 ., Intriguingly , greater heritability was observed among men for the personality trait of miserableness , a component of neuroticism , suggesting that environmental factors may be more influential for this trait among women or that measurement error differs by sex ., We examined age effects on heritability for a subset of variables and found that a number of physical measures indexing body size , adiposity , height , as well as systolic blood pressure and lung function , showed declining heritability with age ., Age-related declines in heritability may reflect 
the cumulative effect of environmental perturbations over the lifespan ., Prior twin studies of age effects on the heritability of anthropometric traits in adults have had inconsistent results 39–41 ., Haworth and colleagues showed that the heritability of BMI increases over childhood 42 ., A recent meta-analysis of 32 twin studies documented a non-monotonic relationship between BMI heritability and age ( from childhood to late adulthood ) , with a peak around age 20 and decline thereafter 43 ., An age-related decline in indices of body size may reflect a decreasing contribution of genetically-regulated growth processes over the lifespan ., However , we were unable to assess the entire trajectory of heritability due to the age range ( 40–73 years ) of the UK Biobank participants ., Some but not all studies have also suggested varying or declining heritability with age for blood pressure , lung function and age at first birth 39 , 44–50 ., Our results should be interpreted in light of the limitations associated with the biobank data ., First , the UK Biobank is restricted to middle and old age groups , which may be subject to sample selection bias ., For example , older and physically/cognitively impaired subjects may be underrepresented in the study , which may have an impact on the heritability estimates stratified by age ., Mortality selection can also alter the results of genetic analyses as shown by recent analyses 51 ., In addition , the UK Biobank participants comprised a relatively high proportion of well-educated , skilled professionals 52 , potentially leading to the underrepresentation or restricted range of certain traits such as smoking relative to other cohorts ., Therefore , our heritability estimates may be specific to this UK population and may not generalize to other settings or ancestry groups ., Second , although the UK Biobank has collected a wealth of phenotypes , measurements associated with a particular phenotypic domain may not be comprehensive ., For example , only five cognitive tests were included in the UK Biobank ., The reasoning task ( fluid intelligence test ) was brief and had a narrow range; the reaction time was averaged from a small number of trials; and the visual memory test ( pairs-matching test ) had a significant floor effect ( a large number of participants made zero or very few mistakes , and thus the scores do not fully reflect individual differences ) ., In addition , all cognitive tests had relatively low reliability across repeat measurements 53 ., These noisy measurements may thus downwardly bias heritability estimates of cognition ., The Townsend deprivation index , which we used to stratify phenotypes , was calculated based on the national census output area of each participant in which their postcode was located at the time of recruitment , and thus can only serve as a proxy for SES ., Third , the phenotypes were limited to those for which we had sufficient data to estimate heritability with adequate precision ., Therefore , diseases with low prevalence in the sampled population were not well represented in our analysis ., We expect to analyze traits with lower prevalence ( e . g . , 0 . 
5% ) when the genetic data for all UK Biobank participants becomes available ., We also assumed in our analysis that the population prevalence of a binary trait is identical to the observed sample prevalence , but diseases such as schizophrenia and stroke are naturally under-ascertained and thus their sample prevalence is often lower than population prevalence ., In addition , we note that since we used medical history to define cases and controls , the prevalence of many diseases we investigated reflected lifetime prevalence , which may be different from cross-sectional prevalence used in other studies ., We also binarized categorical ( multinomial or ordinal ) variables to facilitate analysis , but this might not optimally represent variation in these variables with respect to heritability ., Fourth , a substantial fraction of the phenotypes we examined were based on self-report or diagnostic ( ICD-10 ) codes , which may or may not validly capture the phenotypes they represent ., For example , a recent UK Biobank study shows that 51% of the participants who reported RA were not on RA-relevant medication , a proxy measure of valid diagnosis 54 ., However , our head-to-head comparison of the heritability estimates between self-reported illness and ICD-10 codes showed largely consistent results , indicating that both phenotypic approaches at least captured comparable variations in these phenotypes ., Prior research evaluating phenotypes derived from electronic health records ( EHR ) indicate that greater phenotypic validity can be achieved when diagnostic codes are supplemented with text mining methods 55–58 ., The specificity of the disease codes might also be improved by leveraging the medication records in the UK Biobank ., Methodologically , our SNP heritability estimation approach , despite its superior computational and memory performance compared to existing methods , also has several limitations ., First , heritability estimation always relies on a number of assumptions on the genetic architecture ., For example , the moment-matching method we used here , as well as the established GCTA and LD score regression approaches , implicitly assumes that the causal SNPs are randomly spread over the genome , which is independent of the MAF spectrum and the LD structure , and the effect sizes of causal SNPs are Gaussian distributed and have a specific relationship to their MAFs ., Although it has been shown that SNP heritability estimates are reasonably robust to many of these modeling assumptions 59 , the estimates can be biased if , for instance , causal SNPs are rarer or more common than uniformly distributed on the MAF spectrum , or are enriched in high or low LD regions across the genome ., For example , the heritability estimates for some autoimmune diseases such as psoriasis and RA dropped dramatically when the MHC region ( chr6:25-35Mb ) was removed when constructing the genetic similarity matrix , indicating , as expected , that causal variants for these diseases are disproportionally enriched in the MHC region ., S4 Table lists all the traits whose heritability estimates decreased by 0 . 
2 or more when the MHC region was taken out , and thus need to be interpreted with caution ., Methods to correct for MAF properties and region-specific LD heterogeneity of causal variants have been proposed 27 , 59 , 60 ., For example , we can stratify MAF and LD structure into different bins , compute a genetic similarity matrix within each bin , and fit a mixed effects model with multiple variance components 27 , 60 ., This approach can give heritability estimates that are more robust to properties of the underlying genetic architecture , but has the downside of increased computational burden and reduced statistical power ., A different direction to explore is to estimate SNP heritability using imputed data ( in contrast to the genotype data here ) , which might capture more genetic variation from rare variants , or common variants that are not well tagged by the genotyped SNPs , and thus lead to increased heritability estimates ., Second , heritability analysis models , including the one we employed in the present study , typically assume that genetic and environmental effects are independent , i . e . , no gene-by-environment ( GxE ) interaction exists ., This is certainly a simplification of the real world where GxE interactions are expected for many complex traits ., Recent computational studies have also shown that ignoring GxE interactions in heritability analysis can produce biased estimates 25 ., However , modeling GxE would require collecting relevant environmental variables for each phenotype and more sophisticated statistical modeling approaches , e . g . , incorporating multiple random effects in the heritability analysis model 5 , 61 ., Due to the limited measurements of environment collected by the UK Biobank , and the extensive analyses we have conducted across the phenotypic spectrum , explicitly modeling the environmental factors and GxE interactions is not feasible ., We therefore took an alternative approach to examine the moderating effects of three major demographic variables on heritability estimates by stratifying samples ., Of note , consistent heritability estimates across different levels of the stratifying variable do not completely eliminate the potential existence of GxE interactions ., Specifically , recent studies have identified genetic heterogeneity in human traits such as BMI and fertility 62 , 63 , indicating that the genetic architecture of a trait may be different across environments ( i . e . 
, the genetic correlation of a trait in different environments may be significantly smaller than, 1 ) even if the overall heritability estimates are similar ., Dissection of common and unique environmental influences and their interactive effects with genetics on different complex traits , and the shared and unique genetic effects across environments are important future directions to explore ., Lastly , as reviewed in 64 , a number of empirical genetic similarity measurements computable from genome-wide SNP data have been proposed , which , when utilized in heritability analysis , can give different estimates with different interpretations ., In addition , recent studies have argued that estimation error associated with genetic similarity measurements and the ill-posedness of the empirical genetic similarity matrix may produce unstable and unreliable SNP heritability estimates 65 ., However , this is an area under active investigation and debate 64–67 ., Here , as the first study to screen all UK Biobank variables and provide an overview of the distribution of SNP heritability across different trait domains , and to examine the effect of potential modifying variables on heritability estimates , we used a straightforward and classical modeling approach that is most widely used ., To obtain more insights into the genetic architecture and find the most appropriate and robust model for each individual trait , more systematic investigation is needed ., In sum , using a computationally and memory efficient approach , we provide estimates of the SNP heritability for 551 complex traits across the phenome captured in the population-based UK Biobank ., We further identify phenotypes for which the contribution of genetic variation is modified by demographic factors ., These results underscore the importance of considering population characteristics in interpreting heritability , highlight phenotypes and subgroups that may warrant priority for genetic association studies , and may inform efforts to apply genetic risk prediction models for a broad range of human phenotypes ., This study utilized deidentified data from the baseline assessment of the UK Biobank , a prospective cohort study of 500 , 000 individuals ( age 40–69 years ) recruited across Great Britain during 2006–2010 7 ., The protocol and consent were approved by the UK Biobank’s Research Ethics Committee ., The UK Biobank collected phenotypic data from a variety of sources including questionnaires regarding mental and physical health , food intake , family history and lifestyle , a baseline physical assessment , computerized cognitive testing , linkage with health records , and blood samples for biochemical and DNA analysis ., Details about the UK Biobank project are provided at http://www . ukbiobank . ac . uk ., Data for the current analyses were obtained under an approved data request ( Ref: 13905 ) ., The interim release of the genotype data for the UK Biobank ( downloaded on Mar 3 , 2016 ) comprises 152 , 736 samples ., Two closely related arrays from Affymetrix , the UK BiLEVE Axiom array and the UK Biobank Axiom array , were used to genotype approximately 800 , 000 markers with good genome-wide coverage ., Details of the design of the arrays and sample processing can be found at http://biobank . ctsu . ox . ac . uk/crystal/refer . cgi ?, id=146640 and http://biobank . ctsu . ox . ac . uk/crystal/refer . 
cgi ?, id=155583 ., Prior to the release of the genotype data , stringent quality control ( QC ) was performed at the Wellcome Trust Centre for Human Genetics , Oxford , UK ., Procedures were documented in detail at http:// | Introduction, Results, Discussion, Materials and methods | Heritability estimation provides important information about the relative contribution of genetic and environmental factors to phenotypic variation , and provides an upper bound for the utility of genetic risk prediction models ., Recent technological and statistical advances have enabled the estimation of additive heritability attributable to common genetic variants ( SNP heritability ) across a broad phenotypic spectrum ., Here , we present a computationally and memory efficient heritability estimation method that can handle large sample sizes , and report the SNP heritability for 551 complex traits derived from the interim data release ( 152 , 736 subjects ) of the large-scale , population-based UK Biobank , comprising both quantitative phenotypes and disease codes ., We demonstrate that common genetic variation contributes to a broad array of quantitative traits and human diseases in the UK population , and identify phenotypes whose heritability is moderated by age ( e . g . , a majority of physical measures including height and body mass index ) , sex ( e . g . , blood pressure related traits ) and socioeconomic status ( education ) ., Our study represents the first comprehensive phenome-wide heritability analysis in the UK Biobank , and underscores the importance of considering population characteristics in interpreting heritability . | Heritability of a trait refers to the proportion of phenotypic variation that is due to genetic variation among individuals ., It provides important information about the genetic basis of complex traits and indicates whether a phenotype is an appropriate target for more specific statistical and molecular genetic analyses ., Recent studies have leveraged the increasingly ubiquitous genome-wide data and documented the heritability attributable to common genetic variation captured by genotyping microarrays for a wide range of human traits ., However , heritability is not a fixed property of a phenotype and can vary with population-specific differences in the genetic background and environmental variation ., Here , using a computationally and memory efficient heritability estimation method , we report the heritability for a large number of traits derived from the large-scale , population-based UK Biobank , and , for the first time , demonstrate the moderating effect of three major demographic variables ( age , sex and socioeconomic status ) on heritability estimates derived from genome-wide common genetic variation ., Our study represents the first comprehensive heritability analysis across the phenotypic spectrum in the UK Biobank . | genome-wide association studies, dermatology, medicine and health sciences, population genetics, sociology, social sciences, neuroscience, learning and memory, skin neoplasms, cognition, genome analysis, memory, population biology, malignant skin neoplasms, genetic polymorphism, social stratification, phenotypes, heredity, genetics, biology and life sciences, genomics, evolutionary biology, cognitive science, computational biology, complex traits, human genetics | null |
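A practical complement to the binary-trait estimates reported in the preceding record: SNP heritability computed on the observed (0/1) scale is conventionally converted to the liability scale using the population prevalence K and the sample prevalence P. The sketch below implements the standard transformation from the quantitative-genetics literature; that this exact formula underlies the reported disease estimates is an assumption based on common practice.

```python
from scipy.stats import norm

def observed_to_liability_h2(h2_obs, pop_prev, sample_prev):
    """Convert observed-scale SNP heritability of a binary trait
    to the liability scale."""
    k, p = pop_prev, sample_prev
    t = norm.ppf(1.0 - k)   # liability threshold for prevalence k
    z = norm.pdf(t)         # standard normal density at the threshold
    return h2_obs * k**2 * (1.0 - k)**2 / (z**2 * p * (1.0 - p))

# Example: observed-scale estimate of 0.10 for a trait with 5% prevalence,
# assuming sample prevalence equals population prevalence (as in the text).
print(round(observed_to_liability_h2(0.10, 0.05, 0.05), 3))
```

When the sample prevalence differs from the population prevalence (the under-ascertainment scenario discussed above for diseases such as schizophrenia and stroke), the two arguments diverge and the correction changes accordingly.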
1,522 | journal.pgen.1000098 | 2,008 | Evaluating Statistical Methods Using Plasmode Data Sets in the Age of Massive Public Databases: An Illustration Using False Discovery Rates | “Omic” technologies ( genomic , proteomic , etc . ) have led to high dimensional experiments ( HDEs ) that simultaneously test thousands of hypotheses ., Often these omic experiments are exploratory , and promising discoveries demand follow-up laboratory research ., Data from such experiments require new ways of thinking about statistical inference and present new challenges ., For example , in microarray experiments an investigator may test thousands of genes aiming to produce a list of promising candidates for differential genetic expression across two or more treatment conditions ., The larger the list , the more likely some genes will prove to be false discoveries , i . e . genes not actually affected by the treatment ., Statistical methods often estimate both the proportion of tested genes that are differentially expressed due to a treatment condition and the proportion of false discoveries in a list of genes selected for follow-up research ., Because keeping the proportion of false discoveries small ensures that costly follow-on research will yield more fruitful results , investigators should use some statistical method to estimate or control this proportion ., However , there is no consensus on which of the many available methods to use 1 ., How should an investigator choose ?, Although the performance of some statistical methods for analyzing HDE data has been evaluated analytically , many methods are commonly evaluated using computer simulations ., An analytical evaluation ( i . e . , one using mathematical derivations to assess the accuracy of estimates ) may require either difficult-to-verify assumptions about a statistical model that generated the data or a resort to asymptotic properties of a method ., Moreover , for some methods an analytical evaluation may be mathematically intractable ., Although evaluations using computer simulations may overcome the challenge of intractability , most simulation methods still rely on the assumptions inherent in the statistical models that generated the data ., Whether these models accurately reflect reality is an open question , as is how to determine appropriate parameters for the model , what realistic “effect sizes” to incorporate in selected tests , as well as if and how to incorporate correlation structure among the many thousands of observations per unit 2 ., Plasmode data sets may help overcome the methodological challenges inherent in generating realistic simulated data sets ., Catell and Jaspers 3 made early use of the term when they defined a plasmode as “a set of numerical values fitting a mathematico-theoretical model . That it fits the model may be known either because simulated data is produced mathematically to fit the functions , or because we have a real—usually mechanical—situation which we know with certainty must produce data of that kind . ”, Mehta et al . ( p . 946 ) 2 more concisely refer to a plasmode as “a real data set whose true structure is known . 
”, The plasmodes can accommodate unknown correlation structures among genes , unknown distributions of effects among differentially expressed genes , an unknown null distribution of gene expression data , and other aspects that are difficult to model using theoretical distributions ., Not surprisingly , the use of plasmode data sets is gaining traction as a technique of simulating reality-based data from HDEs 4 ., A plasmode data set can be constructed by spiking specific mRNAs into a real microarray data set 5 ., Evaluating whether a particular method correctly detects the spiked mRNAs provides information about the methods ability to detect gene expression ., A plasmode data set can also be constructed by using a current data set as a template for simulating new data sets for which some truth is known ., Although in early microarray experiments , sample sizes were too small ( often only 2 or 3 arrays per treatment condition ) to use as a basis for a population model for simulating data sets , larger HDE data sets have recently become publicly available , making their use feasible for simulation experiments ., In this paper , we propose a technique to simulate plasmode data sets from previously produced data ., The source-data experiment was conducted at the Center for Nutrient–Gene Interaction ( CNGI , www . uab . edu/cngi ) , at the University of Alabama at Birmingham ., We use a data set from this experiment as a template for producing a plasmode null data set , and we use the distribution of effect sizes from the experiment to select expression levels for differentially expressed genes ., The technique is intuitively appealing , relatively straightforward to implement , and can be adapted to HDEs in contexts other than microarray experiments ., We illustrate the value of plasmodes by comparing 15 different statistical methods for estimating quantities of interest in a microarray experiment , namely the proportion of true nulls ( hereafter denoted π0 ) , the false discovery rate ( FDR ) 6 and a local version of FDR ( LFDR ) 7 ., This type of analysis enables us , for the first time , to compare key omics research tools according to their performance in data that , by definition , are realistic exemplars of the types of data biologists will encounter ., The illustrations given here provide some insight into the relative performance characteristics of the 15 methods in some circumstances , but definitive claims regarding uniform superiority of one method over another would require more extensive evaluations over multiple types of data sets ., Steps for plasmode creation that are described herein are relatively straightforward ., First , an HDE data set is obtained that reflects the type of experiment for which statistical methods will be used to estimate quantities of interest ., Data from a rat microarray experiment at CNGI were used here ., Other organisms might produce data with different structural characteristics and methods may perform differently on such data ., The CNGI data were obtained from an experiment that used rats to test the pathways and mechanisms of action of certain phytoestrogens 8 , 9 ., In brief , rats were divided into two large groups , the first sacrificed at day 21 ( typically the day of weaning for rats ) , the second sacrificed at day 50 ( the day , corresponding to late human puberty , when rats are most susceptible to chemically induced breast cancer ) ., Each of these groups was subdivided into smaller groups according to diet ., At 21 and 50 days , 
respectively , the relevant tissues from these rat groups were appropriately processed , and gene expression levels were extracted using GCOS ( GeneChip Operating Software ) ., We exported the microarray image ( * . CEL ) files from GCOS and analyzed them with the Affymetrix Package of Bioconductor/R to extract the MAS 5 . 0 processed expression intensities ., The arrays and data were investigated for outliers using Pearson's correlation , spatial artifacts 10 and a deleted residuals approach 11 ., It is important to note that only one normalization method was considered , but the methods could be compared on RMA normalized data as well ., In fact , comparisons of methods' performances on data from different normalization techniques could be done using the plasmode technique ., Second , an HDE data set that compares the effect of a treatment ( s ) is analyzed and the vector of effect sizes is saved ., The effect size used here was a simple standardized mean difference ( i . e . , a two sample t-statistic ) but any meaningful metric could be used ., Plasmodes , in fact , could be used to compare the performance of statistical methods when different statistical tests were used to produce the P-values ., We chose two sets of HDE data as templates to represent two distributions of effect sizes and two different null distributions ., We refer to the 21-day experiment using the control group ( 8 arrays ) and the treatment group ( EGCG supplementation , 10 arrays ) as data set 1 , and the 50-day experiment using the control group ( 10 arrays ) and the treatment group ( Resveratrol supplementation , 10 arrays ) as data set 2 ., There were 31 , 042 genes on each array , and two sample pooled variance t-tests for differential expression were used to create a distribution of P-values ., Histograms of the distributions for both data sets are shown in Figure 1 ., The distribution of P-values for data set 1 shows a stronger signal ( i . e . 
, a larger collection of very small P-values ) than that for data set 2 , suggesting either that more genes are differentially expressed or that those that are expressed have a larger magnitude treatment effect ., This second step provided a distribution of effect sizes from each data set ., Next , create the plasmode null data set ., For each of the HDE data sets , we created a random division of the control group of microarrays into two sets of equal size ., One consideration in doing so is that if some arrays in the control group are ‘different’ from others due to some artifact in the experiment , then the null data set can be sensitive to how the arrays are divided into two sets ., Such artifacts can be present in data from actual HDEs , so this issue is not a limitation of plasmode use but rather an attribute of it , that is , plasmodes are designed to reflect actual structure ( including artifacts ) in a real data set ., We obtained the plasmode null data set from data set 1 by dividing the day 21 control group of 8 arrays into two sets of 4 , and for data set 2 by dividing the control group of 10 arrays into two sets of 5 arrays ., Figure 2 shows the two null distributions of P-values obtained using the two sample t-test on the plasmode null data sets ., Both null distributions are , as expected , approximately uniform , but sampling variability allows for some deviation from uniformity ., A proportion 1−π0 of effect sizes were then sampled from their respective distributions using a weighted probability sampling technique described in the Methods section ., What sampling probabilities are chosen can be a tuning parameter in the plasmode creation procedure ., The selected effects were incorporated into the associated null distribution for a randomly selected proportion 1−π0 of genes in a manner also described in the Methods section ., What proportion of genes is selected may depend upon how many genes in an HDE are expected to be differentially expressed ., This may determine whether a proportion equal to 0 . 01 or 0 . 5 is chosen to construct a plasmode ., Proportions between 0 . 05 and 0 . 2 were used here as they are in the range of estimated proportions of differentially expressed genes that we have seen from the many data sets we have analyzed ., Finally , the plasmode data set was analyzed using a selected statistical method ., We used two sample t-tests to obtain a plasmode distribution of P-values for each plasmode data set because the methods compared herein all analyze a distribution of P-values from an HDE ., P-values were declared statistically significant if smaller than a threshold τ ., Box 1 summarizes symbol definitions ., When comparing the 15 statistical methods , we used three values of π0 ( 0 . 8 , 0 . 9 , and 0 . 95 ) and two thresholds ( τ = 0 . 01 and 0 . 001 ) ., For each choice of π0 and threshold τ , we ran B = 100 simulations ., All 15 methods provided estimates of π0 , 14 provided estimates of FDR , and 7 provided estimates of LFDR ., Because the true values of π0 and FDR are known for each plasmode data set , we can compare the accuracy of estimates from the different methods .
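Because each plasmode records which genes actually received a spiked effect, the target quantities are directly computable. A minimal sketch, using illustrative array names, of how the true π0 and the realized false discovery proportion are obtained from the known labels:

```python
import numpy as np

def plasmode_truth(pvals, is_true_null, tau):
    """True pi0 and realized false discovery proportion for one plasmode.

    pvals:        (K,) P-values from the analyzed plasmode data set.
    is_true_null: (K,) boolean; True for genes with no spiked effect.
    tau:          significance threshold.
    """
    significant = pvals <= tau
    c = int(np.sum(significant & is_true_null))  # false discoveries
    m = int(np.sum(significant))                 # all discoveries
    pi0 = float(np.mean(is_true_null))
    fdp = c / m if m > 0 else 0.0                # C/M, taken as 0 if M = 0
    return pi0, fdp
```

Averaging the realized proportion over the B = 100 simulated plasmodes gives the reference values against which the methods' estimates are compared.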
There are two basic strategies for estimating FDR , both predicated on an estimated value for π0 , the first using equation ( 1 ) below , the second using a mixture model approach ., Let P_K = M/K be the proportion of tests that were declared significant at a given threshold , where M and K were defined with respect to quantities in Table 1 ., Then one estimate for FDR at this threshold is, FDR̂ = π̂0 τ / P_K ., ( 1 ), The mixture model ( usually a two-component mixture ) approach uses a model of the form, f ( p ; θ ) = π0 f0 ( p ) + ( 1 − π0 ) f1 ( p ; θ ) ,, ( 2 ), where f is a density , p represents a P-value , f0 a density of a P-value under the null hypothesis , f1 a density of a P-value under the alternative hypothesis , π0 is interpreted as before , and θ a ( possibly vector ) parameter of the distribution ., Since valid P-values are assumed , f0 is a uniform density ., LFDR is defined with respect to this mixture model as, LFDR ( p ) = π0 f0 ( p ) / ( π0 f0 ( p ) + ( 1 − π0 ) f1 ( p ; θ ) ) ,, ( 3 ), FDR is defined similarly except that the densities in ( 3 ) are replaced by the corresponding cumulative distribution functions ( CDF ) , that is, FDR ( τ ) = π0 F0 ( τ ) / ( π0 F0 ( τ ) + ( 1 − π0 ) F1 ( τ ) ) ,, ( 4 ), where F1 ( τ ) is the CDF under the alternative hypothesis , evaluated at a chosen threshold τ ., ( There are different definitions of FDR and the definition in ( 4 ) is , under some conditions , the definition of a positive false discovery rate 12 ., However , in cases with a large number of genes many of the variants of FDR are very close 13 ) ., The methods are listed for quick reference in Table 2 ., Methods 1–8 use different estimates for π0 and , as implemented herein , proceed to estimate FDR using equation ( 1 ) ., Method 9 uses a unique algorithm to estimate LFDR and does not supply an estimate of FDR ., Methods 10–15 are based on a mixture model framework and estimate FDR and LFDR using equations ( 3 ) and ( 4 ) where the model components are estimated using different techniques ., All methods were implemented using tuning parameter settings from the respective paper or ones supplied as default values with the code in cases where the code was published online .
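As an illustration of the two strategies, the sketch below pairs a simple λ-based π0 estimate with equation ( 1 ), and evaluates equations ( 3 ) and ( 4 ) under an assumed Beta(a, 1) form for f1. Both the π0 estimator and the Beta choice are common conventions used here for illustration, not the settings of any particular one of the 15 methods.

```python
import numpy as np
from scipy.stats import beta

def pi0_lambda(pvals, lam=0.5):
    """Estimate pi0 from P-values above lam, which are mostly nulls
    and hence approximately uniform on [0, 1]."""
    return min(1.0, float(np.mean(pvals > lam)) / (1.0 - lam))

def fdr_eq1(pvals, tau, pi0_hat):
    """Equation (1): FDR-hat = pi0-hat * tau / P_K."""
    p_k = float(np.mean(pvals <= tau))  # proportion declared significant
    return pi0_hat * tau / p_k if p_k > 0 else 0.0

def lfdr_eq3(p, pi0_hat, a=0.3):
    """Equation (3) with f0 uniform and f1 assumed Beta(a, 1)."""
    return pi0_hat / (pi0_hat + (1.0 - pi0_hat) * beta.pdf(p, a, 1.0))

def fdr_eq4(tau, pi0_hat, a=0.3):
    """Equation (4): densities replaced by CDFs; F0(tau) = tau."""
    f1 = beta.cdf(tau, a, 1.0)
    return pi0_hat * tau / (pi0_hat * tau + (1.0 - pi0_hat) * f1)

# Toy data: 90% uniform nulls plus 10% Beta(0.3, 1) alternatives.
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(size=9000), rng.beta(0.3, 1.0, size=1000)])
pi0_hat = pi0_lambda(p)
print(round(pi0_hat, 3),
      round(fdr_eq1(p, 0.01, pi0_hat), 3),
      round(fdr_eq4(0.01, pi0_hat), 3))
```

Treating the Beta parameter as unknown and fitting θ by maximum likelihood would turn this into the mixture-model strategy used by methods 10–15.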
First , to compare their differences , we used the 15 methods to analyze the original two data sets , with data set 1 having a “stronger signal” ( i . e . , lower estimates of π0 and FDR ) ., Estimates of π0 from methods 3 through 15 ranged from 0 . 742 to 0 . 837 for data set 1 and 0 . 852 to 0 . 933 for data set 2 ., ( Methods 1 and 2 are designed to control for rather than estimate FDR and are designed to be conservative; hence , their estimates were much closer to 1 . ) Results of these analyses can be seen in the Supplementary Tables S1 and S2 ., Next , using the two template data sets we constructed plasmode data sets in order to compare the performance of the 15 methods for estimating π0 ( all methods ) , FDR ( all methods except method 9 ) , and LFDR ( methods 9–15 ) ., Figures 3 and 4 show some results based on data set 2 ., More results are available in the Figures S1 , S2 , S3 , S4 , S5 , and S6 ., Figure 3 shows the distribution of 100 estimates for π0 using data set 2 when the true value of π0 is equal to 0 . 8 and 0 . 9 ., Methods 1 and 2 are designed to be conservative ( i . e . , true values are overestimated ) ., With a few exceptions , the other methods tend to be conservative when π0 = 0 . 8 and liberal ( the true value is underestimated ) when π0 = 0 . 9 ., The variability of estimates for π0 is similar across methods , but some plots show a slightly larger variability for methods 12 and 15 when π0 = 0 . 9 ., Figure 4 shows the distribution of estimates for FDR and LFDR at the two thresholds ., The horizontal lines in the plots show the mean ( solid line ) and the minimum and maximum ( dashed lines ) of the true FDR value for the 100 simulations ., A true value for LFDR is not known in the simulation procedure ., The methods tend to be conservative ( overestimate FDR ) when the threshold τ = 0 . 01 and are more accurate at the lower threshold ., Estimates of FDR are more variable for methods 11 , 13 , and 14 and estimates for LFDR more variable for methods 13 and 14 , with the exception of a few unusual estimates obtained from method 9 ., The high variability of FDR estimates from method 11 may be due to a “less than optimal” choice of the spanning parameter in a numerical smoother ( see also Pounds and Cheng 27 ) ., We did not attempt to tune any of the methods for enhanced performance ., Researchers have been evaluating the performance of the burgeoning number of statistical methods for the analysis of high dimensional omic data , relying on a mixture of mathematical derivations , computer simulations , and sadly , often single dataset illustrations or mere ipse dixit assertions ., Recognizing that the latter two approaches are simply unacceptable approaches to method validation 2 and that the first two suffer from limitations described earlier , an increasing number of investigators are turning to plasmode datasets for method evaluation 28 ., An excellent example is the Affycomp website ( http://affycomp . biostat . jhsph . edu/ ) that allows investigators to compare different microarray normalization methods on datasets of known structure ., Other investigators have also recently used plasmode-like approaches which they refer to as ‘data perturbation’ 29 , 30 , yet it is not clear that these ‘perturbed datasets’ can distinguish true from false positives , suggesting greater need for articulation of principles or standards of plasmode generation ., As more high dimensional experiments with larger sample sizes become available , researchers can use a new kind of simulation experiment to evaluate the performance of statistical analysis methods , relying on actual data from previous experiments as a template for generating new data sets , referred to herein as plasmodes ., In theory , the plasmode method outlined here will enable investigators to choose on an empirical basis the most appropriate statistical method for their HDEs ., Our results also suggest that large , searchable databases of plasmode data sets would help investigators find existing data sets relevant to their planned experiments ., ( We have already implemented a similar idea for planning sample size requirements in HDEs 31 , 32 . )
Investigators could then use those data sets to compare and evaluate several analytical methods to determine which best identifies genes affected by the treatment condition ., Or , investigators could use the plasmode approach on their own data sets to glean some understanding of how well a statistical method works on their type of data ., Our results compare the performance of 15 statistical methods as they process the specific plasmode data sets constructed from the CNGI data ., Although identifying one uniformly superior method ( if there is one ) is difficult within the limitations of this one comparison , our results suggest that certain methods could be sensitive to tuning parameters or different types of data sets ., A comparison over multiple types of source data sets with different distributions of effect sizes could add the detail necessary to clearly recommend certain methods over others 1 ., Other papers have used simulation studies to compare the performance of methods for estimating π0 and FDR ( e . g . , Hsueh et al . 33; Nguyen 34; Nettleton et al . 35 ) ., We compared methods that use the distribution of P-values as was done in Broberg 36 and Yang and Yang 37 ., Unlike our plasmode approach , most earlier comparison studies used normal distributions to simulate gene expression data and incorporated dependence using a block diagonal correlation structure as in Allison et al 26 ., A key implication and recommendation of our paper is that , as data from the growing number of HDEs is made publicly available , researchers may identify a previous HDE similar to one they are planning or have recently conducted and use data from these experiments to construct plasmode data sets with which to evaluate candidate statistical methods ., This will enable investigators to choose the most appropriate method ( s ) for analyzing their own data and thus to increase the reliability of their research results ., In this manner , statistical science ( as a discipline that studies the methods of statistics ) becomes as much an empirical science as a theoretical one ., The quantities in Table 1 are those for a typical microarray experiment ., Let N = A+B and M = C+D and note that both N and M will be known and K = N+M ., However , the number of false discoveries is equal to an unknown number C ., The proportion of false discoveries for this experiment is C/M ., Benjamini and Hochberg 6 defined FDR as, FDR = E ( ( C/M ) I{M>0} ) ,, where I{M>0} is an indicator function equal to 1 if M>0 and zero otherwise ., Storey 12 defined the positive FDR as pFDR = E ( C/M | M>0 ) ., Since P ( M>0 ) ≥ 1 − ( 1 − τ ) ^K , and since K is usually very large , FDR ≈ pFDR , so we do not distinguish between FDR and pFDR as the parameter being estimated and simply refer to it as FDR with estimates denoted FDR̂ ( and LFDR̂ ) ., Suppose we identify a template data set corresponding to a two treatment comparison for differential gene expression for K genes ., Obtain a vector , δ , of effect sizes ., One suggestion is the usual t-statistic , where the ith component of δ is given by, δ_i = ( X̅_{i,trt} − X̅_{i,ctrl} ) / √( s_i^2 ( 1/n_{trt} + 1/n_{ctrl} ) ) ,, ( 5 ), where n_{trt} , n_{ctrl} are number of biological replicates in the treatment and control group , respectively , X̅_{i,trt} , X̅_{i,ctrl} are the mean gene expression levels for gene i in treatment and control groups , and s_i^2 is the usual pooled sample variance for the ith gene , s_i^2 = ( ( n_{trt} − 1 ) s_{i,trt}^2 + ( n_{ctrl} − 1 ) s_{i,ctrl}^2 ) / ( n_{trt} + n_{ctrl} − 2 ) , where the two sample variances s_{i,trt}^2 and s_{i,ctrl}^2 are the usual within-group sample variances ., In what follows , we will use this choice for δ_i since it allows for effects to be described by a unitless quantity , i . e . , it is scaled by the standard error of the observed mean difference X̅_{i,trt} − X̅_{i,ctrl} for each gene .
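The effect-size vector of equation ( 5 ) is straightforward to compute in bulk. A minimal sketch, assuming expression matrices with genes in rows and arrays in columns:

```python
import numpy as np

def effect_sizes(x_trt, x_ctrl):
    """Equation (5): per-gene two-sample t-statistics as effect sizes.

    x_trt, x_ctrl: (n_genes, n_arrays) expression matrices for the
    treatment and control groups of a template HDE.
    """
    n_t, n_c = x_trt.shape[1], x_ctrl.shape[1]
    mean_diff = x_trt.mean(axis=1) - x_ctrl.mean(axis=1)
    # Pooled per-gene variance with n_t + n_c - 2 degrees of freedom.
    s2 = ((n_t - 1) * x_trt.var(axis=1, ddof=1) +
          (n_c - 1) * x_ctrl.var(axis=1, ddof=1)) / (n_t + n_c - 2)
    return mean_diff / np.sqrt(s2 * (1.0 / n_t + 1.0 / n_c))
```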
For convenience , assume that n_{ctrl} is an even number and divide the control group into two sets of equal size ., Requiring n_{ctrl} ≥ 4 allows for at least two arrays in each set , thus allowing estimates of variance within each of the two sets ., This will be the basis for the plasmode “null” data set ., There are ( n_{ctrl} choose n_{ctrl}/2 ) ways of making this division ., Without loss of generality , assume that the first n_{ctrl}/2 arrays after the division are the plasmode control group and the second n_{ctrl}/2 are the plasmode treatment group ., Specify a value of π0 and specify a threshold , τ , such that a P-value ≤ τ is declared evidence of differential expression ., Execute the following steps ( sampling effect sizes from δ with chosen weights , incorporating them into a randomly selected proportion 1−π0 of genes , and analyzing the resulting data set , as outlined in the Results ) ., One can then obtain another data set and repeat the entire process to evaluate a method on a different type of data , perhaps from a different organism having a different null distribution , or a different treatment type giving a different distribution of effect sizes , δ ., Alternatively , one might choose to randomly divide the control group again and repeat the entire process ., This would help assess how differences in arrays within a group or possible correlation structure might affect results from a method ., If some of the arrays in the control group have systematic differences among them ( e . g . , differences arising from variations in experimental conditions—day , operator , technology , etc . ) , then the null distribution can be sensitive to the random division of the original control group into the two plasmode groups , particularly if n_{ctrl} is small . | Introduction, Results, Discussion, Methods | Plasmode is a term coined several years ago to describe data sets that are derived from real data but for which some truth is known ., Omic techniques , most especially microarray and genomewide association studies , have catalyzed a new zeitgeist of data sharing that is making data and data sets publicly available on an unprecedented scale ., Coupling such data resources with a science of plasmode use would allow statistical methodologists to vet proposed techniques empirically ( as opposed to only theoretically ) and with data that are by definition realistic and representative ., We illustrate the technique of empirical statistics by consideration of a common task when analyzing high dimensional data: the simultaneous testing of hundreds or thousands of hypotheses to determine which , if any , show statistical significance warranting follow-on research ., The now-common practice of multiple testing in high dimensional experiment ( HDE ) settings has generated new methods for detecting statistically significant results ., Although such methods have heretofore been subject to comparative performance analysis using simulated data , simulating data that realistically reflect data from an actual HDE remains a challenge ., We describe a simulation procedure using actual data from an HDE where some truth regarding parameters of interest is known ., We use the procedure to compare estimates for the proportion of true null hypotheses , the false discovery rate ( FDR ) , and a local version of FDR obtained from 15 different statistical methods . 
| Plasmode is a term used to describe a data set that has been derived from real data but for which some truth is known ., Statistical methods that analyze data from high dimensional experiments ( HDEs ) seek to estimate quantities that are of interest to scientists , such as mean differences in gene expression levels and false discovery rates ., The ability of statistical methods to accurately estimate these quantities depends on theoretical derivations or computer simulations ., In computer simulations , data for which the true value of a quantity is known are often simulated from statistical models , and the ability of a statistical method to estimate this quantity is evaluated on the simulated data ., However , in HDEs there are many possible statistical models to use , and which models appropriately produce data that reflect properties of real data is an open question ., We propose the use of plasmodes as one answer to this question ., If done carefully , plasmodes can produce data that reflect reality while maintaining the benefits of simulated data ., We show one method of generating plasmodes and illustrate their use by comparing the performance of 15 statistical methods for estimating the false discovery rate in data from an HDE . | biotechnology, mathematics, science policy, computational biology, molecular biology, genetics and genomics | null |
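Tying together the construction described in the preceding record, the sketch below shows the spiking step: a randomly chosen proportion 1−π0 of genes in the split-control null data set receives an effect sampled, with weights, from the template effect-size distribution. The additive shift on the standard-error scale and the helper names are illustrative assumptions; the paper's exact steps are given in its Methods.

```python
import numpy as np

def make_plasmode(null_ctrl, null_trt, deltas, pi0, weights=None, seed=0):
    """Spike sampled effects into a plasmode null data set.

    null_ctrl, null_trt: (n_genes, n_arrays) halves of the split control group.
    deltas:  template effect sizes (standardized mean differences).
    pi0:     proportion of genes left truly null.
    weights: optional sampling probabilities over deltas (uniform if None).
    Returns the spiked treatment matrix and a boolean true-null indicator.
    """
    rng = np.random.default_rng(seed)
    n_genes = null_trt.shape[0]
    n_alt = int(round((1.0 - pi0) * n_genes))
    alt = rng.choice(n_genes, size=n_alt, replace=False)
    # Weighted sampling from the template effect-size distribution.
    d = rng.choice(deltas, size=n_alt, replace=True, p=weights)
    # Shift each selected gene by delta times the standard error of the
    # mean difference, matching the standardized definition of delta.
    s2 = 0.5 * (null_ctrl.var(axis=1, ddof=1) + null_trt.var(axis=1, ddof=1))
    se = np.sqrt(s2 * (1.0 / null_ctrl.shape[1] + 1.0 / null_trt.shape[1]))
    spiked = null_trt.copy()
    spiked[alt] += (d * se[alt])[:, None]
    is_true_null = np.ones(n_genes, dtype=bool)
    is_true_null[alt] = False
    return spiked, is_true_null
```

Feeding the spiked matrix together with null_ctrl into the effect_sizes and plasmode_truth helpers sketched earlier closes the loop: P-values from the spiked comparison go to each candidate method, and the known labels give the truth to score against.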
1,405 | journal.pcbi.1005692 | 2,017 | The Stochastic Early Reaction, Inhibition, and late Action (SERIA) model for antisaccades | In the antisaccade task ( 1; for reviews , see 2 , 3 ) , participants are required to saccade in the contralateral direction of a visual cue ., This behavior is thought to require both the inhibition of a reflexive saccadic response towards the cue and the initiation of a voluntary eye movement in the opposite direction ., A failure to inhibit the reflexive response leads to an erroneous saccade towards the cue ( i . e . , a prosaccade ) , which is often followed by a corrective eye movement in the opposite direction ( i . e . , an antisaccade ) ., As a probe of inhibitory capacity , the antisaccade task has been widely used to study psychiatric and neurological diseases 3 ., Notably , since the initial report 4 , studies have consistently found an increased number of errors in patients with schizophrenia when compared to healthy controls , independent of medication and clinical status 5–8 ., Moreover , there is evidence that an increased error rate constitutes an endophenotype of schizophrenia , as antisaccade deficits are also present in non-affected , first-degree relatives of diagnosed individuals ( for example 5 , 7; but for negative findings see for example 9 , 10 ) ., Unfortunately , the exact nature of the antisaccade deficits and their biological origin in schizophrenia remain unclear ., One path to improve our understanding of these experimental findings is to develop generative models of their putative computational and/or neurophysiological causes 11 ., Generative models that capture the entire distribution of responses can reveal features of the data that are not apparent when only considering summary statistics such as mean error rate ( ER ) and reaction time ( RT ) 12–15 ., Additionally , this type of model can potentially relate behavioral findings in humans to their biological substrate ., Here , we apply a generative modeling approach to the antisaccade task ., First , we introduce a novel model of this paradigm based on previous proposals 16–20 ., For this , we formalize the ideas introduced by Noorani and Carpenter 17 and extend them into what we refer to as the Stochastic Early Reaction , Inhibition and late Action ( SERIA ) model ., Second , we apply both models to an experimental data set of three mixed blocks of pro- and antisaccades trials with different trial type probability ., More specifically , we compare several models using Bayesian model comparison ., Third , we use the parameter estimates from the best model to investigate the effects of our experimental manipulation ., We found that there was positive evidence in favor of the SERIA model when compared to our formalization of the model proposed in 17 ., Moreover , the parameters estimated through model inversion revealed the complexity of the decision processes underlying the antisaccade task that is not obvious from mean RT and ER ., This paper is organized as follows ., First , we formalize the model developed in 17 and introduce the SERIA model ., Second , we describe our experimental setup ., Third , we present our behavioral findings in terms of summary statistics ( mean RT and ER ) , the comparison between different models , and the parameter estimates ., Finally , we review our findings , discuss other recent models , potential future developments , and translational applications ., All participants gave written informed consent before the study ., All experimental 
procedures were approved by the local ethics board ( Kantonale Ethikkommission Zürich , KEK-ZH-Nr . 2014-0246 ) ., In this section , we derive a formal description of the models evaluated in this paper ., We start with a formalized version of the model proposed by Noorani and Carpenter in 17 and proceed to extend it ., Their approach resembles the model originally proposed by Camalier and colleagues 21 to explain RT and ER in the double step and search step tasks , in which participants are either asked to saccade to successively presented targets or to saccade to a target after a distractor was shown ., Common to all these tasks is that subjects are required to inhibit a prepotent reaction to an initial stimulus and then to generate an action towards a secondary goal ., Briefly , Camalier and colleagues 21 extended the original ‘horse-race’ model 16 by including a secondary action in countermanding tasks ., In 17 , Noorani and Carpenter used a similar model in combination with the LATER model 22 in the context of the antisaccade task by postulating an endogenously generated inhibitory signal ., Note that this model , or variants of it , have been used in several experimental paradigms ( reviewed in 20 ) ., Here , we limit our discussion to the antisaccade task ., Following 17 , we assume that the RT and the type of saccade generated in a given trial are caused by the interaction of three competing processes or units ., The first unit u_p represents a command to perform a prosaccade , the second unit u_s represents an inhibitory command to stop a prosaccade , and the third unit u_a represents a command to perform an antisaccade ., The time t required for unit u_i to arrive at threshold s_i is given by:, s_i = r_i t ,, ( 1 ), s_i / r_i = t ,, ( 2 ), where r_i represents the slope or increase rate of unit u_i , s_i represents the height of the threshold , and t represents time ., We assume that , on each trial , the increase rates are stochastic and independent of each other ., The time and order in which the units reach their thresholds s_i determines the action and RT in a trial ., If the prosaccade unit u_p reaches threshold before any other unit at time t , a prosaccade is elicited at t ., If the antisaccade unit arrives first , an antisaccade is elicited at t ., Finally , if the stop unit arrives before the prosaccade unit , an antisaccade is elicited at the time when the antisaccade unit reaches threshold ., It is worth mentioning that , although this model is motivated as a race-to-threshold model , actions and RTs depend only on the arrival times of each of the units and ultimately no explicit model of increase rates or thresholds is required ., Thus , for the sake of clarity , we refer to this approach as a ‘race’ model , in contrast to ‘race-to-threshold’ models that explicitly describe increase rates and thresholds ., Formally ( but in a slight abuse of language ) , the two random variables of interest , the reaction time T ∈ [ 0 , ∞ ) and the type of action performed A ∈ {pro , anti} , depend only on three further random variables: the arrival times U_p , U_s , U_a ∈ [ 0 , ∞ ) of each of the units ., The probability of performing a prosaccade at time t is given by the probability of the prosaccade unit arriving at time t , and the stop and antisaccade unit arriving afterwards:, p ( A = pro , T = t ) = p ( U_p = t ) p ( U_a > t ) p ( U_s > t ) ., ( 3 ), The probability of performing an antisaccade at time t is given by, p ( A = anti , T = t ) = p ( U_a = t ) p ( U_p > t ) p ( U_s > t ) + p ( U_a = t ) ∫_0^t p ( U_s = τ ) p ( U_p > τ ) dτ ., ( 4 ),
The first term on the right side of Eq 4 corresponds to the unlikely case that the antisaccade unit arrives before the prosaccade and the stop units ., The second term describes trials in which the stop unit arrives before the prosaccade unit ., It can be decomposed into two terms:, p ( U_a = t ) ∫_0^t p ( U_s = τ ) p ( U_p > τ ) dτ = p ( U_a = t ) ( p ( U_s < t ) p ( U_p > t ) + ∫_0^t p ( U_s = τ ) p ( τ < U_p < t ) dτ ) ,, ( 5 ), = p ( U_a = t ) ( p ( U_s < t ) p ( U_p > t ) + ∫_0^t p ( U_s < τ ) p ( U_p = τ ) dτ ) ,, ( 6 ), The term p ( U_a = t ) ∫_0^t p ( U_s < τ ) p ( U_p = τ ) dτ describes the condition in which the prosaccade unit is inhibited by the stop unit allowing for an antisaccade ., Note that if the prosaccade unit arrives later than the antisaccade unit , the arrival time of the stop unit is irrelevant ., That means that we can simplify Eq 4 to, p ( A = anti , T = t ) = p ( U_a = t ) ( p ( U_p > t ) + ∫_0^t p ( U_s < τ ) p ( U_p = τ ) dτ ) ., ( 7 ), Eqs 3 and 7 constitute the likelihood function of a single trial , and define the joint probability of an action and the corresponding RT ., We refer to this likelihood function as the PRO-Stop-Antisaccade ( PROSA ) model ., It shares the central assumptions of 17 , namely:, ( i ) the time to reach threshold of each of the units is assumed to depend linearly on the rate r ,, ( ii ) it includes a stop unit whose function is to inhibit prosaccades and, ( iii ) there is no lateral inhibition between the different units ., Finally ,, ( iv ) RTs are assumed to be equal to the arrive-at-threshold times ., Note that the RT distributions are different from the arrival time distributions because of the interactions between the units described above ., The main difference of this model compared to 17 is that we do not exclude a priori the possibility of the antisaccade unit arriving earlier than the other units ., Aside from this , both models are conceptually equivalent .
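The single-trial PROSA likelihood of Eqs 3 and 7 can be evaluated numerically once distributions are chosen for the arrival times U_p, U_s, U_a. The sketch below does this with gamma-distributed arrival times; the gamma choice and the parameter values are illustrative assumptions, not the parameterization used in the paper.

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Illustrative arrival-time distributions (seconds) for the three units.
U_P = gamma(a=9.0, scale=0.02)   # prosaccade unit
U_S = gamma(a=7.0, scale=0.02)   # stop unit
U_A = gamma(a=12.0, scale=0.02)  # antisaccade unit

def prosa_pro(t):
    """Eq 3: p(A = pro, T = t)."""
    return U_P.pdf(t) * U_A.sf(t) * U_S.sf(t)

def prosa_anti(t):
    """Eq 7: p(A = anti, T = t)."""
    inhib, _ = quad(lambda tau: U_S.cdf(tau) * U_P.pdf(tau), 0.0, t)
    return U_A.pdf(t) * (U_P.sf(t) + inhib)

# Sanity check: the two densities integrate to 1 over actions and time
# (an upper limit of 2 s is effectively infinity for these parameters).
total = quad(prosa_pro, 0.0, 2.0)[0] + quad(prosa_anti, 0.0, 2.0)[0]
print(round(total, 4))  # approximately 1.0
```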
Until now, we have assumed that the competition leading to late pro- and antisaccades does not depend on time, in the sense that late actions are conditionally independent of RT. This assumption can be weakened by postulating a secondary race between late responses; this leads us to a modified version of the SERIA model that we refer to as the late race SERIA model (SERIAlr). The derivation proceeds as for the SERIA model, except that instead of assuming that the late decision process is time insensitive, we postulate a fourth unit that generates late prosaccades. This version of the model includes an early unit u_e that, for simplicity, we assume produces only prosaccades, an inhibitory unit u_i that stops early responses, a late unit u_a that triggers antisaccades, and a further unit u_p that triggers late prosaccades. As before, if the early unit reaches threshold before any other unit, a prosaccade is generated with probability

p(U_e = t) p(U_i > t) p(U_a > t) p(U_p > t).  (19)

If one of the late units arrives first, the respective action is generated with probability:

Antisaccade: p(U_a = t) p(U_p > t) p(U_e > t) p(U_i > t),  (20)
Prosaccade: p(U_p = t) p(U_a > t) p(U_e > t) p(U_i > t).  (21)

Finally, if the inhibitory unit arrives first, either a late pro- or antisaccade is generated with probability

Antisaccades: p(U_a = t) p(U_p > t) ∫₀ᵗ p(U_i = τ) p(U_e > τ) dτ,  (22)
Prosaccades: p(U_p = t) p(U_a > t) ∫₀ᵗ p(U_i = τ) p(U_e > τ) dτ.  (23)

Implicit in the last two terms is the competition between the late units, which are again assumed to be independent of each other. Formally, this competition is expressed as the probability of, for example, the late antisaccade unit arriving before the late prosaccade unit, p(U_a = t) p(U_p > t). A schematic representation of the model is shown in Fig 2. This late race is similar to the Linear Ballistic Accumulation model proposed by [24], although in that model decisions are the result of a race between ballistic accumulation processes with fixed threshold but stochastic baseline and increase rate; here we only assume that the late decision process is a GO-GO race [21]. The likelihood of an action is given by summing over all possible outcomes that lead to that action:

p(A = pro, T = t) = p(U_e = t) p(U_i > t) p(U_a > t) p(U_p > t) + p(U_p = t) p(U_a > t) p(U_i > t) p(U_e > t) + p(U_p = t) p(U_a > t) ∫₀ᵗ p(U_i = τ) p(U_e > τ) dτ,  (24)
p(A = anti, T = t) = p(U_a = t) p(U_p > t) p(U_i > t) p(U_e > t) + p(U_a = t) p(U_p > t) ∫₀ᵗ p(U_i = τ) p(U_e > τ) dτ.  (25)

We have left out some possible simplifications in Eqs 24 and 25 for the sake of clarity. The conditional probability of a late antisaccade is given by the interaction between the late units, such that

p(U_a < U_p) = ∫₀^∞ p(U_a = t) p(U_p > t) dt = 1 − p(U_p < U_a),  (26)

which is analogous to the probability of a late antisaccade, 1 − π_l, in the SERIA model. This observation shows that the main difference between the SERIA and SERIAlr models is that the former postulates that the distributions of late pro- and antisaccades are equal and conditionally independent of the action performed, whereas the latter constrains the probability of a late antisaccade to be a function of the arrival times of the late units. The expected response times of late pro- and antisaccade actions are given by

(1 / p(U_p < U_a)) ∫₀^∞ t p(U_p = t) p(U_a > t) dt,  (27)
(1 / p(U_a < U_p)) ∫₀^∞ t p(U_a = t) p(U_p > t) dt.  (28)

We refer to these terms as the mean response times of pro- and antisaccade actions, in contrast to the mean arrival times, which are the expected value of any single unit.
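In the SERIAlr model, the probability of a late antisaccade (Eq 26) and the mean response times of late actions (Eqs 27 and 28) are integrals over the late units' arrival-time densities and can be computed by numerical quadrature, for example as follows; the distributions and parameters are again illustrative assumptions.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Assumed arrival-time distributions of the two late units (illustrative).
U_a = stats.invgamma(a=5.0, scale=2.0)   # late antisaccade unit
U_p = stats.invgamma(a=5.0, scale=2.4)   # late prosaccade unit

# Eq 26: probability that the late antisaccade unit wins the late race.
p_late_anti, _ = quad(lambda t: U_a.pdf(t) * U_p.sf(t), 0.0, np.inf)

# Eqs 27-28: mean response times of late pro- and antisaccades, i.e.,
# arrival times conditional on winning the late race.
num_anti, _ = quad(lambda t: t * U_a.pdf(t) * U_p.sf(t), 0.0, np.inf)
num_pro, _ = quad(lambda t: t * U_p.pdf(t) * U_a.sf(t), 0.0, np.inf)
print("p(late antisaccade)    :", p_late_anti)
print("mean late antisaccade T:", num_anti / p_late_anti)
print("mean late prosaccade T :", num_pro / (1.0 - p_late_anti))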
The models above can be further finessed to account for a non-decision time δ by transforming the RT t to t_δ = t − δ. The delay δ might be caused, for example, by conductance delays from the retina to the cortex. In addition, the antisaccade or late units might include a constant delay δ_a, often referred to as the antisaccade cost [1]. Note that the model is highly sensitive to δ because any RT below it has zero probability. In order to relax this condition and to account for early outliers, we assumed that saccades could be generated before δ at a rate η ∈ [0, 1], such that the marginal likelihood of an outlier is

p(T < δ) = p(T_δ < 0) = η.  (29)

For simplicity, we assume that outliers are generated with uniform probability in the interval [0, δ]:

p(T = t) = η/δ if t < δ.  (30)

Furthermore, we assume that the probability of an early outlier being a prosaccade is approximately 100 times higher than that of it being an antisaccade. Because of the new parameter η, the distribution of saccades with an RT larger than δ needs to be renormalized by the factor 1 − η. In the case of the PROSA model, for example, this means that the joint distribution of action and RT is given by the conditional probability

p(A = pro, T = t_δ | t_δ > 0) = p(U_p = t_δ) p(U_a > t_δ − δ_a) p(U_s > t_δ),  (31)
p(U_a < 0) = 0,  (32)
p(A = anti, T = t_δ | t_δ > 0) = p(U_a = t_δ − δ_a) (p(U_p > t_δ) + ∫₀^{t_δ} p(U_p = τ) p(U_s < τ) dτ).  (33)

A similar expression holds for the SERIA models. However, in the PROSA model a unit-specific delay is equal to an action-specific delay, whereas in the SERIA model both early and late responses can generate pro- and antisaccades; thus δ_a represents a delay in the late actions that affects both late pro- and antisaccades.
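The transformation in Eqs 29 and 30 can be expressed as a generic wrapper around any of the likelihoods above. The sketch below applies the non-decision time δ and the uniform outlier density; for brevity it omits the split of outliers into pro- and antisaccades (which the text fixes at roughly 100:1), a straightforward multiplicative factor, and the choice of base density is an assumption.

import numpy as np
from scipy import stats

def with_delay_and_outliers(base_pdf, t, delta, eta):
    # Eqs 29-30: RTs below the non-decision time delta are outliers with
    # uniform density eta / delta; regular responses are shifted by delta
    # and renormalised by (1 - eta).
    t = np.asarray(t, dtype=float)
    return np.where(t < delta,
                    eta / delta,
                    (1.0 - eta) * base_pdf(t - delta))

# Example with an assumed inverse-gamma response-time density:
base = stats.invgamma(a=5.0, scale=2.0).pdf
print(with_delay_and_outliers(base, [0.05, 0.60], delta=0.1, eta=0.02))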
The models discussed in the previous sections can be defined independently of the distribution of the rate of each of the units. In order to fit experimental data, we considered four parametric distributions with positive support for the rates: gamma [13], inverse gamma, lognormal [25] and the truncated normal distribution (similarly to [22] and [24]). Table 1 and Fig 3 summarize these distributions, their parameters, and the corresponding arrival-time densities. We considered five different configurations: 1) all units were assigned inverse-gamma distributed rates; 2) all units were assigned gamma distributed rates; 3) the increase rate of the prosaccade and stop units (or early and inhibitory units) was gamma distributed but the antisaccade (late) unit's increase rate was inverse-gamma distributed; 4) all units were assigned lognormal distributed rates; or 5) all units were assigned truncated normal distributed rates. All the parametric distributions considered here can be fully characterized by two parameters, which we generically refer to as k and θ. Hence, the PROSA model is characterized by the unit parameters k_p, k_a, k_s, θ_p, θ_a, θ_s. The SERIA model is characterized by analogous parameters k_e, k_l, k_i, θ_e, θ_l, θ_i and the probabilities of early and late prosaccades π_e and π_l. In the case of the SERIAlr model, the probability of a late prosaccade is replaced by the parameters of a late prosaccade unit, k_p, θ_p. In addition to the unit parameters, all models included the non-decision time δ, the antisaccade (or late unit) cost δ_a, and the marginal rate of early outliers η.
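Because the time to threshold is t = s/r (Eq 2), each choice of rate distribution implies a corresponding arrival-time distribution; with a unit threshold, a gamma-distributed rate yields an inverse-gamma-distributed arrival time. The following sketch verifies this correspondence numerically (the shape and scale values are arbitrary assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, theta = 4.0, 0.5   # assumed shape and scale of the increase rate r

rates = rng.gamma(k, theta, size=200_000)
arrivals = 1.0 / rates               # arrival times U = s / r with s = 1

# The implied analytic arrival-time density is inverse gamma with the
# same shape and scale 1 / theta; compare it with a crude empirical
# density estimate on a small grid.
grid = np.array([0.3, 0.5, 0.8, 1.2])
h = 0.01
empirical = np.array([np.mean(np.abs(arrivals - g) < h) / (2 * h) for g in grid])
analytic = stats.invgamma(a=k, scale=1.0 / theta).pdf(grid)
print(np.round(empirical, 2), np.round(analytic, 2))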
In this section, we describe the experimental procedures, statistical methods, and inference scheme used to invert the models above. The data are from the placebo condition of a larger pharmacological study that will be reported elsewhere. We aimed to answer three questions with the models considered here. First, we investigated which model (i.e., PROSA, SERIA or SERIAlr) explained the experimental data best, and whether all important qualitative features of the data were captured by this model; we did not have a strong hypothesis regarding the parametric distribution of the data, and comparisons of parametric distributions were therefore only of secondary interest in our analysis. Second, we investigated whether reduced models that kept certain parameters fixed across trial types were sufficient to model the data. Third, we investigated how the probability of a trial type in a block affected the parameters of the model.

Inference on the model parameters was performed using the Metropolis-Hastings algorithm [31]. To increase the efficiency of our sampling scheme, we iteratively modified the proposal distribution during an initial 'burn-in' phase as proposed by [32]. Moreover, we extended this method by drawing from a set of chains at different temperatures and swapping samples across chains. This method, called population MCMC or parallel tempering, increases the statistical efficiency of the Metropolis-Hastings algorithm [33] and has been used in similar contexts before [34]. We simulated 16 chains with a 5th-order temperature schedule [35]. For all but the models including a truncated normal distribution, we drew 4.1 × 10⁴ samples per chain, of which the first 1.6 × 10⁴ were discarded as burn-in. When a truncated normal distribution was included (models m5, m10, and m15), the total number of samples was increased to 6 × 10⁴, of which 2 × 10⁴ were discarded. Convergence was assessed using the Gelman-Rubin criterion [33, 36], aiming for an R̃ statistic below 1.1 for all model parameters; when a simulation did not satisfy this criterion, it was repeated until 99.5 percent of all simulations satisfied it.
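The population MCMC scheme can be sketched as follows: several Metropolis-Hastings chains sample from power posteriors p(y | θ)^T p(θ) at different temperatures T, and adjacent chains occasionally propose to exchange their states. The toy example below uses a one-dimensional bimodal likelihood as a stand-in for the models' likelihoods; the proposal width and step count are illustrative, and the adaptive proposal tuning of [32] is omitted.

import numpy as np

rng = np.random.default_rng(2)

def log_lik(x):
    # Toy bimodal log-likelihood standing in for log p(y | theta).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def log_prior(x):
    return -0.5 * x ** 2 / 25.0   # broad Gaussian prior, unnormalised

n_chains, n_steps = 16, 4000
temps = np.linspace(0.0, 1.0, n_chains) ** 5   # 5th-order schedule
x = np.zeros(n_chains)

for _ in range(n_steps):
    for c in range(n_chains):      # Metropolis step on the power posterior
        prop = x[c] + rng.normal(0.0, 1.0)
        log_a = (temps[c] * (log_lik(prop) - log_lik(x[c]))
                 + log_prior(prop) - log_prior(x[c]))
        if np.log(rng.uniform()) < log_a:
            x[c] = prop
    c = rng.integers(0, n_chains - 1)   # propose a swap between neighbours
    log_a = (temps[c] - temps[c + 1]) * (log_lik(x[c + 1]) - log_lik(x[c]))
    if np.log(rng.uniform()) < log_a:
        x[c], x[c + 1] = x[c + 1], x[c]

print("last state of the posterior (T = 1) chain:", x[-1])

The swap acceptance ratio reduces to (T_c − T_{c+1})(log L(x_{c+1}) − log L(x_c)) on the log scale because the untempered prior terms cancel.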
Models were scored using their log marginal likelihood or log model evidence (LME), defined as the log probability of the data given a model after marginalizing out all its parameters. When comparing different models, the LME corresponds to the log posterior probability of a model under a uniform prior on model identity. Thus, for a single subject with data y, the posterior probability of model k, given models 1 to n, is

p(m_k | y) = p(y | m_k) p(m_k) / Σᵢ₌₁ⁿ p(y | mᵢ) p(mᵢ) = p(y | m_k) / Σᵢ₌₁ⁿ p(y | mᵢ).  (35)

Importantly, this method takes into account not only the accuracy of a model but also its complexity, such that overparameterized models are penalized [37]. A widely used approximation to the LME is the Bayesian Information Criterion (BIC) which, although easy to compute, has limitations (for discussion, see [38]). Here, we computed the LME through thermodynamic integration [33, 39], which provides robust estimates and can easily be computed from samples obtained through population MCMC. One important caveat is that the LME is sensitive to the prior distribution and can be strongly influenced by it [40]. We addressed this issue in two ways. On the one hand, as mentioned above, we defined the prior distribution of the increase rates of all models in terms of the same mean and variance; the priors were thus equal up to their first two moments, and all models were similarly calibrated. On the other hand, we complemented our quantitative analysis with qualitative posterior checks [33], as shown in the results section.

Besides comparing the evidence of each model, we also performed a hierarchical or random-effects analysis described in [38, 41]. This method can be understood as a form of soft clustering in which each subject is assigned to a model using the LME as assignment criterion. Here, we report the expected probability of the model, r_i, which represents the percentage of subjects assigned to the cluster representing model i. This hierarchical approach is robust to population heterogeneity and outliers, and complements reporting the group-level LME. Finally, we compared families of models [42] based on the evidence of each model for each subject summed across conditions.

In addition to the Bayesian analysis, we used classical statistics to investigate the effect of our experimental manipulation on behavioral variables (mean RT and ER) and on the parameters of the models. We have suggested previously [11, 43, 44] that generative models can be used to extract hidden features from experimental data that might not be directly captured by, for example, standard linear methods or purely data-driven machine learning techniques; in this sense, classical statistical inference can be boosted by extracting interpretable data features through Bayesian techniques. Frequentist analyses of RT, ER, and parameter estimates were performed using a mixed-effects generalized linear model with independent variables subject (SUBJECT), prosaccade probability (PP) with levels PP20, PP50 and PP80, and, when pro- and antisaccade trials were analyzed together, trial type (TT). The factor SUBJECT was always entered as a random effect, whereas PP and TT were treated as categorical fixed effects. In the case of ER, we used the probit link function. Analyses were conducted with the function fitglme.m in MATLAB 9.0; the significance threshold α was set to 0.05.
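Thermodynamic integration uses the identity log p(y) = ∫₀¹ E_t[log p(y | θ)] dt, where the expectation at temperature t is taken under the power posterior sampled by the corresponding chain. Given the log-likelihood values recorded by each tempered chain, the estimate is a simple quadrature over the temperature grid. The sketch below, with fabricated chain output, is an illustration rather than the authors' code.

import numpy as np

def thermodynamic_integration(temps, loglik_samples):
    # log p(y) = integral over t in [0, 1] of E_t[log p(y | theta)],
    # approximated with the trapezoidal rule on the temperature grid.
    means = np.array([np.mean(s) for s in loglik_samples])
    return float(np.sum(np.diff(temps) * (means[1:] + means[:-1]) / 2.0))

# Toy usage with a 5th-order schedule and fabricated chain output whose
# expected log-likelihood rises from -100 (prior) to 0 (posterior):
rng = np.random.default_rng(4)
temps = np.linspace(0.0, 1.0, 16) ** 5
fake = [rng.normal(-100.0 * (1.0 - t), 1.0, size=1000) for t in temps]
print("estimated log evidence:", thermodynamic_integration(temps, fake))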
All likelihood functions were implemented in the C programming language using the GSL numerical package (v.1.16). Integrals without an analytical form or well-known approximations were computed through numerical integration using the Gauss-Kronrod-Patterson algorithm [45] implemented in the function gsl_integration_qng. The sampling routine was implemented in MATLAB (v.8.1) and is available as a module of the open-source software package TAPAS (www.translationalneuromodeling.org/tapas).

Forty-seven subjects (age: 23.8 ± 2.9) completed all blocks and were included in further analyses. A total of 27072 trials were recorded, of which 569 trials (2%) were excluded (see Table 4). Both ER and RT showed a strong dependence on PP (Fig 5 and Table 5). Individual data are included in the S1 Dataset and displayed in S1 Fig. The mean RT of correct pro- and antisaccade trials was analyzed independently with two ANOVA tests with factors SUBJECT and PP. In both pro- (F(2,138) = 46.9, p < 10⁻⁵) and antisaccade trials (F(2,138) = 37.3, p < 10⁻⁵) the effect of PP was significant: with higher PP, prosaccade RT decreased, whereas the RT of correct antisaccades increased. On a subject-by-subject basis, between the PP20 and PP80 conditions, 91% of the participants showed increased RT in correct antisaccade trials, while 81% demonstrated the opposite effect (a decrease in RT) in correct prosaccade trials. Similarly, there was a significant effect of PP on ER in both prosaccade (F(2,138) = 376.1, p < 10⁻⁵) and antisaccade (F(2,138) = 347.0, p < 10⁻⁵) trials; this effect was present in all but one participant in antisaccade trials and in all subjects in prosaccade trials. Exemplary RT data of one subject in the PP50 condition are displayed in Fig 6.

Finally, we investigated how some of the parameters of the model were related to each other across subjects. Because it has commonly been reported that schizophrenia is associated with higher ER but also with increased antisaccade RT, an interesting question is whether higher late-action response times are correlated with the percentage of late errors and inhibition failures, i.e., early saccades that are not stopped. We found that the response times of late pro- (F(1,135) = 13.6, p < 0.001) and antisaccades (F(1,135) = 7.1, p < 0.01) were negatively correlated with the probability of a late error (Fig 13), but no significant interaction between PP and response time was found (pro: F(2,135) = 1.7, p = 0.19; anti: F(2,135) = 0.3, p = 0.76). Hence, late responders tended to make fewer late errors, suggesting a speed/accuracy trade-off in addition to the main effect of PP. We further considered whether the percentage of inhibition failures was correlated with the expected arrival time of the late antisaccade unit in antisaccade trials (Fig 13, right). Note that the number of inhibition failures is the same in both trial types in a constrained model, but inhibition failures are errors in antisaccade trials and correct early reactions in prosaccade trials. We found that these parameters were not significantly correlated (F(2,135) = 1.2, p = 0.26). This was also the case when we considered the expected response time of late prosaccades in prosaccade trials (not displayed; F(2,135) = 0.0, p = 0.
98 ) ., Fig 14 illustrates the posterior distribution of late errors and inhibition failures of two representative subjects as estimated using MCMC ., Clearly , PP induced strong differences in the percentage of inhibition failures and late errors in prosaccade trials in both subjects ., The effect of PP is less pronounced in late errors in antisaccade trials ., The posterior distributions also illustrate how the SERIAlr model can capture individual differences: For example , the percentage of late prosaccade errors in the PP80 condition and the percentage of inhibition failures across all conditions are clearly different in each subject ., Our results show that both RT and ER depend on PP ., While this was a highly significant factor in our study , there are mixed findings in previous reports ., ER in antisaccade trials was found to be correlated with TT probability in several studies 29 , 46 , 47 ., However , this effect might depend on the exact implementation of the task 47 , 48 ., Changes in prosaccade ER similar to our study have been reported by 29 and 48 ., Studies in which the type of saccade was signaled at fixation prior to the presentation of the peripheral cue do not always show this effect 47 ., The results on RTs are less consistent in the literature ., Our findings of increased anti- and decreased prosaccade RTs with higher PP are in line with the overall trend in 29 , and with studies in which the cue was presented centrally 47 ., Often , there is an additional increase in RT in the PP50 condition 29 , 47 , which was visible in our data as a slight increase in RT in the PP50 condition on top of the linear effect of PP ., Overall , RTs in our study were relatively slow compared to studies in which the TT cue was separated from the spatial cue 46 , 47 ., However , a study with a similar design and added visual search reported even slower RTs in both pro- and antisaccades 29 ., Formal comparison of generative models can offer insight into the mechanisms underlying eye movement behavior 11 and might be relevant in translational neuromodeling applications , such as computational psychiatry 49–53 ., Here , we have presented what is , to our knowledge , the first formal statistical comparison of models of the antisaccade task ., For this , we formalized the model introduced in 17 and proceeded to develop a novel model that relaxes the one-to-one association of early and late responses with pro- and antisaccades , respectively ., All models and estimation techniques presented here are openly available under the GPLv3 . 0 license as part of the open source package TAPAS ( www . translationalneuromodeling . 
org/tapas ) ., Bayesian model comparison yielded four conclusions at the family level ., First , the SERIA models were clearly favored when compared to the PROSA models ., | Introduction, Materials and methods, Results, Discussion | The antisaccade task is a classic paradigm used to study the voluntary control of eye movements ., It requires participants to suppress a reactive eye movement to a visual target and to concurrently initiate a saccade in the opposite direction ., Although several models have been proposed to explain error rates and reaction times in this task , no formal model comparison has yet been performed ., Here , we describe a Bayesian modeling approach to the antisaccade task that allows us to formally compare different models on the basis of their evidence ., First , we provide a formal likelihood function of actions ( pro- and antisaccades ) and reaction times based on previously published models ., Second , we introduce the Stochastic Early Reaction , Inhibition , and late Action model ( SERIA ) , a novel model postulating two different mechanisms that interact in the antisaccade task: an early GO/NO-GO race decision process and a late GO/GO decision process ., Third , we apply these models to a data set from an experiment with three mixed blocks of pro- and antisaccade trials ., Bayesian model comparison demonstrates that the SERIA model explains the data better than competing models that do not incorporate a late decision process ., Moreover , we show that the early decision process postulated by the SERIA model is , to a large extent , insensitive to the cue presented in a single trial ., Finally , we use parameter estimates to demonstrate that changes in reaction time and error rate due to the probability of a trial type ( pro- or antisaccade ) are best explained by faster or slower inhibition and the probability of generating late voluntary prosaccades . | One widely replicated finding in schizophrenia research is that patients tend to make more errors than healthy controls in the antisaccade task , a psychometric paradigm in which participants are required to look in the opposite direction of a visual cue ., This deficit has been suggested to be an endophenotype of schizophrenia , as first order relatives of patients tend to show similar but milder deficits ., Currently , most models applied to experimental findings in this task are limited to fit average reaction times and error rates ., Here , we propose a novel statistical model that fits experimental data from the antisaccade task , beyond summary statistics ., The model is inspired by the hypothesis that antisaccades are the result of several competing decision processes that interact nonlinearly with each other ., In applying this model to a relatively large experimental data set , we show that mean reaction times and error rates do not fully reflect the complexity of the processes that are likely to underlie experimental findings ., In the future , our model could help to understand the nature of the deficits observed in schizophrenia by providing a statistical tool to study their biological underpinnings . 
| medicine and health sciences, ballistics, classical mechanics, reaction time, random variables, neuroscience, cognitive neuroscience, probability distribution, mathematics, sensory physiology, behavior, schizophrenia, mental health and psychiatry, probability theory, physics, visual system, eye movements, normal distribution, physiology, biology and life sciences, sensory systems, physical sciences, cognitive science | null |
1,992 | journal.pcbi.1006945 | 2,019 | Chemical features mining provides new descriptive structure-odor relationships | Around the turn of the century , with its acknowledgement as an object of science by the Nobel society 1 the hidden sense associated with the perception of odorant chemicals , hitherto considered superfluous to cognition , became a focus of study in its own right ., Odors are emitted by food , which is a source of pleasure 2; they also influence our relations with others 3 ., The olfactory percept encoded in odorant chemicals contributes to our emotional balance and wellbeing: olfactory impairment jeopardizes this equilibrium 4 , 5 ., Neuroscientific studies have revealed that odor perception is the consequence of a complex phenomenon rooted in the chemical properties of a volatile molecule ( described by multiple physicochemical descriptors ) further detected by our olfactory receptors in the nasal cavity 6 ., A neural signal is then transmitted to central olfactory brain structures 7 ., At this stage , a complete neural representation , called “odor” is generated and then , it can be described semantically by various types of perceptual qualities ( e . g . , musky , fruity , floral , woody etc . ) ., While it is generally agreed that the physicochemical characteristics of odorants affect the olfactory percept , no simple and/or universal rule governing this Structure Odor Relationship ( SOR ) has yet been identified ., Why does one odorant smell of rose and another smell of lemon ?, Given the fact that the totality of the odorant message was encoded within the chemical structure , chemists have tried for a long time to identify relationships between chemical properties and odors ., Topological descriptors , eventually associated with electronic properties or molecular flexibility , have been tentatively connected to odorant descriptors ., For instance , molecules carrying a sulfur atom and/or having low molecular weight or low structural complexity are often rated as unpleasant 8–10 ., In addition to the hedonic valence of odors , others have looked for predictive models describing odor perception and quality ( see 11–14 ) ., Indeed , this was the aim of a crowd-sourced challenge recently proposed by IBM Research and Sage called DREAM Olfaction Prediction Challenge ., The challenge resulted in several models that were able to predict pleasantness and intensity as well as 8 out of 19 semantic descriptors ( namely “garlic” , “fish” , “sweet” , “fruit” , “burnt” , “spices” , “flower” and “sour” ) with an average correlation of predictions across all models above 0 . 
5 15 ., Although these investigations brought evidence that chemical features of odorants can be linked to odor perception , the stimulus-percept problem raised a number of issues ., For instance , the stimulus-percept relationship is generally viewed as bijective in that one physicochemical rule describes or predicts one quality ., However , some cases suggest the existence of more than a single rule to relate chemistry and perception ., Indeed , chemicals belonging to different families can trigger a “camphor” or a musky smell 16 ., On the other hand , a single chiral center can render a compound odorless or shift its perceived odor completely , as is the case for ( + ) and ( - ) -carvone 17 ., These examples strengthen the notion that the connections between the chemical space and the perceptual space are subtler than previously thought with multiple physicochemical rules describing a given quality ., At best , the bijective SOR rules may be only be applicable to a very small fraction of the chemical space , with the remaining part of the perceptual space being best described using a multiple rules approach ., The complexity of available databases , they include both thousands of chemical properties and a large heterogeneity in perceptual descriptions , 18–21 means that the manual generation of multiple rules is not feasible ., In other words , to better understand the stimulus-percept issue in olfaction , there is a clear need to extract knowledge automatically and in an intelligible manner ., Such an approach is positioned upstream of predictive modeling since it will enable modeling that extracts descriptive rules from the data that link subgroups belonging to both chemical and perceptual spaces ., The main aim of our study was to develop such a computational framework to discover new descriptive structure-odor relationships ., To achieve this , we first set up a large database containing more than 1600 odorant molecules described by both physicochemical properties and olfactory qualities ., We then developed an original methodology based on the discovery of physicochemical descriptions distinguishing between a group of objects given a target or class label , namely odor qualities ., This approach has been widely studied in Artificial Intelligence ( AI ) , data mining and machine learning ., Specifically , supervised descriptive rules were formalized through subgroup discovery , emerging pattern/contrast-sets mining 22 ., In all cases , we face a set of objects associated with descriptions and these objects are related to one or several class labels ., This new pattern mining method , a variant of redescription mining 23 , allows the discovery of pairs consisting of a description ( of physicochemical properties ) and a label ( or sub-set of labels , olfactory qualities ) ., The strength of the rule ( SOR in our application ) is evaluated through a new quality-control measure detailed in the Methods section ., We designed and set up a database describing odorant molecules by both their perceptual and physicochemical properties ., Here , data from different sources were extracted and grouped:, ( i ) for odorant identification and olfactory qualities , we referred respectively to the PubChem website ( https://pubchem . ncbi . nlm . nih . gov/ ) and the textbook by Arctander 24;, ( ii ) for physicochemical properties , we referred to the Dragon software package ( http://www . talete . mi . it/index . 
htm ) ., Olfactory qualities were thus gathered from the book “Perfume and Flavor Chemicals” , published in 1969 by Steffen Arctander ., In this book , Arctander gives a complete description , including olfactory and trigeminal qualities as well as flavors , of 3102 odorants ( detailed physicochemical properties of 1689 odorants among these 3102 odorants were retrieved , see below ) ., These odorants were further identified by chemical name , molecular weight and corresponding olfactory qualities ., Here , the 74 olfactory qualities selected by Chastrette and colleagues 25 were used as a reference list ., These qualities were selected in a study of the whole of Arctander’s book by excluding those that did not provide qualitative olfactory information and those that were the least frequent ., Note that before selecting this source , we ran a comparison with other existing Atlases and websites used for research , teaching and applicative purposes: specifically , the Dravnieks Atlas 26 , the Boelens Atlas ( see 27 ) , and the Flavornet website ( http://www . flavornet . org ) ., These sources ( atlases , book and website ) were compared along a series of parameters ( the comparison took into account all odorants for which we collected CID numbers ) ., The first parameter of interest was the number of molecules studied in the source , and was respectively 1689 , 138 , 263 , and 660 for the Arctander , the Dravnieks , the Boelens and the Flavornet ( here , only molecules for which we found a PubChem Compound Identification or CID are taken into account ) ., The second parameter was the number of evaluators ( and their expertise level ) who smelled the compounds and provided the olfactory qualities: one trained evaluator for the Arctander , a large panel of evaluators for the Dravnieks ( although there seems to be a large heterogeneity in the expert profile of these panelists , and little information as to the extent of training that panelists were given ) , six trained evaluators for the Boelens , and no information is given regarding the panelists for the Flavornet website ., Third , when considering the way olfactory qualities were collected in the source , both the Arctander and the Flavornet used a binary format ( presence/absence of quality ) , and both the Dravnieks and the Boelens used a scale of intensity or agreement ., Fourth , we compared the number of olfactory qualities used in each atlas/book/website and observed the following distribution ( the average number of qualities per molecule is in brackets ) : 74 ( 2 . 88 ) for the Arctander , 146 ( 29 . 99 ) for the Dravnieks , 30 ( 12 . 86 ) for the Boelens , and 197 ( 2 . 72 ) for the Flavornet ., Note also that the minimum ( and the maximum ) number of qualities for one molecule was: Arctander ( min: 1; max: 10 ) , Dravnieks ( min: 5; max: 52 ) , Boelens ( min: 0; max: 22 ) , Flavornet ( min: 1; max: 5 ) ., Thus , this analysis showed that whereas some sources are characterized by a large number of molecules ( e . g . Arctander and Flavornet ) , others contain only a limited number of odorants ( e . g . Boelens and Dravnieks ) ., Moreover , there is great heterogeneity between these different sources with regards to the number and the degree of expertise of the evaluators ., Some sources involve a large number of evaluators but with heterogeneous profiles ( e . g . Dravnieks ) and others involve a limited number of experts ( e . g . 
Boelens and Arctander). Finally, whereas some sources have, on average, between 10 and 30 qualities per odorant (e.g., Boelens and Dravnieks), the average number is around three for others (e.g., Arctander and Flavornet). In view of these parameters, and because the descriptive approach used in this study requires a large database, we used the Arctander book, as it contained the highest number of odorant molecules (1689) and a reasonable number of qualities per odorant (2.88 on average).

Odorant physicochemical properties were then obtained using Dragon, a software application that enables the calculation of 4885 molecular descriptors (Talete). Descriptors included in our dataset ranged from the simplest atom types, functional groups and fragment counts to topological and geometrical descriptors. As Dragon requires 3D structure files, these were collected from the PubChem website (https://pubchem.ncbi.nlm.nih.gov) using the compound identifier (CID) of each odorant. Individual odorant CIDs were obtained by using the CAS Registry Number and/or the chemical name of the odorant as an entry in the PubChem website. In total, 1689 CIDs were found for the 3102 odorants.

In the following, we study the set M of odorant molecules described by n physicochemical properties denoted F. Each property f_i ∈ F is a function that associates a real value with a molecule: f_i: M → image(f_i), with image(f_i) an interval of ℝ. The olfactory qualities are denoted by O, and class is a mapping that associates a subset of O with each molecule: class: M → 2^O.

We developed an original subgroup discovery approach to mine descriptive rules that specifically characterize subsets of olfactory qualities O. The specificity of this approach is that it can extract rules with several olfactory qualities as targets and treat unbalanced classes robustly, i.e., the fact that some olfactory qualities are very rare (e.g., 'musty') compared to others (e.g., 'fruity'). Subgroup discovery is a generic data mining method aimed at discovering regions in the data that stand out with respect to a given target. We instantiated this framework to identify the conditions on odorant physicochemical properties that are strongly associated with olfactory qualities. A structure-odor rule (SORule), denoted D → Q, is defined by a physicochemical description D and a set of olfactory qualities Q ⊆ O. The description is a set of n intervals D = ⟨[x₁, y₁], [x₂, y₂], …, [xₙ, yₙ]⟩, each being a restriction on the value image of its corresponding physicochemical property: [xᵢ, yᵢ] ⊆ image(fᵢ). The molecules whose values on the physicochemical descriptors belong to the intervals of the description D are members of the coverage of D:

coverage(D) = {m ∈ M | ∀i = 1…n, xᵢ ≤ fᵢ(m) ≤ yᵢ}.

We count the number of molecules in the coverage with support(D) = |coverage(D)|. The quality of a rule is evaluated with respect to the olfactory qualities of the molecules in its coverage. First, the precision measure gives the proportion of the molecules in the coverage of D that also have (part of) the olfactory qualities Q:

P(D → Q) = |{m ∈ coverage(D) | class(m) ⊆ Q}| / support(D).

This is the percentage of times the rule is triggered for molecules whose qualities are in Q. On the other hand, it is also important to know whether the rule covers all the molecules of quality Q.
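The notions of coverage, support and precision translate directly into code. The sketch below is illustrative only: since the values of Table 1 are not reproduced here, the toy property values and quality labels are invented so as to reproduce the quoted results coverage(D) = {2, 3, 5, 6} and P(D → Q) = 2/4.

import numpy as np
import pandas as pd

# Invented toy data standing in for Table 1.
mols = pd.DataFrame(
    {"MW": [122.0, 136.2, 150.2, 127.0, 140.0, 131.0],
     "nAt": [17, 24, 26, 23, 25, 28],
     "nC": [8, 10, 11, 9, 10, 12]},
    index=[1, 2, 3, 4, 5, 6])
qualities = {1: {"vanillin"}, 2: {"vanillin"}, 3: {"woody"},
             4: {"fruity"}, 5: {"vanillin"}, 6: {"woody"}}

def coverage(D):
    # Molecules whose values fall inside every interval of the description.
    mask = np.ones(len(mols), dtype=bool)
    for prop, (lo, hi) in D.items():
        mask &= (mols[prop] >= lo) & (mols[prop] <= hi)
    return set(mols.index[mask])

def precision(D, Q):
    cov = coverage(D)
    hits = sum(1 for m in cov if qualities[m] <= Q)   # class(m) subset of Q
    return hits / len(cov)

D = {"MW": (128, 151), "nAt": (23, 29), "nC": (9, 12)}
print(coverage(D), precision(D, {"vanillin"}))        # {2, 3, 5, 6} 0.5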
This is what the recall measure evaluates:

R(D → Q) = |{m ∈ coverage(D) | class(m) ⊆ Q}| / |{m ∈ M | class(m) ⊆ Q}|.

These two measures behave in opposite ways: when one increases, the other decreases. One way to evaluate a rule globally is the F1 measure, the harmonic mean of precision and recall:

F1(D → Q) = 2 · P(D → Q) · R(D → Q) / (P(D → Q) + R(D → Q)).

As mentioned above, the olfactory qualities are more or less frequent in the data. To take this into account, the Fβ measure gives more importance to the precision measure for rare olfactory qualities, while favoring the recall measure for frequent qualities:

Fβ(D → Q) = (1 + β(support(Q))) · P(D → Q) · R(D → Q) / (β(support(Q)) · P(D → Q) + R(D → Q)),

with support(Q) = |{m ∈ M | class(m) ⊆ Q}| and

β(x) = (0.5 × (1 + tanh((x − x_β) / l_β)))².

Here, the terms x_β and l_β determine the shape of the sigmoid and can be set by the experimenter. Given that, our approach aims to discover rules D → Q whose support support(D) is greater than a threshold minSupp and with |Q| lower than or equal to a value maxQual. These parameters make it possible to identify rules that are supported by sufficiently many odorant molecules and that are specific to a small set of olfactory qualities. The maxQual parameter enforces that the right-hand side of a rule contains a limited number of olfactory qualities, so as to be interpretable by the analyst. Similarly, a maxProp parameter limits the number of (physicochemical) conditions in the left-hand side of the rules.

To illustrate the previous definitions, consider the toy olfactory dataset given in Table 1. This dataset contains 6 molecules identified by their IDs, M = {1, 2, 3, 4, 5, 6}. Each molecule is described by its molecular weight MW, its number of atoms nAt and its number of carbon atoms nC, that is, F = {MW, nAt, nC}. The molecules are also associated with their olfactory qualities among O = {fruity, vanillin, woody}. Consider the description

D = ⟨[128, 151], [23, 29], [9, 12]⟩.

Its coverage is coverage(D) = {2, 3, 5, 6}. If we consider the odorant quality Q = {vanillin}, as there are 2 molecules of coverage(D) with this quality, the precision of the rule is P(D → Q) = 2/4. As there are 3 molecules in the whole dataset with that quality, the recall of the rule is R(D → Q) = 2/3, and its F1 measure is thus F1(D → Q) = 2 · (2/4) · (2/3) / (2/4 + 2/3) = 4/7.

Detailed information regarding the principle of the algorithm is provided as S1 Text. Our olfactory dataset includes 1689 molecules described by 74 olfactory qualities. The dataset is multi-labeled, each molecule being associated with one or several olfactory qualities: on average, each molecule refers to 2.88 of the 74 possible labels. Moreover, the frequency of olfactory qualities across odorants is unbalanced: on average, a quality is used in 65.79 molecules (standard deviation: 105.28); the maximum is reached for the 'fruity' quality (used in 570 molecules), the minimum for 'musty' (used in only 2 molecules).
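The F1 and Fβ measures can be implemented in a few lines; the tanh-based weight β(x) makes Fβ collapse towards the precision for rare qualities and towards a balanced measure for frequent ones. The defaults below match the XBeta (110) and IBeta (20) values used for rule generation later in the paper, although mapping them onto x_β and l_β here is an assumption; the check reproduces the toy value F1 = 4/7.

import numpy as np

def f1(p, r):
    return 2.0 * p * r / (p + r)

def beta(x, x_beta=110.0, l_beta=20.0):
    # Small for rare qualities (precision dominates), close to 1 for
    # frequent ones (recall gains weight).
    return (0.5 * (1.0 + np.tanh((x - x_beta) / l_beta))) ** 2

def f_beta(p, r, support_q, x_beta=110.0, l_beta=20.0):
    b = beta(support_q, x_beta, l_beta)
    return (1.0 + b) * p * r / (b * p + r)

print(f1(2 / 4, 2 / 3))                   # toy example: F1 = 4/7 = 0.571...
print(f_beta(2 / 4, 2 / 3, support_q=3))  # rare quality: close to P = 0.5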
Fig 1 illustrates the entire building process of the database, and Fig 2 presents a word cloud of the 74 olfactory qualities. With regard to the physicochemical properties, our original database contained more than 4000 physicochemical features. For the purpose of a rational approach where features can be interpreted on a chemical basis, we selected attributes that were relevant and, more importantly, easily interpretable. This approach is strongly inspired by the so-called 3D-olfactophore, where easily interpretable features computed on odorants sharing the same olfactory percept are gathered in the three dimensions of space. Such features are typically hydrogen-bond donor/acceptor, aromatic cycle, charged atom, etc. This methodology is typically useful for molecular scientists to learn about structure-property relationships and to design new molecules that fulfill the properties of these olfactophores [28]. Here, the features we used were a series of physicochemical properties: we selected constitutional, topological and chemical descriptors that represent molecular features which can be easily interpreted and extrapolated for further predictive models. They include the following categories: constitutional indices (n = 29; e.g., 'Molecular weight'), ring descriptors (n = 7; e.g., 'Number of rings'), functional group counts (n = 40; e.g., 'Number of esters'), and molecular properties (n = 6; e.g., 'Topological polar surface area'). To select these descriptors, we screened the whole set of descriptors proposed by Dragon, carefully retaining those able to provide information interpretable by any molecular scientist. The cost of selecting interpretable descriptors is a reduction in the description of the dataset. To evaluate the loss of information on the variance of a given molecular dataset, descriptors were computed on a set of 2620 odorants provided by Saito and colleagues [29]. In total, 347 descriptors remained after filtering out those that were correlated (above 0.85), constant across the whole dataset, or not available for the whole dataset. After the dimensionality reduction, our selected 82 descriptors accounted for 37.2% of the original variance.
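The descriptor filtering described above (dropping descriptors that are unavailable, constant, or correlated above 0.85) can be sketched with a simple greedy pass; the paper does not specify the exact procedure or ordering, so the implementation below is only one plausible reading.

import numpy as np
import pandas as pd

def filter_descriptors(X, corr_threshold=0.85):
    X = X.dropna(axis=1)              # drop descriptors with missing values
    X = X.loc[:, X.nunique() > 1]     # drop constant descriptors
    corr = X.corr().abs()
    keep = []                          # greedy pass over remaining columns
    for col in X.columns:
        if all(corr.loc[col, k] <= corr_threshold for k in keep):
            keep.append(col)
    return X[keep]

# Toy usage on random data standing in for the Dragon descriptor matrix:
rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(100, 4)),
                 columns=["d0", "d1", "d2", "d3"])
X["d4"] = 1.0                                         # constant descriptor
X["d5"] = X["d0"] * 0.99 + rng.normal(0, 0.05, 100)   # redundant with d0
print(filter_descriptors(X).columns.tolist())         # ['d0', 'd1', 'd2', 'd3']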
When choosing 82 descriptors randomly within this set of 347, the variance always falls below 25%, suggesting that our descriptors performed quite well at describing a molecular set with a certain degree of variability. Finally, when projecting the entire set of molecules onto the first two components of a PCA, the dataset remains well spread and molecules are still distinguishable.

First, the physicochemical rules were generated for each of the 74 qualities based on the 82 descriptors, using the following parameters: maxoutput (100), beamwidth (30), MaxQual (1), MaxProperties (8), maxSupp (700), XBeta (110), IBeta (20), and four different minSupp values (5, 10, 20 and 30) (see the Methods section and S1 Text for a detailed definition of these parameters). Second, an algorithm searched for the best rules or combinations of rules (with a maximum of 12 rules) for each of the 74 qualities and the four different minSupp values (from 5 to 30). At this stage, the rules or combinations of rules were ranked as a function of their precision. To evaluate the best rule or combination of rules describing each quality, we calculated for each rule (or combination of rules) the Euclidean distance from the 'ideal' situation, defined as the data point with an error of 0 (error was calculated as one minus precision) and the best recall (a value of 1 on the y-axis, meaning that all molecules belonging to the quality are described by the physicochemical rules). The point(s) with the smallest distance was (were) selected as the best rule or combination of rules for a given quality. From this selection, we built a list of rules and/or combinations of rules for each quality (see S1 Table). Around 90% of the olfactory qualities were described by 1 to 6 rules, and 66% (49 of the 74 qualities) were described by 3, 4 or 5 rules (see Fig 3a). Moreover, for the same quality, different rules or combinations of rules were sometimes selected because their distance to the 'ideal' situation (recall: 1; error: 0) was the same (see an example in Fig 3b). Fig 3c shows an example of the chemical structures of molecules described by the same quality (here, jasmine) and the corresponding rules/combinations of rules.

To compare olfactory qualities according to their description by physicochemical rules, we plotted all physicochemical rules (and/or combinations of rules) of each quality in a 2D space of error (x-axis) and recall (y-axis) (Fig 4). Whereas some qualities lie close to the 'ideal' situation, others are very far from it. First, 38 qualities (51.35%, named 'Group 1') exhibited an error rate lower than 0.5 and a recall greater than or equal to 0.5 (sulfuraceous, vanillin, phenolic, musk, sandalwood, almond, orange-blossom, jasmine, hay, tarry, smoky, lilac, piney, camphor, grape, anisic, buttery, gassy, fatty, waxy, acid, minty, aromatic, mossy, violet, citrus, peppery, caramelic, medicinal, tobacco, pear, lily, sour, orange, animal, honey, hyacinth, rose). Second, 17 qualities (22.97%, named 'Group 2') exhibited an error rate lower than 0.5 but a recall lower than 0.5 (amber, geranium, metallic, fruity, pineapple, ethereal, plum, woody, balsamic, creamy, green, berry, oily, spicy, floral, winey, herbaceous).
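Selecting the best rule set(s) for a quality amounts to finding the candidates closest, in Euclidean distance, to the ideal point (error 0, recall 1); ties explain why several rule combinations can be retained for the same quality. A minimal sketch with made-up candidate values:

import numpy as np

def best_rule_sets(candidates):
    # Distance of each candidate (rule or combination of rules) to the
    # ideal point: error = 1 - precision = 0 and recall = 1.
    err = np.array([1.0 - c["precision"] for c in candidates])
    rec = np.array([c["recall"] for c in candidates])
    dist = np.sqrt(err ** 2 + (1.0 - rec) ** 2)
    best = np.flatnonzero(np.isclose(dist, dist.min()))   # ties are kept
    return [candidates[i] for i in best]

candidates = [{"precision": 0.9, "recall": 0.5},
              {"precision": 0.7, "recall": 0.8},
              {"precision": 0.8, "recall": 0.7}]
print(best_rule_sets(candidates))   # two equally distant rule sets survive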
Third, 18 qualities (24.32%, named 'Group 3') showed an error rate greater than or equal to 0.5 and a recall greater than or equal to 0.5 (leathery, aldehydic, mushroom, coco, mimosa, tea, nut, root, peachy, earthy, powdery, orris, apple, leafy, apricot, musty, brandy, narcissus). Fourth, one quality (1.35%, named 'Group 4') showed an error rate greater than or equal to 0.5 and a recall lower than 0.5 (banana).

To further examine whether the generated physicochemical rules were specific to a given perceptual quality, in other words whether they provided a good and relevant model, we used bootstrap confidence intervals to evaluate whether the F-measure of the generated rules/models was significant. Knowing that a given set of rules covers X molecules, we sampled 100,000 sets of X molecules (with replacement) and calculated the F-measure of each sample according to the studied quality. Next, the 99% confidence intervals (CI) of these sets were computed, and the F-measure of the set of discovered rules was compared to this CI. For all 74 qualities, the F-measure was significant in that its value was outside (and greater than) the 99% CI. Finally, to examine how the model built with 82 physicochemical descriptors performed compared to a model built with all 4000 descriptors, we calculated the F-measure for each quality (computed on the basis of all sets of rules) in both types of models. On average, the F-measure was significantly greater (p < 0.0001) in the model with 82 physicochemical descriptors (mean = 0.592, SEM = 0.012) than in the model with all 4000 descriptors (mean = 0.487, SEM = 0.011), reflecting that the use of a small but explicative and intelligible set of descriptors enhances performance.

To sum up, we provide here a computational framework that enables the automatic extraction, from a complex and heterogeneous dataset, of descriptive rules linking subgroups in a chemical space to subgroups in a perceptual space. As can be seen in Fig 3a, only 3 qualities could be best described by a single physicochemical rule, whereas more than two thirds of the qualities needed between 3 and 5 rules to be described. With regard to the confidence of the rules, a gradient was observed whereby some rules were associated with a good recall and a minimal error rate, whereas other rules exhibited lower confidence in describing olfactory qualities. Note that all the generated rules are available to the reader in S1 Table. The computational approach that we developed is available at the following address: https://projet.liris.cnrs.
fr/olfamine/ Here , we analyzed some of the best-known qualities in the field of olfactory evaluation , namely fruity , floral , woody , camphor , earthy , spicy , fatty ., The analysis of the rules and combinations of rules ( see S1 Table ) , shows that the number of rules is quite high for these qualities ranging from six ( floral ) , seven ( camphor , earthy ) , eight ( spicy , woody ) , nine ( fatty ) to twelve ( fruity ) ., From a physicochemical point of view , translated into interpretable rules , the floral quality is characterized by either aromatic and strongly hydrophobic molecules or non-aromatic and moderately hydrophobic odorants ., For camphor , molecules are rather small in size , moderately hydrophobic , and eventually cyclic ., The earthy quality is characterized by moderately hydrophobic molecules with unsaturations ., The spicy quality is characterized by rather rigid molecules , eventually aromatic ., Woody quality includes hydrophobic molecules , rather not cyclic nor aromatic ., For the fatty , the molecules have a larger carbon-chain skeleton which is highly hydropobic with aldehyde or acid functions ., Finally , for the fruity quality , molecules are described as having moderate hydrophobicity and being medium to large in size ., To push the interpretation further , we examined qualities associated with generated physicochemical rules with the highest level of confidence ., Here , we attempted, ( i ) to understand the rules based on a priori knowledge and, ( ii ) to examine whether the rules could raise new scientific assumptions ., We analyzed a total of eleven qualities corresponding to the first quartile of the distribution of all rules ., Based on the Euclidian distance to the “ideal” situation; 473 rules were generated by our analysis ( see Fig 4 ) ., These qualities were: sulfuraceous , vanillin , phenolic , musk , sandalwood , almond , orange-blossom , jasmine , hay , tarry , smoky ., The “sulfuraceous” quality was described as follows: R1: 0 . 0<nCsp2<0 . 0 0 . 0<nHAcc<0 . 0 11 . 611<Se<22 . 069 144 . 039<SAtot<222 . 269 0 . 0<TPSA ( Tot ) <50 . 6; R2: 1 . 0<nS<2 . 0 1 . 0<nC<6 . 0 0 . 0<N%<0 . 0 25 . 0<C%<33 . 3 38 . 8<TPSA ( Tot ) <64 . 18; R3: 1 . 0<nS<2 . 0 -0 . 264<Hy<0 . 323 102 . 715<SAtot<222 . 269 0 . 0<O%<6 . 3 ., These descriptions suggest , somewhat intuitively , that sulfuraceous odorants encompass molecules with one or two sulfur atoms and are moderately heavy , with a maximum of six carbon atoms ., Four rules defined the “phenolic” quality: R1: 216 . 155<SAtot<218 . 661 0 . 0<nCrs<0 . 0 0 . 0<nOHp<0 . 0 30 . 4<C%<45 . 0 0 . 0<Ui<2 . 322; R2: 1 . 117<Mi<1 . 118 -0 . 768<Hy<-0 . 158 0 . 0<nR = Ct<0 . 0 43 . 5<H%<50 . 0 0 . 0<nOxiranes<0 . 0 0 . 0<nR = Cp<0 . 0; R3: 2 . 807<Uc<2 . 807 3 . 0<nCp<5 . 0 0 . 4<ARR<0 . 545 2 . 0<Ui<2 . 0 -0 . 888<Hy<-0 . 277 37 . 8<C%<40 . 0 0 . 0<nOHt<0 . 0 0 . 0<nOHp<0 . 0; R4: 0 . 6<ARR<0 . 75 1 . 0<nArOH<2 . 0 2 . 807<Uc<3 . 17 170 . 356<SAtot<222 . 475 0 . 893<MLOGP<2 . 778 0 . 0<nArCO<0 . 0 ., Thus , odorants having a “phenolic” quality are of moderate size , with few unsaturations and low hydrophilicity ( and high lipophilicity ) ., It can be regarded as a cyclic molecule ., A good consistency is observed between the 4 rules ., For “vanillin” , the following rules were observed: R1: 0 . 5<ARR<0 . 545 3 . 0<nCb-<4 . 0 3 . 0<nHAcc<3 . 0 1 . 0<nArOR<2 . 0 0 . 0<nR = Cp<0 . 0 0 . 0<nArCO<0 . 0 38 . 1<C%<46 . 2; R2: 3 . 0<nCb-<3 . 0 3 . 0<nO<3 . 0 0 . 0<nArCOOR<0 . 0 -0 . 727<Hy<0 . 66 42 . 1<H%<50 . 0 0 . 
0<nArCO<0 . 0 38 . 1<C%<42 . 3; R3: 2 . 0<nCsp3<2 . 0 1 . 0<nArOR<2 . 0 0 . 699<MLOGP<1 . 75 0 . 0<nArCOOR<0 . 0 0 . 0<nArCO<0 . 0 2 . 0<nCb-<4 . 0 ., These descriptions suggest that odorants belonging to this group are mostly cyclic molecule ( like the prototypical molecule vanillin ) , with 3 Hydrogen bond acceptors branched on saturated carbons atoms on an aromatic cycle ., When considering the “musk” quality , the following rules emerged: R1: 3 . 72<MLOGP<4 . 045 2 . 0<nCrs<15 . 0 1 . 0<nCIC<1 . 0 333 . 936<SAtot<436 . 545; R2: 4 . 0<nCb-<6 . 0 33 . 0<nBT<47 . 0 0 . 0<nCbH<2 . 0; R3: 0 . 0<RBN<0 . 0 11 . 0<nCs<16 . 0; R4: 238 . 46<MW<270 . 41 57 . 1<H%<63 . 8 402 . 5<SAtot<440 . 301 0 . 0<nR07<0 . 0 -0 . 931<Hy<-0 . 763 0 . 0<ARR<0 . 316 0 . 0<RBN<12 . 0 0 . 0<nCt<3 . 0 ., Musky molecules are heavy and hydrophobic compounds ., This is reflected by a rather large logP , surface area or molecular weight ., From a general point of view , these descriptors reflect well the features of musky odorants ., For the “sandalwood” quality , two rules were observed: R1: 3 . 0<nCrt<5 . 0 1 . 0<nHDon<1 . 0 0 . 0<nR04<0 . 0 1 . 0<nCrq<2 . 0; R2: 3 . 0<nCrt<5 . 0 1 . 0<nHDon<1 . 0 -0 . 429<Hy<-0 . 325 2 . 0<nR05<3 . 0 ., Sandalwood odorants are quite diverse and minor modifications within their structure can abolish the sandalwood note ., The rules which are mined here correspond to models which are very simple and hardly capture the subtlety of this odorant family 28 ., The description presented here corresponds to the prototypic beta-santalol structure which has a campholenic skeleton ., The “almond” quality was described by four rules: R1: 0 . 0<nCp<0 . 0 152 . 443<SAtot<165 . 41 1 . 0<nO<2 . 0 2 . 0<Ui<2 . 585; R2: 0 . 706<ARR<0 . 8 0 . 0<nArCO<0 . 0 1 . 0<nO<1 . 0 3 . 0<Uc<3 . 807 0 . 143<MLOGP<3 . 571 -0 . 917<Hy<-0 . 71 0 . 0<nCb-<2 . 0; R3: 1 . 0<nH<5 . 0 0 . 0<nOxiranes<0 . 0 1 . 0<nHAcc<3 . 0 1 . 0<nN<2 . 0 23 . 79<TPSA ( Tot ) <90 . 27 0 . 0<O%<14 . 3 0 . 0<ARR<0 . 75; R4: 1 . 0<nArCHO<1 . 0 11 . 0<nBT<20 . 0 45 . 0<C%<47 . 1 -0 . 864<Hy<-0 . 668 1 . 0<nHAcc<2 . 0 ., These descriptions suggest that odorants evoking an almond-like quality are compounds bearing at least one oxygen and/or other hydrogen bond-accepting atom but also bearing an aromatic cycle ., This means that the structure bears several unsaturations ., These chemicals are thus relatively small and can be compared to the prototypical structure of benzaldehyde ., Four physicochemical rules described the “orange-blossom” quality: R1: 10 . 0<nCsp2<10 . 0 9 . 23<TPSA ( Tot ) <58 . 89; R2: 1 . 0<nArNH2<1 . 0 213 . 361<SAtot<326 . 286 0 . 0<nR = Cs<0 . 0 0 . 0<nCt<0 . 0 37 . 9<C%<51 . 5; R3: 0 . 773<ARR<0 . 857 39 . 4<H%<45 . 5 9 . 23<TPSA ( Tot ) <52 . 32 3 . 0<nCb-<5 . 0; R4: 47 . 243<Se<53 . 454 4 . 0<nCbH<9 . 0 3 . 287<MLOGP<5 . 007 3 . 0<nHAcc<4 . 0 0 . 231<ARR<0 . 462 ., These descriptions characterize very diverse structures ranging from very small to medium or large compounds ., As a general rule , one can note the presence of unsaturations , consistent with a terpenic structure , associated with a quite hydrophobic feature ., The “jasmine” quality was described by six rules: R1: 12 . 0<nC<13 . 0 43 . 37<TPSA ( Tot ) <44 . 76; R2: 336 . 137<SAtot<337 . 327 0 . 0<nR = Cs<0 . 0; R3: 7 . 0<nCsp2<8 . 0 1 . 0<nCb-<1 . 0 2 . 0<nCp<3 . 0 50 . 0<H%<53 . 3 4 . 0<nCsp3<5 . 0 1 . 0<nCs<3 . 0 1 . 0<nRCOOR<1 . 0; R4: 1 . 0<nCb-<1 . 0 2 . 034<MLOGP<2 . 386 2 . 0<nHet<3 . 0 7 . 0<nCsp2<8 . 0 1 . 0<nCp<2 . 0 -0 . 807<Hy<-0 . 727 0 . 
0<nArCOOR<0 . 0 0 . 0<nArOR<0 . 0; R5: 5 . 0<RBN<6 . 0 1 . 0<nRCO<1 . 0 291 . 434<SAtot<350 . 346 10 . 0<nC<13 . 0 0 . 0<nArCO<0 . 0; R6: 1 . 0<nR = Ct<1 . 0 4 . 0<nCs<8 . 0 2 . 0<nCconj<4 . 0 0 . 0<nCt<0 . 0 -0 . 912<Hy<-0 . 873 ., This rule characterizes, ( i ) | Introduction, Methods, Results, Discussion | An important goal in researching the biology of olfaction is to link the perception of smells to the chemistry of odorants ., In other words , why do some odorants smell like fruits and others like flowers ?, While the so-called stimulus-percept issue was resolved in the field of color vision some time ago , the relationship between the chemistry and psycho-biology of odors remains unclear up to the present day ., Although a series of investigations have demonstrated that this relationship exists , the descriptive and explicative aspects of the proposed models that are currently in use require greater sophistication ., One reason for this is that the algorithms of current models do not consistently consider the possibility that multiple chemical rules can describe a single quality despite the fact that this is the case in reality , whereby two very different molecules can evoke a similar odor ., Moreover , the available datasets are often large and heterogeneous , thus rendering the generation of multiple rules without any use of a computational approach overly complex ., We considered these two issues in the present paper ., First , we built a new database containing 1689 odorants characterized by physicochemical properties and olfactory qualities ., Second , we developed a computational method based on a subgroup discovery algorithm that discriminated perceptual qualities of smells on the basis of physicochemical properties ., Third , we ran a series of experiments on 74 distinct olfactory qualities and showed that the generation and validation of rules linking chemistry to odor perception was possible ., Taken together , our findings provide significant new insights into the relationship between stimulus and percept in olfaction ., In addition , by automatically extracting new knowledge linking chemistry of odorants and psychology of smells , our results provide a new computational framework of analysis enabling scientists in the field to test original hypotheses using descriptive or predictive modeling . | An important issue in olfaction sciences deals with the question of how a chemical information can be translated into percepts ., This is known as the stimulus-percept problem ., Here , we set out to better understand this issue by combining knowledge about the chemistry and cognition of smells with computational olfaction ., We also assumed that not only one , but several physicochemical models may describe a given olfactory quality ., To achieve this aim , a first challenge was to set up a database with ~1700 molecules characterized by chemical features and described by olfactory qualities ( e . g . 
fruity , woody ) ., A second challenge consisted in developing a computational model enabling the discrimination of olfactory qualities based on these chemical features ., By meeting these two challenges , we provided new chemical models for several olfactory qualities , describing why an odorant molecule smells fruity or woody ( among others ) ., For most qualities , multiple ( rather than single ) chemical models were generated ., These findings provide new elements of knowledge about the relationship between odorant chemistry and perception ., They also make it possible to envisage concrete applications in the aroma and fragrance field , where chemical characterization of smells is an important step in the design of new products . | smell, chemical compounds, statistics, social sciences, neuroscience, data mining, perception, physicochemical properties, cognitive psychology, scientists, forecasting, odorants, mathematics, materials science, information technology, science and technology workforce, physical chemistry, chemical properties, research and analysis methods, physical properties, computer and information sciences, mathematical and statistical techniques, chemistry, physics, people and places, professions, psychology, science policy, careers in research, biology and life sciences, population groupings, materials, physical sciences, sensory perception, cognitive science, phenols, statistical methods | null
2,452 | journal.pcbi.1007348 | 2019 | Learning unsupervised feature representations for single cell microscopy images with paired cell inpainting | Feature representations of cells within microscopy images are critical for quantifying cell biology in an objective way ., Classically , researchers have manually designed features that measure phenomena of interest within images: for example , a researcher studying protein subcellular localization may measure the distance of a fluorescently-tagged protein from the edge of the cell 1 , or the correlation of punctate proteins with microtubules 2 ., By extracting a range of different features , an image of a cell can be represented as a set of values: these feature representations can then be used for numerous downstream applications , such as classifying the effects of pharmaceuticals on cancer cells 3 , or exploratory analyses of protein localization 1 , 4 ., The success of these applications depends highly on the quality of the features used: good features are challenging to define , as they must be sensitive to differences in biology but robust to nuisance variation such as microscopy illumination effects or single cell variation 5 ., Convolutional neural networks ( CNNs ) have achieved state-of-the-art performance in tasks such as classifying cell biology in high-content microscopy and imaging flow cytometry screens 6–9 , or segmenting single cells in images 10 , 11 ., A key property driving this performance is that CNNs automatically learn features that are optimized to represent the components of an image necessary for solving a training task 12 ., Donahue et al . 13 and Razavian et al . 14 demonstrate that the feature representations extracted by the internal layers of CNNs trained as classifiers achieve state-of-the-art results even on applications very different from the task the CNN was originally trained for; studies specific to bio-imaging report similar observations about the quality of CNN features 6 , 15 , 16 ., The features learned by CNNs are thought to be more sensitive to relevant image content than human-designed features , offering a promising alternative for feature-based image analysis applications ., However , learning high-quality features with CNNs is a current research challenge , because it depends highly on the training task ., For example , autoencoders , or unsupervised CNNs trained to reconstruct input images , usually do not learn features that generalize well to other tasks 17–19 ., On the other hand , classification tasks result in high-quality features 13 , 14 , but they rely on large , manually-labeled datasets , which are expensive and time-consuming to generate ., For example , to address the challenge of labeling microscopy images in the Human Protein Atlas , the project launched a massive crowd-sourcing initiative in collaboration with an online video game , spanning over one year and involving 322,006 gamers 20 ., Because advances in high-content , high-throughput microscopy are leading to the routine generation of thousands of images 21 , newly discovered cell morphologies and phenotypes , and the need for integration of datasets 4 , may require the continuous updating of models ., Unsupervised methods that result in the learning of high-quality features without the use of manual labels would resolve the bottleneck of collecting and updating labels: we would , in principle , be able to learn feature representations for any dataset , without the need for experts to curate , label , and maintain training images ., If
obtaining expert-assigned labels for microscopy images is challenging , then obtaining labels for the single cells within these images is even more difficult ., Studying biological phenomena at a single-cell level is of current research interest 22: even in genetically-identical cell cultures , single-cell variability can originate from a number of important regulatory mechanisms 23 , including the cell cycle 24 , stochastic shuttling of transcription factors between compartments 25 , or variability in the DNA damage response 26 ., Thus , ideally , a method would efficiently generate feature representations of single cells for arbitrary datasets using deep learning , without the need for labelled single-cell training data ., In this study , we asked if CNNs could automatically learn high-quality features for representing single cells in microscopy images , without using manual labels for training ., We investigate self-supervised learning , which proposes training CNNs using labels that are automatically available from the input images ., Self-supervised learning aims to develop a feature representation in the CNN that is useful for other tasks: the training task is only a pretext , and the CNN may not achieve a useful level of performance at this pretext task ., This differs from weakly-supervised learning , where a learnt network is used directly for an auxiliary task 27 , such as segmenting tumors in histology images using a network trained to predict disease class 28 ., After training , the output of the CNN is discarded , and internal layers of the CNN are used as the features ., The logic is that by learning to solve the pretext task , the CNN will develop features that are useful for other applications ., The central challenge in self-supervised learning is defining a pretext task that encourages the learning of generalizable features 29 ., Successful self-supervised learning strategies in the context of natural images include CNNs trained to predict the appearance of withheld image patches based upon their context 18 , the presence and location of synthetic artifacts in images 29 , or geometric rotations applied to input images 30 ., The idea is that to succeed at the pretext task , the CNN needs to develop a strong internal representation of the objects and patterns in the images ., When transferred to tasks such as classification , segmentation , or detection , features developed by self-supervised methods have shown state-of-the-art results compared to other unsupervised methods and , in some cases , perform competitively with features learned by supervised CNNs 17 , 29–32 ., Here , we present a novel self-supervised learning method designed for microscopy images of protein expression in single cells ., Our approach leverages the typical structure of these images to define the pretext training task: in many cases , each image contains multiple genetically identical cells , growing under the same experimental condition , and these cells exhibit similar patterns of protein expression ., The cells are imaged in multiple "channels" ( such as multiple fluorescence colours ) that contain very different information ., By exploiting this structure , we define a pretext task that relies only upon image content , with no prior human labels or annotations incorporated ., In our examples , one set of channels represents one or more structures in the cell ( e . g .
the cytosol , nucleus , or cytoskeleton ) , and another channel represents proteins of interest that have been fluorescently tagged to visualize their localization ( with a different protein tagged in each image ) ., Then , given both channels for one cell and the structural markers for a different cell from the same image , our CNN is trained to predict the appearance of the protein of interest in the second cell , a pretext task that we term "paired cell inpainting" ( Fig 1A ) ., To solve this pretext task , we reason that the CNN must identify protein localization in the first ( or "source" ) cell and reconstruct a similar protein localization in the second ( or "target" ) cell in a way that adapts to single cell variability ., In Fig 1 , the protein is localized to the nucleoli of human cells: the network must recognize the localization of the protein in the source cell , but also transfer it to the equivalent structures in the target cell , despite differences in the morphology of the nucleus between the two cells ., Thus , by the design of our pretext task , our method learns representations of single cells ., To demonstrate the generalizability of our method , we automatically learn feature representations for datasets of both human and yeast cells , with different morphologies , microscopes and resolutions , and fluorescent tagging schemes ( Fig 1B ) ., We exemplify the quality of the features learned by our models in several use-cases ., First , we show that for building classifiers , the features learned through paired cell inpainting are better at discriminating protein subcellular localization classes at a single-cell level than other unsupervised methods ., Next , we establish that our features can be used for unsupervised exploratory analysis , by performing an unsupervised proteome-wide cluster analysis of protein localization in human cells , capturing clusters of proteins in cellular components at a resolution challenging to annotate by the human eye ., Finally , we determine that our features are useful for single-cell analyses , showing that they can distinguish phenotypes in spatially-variable single cell populations ., We would like to learn a representation for single cells in a collection of microscopy images , $I$ ., We define each image $i$ as a collection of single cells , $i = \{c_{i,1}, \ldots, c_{i,n}\}$ ., We note that the only constraint on $i$ is that its single cells $C_i$ must be considered similar to each other , so $i$ does not need to be strictly defined as a single digital image so long as this is satisfied; in our experiments , we consider an "image" $i$ to be all fields of view corresponding to an experimental well ., We define single cells to be image patches , so $c \in \mathbb{R}^{H \times W \times Z}$ , where $Z$ are the channels ., We split each cell by channel into $c = (x, y)$ , where $x \in \mathbb{R}^{H \times W \times Z_1}$ , $y \in \mathbb{R}^{H \times W \times Z_2}$ , and $Z_1, Z_2 \subseteq Z$ .
For this work , we assign to $Z_1$ channels corresponding to structural markers , or fluorescent tags designed to visualize structural components of the cell , where all cells in the collection of images have been labeled with the tag ., We assign to $Z_2$ channels corresponding to proteins , or channels where the tagged biomolecule varies from image to image ., We define a source cell $c_s$ , which is associated with a target cell $c_t$ satisfying the constraints that both cells are from the same image: $c_s \in i_s$ , $c_t \in i_t$ , $i_s = i_t$ , and $c_s \neq c_t$ ., Our goal is to train a neural network that solves the prediction problem $\hat{y}_t = f(x_s, y_s, x_t)$ for all $c_s, c_t \in I$ , where $\hat{y}_t$ represents the predicted protein channels that vary between images ., For this work , we train the network on the prediction problem by minimizing a standard pixel-wise mean-squared error loss between the predicted target protein $\hat{y}_t$ and the actual target protein $y_t$: $$L(\hat{y}_t, y_t) = \frac{1}{h \cdot w} \sum_{h,w} \left( \hat{y}_t^{h,w} - y_t^{h,w} \right)^2$$ As with other self-supervised methods , our pretext training task is only meant to develop the internal feature representation of the CNN ., After training , $\hat{y}_t$ is discarded , and the CNN is used as a feature extractor ., Importantly , while our pretext task predicts a label $y_t$ , we consider our overall method to be unsupervised , because these labels are defined automatically from image content without any human supervision ., One limitation of our pretext task is that some protein localizations are not fully deterministic with respect to the structure of the cell and are therefore challenging to predict given the inputs we define ., In these cases , we observe that the network produces smoothed predictions that we hypothesize are an averaged guess of the localization ., We show an example in Fig 1C; while the source protein is localized to the nucleus in a punctate pattern , the target protein is predicted as a smooth distribution throughout the nucleoplasm ., However , as our inpainting task is a pretext and discarded after training , we are not concerned with outputting fully realistic images ., The averaging effect is likely due to our choice of a mean squared loss function; should more realistic images be desired , different loss functions , such as adversarial losses 33 , may produce better results ., As the goal of our training task is to obtain a CNN that can encode single cell image patches into feature representations , we construct independent encoders for $(x_s, y_s)$ and for $x_t$ , which we call the "source cell encoder" and the "target marker encoder" , respectively ., After training with our self-supervised task , we isolate the source cell encoder and discard all other components of our model ., This architecture allows us to obtain a feature representation of any single cell image patch independently , without also having to input target cell markers ., To obtain single cell features , we simply input a single cell image patch and extract the output of an intermediate convolutional layer in the source cell encoder ., We show a summary of our architecture in Fig 1C ., Following other work in self-supervised learning 17 , 18 , 30 , we use an AlexNet architecture for the source cell encoder , although we set all kernel sizes to 3 due to the smaller sizes of our image patches , and we add batch normalization after each convolutional layer ., We use a smaller number of filters and fewer convolutional layers in the architecture of the target marker encoder: three convolutional layers , with 16 , 32 , and 32 filters , respectively ., Finally , for the decoder , we reverse the AlexNet architecture .
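As a concrete illustration of the model just described, the following is a minimal Keras sketch for 64x64 crops with one structural channel and one protein channel. Only the two-encoder/decoder layout, the target marker encoder sizes, and the pixel-wise MSE loss follow the text directly; the remaining layer widths are stand-ins rather than the exact AlexNet configuration, and both encoders are given three downsampling stages so their codes align spatially.

```python
# Minimal sketch of the paired cell inpainting network (not the exact model):
# a source cell encoder over (x_s, y_s), a small target marker encoder over
# x_t, and a decoder that predicts the target protein channel y_t.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_bn(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.BatchNormalization()(x)

src_in = layers.Input((64, 64, 2))      # source cell: structure + protein channels
tgt_in = layers.Input((64, 64, 1))      # target cell: structural channel only

h = src_in
for f in (96, 256, 384):                # stand-in widths for the AlexNet-style encoder
    h = layers.MaxPooling2D()(conv_bn(h, f))

g = tgt_in
for f in (16, 32, 32):                  # target marker encoder sizes from the text
    g = layers.MaxPooling2D()(conv_bn(g, f))

d = layers.Concatenate()([h, g])        # fused 8x8 code of both inputs
for f in (384, 256, 96):                # "reversed" decoder back to 64x64
    d = conv_bn(layers.UpSampling2D()(d), f)
y_hat = layers.Conv2D(1, 3, padding="same")(d)   # predicted target protein

model = Model([src_in, tgt_in], y_hat)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="mse")               # pixel-wise mean-squared error, as in the text
# Training pairs: each cell serves once per epoch as the source, with the
# target drawn uniformly from other cells of the same image (described below).
```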
The goal of our training is to develop the learned features of our source cell encoder , which will later be used to extract single cell feature representations ., If the network were to utilize bleed-through from the fluorescent marker channels to predict the target marker image , or overfit to the data and 'memorize' the target cell images , then the network would not need to learn to extract useful information from the source cell images ., To rule out that our trained models were subject to these effects , we produced inpainting results from a trained model in which we paired source cells with target cells with a mismatch between source and target protein localization ., Here , the model has seen both the source and target cells during training , but never the specific pair , due to the structure of our training task , as the cells originate from different images ., We qualitatively confirmed that our trained networks were capable of synthesizing realistic results agreeing with the protein localization of the source cell ( S1 Fig ) , suggesting that our models are not trivially overfitting to the target marker images ., Because the training inputs to our self-supervised task consist of pairs of cells , the number of possible combinations is large , as each cell may be paired with one of many other cells ., This property is analogous to data augmentation , increasing the number of unique training inputs to our network ., To sample training inputs , for each epoch we iterate through every single cell in our training dataset , set it as the source cell , and draw with uniform probability a target cell from the set of all valid possible target cells ., Our pretext task relies on the assumption that protein expression in single cells from the same image is similar ., This assumption is not always true: in the datasets used in this work , some proteins exhibit significant single cell variability in their protein abundance or localization 34 , 35 ., These proteins may contribute noise , because paired single cells will have ( unpredictably ) different protein expression patterns and the model will not learn ., Although the Human Protein Atlas documents variable proteins 24 , for our experiments we do not remove these , and confirm that our model still learns good features in spite of this noise ., For yeast cells , we used the WT2 dataset from the CYCLoPS database 36 ., This collection expresses a cytosolic red fluorescent protein ( RFP ) in all cells and tags proteins of interest with green fluorescent protein ( GFP ) ., We use the RFP channel as the structural marker and the GFP channel as the protein ., To extract single cell crops from the images in this dataset , we segmented our images using YeastSpotter on the structural marker channel 37 and extracted a 64x64 pixel crop around the identified cell centers; we discarded any single cells with an area smaller than 5% or greater than 95% of the image crop , as these are likely artifacts arising from under- or over-segmentation ., We discarded any images with fewer than 30 cells ., We preprocessed crops by rescaling each crop to be in the range [0 , 1] ., These preprocessing operations result in a total of 1,165,713 single cell image patches grouped into 4,069 images ( where each image is 4 fields of view ) , with a total of 4,069 of 4,138 proteins passing this filter .
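A sketch of the crop-extraction step just described is given below. It assumes a labeled segmentation mask is already available (e.g., from YeastSpotter, whose exact output format is not specified here); the 64x64 crop size, the 5%/95% area filter, the [0, 1] rescaling, and the 30-cell minimum follow the text.

```python
# Sketch of single-cell crop extraction from a segmented yeast image.
# `mask` is an integer-labeled segmentation (e.g., YeastSpotter output).
from skimage.measure import regionprops

def extract_crops(image, mask, size=64):
    half, crops = size // 2, []
    for region in regionprops(mask):
        r, c = (int(v) for v in region.centroid)
        crop = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        if crop.shape[:2] != (size, size):        # skip cells too close to the border
            continue
        if not 0.05 < region.area / (size * size) < 0.95:
            continue                              # likely under-/over-segmentation
        lo, hi = crop.min(), crop.max()
        crops.append((crop - lo) / (hi - lo + 1e-8))  # rescale each crop to [0, 1]
    return crops if len(crops) >= 30 else []      # drop sparsely populated images
```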
We also trained a second , different yeast cell model , using a NOP1pr-GFP library previously published by Weill et al . 37 ., This collection tags proteins of interest with GFP at the N-terminus , under a NOP1 promoter , and is also imaged in brightfield ., We used the brightfield channel as the structural marker and the GFP channel as the protein ., We extracted and preprocessed single cell crops using the same procedure as for the CYCLoPS dataset , and discarded any images with fewer than 30 single cells ., These preprocessing operations result in a total of 563,075 single cell image patches grouped into 3,067 images ( where each image is 3 fields of view ) , with a total of 3,067 of 3,916 proteins passing this filter ., Finally , we trained a third yeast cell model , using a dataset previously published by Tkach et al . 38 , a Nup49-RFP GFP-ORF library ., This collection expresses a nuclear pore protein ( Nup49 ) fused to RFP in all cells and tags proteins of interest with green fluorescent protein ( GFP ) ., We use the RFP channel as the structural marker and the GFP channel as the protein ., We extracted and preprocessed single cell crops using the same procedure as for the CYCLoPS dataset , and discarded any images with fewer than 30 single cells ., These preprocessing operations result in a total of 1,733,127 single cell image patches grouped into 4,085 images ( where each image is 3 fields of view ) , with a total of 4,085 of 4,149 proteins passing this filter ., For human cells , we use images from version 18 of the Human Protein Atlas 24 ., We were able to download jpeg images for a total of 12,068 proteins ., Each protein may have multiple experiments , which image different cell line and antibody combinations ., We consider an image to be of a protein for the same cell line and antibody combination; accordingly , we have 41,517 images ( where each image is usually 2 fields of view ) ., We downloaded 3 channels for these images ., Two visualize the nuclei and microtubules , which we use as the structural marker channels ., The third channel is an antibody for the protein of interest , which we use as the protein channel ., To extract single cell crops from these images , we binarize the nuclear channel with an Otsu filter and find nuclei by labeling connected components as objects using the scikit-image package 39 ., We filter any objects with an area of less than 400 pixels and extract a 512x512 pixel crop around the center of mass of the remaining objects ., To reduce training time , we rescale the size of each crop to 64x64 pixels ., We preprocessed crops by rescaling each crop to be in the range [0 , 1] , and clipped pixels under 0.05 intensity in the microtubule and nuclei channels to 0 to improve contrast .
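The human-cell crop extraction described above can be sketched with scikit-image as follows; border handling is simplified and the 0.05 contrast clipping is omitted here.

```python
# Sketch of nucleus-centered crop extraction for the Human Protein Atlas images:
# Otsu-threshold the nuclear channel, label connected components, filter small
# objects, and crop 512x512 patches that are resized to 64x64.
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.transform import resize

def extract_hpa_crops(image, nuclear_channel, crop=512, out=64, min_area=400):
    nuclei = label(nuclear_channel > threshold_otsu(nuclear_channel))
    half, crops = crop // 2, []
    for region in regionprops(nuclei):
        if region.area < min_area:                # discard objects under 400 pixels
            continue
        r, c = (int(v) for v in region.centroid)
        patch = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        patch = resize(patch, (out, out), anti_aliasing=True)  # channels preserved
        lo, hi = patch.min(), patch.max()
        crops.append((patch - lo) / (hi - lo + 1e-8))          # rescale to [0, 1]
    return crops
```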
Finally , we remove any images with fewer than 5 cells , leaving a total of 638,640 single cell image patches grouped into 41,285 images , with a total of 11,995 of 12,068 proteins passing this filter ., Crop sizes for our datasets ( 64x64 pixels for yeast cells and 512x512 pixels for human cells ) were chosen such that a crop fully encompasses an average cell from each of these datasets ., We note that different image datasets may require different crop sizes , depending on the resolution of the images and the size of the cells ., While each crop is centered around a segmentation , we did not filter crops with overlapping or clumped cells , so some crops may contain multiple cells ., In general , we observed that our models did not have an issue learning to inpaint protein expression from a crop with multiple cells to a crop with a single cell , or vice versa: S1 Fig shows an example of a case where we inpaint protein expression from a crop with two cells to a crop with one cell ., During training , we apply random horizontal and vertical flips to source and target cells independently as data augmentation ., We trained models for 30 epochs using an Adam optimizer with an initial learning rate of 1e-4 ., After training , we extract representations by maximum pooling the output of an intermediate convolutional layer across spatial dimensions ., This strategy follows previous unsupervised representation extraction from self-supervised learning methods 17 , 30 , which sample activations from feature maps .
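The representation-extraction step just described amounts to a spatial max-pool over an intermediate layer's feature maps. A minimal sketch, assuming `model` is the trained source cell encoder and `layer_name` one of its convolutional layers:

```python
# Sketch: single-cell features as the spatially max-pooled activations of an
# intermediate convolutional layer of the trained source cell encoder.
import tensorflow as tf

def extract_features(model, layer_name, cells):
    feat = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    maps = feat.predict(cells)         # shape (n_cells, h, w, channels)
    return maps.max(axis=(1, 2))       # max over spatial dims -> (n_cells, channels)
```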
To benchmark the performance of features learned using paired cell inpainting , we obtained features from other commonly-used feature representation strategies ., As classic computer vision baselines for our yeast cell benchmarks , we obtained features extracted using CellProfiler 40 for a classification dataset of 30,889 image crops of yeast cells directly from the authors 41 ., These features include measurements of intensity , shape , and texture , and have been optimized for classification performance ., Further details are available from 41 ., We also extracted features from these yeast cell image crops using interpretable , expert-designed features by Handfield et al . 1 ., We followed procedures previously established by the authors: we segmented cells using the provided software and calculated features from the center cell in each crop ., For our transfer learning baselines in our yeast cell benchmarks , we used a VGG16 model pretrained on ImageNet , using the Keras package ., We benchmarked three different input strategies: ( 1 ) we mapped channels arbitrarily to RGB channels ( RFP to red , GFP to green , blue channel left empty ) ; ( 2 ) we inputted each channel separately as a greyscale image and concatenated the representations; ( 3 ) we inputted only the GFP channel as a greyscale image and used this representation alone ., In addition , we benchmarked representations from each convolutional layer of VGG16 , post-processed by maximum pooling across spatial dimensions ( as we did for our self-supervised features ) ., S2 Fig shows classification performance using each layer of VGG16 with the kNN classifier described in our benchmarks , across all three strategies ., In general , we observed that the third input strategy resulted in superior performance , with performance peaking in the fourth convolutional block of VGG16 ., We report results from the top-performing layer using the top-performing input strategy in our benchmarks ., Contrary to previous work in transfer learning on microscopy images by Pawlowski et al . , we extract features from the intermediate convolutional layers of our transferred VGG16 model instead of the final fully-connected layer 15 ., This modification allows us to input our images at their original size , instead of resizing them to the size of the images originally used to train the transferred model ., As our work operates on single cell crops , which are much smaller than the full images benchmarked in previous work ( 64x64 pixels compared to 1280x1024 pixels ) , we found that inputting images at their original size instead of stretching them resulted in performance gains: our top-performing convolutional layer ( block4_conv1 ) with inputs at original size achieved 69.33% accuracy , whereas resizing the images to 224x224 and using features from the final fully-connected layer ( as-is , without max-pooling , as described in 15 ) achieves 65.64% accuracy ., In addition , we found that extracting features from images at original resolution improves run-time: on our machine , inputting 64x64 crops and extracting features from the best-performing layer was about 16 times faster than inputting resized 224x224 images and extracting features from the final fully-connected layer .
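For reference, a sketch of this transfer-learning baseline (input strategy 3, features from block4_conv1 at the original 64x64 resolution) is given below. Replicating the greyscale GFP channel across RGB and the use of Keras' standard preprocessing are assumptions, as the text does not specify these details.

```python
# Sketch of the VGG16 transfer-learning baseline for 64x64 greyscale GFP crops.
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

base = VGG16(weights="imagenet", include_top=False, input_shape=(64, 64, 3))
extractor = Model(base.input, base.get_layer("block4_conv1").output)

def vgg_features(gfp_crops):                      # (n, 64, 64) arrays in [0, 1]
    rgb = np.repeat(gfp_crops[..., None], 3, -1)  # grey -> 3 channels (assumption)
    maps = extractor.predict(preprocess_input(255.0 * rgb))
    return maps.max(axis=(1, 2))                  # spatial max-pool, as above
```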
Finally , for the supervised baseline , we used the model and pretrained weights provided by Kraus et al . 6 ., We inputted images as previously described ., To ensure that the metrics reported for these features were comparable with the other accuracies we reported , we extracted features from this model and built the same classifier used for the other feature sets ., We found that features from the final fully connected layer before the classification layer performed the best , and report results from this layer ., To compare the performance of various feature representations on our single yeast cell dataset , we built kNN classifiers ., We preprocessed each dataset by centering and scaling features by their mean and standard deviation ., We employed leave-one-out cross-validation and predicted the label of each cell based on its neighbors , using Euclidean distance ., S1 Table shows classification accuracy with various parameterizations of k ., We observed that regardless of k , the feature representations were ranked the same in their classification performance , with our paired cell inpainting features always outperforming the other unsupervised feature sets ., However , k = 11 produced the best results for all feature sets , so we report results for this parameterization .
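The kNN benchmark just described can be sketched with scikit-learn as follows; this is a minimal version, and the study's exact implementation may differ.

```python
# Sketch of the kNN benchmark: leave-one-out classification of standardized
# single-cell features with k = 11 and Euclidean distance.
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def knn_accuracy(features, labels, k=11):
    X = StandardScaler().fit_transform(features)   # center and scale each feature
    clf = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    return cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
```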
As classic computer vision baselines for our human cell benchmarks , we curated a set of texture , correlation , and intensity features ., For each crop , we measured the sum , mean , and standard deviation of intensity from pixels in the protein channel , and the Pearson correlation between the protein channel and the microtubule and nucleus channels ., We extracted Haralick texture features from the protein channel at 5 scales ( 1 , 2 , 4 , 8 , and 16 pixels ) ., Finally , as the transfer learning baseline for our human cell benchmarks , we extracted features from the pretrained VGG16 model using the same input strategy and layer as established in our yeast cell benchmark ., To directly measure and compare how a feature set groups together cells with similar localizations in its feature space , we measured the average pairwise distance between cells in the feature space ., We preprocess single cell features by scaling to zero mean and unit variance , to control for feature-to-feature differences in scaling within feature sets ., Then , to calculate the distance between two single cells , we use the Euclidean distance between their features $f$: $d(c_x, c_y) = \lVert f_{c_x} - f_{c_y} \rVert_2$ ., Given two images with the same localization term , we calculate the average distance of all cells in the first image paired with all cells in the second image , and normalize these distances to an expectation of the average pairwise distance between images with different localization terms ., A negative normalized average pairwise distance score indicates that the distances are smaller than expectation ( so single cells in images with the same label are , on average , closer in the feature space ) ., For the Human Protein Atlas images , we restricted our analysis to proteins that only had a single localization term shared by at least 30 proteins , resulting in proteins with 18 distinct localization terms ( as listed in Fig 2B ) ., For each localization term , we calculated average pairwise distances for 1,000 random protein pairs with the same localization term , relative to an expectation from 1,000 random protein pairs with each of the possible other localization terms ( for a total of 17,000 pairs sampled , to control for class imbalance ) ., For our experiments controlling for cell line , we also introduce the constraint that the images must be of cells of the same or different cell lines , depending on the experiment ., Because some localization term and cell line combinations are rare , we did not control for class imbalance and drew 10,000 random protein pairs with any different localization terms ( not necessarily each other different term ) , and compared this to 10,000 random protein pairs with the same localization term ., Hence , the distances in the two experiments are not directly comparable ., For proteins localized to two compartments , we calculated a score for each cell based upon its distance to the first compartment versus its distance to the second compartment ., To do so , we averaged the feature vectors for all single cells in images annotated as localizing to each compartment alone , to define the average features of the two compartments ., Then , for every single cell , we calculated the distance of the single cell's features to the two compartments' averages and took the log ratio of the distance to the first compartment divided by the distance to the second compartment ., A negative number reflects that the single cell is closer to the first compartment in the feature space , while a positive number reflects that the single cell is closer to the second compartment .
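A minimal sketch of this two-compartment score:

```python
# Sketch of the two-compartment score: each cell is scored by the log ratio of
# its distance to the mean features of compartment 1 over its distance to the
# mean features of compartment 2; negative values mean closer to compartment 1.
import numpy as np

def compartment_scores(cell_feats, comp1_feats, comp2_feats):
    mu1, mu2 = comp1_feats.mean(axis=0), comp2_feats.mean(axis=0)
    d1 = np.linalg.norm(cell_feats - mu1, axis=1)
    d2 = np.linalg.norm(cell_feats - mu2, axis=1)
    return np.log(d1 / d2)
```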
Code and pre-trained weights for the models used in this work are available at https://github.com/alexxijielu/paired_cell_inpainting ., To assess the quality of the features learned through paired cell inpainting , we trained a model for yeast fluorescence microscopy images using paired cell inpainting , on an unlabelled training set comprising the entire WT2 screen in the CYCLoPS dataset , encompassing 1,165,713 single cells from 4,069 images ., Good features are sensitive to differences in biology but robust to nuisance variation 5 ., As a first step towards understanding whether our features had these properties , we compared paired cell inpainting features with other feature sets at the task of discriminating different subcellular localization classes in yeast single cells ., To do this , we made use of a test set of 30,889 single cell image patches manually assigned to 17 different protein subcellular localization classes by Chong et al . 41 ., These single cells were curated from a different image screen than the one we used to train our model , and thus represent an independent test dataset that was never seen by the model during training ., To evaluate feature sets , we constructed a simple kNN classifier ( k = 11 ) and evaluated classification performance by comparing the predicted label of each single cell , based on its neighbors , to its actual label ., While more elaborate classifiers could be used , the kNN classifier is simple and transparent , and is frequently employed to compare feature sets for morphological profiling 15 , 16 , 42 , 43 ., Like these previous works , the goal of our experiment is to assess the relative performance of various feature sets in a controlled setting , not to present an optimal classifier ., To use the feature representation from a self-supervised CNN , we must first identify which layers represent generalizable information about protein expression patterns ., Different layers in self-supervised models may have different properties: the earlier layers may only extract low-level features , but the later layers may be too specific to the pretext task 12 ., For this reason , identifying self-supervised features with a layer-by-layer evaluation of the model's properties is standard 17–19 , 29 , 30 , 32 ., Since our model has five convolutional layers , it can be interpreted as outputting five different feature sets for each input image ., To determine which feature set would be best for the task of classifying yeast single cells , we constructed a classifier on our test set for the features from each layer independently , as shown in Table 1 ., Overall , we observe that the third ( Conv3 ) and fourth ( Conv4 ) convolutional layers result in the best performance ., Next , we sought to compare our features from paired cell inpainting with other unsupervised feature sets commonly used for cellular microscopy images ., First , we compared our features with feature sets designed by experts: CellProfiler features 40 that we | Introduction, Methods, Results, Discussion | Cellular microscopy images contain rich insights about biology ., To extract this information , researchers use features , or measurements of the patterns of interest in the images ., Here , we introduce a convolutional neural network ( CNN ) to automatically design features for fluorescence microscopy ., We use a self-supervised method to learn feature representations of single cells in microscopy images without labelled training data ., We train CNNs on a simple task that leverages the inherent structure of microscopy images and controls for variation in cell morphology and imaging: given one cell from an image , the CNN is asked to predict the fluorescence pattern in a second , different cell from the same image ., We show that our method learns high-quality features that describe protein expression patterns in single cells in both yeast and human microscopy datasets ., Moreover , we demonstrate that our features are useful for exploratory biological analysis , by capturing high-resolution cellular components in a proteome-wide cluster analysis of human proteins , and by quantifying multi-localized proteins and single-cell variability ., We believe paired cell inpainting is a generalizable method to obtain feature representations of single cells in multichannel microscopy images .
| To understand the cell biology captured by microscopy images , researchers use features , or measurements of relevant properties of cells , such as the shape or size of cells , or the intensity of fluorescent markers ., Features are the starting point of most image analysis pipelines , so their quality in representing cells is fundamental to the success of an analysis ., Classically , researchers have relied on features manually defined by imaging experts ., In contrast , deep learning techniques based on convolutional neural networks ( CNNs ) automatically learn features , which can outperform manually-defined features at image analysis tasks ., However , most CNN methods require large manually-annotated training datasets to learn useful features , limiting their practical application ., Here , we developed a new CNN method that learns high-quality features for single cells in microscopy images , without the need for any labeled training data ., We show that our features surpass other comparable features in identifying protein localization from images , and that our method can generalize to diverse datasets ., By exploiting our method , researchers will be able to automatically obtain high-quality features customized to their own image datasets , facilitating many downstream analyses , as we highlight by demonstrating many possible use cases of our features in this study . | learning, fluorescence imaging, engineering and technology, signal processing, social sciences, light microscopy, green fluorescent protein, neuroscience, learning and memory, luminescent proteins, cognitive psychology, microscopy, experimental organism systems, crops, research and analysis methods, imaging techniques, crop science, animal studies, proteins, fluorescence microscopy, agriculture, biochemistry, psychology, image processing, biology and life sciences, yeast and fungal models, cognitive science | null |
385 | journal.pcbi.1006633 | 2019 | Deep image reconstruction from human brain activity | While the externalization of states of the mind is a long-standing theme in science fiction , it is only recently that the advent of machine learning-based analysis of functional magnetic resonance imaging ( fMRI ) data has expanded its potential in the real world ., Although sophisticated decoding and encoding models have been developed to render human brain activity into images or movies , these methods are essentially limited to image reconstructions with low-level image bases 1 , 2 , or to matching to exemplar images or movies 3 , 4 , failing to combine the visual features of multiple hierarchical levels ., While several recent approaches have introduced deep neural networks ( DNNs ) for the image reconstruction task , they have failed to fully utilize hierarchical information to reconstruct visual images 5 , 6 ., Furthermore , whereas categorical decoding of imagery contents has been demonstrated 7 , 8 , the reconstruction of internally generated images has remained challenging ., The recent success of DNNs provides technical innovations for studying hierarchical visual processing in computational neuroscience 9 ., Our recent study used DNN visual features as a proxy for the hierarchical neural representations of the human visual system and found that a brain activity pattern measured by fMRI could be decoded ( translated ) into the response patterns of DNN units in multiple layers representing the hierarchical visual features given the same input 10 ., This finding revealed a homology between the hierarchical representations of the brain and the DNN , providing a new opportunity to utilize the information from hierarchical visual features ., Here , we present a novel approach , named deep image reconstruction , to visualize perceptual content from human brain activity ., This technique combines DNN feature decoding from fMRI signals with recently developed methods for image generation from the machine learning field ( Fig 1 ) 11 ., The reconstruction algorithm starts with a given initial image and iteratively optimizes the pixel values so that the DNN features of the current image become similar to those decoded from brain activity across multiple DNN layers ., The resulting optimized image is considered a reconstruction from the brain activity ., We optionally introduced a deep generator network ( DGN ) 12 to constrain the reconstructed images to look similar to natural images , by performing optimization in the input space of the DGN ., We trained the decoders that predicted the DNN features of viewed images from fMRI activity patterns , following the procedures of Horikawa & Kamitani ( 2017 ) 10 ., In the present study , we used the VGG19 DNN model 13 , which consists of sixteen convolutional layers and three fully connected layers and was pre-trained with images in ImageNet 14 to classify images into 1,000 object categories ( see Materials and Methods: "Deep neural network features" for details ) ., We constructed one decoder for each single DNN unit to predict the outputs of that unit ., We trained decoders corresponding to all the units in all the layers ( see Materials and Methods: "DNN feature decoding analysis" for details ) .
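In outline, the feature decoding amounts to fitting one regression model per DNN unit. The sketch below uses ridge regression as a stand-in for the regression method of Horikawa & Kamitani's procedure, which may differ; voxel selection and normalization steps are omitted.

```python
# Sketch: one linear decoder per DNN unit, mapping fMRI voxel patterns to that
# unit's feature value.  Ridge regression is a stand-in for the method actually
# used in the study.
import numpy as np
from sklearn.linear_model import Ridge

def train_decoders(fmri, features, alpha=1.0):
    """fmri: (n_samples, n_voxels); features: (n_samples, n_units)."""
    return [Ridge(alpha=alpha).fit(fmri, features[:, u])
            for u in range(features.shape[1])]

def decode_features(decoders, fmri):
    return np.stack([d.predict(fmri) for d in decoders], axis=1)
```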
The feature decoding analysis was performed with fMRI activity patterns in visual cortex ( VC ) measured while subjects viewed or imagined visual images ., Our experiments consisted of training sessions , in which only natural images were presented , and test sessions , in which independent sets of natural images , artificial shapes , and alphabetical letters were presented ., In another test session , a mental imagery task was performed ., The decoders were trained using the fMRI data from the training sessions , and the trained decoders were then used to predict DNN feature values from the fMRI data of the test sessions ( the accuracies are shown in S1 Fig ) ., Decoded features were then forwarded to the reconstruction algorithm to generate an image using variants of gradient descent optimization ( see Materials and Methods: "Reconstruction from a single DNN layer" and "Reconstruction from multiple DNN layers" for details ) ., The optimization was performed to minimize the error between the multi-layer DNN features decoded from brain activity patterns and those calculated from the input image , by iteratively modifying the input image ., For natural image reconstructions , to improve the "naturalness" of the reconstructed images , we further introduced a constraint using a deep generator network ( DGN ) derived from the generative adversarial network ( GAN ) algorithm 15 , which is known to capture a latent space explaining natural images 16 ( see Materials and Methods: "Natural image prior" for details ) .
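The core optimization can be sketched as follows: the pixels of an initial image are updated by gradient descent until the image's multi-layer VGG19 features approach the decoded features. The layer subset, learning rate, and iteration count below are illustrative, `decoded_feats` is assumed to match the layer output shapes, and the DGN constraint is omitted.

```python
# Sketch of pixel-space reconstruction by matching multi-layer VGG19 features.
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.applications.vgg19 import VGG19

base = VGG19(weights="imagenet", include_top=False)
layer_names = ["block1_conv1", "block3_conv1", "block5_conv1"]   # illustrative subset
feat = Model(base.input, [base.get_layer(n).output for n in layer_names])

def reconstruct(decoded_feats, shape=(224, 224, 3), steps=200, lr=1.0):
    img = tf.Variable(tf.random.uniform((1, *shape), 0.0, 255.0))  # initial image
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            x = img[..., ::-1] - tf.constant([103.939, 116.779, 123.68])  # VGG preprocessing
            loss = tf.add_n([tf.reduce_mean((f - d) ** 2)
                             for f, d in zip(feat(x), decoded_feats)])
        opt.apply_gradients([(tape.gradient(loss, img), img)])
        img.assign(tf.clip_by_value(img, 0.0, 255.0))   # keep pixels in valid range
    return img.numpy()[0]
```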
Examples of reconstructions for natural images are shown in Fig 2 ( see S2 Fig for more examples , and see S1 Movie for reconstructions through the optimization processes ) ., The reconstructions obtained with the DGN capture the dominant structures of the objects within the images ., Furthermore , fine structures reflecting semantic aspects like faces , eyes , and texture patterns were also generated in several images ., Our extensive analysis of each individual subject demonstrated replicable results across the subjects ., Moreover , the same analysis on a previously published dataset 10 also yielded reconstructions qualitatively similar to those in the present study ( S3 Fig ) ., To investigate the effect of the DGN , we evaluated the quality of reconstructions generated both with and without it ( Fig 3A and 3B; see S4 Fig for individual subjects; see Materials and Methods: "Evaluation of reconstruction quality" ) ., While the reconstructions obtained without the DGN also successfully reproduced rough silhouettes of dominant objects , they did not show semantically meaningful appearances ( see S5 Fig for more examples; also see S6 Fig for reconstructions from different initial states , both with and without the DGN ) ., Evaluations using pixel-wise spatial correlation and human judgment both showed almost comparable accuracy for reconstructions with and without the DGN ( accuracy of pixel-wise spatial correlation , with and without the DGN , 76.1% and 79.7%; accuracy of human judgment , with and without the DGN , 97.0% and 96.0% ) ., However , reconstruction accuracy evaluated using pixel-wise spatial correlation was slightly higher for reconstructions performed without the DGN than with the DGN ( two-sided signed-rank test , P < 0.01 ) , whereas the opposite was observed for evaluations by human judgment ( two-sided signed-rank test , P < 0.01 ) ., These results suggest the utility of the DGN , which enhances the perceptual similarity of reconstructed images to target images by rendering semantically meaningful details in the reconstructions ., To characterize the 'deep' nature of our method , the effectiveness of combining multiple DNN layers was tested using both objective and subjective assessments 5 , 17 , 18 ., For each of the 50 test natural images , reconstructed images were generated with a variable number of layers ( Fig 4A; DNN1 only , DNN1–2 , DNN1–3 , … , DNN1–8; see S7 Fig for more examples ) ., In the objective assessment , the pixel-wise spatial correlations to the original image were compared between two combinations of DNN layers ., In the subjective assessment , an independent rater was presented with an original image and a pair of reconstructed images , both from the same original image but generated with different combinations of layers , and was required to indicate which of the reconstructed images looked more similar to the original image ., While the objective assessment showed higher winning percentages for the earliest layer ( DNN1 ) alone , the subjective assessment showed increasing winning percentages for larger numbers of DNN layers ( Fig 4B ) ., Our additional analysis showed poor reconstruction quality from individual layers , especially from higher layers ( see S8 Fig for reconstructions from individual layers ) ., These results suggest that combining multiple levels of visual features enhanced the perceptual reconstruction quality , even though pixel-wise accuracy was lost ., Given the true DNN features , instead of decoded features , as the input , the reconstruction algorithm produces almost complete reconstructions of the original images ( S8 Fig ) , indicating that the DNN feature decoding accuracy would determine the quality of reconstructed images ., To further confirm this , we calculated the correlation between the feature decoding accuracy and the reconstruction quality for individual images ( S9 Fig ) ., The analyses showed positive correlations for both the objective and subjective assessments , suggesting that improving feature decoding accuracy could improve reconstruction quality .
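For reference, the objective, pixel-wise evaluation used throughout can be sketched as a two-alternative identification task; the exact pairing scheme used in the study may differ.

```python
# Sketch of the pixel-wise 2AFC evaluation: a reconstruction is counted correct
# when its spatial correlation with the true stimulus exceeds its correlation
# with a different candidate image.
import numpy as np

def pixel_corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def two_afc_accuracy(recons, truths):
    correct = trials = 0
    for i, rec in enumerate(recons):
        for j, other in enumerate(truths):
            if i != j:
                correct += pixel_corr(rec, truths[i]) > pixel_corr(rec, other)
                trials += 1
    return correct / trials
```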
We found that the luminance contrast of a reconstruction was often reversed ( e . g . , the stained-glass images in Fig 2 ) , presumably because of the lack of ( absolute ) luminance information in the fMRI signals , even in the early visual areas 19 ., Additional analyses revealed that the feature values of filters with high luminance contrast in the earliest DNN layer ( conv1_1 in VGG19 ) were better decoded when they were converted to absolute values ( Fig 5A and 5B ) , demonstrating a clear discrepancy between the fMRI and raw DNN signals ., The large improvements demonstrate the insensitivity of fMRI signals to pixel luminance , suggesting a linear–nonlinear discrepancy between DNN and fMRI responses to pixel luminance ., This discrepancy may explain the reversal of luminance observed in several reconstructed images ., While this may limit the potential for reconstructions from fMRI signals , the ambiguity might be resolved by modelling DNNs to fill the gaps between DNN and fMRI signals ., Alternatively , further emphasis on high-level visual information in the hierarchical visual features may help to resolve the ambiguity of luminance by incorporating information on semantic context ., To confirm that our method was not restricted to the specific image domain used for model training , we tested whether it was possible to generalize the reconstruction to artificial images ., This was challenging , because both the DNN and our decoding models were trained solely on natural images ., The reconstructions of artificial shapes and alphabetical letters are shown in Fig 6A and 6B ( also see S10 Fig and S2 Movie for more examples of artificial shapes , and see S11 Fig for more examples of alphabetical letters ) ., The results show that artificial shapes were successfully reconstructed with moderate accuracy ( Fig 6C left; 70.5% by pixel-wise spatial correlation , 91.0% by human judgment; see S12 Fig for individual subjects ) and alphabetical letters were reconstructed with high accuracy ( Fig 6C right; 95.6% by pixel-wise spatial correlation , 99.6% by human judgment; see S13 Fig for individual subjects ) ., These results indicate that our model did indeed 'reconstruct' or 'generate' images from brain activity , and that it was not simply making matches to exemplars ., Furthermore , the successful reconstructions of alphabetical letters demonstrate that our method can expand the possible states of visualizations , with an advance in resolution over reconstructions performed in previous studies 1 , 20 ., To assess how the shapes and colors of the stimulus images were reconstructed , we separately evaluated the reconstruction quality of shape and color by comparing reconstructed images of the same colors and shapes ., Analyses with different visual areas showed different trends in reconstruction quality for shapes and colors ( Fig 7A; see S14 Fig for more examples ) ., Human judgment evaluations suggested that shapes were reconstructed better from early visual areas , whereas colors were reconstructed better from the mid-level visual area V4 ( Fig 7B; see S15 Fig for individual subjects; ANOVA , interaction between task type shape vs . color and brain areas V1 vs . V4 , P < 0.01 ) , although the interaction effect was marginal when considering evaluations by pixel-wise spatial correlation ( P = 0.06 ) .
These contrasting patterns further support the success of the shape and color reconstructions and indicate that our method can be a useful tool for characterizing , by visualization , the information content encoded in the activity patterns of individual brain areas ., Finally , to explore the possibility of visually reconstructing subjective content , we performed an experiment in which participants were asked to produce mental imagery of natural and artificial images shown prior to the task session ., The reconstructions generated from brain activity due to mental imagery are shown in Fig 8 ( see S16 Fig and S3 Movie for more examples ) ., While the reconstruction quality varied across subjects and images , rudimentary reconstructions were obtained for some of the artificial shapes ( Fig 8A and 8B for high- and low-accuracy images , respectively ) ., In contrast , imagined natural images were not well reconstructed , possibly because of the difficulty of imagining complex natural images ( Fig 8C; see S17 Fig for vividness scores of imagery ) ., While the pixel-wise spatial correlation evaluations of reconstructed artificial images did not show high accuracy ( Fig 8D; 51.9%; see S18 Fig for individual subjects ) , this may have been due to possible disagreements in position , color , and luminance between target and reconstructed images ., Meanwhile , the human judgment evaluations showed accuracy higher than the chance level , suggesting that imagined artificial images were recognizable from the reconstructed images ( Fig 8D; 83.2%; one-sided signed-rank test , P < 0.01; see S18 Fig for individual subjects ) ., Furthermore , separate evaluations of the color and shape reconstructions of artificial images suggested that shape , rather than color , made the major contribution to the high proportion of correct answers by human raters ( Fig 8E; color , 64.8%; shape , 87.0%; two-sided signed-rank test , P < 0.01; see S19 Fig for individual subjects ) ., Additionally , poor but sufficiently recognizable reconstructions were obtained even from brain activity patterns in the primary visual area ( V1; 63.8%; three subjects pooled; one-sided signed-rank test , P < 0.01; see S20 Fig for reconstructed images and S21 Fig and S22 Fig for quantitative evaluations ) , possibly supporting the notion that low-level visual features are encoded in early visual cortical activity during mental imagery 21 ., Taken together , these results provide evidence for the feasibility of visualizing imagined content from brain activity patterns .
We have presented a novel approach to reconstructing perceptual and mental content from human brain activity by combining visual features from multiple layers of a DNN ., We successfully reconstructed viewed natural images , especially when the method was combined with a DGN ., The results of the extensive analysis of each subject were replicated across the different subjects ., Reconstruction of artificial shapes was also successful , even though the reconstruction models were trained only on natural images ., The same method was also applied to mental imagery , and revealed rudimentary reconstructions of mental content ., Our method is capable of reconstructing various types of images , including natural images , colored artificial shapes , and alphabetical letters , even though each component of our reconstruction model ( the DNN models and the DNN feature decoders ) was trained solely with natural images ., The results strongly demonstrate that our method was certainly able to 'reconstruct' or 'generate' images from brain activity , differentiating it from previous attempts to visualize perceptual contents using the exemplar matching approach , which suffers from restrictions imposed by pre-selected image/movie sets 3 , 4 ., We introduced the GAN-based constraint using the DGN for natural image reconstructions to enhance the naturalness of the reconstructed images , rendering semantically meaningful details in the reconstructions ., A variant of the GAN-based approach has also demonstrated its utility in a previous face image reconstruction study 22 ., The GAN-derived feature space appears to provide efficient constraints on the resultant images , enhancing their perceptual resemblance to the image set on which the GAN is trained ., While one of the strengths of the present method is its generalizability across image types , there remains room for substantial improvement in reconstruction performance ., Because we used models ( DNNs and decoders ) trained with natural 'object' images from the ImageNet database 14 , whose images contain objects around the center , they would not be optimal for the reconstruction of other types of images ., Furthermore , because we used a DNN model trained to classify images into 1,000 object categories , the representations acquired in the DNN would be specifically suited to those particular objects ., One could train the models with diverse types of images , such as scenes , textures , and artificial shapes , as well as object images , to improve general reconstruction performance ., If the target image type is known in advance , one can use a specific set of images and a DNN model training task matched to it ., Other DNN models with different architectures could also be used to improve general reconstruction performance ., As the reconstruction quality is positively correlated with the feature decoding accuracy ( S9 Fig ) , DNNs with highly decodable units are likely to improve reconstructions ., Recent studies have evaluated different types of DNNs in terms of the prediction accuracy of brain activity given their feature values ( or the encoding accuracy ) 23–25 ., Although it remains to be seen how
closely the encoding and decoding accuracies are linked , it is expected that more ‘brain-like’ DNN models would yield high-quality reconstructions ., Our approach provides a unique window into our internal world by translating brain activity into images via hierarchical visual features ., Our method can also be extended to decode mental contents other than visual perception and imagery ., By choosing an appropriate DNN architecture with substantial homology with neural representations , brain-decoded DNN features could be rendered into movies , sounds , text , or other forms of sensory/mental representations ., The externalization of mental contents by this approach might prove useful in communicating our internal world via brain–machine/computer interfaces ., All subjects provided written informed consent for participation in our experiments , in accordance with the Declaration of Helsinki , and the study protocol was approved by the Ethics Committee of ATR ., Three healthy subjects with normal or corrected-to-normal vision participated in our experiments: Subject 1 ( male , age 33 ) , Subject 2 ( male , age 23 ) and Subject 3 ( female , age 23 ) ., This sample size was chosen on the basis of previous fMRI studies with similar experimental designs 1 , 10 ., Visual stimuli consisted of natural images , artificial shapes , and alphabetical letters ., The natural images were identical to those used in Horikawa & Kamitani ( 2017 ) 10 , which were originally collected from the online image database ImageNet ( 2011 , fall release ) 14 ., The images were cropped to the center and resized to 500 × 500 pixels ., The artificial shapes consisted of a total of 40 combinations of 5 shapes and 8 colors ( red , green , blue , cyan , magenta , yellow , white , and black ) , in which the shapes were identical to those used in Miyawaki et al . ( 2008 ) 1 and the luminance was matched across colors except for white and black ., The alphabetical letter images consisted of the 10 black letters , A , C , E , I , N , O , R , S , T , and U . 
We conducted two types of experiments: image presentation experiments and a mental imagery experiment ., The image presentation experiments consisted of four distinct session types , in which different variants of visual images were presented ( training natural images , test natural images , artificial shapes , and alphabetical letters ) ., All visual stimuli were rear-projected onto a screen in the fMRI scanner bore using a luminance-calibrated liquid crystal display projector ., To minimize head movements during fMRI scanning , subjects were required to fix their heads using a custom-molded bite-bar individually made for each subject ., Data from each subject were collected over multiple scanning sessions spanning approximately 10 months ., On each experimental day , one consecutive session was conducted for a maximum of 2 hours ., Subjects were given adequate time for rest between runs ( every 5–8 min ) and were allowed to take a break or stop the experiment at any time ., The image presentation experiments consisted of four distinct types of sessions: training natural-image sessions , test natural-image sessions , artificial-shape sessions , and alphabetical-letter sessions ., Each session consisted of 24 , 24 , 20 , and 12 separate runs , respectively ., For these four sessions , each run comprised 55 , 55 , 44 , and 11 stimulus blocks , respectively , with these consisting of 50 , 50 , 40 , and 10 blocks with different images , and 5 , 5 , 4 , and 1 randomly interspersed repetition blocks where the same image as in the previous block was presented ( 7 min 58 s for the training and test natural-image sessions , 6 min 30 s for the artificial-shape sessions , and 5 min 2 s for the alphabetical-letter sessions , for each run ) ., Each stimulus block was 8 s ( training natural-images , test natural-images , and artificial-shapes ) or 12 s ( alphabetical-letters ) long , and was followed by a 12-s rest period for the alphabetical-letters , while no rest period was used for the training natural-images , test natural-images , and artificial-shapes ., Images were presented at the center of the display with a central fixation spot and were flashed at 2 Hz ( 12 × 12 and 0 . 3 × 0 . 3 degrees of visual angle for the visual images and fixation spot respectively ) ., The color of the fixation spot changed from white to red for 0 . 5 s before each stimulus block began , to indicate the onset of the block ., Additional 32- and 6-s rest periods were added to the beginning and end of each run respectively ., Subjects were requested to maintain steady fixation throughout each run and performed a one-back repetition detection task on the images , responding with a button press for each repeated image , to ensure they maintained their attention on the presented images ( mean task performance across three subjects: sensitivity 0 . 9820; specificity 0 . 
9995; pooled across sessions ) ., In one set of training natural-image sessions , a total of 1 , 200 images were presented only once ., This set of training natural-image sessions was repeated five times ( 1 , 200 × 5 = 6 , 000 samples for training ) ., In the test natural-image , artificial-shape , and alphabetical-letter sessions , 50 , 40 , and 10 images were presented 24 , 20 , and 12 times each , respectively ., The presentation order of the images was randomized across runs ., In the mental imagery experiment , subjects were required to visually imagine ( recall ) one of 25 images selected from those presented in the test natural-image and artificial-shape sessions of the image presentation experiment ( 10 natural images and 15 artificial images ) ., Prior to the experiment , subjects were asked to relate words to visual images , so that they could recall the visual images from word cues ., The imagery experiment consisted of 20 separate runs , with each run containing 26 blocks ( 7 min 34 s for each run ) ., The 26 blocks consisted of 25 imagery trials and a fixation trial , in which subjects were required to maintain a steady fixation without any imagery ., Each imagery block consisted of a 4-s cue period , an 8-s mental imagery period , a 3-s evaluation period , and a 1-s rest period ., Additional 32- and 6-s rest periods were added to the beginning and end of each run respectively ., During the rest periods , a white fixation spot was presented at the center of the display ., At 0 . 8 s before each cue period , the color of the fixation spot changed from white to red for 0 . 5 s , to indicate the onset of the blocks ., During the cue period , words specifying the visual images to be imagined were visually presented around the center of the display ( 1 target and 25 distractors ) ., The position of each word was randomly changed across blocks to avoid cue-specific effects contaminating the fMRI response during mental imagery periods ., The word corresponding to the image to be imagined was presented in red ( target ) and the other words were presented in black ( distractors ) ., Subjects were required to start imagining a target image immediately after the cue words disappeared ., The imagery period was followed by a 3-s evaluation period , in which the word corresponding to the target image and a scale bar were presented , to allow the subjects to evaluate the correctness and vividness of their mental imagery on a five-point scale ( very vivid , fairly vivid , rather vivid , not vivid , cannot correctly recognize the target ) ., This was performed by pressing the left and right buttons of a button box placed in their right hand , to change the score from its random initial setting ., As an aid for remembering the associations between words and images , the subjects were able to use control buttons to view the word and visual image pairs during every inter-run rest period ., fMRI data were collected using a 3 .
0-Tesla Siemens MAGNETOM Verio scanner located at the Kokoro Research Center , Kyoto University ., An interleaved T2*-weighted gradient-echo echo planar imaging ( EPI ) scan was performed to acquire functional images covering the entire brain ( TR , 2000 ms; TE , 43 ms; flip angle , 80 deg; FOV , 192 × 192 mm; voxel size , 2 × 2 × 2 mm; slice gap , 0 mm; number of slices , 76; multiband factor , 4 ) ., High-resolution anatomical images of the same slices obtained for the EPI were acquired using a T2-weighted turbo spin echo sequence ( TR , 11000 ms; TE , 59 ms; flip angle , 160 deg; FOV , 192 × 192 mm; voxel size , 0 . 75 × 0 . 75 × 2 . 0 mm ) ., T1-weighted magnetization-prepared rapid acquisition gradient-echo ( MP-RAGE ) fine-structural images of the entire head were also acquired ( TR , 2250 ms; TE , 3 . 06 ms; TI , 900 ms; flip angle , 9 deg; FOV , 256 × 256 mm; voxel size , 1 . 0 × 1 . 0 × 1 . 0 mm ) ., The first 8 s of scans from each run were discarded to avoid MRI scanner instability effects ., We then used SPM ( http://www . fil . ion . ucl . ac . uk/spm ) to perform three-dimensional motion correction on the fMRI data ., The motion-corrected data were then coregistered to the within-session high-resolution anatomical images with the same slices as the EPI , and then subsequently to the whole-head high-resolution anatomical images ., The coregistered data were then re-interpolated to 2 × 2 × 2 mm voxels ., Data samples were created by first regressing out nuisance parameters from each voxel amplitude for each run , including any linear trend and the temporal components proportional to the six motion parameters calculated during the motion correction procedure ., After that , voxel amplitudes were normalized relative to the mean amplitude of the initial 24-s rest period of each run and were despiked to reduce extreme values ( beyond ± 3 SD for each run ) ., The voxel amplitudes were then averaged within each 8-s ( training natural-image sessions ) or 12-s ( test natural-image , artificial-shape , and alphabetical-letter sessions ) stimulus block ( four or six volumes ) , and within the 16-s mental imagery block ( eight volumes , mental imagery experiment ) , after shifting the data by 4 s ( two volumes ) to compensate for hemodynamic delays ., V1 , V2 , V3 , and V4 were delineated following the standard retinotopy experiment 26 , 27 ., The lateral occipital complex ( LOC ) , fusiform face area ( FFA ) , and parahippocampal place area ( PPA ) were identified using conventional functional localizers 28–30 ( see S1 Supporting Information for details ) ., A contiguous region covering the LOC , FFA , and PPA was manually delineated on the flattened cortical surfaces , and the region was defined as the higher visual cortex ( HVC ) ., Voxels overlapping with V1–V3 were excluded from the HVC ., Voxels from V1–V4 and the HVC were combined to define the visual cortex ( VC ) ., In the regression analysis , voxels showing the highest correlation coefficient with the target variable in the training image session were selected to decode each feature ( with a maximum of 500 voxels ) ., We used the Caffe implementation of the VGG19 deep neural network ( DNN ) model 13 , which was pre-trained with images in ImageNet 14 to classify 1 , 000 object categories ( the pre-trained model is available from https://github . com/BVLC/caffe/wiki/Model-Zoo ) .
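Purely as an illustration of this layer-wise feature extraction, a minimal sketch is shown below. Note the assumptions: the original analysis used the Caffe implementation, whereas torchvision's VGG19 is used here as a stand-in, only the convolutional layers are hooked, and the preprocessing is simplified relative to the description that follows.

```python
# Minimal sketch of layer-wise VGG19 feature extraction (pre-ReLU outputs).
# Assumptions: torchvision's VGG19 stands in for the original Caffe model,
# only convolutional layers are hooked, and preprocessing is simplified.
import torch
from torchvision import models, transforms
from PIL import Image

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # images resized to 224 x 224, as below
    transforms.ToTensor(),
])

features = {}

def save_output(name):
    def hook(module, inputs, output):
        # A Conv2d module's output is taken before the following ReLU,
        # matching the "before rectification" convention described below.
        features[name] = output.detach().flatten()
    return hook

conv_idx = 0
for layer in vgg.features:
    if isinstance(layer, torch.nn.Conv2d):
        conv_idx += 1
        layer.register_forward_hook(save_output(f"conv_{conv_idx}"))

image = preprocess(Image.open("stimulus.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    vgg(image)
print({name: tuple(f.shape) for name, f in features.items()})
```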
The VGG19 model consisted of a total of sixteen convolutional layers and three fully connected layers ., To compute outputs by the VGG19 model , all visual images were resized to 224 × 224 pixels and provided to the model ., The outputs from the units in each of the 19 layers ( immediately after convolutional or fully connected layers , before rectification ) were treated as a vector in the following decoding and reconstruction analysis ., The number of units in each of the 19 layers is the following: conv1_1 and conv1_2 , 3211264; conv2_1 and conv2_2 , 1605632; conv3_1 , conv3_2 , conv3_3 , and conv3_4 , 802816; conv4_1 , conv4_2 , conv4_3 , and conv4_4 , 401408; conv5_1 , conv5_2 , conv5_3 , and conv5_4 , 100352; fc6 and fc7 , 4096; and fc8 , 1000 ., In this study , we named five groups of convolutional layers as DNN1–5 ( DNN1: conv1_1 and conv1_2; DNN2: conv2_1 and conv2_2; DNN3: conv3_1 , conv3_2 , conv3_3 , and conv3_4; DNN4: conv4_1 , conv4_2 , conv4_3 , and conv4_4; and DNN5: conv5_1 , conv5_2 , conv5_3 , and conv5_4 ) , and three fully-connected layers as DNN6–8 ( DNN6: fc6; DNN7: fc7; and DNN8: fc8 ) ., We used the original pre-trained VGG19 model to compute the feature unit activities , but for analyses with fMRI data from the mental imagery experiment , we changed the DNN model so that the max pooling layers were replaced by average pooling layers , and the ReLU activation function was replaced by a leaky ReLU activation function with a negative slope of 0 . 2 ( see Simonyan & Zisserman ( 2015 ) 13 for the details of the original DNN architecture ) ., We used a set of linear regression models to construct multivoxel decoders to decode the DNN feature vector of a seen image from the fMRI activity patterns obtained in the training natural-image sessions ( training dataset ) ., In this study , we used the sparse linear regression algorithm ( SLR ) 31 , which can automatically select important voxels for decoding by introducing sparsity into a weight estimation through Bayesian estimation of parameters with the automatic relevance determination ( ARD ) prior ( see Horikawa & Kamitani ( 2017 ) 10 for a detailed description ) ., The training dataset was used to train the decoders to decode the values of individual units in the feature vectors of all DNN layers ( one decoder for one DNN feature unit ) , and the trained decoders were then applied to the test datasets ., For details of the general procedure of feature decoding , see Horikawa & Kamitani ( 2017 ) 10 ., For the test datasets , fMRI samples corresponding to the same stimulus or mental imagery were averaged across trials to increase the signal-to-noise ratio of the fMRI signals ., To compensate for possible differences in the signal-to-noise ratio between training and test samples , the decoded features of individual DNN layers were normalized by multiplying them by a single scalar , so that the norm of the decoded vectors of individual DNN layers matched the mean norm of the true DNN features . | Introduction, Results, Discussion, Materials and methods | The mental contents of perception and imagery are thought to be encoded in hierarchical representations in the brain , but previous attempts to visualize perceptual contents have failed to capitalize on multiple levels of the hierarchy , leaving it challenging to reconstruct internal imagery ., Recent work showed that visual cortical activity measured by functional magnetic resonance imaging ( fMRI ) can be decoded ( translated ) into the
hierarchical features of a pre-trained deep neural network ( DNN ) for the same input image , providing a way to make use of the information from hierarchical visual features ., Here , we present a novel image reconstruction method , in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers ., We found that our method was able to reliably produce reconstructions that resembled the viewed natural images ., A natural image prior introduced by a deep generator neural network effectively rendered semantically meaningful details to the reconstructions ., Human judgment of the reconstructions supported the effectiveness of combining multiple DNN layers to enhance the visual quality of generated images ., While our model was solely trained with natural images , it successfully generalized to artificial shapes , indicating that our model was not simply matching to exemplars ., The same analysis applied to mental imagery demonstrated rudimentary reconstructions of the subjective content ., Our results suggest that our method can effectively combine hierarchical neural representations to reconstruct perceptual and subjective images , providing a new window into the internal contents of the brain . | Machine learning-based analysis of human functional magnetic resonance imaging ( fMRI ) patterns has enabled the visualization of perceptual content ., However , prior work visualizing perceptual contents from brain activity has failed to combine visual information of multiple hierarchical levels ., Here , we present a method for visual image reconstruction from the brain that can reveal both seen and imagined contents by capitalizing on multiple levels of visual cortical representations ., We decoded brain activity into hierarchical visual features of a deep neural network ( DNN ) , and optimized an image to make its DNN features similar to the decoded features ., Our method successfully produced perceptually similar images to viewed natural images and artificial images ( colored shapes and letters ) , whereas the decoder was trained only on an independent set of natural images ., It also generalized to the reconstruction of mental imagery of remembered images ., Our approach allows for studying subjective contents represented in hierarchical neural representations by objectifying them into images . | medicine and health sciences, diagnostic radiology, functional magnetic resonance imaging, neural networks, applied mathematics, social sciences, light, neuroscience, electromagnetic radiation, magnetic resonance imaging, algorithms, simulation and modeling, optimization, luminance, mathematics, brain mapping, visible light, vision, neuroimaging, research and analysis methods, computer and information sciences, imaging techniques, physics, psychology, radiology and imaging, diagnostic medicine, biology and life sciences, sensory perception, physical sciences | null |
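As a rough sketch of the per-unit feature decoding described in the Materials and methods above, one might combine correlation-based voxel selection (up to 500 voxels), one regression model per DNN unit, and a single per-layer scaling so the decoded vectors match the mean norm of the true features. scikit-learn's ARDRegression is used here only as a stand-in for the SLR algorithm, and the variable names are assumptions:

```python
# Sketch of per-unit DNN feature decoding from fMRI patterns.
# X_*: samples x voxels arrays; Y_train: samples x DNN units for one layer.
# ARDRegression stands in for the sparse linear regression (SLR) above.
import numpy as np
from sklearn.linear_model import ARDRegression

def decode_unit(X_train, y_train, X_test, n_voxels=500):
    # Select the voxels most correlated with this unit in the training data.
    r = np.nan_to_num([abs(np.corrcoef(X_train[:, v], y_train)[0, 1])
                       for v in range(X_train.shape[1])])
    keep = np.argsort(r)[::-1][:n_voxels]
    model = ARDRegression().fit(X_train[:, keep], y_train)
    return model.predict(X_test[:, keep])

def decode_layer(X_train, Y_train, X_test, true_mean_norm):
    decoded = np.column_stack([decode_unit(X_train, Y_train[:, u], X_test)
                               for u in range(Y_train.shape[1])])
    # One scalar per layer: scale so the mean norm of the decoded vectors
    # matches the mean norm of the true feature vectors, as described above.
    scale = true_mean_norm / np.linalg.norm(decoded, axis=1).mean()
    return decoded * scale
```

Looping one regressor per unit is of course impractical at the scale of millions of units; the sketch only illustrates the structure of the procedure.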
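Similarly, the reconstruction step summarized above can be sketched as optimizing pixel values so that the image's DNN features approach the decoded features across layers. The layer weighting, optimizer settings, and the omission of the deep generator network prior are simplifications, and extract_features is assumed to be a differentiable extractor (unlike the hook sketch earlier, it must not detach its outputs):

```python
# Sketch of multi-layer feature matching for image reconstruction.
# decoded: dict layer name -> target feature tensor decoded from fMRI.
# extract_features(image) -> dict of the same layers' features; it must
# keep the autograd graph so gradients reach the pixels (an assumption).
import torch

def reconstruct(decoded, extract_features, steps=200, lr=0.05):
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        feats = extract_features(image)
        # Sum of squared feature errors across the selected DNN layers.
        loss = sum(((feats[k] - decoded[k]) ** 2).sum() for k in decoded)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            image.clamp_(0.0, 1.0)  # keep pixels in a displayable range
    return image.detach()
```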
1,308 | journal.pcbi.1002768 | 2,012 | A Bayesian Inference Framework to Reconstruct Transmission Trees Using Epidemiological and Genetic Data | Predicting the most likely transmission routes of a pathogen through a population during an epidemic outbreak provides valuable information , which can be used to inform intervention strategies and design control policies 1 , 2 ., In principle , studying transmission routes during past epidemics is likely to be broadly informative of how the same pathogens spread through similar populations in future outbreaks ., Estimating a set of connected transmission routes from a single case is synonymous with estimating the transmission tree corresponding to the outbreak ., Uncovering the transmission routes between individual hosts or other relevant infectious units ( for example farms or premises ) can provide valuable epidemiological information , such as the factors associated with source and target individuals , dissemination kernels and transmission modes ., Unfortunately , reconstructing these transmission trees with available data can be an exceptionally hard task , as the problem is typically underdetermined: the precise number of cases is often unknown , and dates and times of infections are rarely known with precision , making it difficult to distinguish between a large number of alternative scenarios 3 ., With knowledge of location and timing of disease incidence it is possible to sample transmission trees that are consistent with the space-time data , and when these samples of trees share emergent statistical or structural properties , they can lead to epidemiological insights ., For example , Haydon et al . 4 generated transmission trees corresponding to the 2001 Foot-and-Mouth Disease Virus ( FMDV ) epidemics in the UK , and used these trees to estimate the reproductive number during different weeks of the epidemic ., These trees could be pruned to investigate the consequences of different or earlier interventions on the final size of the epidemics ., However , the data were consistent with very large numbers of different trees and so the approach was not suited to identifying with confidence “who infected who” ., For pathogens with high mutation rates that fix mutations across their genome during the course of a single outbreak , genetic data can provide critical additional information regarding the relationships between isolates ., The last few years have witnessed a revolution in our ability to generate genomic data relatively cheaply and in an automatised fashion 5 ., Pathogen genome sequences collected during epidemics , if sufficiently diverse , can then be used to discriminate between alternative transmission routes ., Several attempts to reconstruct transmission pathways have tried to combine genetic and other epidemiological data , many by adding spatial or temporal information to the process of phylogenetic reconstruction 6–11 ., However , Jombart et al . point out that a “phylogenetic” approach attempts to infer hypothetical common ancestors among the sampled genomes , and may not be appropriate for a set of genomes containing both ancestors and their descendants 12 ., Cottam et al . 13 identified a large set of transmission trees that were consistent with available genetic data , and ranked the likelihood of these trees using data on their relative timings , to find the most likely transmission tree ., Ypma et al . 
14 moved this approach forward by constructing an inference scheme that uses spatial , temporal and genetic data simultaneously , but assumed these data are independent of each other ., Genetic and epidemiological data are evidently correlated , and a rigorous inference scheme should estimate the likelihood of a transmission tree accounting for these correlations ., In this work , we present a novel framework , based on a Bayesian inference scheme , able to reconstruct transmission trees and infection dates of susceptible premises , coherently integrating genetic and spatiotemporal data with a single model and likelihood function ., Our scheme uses epidemiological data ( times of reporting and removal from the susceptible population of infected , spatially-confined hosts , their locations , and estimates of the age of an infection based on clinical signs ) together with pathogen sequences obtained from infected hosts to estimate transmission trees and infection dates during outbreaks ., The genetic information is incorporated by considering the probability distribution of the number of substitutions between sequences during the time durations separating them , and computing the likelihood of observing these sequences for a given transmission tree and the estimated infection dates ., Each host generates an isotropic infectious potential responsible for transmission between hosts , whose strength is estimated from the data; the dynamical progression of the disease , from latency to infectiousness , is part of the estimation scheme ( for a visual representation see Fig . 1 ) .
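The exact genetic likelihood is defined in the Materials and Methods; purely as an illustration of the idea just stated, the number of substitutions separating two sampled sequences can be scored against the time separating them under a simple Poisson substitution model (the model choice and parameter names here are assumptions):

```python
# Illustrative scoring of genetic similarity between a putative
# infector-infectee pair under an assumed Poisson substitution model.
from math import log, lgamma

def hamming(seq_a, seq_b):
    return sum(a != b for a, b in zip(seq_a, seq_b))

def log_poisson_pmf(k, mean):
    if mean <= 0.0:
        return 0.0 if k == 0 else float("-inf")
    return k * log(mean) - mean - lgamma(k + 1)

def genetic_log_likelihood(seq_a, seq_b, elapsed_days, mu, genome_length):
    # mu: substitution rate per nucleotide per day.
    expected = mu * genome_length * elapsed_days
    return log_poisson_pmf(hamming(seq_a, seq_b), expected)
```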
As an illustration of the method , we concentrate on the case of FMDV , an infectious disease affecting cloven-hoofed animals , which has severely affected the UK in 2001 and , on a smaller scale but still contentiously , in 2007 ., The infectious agent is a single-stranded , positive-sense RNA virus , belonging to the genus Aphthovirus in the Picornaviridae family , and its small genome ( 8 . 2 kb ) is easily sequenced ., Its high substitution rate ( per nt per day , as measured over part of the 2001 UK epidemic 13 ) implies that the number of mutations accumulated during infection of host individuals on a single premise is sufficient to be reasonably confident of distinguishing between infected premises ., Upon infection by FMDV , a host individual first experiences a non-infectious latent period with lesions appearing on peripheral epithelia subsequently ., The virus can spread through aerosol dispersal , on fomites , or through direct contact ., Importantly , a visual exam of the clinical state of the lesions on infected hosts can provide valuable information about the age of the infection ., For this application , premises comprising populations of spatially-confined hosts will be considered as the unit of infection ( the centroids of premises will be used as geographical coordinates ) , and complete FMDV genomes sampled from each premise will be used for the inference; the removal of a premise from the population corresponds to its culling ., As the time course of FMDV infection within an individual host follows empirically characterised distributions 13 , when transmission events are inferred between premises infected at very different times and therefore with correspondingly long and unrealistic apparent latency durations , we interpret these as an indication of the presence of one or more unsampled infected premises that epidemiologically linked the observed premises ., After testing our method on simulated data , we considered two real datasets from two different FMDV epidemics: the 2007 UK epidemic ( 8 premises ) 15 and the Darlington cluster within the 2001 UK epidemic ( 15 premises ) 13 ., For the former case , we confirmed the role of IP5 as the link between the two phases of the epidemics , whereas for the latter , our scheme highlights the presence of premises outside our sample that were part of the transmission process ., While in this paper we discuss results related to FMDV , our method is in principle general and can be applied to epidemics generated by other pathogens , for which genetic and epidemiological data are both available ., Prior to applying our method to real data , we first used our model to simulate data for an outbreak infecting 20 premises whose locations are known in a 22×11 km area ., The model was fitted to the observable data , that is , for each premise , the time at which the virus was detected , an 8000 bp DNA sequence sampled from it , an assessment of the lesion age , and the time at which the premise was culled ( see Fig . 1 for a visualisation ) ., More information on this dataset can be found in Text S1 ., In Fig . 2 ( top left ) , the size of the dots corresponds to the posterior probabilities of pairwise transmissions , while the circles represent the true transmissions as they occurred in the simulation ., Fig . 2 ( top right ) shows the tree with highest posterior probability ., We note that only one true transmission is not reconstructed accurately , with the algorithm instead identifying a different source ., However , the true transmission has a high posterior probability and is included in the tree with the second highest posterior probability ( see Fig . S2 ) ., The posterior probabilities for the mean latency duration and the mean transmission distance include the true values in the 95%-posterior intervals ( bottom panels of Fig .
2 ) ., Posterior distributions for other model parameters and latent variables are provided in Figs . S3 and S4 ., In order to test our method for a large dataset , we considered an upscaled simulation of an outbreak infecting 100 premises ., Results are described in Text S1 ., Having established the validity of the inference scheme , we applied it to a dataset corresponding to the 2007 outbreak of FMDV in the UK , which infected 8 premises in Surrey and Berkshire 15 ., Genetic sequences and epidemiological data collected on each premise are available in Datasets S1 and S2 , respectively ., The most likely reconstructed scenario ( Fig . 3 , top right ) comprises two phases: IP1b was infected by an external source , and transmitted the virus to the neighbouring premise IP2b and to IP5 further away; the virus remained contained and undetected on IP5 until it spread to a nearby premise IP4b; finally the virus spread from IP4b to the other premises ., While the link made by IP5 between the two phases is highly supported , the estimation of the other transmissions was more uncertain: within the two clusters ( IP1b , IP2b , IP5 ) and ( IP5 , IP4b , IP3b , IP3c , IP6b , IP7 , IP8 ) several other transmission scenarios have non-negligible posterior probabilities ( Fig . 3 , top left and Fig . S5 ) ., The mean estimated latency duration has a posterior median of 14 days and a 95%-credible interval of ( 6 , 49 ) ( as shown in Fig . 3 , bottom left ) ; the long delay between the infection of IP5 and the subsequent transmissions is responsible for this result ( posterior distributions of latency durations of every premise are shown in Fig . S7 ) ., The long distance between IP5 and its source ( IP5 is 18 . 2 km away from IP1b ) explains the large mean transmission distance ( Fig . 3 , bottom right ) , whose posterior median is 17 km and 95%-posterior interval is ( 5 , 58 ) ., Posterior distributions of other model parameters and latent variables are provided in Figs . S6 and S7 , while a phylogenetic tree , based on statistical parsimony and implemented in the software package TCS 16 , is represented in Fig . S14 ., For a more complex scenario , we considered the FMDV epidemic that occurred in the UK in 2001 , and in particular a group of 12 premises within the so-called “Darlington cluster” ( Durham county ) , for which one virus sequence per premise is available 13 ., This spatial cluster comprises 3 additional premises that were not epidemiologically linked to the rest of the cluster and which we exclude ( we discuss the choice of the subgroup of premises in the Text S1 ) ., Genetic sequences and epidemiological data for this cluster can be found in the Datasets S3 and S4 , respectively ., Our method allowed us to reconstruct a transmission scenario with little ambiguity , accounting for over 99% of the posterior probability , where premise K plays the role of a hub and only two chains of transmissions of length greater than two are found ( Fig .
4 , top panels ) ., When premises become infectious approximately at the same time , they have a very low probability of mutual infection , even if the collected genomes are very close and share substitutions ( premises M and D , or L and E , for example ) ., Premise K , on the other hand , became infectious very early on and is then estimated to have seeded the infection to the many premises that were observed at later times ., Interestingly , some premises infected by the hub share mutations that are not found on the other premises , suggesting that different unsampled strains evolved on the hub and went on to infect distinct clusters of farms ( see the statistical parsimony network in Fig . S14 ) ., However , another hypothesis can be formulated: the virus fixed the common substitutions while replicating on an unsampled premise , which constitutes a missing node in the transmission tree ., This “ghost premise” went on to infect the premises we observed ., The missing node scenario is supported by the distribution of the mean latency duration estimated for this dataset , which has a median of 24 days , and a 95%-posterior interval of ( 17 , 35 ) ( Fig . 4 , bottom left ) ., These values are inconsistent with a typical latency period of FMDV of 5 days ( 95% confidence interval of 1–12 ) 17–19 ., In particular , the premises infected by the hub all display high mean latency values ( Fig . S11 ) ., We propose that these unrealistically long latency periods indicate the existence of missing premises intermediate in the chain of infection and so , in our model , latency should be considered as an aggregated parameter , corresponding to the sum of the real latent period and the time the virus spent on the unsampled premise ., We will return to this point in the Discussion ., The comparison of our results with those found by Cottam et al . on the same dataset 13 highlights that our method strengthens the role of infecting hubs in the network ( premise K ) , and therefore infers a lower number of long transmission chains ., Details about the individual differences between the most likely trees inferred by the two methods can be found in Text S1 , while transmission trees with higher posterior probabilities and posterior probabilities of other parameters can be found in Figs . S9 and S10 ., The estimates of the transmission kernel for the two real data sets are similar: the 95%-posterior intervals of the mean transmission distance overlap , ranging from 5 to 58 km for the 2007 outbreak and ranging from 9 to 72 km for the 2001 epidemic ( Figs . 3 and 4 , bottom right panels ) ., On the other hand , the posterior distributions we obtained are related to the range of distances covered in the data sets ( up to about 24 km for 2007 and 16 km for 2001 ) , and cannot be used to extrapolate long distance transmission events: despite the large values of the mean transmission distance , the lengths of the average inferred transmission in the trees with the highest posterior probabilities are 4 . 3 km for the 2007 outbreak and 5 .
8 km for the 2001 epidemic ., In the inference scheme , we used vague priors for model parameters ., When we estimated the interval from the end of latency to detection , however , we used a more informative prior , centered over the estimated lesion age ( Eq . ( 8 ) in Materials and Methods ) ., We investigated the effect on the most likely transmission tree of ( i ) using a flatter prior ( thus believing less than we did previously in the veterinarian assessment ) and ( ii ) using a more peaked prior ( thus believing in it more ) ., The trees are illustrated in Fig . S12 , and the priors in Fig . S13 ., For the 2007 outbreak , the tree differed only by one transmission in case ( i ) , and by three transmissions in case ( ii ) ., Remarkably , in all cases , the identification of the link between the two phases in IP5 maintained a posterior probability of one ., For the 2001 epidemic , the star-like shape ( with K as a hub ) of the tree was strengthened in case ( i ) , where premise K now infected 9 premises , while more chains of length greater than two were inferred in case ( ii ) ., Constraining the inference less around the estimates of the lesion ages relaxes the timing constraints and increases the weight accorded to genetic similarity in the transmission inference ., As a result , transmissions mirror more closely the phylogenetic structure of the dataset , leading to a reduced hub role of premise K . In conclusion , we remark that the tree structure is robust and does not crucially depend on the specific choice of the prior for the values of the time intervals between the end of latency and detection ( lesion ages ) ., Our method relies on one approximation: we do not reconstruct the genomes transmitted at the times of infection , and therefore we obtain a pseudo-posterior probability for the genetic data , where the similarity between isolates only depends on the Hamming distance between the sequences , and not on the full genetic network ( see Materials and Methods for details ) ., We checked whether the use of a pseudo-posterior distribution led to appropriate inference by applying the estimation algorithm to three series of 100 simulations ( one for the test outbreak and two for the FMDV datasets ) generated using our model ., For the first series , we used the parameter values that were used in the test simulation ., For the two other series , we used the posterior medians of the parameters estimated previously ., We were especially interested in the fraction of correctly predicted pairwise transmissions: for each premise , between 79% and 93% of the simulations reproduced the source with the highest posterior probability in the original inference ( Table 1 ) ., Given the challenging nature of the data sets ( closely spaced premises becoming infectious almost simultaneously in the test data , and an abnormally long period of time between infection and transmission between two waves of infection in the 2007 data ) , these results suggest the approximation is performing well ., Moreover , the mean of the posterior probability of each true transmission ( the proportion of iterations in the chain at which a premise is infected by the estimated source ) is also reproduced in about 80% of the cases ., Performances vary slightly across datasets depending on the characteristics of the epidemics ( e . g .
number of premises and parameter values ) , but are broadly compatible ., For example , in the second phase of the 2007 outbreak , several scenarios have high posterior probabilities , lowering the fraction of correctly estimated transmissions ., Further performance estimators are listed in Table S1 ., We propose here a new Bayesian inference scheme , with which we estimate transmission trees and infection dates for an epidemic outbreak using genetic and epidemiological data ., Our scheme is general , and with slight modification can be applied to rapidly evolving pathogens affecting spatially-confined hosts ., To illustrate how this approach can be used to generate new insights and deliver statistically formal measures of confidence ( in particular transmission links ) , we applied it to the case of an RNA virus ( FMDV ) infecting premises whose spatial location is known ., The knowledge of complete viral sequences , timing of reporting and culling of premises and estimates of the age of an infection made this case an ideal benchmark ., After testing our method on simulated data ( 20 premises ) , we applied it to two pre-existing datasets: the still disputed 2007 FMDV outbreak in the UK ( 8 premises ) 15 and the Darlington cluster within the larger 2001 epidemic ( 12 premises ) 13 ., The method proved successful in reconstructing the transmission network on the test dataset , and highlighted the role of IP5 as a relay between the two phases of the 2007 outbreak ., The results for the Darlington cluster are intriguing , as they highlight the likely incompleteness of the dataset , and suggest the presence of unobserved premises in the transmission tree ., The performance of the algorithm was evaluated through simulations , which showed the inference scheme to be consistent and accurate and able to deal successfully with clusters of infections ., The power of this inference platform relies on a number of simplifying assumptions ., In this application we have made two in particular that require further consideration ., The first postulates that the epidemics are generated by a single introduction of the pathogen to a single premise ., While this may often be adequate for small or early stage outbreaks , it is likely to be inadequate for more complex cases ., For example , the Darlington dataset is a small subset of the 2001 epidemic , in which it was first considered to be an isolated cluster of infected premises ., Previous analysis on the whole cluster 13 demonstrated two independent introductions ., Trying to estimate “polyphyletic” transmission trees assuming only a single root would strain this formulation of the model and lead to unrealistic results ., In order to solve this problem , the MCMC should be able to explore a parameter space where independent introductions range from one to the number of the premises ( each of them being independently infected by an external source ) and compute their likelihood ., Moreover , the genetic data can be used to discriminate between a situation where a single external source infects several spatially-confined hosts in a cluster , and the presence of multiple external sources , characterised by distinct genomes ., In practice , we could proceed by ( i ) describing the external source ( s ) as a set of genetic sequences varying in time ( and possibly in space ) , ( ii ) specifying the probability of transmission of the infection from the external source ( s ) to any of the premises and ( iii ) updating the transmission tree at each iteration of
the MCMC by comparing this probability with the probability of transmission from one of the infectious premises in the cluster considered ., The second assumption is that the epidemic has been completely observed and that there are no missing nodes in the transmission tree ., When this assumption is likely to be violated , as in the case of the Darlington cluster , our method inferred unrealistically long latency times for some premises , an indication that a missing intermediate infected premise , where virus might have replicated extensively , may have been involved in the transmission chain ., This situation is particularly likely in large epidemics , where perfect knowledge of every case is unlikely , or in epidemics arising in areas or countries where host or premise identification is ambiguous and comprehensive collection of data not feasible ., In the 2007 outbreak , where no infected premises were missing , the premise linking the two phases showed a mean latency duration of over 25 days ., In this case , the observation results from the real time the virus spent on the farm prior to its detection and reporting: by the time it was observed , the animals had started to heal and dating the lesions was more difficult ., The long latency times could also account for the time the virus spent in a non-replicative state ( e . g . on fomites ) : this case would be indicated by a slow rate of evolution on the premise where the virus is observed ., In conclusion , extended latency times are valuable “alarm bells” , as they suggest a discrepancy between the observations and the actual course of the disease ., A substantial improvement to the scheme would be to include in the inference additional sources of data , such as the locations of premises that may have maintained infections that were not detected , or premises that were infected but were removed prior to being confirmed as infected ., We leave this development for future work ., We only mention here that the solution given in the paragraph above to deal with multiple introductions could be adapted to deal with missing premises: any infectious premise could generate a set of genetic sequences describing possible missing premises ., This set of sequences could then be used to compute a new probability of transmission from missing premises , to be compared with the probabilities of transmission from internal and external sources ., We leave this for future work ., Other minor assumptions in our model can be readily eased ., We hypothesized that all premises have the same infection potential; however , it would be straightforward to make the infectiousness parameter in the model a function of the specific characteristics of the premise , like size or composition ( for example , for FMDV sheep are considered to be less infectious than cows , which are in turn less infectious than pigs 17 ) ., Moreover , we note that the infectious potential felt by a premise at a given time is the sum of the contributions deriving from all the other premises that are infectious at that particular time ., As unsampled premises could also contribute to this potential , the temporal dynamics of infection could be modeled in a more complex manner than the step function adopted here ., The estimation of the age of an infection from clinical signs is used as a prior distribution in our scheme: an accurate knowledge of this quantity makes the inference computationally more efficient , but it is not essential , and the method can be applied to cases where this quantity is not
available ., The model used for the mutations of the virus is very simple and does not account for the specific characteristics of the FMDV genome , or for some well-known mutation biases ( like the transition/transversion bias observed in 20 ) : we decided once more to go for the simplest and most general assumption , while more detailed and pathogen-specific mutation models could easily be incorporated in our framework ., Our “hosts” do not necessarily correspond to single animals/humans but were interpreted in a wider sense as “infectious units” ., These units do not constitute a limitation to our method: even in the case of an infection where the units are individuals , the genetic divergence between sequences results from an unknown number of viral replications in the donor individual post sampling ( but prior to transmission ) and in the recipient prior to sampling ., In the case of a higher-order unit of infection , the genetic divergence between sequences from sequential samples will be just the result of a larger unknown number of generations ., It is conceivable that multiple pathogen strains circulating on a single premise remained unsampled and went on to infect other premises ., For example , FMDV is known to generate independent populations within single animals 20 and different genomes could circulate on a premise ., Ideally , several sequences from each premise should be obtained and these data incorporated into the model ., Finally , for the specific pathogen considered here , we have used a fixed substitution rate for both the Darlington cluster and the 2007 outbreak ., Independent estimates obtained for the whole 2001 epidemic 21 and for the 2007 outbreak yield very similar values , which do not change substantially the likelihoods of observing the sequenced genomes ., In other applications , the substitution rate may be poorly known ., In these cases , it could be viewed as an unknown parameter and estimated in the MCMC simulation ., Computation time is a key element for a method that is expected to be useful in real-time during an outbreak ., The computation time was strongly reduced by using a conditional pseudo-distribution of observed sequences instead of the exact conditional distribution ., Clearly , it would be ideal to run the Bayesian estimation using the exact conditional distribution of observed sequences ., To do so , one could incorporate in the MCMC the unknown transmitted genetic sequences as augmented data ( see Eq . ( 3 ) below ) , initialize them using for example statistical parsimony 16 and determine a proposal distribution for them based on a stochastic algorithm estimating genetic networks 22 ., Unfortunately , this strategy is at present unfeasible on standard computing resources ., However , despite the use of a pseudo-distribution , the running time of our inference algorithm strongly increases with the number of premises ., We stress that the main focus of this work was to combine epidemiological and genetic data in a coherent framework , rather than producing an optimised code ., Basic optimization procedures should dramatically increase the efficiency of the code ., In particular , we suggest three directions worth pursuing: ( i ) use a conditional pseudo-distribution of the genetic sequences which can be computed faster but still yields a good approximation of the posterior distribution of the unknowns; ( ii ) parallelize the MCMC 23 and code it in a lower-level language; and ( iii ) use alternative algorithms , such as sequential Monte Carlo 24 ., Our
Bayesian inference scheme is a rigorous general platform on which different models can be implemented and tested ., It is a useful tool that could be used in real time to detect the presence of missing links in inferred chains of transmission , and to assign confidence values to each inferred transmission event ., The specific model we chose for FMDV contains a representation of the dynamics of FMD infections ., Different models could be implemented to describe the dynamics of different pathogens , or the specific characteristics of a particular outbreak , while still maintaining rigorous estimation based on genetic and epidemiologic data ., Previous work was initiated by Cottam et al . 13 , and significantly extended by Jombart et al . 12 and Ypma et al . 14: all these studies considered the likelihood of the transmission tree given temporal , spatial and genetic data ( here denoted by the generic vectors $t$ , $s$ and $g$ ) as a product of three independent likelihoods ., Cottam et al . assumed a binary genetic likelihood and a uniform spatial likelihood ( their estimation does not depend on the location of the premises ) ; Jombart et al . designed a less “ad hoc” approach by introducing a maximum parsimony strategy to weight genetic similarity , while spatial and temporal information were considered only when several possible ancestors were genetically indistinguishable; finally Ypma et al . had more complicated forms for these likelihood functions ., Our method can be considered as the “next step” on this road , as we relax the assumption of independence between the information sources , and we estimate the likelihood of transmission trees given all the sources of information simultaneously ., Although some specific aspects of our inference scheme can be refined , expressing the likelihood of a transmission tree as a joint likelihood , depending on both epidemiological and genetic data , significantly advances this form of analysis ., The test data sets analyzed in the Results section were simulated under the model presented below and in Text S1 ., In these data sets , the outbreak spread over 20 premises ( F1 , … , F20 ) , randomly and uniformly located in a rectangular 20×10 km region ., The values of the transmission and latency parameters , the length of the observed sequences , and the substitution rate used in these simulations are specified in Text S1 ., In Text S1 , we also analyzed an upscaled test data set with 100 premises , with the same premise density and the same parameter values as above ., The data corresponding to the 2007 FMDV outbreak in the UK and to the Darlington cluster within the 2001 epidemic can be found in Refs . 15 and 13 , respectively , and are included in the Datasets S1 , S2 , S3 and S4 ., In particular , the FMDV sequence length and the substitution rate ( per nt per day ) were taken from 13 ., Consider a cluster of $n$ infected hosts ( in this case premises ) whose centroids are located at Longitude-Latitude coordinates $x_1 , \ldots , x_n$ ., Let $\phi$ be the function defining the transmission tree: a given premise $i$ is infected by a source $\phi ( i )$ , which consists of either another premise $j$ , or an external source denoted by 0 ., For each premise $i$ , we consider four timing variables as illustrated by Fig . 1: premise $i$ is infected by $\phi ( i )$ at time $t_i$ , is infectious at time $t_i + \lambda_i$ , where $\lambda_i$ is the latency duration for premise $i$ , is detected as infected at time $d_i$ and is removed from the infectious population at time $r_i$ ., The duration from infectiousness to detection , $d_i - ( t_i + \lambda_i )$ , is assessed by experts on the basis of clinical signs: let $\hat{\delta}_i$ denote this assessment .
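To make these quantities concrete, here is an illustrative sketch of the infectious potential felt by a susceptible premise: the sum of isotropic kernel contributions from every premise that is infectious at that moment. The exponential kernel, the planar distance approximation, and the parameter names are assumptions made only for this sketch; the model's exact form is given in the text:

```python
# Sketch of the infectious potential felt by premise i at time t, summing
# contributions from premises that are infectious (past latency, not yet
# removed). Kernel form and parameters are illustrative assumptions.
import math

def kernel(distance_km, beta, delta):
    # Isotropic potential: overall strength beta, decay scale delta (km).
    return beta * math.exp(-distance_km / delta)

def infectious_potential(i, t, coords, t_inf, latency, t_removed, beta, delta):
    total = 0.0
    for j in coords:
        if j == i:
            continue
        if t_inf[j] + latency[j] <= t < t_removed[j]:
            (x1, y1), (x2, y2) = coords[i], coords[j]
            total += kernel(math.hypot(x1 - x2, y1 - y2), beta, delta)
    return total
```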
At time , the pa | Introduction, Results, Discussion, Materials and Methods | The accurate identification of the route of transmission taken by an infectious agent through a host population is critical to understanding its epidemiology and informing measures for its control ., However , reconstruction of transmission routes during an epidemic is often an underdetermined problem: data about the location and timings of infections can be incomplete , inaccurate , and compatible with a large number of different transmission scenarios ., For fast-evolving pathogens like RNA viruses , inference can be strengthened by using genetic data , nowadays easily and affordably generated ., However , significant statistical challenges remain to be overcome in the full integration of these different data types if transmission trees are to be reliably estimated ., We present here a framework leading to a Bayesian inference scheme that combines genetic and epidemiological data , able to reconstruct the most likely transmission patterns and infection dates ., After testing our approach with simulated data , we apply the method to two UK epidemics of Foot-and-Mouth Disease Virus ( FMDV ) : the 2007 outbreak , and a subset of the large 2001 epidemic ., In the first case , we are able to confirm the role of a specific premise as the link between the two phases of the epidemics , while transmissions more densely clustered in space and time remain harder to resolve ., When we consider data collected from the 2001 epidemic during a time of national emergency , our inference scheme robustly infers transmission chains , and uncovers the presence of undetected premises , thus providing a useful tool for epidemiological studies in real time ., The generation of genetic data is becoming routine in epidemiological investigations , but the development of analytical tools maximizing the value of these data remains a priority ., Our method , while applied here in the context of FMDV , is general and with slight modification can be used in any situation where both spatiotemporal and genetic data are available .
| In order to most effectively control the spread of an infectious disease , we need to better understand how pathogens spread within a host population , yet this is something we know remarkably little about ., Cases close together in their locations and timing are often thought to be linked , but timings and locations alone are usually consistent with many different scenarios of who-infected-who ., The genome of many pathogens evolves so quickly relative to the rate at which they are transmitted that even over single short epidemics we can identify which hosts contain pathogens that are most closely related to each other ., This information is valuable because when combined with the spatial and timing data it should help us infer more reliably who-transmitted-to-who over the course of a disease outbreak ., However , doing this so that these three different lines of evidence are appropriately weighted and interpreted remains a major statistical challenge ., In our paper we present a new statistical method for combining these different types of data and estimating trees that show how infection was most likely transmitted between individuals in a host population ., Because sequencing genetic material has become so affordable , we think methods like ours will become very important for future epidemiology . | systems biology, computer science, computer modeling, veterinary epidemiology, ecology, evolutionary modeling, theoretical ecology, biology, computational biology, veterinary science | null |
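Before moving to the next paper, here is a toy sketch of the kind of Metropolis-Hastings move over transmission trees that such a scheme relies on: propose a new source for one premise and accept with the usual ratio. The function log_posterior, standing in for the joint spatiotemporal-genetic posterior, is hypothetical, and the real sampler also updates infection dates and other latent variables:

```python
# Toy Metropolis-Hastings update of a transmission tree.
# tree: dict premise -> source (0 denotes the external source).
# log_posterior(tree, data) is a hypothetical stand-in for the joint
# posterior of the tree given epidemiological and genetic data.
import math, random

def mh_tree_step(tree, data, log_posterior):
    i = random.choice(list(tree))
    candidates = [j for j in list(tree) + [0] if j != i and j != tree[i]]
    proposal = dict(tree)
    proposal[i] = random.choice(candidates)
    # The proposal is symmetric, so the acceptance ratio reduces to the
    # ratio of posterior densities.
    log_ratio = log_posterior(proposal, data) - log_posterior(tree, data)
    if math.log(random.random() + 1e-300) < log_ratio:
        return proposal  # accept
    return tree          # reject
```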
1,748 | journal.pcbi.1003109 | 2,013 | A Family of Algorithms for Computing Consensus about Node State from Network Data | A goal of many network studies ( e . g . 1–4 ) is to predict the effects of perturbations , such as extinction and predation events , on network structure ., Making these predictions requires information about network connectivity ( e . g . is the network scale-free , exponential , etc . ) ., When the connectivity is non-uniform , it is also important to quantify variation at the node level in order to identify nodes that , if removed , are likely to negatively impact network stability ., This is well recognized and many useful methods have been developed for measuring this variation 1–2 , 5–14 in a range of networks , including the world-wide web 5 , food webs describing trophic interactions 1 , 2 , networks of interactions between genes and proteins 6–10 , and social networks , in both animal and human societies 11–16 ., Patterns of connectivity can also influence node function in the larger system of which the network is a part ., For example , in previous work on the behavioral causes of multi-scale social structure in primate societies 14 , 17–22 it was found that group consensus about an individual's ability to win fights – its social power ( see Sec . Primate communication network ) – is population coded in a status signaling network ., In this system , individuals use subordination signals to communicate to adversaries that they perceive themselves to be the weaker opponent ., The signals are often repeated and are always unidirectional ( emitted by one individual in a pair but not the other ) ., A single signal indicates that the sender perceives the receiver capable of using force successfully against him ., The frequency of signals emitted ( over some defined time period ) indicates the strength of the sender's perception that the receiver can successfully use force against him ., In the work cited above it was demonstrated that consensus in the group about an individual $i$ 's ability to successfully win its fights can be calculated by quantifying uniformity in the weighted in-degree distribution of signals sent to $i$ by its senders and weighting this score by the total number of signals received ( this calculation is described in Sec . Shannon consensus ) .
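A minimal sketch of this entropy-based score is given below, with the signal matrix S defined so that S[j, i] counts signals sent from j to i; the exact normalization used in Sec. Shannon consensus may differ, so the weighting shown here is an assumption:

```python
# Sketch of an entropy-weighted consensus score: uniformity of the signals
# node i receives across its senders, weighted by the total number of
# signals received.
import numpy as np

def shannon_consensus(S):
    scores = np.zeros(S.shape[1])
    for i in range(S.shape[1]):
        received = S[:, i].astype(float)
        total = received.sum()
        if total == 0:
            continue
        p = received[received > 0] / total
        entropy = -(p * np.log(p)).sum()   # uniformity across senders
        scores[i] = entropy * total        # weighted by signal volume
    return scores
```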
The resulting score for $i$ may not be the preferred score for $i$ of any specific group member , but can be said to reflect the group's collective view about how good $i$ is at winning fights ., Correspondingly , the rank order associated with the distribution of scores in the population might not match the preferred rank order of any single individual , but as the outcome of integrating over all of the individual opinions , it can be said to be the consensus social power rank order ., The data indicate that individuals can estimate their own social power and also know something about how others in the group are collectively perceived 17 , 20 , 23 , 24 ., Consequently , social power is informative about the likely cost of interaction when interactions are not strictly pair-wise ( a common feature of these systems and the reason why a consensus-based definition is important ) 17–19 ., Under heavy-tailed power distributions , in which a few individuals are disproportionately powerful , conflict management mechanisms like third-party policing ( a critical social function ) can emerge and are performed by nodes in the tail of the power distribution 14 , 20 ., Policing is an important social function because by controlling conflict it facilitates edge building by nodes in the signaling as well as other social networks 18 , 20 ., These results suggest that ( 1 ) network structure can encode node function and that ( 2 ) measures that quantify agreement in node connectivity patterns can be used to decode this population coding of node function ., In Table 1 we give several examples of other networks in which node function might also be population coded and consensus estimation could be useful for identifying important nodes ., In principle , consensus about node state or function can be quantified by measuring the uniformity of a node's weighted in-degree distribution 14 , as in the above example , by measuring the “flow” into and out of a node ( depth ) , or using simple counts ., To capture these competing notions of consensus , we introduce a variety of alternative information theoretic , diffusion , and count algorithms that capture breadth and depth to different degrees , and so serve as hypotheses about how functional variation in nodes is encoded in interaction networks via consensus ., The algorithms take an interaction network as input and produce a vector of scores for the nodes in the network as output ., We interpret the score of node $i$ as the collective opinion , or consensus , about its state or its capacity to perform a given behavior ., We note that the algorithms only quantify agreement in the connectivity patterns; what the consensus is about – node state – depends on the type of interactions in the interaction network ., For example , in the work on power in primate societies mentioned above , the interaction matrix contained directed subordination signals ., These signals have special properties that allow them to reliably encode information about the ability to win fights , which is the basis of power 19 ., We discuss the importance of the interaction matrix for the interpretation of consensus in greater detail in Sec . Background and motivation ., After introducing the algorithms , we compare their mathematical properties , and in a few cases , establish approximate equivalence ., We introduce three data sets that we use to empirically evaluate how well the output of the algorithms predicts node function out of sample ., We investigate the properties of these algorithms that
The data sets include a status communication network in a primate society , a network of collaborating condensed matter physicists from a prominent journal , and a functional linkage network of yeast genes that influence viability and growth ., Finally , we assess the sensitivity of the algorithms to systematic error at the node level and strategic manipulation of the network by nodes or small sub-sets of the network ., Here we consider one additional algorithm , the Borda count , for computing consensus on networks ., The Borda count is an algorithm that is traditionally used to determine the outcome of an election ., Each member of a voting population ranks the candidates of the election ., This is analogous to each individual in a primate group emitting signals to others in accordance with whom they perceive as more or less likely to use force successfully ., The Borda count aggregates these preferences into one ranking over the candidates ., Supposing that there are n candidates , each voter gives n votes to his highest preference , n−1 to his next highest choice , on down to one vote to his least favorite candidate ., A voter can rank candidates equally , and the candidates' votes in this case are the average of the numbers of votes they would have received were they not tied ., A candidate's score is the sum of his votes from each voter ., In the signaling case , the receiver of the most signals from a given individual will receive n “votes” and the receiver of the fewest signals from that individual will receive one “vote” ., In unweighted networks , each individual “voter” divides the group into nodes with whom he does or does not interact , giving the same number of votes to the individuals in each class ., Mathematically , we define a rank matrix whose entries record the rank that each voting node assigns to each other node , with a voter's highest preference receiving the largest rank and its lowest preference receiving rank one , and the vector of Borda consensus scores is obtained by summing each node's ranks over all voters ( a sketch of this computation follows below ) ., The Borda count is more coarse-grained than the total frequency of interactions received because information about the number of interactions received is lost and only the ordinal ranking of nodes by the number of interactions received is used ., It does , however , convey information about agreement among interaction partners ., If we find that a node has a high score under the Borda count , this indicates that many other nodes rank the receiver highly and agree about its relative value to them ., Hence like Shannon Consensus it should be intrinsically sensitive to certain kinds of bias in the interaction matrix ( see Sec . Empirical Comparison for further discussion ) .
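The sketch below is a minimal Python rendering of the Borda computation just described, using scipy's `rankdata` to handle tied candidates by averaging their votes. The interaction matrix `W` is a toy example, and self-interactions (the zero diagonal) are not treated specially, an assumption made here since the text does not say how a voter ranks itself.

```python
import numpy as np
from scipy.stats import rankdata

def borda_consensus(W):
    """Borda count over an interaction matrix: each row i ranks the other
    nodes by how many interactions i sends them; ties receive the average
    of the votes they would have split. W[i, j] = interactions from i to j."""
    n = W.shape[0]
    votes = np.zeros(n)
    for i in range(n):
        # rankdata gives 1 vote to i's least-preferred receiver and n to the
        # most-preferred, averaging tied ranks exactly as described above.
        votes += rankdata(W[i])
    return votes

W = np.array([[0, 5, 1],
              [3, 0, 4],
              [0, 2, 0]])                       # toy directed interaction matrix
print(borda_consensus(W))
```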
All of the algorithms we compare provide some measure of consensus in a network about the state of a given node , such that we expect they are positively correlated ( these data are presented in Sec . Basics of data set ) ., In fact , we can describe these correlations by deriving mathematical relationships between some of the algorithms ., The mathematical relationships between the breadth algorithms are easiest to see: they follow directly from the algorithms' definitions and a simple theorem relating entropy to in-degree ., Writing out the definitions of the breadth algorithms side by side makes several of these relationships immediate ., If the network is unweighted , then each of the breadth algorithms can be written as a function of in-degree alone , up to a factor that is constant across nodes and depends only on the total number of edges in the network ., In this case , the rankings generated by these algorithms will be the same , although the actual values will be different ., Eigenvector Centrality can be related to the redistribution probabilities and in-degree ., Recall that the stochastic transition matrix gives the probability of walking from one node to another , and that Eigenvector Centrality is defined as the leading eigenvector of this matrix ., Because the matrix is stochastic , its rows sum to one , and upper and lower bounds on the Eigenvector Centrality scores can be derived in terms of a node's in-degree and the redistribution weight ., This bound gives an indication of how Eigenvector Centrality is related to the number of interactions received and the redistribution weights used in the calculation ., As we increase the redistribution weight , the minimum possible Eigenvector Centrality score increases ., In general , nodes that engage in more interactions and that interact with nodes with few other interaction partners will have higher Eigenvector Centrality scores .
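A generic version of this computation is sketched below: power iteration for the leading left eigenvector of a row-stochastic transition matrix built from the interaction matrix. This omits the redistribution weights discussed above (i.e., it is the undamped special case) and assumes every node sends at least one interaction, so it should be read as an illustration of the idea rather than the authors' exact scheme.

```python
import numpy as np

def eigenvector_centrality(W, tol=1e-10, max_iter=10_000):
    """Power iteration for the leading left eigenvector of the row-stochastic
    transition matrix M built from interaction matrix W. Assumes every row of
    W has positive sum (every node sends at least one interaction)."""
    M = W / W.sum(axis=1, keepdims=True)        # M[i, j]: walk probability i -> j
    x = np.full(W.shape[0], 1.0 / W.shape[0])   # uniform starting vector
    for _ in range(max_iter):
        x_new = x @ M                           # one step of the random walk
        x_new /= x_new.sum()
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x

W = np.array([[0., 5., 1.],
              [3., 0., 4.],
              [1., 2., 0.]])                    # toy irreducible interaction matrix
print(eigenvector_centrality(W))
```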
Much of the research on consensus aims to determine how a group comes to a single decision , such as which direction to move , who should be president , etc . 30–33 ., In this study our aim is somewhat different ., Our goal is to quantify how much consensus there is in the group ( e . g . network ) about the state of a node ( is it on or off , is it capable of performing a target function , etc . ) ., Hence the interpretation of consensus turns on the meaning of the edges in the network , represented by the data in the interaction matrix , as much as on the algorithm applied to the matrix to compute the consensus scores for the nodes ., It is therefore critical that the interaction data used to construct the matrix be chosen carefully ., Below we provide basic details about the three test systems – a primate status communication network , a collaboration network , and a functional gene linkage network ., We provide the biological interpretation of the edges in the networks and of node state , and we introduce the functional data used to empirically evaluate the algorithms' performance ., We note that the mechanistic basis for consensus as an important network measurement is best understood for the primate communication network , and this fact is reflected in that section's length ., In Table 1 we provide interpretations of consensus scores for several different kinds of networks in addition to those we describe below ., We are using a primate communication network in a large captive social group of pigtailed macaques ( Macaca nemestrina ) to measure social power , operationalized as group consensus about individual ability to win fights ., We are using a collaboration network to measure reputation , defined operationally as group consensus about whether to work with a given scientist ., We are using a network of functional linkages between genes to measure gene importance , defined operationally as group consensus about whether to be functionally linked with a given gene ( this “decision” could be made in either developmental or evolutionary time ) ., Each algorithm produces as output a vector of scores for nodes in the network ., In Table S1 in Text S1 we present the correlations between these outputs for each network ., The distribution of consensus scores for each of the networks , according to each of the algorithms considered in this paper , is presented in Figures S2 , S3 and S4 ., Within each data set , most algorithms suggest roughly similar distributions ., In the case of the signaling network , these distributions look heavy-tailed , which is consistent with the distributions of functional data ., Additionally , for the signaling network , two of the algorithms – Shannon Consensus and Weighted Simple Consensus – produce distributions that are not significantly different from normal after log transform , indicating they are consistent with the log-normal distribution ., Our predictor variables are the social power indices produced by the consensus algorithms ., Dependent variables include: support solicited – requests for support that a third party to a fight receives from fight participants ( should be positively correlated with power ) ; intervention cost – operationalized as the intensity of aggression received by an intervener in response to its interventions into fights among group members ( should be negatively correlated with power ) ; and intensity of aggression used by an intervener during its intervention ( should be negatively correlated ) ( these variables are defined and the data collection methods are described in Section Methods and in 14 ) ., These dependent variables are corrected for underlying variation in tendency to fight ( see Section Methods ) ., All algorithms are significantly correlated with the dependent variables ( Table 4 ) ., The best predictor of the dependent variables is Weighted Simple Consensus , followed closely by Shannon Consensus and Eigenvector Centrality .
The worst predictors are David's Score , the Borda Count , and Simple Consensus ., The most highly predictive algorithms have very similar values , so it is hard to differentiate between them based on their predictive power alone ., However , as we discuss below , these algorithms vary in their sensitivity to source biases and in their computational and cognitive complexity ., In this social system , there are a few individuals in the tail of the power distribution who are disproportionately powerful 14 , 17 ., This is borne out in our data , as the correlation between the algorithm scores and the dependent variables is substantially higher for the top quartiles than the bottom quartiles ( Figure 1 ) ., Our predictor variables are the reputation indices produced by the consensus algorithms ., The dependent variable is the total amount of grant money awarded to a PI or Co-PI by the National Science Foundation ( see Section Methods ) ., Of all the algorithms we consider , only Eigenvector Centrality is significantly correlated with this external variable ( Table 4 ) ., Two reasons , one mathematical and one sociological , appear to account for this result ., First , Eigenvector Centrality can distinguish between nodes that have identical local neighborhoods ., In-degree can only take integer values and there is presumably an upper bound on the number of possible collaborators given time and other constraints ., In this network the observed in-degrees are bounded , so a node's in-degree can take only a limited number of distinct values ., As Eigenvector Centrality can take any value in a continuous range , it can give different scores to nodes with the same in-degree ., In other words , Eigenvector Centrality uses global information to differentiate between nodes that are locally identical ., This effect is not as noticeable in the subordination signaling network because there are far fewer individuals in the signaling network and therefore less degeneracy in the in-degree distribution ., Second , it is perhaps not surprising that for this kind of network Eigenvector Centrality is more predictive of the dependent variable than the breadth algorithms – although physicists involved in the process of awarding grants to others are expected to recuse themselves when confronted with an application from one of their own collaborators , they may be more likely to award grants to collaborators of their collaborators ., Therefore , having many collaborators may not be that helpful in receiving grant money , but scientists whose collaborators have many collaborators may have an advantage ., Our predictor variables are the importance indices produced by the consensus algorithms ., The dependent variables are the viability and competitive fitness of organisms with mutated versions of the gene ., For each of our algorithms , the importance scores for essential genes are significantly higher than the importance scores for non-essential genes ( Table 4 ) ., Similarly , for each algorithm , the importance scores are significantly negatively correlated with the competitive fitness variable ( Table 4 ) ., The most predictive algorithms are , in order , Eigenvector Centrality , Simple Consensus , the Borda count , and Shannon Consensus ., In differentiating between essential and non-essential genes , Eigenvector Centrality is marginally better than the other algorithms ., In predicting competitive fitness , the four most predictive algorithms perform equally well ., With both external variables , the test statistics are noticeably smaller for the Graph Laplacian than for the other algorithms .
As we showed above , on unweighted networks the Graph Laplacian score of a node is determined by its in-degree and the in-degrees of its neighbors ., On both the collaboration and linkage networks , nodes with high in-degree tend to interact with many other highly connected nodes ., For both networks , we find high correlations between in-degree and the sum of the in-degrees of a node's neighbors ( see Table S1 in Text S1 ) ., Nodes that have many interactions with other highly connected nodes receive low Graph Laplacian scores , a counterintuitive result that suggests the Graph Laplacian is not a robust measure of consensus ., We summarize the predictive performance of the algorithms on the three data sets in Table 5 ., An important question in evaluating the performance of a consensus algorithm is how sensitive the algorithm is to deficiencies in the data in the interaction matrix ., Aspects of this question have been addressed in previous work ., Ghoshal et al . 57 showed that in scale-free networks of sufficient size , if all edges in the network are shuffled but the in-degrees maintained , the ranking of the nodes according to eigenvector centrality is not severely perturbed ., This type of shuffle allows the researcher to simulate the effects of missing or noisy data in the interaction matrix on an algorithm's output ., We are particularly interested in the effects on the algorithms' output of nodes systematically making errors in their assessments of the states of other nodes , or of nodes attempting to manipulate social structure by “loading the deck” , that is , inflating the consensus scores of nodes by , for example , manipulating the weighted degree distribution ., ( One way to manipulate the weighted degree distribution is to inflate a node's weighted in-degree by sending many signals . ) ., Capturing this kind of “deficiency , ” which we call source bias , requires a different kind of shuffle ., First , we measure in our interaction matrices the correlation between a node's Shannon entropy ( as defined in Sec . Shannon consensus ) and the total frequency of interactions it receives ( weighted in-degree or in-degree ) ( see Table S1 in Text S1 ) ., If entropy and in-degree were poorly correlated , we could independently evaluate the effects of receiving many interactions from those of receiving interactions from many individuals ., However , this is not the case on the data sets we consider ., We break the correlation by systematically shuffling the data in the matrices such that we create matrices with strong source biases but conserve the total number of interactions ( e . g . signals ) received .
We now have two matrices – the original , unshuffled matrix and the shuffled matrix ., We then compute consensus scores for the nodes using the unshuffled and shuffled matrices and assess how much the rank order changes under the shuffle ., More specifically , for a given pair of interaction partners in the network , we construct a matrix in which the target node receives all of its interactions from the single partner node ., If the original network is directed , we hold constant the out-edges of the target node in addition to holding constant its weighted in-degree ., If the original network is undirected , we maintain the symmetry ., The subordination signaling network is small enough so that we can perform this shuffle for every pair of partners ., However , the collaboration network and the functional linkage networks are too large to exhaust every pair of partners , so we choose a subset of the nodes that are also represented in the functional data sets ., Partner nodes are chosen at random from the target node's neighbors ., An algorithm is said to be sensitive to source bias if the rank order computed from the shuffled matrix differs from the rank order computed from the original matrix ., Large changes in the rank order indicate that the test algorithm tends to give higher scores to nodes that interact with many neighbors than to nodes that interact strongly with just one other node , and are an indication that the algorithm is sensitive to source biases ., We find that Shannon Consensus and the Graph Laplacian tend to be quite sensitive to source biases ( Figure 2 ) ., This is expected , as both depend on the entropy of the receiving distribution , which is zero by design in the shuffled matrices ., By definition , in-degree is maximally insensitive to source bias as we hold it constant in our shuffle ., Eigenvector Centrality is also fairly insensitive , but the explanation why is initially counter-intuitive ., As can be seen in Figure 2 , Eigenvector Centrality appears to be particularly insensitive to the shuffle for the subordination signaling network , as the rank order for the shuffled and unshuffled matrices for that network is very similar ., The reason for this is that in the subordination signaling network individuals who receive many signals receive some of these signals from partners who themselves receive many signals ., In addition , individuals who receive many signals send very few signals ., Hence there is information about breadth encoded in the second and third order connections ( and so forth ) in the network ., Even if we shuffle the matrix so that all of an individual's signals come from a single other node , as long as we hold constant the out-edges of the target , the in-edges to the target are likely to be from an individual who itself receives relatively many signals ., Eigenvector Centrality , by emphasizing paths through the network , takes these second and third order connections into account ., It is consequently likely to get the rank order right , even after we reduce the diversity or breadth in the first order or direct connections , as long as the second and third order connections in the shuffled matrices encode information about the first order connections in the unshuffled matrices ., See Text S1 , Section Sensitivity of eigenvector centrality on transitive networks for more discussion of the relationship between transitivity and sensitivity to source bias ., A sketch of this shuffle and the rank-order comparison is given below .
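The following Python sketch makes the shuffle explicit for a directed, weighted matrix: all of a target node's incoming weight is moved onto a single partner (driving the entropy of its in-distribution to zero while conserving weighted in-degree and the target's out-edges), and sensitivity is summarized as the Spearman correlation between the rank orders before and after. The score function, matrix, and node choices are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def source_bias_shuffle(W, target, partner):
    """Concentrate all of `target`'s incoming weight onto `partner`, holding
    weighted in-degree fixed; the target's out-edges (its row) are untouched."""
    Wb = W.copy().astype(float)
    total_in = Wb[:, target].sum()
    Wb[:, target] = 0.0
    Wb[partner, target] = total_in              # entropy of this column is now 0
    return Wb

def rank_order_change(score_fn, W, target, partner):
    """Spearman correlation between rank orders before and after the shuffle;
    low correlation means the measure is sensitive to source bias."""
    before = score_fn(W)
    after = score_fn(source_bias_shuffle(W, target, partner))
    return spearmanr(before, after).correlation

W = np.array([[0., 5., 1.],
              [3., 0., 4.],
              [1., 2., 0.]])
in_degree = lambda M: M.sum(axis=0)             # maximally insensitive baseline
print(rank_order_change(in_degree, W, target=2, partner=0))
```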
This suggests that measures of consensus that emphasize depth – paths through the networks – also implicitly measure consensus breadth when there is some degree of either assortativity or transitivity in the network , and work well because of these features ., In the absence of transitivity , or when transitivity is very low , depth measures like Eigenvector Centrality should not perform well as measures of consensus , unless , as in the case of the Graph Laplacian , they explicitly incorporate Shannon information ., In Text S1 we provide details on an additional analysis we performed to evaluate algorithm sensitivity to source bias ., The results reported in this paper and elsewhere ( 17 , see also 60–62 ) suggest that at least in social networks nodes may be making strategic decisions about social interactions using knowledge of how they are perceived by the group ., For example , the individuals in the primate study group appear to use estimates of their relative power to make decisions about whether to intervene in conflicts 17 ., This requires that they have some knowledge of moments or properties of the distribution of power ( e . g . approximate variance ) ., An important question is how individuals extract this information 22 , 63 ., More generally , what do animals know about social structure and collective dynamics , how precise are their estimates , and what heuristics might they use to make calculations 64 ?, It would be useful , for example , to be able to quantify the algorithmic complexity of each algorithm so that we could rank calculations by some measure of computational difficulty ( see also 65 ) ., Ideally , we would also like to know how sensitive each algorithm is to the input data ( e . g . is the exact number of signals received by an individual critical , or will a rough estimate do ? ) for the output distribution of power to be a useful predictor of out-of-sample data ., Addressing this robustness question would help to determine how much room there is to relax the mathematical requirements of a given algorithm , and find a heuristic simple enough for this study species ., Ranking the algorithms by their algorithmic complexity is a long way off , if achievable at all ., As is illustrated in Figure S1 , we can only crudely rank the algorithms given what we know about the minimum number of steps each requires in order to estimate critical quantities from an empirical perspective – the absolute power of an individual , the relative power of an individual ( e . g . where it falls in a power distribution of a given type ) , and the moments of the power distribution ., In most circumstances it seems unlikely that we , or the animals , would be interested in an isolated individual's score ., This is because it is not her power value that is important , but rather where an estimated value falls in a distribution of power scores ., Yet calculation of absolute and relative power require different computational approaches , and a preliminary assessment suggests that the difficulty of these steps varies across algorithms ., We discuss these issues in greater detail in Text S1 , Section Computational complexity ., In addition to approaching the problem of complexity mathematically , we can approach it empirically by asking how sensitive the algorithms are to imperfect information in the input matrices ., For example , perhaps the individuals in our system cannot discriminate based on identity and can only remember classes of individuals ( e . g . male or female , or matriline x or y , etc . ) , signals or signalers , or an interaction history of limited length .
By coarse-graining the input data , it is in principle possible to test how sensitive the algorithms are to this kind of imperfect information resulting from various cognitive or spatial constraints ., Aspects of this question have been addressed in previous work , as discussed in Section Sensitivity of the algorithms to source biases ., However , many questions remain open for future work ., If node function in many different systems is collectively encoded in interaction networks and this information is decodable by quantifying the agreement in network connectivity patterns , this would suggest that consensus formation is at the core of sociality ., Consider the primate society used as a model system in this paper ., Power in our primate study group is a critical social variable ., However , power is not a simple variable ., The distribution of power does not map directly onto a distribution of body sizes or even a distribution of fighting abilities ., Rather it consolidates as multiple interacting individuals learn about fighting abilities and signal about this to reduce social uncertainty 14 , 19 , 21 , 22 , 63 ., When the statistics used to operationalize an aggregate social property , like power structure , are more than simple counts over strategies , and when the inputs are not simply individual traits but network data , we need to worry explicitly about the mappings between behavioral strategies and decision-making at the microscopic level and social organization 65 – whether we are working with the social organization of primates or of cells forming a tissue ., A central question becomes: how do strategies get collectively combined by multiple components to produce macroscopic social properties ?, How much degeneracy characterizes this mapping ?, Once we can describe the developmental dynamics giving rise to an aggregate social property , we will be in a position to study how the social processes producing power and other kinds of social structure have evolved in a wide range of systems ., The data set , collected by J . C . Flack , is from a large , captive , breeding group of pigtailed macaques that was housed at the Yerkes National Primate Research Center in Lawrenceville , Georgia ., The physicist collaboration network was collected by Mark Newman , as described in 38 , and is available at http://www-personal.umich.edu/~mejn/netdata/ ., The data were initially collected from the Los Alamos e-Print Archive , now the arXiv at http://arxiv.org ., Since initial publication in 2001 , the network has been updated with collaborations from the arXiv through 2005 ., The scientists represented in the network collaborated between January 1995 and March 2005 ., The National Science Foundation makes the data about awarded grants publicly available at http://www.nsf.gov/awards/about.jsp ., For each scientist in the collaboration network , we searched this database for any grant concerning condensed matter physics on which the scientist was one of the investigators ., If the scientist was awarded more than one grant , we summed the total amount of the grants awarded to him or her ., Grant data were available for a subset of the scientists in the collaboration network ., The grants were awarded between September 2008 and September 2012 , with one grant starting in September 2004 ., The functional linkage network was constructed by Lee et al . , as described in 56 and 51 , and is available at http://www.yeastnet.org/ .
Functional linkages between genes are associations that “represent functional constraints satisfied by the cell during the course of the experiments” 56 ., Evidence of a functional linkage between two genes was provided by mRNA coexpression levels , the results of protein interaction experiments , phylogenetic profiles , and the co-occurrence of the two genes in a scientific paper 51 , 56 ., Lee et al . combined these data to calculate the log-likelihood that two genes are involved in a similar function ., In our analyses , we say an edge is present if its log-likelihood score exceeds a fixed cutoff and is absent otherwise ., The nodes of the resulting network are the genes that pass this criterion ., The Saccharomyces Genome Database maintains information about the phenotypic effects of genes in the yeast genome at www.yeastgenome.org ., Two phenotypic effects reflect a gene's overall importance ., One measure is the viability of organisms with a mutant version of the gene and a second measure is the competitive fitness of organisms with a mutant version of the gene ., The viability measure is binary: a mutation to a gene can lead to either a viable or an inviable organism ., An inviable organism is one that is unable to grow under standard growth conditions for S . cerevisiae , defined as glucose-containing rich medium ( YPD ) at the standard growth temperature ., A gene's competitive fitness | Introduction, Results, Discussion, Methods | Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function ., A number of algorithms have been developed to measure this variation ., These algorithms have proven useful for applications that require assigning scores to individual nodes – from ranking websites to determining critical species in ecosystems – yet the mechanistic basis for why they produce good rankings remains poorly understood ., We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function ., The algorithms capture consensus either by taking into account the number of a target node's direct connections and , when the edges are weighted , the uniformity of its weighted in-degree distribution ( breadth ) , or by measuring net flow into a target node ( depth ) ., Using data from communication , social , and biological networks we find that how an algorithm measures consensus – through breadth or depth – impacts its ability to correctly score nodes ., We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes ., Our results indicate that the breadth algorithms , which are derived from information theory , correctly score nodes ( assessed using independent data ) and are robust to errors ., However , in cases where nodes “form opinions” about other nodes using indirect information , like reputation , depth algorithms , like Eigenvector Centrality , are required ., One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative ., In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth ., Finally , we discuss the algorithms' cognitive and computational demands ., This is an important consideration in systems in which individuals use the collective opinions of others to make decisions .
| Decision making in complex societies requires that individuals be aware of the group's collective opinions about themselves and their peers ., In previous work , social power , defined as the consensus about an individual's ability to win fights , was shown to affect decisions about conflict intervention ., We develop methods for measuring the consensus in a group about individuals' states , and extend our analyses to genetic and cultural networks ., Our results indicate that breadth algorithms , which measure consensus by taking into account the number and uniformity of an individual's direct connections , correctly predict an individual's function even when some of the group members have erred in their assessments ., However , in cases where nodes “form opinions” about other nodes using indirect information , algorithms that measure the depth of consensus , like Eigenvector Centrality , are required ., One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative ., We also discuss the algorithms' cognitive and computational demands ., These are important considerations in systems in which individuals use the collective opinions of others to make decisions ., Finally , we discuss the implications for the emergence of social structure . | complex systems, mathematics, ecology, applied mathematics, biology, computational biology, behavioral ecology | null
187 | journal.pcbi.1001012 | 2010 | Treatment-Mediated Alterations in HIV Fitness Preserve CD4+ T Cell Counts but Have Minimal Effects on Viral Load | Antiretroviral therapy has been used to successfully treat HIV-1 infection ., However , a subset of patients develops drug resistance followed by an observable increase in plasma HIV viral load ., This “virological failure” usually triggers a change in the drug regimen ., Here we examine a situation in which patients had developed resistance to most common drugs and a novel agent , enfuvirtide , was added to their failing drug regimen ., When resistance to enfuvirtide developed , the use of this agent was discontinued in the hope that drug-sensitive virus would outcompete the resistant virus and enfuvirtide could be given again ., Despite the fact that resistance developed when enfuvirtide was re-administered and viral loads could not be suppressed , CD4+ T cell counts were preserved or increased ., Observing increasing CD4+ T cell counts without viral suppression is intriguing and suggests that issues of viral fitness may play a role ., Fitness costs have been associated with drug resistance not only to enfuvirtide but also to other drug classes 1–6 ., Further , despite virologic failure due to the emergence of drug resistance , continued treatment that imposes selective pressure on drug-sensitive virus and causes outgrowth of resistant HIV is often associated with benefits such as higher sustained CD4+ T cell counts and reduction in the risk of morbidity and mortality 2–5 ., To uncover the nature of the CD4+ T cell increase and to determine a general principle that may be useful in developing treatment strategies in the face of drug resistance , we performed a detailed viral kinetic analysis of a set of patients treated with enfuvirtide in which longitudinal measurements of drug-sensitive and drug-resistant viral levels , as well as CD4 counts , were available ., Enfuvirtide ( ENF ) , formerly called T-20 , is a 36 amino acid synthetic peptide that binds to the HR-1 region of the HIV-1 gp41 molecule , thereby preventing fusion of the viral membrane with the target cell membrane 7 ., It is the first FDA-approved HIV-1 fusion inhibitor 8 ., As ENF is expensive and must be administered parenterally , it is often reserved for heavily pretreated patients with limited therapeutic options 9–13 ., ENF acts extracellularly prior to viral entry ., This feature provides a number of benefits , such as less susceptibility to cellular efflux transporters that lower the effective intracellular concentrations of other classes of antiretroviral drugs and little or no drug-drug interactions with drugs metabolized by the CYP 450 or N-acetyltransferase route 14 ., As with other antiviral drugs , in patients treated with ENF , the high replication rate of HIV and the low fidelity of HIV reverse transcriptase can lead to the development of drug resistance 14 ., Resistance to ENF occurs due to amino acid substitutions within the HR-1 region , at amino acids 36–45 of HIV-1 gp41 , with G36D , G36S , G36V , G36E , V38A , V38M , V38E , Q40H , N42T , and N43D being the most common ENF-resistance mutations 12 , 13 , 15 ., These mutations result in significantly reduced binding of ENF to HR-1 16 ., Since ENF is expensive and poorly tolerated , many individuals interrupt this drug once virologic failure is confirmed ., In a single-arm prospective study of individuals exhibiting virologic failure on ENF , selective interruption of ENF was not associated with any appreciable increase in HIV RNA levels , suggesting that the drug had only limited residual activity and hence its use during failure may not be warranted 11 , 17 .
Observational data from other groups , however , have suggested that there may be a CD4+ T cell benefit associated with certain ENF-associated mutations 18 ., These data suggest that despite virological failure the drug may have continued benefit due to alterations in the virus's pathogenic effects ., Interruption of ENF in individuals with ENF-resistance is associated with a rapid decay in the resistant variant 11 , 13 , 17 ., The reason resistant virus decays in the absence of drug is not fully understood ., Although the rebound of archived , more “fit” wild-type virus is often cited as the major mechanism whereby HIV resistance decays in the absence of therapy 13 , 17 , ongoing evolution within the envelope gene and the eventual selection of the wild-type virus may also account for the loss of ENF resistance when this drug is interrupted 9 ., Despite marked differences in fitness of drug-sensitive and drug-resistant viruses and evidence of ongoing viral evolution , plasma HIV-1 RNA levels remain almost constant during ENF interruption 13 ., This apparent paradox suggests that viral fitness may not be a major determinant of the steady-state level of viremia ., To more fully understand the role of viral fitness as well as other parameters determining the dynamics of HIV-1 during ENF interruption , we use mathematical models to study the competition between ENF-sensitive and ENF-resistant viruses after the interruption of ENF and during subsequent re-administration ., We consider only the V38A mutant because this single substitution in HIV-1 gp41 is the most frequently observed in drug-resistant virus 19 and data on the population size of mutants with V38A are available 13 ., We estimate the rate of forward and backward mutations , the replication capacity of both drug-sensitive and drug-resistant viruses , and the efficacy of ENF against viral fusion when it is re-administered after interruption ., We also examine the effect of target cell level on the dynamics and steady states of drug-sensitive and drug-resistant viruses during ENF interruption and subsequent re-administration ., Lastly , we discuss virus population turnover and plasma viral RNA levels during the presence and absence of the drug ., We obtained wild-type and V38A mutant viral load and CD4+ T cell data from the Department of Medicine , University of California-San Francisco , CA , USA , San Francisco General Hospital , San Francisco , CA , USA , and the Section of Retroviral Therapeutics , Brigham and Women's Hospital and Division of AIDS , Harvard Medical School , Boston , MA , USA ., Viral load and CD4+ data were obtained for three HIV-1 infected subjects ( P1 , P2 , and P3 ) during ENF interruption who continued to receive the other drugs in their antiretroviral regimen ., Before ENF interruption , subjects P1 , P2 and P3 were treated with ENF for 27 , 33 and 39 weeks , respectively , and each of them had the V38A mutation as the predominant virus population ( more than 85% frequency ) ., Viral load and CD4+ data were also obtained during a subsequent 4-week re-administration of ENF after interruptions of 76 , 68 and 38 weeks , respectively ., For subject P3 , the data were also collected during a second interruption of ENF ., Therefore , there were two data sets during ENF interruption for subject P3 ., A schematic diagram of the model is shown in Fig . 1 .
The model contains five variables: uninfected target cells , T , cells infected by ENF-sensitive virus , Is , cells infected by ENF-resistant virus , Ir , ENF-sensitive virus , Vs , and ENF-resistant virus , Vr ., The model assumes that target cells are produced at a constant rate , λ , and die at rate dT ., ENF-sensitive virus infects target cells to produce infected cells , Is , at rate βsTVs , among which a fraction , μsβsTVs , become ENF-resistant during the process of reverse transcription of viral RNA to DNA due to mutation at rate μs ., Similarly , infection by ENF-resistant virus produces infected cells , Ir , at rate βrTVr , with a fraction μrβrTVr undergoing backward mutation to the drug-sensitive strain at rate μr ., Cells infected by ENF-sensitive and ENF-resistant virus produce new virions at rates psIs and prIr , and die at rates δIs and δIr , respectively ., Both viruses are cleared at the same rate c per virion ., Whether the V38A mutation in gp41 affects viral production remains unclear ., For simplicity , we assume ps = pr , and describe the resistance-associated fitness loss only by a reduced infectivity rate , i . e . , βr = ( 1−α ) βs , where the fitness cost of the mutant virus , α , satisfies 0≤α≤1 ., ENF is a fusion inhibitor and reduces infection of target cells by free virus ., We assume εs and εr are the efficacies of ENF against ENF-sensitive and ENF-resistant virus , respectively , with 0≤εs , εr≤1 ., In the patient data we analyze , the populations of both drug-resistant and drug-sensitive virus always remain high ( above 2.8 log10 HIV RNA copies/ml ) ., Thus , stochastic effects would not be significant and we formulate the model as a deterministic model – a standard two-strain viral dynamic model – similar to the ones in 20 , 21 ., The model is described by the following differential equations:
dT/dt = λ − dT − ( 1−εs ) βsTVs − ( 1−εr ) βrTVr ( 1 )
dIs/dt = ( 1−μs ) ( 1−εs ) βsTVs + μr ( 1−εr ) βrTVr − δIs ( 2 )
dIr/dt = ( 1−μr ) ( 1−εr ) βrTVr + μs ( 1−εs ) βsTVs − δIr ( 3 )
dVs/dt = psIs − cVs ( 4 )
dVr/dt = prIr − cVr ( 5 )
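For readers who want to experiment with the model, the following minimal Python sketch integrates Eqs. (1)–(5) using the point estimates reported later in the text (Table 1). The initial conditions are illustrative placeholders rather than patient data, with resistant virus set roughly 100-fold above sensitive virus, as at the start of interruption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Point estimates (Table 1); ENF absent during interruption, so eps_s = eps_r = 0.
lam, d, c, delta = 790.0, 0.01, 23.0, 0.29      # cells/ml/day, 1/day, 1/day, 1/day
beta_s, alpha = 7.1e-7, 0.17                    # ml/day; fitness cost of resistance
mu_s, mu_r = 2.24e-5, 1.73e-5                   # forward / backward mutation rates
p = 3628.0                                      # virions/cell/day (ps = pr)
eps_s, eps_r = 0.0, 0.0
beta_r = (1.0 - alpha) * beta_s

def model(t, y):
    T, Is, Ir, Vs, Vr = y
    inf_s = (1.0 - eps_s) * beta_s * T * Vs     # new infections by sensitive virus
    inf_r = (1.0 - eps_r) * beta_r * T * Vr     # new infections by resistant virus
    dT  = lam - d * T - inf_s - inf_r
    dIs = (1.0 - mu_s) * inf_s + mu_r * inf_r - delta * Is
    dIr = (1.0 - mu_r) * inf_r + mu_s * inf_s - delta * Ir
    dVs = p * Is - c * Vs
    dVr = p * Ir - c * Vr
    return [dT, dIs, dIr, dVs, dVr]

Vs0, Vr0 = 4.0e3, 4.0e5                          # illustrative: resistant ~100-fold higher
y0 = [2.6e3, (c / p) * Vs0, (c / p) * Vr0, Vs0, Vr0]   # Is(0), Ir(0) set as in the text
sol = solve_ivp(model, (0.0, 180.0), y0, dense_output=True)
t = np.linspace(0.0, 180.0, 181)
T, Is, Ir, Vs, Vr = sol.sol(t)
print("log10 total viral load at day 180:", np.log10(Vs[-1] + Vr[-1]))
```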
As measured by Ki-67 antigen expression , only a small percentage of CD4+ T cells in peripheral blood appear to be activated into proliferation and hence are preferred targets for HIV-1 infection 22 ., Therefore , we take only a fraction of the total CD4 count , i . e . the activated cells , as targets for HIV-1 infection and estimate this fraction ., The total CD4+ T cell count is assumed to be given by ( T+Is+Ir ) /a , where a denotes the fraction of CD4+ T cells that are activated ., In principle , a could be time-varying or in particular depend on the CD4+ T cell count 22 ., However , the CD4 count of the patients in this study always remains below 200/µl , and according to the relationship between CD4 count and activated cell percentage given in 22 , a 10-fold change in CD4 count ( from 20 to 200/µl ) causes only a minor change in activation percentage ( from 8.6% to 10.4% ) ., Therefore , for our study we felt it reasonable to assume a is constant ., We note that in the study we analyze 13 , ENF is given in combination with other drugs , so the infection rates βs , βr and the virus production rates ps , pr that we estimate include the effects of the other drugs in the background regimen ., However , since the background regimen was failing to suppress HIV replication , these effects may be minimal ., Moreover , the data have taken only V38A mutants into account , with other mutants being included in the “wild-type” ., Therefore ENF efficacy against wild-type , εs , in our model also incorporates the possible reduction in efficacy due to other mutants included in the wild-type ., Further , loss of the V38A mutation at rate μr can lead to any of a variety of viral variants that we include in the drug-sensitive population ., Lastly , virus variants carrying the V38A mutation may also carry other mutations , such as compensatory mutations or other drug resistance mutations , which may affect the fitness of the drug-resistant population as well as its level of drug resistance ., We note that there is loss of some free virus due to the infection of target cells , as virus must enter a cell in order to infect ., To incorporate this effect , one can add the terms − ( 1−εs ) βsTVs and − ( 1−εr ) βrTVr to Eqs . ( 4 ) and ( 5 ) , respectively ., For the measured range of T in the subjects considered here , and the estimates of βs and βr determined below , βsT and βrT are <0.05 day−1 , which is ∼500 times lower than the viral clearance rate c ( 23 day−1 ) , indicating that virion loss due to infection will have a negligible effect on the viral dynamics compared to the term −cV ., We confirmed this by fitting the model with the terms − ( 1−εs ) βsTVs and − ( 1−εr ) βrTVr in Eqs . ( 4 ) and ( 5 ) , respectively , in which we found almost no change in parameter estimates ., Therefore , we neglected virion loss due to infection and left only the viral clearance term ( −cV ) in the V equations ., The dynamics of free virus is typically fast in comparison with that of infected cells 23–25 ., Therefore , we assume a quasi-steady state , which from Eqs . ( 4 ) and ( 5 ) provides Vs = ( ps/c ) Is and Vr = ( pr/c ) Ir ., This simplifies the model , leaving only equations for T , Is and Ir ., Further , we set Is ( 0 ) = ( c/ps ) Vs ( 0 ) and Ir ( 0 ) = ( c/pr ) Vr ( 0 ) for data fitting as well as all simulations , where Vs ( 0 ) and Vr ( 0 ) are determined by direct measurement at the start of interruption or the start of ENF re-administration ., As measured by Mohri et al . 26 , we take the uninfected CD4+ T cell death rate d = 0.01 day−1 ., Recent estimates show that the virion clearance rate constant , c , varies between 9.1 day−1 and 36 day−1 , with an average of 23 day−1 25 , 27 ., Therefore , we take c = 23 day−1 .
During ENF interruption , we estimate the parameters λ ( target cell recruitment rate ) , βs ( drug-sensitive virus infection rate ) , μs ( forward mutation rate ) , μr ( backward mutation rate ) , α ( fitness cost of ENF-resistance ) , ps ( production rate of drug-sensitive virus ) , δ ( infected cell death rate ) , T0 ( initial uninfected target cell concentration ) and a ( fraction of CD4+ T cells that are activated ) by fitting the model to the ENF-sensitive viral load , the ENF-resistant viral load and the CD4 count data simultaneously for each patient ., Since fewer data points are available during re-administration of ENF , we fix some parameters at the values obtained by estimation during ENF interruption , and only estimate εs ( ENF efficacy against the sensitive strain ) , εr ( ENF efficacy against the resistant strain ) , λ , T0 and a ., We also fitted the data during ENF interruption and ENF re-administration allowing the initial concentrations of drug-sensitive and drug-resistant viruses to be free parameters , but the fit could not be improved ., We solved Eqs . ( 1 ) – ( 5 ) numerically using the Runge-Kutta 4 algorithm in Berkeley Madonna 28 ., We also used it to obtain the best-fit parameters via a nonlinear least squares regression method ., The predicted log10 values of the ENF-sensitive and ENF-resistant viral loads and the CD4 count for each patient were fit to the corresponding log-transformed data ., Of note , to avoid the difficulty of assigning different weights to the viral loads and CD4 counts in the objective function being minimized , and to give equal importance to all the widely varying values in the data set , we fitted the data on the log scale rather than the linear scale for both viral loads and CD4 count ., Finally , for each best-fit parameter estimate , we provide a 95% confidence interval ( CI ) using 200 bootstrap replicates 29 , which we performed in MATLAB .
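The estimation procedure can be sketched as follows. This is a schematic Python analogue of the Berkeley Madonna fit, not the authors' implementation: it uses the reduced quasi-steady-state model with mutation neglected, a subset of the parameters, and synthetic data generated for illustration; residuals are taken on the log10 scale for the two viral loads and the CD4 count, as described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

c, d, a = 23.0, 0.01, 0.11                 # fixed: clearance, T-cell death, activated fraction

def predict(theta, t_eval, y0):
    """log10 of sensitive RNA, resistant RNA, and CD4 count for parameters theta."""
    lam, beta_s, alpha, p, delta = theta
    beta_r = (1.0 - alpha) * beta_s
    def rhs(t, y):                         # reduced model: V = (p/c) I, mutation neglected
        T, Is, Ir = y
        Vs, Vr = (p / c) * Is, (p / c) * Ir
        return [lam - d * T - beta_s * T * Vs - beta_r * T * Vr,
                beta_s * T * Vs - delta * Is,
                beta_r * T * Vr - delta * Ir]
    T, Is, Ir = solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0, t_eval=t_eval).y
    return np.log10(np.vstack([(p / c) * Is, (p / c) * Ir, (T + Is + Ir) / a]))

theta_true = np.array([790.0, 7.1e-7, 0.17, 3628.0, 0.29])   # Table 1 point estimates
t_obs = np.array([0.0, 7.0, 14.0, 28.0, 56.0, 84.0])         # illustrative sampling days
y0 = [2600.0, (c / 3628.0) * 4.0e3, (c / 3628.0) * 4.0e5]    # illustrative initial state
rng = np.random.default_rng(1)
data = predict(theta_true, t_obs, y0) + rng.normal(0.0, 0.1, (3, t_obs.size))

def residuals(theta):
    return (predict(theta, t_obs, y0) - data).ravel()

bounds = ([0.0] * 5, [np.inf, np.inf, 1.0, np.inf, np.inf])  # keep 0 <= alpha <= 1
fit = least_squares(residuals, x0=theta_true * 1.5, bounds=bounds)
print("estimated (lam, beta_s, alpha, p, delta):", fit.x)
```

In the actual analysis, bootstrap confidence intervals would be obtained by resampling the residuals and repeating this fit 200 times.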
The estimated viral dynamic parameters during ENF interruption , along with their mean and sample standard deviation and their 95% confidence intervals , are summarized in Table 1 and Table 2 , respectively ., Using the estimated parameters , we found the predictions of the model agree well with the data for each of the study participants ( Fig . 2 ) ., All the parameters are approximately the same for the two ENF interruptions in P3 , suggesting that the viral dynamic parameters remain stable over time ., We estimated the rates of forward and backward mutation as 2.24±0.32×10−5 and 1.73±0.30×10−5 , respectively ., Even though the backward mutation rate is slightly lower than the forward mutation rate , early after ENF interruption the ENF-resistant virus population is significantly larger than the ENF-sensitive virus population , and consequently the amount of backward mutation early post ENF interruption is usually higher than the amount of forward mutation ., Although there is continued evolution in gp41 after ENF interruption , our results show that the rate of ongoing evolution , including backward mutation accumulation , is not sufficient to explain the rapid waning of ENF-resistant virus ., For example , during the first week ( month ) post interruption , the contribution of ongoing evolution and backward mutation to the loss of cells carrying a drug-resistant proviral genome is only about 0.2 ( 0.4 ) cells per ml , which corresponds to the loss of 26 ( 70 ) drug-resistant virions per ml per week ( month ) ., Since the contribution of these de novo mutations is small , we also fitted the data using the model without de novo mutation , i . e . , μs = μr = 0 , and found that the changes in estimated parameter values lie within a range of 0–6% ., As the loss of resistant virus due to backward mutation is negligible , we find the fitness cost , i . e . , the reduction of the infectivity of the resistant virus compared to the wild-type virus , plays a more important role in the decay of ENF-resistant virus and the increase of ENF-sensitive virus ., Fitting our model to the data suggests that ENF-resistant virus is 17±3% less fit ( i . e . , α = 0.17±0.03 , Table 1 ) than ENF-sensitive virus in the absence of ENF ., This fitness loss is consistent with the results in Marconi et al . 13 , although they obtained a higher estimate of the relative fitness cost ( 25–65% ) using a different fitness estimation method ., Our estimates of the virion production rate , ps = pr = 3628±857 virions day−1 , the infected cell death rate , δ = 0.29±0.02 day−1 , and the sensitive virus infection rate , βs = 7.1±3.1×10−7 ml−1 day−1 ( Table 1 ) , are approximately consistent with the estimates 1427±2000 virions day−1 , 0.37±0.19 day−1 and 11.8±14×10−7 ml−1 day−1 , respectively , in Stafford et al . 30 ., However , the estimate of δ is much smaller than that in some other studies 31 ., We also estimated the uninfected cell recruitment rate λ = 790±311 cells ml−1 day−1 ., Our estimate of a suggests that 11% of CD4+ T cells are activated , consistent with the finding that ∼10% of CD4+ T cells in peripheral blood are Ki-67+ in patients with CD4 counts less than 200 cells/µl 22 ., After ENF interruption , ENF was re-administered to the study subjects for 4 weeks while keeping the same “background” regimen ., During this re-administration of ENF , we estimated the ENF efficacies against sensitive and resistant viruses , εs and εr ., Estimated values and their 95% confidence intervals are summarized in Table 3 and Table 4 , respectively ., Comparisons of model predictions with the patient data are shown in Fig . 3 ., Our estimates indicate that ENF re-administered following interruption is 66±6% effective in reducing infection by ENF-sensitive virus , while the effectiveness is reduced to 29±6% for infection by ENF-resistant virus ., This indicates that ENF-resistant variants still remain partially sensitive to ENF even though they have reduced susceptibility ., We note that the efficacy of ENF against drug-sensitive virus obtained here is a minimal estimate , as it might have included the reduction of efficacy due to inclusion of other mutant virus in the drug-sensitive virus data ., Other estimated parameters during ENF re-administration ( Table 3 ) are more or less the same as those estimated during ENF interruption ( Table 1 ) ., The continued activity of ENF against the drug-resistant virus is supported by the apparent immediate , albeit transient and small , increase in plasma HIV RNA levels observed when ENF was interrupted in a larger cohort of individuals ( see Figure 1 in 11 ) ., Despite the difference in replication capacity and changes in the proportion of ENF-sensitive and ENF-resistant viruses ( Fig . 2a ) , the total plasma viral load remains approximately the same during ENF interruption ( Fig . 2b ) .
The plasma viral load also remains unchanged during ENF re-administration ( Fig . 3 ) except for a nominal transient post-readministration suppression followed by a rebound ., This raises a question: what determines the plasma viral load ?, We first studied the effect of the ENF-resistant virus fitness cost on plasma viral load ., In Figs . 4a and 5a , we show the plasma viral load obtained from our model for different fitness costs during ENF interruption and ENF re-administration , respectively , with other parameter values held at their estimated values ., When we varied the fitness cost from 5 to 50% we did not find any observable change in plasma viral load ., This suggests that the fitness cost has a minor role in determining the total viral load ., We next studied the effect of different initial proportions of the mutant virus at the time of ENF interruption ( Fig . 4b ) and ENF re-administration ( Fig . 5b ) ., The initial proportion of ENF-resistant virus does not seem to have any effect on plasma viral load either ., From the model we can calculate the steady-state level of infected cells , Ī = Īs+Īr , which , given our assumption that ps = pr = p , is proportional to the total viral load , V̄ = V̄s+V̄r , i . e . , V̄ = ( p/c ) Ī , where an over-bar denotes a steady-state value ., As the resistant virus population decays to a low level during ENF interruption , the net effect of backward mutation on the steady state is negligible ., Therefore , we neglect backward mutation and obtain the following expression for the steady-state total viral load , V̄:
V̄ = pλ/ ( cδ ) − d/βs ( 6 )
Note that V̄ is independent of the fitness cost , α ., Similarly , during ENF re-administration we neglect the forward mutation rate as the sensitive virus replication is largely inhibited , and obtain the steady-state total viral load in the presence of ENF , V̄ENF , as
V̄ENF = pλ/ ( cδ ) − d/ ( ( 1−εr ) ( 1−α ) βs ) ( 7 )
In this case , the total viral load depends upon the fitness cost , α ., However , using our estimated parameters ( Table 3 ) , the second term on the right-hand side is ∼20-fold smaller than the first term and hence the effect is negligible , as seen in Fig . 5a ., Therefore , the fitness cost again does not have any effect on setting the total viral load .
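A quick numerical check of Eqs. (6) and (7) with the point estimates reported above illustrates why the α-dependent correction is negligible; the values are the Table 1 and Table 3 point estimates.

```python
p, lam, c, delta = 3628.0, 790.0, 23.0, 0.29   # Table 1 point estimates
d, beta_s = 0.01, 7.1e-7
alpha, eps_r = 0.17, 0.29                      # fitness cost; ENF efficacy vs resistant

leading = p * lam / (c * delta)                # ~4.3e5 virions/ml, shared by (6) and (7)
corr_7 = d / ((1 - eps_r) * (1 - alpha) * beta_s)
print(leading, corr_7, leading / corr_7)       # ratio ~18, i.e. the ~20-fold gap noted
```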
We next studied the effect of target cells on the total viral load ., We considered two approaches: one by changing the initial target cell level and another by changing the recruitment rate of target cells ., We did not observe any effect of the initial target cell level on the total viral load during ENF interruption ( Fig . 4c ) or re-administration ( Fig . 5c ) ., Marked differences in the level of plasma viral load are seen when the target cell recruitment rate , λ , changes while keeping all other parameters fixed , during both ENF interruption ( Fig . 4d ) and re-administration ( Fig . 5d ) ., After early transient changes in viral load upon ENF interruption , the plasma viral load level remains relatively constant during the interruption , with the level related to the target cell source rate λ ., A similar result is found during ENF re-administration , except that it takes longer to initially stabilize the viral load level during ENF re-administration than during ENF interruption ., While we demonstrated the dependence of the viral load on λ ( Figs . 4d and 5d ) , the level of plasma viremia can also be seen by simulation to depend on p , c and δ ., This is also supported by the analytical expressions ( 6 ) and ( 7 ) for the steady-state level of total virus , which to a good approximation are equal to pλ/ ( cδ ) during ENF interruption and re-administration ., The changes over time of the CD4 count , and of the proportions of uninfected cells , cells infected with sensitive virus , and cells infected with resistant virus , are shown in Figs . 6a and 6c , respectively ., After ENF re-administration , the proportion of uninfected cells increases , reaches a peak and then decays to a steady-state level higher than the level before ENF re-administration ., In a study by Deeks et al . 11 on a larger cohort of individuals , the subjects received an ENF-based regimen ( the same as the one received by individuals in this study ) for 34 weeks ( approximately the same period as in our study ) followed by the interruption of ENF ., During a screening period of 4 weeks just before the interruption began , they found a negligible change in CD4+ T cell counts ( mean change: 0.13 cells/µl/week ) , suggesting that steady state was reached by the end of this long-term treatment ., They also observed the steady-state T cell level after a long period of ENF interruption ., Below we calculate from our model the steady-state level of uninfected CD4+ T cells to understand how the uninfected target cell level differs between long-term ENF interruption and long-term ENF re-administration ., The steady-state levels of target cells during ENF interruption , TE , and ENF re-administration , T̄ENF , can be calculated from our model and are given by
TE = cδ/ ( βsp ) ( 8 )
T̄ENF = cδ/ ( ( 1−εr ) ( 1−α ) βsp ) ( 9 )
respectively ., Before ENF is re-administered , εr = 0 , as no drug is present ., After drug is given , εr>0 and Eq . ( 9 ) shows that the target cell level should increase ., Furthermore , in addition to the efficacy of the drug against resistant virus , εr , the fitness cost , α , also contributes to the maintenance of a higher level of uninfected target cells during ENF re-administration ., In fact , even if the drug were completely ineffective against resistant virus ( i . e . , εr = 0 ) , and the viral load is approximately equal during both ENF interruption and ENF re-administration as shown above , HIV-infected patients with ENF re-administration will still have a higher uninfected target cell level due to the fitness loss of resistant virus ( i . e . , for α>0 ) ., Above we showed that during re-administration the total viral load , and hence the total number of infected cells , also stays approximately constant ., Hence the CD4+ T cell count , which includes both uninfected and infected CD4+ T cells , is expected to increase with the increase in target cells ., This is an important result , as it shows that even though the resistant virus becomes dominant during ENF re-administration ( Fig . 3 ) , the CD4+ T cell count should increase , which represents an immunologic benefit to patients .
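Substituting the point estimates into the ratio of Eqs. (8) and (9) quantifies this benefit; the computation below uses the Table 1 fitness cost and the Table 3 efficacy.

```python
eps_r, alpha = 0.29, 0.17                      # ENF efficacy vs resistant; fitness cost
ratio = 1.0 / ((1.0 - eps_r) * (1.0 - alpha))  # T_ENF / T_E from Eqs. (8)-(9)
print(ratio)                                   # ~1.7: ~70% more uninfected target cells
```

Note that this ~70% rise refers to the uninfected activated target cells alone; the total CD4 count, which also includes infected cells, rises by a smaller fraction, consistent with the gains discussed below.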
Before ENF interruption , the ENF-resistant viral load is on average 100-fold higher than the ENF-sensitive viral load ., After ENF interruption the proportion of ENF-resistant virus decreases ( Fig . 2 ) and after several weeks ENF-sensitive virus becomes dominant ., According to our simulations , the time it takes for ENF-sensitive virus to take over the viral population mainly depends upon the fitness cost ( Fig . 4e ) and the initial proportion of ENF-resistant virus ( Fig . 4f ) ., To look at this more closely , we simplify the problem by neglecting mutation and by assuming the target cell level remains constant , i . e . we assume T = T̄ENF , the steady-state target cell level in the presence of drug ( i . e . before the interruption ) ., This results in a system of two linear differential equations in Is and Ir ., Solving these equations , we obtain the following expression for r ( t ) = Vr ( t ) /Vs ( t ) , the ratio of the two strains:
r ( t ) = r ( 0 ) exp ( −αβspT̄ENFt/c )
where r ( 0 ) ≈100 , i . e . resistant virus is approximately 100-fold more plentiful than sensitive virus ., As time off therapy increases , the level of resistant virus falls and r ( t ) decreases ., When r ( t ) <1 , the sensitive virus is the dominant strain ., The time , tθ , for the proportion of resistant virus to reach r ( tθ ) during ENF interruption is
tθ = c ln ( r ( 0 ) /r ( tθ ) ) / ( αβspT̄ENF ) ( 10 )
As indicated by the above expression , and as seen in Figs . 4e and 4f , an increase in the fitness cost , α , causes the ENF-sensitive virus to become dominant sooner , while an increase in the initial ENF-resistant virus proportion , r ( 0 ) , results in a longer time for the ENF-sensitive virus to become dominant ., Varying the T-cell count at the time of interruption from 50 to 250 or 500 µl−1 or increasing the T cell source rate , λ , does not significantly impact the proportion of ENF-resistant virus ( Figs . 4g and 4h ) .
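As a consistency check, substituting T̄ENF from Eq. (9) into Eq. (10) gives tθ = (1−εr)(1−α) ln(r(0)/r(tθ))/(αδ); evaluating this with the point estimates (an illustration assuming the reconstructed forms of Eqs. (9) and (10) above):

```python
import numpy as np

eps_r, alpha, delta, r0 = 0.29, 0.17, 0.29, 100.0
t_dom = (1 - eps_r) * (1 - alpha) / (alpha * delta) * np.log(r0 / 1.0)  # time to r = 1
print(t_dom)   # ~55 days: sensitive virus dominates after roughly 8 weeks
```

This takeover time of roughly 8 weeks is consistent with the several-week takeover described above (Fig. 2).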
Despite the resistance to ENF , re-administration of ENF might provide some benefits if ENF has partial activity against resistant virus ., Using our model to study this , we found re-administration of ENF results in transient nominal viral suppression for about 2 weeks , followed by a rapid rebound in plasma HIV-1 RNA level and then attainment of a steady state viral load higher than the initial viral load in about 7 weeks ( Fig . 5e ) ., This shows that ENF re-administration is not effective in suppressing plasma viral load ., However , our model simulations show that re-administration of ENF helps in maintaining a higher CD4+ T cell level ( Fig . 6a ) ., After ENF re-administration , the CD4 count increases , reaches a peak and decays to a steady state level higher than the steady state level before the re-administration ., While the CD4+ T cell count decreases by 15% in the absence of ENF , re-administration of ENF results in an increase of the CD4+ T cell count by 18% over the treatment period of 3 months ( Fig . 6b ) , which can be clinically significant ., This gain of about 35% in the CD4 count due to ENF re-administration predicted by our model is consistent with a ∼36 . 8% increase in CD4 count ( from 95 cells/µl to 130 cells/µl ) during ENF-treatment observed in a study of 25 individuals 11 ., According to our model , this increase is observed because during re-administration of ENF , ENF-sensitive virus is replaced by ENF-resistant virus that has less ability to infect CD4+ target cells ( Fig . 6c ) ., Therefore , there appears to be an immunological benefit , i . e . , achieving a higher CD4+ T cell count , in patients taking ENF , even though they might suffer virologic failure due to the emergence of resistance ., The level of CD4+ T cells increases as the fitness cost and/or the efficacy of ENF against ENF-resistant virus increases , because an increase in fitness cost and/or efficacy further decreases the infectivity of resistant virus ., The impact of antiretroviral drug-resistance on viral load , CD4+ T cell counts and clinical outcomes is complex ., Although the emergence of resistance to protease inhibitors and reverse transcriptase inhibitors clearly affects viral fitness ( as defined in vitro and in vivo ) 2–5 , 32 , its impact on viral load and CD4+ T cell counts is unclear ., At comparable plasma viral loads , drug resistant HIV can be associated with more sustained CD4+ T cell gains and reduction of the risk of morbidity and mortality 2 , 4 , 5 , 32 than wild-type ( drug-sensitive ) HIV ., To understand the mechanism for this apparent beneficial effect on immunologic and clinical outcomes independent of viremia , we use ENF resistance as a “probe” to explore the impact of fitness on viral and immunologic dynamics in vivo ., Although the data linking ENF resistance to viral load , CD4 and clinical outcomes is limited , the preliminary data that does exist is consistent with the more extensive literature pertaining to protease inhibitor resistance ., Specifically , despite the emergence of ENF-resistant mutations , CD4+ T cell counts have been observed to increase during therapy as the ENF-resistant virus with less capacity to infect T cells replaces the ENF-sensitive virus ., A large prospective study has recently been completed in which ENF was given as a “pulse” to determine if the expansion of ENF resistance positively affects CD4+ T cell counts ., Preliminary data from 3 individuals has previously been published 13 ., Given the | Introduction, Methods, Results, Discussion | For most HIV-infected patients , antiretroviral therapy controls viral replication ., However , in some patients drug resistance can cause therapy to fail ., Nonetheless , continued therapy with a failing regimen can preserve or even lead to increases in CD4+ T cell counts ., To understand the biological basis of these observations , we used mathematical models to explain observations made in patients with drug-resistant HIV treated with enfuvirtide ( ENF/T-20 ) , an HIV-1 fusion inhibitor ., Due to resistance emergence , ENF was removed from the drug regimen , drug-sensitive virus regrew , and ENF was re-administered ., We used our model to study the dynamics of plasma-viral RNA and CD4+ T cell levels , and the competition between drug-sensitive and resistant viruses during therapy interruption and re-administration ., Focusing on resistant viruses carrying the V38A mutation in gp41 , we found ENF-resistant virus to be 17±3% less fit than ENF-sensitive virus in the absence of the drug , and that the loss of 
resistant virus during therapy interruption was primarily due to this fitness cost ., Using viral dynamic parameters estimated from these patients , we show that although re-administration of ENF cannot suppress viral load , it can , in the presence of resistant virus , increase CD4+ T cell counts , which should yield clinical benefits ., This study provides a framework to investigate HIV and T cell dynamics in patients who develop drug resistance to other antiretroviral agents and may help to develop more effective strategies for treatment . | The impact of antiretroviral drug-resistance on viral load , CD4+ T cells , and clinical outcomes is complex ., We used mathematical models to evaluate the benefits of HIV drug therapy in the presence of drug-resistant virus ., As an example , we considered resistance to enfuvirtide , the first FDA-approved fusion inhibitor ., If viral load increases on drug therapy due to drug resistance , therapy with this drug may be stopped ., We found that the drug resistant virus is less fit than the drug-sensitive virus in the absence of drug , and this fitness disadvantage causes the loss of drug-resistant virus during drug interruption ., After the drug-sensitive virus replaces resistant virus , enfuvirtide therapy was re-administered ., Analyzing the resulting viral kinetics , we demonstrate that despite the inability of the re-administered drug to suppress viral load because of the continued presence of drug resistant virus , therapy still provides benefit to the patient by preserving or increasing peripheral blood CD4+ T cell levels . | mathematics, virology/immunodeficiency viruses, immunology/immune response, virology/antivirals, including modes of action and resistance, computational biology/evolutionary modeling, infectious diseases/hiv infection and aids, infectious diseases/antimicrobials and drug resistance, virology/host antiviral responses | null |
1,568 | journal.pcbi.1000766 | 2,010 | Accurately Measuring Recombination between Closely Related HIV-1 Genomes | Viral diversity is one of the major obstacles to the successful eradication of HIV 1 , 2 ., It arises due to the interplay between mutations introduced by error-prone reverse transcription 3 , high levels of viral turnover 4 , retroviral recombination 5 and strong diversifying selection pressure from the immune system 2 ., All retroviruses co-package two RNA genomes into each virion ., Retroviral recombination occurs when the reverse transcriptase ( RT ) enzyme switches between co-packaged RNAs during reverse transcription ( reviewed in 6 , 7 ) ., In HIV , recombination occurs much more frequently than mutation 8 , and is a major determinant of viral diversification ., Within infected individuals , recombination allows sequential rounds of viral escape of both antibody and T-cell recognition , resulting in loss of immune control 9 , 10 ., Furthermore , recombination can both promote and suppress the generation of multiple drug resistance , by creating or breaking linkages between drug resistance mutations 11–16 ., Therefore , an accurate measurement of recombination rates directly within the HIV genome is fundamental to our understanding of HIV ., Recombination has been studied extensively , by many groups , and is typically detected by monitoring the linking of marker points from co-packaged RNA genomes into a single DNA genome ., One popular method of measuring recombination is through the use of retroviral reporter systems ., These systems measure recombination within a ‘foreign’ gene insert , such as genes that code for antibiotic resistance proteins , surface protein markers , and/or fluorescent proteins 8 , 17–24 ., Retroviral reporter systems have the advantage of being able to readily quantify a large number of recombination events within the gene insert ., However , in vitro studies show that template sequence and nucleic acid structure are important determinants of the recombination process 25 , 26 ., Therefore , measurements of recombination rates within non-HIV ‘foreign’ gene sequences will not recapitulate recombination rates within HIV sequence ., Other groups utilize the genetic variation between and within HIV subtypes , and use sequencing to monitor recombination 8 , 21 , 22 ., These systems provide the foundation to reveal recombination events within the HIV genome ., However , the use of genetically divergent RNA templates does not reflect the situation in vivo , where the vast majority of infected individuals are infected with a single virus which rapidly diversifies into a viral quasispecies over the course of infection 27 ., The use of divergent RNA sequences can lead to confounding differences in parameters known to affect recombination , including: overall RNA homology 28 , 29 , RNA packaging 30 , 31 , and the amino acid sequence of viral proteins , such as reverse transcriptase 32–34 ., Therefore , the recombination events detected using divergent RNA sequences most likely reflect the special case of inter-subtype recombination ., Hence , there is a real need to develop a retroviral recombination system which mimics the recombination that occurs between closely related , yet genetically distinct , viruses found within an infected individual ., Recombination is detected by monitoring the linking of marker points from separate RNA genomes into a single DNA genome ., Regardless of the system in which it is measured , recombination is either detected or 
undetected between any two marker points ., This is generally interpreted as one or zero recombination events , respectively ., However , with increasing genomic distance and/or recombination rate , there is an increased likelihood that there will be multiple template switches between any two marker points which go undetected ., Consequently , with high rates of recombination and/or genomic distances between marker points , there is a greater chance of underestimating recombination rates due to multiple template switches ., These possibilities have been mentioned previously 20 , 24 , 35 , 36 ., However , there is no current standard method to calculate recombination rates over multiple genetic regions of varying lengths that also compensates for the possibility of multiple template switches between marker points ., Additionally there exists no theoretical estimate for the error when recombination is measured without compensating for multiple template switches , as is often the case ., Here , we present a novel experimental method based on limited codon modification of the HIV genome which does not change the infectivity of the virus or any viral protein ., This allows the measurement of recombination between closely related genomes analogous to those found in the quasispecies of an infected individual ., This system measures recombination in different gene segments , allowing the identification of possible recombination ‘hotspots’ , where template switches occur at higher frequencies ., We then develop statistical tools to calculate an ‘optimal recombination rate’ that reproduces observed recombination frequencies , taking into account multiple template switches ., These tools demonstrate the error in calculating crude recombination rates ( that do not consider multiple template switches ) and emphasize the necessity for careful data analysis ., These tools also provide the basis to quantify statistical differences in recombination rates in various regions of the HIV genome , under different conditions , or infection with different target cells ., Finally , our analysis allows for testing and subsequent validation of some inherent assumptions and sources of error in the experimental design ., We compare our analytic procedure with previously published studies and find that our approach avoids some of the potential pitfalls of using reporter gene inserts ., Recombination is measured by analysing the cDNA that results from infection with non-identical ( heterozygous ) co-packaged RNA genomes ., The positions in which the RNA genomes differ are called marker points ., Recombination is detected only when the resulting cDNA contains a mixture of marker points from both RNA strands ., It is tempting to conclude that one template switch has occurred every time recombination is detected between a set of marker points , and that no template switches occurred elsewhere ., However , any even number of template switches between two fixed marker points will lead to us observing no recombination , and any odd number will result in us observing a single recombination event ( Figure 1A ) ., An important consequence of this fact is that the probability of observing a recombination event is a function of the genomic distance between the markers and the overall recombination rate ., We created a model of recombination which takes into account the possibility of not detecting recombination events ( see Materials and Methods ) ., Our recombination rate calculation ( denoted ‘optimal’ recombination rate ) 
reveals the relationship between the overall recombination rate , distance between marker points and the probability of observing a recombination event ( Figures 1B and 1C ) ., We show that for each genomic distance and overall rate of recombination , there is a unique probability of observing recombination ., Furthermore , with high overall rates of recombination and large genomic distances , it becomes much more difficult to calculate the recombination rate accurately ., Indeed , these probabilities eventually converge until it becomes impossible to derive the true rate of recombination because there is an equal chance of observing or not observing a recombination event ., To demonstrate the consequences of ignoring multiple template switches , we utilized a simple equation ( denoted the ‘crude’ recombination rate calculation ) : r = c/ ( nl ) , where r is the rate of recombination events per nucleotide per round of infection ( REPN ) , c is the number of template switches detected , n is the number of sequences , and l is the genomic distance over which recombination is measured ., This crude formula assumes that between marker points , at most one template switch can occur ., To calculate the theoretical expected error of the ‘crude’ recombination rate , we first use the ‘optimal’ recombination rate equation ( Eq . A ) to determine the probability of observing recombination over different genomic distances ., We then use the ‘crude’ recombination rate calculation on these probabilities and find that this calculation consistently underestimates the real recombination rate ., At an actual recombination rate of 0 . 001 REPN ( lower than the median recombination rate measured in T-cells in this study ) , the calculated crude recombination rate is 9% lower when measured over a distance of 100 nucleotides , and 37% lower when measured over a distance of 500 nucleotides ( Figure 1D ) ., 
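These percentages are consistent with a Poisson model of template switching, in which the probability of observing recombination between markers l nucleotides apart at true rate r is the probability of an odd number of switches, R ( l ) = ( 1 − e^(−2rl) ) /2. A minimal sketch, assuming this closed form corresponds to the optimal-rate relationship of Eq. A (which is not reproduced in this excerpt):

```python
import numpy as np

def p_observe(r, l):
    """Probability of detecting recombination between markers l nt apart,
    i.e. of an odd number of template switches at true rate r (REPN)."""
    return (1.0 - np.exp(-2.0 * r * l)) / 2.0

for r_true in (0.001, 0.003):
    for l in (100, 500):
        r_crude = p_observe(r_true, l) / l      # crude estimate c/(n*l)
        err = 100.0 * (1.0 - r_crude / r_true)
        print(f"r = {r_true} REPN, l = {l} nt: crude rate is {err:.0f}% too low")
# Reproduces the ~9%/37% (r = 0.001) and ~25%/68% (r = 0.003) figures in the text.
```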
This error is even larger when the real recombination rate is 0 . 003 , where the crude rate is 25% and 68% lower than the actual rate when measured over distances of 100 nucleotides and 500 nucleotides , respectively ( Figure 1E ) ., This error is a direct result of not considering multiple template switches , emphasizing the need for our optimal recombination rate calculation ., We sought to measure the rate of recombination directly in the HIV genome ., To this end , we made a marker virus ( MK ) by introducing 6 codon modifications into the gag gene of wild-type ( WT ) HIV ., This creates 5 regions ( varying in length from 77 to 398 nucleotides ) over which we can directly measure the recombination rate of a full length HIV genome ( Figure 2A and 2B ) ., These modifications neither affect the infectivity of the virus nor alter the amino acid sequence of any viral protein ( Figure S1 ) ., The recombination process depends greatly on template sequence , RNA structure , the overall homology between sequences and the viral proteins involved in reverse transcription ., Therefore , this system mimics the situation in vivo , where recombination occurs in the context of a quasispecies of highly related , yet genetically distinct viruses ., In our experimental system , a template switch observed in the DNA provirus is most likely to be the result of viral recombination during reverse transcription of the two RNA molecules co-packaged in a heterozygous virion ., However , it is possible that recombination could also have occurred during a number of steps in sample preparation and sequencing ., To determine the potential bias within our experimental system , we quantified experimentally-induced recombination as follows: Firstly , we tested the possibility of transfection-induced recombination , which can occur via homologous recombination in the producer cell 37 ., We measured this by direct sequencing of plasmid DNA extracted from co-transfected 293T cells ( Figure 2C ) ., Of 182 sequences of plasmid DNA extracted from transfected cells we observed zero recombination events , suggesting that this is not a source of error in our system ( Table 1 ) ., Secondly , we tested for PCR-induced recombination that may occur if the polymerase switches templates during PCR amplification of the viral sequences prior to sequencing ., We measured this by performing two separate infections with either WT homozygous virus or MK homozygous virus ( Figure 2D ) ., In this case , recombination occurs at the usual rate between co-packaged HIV RNA strands , but template switching between these identical copies of RNA cannot be detected ., These homozygous samples are mixed prior to PCR ., Thus , any observable recombination can be inferred to be an artifact of the PCR ., 125 sequences were obtained and 3 recombination events were detected ( Table 1 ) ., Finally , we measured the rate of ‘inter-virion’ recombination that may have occurred if the target cells were multiply infected , and retroviral recombination was occurring between the RNA molecules of different virions ., To do this we co-infected cells with homozygous WT and homozygous MK virions ., Thus , any intra-virion recombination would be undetected , but both inter-virion recombination and PCR-induced recombination would be detected ( Figure 2E ) ., 128 sequences were obtained and 2 recombination events were detected ( Table 1 ) ., To measure the biological rate of recombination we generated a mixture of heterozygous and homozygous virus by co-transfection ., When equal amounts of two HIV plasmids are co-transfected , 
co-packaging of RNA into virions is random 38 ., Therefore , when we co-transfected equal amounts of WT and MK plasmid , we expect 50% heterozygous virions , 25% homozygous WT virions and 25% homozygous MK virions ( Figure 2F ) ., This mix was used to infect primary T-cells ., 118 sequences were obtained and 58 recombination events were detected ( Table 1 ) ., In determining recombination rates , it is easy to assume that transfection of equal amounts of WT and MK plasmid leads to the production of 50% heterozygous and 50% homozygous virus ., However , variations in the level of co-transfection will lead to the production of a different proportion of heterozygous virions than expected ., This will bias the calculation of recombination rates ., Our design allows us to estimate the proportion of heterozygous virions in our experiments directly from the data ( as described in Materials and Methods ) ., The estimated proportion of heterozygous virus was approximately 50% in our studies ( 48 . 6% , 45 . 1% , 49 . 7% and 46 . 1% for transfection , PCR , between virion and T-cell experiments respectively ) , indicating that there is no bias in infection rates between WT and MK virus or in the production of our heterozygous virions ., We then calculated the recombination rates for each of our experimental conditions , using both our crude and optimal recombination rate calculations ( Table 1 ) ., As we detected no recombination events in our transfection-induced recombination control , the crude and optimal recombination rates were 0 REPN ., From 125 and 128 sequences for the PCR-induced recombination control and the PCR-induced plus inter-virion control , we observe 3 and 2 recombination events respectively ., This corresponds to an optimal recombination rate of approximately 0 . 1×10−3 REPN ., For our biological sample , the crude recombination rate was calculated to be 0 . 81×10−3 REPN , and the optimal recombination rate to be 1 . 45×10−3 REPN ., Thus , the crude recombination rate underestimates the optimal rate by approximately 44% ., This underlines the importance of calculating recombination rates using our ‘optimal’ recombination rate calculation instead of the ‘crude’ method commonly used in the literature , which does not compensate for multiple template switches ., Using the above approach we are able to directly estimate the recombination rate from an experimental data set ., However , the error of this estimate is affected by the number of sequences sampled , and their distribution ., In order to determine confidence intervals for these estimates we generated probability distributions by bootstrapping the sequence data ( see Materials and Methods ) ., The 95% confidence intervals of these distributions are calculated with the Percentile Method and are shown in Table 1 ., Due to the high number of samples ( >118 for all datasets ) and relative symmetry of the bootstrap distributions ( data not shown ) , we assume very good coverage of these confidence intervals ., We conclude that the recombination rates are significantly different ( at the 0 . 
05 level ) when the 95% confidence intervals do not overlap ., These distributions show that the recombination rate is not significantly different between PCR-induced recombination and PCR-induced plus inter-virion recombination ., Thus , inter-virion recombination is not a significant factor in our experimental setup ., However , recombination rates were significantly different between our controls and the rate of HIV RT-induced recombination in the biological sample ., The true HIV RT-induced recombination rate was then calculated with a control correction method ( see Materials and Methods ) , which is approximately a subtraction of the two recombination rates ., The RT-induced recombination rate alone is calculated to be 1 . 35×10−3 REPN in primary T-cells ., 
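A sketch of the percentile-bootstrap procedure referred to above (resampling whole sequences with replacement); `optimal_rate` stands in for the rate estimator described in Materials and Methods and is an assumed placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(sequences, optimal_rate, n_boot=10_000, alpha=0.05):
    """Percentile-method confidence interval for a recombination rate.

    sequences    : list of per-sequence recombination observations
    optimal_rate : assumed callable mapping a resampled list to a rate (REPN)
    """
    n = len(sequences)
    rates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample with replacement
        rates[b] = optimal_rate([sequences[i] for i in idx])
    return tuple(np.percentile(rates, [100 * alpha / 2, 100 * (1 - alpha / 2)]))
```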
Our experimental system allows recombination to be measured between closely related viral genomes ., However , most recent recombination assays involve the insertion of fluorescent proteins into the HIV genome ., In these systems two distinct defective genes , encoding a fluorescent protein , are inserted into different HIV genomes ., A recombination event that eliminates the deactivating mutations recreates a functional fluorescent encoding gene ., Recombination can then be measured via FACS analysis of infected and fluorescent protein expressing cells ., This technique is capable of producing large quantitative datasets and has been shown to be an effective tool to compare recombination rates under varying conditions ., Generally , the extent of recombination in these systems is measured as a function of the multiplicity of infection ( MOI ) of fluorescent protein expressing cells and the MOI of viral infection ., However , these calculations are not easily comparable to calculations made for marker points separated by different genomic lengths ., A clearer approach is to calculate the recombination rate in terms of ‘recombination events per nucleotide per round of infection’ ( REPN ) , as this rate allows the prediction of the number of recombination events that will occur over any length of RNA ., To demonstrate how our recombination rate calculation method can be applied to fluorescent protein studies , and to make a direct comparison of these recombination rates to our own , we analysed the data from Rhodes et al . 2005 24 ., Tables 2 , 3 , 4 and 5 from Rhodes 2005 list the total number of cells , infected cells , and green fluorescent protein positive ( GFP+ ) cells when recombination is measured over genomic distances of 588 , 300 , 288 and 103 base pairs , respectively ., From these ratios the GFP+ MOI/infection MOI ( ratio denoted as M ) is calculated ., This ratio represents the probability of a single infection event resulting in the reconstruction of a functional GFP protein ( see Materials and Methods ) ., Recombination is only detectable from 50% of the virions ( those that are heterozygous ) ., Thus , the probability that a heterozygous infection recreates a functional GFP is 2M ., A functional GFP is only created when the two inactivating mutations are eliminated via recombination ., However , the two inactivating mutations being ‘joined’ via recombination is equally likely ., Thus , the probability that a heterozygous infection results in mosaic cDNA ( from a nucleotide sequence perspective ) is 4M ., Using equation ( A ) ( Materials and Methods ) with R ( L ) = 4M converts M into the required recombination rate measured in REPN ., Thus , taking into account the possibility of multiple template switches , the recombination rates for the data in Rhodes 2005 range from 0 . 49×10−3 to 0 . 97×10−3 ( Table 2 ) ., Note that the calculated optimal recombination rates in Rhodes 2005 are similar regardless of the genomic distance over which recombination is measured ., This is because our analytical recombination rate calculation compensates for genomic distance when calculating the probability of multiple template switches ., Our conversion of the data in Rhodes 2005 to a recombination rate per nucleotide per round of infection is in line with previous conversions by Suryavanshi and Dixit 36 , who used curve fitting techniques to estimate an average recombination rate over the different lengths ., The advantage of our technique is that our procedure can be applied with a standard calculator and requires no curve fitting experience or software ., 
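Under the same assumed Poisson closed form as above, the conversion from M to REPN can indeed be written in one line; the example value of M below is hypothetical:

```python
import numpy as np

def m_to_repn(M, L):
    """Convert M = GFP+ MOI / infection MOI into REPN over L base pairs.

    Half of virions are heterozygous and 'joining' the mutations is as likely
    as eliminating them, so the probability of mosaic cDNA is R(L) = 4*M; the
    Poisson form R(L) = (1 - exp(-2*r*L))/2 is then inverted (needs M < 0.125).
    """
    R = 4.0 * M
    return -np.log(1.0 - 2.0 * R) / (2.0 * L)

print(m_to_repn(0.05, 588))   # hypothetical M measured over 588 bp
```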
Crossover sites of the HIV-1 RT may consist of RNA sequence determinants that direct the RT to switch templates , and it has been suggested that RNA-RNA interactions can promote recombination in vitro 28 , 29 ., Unlike systems that measure recombination over only one region , our experimental design allows recombination to be studied in five gene segments in gag , which cover a total genomic distance of 917 base pairs ., Our analytical recombination rate calculation allows us to calculate the optimal constant recombination rate that best describes our experimental data and compensates for the possibility of multiple template switches ., This system allows us to determine: ( i ) if the variation in recombination along the gene is significantly different from that expected by random variation ( indicating whether recombination is a random event ) ; ( ii ) the optimal location for a recombination rate change ( determining the marker point that separates any recombination ‘hotspots’ and ‘coldspots’ ) ; and ( iii ) whether a two recombination rate model better describes the observed experimental recombination data ., Together , these analyses will help us to determine whether recombination occurs randomly across the viral genome ., We first use a chi-squared goodness of fit test to determine if the observed frequency of recombination and the expected frequency ( calculated from our optimal recombination rate and compensating for multiple template switches , equation ( B ) , Materials and Methods ) are significantly different in each gene segment ., Figure 3A profiles the experimentally observed and expected number of detected template switches that were recorded over the different sections of gag ., The experimental data display significant variation between the expected and observed frequencies of recombination ( p = 0 . 02 ) , suggesting that recombination rates vary along the gene segments ., We then adjusted our mathematical model of HIV recombination to fit two optimal recombination rates along the gene segment ., This was achieved by splitting the gene segment into two , and calculating each subsegment's optimal recombination rate ., The location of the split was optimised along marker positions 2 to 5 ., We find that the recombination rate is higher towards the marker site 1 end and lower towards marker site 6 ., The optimal location for the recombination rate switch was at marker site 4 ( 1 . 95×10−3 and 0 . 49×10−3 REPN from sites 1–4 and 4–6 , respectively ) ( Figure 3B ) ., Comparing the dual recombination rate model to the original model with an F-test did not produce a significant p value ( p = 0 . 30 , Figure 3B ) , indicating that the dual recombination rate model did not fit significantly better to justify the additional parameters ( second recombination rate and switch location ) ., To address this further , we analyzed a second set of data and sequenced 192 cDNA strands ., Again , we found that recombination is higher towards marker site 1 and lower towards marker site 6 ( Figure 3C and 3D ) ., However , an F-test comparing the one and two recombination rate models in this dataset , this time applying the same switch location estimated in the first experiment ( one less parameter in the two recombination rate model ) , still did not achieve significance ( p = 0 . 068 ) ., Thus , our data support a difference in recombination rate across the gene , but we were unable to identify the precise ‘hotspots’ of higher recombination ., 
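The segment-wise comparison described above can be carried out with a standard goodness-of-fit test; the observed and expected counts below are hypothetical placeholders, not the study's data:

```python
from scipy.stats import chisquare

# Hypothetical observed/expected template-switch counts in the 5 gag segments;
# expected counts would come from the single optimal rate, corrected for the
# probability of multiple switches in each segment (Eq. B). Totals must match.
observed = [22, 14, 9, 7, 6]
expected = [13.5, 12.9, 11.8, 10.9, 8.9]
chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```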
The assumption of an equal recombination rate amongst all sequences predicts that the frequency of multiple recombination events should be Poisson distributed ., However , due to the possibility of multiple template switches occurring between markers of varying genomic distances , and the possibility of a varying recombination rate across the gag gene , the frequency of multiple detectable template switches does not follow a Poisson distribution ., We calculated this distribution to compute the expected frequency of multiple detectable template switches and compare this to our experimental results ( Figure 4 ) ., This calculation compensates for multiple crossovers and uses the individual recombination rate observed in each region ., Our data indicate some variation from the expected frequency of multiple template switches; however , this was not significant ( p = 0 . 096 ) ., Finally , it is possible that the limited introduction of marker points into the HIV genome altered the RNA structure in such a way as to bias the recombination process ., For example , reverse transcription commencing on the MK genome may be more likely to result in recombination than reverse transcription on the WT genome due to our codon modifications ., This predicts that the probability of recombination will be different when the RT is reverse transcribing a WT or MK marker point ., Therefore , we compared the proportion of recombination events where recombination occurred from MK to WT , versus from WT to MK , in our sequences ., Of the 90 template switches observed in the pooled dataset , 42 were MK to WT and 48 were WT to MK , consistent with the null expectation of 50∶50 ( p = 0 . 60 , binomial distribution ) ., This illustrates that our codon-modified markers have not significantly altered the RNA structure so as to bias the observed recombination rate ., Recombination plays an instrumental role in the evolution of HIV 39 , 40 and continues to shape the global pandemic 41 ., Despite the excellent progress made in understanding inter-subtype recombination 30 , 42 , 43 , the study of recombination occurring between closely related genomes within an infected individual has been hampered by the lack of an appropriate model ., Existing recombination systems are based on foreign reporter sequences , inter-subtype HIV genomes and/or intra-subtype HIV genomes with variation in amino acid sequences ., Therefore , we have developed a novel marker system and associated mathematical tools that: ( i ) measure the recombination rate directly on the HIV genome; ( ii ) control for background recombination; and ( iii ) correct for multiple template switching ., Our HIV recombination marker system uses genetic marker points based on the codon modification of the authentic full length HIV genome without altering the amino acid sequences ., Other groups have previously measured recombination rates on the HIV genome using the divergent RNA sequences found between or within HIV subtypes or within non-viral reporter sequences ., Our procedure has several advantages over previously published methods ., First , we rationally introduced marker points into the HIV genome at well defined locations , by avoiding RNA sequences that are known to be important for HIV replication ., These marker points allow recombination to be monitored , but do not affect the HIV replication cycle , even over multiple rounds of replication ( Figure S1 ) ., This is in contrast to recombination systems using divergent RNA sequences from different viral strains , where the differential replication capacities of the virus may bias the outcome of recombination ., Second , our marker system retains every virion protein and these are expressed in their correct biological context ., In the case of the retroviral reporter systems , it is common to completely knock out one or more HIV proteins by replacing them with non-viral reporter protein sequence ., Therefore , these reporter systems , even when attempts are made to reintroduce these proteins back into the virion , do not recapitulate the exact biological conditions occurring in the full length virus 22 , 24 ., Third , our silent modifications do not change the amino acid sequence of the viral proteins ., This is important in light of reports that the amino acid sequence of the HIV RT affects the rate of template switching 32 , 34 and that mutations in the Gag 
polyprotein can affect RNA packaging and recombination 44 ., Therefore , it seems likely that variations in the amino acid sequence of any viral protein involved in either assembly or reverse transcription of the virus could have unintentional consequences on the rate of recombination ., This would limit the utility of divergent RNA sequences , even from within the same subtype 45 ., Fourth , by limiting our modifications to targeted regions of the genome , we aim to maintain overall RNA structure and homology , which are critical determinants of recombination 28 , 29 , 46 ., We demonstrated that recombination occurred at an equal rate on our WT and MK genome; hence , our modifications do not change the rate of recombination ., This indicates that the variations in the recombination rate we observe are due to differences in the RNA sequence between marker points , not to the marker points themselves ., We acknowledge that there are experimental complexities associated with the direct measurement of recombination by sequencing that can lead to the inclusion of non-viral recombination artifacts ., Therefore , we carefully controlled for transfection-induced recombination , PCR-induced recombination and the effects of co-infection due to inter-virion recombination ., In our study , transfection-induced recombination can be excluded as a source of error ., We also show that inter-virion recombination , due to multiple infections of a cell , is not a significant source of error ., By contrast , most retroviral reporter systems are biased by multiple infections ., That is , in most retroviral reporter systems , multiple infections cannot be distinguished from single infections ., This decreases the apparent total number of infection events , which is required to accurately calculate the recombination rate ., To overcome this , these systems make use of MOI calculations which compensates for multiple infections ., However , MOI calculations assume that infection events are independent and random ., This is problematic in light of reports that double-infection occurs more frequently than predicted from random chance alone 47 , 48 , although this effect has been challenged by other data 22 and mathematical analysis 49 ., Nevertheless , our system has the advantage that the recombination rate calculations are not affected by the occurrence of multiple infections ., Finally , we did detect some recombination due to PCR-induced recombination but were able to optimize our PCR cycling conditions to minimize its effects ., In addition , our recombination rate calculation corrects for this background to reveal the true rate of recombination ., This highlights the necessity of including appropriate controls , as the effect of PCR-induced recombination has been ignored in similar studies 8 , 22 , 50 , 51 ., As recombination is measured by observing the linking of genetic marker points , all recombination systems are potentially biased by the occurrence of multiple template switches ., A potential solution is to reduce the genomic distance between marker points and to evenly space them on the HIV genome ., This effectively eliminates multiple template switches and any bias due to variations in genomic distance between marker points ., However , it is impossible to modify the HIV genome in this way without drastically affecting the replication cycle ., As a result , modifications that do not affect important RNA sequences or vary the amino acid sequences of viral proteins will always be unevenly spaced ., 
Furthermore , increasing the frequency of marker points increases the genetic diversity between co-packaged RNAs ., This is expected to decrease the observed recombination rate , as high levels of sequence identity between templates are required for efficient template switching 46 , 52 ., Thus , whilst reducing the genomic distance between markers can improve the ability to detect recombination , it also biases the observation by decreasing the likelihood of template switching in the first place ., As multiple template switches between any two marker points occur , by definition , between identical sequences , these switches take place under optimal conditions for | Introduction, Results, Discussion, Materials and Methods | Retroviral recombination is thought to play an important role in the generation of immune escape and multiple drug resistance by shuffling pre-existing mutations in the viral population ., Current estimates of HIV-1 recombination rates are derived from measurements within reporter gene sequences or genetically divergent HIV sequences ., These measurements do not mimic the recombination occurring in vivo , between closely related genomes ., Additionally , the methods used to measure recombination make a variety of assumptions about the underlying process , and often fail to account adequately for issues such as co-infection of cells or the possibility of multiple template switches between recombination sites ., We have developed a HIV-1 marker system by making a small number of codon modifications in gag which allow recombination to be measured over various lengths between closely related viral genomes ., We have developed statistical tools to measure recombination rates that can compensate for the possibility of multiple template switches ., Our results show that when multiple template switches are ignored the error is substantial , particularly when recombination rates are high , or the genomic distance is large ., We demonstrate that this system is applicable to other studies to accurately measure the recombination rate and show that recombination does not occur randomly within the HIV genome . | HIV's ability to generate and maintain high genetic diversity leads to multiple drug resistances and evasion from the immune system , eventually leading to immune failure and progression to AIDS ., HIV maintains this diversity with a process of mutation ( incorrect copying of genetic information in viral replication ) and recombination ( mixing two viral genomes in the creation of viral offspring ) ., Recombination is generally studied by inserting genes encoding non-viral fluorescent proteins ., However , recombination in such modified HIV genomes may not accurately reflect the level of recombination occurring within a patient infected with HIV ., Additionally , recombination will go undetected in regions where the parental genomes are identical , and this effect is often ignored ., We have developed a novel experimental system which allows recombination to be measured between two very closely related HIV genomes ., We have also developed statistical tools to accurately calculate the recombination rate , compensating for undetectable recombination in identical regions of the parental genomes ., We show that our experimental system bypasses some of the pitfalls of fluorescent recombination experiments and our tools provide a strong quantitative foundation for future studies in this area . 
| infectious diseases/hiv infection and aids, mathematics/statistics, virology/viral replication and gene regulation, computational biology | null |
1,734 | journal.pcbi.1005138 | 2,016 | Temporal Dynamics and Developmental Maturation of Salience, Default and Central-Executive Network Interactions Revealed by Variational Bayes Hidden Markov Modeling | Our ability to adapt to a constantly changing environment is thought to depend on the dynamic and flexible organization of intrinsic brain networks 1 , 2 ., Characterizing the temporal dynamics of interactions between distributed brain regions is fundamental to our understanding of human brain organization and its development 2–8 ., However , most of our current knowledge of functional brain organization in adults and children is based on investigations of time-independent functional coupling ., Progress in the field has been impeded by both a lack of appropriate computational techniques to investigate brain dynamics as well as an inadequate focus on core brain systems involved in higher-order cognition 3 , 4 , 9 , 10 ., In particular , progress has been limited by weak analytical models for identifying time-varying brain states , and their occurrence rates and mean lifetimes , for quantifying transition probabilities between brain states , and for characterizing the dynamic evolution of functional connectivity patterns over time 9–11 ., Here we overcome limitations of extant methods by developing and applying novel computational techniques for characterizing dynamic functional interactions between distributed brain regions and address two key neuroscientific goals ., The first scientific goal of our study was to investigate the dynamic functional connectivity of the salience network ( SN ) , the central-executive network ( CEN ) and the default mode network ( DMN ) , three core neurocognitive systems that play a central role in cognitive and affective information processing 1 , 12 ., Our second scientific goal was to characterize the maturation of the dynamic functional connectivity of the SN , CEN and DMN between childhood and adulthood in order to address important gaps in the literature regarding the nature of dynamic cross-network interactions over development and the question of how brain systems become more flexible during the period between childhood and adulthood ., The SN is a limbic-paralimbic network anchored in the anterior insula and dorsal anterior cingulate cortex with prominent subcortical nodes in affective and reward processing regions including the amygdala and ventral striatum 13 , 14 ., The SN plays an important role in orienting attention to behaviorally and emotionally salient and rewarding stimuli and facilitating goal-directed behavior 12 , 14–16 ., The fronto-parietal CEN is anchored in the dorsolateral prefrontal cortex and supramarginal gyrus and is critical for actively maintaining and manipulating information in working memory 17 , 18 ., The DMN is anchored in the posterior cingulate cortex , medial prefrontal cortex , medial temporal lobe , and angular gyrus 19–21 and is involved in self-referential mental activity and autobiographical memory 22 ., In adults , task-based fMRI studies have consistently demonstrated that SN , CEN and DMN nodes are involved in a wide range of cognitive tasks , with the strength of their responses increasing or decreasing proportionately with task demands 12 , 23 , 24 ., Analysis of causal interactions between these networks has also shown that high-level attention and cognitive control processes rely on dynamic interactions between these three core neurocognitive networks 16 , 25–27 ., Thus , far from operating independently , 
these three brain networks , which have only been probed using static time-invariant connectivity analysis , must form transient dynamic functional networks ( DFNs ) allowing for flexible within- and cross-network interactions ., While the SN , CEN and DMN can be reliably identified in most individuals using static network analysis of rs-fMRI data 26 , 28 , progress in characterizing their dynamic temporal properties has been limited by currently available computational tools and procedures ., Most current studies of dynamic brain connectivity use a sliding window approach 29 , 30 , which is problematic because of arbitrary parameters such as window size , which can lead to erroneous estimates of dynamic connectivity 7 , 9 , 11 ., Furthermore , extant methods do not provide information about the occurrence and lifetimes of individual dynamic brain states , transition probabilities between network states or unique dynamic network configurations associated with each brain connectivity state ., To address these weaknesses , we developed a novel variational Bayesian hidden Markov model ( VB-HMM ) 31 to uncover time-varying functional connectivity ., HMM uses a state-space approach to model multivariate non-stationary time series data 32 , 33 and cluster them into distinct states , each with a different covariance matrix reflecting the functional connectivity between specific brain regions ., Importantly , VB-HMM automatically prunes redundant states , retaining only those that significantly contribute to the underlying dynamics of the fMRI data , and provides the posterior distribution of parameters rather than point estimates of maximum likelihood-based methods ., We then used VB-HMM to characterize dynamic functional interactions between the SN , CEN and DMN to address our two neuroscientific goals ., VB-HMM allowed us to examine for the first time several important metrics of brain dynamics: the number of distinct brain states , their occupancy rates and mean lifetimes , and switching probabilities between brain states and DFNs ., Crucially , VB-HMM enabled us to investigate the temporal dynamics and evolution of states where the SN , DMN and CEN are fully segregated from each other , and states where they interact with each other ., We hypothesized that segregation of the SN , DMN and CEN would constitute a dominant state with high occupancy rates and mean lifetimes ., We further hypothesized that states with high occupancy rates would be temporally stable and marked by a higher probability of switching within the state compared to switching across states ., We use sub-second resting-state fMRI ( rs-fMRI ) datasets acquired as part of the Human Connectome Project ( HCP ) ( http://www.humanconnectome.org ) 
and demonstrate the robustness of our findings across two independent cohorts of healthy adults ., Next , we used VB-HMM and insights from our analyses of the adult brain to characterize the maturation of dynamic functional networks and connectivity associated with the SN , DMN and CEN between childhood and adulthood ., Flexible and dynamic cross-network functional interactions are essential for mature brain function 5 , 34 , yet little is known about the nature of dynamic organization and time-varying connectivity in children relative to adults ., Studies using static connectivity analyses suggest that functional brain networks undergo significant reconfiguration from childhood to adulthood , with analysis of time-averaged whole-brain connectivity patterns suggesting prominent increases as well as decreases in connectivity between childhood and adulthood ., In a previous study we showed that time-averaged connectivity within key nodes of the SN and DMN , as well as their inter-network interactions , is weaker in children relative to adults 28 ., Recent reports suggest that time-varying connectivity between distributed brain areas changes significantly with age , with greater temporal variability of connection strengths in children compared to adults 34 ., Based on these observations , we hypothesized that compared to adults , children would show immature and less flexible patterns of dynamic connectivity between the SN , CEN and DMN ., Crucially , VB-HMM allowed us to , for the first time , probe developmental changes in dynamic network properties including the occurrence rates and mean lifetimes of distinct brain states , such as those in which the SN , CEN and DMN are fully segregated from each other with decreased switching probabilities ., This study was approved by the Stanford University Institutional Review Board ., Written informed consent was obtained from all the subjects ., We first describe a novel VB-HMM framework we developed for characterizing dynamic brain networks in human fMRI data ., In the following sections , we represent matrices using uppercase letters while scalars and vectors are represented using lowercase letters ., Let $Y = \{\{y_t^s\}_{t=1}^{T}\}_{s=1}^{S}$ be the observed voxel time series , where $T$ is the number of time samples and $S$ is the number of subjects ., $y_t^s$ is an $M$-dimensional time sample at time $t$ for subject $s$ , where $M$ is the number of brain regions or nodes of the dynamic functional network under investigation ., Let $Z = \{\{z_t^s\}_{t=1}^{T}\}_{s=1}^{S}$ be the underlying hidden/latent discrete states , where $z_t^s$ is the state label at time $t$ for subject $s$ ., Let $Z$ be a first order Markov chain , with stationary transition ( $A$ ) and initial ( $\pi$ ) distributions defined as $p(z_t^s = k \mid z_{t-1}^s = j) = A_{jk}$ ( 1 ) and $p(z_1^s = k) = \pi_k$ ( 2 ) , where $0 \le A_{jk} \le 1$ , $\sum_{k=1}^{K} A_{jk} = 1$ , and $\pi_k \ge 0$ , $\sum_{k=1}^{K} \pi_k = 1$ ., We assume the probability of the observation $y_t^s$ given its state $z_t^s = k$ to be a multivariate normal distribution with mean $\mu_k$ and covariance $\Sigma_k$ , $p(y_t^s \mid z_t^s = k) = \mathcal{N}(\mu_k , \Sigma_k)$ ( 3 ) ., Here we assume that the number of possible states $K$ is not known a priori ., Each state $k$ has an $M$-dimensional mean $\mu_k$ and an $M \times M$ covariance $\Sigma_k$ ., Let $\Phi = \{\pi , A , \Theta\}$ ( where $\Theta = \{\mu_k , \Sigma_k\}_{k=1}^{K}$ ) be the unknown parameters of the HMM model ., Using the factorization property 35 of the Bayesian network shown in Fig 1A , the joint probability distribution of the observations , hidden states , and parameters can be written as $p(Y , Z , \Phi) = \prod_{s=1}^{S} \big[ p(z_1^s \mid \pi) \prod_{t=2}^{T_s} p(z_t^s \mid z_{t-1}^s , A) \prod_{t=1}^{T_s} p(y_t^s \mid z_t^s , \Theta) \big] \, P(\Phi)$ ( 4 ) ., 
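The generative model of Eqs ( 1 ) – ( 4 ) can be expressed directly as a sampler; a minimal sketch with arbitrary illustrative dimensions and parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hmm(T, pi, A, mus, Sigmas):
    """Draw one subject's states z and observations y from Eqs (1)-(3)."""
    K = len(pi)
    z = np.empty(T, dtype=int)
    y = np.empty((T, mus.shape[1]))
    z[0] = rng.choice(K, p=pi)                                   # Eq (2)
    y[0] = rng.multivariate_normal(mus[z[0]], Sigmas[z[0]])      # Eq (3)
    for t in range(1, T):
        z[t] = rng.choice(K, p=A[z[t - 1]])                      # Eq (1)
        y[t] = rng.multivariate_normal(mus[z[t]], Sigmas[z[t]])
    return z, y

# Two states, three nodes (illustrative values only). In the rs-fMRI setting
# the means are zero, so the states differ only in their covariance.
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
mus = np.zeros((2, 3))
Sigmas = np.stack([np.eye(3), 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)])
z, y = sample_hmm(200, pi, A, mus, Sigmas)
```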
In maximum likelihood methods , the parameters $\Phi$ of the model are assumed to be unknown deterministic quantities , whereas in the Bayesian approach they are treated as random variables with prior probability distributions ., Here we assume that conjugate priors 35 for $\Phi$ and $Z$ are defined as in 31 , with the goal of estimating the joint posterior distribution $p(Z , \Phi \mid Y)$ of the hidden states and parameters ., Estimating this posterior distribution is analytically intractable , but inference methods , such as sampling or variational methods , can instead be used 31 , 35 ., Here , to estimate $p(Z , \Phi \mid Y)$ , we use a variational Bayesian ( VB ) method 31 , which not only provides an elegant analytical approximation to the required posterior distribution but is also computationally faster than sampling approaches ., Let $q(Z , \Phi \mid Y)$ be any arbitrary probability distribution and $p(Z , \Phi \mid Y)$ be the true posterior probability distribution ., Then the log of the marginal distribution of observations $Y$ can be written as $\log P(Y) = F(q) + \mathrm{KL}(q \| p)$ ( 5 ) , where $F(q)$ is known as the negative free energy and $\mathrm{KL}(q \| p)$ is the Kullback-Leibler ( KL ) divergence between the approximate and true posterior ., These quantities are given by $F(q) = \int dZ \, d\Phi \; q(Z , \Phi \mid Y) \log \frac{p(Y , Z , \Phi)}{q(Z , \Phi \mid Y)}$ ( 6 ) and $\mathrm{KL}(q \| p) = -\int dZ \, d\Phi \; q(Z , \Phi \mid Y) \log \frac{p(Z , \Phi \mid Y)}{q(Z , \Phi \mid Y)}$ ( 7 ) ., Since $\mathrm{KL}(q \| p)$ is nonnegative , $F(q)$ serves as a strict lower bound on $\log P(Y)$ ., $F(q)$ and $\log P(Y)$ are equal if and only if the approximate posterior $q(Z , \Phi \mid Y)$ is equal to the true posterior $p(Z , \Phi \mid Y)$ , for which $\mathrm{KL}(q \| p) = 0$ ., The goal of the VB approximation is to find the approximate posterior for which the lower bound $F(q)$ is maximized ., We make a mean field approximation on the approximate posterior 31 wherein it factorizes as $q(Z , \Phi \mid Y) = q(Z , A , \Theta , \pi \mid Y) = q(Z \mid Y) \, q(\pi \mid Y) \, q(A \mid Y) \, q(\Theta \mid Y)$ ( 8 ) ., The functional forms of these factors are defined by the priors on the parameters and the likelihood of the data ., We assume conjugate forms for the priors , which results in elegant analytical approximations to the required posterior distributions of Eq ( 8 ) ., Accordingly , the conjugate prior for $\pi$ and the rows of $A$ is the Dirichlet ( Dir ) distribution , while the prior over the parameters of the Gaussian distribution $\Theta$ is the Normal-Wishart ( NW ) distribution ., We further assume that the prior distribution over $\Phi$ factorizes as $P(\Phi) = p(\pi) \, p(A) \, p(\Theta)$ ( 9 ) ., The forms of the Dirichlet and Normal-Wishart distributions are defined in 31 ., We provide the values of the hyper-parameters of these distributions in the Appendix ., Since we define conjugate priors on the model parameters , $q(\pi \mid Y)$ and $q(A \mid Y)$ follow Dirichlet distributions and $q(\Theta \mid Y)$ follows the Normal-Wishart distribution 31 ., The update equations for the posterior parameters are provided in the Supplementary Material ., The posterior distribution of the hidden states can be estimated using an efficient forward-backward method similar to the Baum-Welch algorithm for ML-HMM 33 , 35 ., Furthermore , our VB-HMM estimates the parameters of the Normal-Wishart distribution for each state ., VB-HMM therefore discovers states for which the parameters of the Normal-Wishart distributions are distinct for each state ., A new state will be discovered if either the mean or the covariance , or both , are different in that state with respect to other states ., 
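The forward-backward estimation of the state posteriors mentioned above can be sketched as follows (a scaled implementation for numerical stability; in the VB setting the per-state log-likelihoods would be expected log-likelihoods under q(Θ)):

```python
import numpy as np

def forward_backward(log_lik, pi, A):
    """Posterior state probabilities gamma[t, k] = p(z_t = k | y_1..y_T).

    log_lik : (T, K) array of (expected) log p(y_t | z_t = k)
    pi, A   : initial distribution and transition matrix (rows sum to 1)
    """
    T, K = log_lik.shape
    B = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))  # stabilised
    alpha = np.empty((T, K)); beta = np.empty((T, K)); c = np.empty(T)
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                                     # forward pass
        alpha[t] = B[t] * (alpha[t - 1] @ A)
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):                            # backward pass
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```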
In task-based fMRI studies it is important to discover states with both mean and covariance differences ., However , in resting-state fMRI studies , as in the current study , differences in absolute signal levels are not relevant and states are based solely on changes in covariance over time ., This can be accomplished elegantly in our Bayesian framework using the hyperparameter $\lambda_k$ in the joint Normal-Wishart distribution ., A non-informative prior value ( say , $\lambda_k = 0.001$ ) allows the data to determine the joint posterior distributions for the mean and covariance ., However , setting it to a very high value ( $\lambda_k = 1000$ ) biases the posterior to the prior mean , which is 0 in our case ( equation S . 10 ) ., This ensures that our states are discovered only by the changes in covariance/inverse covariance in each state ., Similar to the expectation maximization algorithm for ML-HMM , the posterior distributions for the latent and model parameters are iteratively updated in VB-HMM: we iterate steps ( b ) and ( c ) until the fractional change in the lower bound $F(q)$ between two consecutive iterations falls below a set threshold value of $\mathrm{tol} = 10^{-3}$ ., We initialize the states using the K-means algorithm with the number of clusters/states $K$ set to a high value ( K = 25 ) ., The sparsity property of VB-HMM prunes away unwanted clusters/states in the model ., Like ML-HMM , VB-HMM provides suboptimal estimates of the posterior distributions , and these estimates are sensitive to the initial estimates of states from the K-means initialization ., To account for this , we repeat VB-HMM with 100 different random initializations and choose the solution for which the lower bound $F(q)$ is maximum ., 
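The K-means initialization step can be sketched as follows; the time series, cluster count and minimum-sample threshold below are illustrative placeholders (the full fitting procedure would wrap this in the 100-restart loop and the VB updates described above):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
Y = rng.standard_normal((232, 6))   # placeholder (T x M) time series

K = 25                              # deliberately large; VB-HMM prunes later
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(Y)

# Initial per-state covariance estimates; sparsely populated clusters are the
# ones the sparsity property of VB-HMM tends to prune away.
init_covs = {k: np.cov(Y[labels == k].T)
             for k in range(K) if np.sum(labels == k) > 6}
print(f"{len(init_covs)} of {K} initial states have enough samples")
```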
We validated VB-HMM using three different simulation models; the details of each are provided in the Supplementary Materials ., Briefly , in Simulation-1 , we created datasets with two nodes and two hidden states ., The hidden states were constructed using a typical block design with two conditions ( or states ) : “OFF” and “ON” , as shown in S2A Fig ., We simulated observations with two nodes where the nodes are negatively correlated in the “OFF” state and uncorrelated in the “ON” state ., In Simulation-2 , we simulated data with six nodes and two hidden states using the HMM generative model given by Eqs 1–3 ., In this case , the two hidden states were constructed using a specified state transition matrix $A$ and six nodes/ROIs with observations drawn from a zero-mean multivariate Gaussian distribution with state-specific covariance matrices ( S3A Fig ) ., Simulation-3 also consisted of six nodes and two hidden states ., Here , however , the first three nodes/ROIs were correlated in the first half ( 116 samples ) of the experiment ( state 1 ) while the other three ROIs were correlated in the second half ( 116 samples ) of the experiment ( state 2 ) ( S4A Fig ) ., Five datasets were simulated ( akin to a group size of five subjects in fMRI studies ) for each simulation type ., 
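For example, Simulation-3 can be generated in a few lines; the within-block correlation value rho is an illustrative choice, as the study's exact covariance values are given only in the Supplementary Materials:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_3(n_samples=232, n_nodes=6, rho=0.7):
    """Simulation-3: nodes 0-2 correlated in the first half (state 1),
    nodes 3-5 correlated in the second half (state 2)."""
    half = n_samples // 2                     # 116 samples per state
    S1, S2 = np.eye(n_nodes), np.eye(n_nodes)
    S1[:3, :3] = rho
    S2[3:, 3:] = rho
    np.fill_diagonal(S1, 1.0)
    np.fill_diagonal(S2, 1.0)
    y1 = rng.multivariate_normal(np.zeros(n_nodes), S1, size=half)
    y2 = rng.multivariate_normal(np.zeros(n_nodes), S2, size=half)
    return np.vstack([y1, y2])

datasets = [simulate_3() for _ in range(5)]   # five "subjects"
```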
We first validated VB-HMM using computer-simulated datasets generated from the three simulation models. Here we briefly summarize the results from these simulations; details are in the Supplementary Materials. For all three simulations, we applied VB-HMM with the number of hidden states (K) initialized to 25 and used VB-HMM to automatically determine the optimal number of states from the data. S2 Fig shows the actual states, the estimated posterior probabilities and the Viterbi-decoded states for Simulation-1. Among the 25 states, the occupancy rates of 18 states are zero, indicating that VB-HMM penalizes redundant states. Further analysis shows that, among the seven states with non-zero occupancy rates, four together constitute 98% of the total occupancy rate, and these states match the underlying true states in terms of both their estimated Pearson correlation matrices and their temporal occurrence relative to the respective true states. Similarly, 21 of the 25 states in Simulation-2 had zero occupancy rates (S3 Fig). The two most dominant states comprise 98% of the total occupancy rate and are well matched with the temporal occurrence of the underlying actual states. Lastly, Simulation-3 yielded 21 of 25 states with an occupancy rate of zero (S4 Fig). Of the four states with non-zero occupancy rates, the top two account for 99.2% of the total occupancy rate and match the true states used to generate the data. These simulations demonstrate that VB-HMM can accurately discover the optimal number of states and the underlying dynamic connectivity across different models of simulated data. We then applied VB-HMM to rs-fMRI data to uncover dynamic functional interactions between the SN, CEN and DMN in two cohorts of HCP data. Our first goal was to identify dynamic brain states and their associated functional networks. We computed the occupancy rates and mean lifetimes of each state as well as the switching probabilities between states. A particular theoretical focus was on the occurrence of brain states in which the three networks were disconnected from each other. We conducted separate analyses on Cohorts 1 and 2 and investigated the robustness and consistency of our key findings across the two cohorts. To characterize the connectivity patterns associated with each functional state, we applied a community detection algorithm to the estimated partial correlations in each state and examined the functional connectivity between ROIs. Below we describe the salient features of the dynamic functional network structure in each cohort. Given our focus on the temporal properties of the state in which the SN, CEN and DMN were disconnected from each other, we combined states with a similar community structure into distinct DFNs (see S1 Text). We then examined the occupancy rates, mean lifetimes and switching probabilities of these DFNs; the first two quantities can be read directly off the decoded state sequence, as sketched below.
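A minimal sketch of these two summary statistics, assuming a decoded (e.g., Viterbi) state sequence is available per subject; the TR argument is an assumed parameter used only to convert lifetimes from samples to seconds:

```python
import numpy as np

def occupancy_and_lifetimes(path, K, TR=1.0):
    """Occupancy rate (fraction of time points in each state) and mean
    lifetime (average run length of consecutive visits, in seconds)."""
    path = np.asarray(path)
    occupancy = np.bincount(path, minlength=K) / path.size
    # Split the sequence into maximal runs of a single state.
    runs = np.split(path, np.flatnonzero(np.diff(path)) + 1)
    lifetimes = np.full(K, np.nan)              # NaN for states never visited
    for k in range(K):
        lengths = [r.size for r in runs if r[0] == k]
        if lengths:
            lifetimes[k] = np.mean(lengths) * TR
    return occupancy, lifetimes
```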
Based on our primary goal of characterizing the network structure associated with the segregated SN, CEN and DMN, encapsulated by DFN-1 (Fig 3B), and the common patterns of network structure involving DFN-1 and DFN-2 in both cohorts (see previous sections), we next examined state transitions between these networks. In each cohort, network structures corresponding to all other functional states were combined into a mixed DFN-M. As in previous sections, these analyses were conducted separately in the two cohorts with the aim of elucidating replicable findings. We next used VB-HMM to characterize the maturation of dynamic functional interactions between the SN, CEN and DMN in a Stanford cohort of IQ- and gender-matched adults and children. We used the same analytic procedures described above on data from adults and children and then compared dynamic network properties between the two groups. To investigate whether DFN occupancy rates and mean lifetimes differ between children and adults, we focused on DFN-1 and DFN-2, the two dominant DFNs with identical community structures in adults and children, which together account for about 77% of the total occupancy rate in both groups. Network configurations corresponding to all other functional states were combined into DFN-M. The mean lifetimes, but not the occupancy rates, of all three DFNs were significantly greater in children than in adults (p < 0.05, FDR corrected) (Fig 6A and 6B). These findings indicate that children tend to persist longer in the same DFN than adults, as illustrated by the time evolution of the three DFNs (Fig 5A and 5F). Below we further investigate this pattern of developmental differences in terms of transition probabilities between DFNs. To examine whether children tend to stay in one DFN configuration longer than adults, we computed transition probabilities in children and adults and compared them between the groups. The probability of within-DFN transitions did not differ significantly between the two groups (p > 0.05, FDR corrected). However, transition probabilities to the fully disconnected SN-CEN-DMN configuration (DFN-1) from both connected network configurations (DFN-2 and DFN-M) were significantly higher in adults than in children (p < 0.05, FDR corrected) (Fig 6D). In contrast, children showed a higher probability of switching between the two connected network configurations (p < 0.05, FDR corrected). These findings demonstrate that, compared to children, adults switch back more frequently to DFN-1, in which the SN, DMN and CEN are completely segregated from each other.
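The group comparison above rests on per-subject transition matrices estimated from the DFN-labelled sequences. A sketch of that estimation step, assuming integer DFN labels per time point; the statistical comparison itself, with FDR correction, is then run across subjects on the matrix entries of interest:

```python
import numpy as np

def transition_matrix(path, K):
    """Row-normalized empirical transition matrix: entry [i, j] estimates
    P(next state = j | current state = i) for one subject's DFN sequence."""
    counts = np.zeros((K, K))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```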
Finally, to investigate how dynamic functional connectivity matures with age, we compared the strength of DFN connectivity assessed using within- and cross-network links as described above. In this analysis, we further excluded participants with DFN connectivity beyond 3 standard deviations from their group mean or for whom both DFNs were not present. After exclusion, our sample consisted of 22 adults and 16 children. We found a significant three-way interaction between DFN (DFN-1 vs. DFN-2), link type (within- vs. cross-network) and participant group (children vs. adults) (F(1,36) = 10.99, p = 0.002) (Fig 6C), such that the DFN-1 and DFN-2 configurations differed in connection strength by link type in adults (F(1,21) = 119.5, p < 0.001) but not in children (F(1,15) = 0.491, p = 0.494). These results demonstrate that DFN connectivity is weaker and less differentiated in children relative to adults. The main scientific aims of our study were to (1) investigate the temporal properties of dynamic functional connectivity between the SN, CEN and DMN, three core neurocognitive networks implicated in a wide range of goal-directed behaviors 12, 15, 16, 26, 48, 49, and (2) investigate how the temporal properties of dynamic functional connectivity between these core networks change from childhood to adulthood. To accomplish this, we first developed a novel Bayesian HMM (VB-HMM) for quantifying dynamic changes in functional connectivity. A variational Bayes approach to estimating the latent states and unknown HMM parameters allowed us to overcome weaknesses of conventional methods and to investigate dynamic changes in intrinsic functional connectivity between three networks that had previously been investigated only with static network analysis. VB-HMM allowed us to quantify the temporal evolution of distinct brain states and to probe the dynamic functional organization of the SN, CEN and DMN in an analytically rigorous manner. Contrary to previous observations based on static, time-averaged connectivity analysis 20, 50, we found that temporal coupling between the SN, CEN and DMN varies considerably over time and that these networks exist in a completely segregated state only intermittently, with relatively short mean lifetimes. VB-HMM also revealed immature and inflexible dynamic interactions between the SN, CEN and DMN in children relative to adults, characterized by higher mean lifetimes in individual states and reduced transition probabilities between states. VB-HMM is a novel machine learning approach for identifying dynamic changes in functional brain connectivity. VB-HMM has several advantages over existing methods 6, 9, 29, 51, 52: (i) automated estimation of latent states and their temporal evolution; (ii) estimation of posterior probabilities of latent states and model parameters; (iii) model selection based on a trade-off between model complexity and fit to the data, thereby reducing overfitting; (iv) sparsity constraints that prune weak states without requiring the number of states to be specified a priori; and (v) a generative model with the potential to provide a more mechanistic understanding of human brain dynamics. Our approach also overcomes weaknesses of existing HMM methods that rely on maximum likelihood estimation and require a priori specification of the number of hidden states. Furthermore, in contrast to conventional HMM methods, VB-HMM can discover dynamic changes in states based on the signal mean, the covariance, or both. This flexibility can be useful for uncovering latent brain dynamics during cognitive task processing, where states typically differ in both signal mean and covariance, as well as in rs-fMRI, where states are better characterized by changes in covariance than in mean signal levels. In applications to rs-fMRI, as in the present study, this is accomplished in VB-HMM by setting the prior hyperparameter λk = 1000 for each state k. This choice forces the posterior mean of each state (μk) close to the prior mean (which is zero) (Eq S.10) and ensures that states are characterized by differences in the covariance matrices (Σk) but not the means (μk).
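To make the effect of λk concrete, the posterior mean in a conjugate Normal-Wishart model has the standard precision-weighted form below, where Nk is the effective number of samples assigned to state k and x̄k their sample mean. This is the textbook form of the update; the notation is assumed to match the paper's Eq S.10 rather than quoted from it:

\[
\mu_k^{\mathrm{post}} \;=\; \frac{\lambda_k \,\mu_0 \;+\; N_k\,\bar{x}_k}{\lambda_k + N_k}
\]

With λk = 1000 and at most a few hundred time points per state, λk ≫ Nk, so the posterior mean collapses to the prior mean μ0 = 0 and the covariances alone drive state discovery.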
Another advantage of our Bayesian approach is that the covariance (or inverse covariance) estimates are regularized, with the extent of regularization determined by the data (Eqs S.11–S.13). This regularization ensures that the covariance matrices are full rank and therefore invertible, which is required to estimate partial correlations. Such regularized estimation is not possible with maximum likelihood approaches. Our simulations using three different simulation models demonstrate that VB-HMM can accurately recover the number of states, their temporal evolution, the transition probabilities between states and the dynamic connectivity patterns associated with each state (see S1 Text for details). We next used VB-HMM to characterize the temporal evolution of dynamic brain states in two independent cohorts of adult participants from the HCP. VB-HMM identified multiple stable states in both cohorts. The observation that the number of states is strictly greater than one is consistent with previous results demonstrating that the rs-fMRI time series is not stationary 29, 53. Importantly, VB-HMM identified similar patterns of stable brain states in both cohorts and provided reliable and replicable estimates of the occupancy rates, mean lifetimes and state transition probabilities associated with each brain state. Although VB-HMM identified 16–19 states in both adult cohorts, only three states had occupancy rates greater than 10% (Fig 2C and 2G), and these states showed the highest mean lifetimes. However, even these dominant states had short mean lifetimes, ranging from 7–10 s, demonstrating that brain states persist for durations far shorter than the length of a typical rs-fMRI scan session. These features were observed in both adult cohorts, demonstrating the robustness of our findings. Furthermore, analysis of the state transition probabilities indicated that each state had the highest probability of transitioning to itself rather than to other states (Fig 2D and 2H), suggesting that individual states are temporally stable. Taken together, these results demonstrate the existence of dynamic, yet stable, brain states in rs-fMRI and identify distinct connectivity patterns associated with each state. We suggest that this balance of temporal stability and dynamic connectivity is a fundamental principle of brain organization. By construction, VB-HMM states are characterized by distinct patterns of inter-node connectivity (Figs 2 and 3). To test specific hypotheses about the dynamic interactions between the SN, CEN and DMN and to interpret the neurobiological relevance of the connectivity profiles, we identified dynamic functional connectivity profiles associated with the three previously known static networks. To accomplish this, we applied modularity-based community detection algorithms 36 to the functional connectivity matrix estimated by VB-HMM for each state (Fig 1B). This analysis revealed that, in some cases, states with non-identical connectivity matrices had similar overall community structures (S5, S6, S8 and S9 Figs). For example, multiple states (S5 and S6 Figs) showed a pattern in which the SN, CEN and DMN formed separate, segregated communities, reminiscent of the static functional networks previously identified by independent component analysis 50.
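A sketch of this step, combining the partial-correlation computation (the standard formula applied to a regularized, hence invertible, precision matrix) with community detection. The paper cites a specific modularity-based algorithm 36; greedy modularity maximization from NetworkX is used here only as a readily available stand-in, and the edge threshold is an illustrative choice:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def state_communities(precision, threshold=0.1):
    """Partial correlations from one state's precision matrix, followed by
    modularity-based community detection on the thresholded graph."""
    d = np.sqrt(np.diag(precision))
    partial = -precision / np.outer(d, d)   # rho_ij = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(partial, 1.0)
    adj = np.where(np.abs(partial) > threshold, np.abs(partial), 0.0)
    np.fill_diagonal(adj, 0.0)              # no self-loops in the graph
    G = nx.from_numpy_array(adj)
    return [sorted(c) for c in greedy_modularity_communities(G, weight='weight')]
```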
We next combined states with identical community structures into dynamic functional networks (DFNs) and examined the temporal properties of segregated and non-segregated DFNs, as well as the dynamic interactions between key nodes of the SN, CEN and DMN. The SN, CEN and DMN formed separate communities and were segregated from each other (DFN-1 in Fig 3) approximately 30% of the time (31% and 27% in Cohorts 1 and 2, respectively). In this case, all three networks maintained their within-network connectivity structure: the AI and ACC nodes of the SN were connected with each other, the PMC and VMPFC nodes of the DMN were connected with each other, and the DLPFC and PPC nodes of the CEN were connected with each other. Crucially, VB-HMM also revealed that this DFN had a mean lifetime of about 7–10 s (8.3 s and 8.8 s in Cohorts 1 and 2, respectively) (Fig 3C and 3G). These findings suggest that although this particular DFN configuration is a prominent feature of SN, CEN and DMN organization, it has a relatively short lifetime. The second dominant DFN identified by VB-HMM had a community structure in which the CEN and DMN were interconnected in one community, while the SN nodes remained segregated from the CEN and DMN, forming an independent network (DFN-2 in Fig 3). This DFN configuration had occurrence rates of 36% and 18% in Cohorts 1 and 2, respectively (Fig 3). The remaining states had distinct DFN configurations (S5 and S6 Figs) with varying levels of cross-network interaction, but their occurrence rates were lower and not consistent across the two cohorts. Previous work from our lab 12, 39 and recent work by other labs 54, 55 has indicated that the SN plays a critical role in switching between the DMN and the CEN. Our results suggest that this switching is transient (i.e., it does not persist for long) and may occur relatively infrequently. Finally, analysis of the switching probability between DFNs revealed that each DFN had a high probability of making self-transitions (0.91 in Cohort 1 and 0.93 in Cohort 2) (Fig 3D and 3H).
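These self-transition probabilities are consistent with the lifetimes reported above: in a first-order Markov chain, dwell times are geometrically distributed, so the expected lifetime of state k is TR/(1 − A_kk). As a back-of-the-envelope check, assuming the HCP repetition time of 0.72 s (an assumption here, not a value quoted in this passage):

\[
\mathbb{E}[\text{lifetime}_k] \;=\; \frac{\mathrm{TR}}{1 - A_{kk}} \;\approx\; \frac{0.72}{1 - 0.91} \approx 8.0\ \text{s}
\quad\text{and}\quad
\frac{0.72}{1 - 0.93} \approx 10.3\ \text{s},
\]

in the same range as the 8.3 s and 8.8 s mean lifetimes of DFN-1 in the two cohorts.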
Thus, as with the individual brain states, the two dominant DFN configurations (DFN-1 and DFN-2 in Fig 3) were stable over time but persisted only for short intervals. Taken together, these findings identify key features of the dynamic functional interactions associated with the SN, CEN and DMN and confirm that the static segregated networks previously identified using independent component analysis occur only about 30% of the time. The organization of brain networks in adults is shaped by years of development, learning and brain plasticity 5. Previous stud | Introduction, Materials and Methods, Results, Discussion | Little is currently known about dynamic brain networks involved in high-level cognition and their ontological basis. Here we develop a novel Variational Bayesian Hidden Markov Model (VB-HMM) to investigate the dynamic temporal properties of interactions between the salience (SN), default mode (DMN) and central executive (CEN) networks, three brain systems that play a critical role in human cognition. In contrast to conventional models, VB-HMM revealed multiple short-lived states characterized by rapid switching and transient connectivity between the SN, CEN and DMN. Furthermore, the three "static" networks occurred in a segregated state only intermittently. Findings were replicated in two adult cohorts from the Human Connectome Project. VB-HMM further revealed immature dynamic interactions between the SN, CEN and DMN in children, characterized by higher mean lifetimes in individual states, reduced switching probability between states and less differentiated connectivity across states. Our computational techniques provide new insights into human brain network dynamics and its maturation with development. | Characterizing the temporal dynamics of functional interactions between distributed brain regions is of fundamental importance for understanding human brain organization and its development. Progress in the field has been hampered both by a lack of strong computational techniques for investigating brain dynamics and by an inadequate focus on the core brain systems involved in higher-order cognition. Here we address these gaps by developing a novel variational Bayesian Hidden Markov Model (VB-HMM) that uncovers non-stationary dynamic functional networks in human fMRI data. In two cohorts of adults, VB-HMM revealed multiple short-lived states characterized by rapid switching and transient connectivity between the salience (SN), default mode (DMN) and central executive (CEN) networks, three brain systems critical for higher-order cognition. In children, relative to adults, VB-HMM revealed immature dynamic interactions between the SN, CEN and DMN, characterized by higher mean lifetimes in individual states, reduced switching probability between states and less differentiated connectivity across states. Our findings suggest that the flexibility of switching between distinct brain states is weaker in childhood, and they provide a novel framework for modeling immature brain network organization in children. More generally, the approach used here may prove useful for investigating dynamic brain organization in neurodevelopmental and psychiatric disorders.
| children, medicine and health sciences, diagnostic radiology, functional magnetic resonance imaging, markov models, neural networks, applied mathematics, random variables, neuroscience, covariance, magnetic resonance imaging, algorithms, simulation and modeling, age groups, adults, probability distribution, mathematics, brain mapping, neuroimaging, families, research and analysis methods, computer and information sciences, imaging techniques, hidden markov models, probability theory, people and places, radiology and imaging, diagnostic medicine, population groupings, biology and life sciences, physical sciences | null |