Dynamical anchoring of distant arrhythmia sources by fibrotic regions via restructuring of the activation pattern (PLOS Computational Biology, 2018; journal.pcbi.1006637)

Many clinically relevant cardiac arrhythmias are conjectured to be organized by rotors. A rotor is an extension of the concept of a reentrant source of excitation into two or three dimensions, with an area of functional block in its center referred to as the core. Rapid and complex reentry arrhythmias such as atrial fibrillation (AF) and ventricular fibrillation (VF) are thought to be driven by single or multiple rotors. A clinical study by Narayan et al. [1] indicated that localized rotors were present in 68% of cases of sustained AF. Rotors (phase singularities) were also found in VF induced by burst pacing in patients undergoing cardiac surgery [2, 3] and in VF induced in patients undergoing ablation procedures for ventricular arrhythmias [4]. Intramural rotors were also reported in the early phase of VF in human Langendorff-perfused hearts [5, 6]. It was also demonstrated that in most cases rotors originate and stabilize in specific locations [4-8]. A main mechanism of rotor stabilization at a particular site in cardiac tissue was proposed in the seminal paper from the group of Jalife [9]: it was observed that rotors can anchor and exhibit a stable rotation around small arteries or bands of connective tissue. Later, it was experimentally demonstrated that rotors in atrial fibrillation in a sheep heart can anchor in regions of large spatial gradients in wall thickness [10]. A recent study of AF in the right atrium of an explanted human heart [11] revealed that rotors were anchored by 3D micro-anatomic tracks formed by atrial pectinate muscles and characterized by increased interstitial fibrosis. The relation between fibrosis and anchoring in atrial fibrillation was also demonstrated in several other experimental and numerical studies [8, 11-14]. Initiation and anchoring of rotors in regions with
increased intramural fibrosis and fibrotic scars was also observed in the ventricles [5, 7, 15]. One of the reasons for rotors to be present at fibrotic scar locations is that rotors can be initiated at the scars (see e.g. [7, 15]) and therefore easily anchor at the surrounding scar tissue. However, rotors can also be generated by other mechanisms, such as triggered activity [16], heterogeneity in the refractory period [16, 17], local neurotransmitter release [18, 19], etc. What will be the effect of the presence of the scar on rotors in that situation: do fibrotic areas (scars) actively affect rotor dynamics even if the rotors are initially located at some distance from them? In view of the multiple observations on the correlation of rotor anchoring sites with fibrotic tissue, this question translates into the following: is this anchoring just a passive probabilistic process, or do fibrotic areas (scars) actively affect the rotor dynamics leading to this anchoring? Answering these questions in experimental and clinical research is challenging, as it requires systematic, reproducible studies of rotors in a controlled environment with various types of anchoring sites. Therefore alternative methods, such as realistic computer modeling of the anchoring phenomenon, which has been extremely helpful in prior studies, are of great interest. The aim of this study is therefore to investigate the processes leading to anchoring of rotors to fibrotic areas. Our hypothesis is that a fibrotic scar actively affects the rotor dynamics leading to its anchoring. To show that, we first performed a generic in-silico study of rotor dynamics in conditions where the rotor was initiated at different distances from fibrotic scars with different properties. We found that in most cases, scars actively affect the rotor dynamics via a dynamical reorganization of the excitation pattern leading to the anchoring of rotors. This turned out to be a
robust process, working even for rotors located at distances of more than 10 cm from the scar region. We then confirmed this phenomenon in a patient-specific model of the left ventricle of a patient with remote myocardial infarction (MI) and compared the properties of this process with clinical ECG recordings obtained during induction of a ventricular arrhythmia. Our anatomical model is based on an individual heart of a post-MI patient reconstructed from late gadolinium enhanced (LGE) magnetic resonance imaging (MRI), as described in detail previously [20]. Briefly, a 1.5 T Gyroscan ACS-NT/Intera MR system (Philips Medical Systems, Best, the Netherlands) was used with a standardized cardiac MR imaging protocol. The contrast agent, gadolinium (Magnevist, Schering, Berlin, Germany; 0.15 mmol/kg), was injected 15 min before acquisition of the LGE sequences. Images were acquired at 24 levels in short-axis view, 600-700 ms after the R-wave on the ECG, within 1 or 2 breath holds. The in-plane image resolution is 1 mm and the through-plane image resolution is 5 mm. Segmentation of the contours of the endocardium and the epicardium was performed semi-automatically on the short-axis views using the MASS software (Research version 2014, Leiden University Medical Centre, Leiden, the Netherlands). The myocardial scar was identified based on signal intensity (SI) values using a validated algorithm described by Roes et al. [21]. In accordance with the algorithm, the core necrotic scar is defined as a region with SI > 41% of the maximal SI. Regions with lower SI values were considered border zone areas. In these regions, we assigned the fibrosis percentage as normalized values of the SI, as in Vigmond et al.
[22]. In the current paper, fibrosis was introduced by generating a random number between 0 and 1 for each grid point; if the random number was less than the normalized SI at the corresponding pixel, the grid point was considered a fibroblast. Currently there is no consensus on how the SI values should be used for clinical assessment of myocardial fibrosis, and various methods have been reported to produce significantly different results [23]. However, the method of Vigmond et al. properly describes the location of the necrotic scar region in our model, as for a fibrosis percentage of more than 41% we observe a complete block of propagation inside the scar. This means that all tissue with a fibrotic level higher than 41% behaves like necrotic scar. The approach and the 2D model were described in detail in previous work [24-26]. Briefly, for the ventricular cardiomyocyte we used the ten Tusscher and Panfilov (TP06) model [27, 28], and the cardiac tissue was modeled as a rectangular grid of 1024 x 512 nodes. Each node represented a cell that occupied an area of 250 x 250 um^2. The equation for the transmembrane voltage is

    C_m \frac{dV_{ik}}{dt} = \sum_{\alpha,\beta \in \{-1,+1\}} \eta_{ik}^{\alpha\beta} \, g_{\mathrm{gap}} \left( V_{i+\alpha,k+\beta} - V_{ik} \right) - I_{\mathrm{ion}}(V_{ik}, \ldots),   (1)

where V_{ik} is the transmembrane voltage at the (i, k) computational node, C_m is the membrane capacitance, g_{gap} is the conductance of the gap junctions connecting two neighboring myocytes, I_{ion} is the sum of all ionic currents, and \eta_{ik}^{\alpha\beta} is the connectivity tensor whose elements are either one or zero depending on whether neighboring cells are coupled or not. The conductance of the gap junctions g_{gap} was taken to be 103.6 nS, which results in a maximum planar wave propagation velocity of 72 cm/s in the absence of fibrotic tissue at a stimulation frequency of 1 Hz. g_{gap} was not modified in the fibrotic areas. A similar system of differential equations was used for the 3D computations, where instead of the 2D connectivity tensor \eta_{ik}^{\alpha\beta} we used a 3D weight tensor w_{ijk}^{\alpha\beta\gamma} whose elements were between 0 and 1, depending both on the coupling of neighboring cells and on anisotropy due to fiber orientation. Each node in the 3D model represented a cell of size 250 x 250 x 250 um^3. Twenty seconds of simulation in 3D took about 3 hours. Fibrosis was modeled by the introduction of electrically uncoupled unexcitable nodes [29]. The local percentage of fibrosis determined the probability for a node of the computational grid to become an unexcitable obstacle, meaning that for high percentages of fibrosis there is a high chance for a node to be unexcitable. As previous research has demonstrated that LGE-MRI enhancement correlates with regions of fibrosis identified by histological examination [30], we linearly interpolated the SI into the percentage of fibrosis for the 3D human models. In addition, the effect of ionic remodeling in fibrotic regions was taken into account for several results of the paper [31, 32]. To describe ionic remodeling we decreased the conductances of I_{Na}, I_{Kr}, and I_{Ks} depending on the local fibrosis level as:

    G_{Na} = \left( 1 - 1.55 \, \frac{f}{100\%} \right) G_{Na}^{0},   (2)
    G_{Kr} = \left( 1 - 1.75 \, \frac{f}{100\%} \right) G_{Kr}^{0},   (3)
    G_{Ks} = \left( 1 - 2 \, \frac{f}{100\%} \right) G_{Ks}^{0},   (4)

where G_X is the peak conductance of the I_X ionic current, G_X^0 is the peak conductance of the current in the absence of remodeling, and f is the local fibrosis level in percent. These formulas yield a reduction of 62% for I_{Na}, 70% for I_{Kr}, and 80% for I_{Ks} at a local fibrosis level f of 40%. These values are therefore in agreement with the values published in [33, 34]. The normal conduction velocity is 72 cm/s (at a cycle length of 1000 ms). However, as the compact scar is surrounded by fibrotic tissue, the velocity of propagation in that region gradually decreases with increasing fibrosis percentage; for example, at 30% fibrosis the velocity decreases to 48 cm/s (CL 1000 ms). We refer to Figure 1 in Ten Tusscher et al. [25] for the planar conduction velocity as a function of the percentage of fibrosis in 2D and 3D tissue. The geometry and extent of fibrosis in the human left ventricle were determined using the LGE-MRI data. The normalized signal intensity was used to determine the density of local fibrosis. The fiber orientation is presented in detail in the supplementary S1 Appendix. The model for cardiac tissue was solved by the forward Euler integration scheme with a time step of 0.02 ms. The numerical solver was implemented using the CUDA toolkit, performing the computations on graphics processing units. Simulations were performed on a GeForce GTX Titan Black graphics card using single-precision calculations. The eikonal equations for anisotropy generation were solved by Sethian's fast marching method [35]. The eikonal solver and the 3D model generation pipeline were implemented in the OCaml programming language. Rotors were initiated by an S1S2 protocol, as shown in the supplementary S1 Fig.
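As an illustration of the tissue model above, the following sketch combines the three ingredients of the Methods: random placement of uncoupled nodes from the local fibrosis percentage, the conductance scaling of Eqs (2)-(4), and one forward-Euler step of Eq (1). The grid size, the radial fibrosis map, and the use of periodic wrapping at the edges are illustrative simplifications; the paper's actual solver is a CUDA implementation of the TP06 ionic model with no-flux boundaries.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical 64x64 sheet; the paper uses 1024x512 nodes of 250 um each.
NX, NY = 64, 64
CM = 1.0        # membrane capacitance, illustrative value
G_GAP = 103.6   # gap-junction conductance (nS), as in the Methods

# Stand-in for the normalized LGE-MRI signal intensity: a radial gradient
# from 50% fibrosis at the scar center to 0% at its boundary.
x, y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
r = np.hypot(x - NX / 2, y - NY / 2)
fibrosis_pct = np.clip(50.0 * (1.0 - r / (NX / 2)), 0.0, 50.0)

# A node becomes an uncoupled, unexcitable "fibroblast" with probability
# equal to the local fibrosis percentage.
coupled = rng.random((NX, NY)) >= fibrosis_pct / 100.0

# Ionic remodeling, Eqs (2)-(4): conductance fractions vs. local fibrosis.
f = fibrosis_pct / 100.0
g_na = np.maximum(1.0 - 1.55 * f, 0.0)   # fraction of G_Na^0
g_kr = np.maximum(1.0 - 1.75 * f, 0.0)   # fraction of G_Kr^0
g_ks = np.maximum(1.0 - 2.00 * f, 0.0)   # fraction of G_Ks^0

def euler_step(v, i_ion, dt=0.02):
    """One forward-Euler step of Eq (1). i_ion is the total ionic current
    per node, supplied externally (the paper uses the TP06 model)."""
    dv = np.zeros_like(v)
    # Gap-junction current from the four nearest neighbours; a connection
    # exists only if both nodes are coupled myocytes. np.roll wraps at the
    # edges (periodic), a simplification of the paper's no-flux boundaries.
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(v, (dx, dy), axis=(0, 1))
        nb_ok = np.roll(coupled, (dx, dy), axis=(0, 1))
        dv += np.where(coupled & nb_ok, G_GAP * (nb - v), 0.0)
    return v + dt * (dv - i_ion) / CM
```

At f = 40% the scaling factors reduce to 0.38, 0.30 and 0.20, matching the 62%, 70% and 80% reductions quoted in the text.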
Similarly, in the whole-heart simulations, spiral waves (or scroll waves) were created by an S1S2 protocol. For the compact scar geometry used in our simulations the rotation of the anchored spiral wave was stationary: the period of rotation of the anchored rotor was always more than 280 ms, while the period of the free spiral wave was close to 220 ms. Therefore, we determined anchoring as follows: if the period of the excitation pattern was larger than 280 ms over a measuring time interval of 320 ms, we classified the excitation as anchored. When the type of anchoring pattern was important (single or multi-armed spiral wave), we determined it visually. If the voltage was below -20 mV at all points of the tissue, the pattern was classified as terminated. We applied the classification algorithm at t = 40 s in the simulation. In the whole heart, pseudo-ECGs were calculated by assuming an infinite volume conductor and computing the dipole source density of the membrane potential V_m in all voxel points of the ventricular myocardium, using the following equation [36]:

    \mathrm{ECG}(t) = \int \frac{\left( \vec{r}, \, D(\vec{r}) \, \vec{\nabla} V(t) \right)}{|\vec{r}|^3} \, d^3r,   (5)

where D is the diffusion tensor, V is the voltage, and \vec{r} is the vector from each point of the tissue to the recording electrode. The recording electrode was placed 10 cm from the center of the ventricles in the transverse plane. Twelve-lead ECGs of all induced ventricular tachycardias (VT) of patients with prior myocardial infarction who underwent radiofrequency catheter ablation (RFCA) for monomorphic VT at the LUMC were reviewed. All patients provided informed consent and were treated according to the clinical protocol. Programmed electrical stimulation (PES) is routinely performed before RFCA to determine inducibility of the clinical/presumed clinical VT. All patients underwent PES and ablation according to the standard clinical protocol; therefore no ethical approval was required. Ablation
typically targets the substrate of scar-related reentrant VT. After ablation, PES is repeated to test for re-inducibility and to evaluate the morphology and cycle length of remaining VTs. The significance of non-clinical, fast VTs is unclear, and these VTs are often not targeted by RFCA. PES consisted of three drive cycle lengths (600, 500 and 400 ms), one to three ventricular extrastimuli (>= 200 ms) and burst pacing (CL >= 200 ms) from at least two right ventricular (RV) sites and one LV site. A positive endpoint for stimulation is the induction of any sustained monomorphic VT lasting 30 s or requiring termination. ECG and intracardiac electrograms (EG) during PES were displayed and recorded simultaneously on a 48-channel acquisition system (Prucka CardioLab EP system, GE Healthcare, USA) for off-line analysis. Fibrotic scars not only anchor nearby rotors but can also dynamically anchor them from a large distance. In the first experiments we studied spiral wave dynamics with and without a fibrotic scar in a generic setting. The diameter of the fibrotic region was 6.4 cm, based on the similar size of the scars of patients with documented and induced VT (see the Methods section, Magnetic Resonance Imaging). The percentage of fibrosis changed linearly from 50% at the center of the scar to 0% at the scar boundary. We initiated a rotor at a distance of 15.5 cm from the scar (Fig 1, panel A), which had a period of 222 ms, and studied its dynamics. First, after several seconds the activation pattern became less regular and a few secondary wave breaks appeared at the fibrotic region (Fig 1, panel B). These irregularities started to propagate towards the tip of the initial rotor (Fig 1, panels C-D), creating a complex activation picture between the scar and the initial rotor. Next, one of the secondary sources reached the tip of the original rotor (Fig 1, panel E). Then this secondary source merged with the initial rotor (Fig 1, panel F), which resulted in a deceleration of the activation pattern and promoted a chain reaction of annihilation of all the secondary wavebreaks in the vicinity of the original rotor. At this moment, a secondary source located closer to the scar dominated the simulation (Fig 1, panel G). The whole process then started again (Fig 1, panels H-K), until finally only one source remained as the primary source anchored to the scar (Fig 1L), with a rotation period of 307 ms.
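The anchoring criterion from the Methods (period above 280 ms over a 320 ms window at t = 40 s, termination if the whole tissue is below -20 mV) can be sketched as a small classifier. The function name and input format below are illustrative, not the paper's code:

```python
import numpy as np

ANCHORED_MIN_PERIOD_MS = 280.0  # anchored rotation is always slower than this
V_ACTIVE_MV = -20.0             # everywhere below this => terminated

def classify_pattern(v_all_points_mv, activation_times_ms):
    """Classify the excitation pattern in the measuring window.

    v_all_points_mv: voltages at all tissue points during the window.
    activation_times_ms: upstroke times at a reference point in the window.
    """
    # Terminated: no point of the tissue rises above -20 mV.
    if np.max(v_all_points_mv) < V_ACTIVE_MV:
        return "terminated"
    # Anchored: every measured cycle length exceeds 280 ms.
    periods = np.diff(np.asarray(activation_times_ms, dtype=float))
    if periods.size and np.min(periods) > ANCHORED_MIN_PERIOD_MS:
        return "anchored"
    return "free rotor"
```

Whether an anchored pattern is a single- or multi-armed spiral was determined visually in the paper, so that distinction is not encoded here.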
For clarity, a movie of this process is provided as supplementary S1 Movie. Note that this process occurs only if a scar with a surrounding fibrotic zone is present. In the simulation entitled 'No scar' in Fig 1, we show a control experiment in which the same initial conditions were used in tissue without a scar. In the panel entitled 'Necrotic scar' in Fig 1, a simulation with only a compact region, without the surrounding fibrotic tissue, is shown. In both cases the rotor was stable and remained at its initial position during the whole simulation. The important difference from the process shown in Fig 1 (Fibrotic scar) is that in the 'No scar' and 'Necrotic scar' cases no new wavebreaks occur, and thus there is no complex dynamical process of rearrangement of the excitation patterns. We refer to this complex dynamical process leading to anchoring of a distant rotor as dynamical anchoring. Although this process contains a phase of complex behaviour, overall it is extremely robust and reproducible over a very wide range of conditions. In the second series of simulations, the initial rotor was placed at different distances from the scar border, ranging from 1.8 to 14.3 cm, to define the possible outcomes; see Fig 2.
Here, in addition to the single anchored rotor shown in Fig 1H, we could also obtain other final outcomes of dynamical anchoring: rotors rotating in the opposite direction (Fig 2A, top), double-armed anchored rotors with two wavefronts rotating around the fibrotic region (Fig 2A, middle), or annihilation of the rotors (Fig 2A, bottom, which shows no wave around the scar), which normally occurred as a result of annihilation of a figure-of-eight reentrant pattern. To summarize, the possible outcomes were: termination of activity; a rotor rotating either clockwise or counter-clockwise; or a two- or three-armed rotor rotating either clockwise or counter-clockwise. Fig 2, panel B presents the relative chance of each of these activation patterns occurring as a function of the distance between the rotor and the border of the scar. We see that for smaller initial distances the resulting activation pattern is always a single rotor rotating in the same direction. With increasing distance, other anchoring patterns become possible. If the distance was larger than about 9 cm, there was at least a 50% chance of obtaining either a multi-armed rotor or termination of activity. Also note that such dynamical anchoring occurred over huge distances: we studied rotors located up to 14 cm from the scar. Moreover, we observed that even for very large distances, such as 25 cm or more, dynamical anchoring (or termination of the activation pattern) was always possible, provided enough time was given. We measured the time required for the anchoring of rotors as a function of the distance from the scar. For each distance, we performed about 60 computations using different seed values of the random number generator, both with and without taking ionic remodeling into account. The results of these simulations are shown in Fig 3.
We see that the time needed for dynamical anchoring depends linearly on the distance between the border of the scar and the initial rotor. The blue and yellow lines correspond to the scar model with and without ionic remodeling, respectively (ionic remodeling was modeled by decreasing the conductances of I_Na, I_Kr, and I_Ks as explained in the Methods section). We interpret these results as follows: the anchoring time is mainly determined by the propagation of the chaotic regime towards the core of the original rotor, and this process has a clear linear dependency. For distant rotors, propagation of this chaotic regime mainly occurs outside the region of ionic remodeling, and thus both curves in Fig 3 have the same slope. However, in the presence of ionic remodeling, the APD in the scar region is prolonged. This creates a heterogeneity, and as a consequence the initial breaks in the scar region form about 3.5 s earlier in the scar model with remodeling than in the scar model without remodeling. To identify properties of the substrate necessary for dynamical anchoring, we varied the size and the level of fibrosis within the scar and studied whether dynamical anchoring occurred. Due to the stochastic nature of the fibrosis layout, we performed about 300 computations with different textures of the fibrosis for each given combination of scar size and fibrosis level. The results of this experiment are shown in Fig 4. Dynamical anchoring does not occur when the scar diameter is below 2.6 cm; see Fig 4.
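The linear dependence of anchoring time on distance, with a common slope and a constant ~3.5 s offset between the models with and without remodeling, can be illustrated with a quick least-squares fit. The numbers below are synthetic stand-ins, not the measured values of Fig 3:

```python
import numpy as np

# Hypothetical anchoring times (s) vs rotor-scar distance (cm), mimicking
# the shape of Fig 3: identical slope, remodeling shifts break formation
# about 3.5 s earlier.
distance = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
slope = 1.1  # s per cm, illustrative
t_no_remodel = slope * distance + 6.0
t_remodel = slope * distance + 2.5   # 3.5 s earlier onset of breaks

# Fit straight lines to both data sets; polyfit returns [slope, intercept].
s1, b1 = np.polyfit(distance, t_no_remodel, 1)
s2, b2 = np.polyfit(distance, t_remodel, 1)
```

The fitted slopes coincide, while the intercepts differ by the 3.5 s head start that remodeling gives to break formation at the scar.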
For scars of such small size we observed the absence of both breakup and dynamical anchoring. We explain this by the fact that if the initial separation of wavebreaks formed at the scar is small, the two secondary sources merge immediately, repairing the wavefront shape and preventing the formation of secondary sources [37]. We also see that this effect requires an intermediate level of fibrosis density. For small fibrosis levels, no secondary breaks are formed (close to the boundary of the fibrotic tissue). Likewise, no breaks can be formed if the fibrosis level is larger than 41% in our 2D model (i.e. closer to the core), as the tissue then behaves like an inexcitable scar: for fibrosis > 41% the scar effectively becomes a large obstacle that is incapable of breaking the waves of the original rotor [37]. Close to the threshold of 41% we also observed another interesting pattern, in which breaks are formed only inside the core of the scar (inside the > 41% region) and cannot exit to the surrounding tissue; see the supplementary S1 Movie. Finally, note that Fig 4 illustrates only a few factors important for dynamical anchoring in a simple setup in an isotropic model of cardiac tissue. The particular values of the fibrosis level and the size of the scar can also depend on anisotropy, the texture of the fibrosis and its possibly heterogeneous distribution. To verify that dynamical anchoring takes place in a more realistic geometry, we investigated this effect in a patient-specific model of the human left ventricle; see the Methods section for details. The scar in this dataset has a complex geometry with several compact regions around 5-7 cm in size, in which the percentage of fibrosis changes gradually from 0% to 41% at the core of the scar, based on the imaging data (see Methods). The remodeling of ionic channels in the whole scar region, including the border zone, was also included in the model (as described in the Fibrosis Model subsection of the Methods). We studied the phenomenon of dynamical anchoring for 16 different locations of rotor cores randomly distributed in a slice of the heart about 4 cm from the apex (see Fig 5). Cardiac anisotropy was generated by a rule-based approach described in detail in the Methods section (Model of the Human Left Ventricle). For all 16 initial locations shown in Fig 5, there was dynamical anchoring to the fibrotic tissue, both with and without ionic remodeling. After the anchoring, the rotor annihilated in 4 cases. The effect of the attraction was augmented by the electrophysiological remodeling, similar to the 2D case. A representative example of our 3D simulations is shown in Fig 5. We followed the same protocol as for the 2D simulations. The top two rows show the modified anterior view and the modified posterior view for the case in which the scar was present. In column A, we see the original location of the spiral core (5 cm from the scar), indicated by the black arrow in the anterior view. In column B, breaks are formed due to the scar tissue, and a secondary source starts to appear. After 3.7 s, the spiral is anchored around the scar, indicated by the black arrow in the posterior view, and persistently rotates around it. In the bottom row, we show the same simulation without the scar. In this case, the spiral does not change its original location (only a slight movement; see the black arrows). To evaluate whether this effect could potentially be registered in clinical practice, we computed the ECG for our 3D simulations. The ECG corresponding to the example in Fig 5 is shown in Fig 6.
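The pseudo-ECG of Eq (5) in the Methods can be discretized as a dipole-source sum over voxels. The sketch below assumes an isotropic diffusion coefficient `d_iso` in place of the full tensor D, and illustrative grid parameters; it is not the paper's CUDA implementation:

```python
import numpy as np

def pseudo_ecg(v, d_iso, h, electrode):
    """Discretized Eq (5): sum over voxels of (r . D grad V) / |r|^3.

    v: 3D array of membrane potentials at one instant.
    d_iso: isotropic diffusion coefficient (stand-in for the tensor D).
    h: voxel edge length, same distance unit as `electrode`.
    electrode: 3-vector position of the recording electrode.
    """
    grad = np.gradient(v, h)                      # [dV/dx, dV/dy, dV/dz]
    idx = np.indices(v.shape).astype(float) * h   # voxel coordinates
    # r is the vector from each tissue point to the recording electrode.
    r = electrode.reshape(3, 1, 1, 1) - idx
    dist = np.sqrt((r ** 2).sum(axis=0))
    dist[dist == 0] = np.inf                      # guard the electrode voxel
    integrand = d_iso * sum(r[i] * grad[i] for i in range(3)) / dist ** 3
    return integrand.sum() * h ** 3               # volume element d^3r
```

In the paper the recording electrode was placed 10 cm from the center of the ventricles in the transverse plane; a uniformly polarized tissue block has zero gradient and therefore contributes no signal.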
During the first three seconds, the ECG shows QRS complexes varying in amplitude and shape, followed by a more uniform beat-to-beat QRS morphology with a larger amplitude. This change in morphology is associated with the anchoring of the rotor, which occurs around three seconds after the start of the simulation. The initial irregularity is due to the presence of secondary sources that have a slightly higher period than the original rotor. After the rotor is anchored, the pattern becomes relatively stable, which corresponds to a regular saw-tooth ECG morphology. Additional ECGs for cases of termination of the arrhythmia and of anchoring are shown in supplementary S2 Fig. For the anchoring dynamics we see similar changes in ECG morphology as in Fig 6. The dynamical anchoring is accompanied by an increase of the cycle length (247 ± 16 ms versus 295 ± 30 ms). The reason for this effect is that rotation of a rotor around an obstacle (anatomical reentry) is usually slower than rotation of a rotor around its own tip (functional reentry), which is typically at the limit of the cycle length permitted by the effective refractory period (ERP). In the previous section, we showed that the described results on dynamical anchoring in an anatomical model of the LV of patients with post-infarct scars correspond to observations on ECGs during initiation of a ventricular arrhythmia. After initiation, a time-dependent change of QRS morphology was observed in 18 out of 30 patients (60%). Precordial ECG leads V2, V3 and V4 from two patients are depicted in Fig 7.
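The ECG signature of anchoring described above, a jump in cycle length from roughly 247 ms to roughly 295 ms, suggests a simple automatic marker. The following sketch (a hypothetical helper, not part of the paper's analysis) flags the moment the beat-to-beat cycle length settles above a threshold for several consecutive beats:

```python
import numpy as np

def detect_anchoring_time(beat_times_ms, threshold_ms=280.0, n_consecutive=3):
    """Return the time at which the cycle length first stays above
    threshold_ms for n_consecutive beats (a crude marker of the
    transition to the slower, anchored rhythm), or None if it never does."""
    cl = np.diff(np.asarray(beat_times_ms, dtype=float))
    run = 0
    for i, c in enumerate(cl):
        run = run + 1 if c > threshold_ms else 0
        if run >= n_consecutive:
            # Beat at which the first long interval of the run began.
            return beat_times_ms[i - n_consecutive + 1]
    return None
```

For a beat sequence whose cycle length steps from 250 ms to 300 ms, the detector returns the beat time at which the long intervals begin.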
For both patients the QRS morphology following the extra stimuli gradually changed, but the degree of change differed. In patient A, the morphological change is small, and both parts of the ECG may be interpreted as a transition from one monomorphic ventricular tachycardia (MVT) morphology to another. However, for patient B the transition from polymorphic ventricular tachycardia (PVT) to MVT is more apparent. In the other 16 cases we observed variations between the two cases presented in Fig 7. Supplementary S3 Fig shows examples of ECGs from 4 other patients. Here, in patients 1 and 2, we see substantial variations in the QRS complexes after arrhythmia initiation and subsequently a transformation to MVT. The recording in patient 3 is less polymorphic, and in patient 4 we observe an apparent shift of the ECG from one morphology to another. This may occur, for example, if additional sources of excitation are formed by the initial source due to underlying tissue heterogeneity. Overall, a morphology with a clear change from PVT to MVT was observed in 5/18, or 28%, of the cases. These different degrees of variation in QRS morphology may have many causes, such as the proximity of the created arrhythmia source to the anchoring region, the underlying degree of heterogeneity and fibrosis at the place of rotor initiation, the complex shape of the scar, etc. Although this finding is not a proof, it supports the idea that the anchoring phenomenon may occur in clinical settings and serve as a possible mechanism of fast VT induced by programmed stimulation. In this study, we investigated the dynamics of arrhythmia sources (rotors) in the presence of fibrotic regions using mathematical modeling. We showed that fibrotic scars not only anchor rotors but also induce secondary sources, and the dynamical competition of these sources normally results in their annihilation. As a result, if one simply compares the initial excitation pattern in Fig 1A
and the final excitation pattern in Fig 1L, it may appear as if a distant spiral wave was attracted and anchored to the scar. However, this is not the case: the anchored spiral here is the result of ordinary anchoring and competition of secondary sources, a process we call dynamical anchoring. This process is different from the usual drift or meandering of rotors, in which the rotor gradually changes its spatial position. In dynamical anchoring, break formation happens in the fibrotic scar region, then spreads to the original rotor, merges with the rotor tip and reorganizes the excitation pattern. This process repeats itself until a rotor is anchored around the fibrotic scar region. Dynamical anchoring may explain the organization from fast polymorphic to monomorphic VT, accompanied by prolongation of the CL, observed in some patients during re-induction after radiofrequency catheter ablation of post-infarct scar-related VT. In our simulations the dynamics of rotors in 2D tissue were stable: for the given parameter values they do not drift or meander. This type of dynamics has frequently been observed in cardiac monolayers [38, 39], which can be considered a simplified experimental model of cardiac tissue. We expect that more complex rotor dynamics would not affect our main 2D results, as drift or meandering would potentiate the disappearance of the initial rotor and thus promote anchoring of the secondary wavebreaks. In our 3D simulations in an anatomical model of the heart, the dynamics of the rotors is not stationary and shows the ECG of a polymorphic VT (Fig 6). Dynamical anchoring combines several processes: generation of new breaks at the scar, spread of breaks toward the original rotor, disappearance of the rotor, and anchoring of one of the wavebreaks at the scar. The mechanisms of formation of new wavebreaks at the scar have been studied in several papers [15, 37, 40] and can occur due to ionic heterogeneity in the scar region or due
to electrotonic effects [40]. However, the spread of breaks toward the original rotor is a new type of dynamics, and the mechanism of this phenomenon remains to be studied. To some extent it is similar to the global alternans instability reported in Vandersickel et al. [41]. Indeed, in Vandersickel et al. [41] it was shown that an area of 1:2 propagation block can extend itself towards the original spiral wave, which is related to the restitution properties of cardiac tissue. Although in our case there is no clear 1:2 block, wave propagation in the presence of breaks is disturbed, resulting in a spatially heterogeneous change of the diastolic interval which, via restitution effects, can result in extension of the breakup. This phenomenon needs to be studied further, as it may provide new ways of controlling rotor anchoring processes and therefore of affecting the dynamics of a cardiac arrhythmia. In this paper, we used the standard method of representing fibrosis by placement of electrically uncoupled unexcitable nodes with no-flux boundary conditions. Although such a representation is a simplification, necessitated by the absence of detailed 3D data, it does reproduce the main physiological effects observed in fibrotic tissue, such as the formation of wavebreaks, fractionated electrograms, etc. [22]. The dynamical anchoring reported in this paper occurs as a result of restructuring of the activation pattern and relies only on these basic properties of the fibrotic scar, i.e. the ability to generate wavebreaks and the ability to anchor rotors, which are reproduced by this representation. In addition, for each data point, we performed simulations with at least 60 different textures. Therefore, we expect that the effect observed in our paper is general and should exist for any possible representation of the fibrosis. The specific conditions, e.g.
the size and degree of fibrosis necessary for dynamical anchoring, may depend on the detailed fibrosis structure, and it would be useful to perform simulations with detailed, experimentally based 3D structures of fibrotic scars when they become available. Similar processes can occur not only at fibrotic scars but also at ionic heterogeneities. In Defauw et al. [42], it was shown that rotors can be attracted by ionic heterogeneities of realistic size and shape, similar to those measured in the ventricles of the human heart [43]. These ionic heterogeneities had a prolonged APD and also caused wavebreaks, creating a dynamical process similar to that described in Fig 1. In this study, however, we demonstrated that structural heterogeneity alone is sufficient to trigger this type of dynamical anchoring. It is important to note that in this study fibrosis was modeled as regions with many small inexcitable obstacles. However, the outcome can depend on how the cellular electrophysiology and the regions of fibrosis are represented. In modeling studies, regions of fibrosis can also be represented

Abstract

Rotors are functional reentry sources identified in clinically relevant cardiac arrhythmias, such as ventricular and atrial fibrillation. Ablation targeting rotor sites has resulted in arrhythmia termination. Recent clinical, experimental and modelling studies demonstrate that rotors are often anchored around fibrotic scars or regions with increased fibrosis. However, the mechanisms leading to the abundance of rotors at these locations are not clear. The current study explores the hypothesis of whether fibrotic scars just serve as anchoring sites for the rotors or whether there are other active processes which drive the rotors to these fibrotic regions. Rotors were induced at different distances from fibrotic scars of various sizes and degrees of fibrosis. Simulations were performed in a 2D model of
human ventricular tissue and in a patient-specific model of the left ventricle of a patient with remote myocardial infarction ., In both the 2D and the patient-specific model we found that without fibrotic scars , the rotors were stable at the site of their initiation ., However , in the presence of a scar , rotors were eventually dynamically anchored from large distances by the fibrotic scar via a process of dynamical reorganization of the excitation pattern ., This process coalesces with a change from polymorphic to monomorphic ventricular tachycardia . | Rotors are waves of cardiac excitation like a tornado causing cardiac arrhythmia ., Recent research shows that they are found in ventricular and atrial fibrillation ., Burning ( via ablation ) the site of a rotor can result in the termination of the arrhythmia ., Recent studies showed that rotors are often anchored to regions surrounding scar tissue , where part of the tissue still survived called fibrotic tissue ., However , it is unclear why these rotors anchor to these locations ., Therefore , in this work , we investigated why rotors are so abundant in fibrotic tissue with the help of computer simulations ., We performed simulations in a 2D model of human ventricular tissue and in a patient-specific model of a patient with an infarction ., We found that even when rotors are initially at large distances from the fibrotic region , they are attracted by this region , to finally end up at the fibrotic tissue ., We called this process dynamical anchoring and explained how the process works . 
| dermatology, medicine and health sciences, diagnostic radiology, engineering and technology, cardiovascular anatomy, cardiac ventricles, fibrosis, magnetic resonance imaging, developmental biology, electrocardiography, bioassays and physiological analysis, cardiology, research and analysis methods, scars, arrhythmia, imaging techniques, atrial fibrillation, electrophysiological techniques, rotors, mechanical engineering, radiology and imaging, diagnostic medicine, cardiac electrophysiology, anatomy, biology and life sciences, heart | null |
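The discussion above notes that fibrosis was modeled as regions with many small inexcitable obstacles embedded in excitable tissue. A minimal sketch of that idea, assuming generic FitzHugh–Nagumo kinetics rather than the human ventricular ionic model and patient-specific geometry actually used in the study (grid size, obstacle density, and all parameter values here are our own illustrative choices):

```python
import numpy as np

def simulate_excitable_sheet(n=90, steps=3000, dt=0.2, a=0.1, eps=0.005,
                             D=1.0, obstacle_density=0.3, seed=0):
    """2D FitzHugh-Nagumo sheet with a central 'fibrotic' patch made of
    randomly placed inexcitable sites; a planar wave is launched from the
    left edge.  Returns the rightmost column ever excited (u > 0.5)."""
    rng = np.random.default_rng(seed)
    u = np.zeros((n, n))                    # excitation variable
    v = np.zeros((n, n))                    # recovery variable
    obstacle = np.zeros((n, n), dtype=bool)
    k = n // 3                              # patch occupies the middle third
    obstacle[k:2*k, k:2*k] = rng.random((k, k)) < obstacle_density
    u[:, :5] = 1.0                          # planar stimulus at the left edge
    front = 0
    for _ in range(steps):
        up = np.pad(u, 1, mode='edge')      # no-flux boundary conditions
        lap = (up[2:, 1:-1] + up[:-2, 1:-1] +
               up[1:-1, 2:] + up[1:-1, :-2] - 4*u)
        u = u + dt*(D*lap + u*(1 - u)*(u - a) - v)
        v = v + dt*eps*(u - v)
        u[obstacle] = 0.0                   # inexcitable fibrotic sites
        v[obstacle] = 0.0
        excited = np.where((u > 0.5).any(axis=0))[0]
        if excited.size:
            front = max(front, int(excited.max()))
    return front
```

With this kind of setup one can vary `obstacle_density` to probe when a wave fragments inside the patch versus propagating around or through it, which is the qualitative regime the paper explores with realistic electrophysiology.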
2,233 | journal.pcbi.1002283 | 2,011 | Chemotaxis when Bacteria Remember: Drift versus Diffusion | The bacterium E. coli moves by switching between two types of motions, termed ‘run’ and ‘tumble’ 1. Each results from a distinct movement of the flagella. During a run, flagella motors rotate counter-clockwise (when looking at the bacteria from the back), inducing an almost constant forward velocity of about , along a near-straight line. In an environment with uniform nutrient concentration, run durations are distributed exponentially with a mean value of about 2. When motors turn clockwise, the bacterium undergoes a tumble, during which, to a good approximation, it does not translate but instead changes its direction randomly. In a uniform nutrient-concentration profile, the tumble duration is also distributed exponentially but with a much shorter mean value of about 3. When the nutrient (or, more generally, chemoattractant) concentration varies in space, bacteria tend to accumulate in regions of high concentration (or, equivalently, the bacteria can also be repelled by chemorepellants and tend to accumulate in low chemical concentration) 4. This is achieved through a modulation of the run durations. The biochemical pathway that controls flagella dynamics is well understood 1, 5–7 and the stochastic ‘algorithm’ which governs the behavior of a single motor is experimentally measured. The latter is routinely used as a model for the motion of a bacterium with many motors 1, 8–11. This algorithm represents the motion of the bacterium as a non-Markovian random walker whose stochastic run durations are modulated via a memory kernel, shown in Fig.
1 ., Loosely speaking , the kernel compares the nutrient concentration experienced in the recent past with that experienced in the more distant past ., If the difference is positive , the run duration is extended; if it is negative , the run duration is shortened ., In a complex medium bacterial navigation involves further complications; for example , interactions among the bacteria , and degradations or other dynamical variations in the chemical environment ., These often give rise to interesting collective behavior such as pattern formation 12 , 13 ., However , in an attempt to understand collective behavior , it is imperative to first have at hand a clear picture of the behavior of a single bacterium in an inhomogeneous chemical environment ., We are concerned with this narrower question in the present work ., Recent theoretical studies of single-bacterium behavior have shown that a simple connection between the stochastic algorithm of motion and the average chemotactic response is far from obvious 8–11 ., In particular , it appeared that favorable chemotactic drift could not be reconciled with favorable accumulation at long times , and chemotaxis was viewed as resulting from a compromise between the two 11 ., The optimal nature of this compromise in bacterial chemotaxis was examined in Ref ., 10 ., In various approximations , while the negative part of the response kernel was key to favorable accumulation in the steady state , it suppressed the drift velocity ., Conversely , the positive part of the response kernel enhanced the drift velocity but reduced the magnitude of the chemotactic response in the steady state ., Here , we carry out a detailed study of the chemotactic behavior of a single bacterium in one dimension ., We find that , for an ‘adaptive’ response kernel ( i . e . 
, when the positive and negative parts of the response kernel have equal weight such that the total area under the curve vanishes ) , there is no incompatibility between a strong steady-state chemotaxis and a large drift velocity ., A strong steady-state chemotaxis occurs when the positive peak of the response kernel occurs at a time much smaller than and the negative peak at a time much larger than , in line with experimental observation ., Moreover , we obtain that the drift velocity is also large in this case ., For a general ‘non-adaptive’ response kernel ( i . e . , when the area under the response kernel curve is non-vanishing ) , however , we find that a large drift velocity indeed opposes chemotaxis ., Our calculations show that , in this case , a position-dependent diffusivity is responsible for chemotactic accumulation ., In order to explain our numerical results , we propose a simple coarse-grained model which describes the bacterium as a biased random walker with a drift velocity and diffusivity , both of which are , in general , position-dependent ., This simple model yields good agreement with results of detailed simulations ., We emphasize that our model is distinct from existing coarse-grained descriptions of E . 
coli chemotaxis 13–16 ., In these , coarse-graining was performed over left- and right-moving bacteria separately , after which the two resulting coarse-grained quantities were then added to obtain an equation for the total coarse-grained density ., We point out why such approaches can fail and discuss the differences between earlier models and the present coarse-grained model ., Following earlier studies of chemotaxis 9 , 17 , we model the navigational behavior of a bacterium by a stochastic law of motion with Poissonian run durations ., A switch from run to tumble occurs during the small time interval between and with a probability ( 1 ) Here , and is a functional of the chemical concentration , , experienced by the bacterium at times ., In shallow nutrient gradients , the functional can be written as ( 2 ) The response kernel , , encodes the action of the biochemical machinery that processes input signals from the environment ., Measurements of the change in the rotational bias of a flagellar motor in wild-type bacteria , in response to instantaneous chemoattractant pulses were reported in Refs ., 17 , 18; experiments were carried out with a tethering assay ., The response kernel obtained from these measurements has a bimodal shape , with a positive peak around and a negative peak around ( see Fig . 1 ) ., The negative lobe is shallower than the positive one and extends up to , beyond which it vanishes ., The total area under the response curve is close to zero ., As in other studies of E . 
coli chemotaxis , we take this response kernel to describe the modulation of run duration of swimming bacteria 8–11 ., Recent experiments suggest that tumble durations are not modulated by the chemical environment and that as long as tumbles last long enough to allow for the reorientation of the cell , bacteria can perform chemotaxis successfully 19 , 20 ., The model defined by Eqs ., 1 and 2 is linear ., Early experiments pointed to a non-linear , in effect a threshold-linear , behavior of a bacterium in response to chemotactic inputs 17 , 18 ., In these studies , a bacterium modulated its motion in response to a positive chemoattractant gradient , but not to a negative one ., In the language of present model , such a threshold-linear response entails replacing the functional defined in Eq ., 2 by zero whenever the integral is negative ., More recent experiments suggest a different picture , in which a non-linear response is expected only for a strong input signal whereas the response to weak chemoattractant gradient is well described by a linear relation 21 ., Here , we present an analysis of the linear model ., For the sake of completeness , in Text S1 , we present a discussion of models which include tumble modulations and a non-linear response kernel ., Although recent experiments have ruled out the existence of both these effects in E . 
coli chemotaxis , in general such effects can be relevant to other systems with similar forms of the response function ., The shape of the response function hints to a simple mechanism for the bacterium to reach regions with high nutrient concentration ., The bilobe kernel measures a temporal gradient of the nutrient concentration ., According to Eq ., 1 , if the gradient is positive , runs are extended; if it is negative , runs are unmodulated ., However , recent literature 8 , 9 , 11 has pointed out that the connection between this simple picture and a detailed quantitative analysis is tenuous ., For example , de Gennes used Eqs ., 1 to calculate the chemotactic drift velocity of bacteria 8 ., He found that a singular kernel , , where is a Dirac function and a positive constant , lead to a mean velocity in the direction of increasing nutrient concentration even when bacteria are memoryless ( ) ., Moreover , any addition of a negative contribution to the response kernel , as seen in experiments ( see Fig . 
1 ) , lowered the drift velocity ., Other studies considered the steady-state density profile of bacteria in a container with closed walls , both in an approximation in which correlations between run durations and probability density were ignored 11 and in an approximation in which the memory of the bacterium was reset at run-to-tumble switches 9 ., Both these studies found that , in the steady state , a negative contribution to the response function was mandatory for bacteria to accumulate in regions of high nutrient concentration ., These results seem to imply that the joint requirement of favorable transient drift and steady-state accumulation is problematic ., The paradox was further complicated by the observation 9 that the steady-state single-bacterium probability density was sensitive to the precise shape of the kernel: when the negative part of the kernel was located far beyond it had little influence on the steady-state distribution 11 ., In fact , for kernels similar to the experimental one , model bacteria accumulated in regions with low nutrient concentration in the steady state 9 ., In order to resolve these paradoxes and to better understand the mechanism that leads to favorable accumulation of bacteria , we perform careful numerical studies of bacterial motion in one dimension ., In conformity with experimental observations 17 , 18 , we do not make any assumption of memory reset at run-to-tumble switches ., We model a bacterium as a one-dimensional non-Markovian random walker ., The walker can move either to the left or to the right with a fixed speed , , or it can tumble at a given position before initiating a new run ., In the main paper , we present results only for the case of instantaneous tumbling with , while results for non-vanishing are discussed in Text S1 ., There , we verify that for an adaptive response kernel does not have any effect on the steady-state density profile ., For a non-adaptive response kernel , the correction in the 
steady-state slope due to finite is small and proportional to ., The run durations are Poissonian and the tumble probability is given by Eq ., 1 ., The probability to change the run direction after a tumble is assumed to have a fixed value , , which we treat as a parameter ., The specific choice of the value of does not affect our broad conclusions ., We find that , as long as , only certain detailed quantitative aspects of our numerical results depend on ., ( See Text S1 for details on this point . ), We assume that bacteria are in a box of size with reflecting walls and that they do not interact among each other ., We focus on the steady-state behavior of a population ., Reflecting boundary conditions are a simplification of the actual behavior 22 , 23; as long as the total ‘probability current’ ( see discussion below ) in the steady state vanishes , our results remain valid even if the walls are not reflecting ., As a way to probe chemotactic accumulation , we consider a linear concentration profile of nutrient: ., We work in a weak gradient limit , i . e . 
, the value of is chosen to be sufficiently small to allow for a linear response. Throughout, we use in our numerics. From the linearity of the problem, results for a different attractant gradient, , can be obtained from our results through a scaling factor. In the linear regime, we obtain a spatially linear steady-state distribution of individual bacterium positions or, equivalently, a linear density profile of a bacterial population. Its slope, which we denote by , is a measure of the strength of chemotaxis. A large slope indicates strong bacterial preference for regions with higher nutrient concentration. Conversely, a vanishing slope implies that bacteria are insensitive to the gradient of nutrient concentration and are equally likely to be anywhere along the line. We would like to understand the way in which the slope depends on the different time scales present in the system. In order to gain insight into our numerical results, we developed a simple coarse-grained model of chemotaxis. For the sake of simplicity, we first present the model for a non-adaptive, singular response kernel, , and, subsequently, we generalize the model to adaptive response kernels by making use of linear superposition. The memory trace embodied by the response kernel induces temporal correlations in the trajectory of the bacterium. However, if we consider the coarse-grained motion of the bacterium over a spatial scale that exceeds the typical run stretch and a temporal scale that exceeds the typical run duration, then we can assume that it behaves as a Markovian random walker with drift velocity and diffusivity. Since the steady-state probability distribution, , is flat for , for small we can write (4) (5) (6) Here, and . Since we are neglecting all higher order corrections in , our analysis is valid only when is sufficiently small. In particular, even when , we assume that the inequality is still satisfied. The chemotactic
drift velocity , , vanishes if ; it is defined as the mean displacement per unit time of a bacterium starting a new run at a given location ., Clearly , even in the steady state when the current , defined through , vanishes , may be non-vanishing ( see Eq . 8 below ) ., In general , the non-Markovian dynamics make dependent on the initial conditions ., However , in the steady state this dependence is lost and can be calculated , for example , by performing a weighted average over the probability of histories of a bacterium ., This is the quantity that is of interest to us ., An earlier calculation by de Gennes showed that , if the memory preceding the last tumble is ignored , then for a linear profile of nutrient concentration the drift velocity is independent of position and takes the form 8 ., While the calculation applies strictly in a regime with ( because of memory erasure ) , in fact its result captures the behavior well over a wide range of parameters ( see Fig . 4 ) ., To measure in our simulations , we compute the average displacement of the bacterium between two successive tumbles in the steady state , and we extract therefrom the drift velocity ., ( For details of the derivation , see Text S1 . ), We find that is negative for and that its magnitude falls off with increasing values of ( Fig . 4 ) ., We also verify that indeed does not show any spatial dependence ( data shown in Fig . 
of Text S1 ) ., We recall that , in our numerical analysis , we have used a small value of ; this results in a low value of ., We show below that for an experimentally measured bilobe response kernel , obtained by superposition of singular response kernels , the magnitude of becomes larger and comparable with experimental values ., To obtain the diffusivity , , we first calculate the effective mean free path in the coarse-grained model ., The tumbling frequency of a bacterium is and depends on the details of its past trajectory ., In the coarse-grained model , we replace the quantity by an average over all the trajectories within the spatial resolution of the coarse-graining ., Equivalently , in a population of non-interacting bacteria , the average is taken over all the bacteria contained inside a blob , and , hence , denotes the position of the center of mass of the blob at a time in the past ., As mentioned above , the drift velocity is proportional to , so that ., The average tumbling frequency then becomes and , consequently , the mean free path becomes ., As a result , the diffusivity is expressed as ., We checked this form against our numerical results ( Fig . 5 ) ., Having evaluated the drift velocity , , and the diffusivity , , we now proceed to write down the continuity equation ( for a more rigorous but less intuitive approach , see 10 ) ., For a biased random walker on a lattice , with position-dependent hopping rates and towards the right and the left , respectively , one has and , where is the lattice constant ., In the continuum limit , the temporal evolution of the probability density is given by a probability current , as ( 7 ) where the current takes the form ( 8 ) For reflecting boundary condition , in the steady state ., This constraint yields a steady-state slope ( 9 ) for small ., We use our measured values for and ( Figs . 4 and 5 ) , and compute the slope using Eq ., 9 ., ( For details of the measurement of , see Text S1 . 
), We compare our analytical and numerical results in Fig . 2 , which exhibits close agreement ., According to Eq ., 9 , steady-state chemotaxis results from a competition between drift motion and diffusion ., For , the drift motion is directed toward regions with a lower nutrient concentration and hence opposes chemotaxis ., Diffusion is spatially dependent and becomes small for large nutrient concentrations ( again for ) , thus increasing the effective residence time of the bacteria in favorable regions ., For large values of , the drift velocity vanishes and one has a strong chemotaxis as increases ( Fig . 2 ) ., Finally , for , the calculation by de Gennes yields which exactly cancels the spatial gradient of ( to linear order in ) , and there is no accumulation 8 , 11 ., These conclusions are easily generalized to adaptive response functions ., For , within the linear response regime , the effective drift velocity and diffusivity can be constructed by simple linear superposition: The drift velocity reads ., Interestingly , the spatial dependence of cancels out and ., The resulting slope then depends on the drift only and is calculated as ( 10 ) In this case , the coarse-grained model is a simple biased random walker with constant diffusivity ., For and , the net velocity , proportional to , is positive and gives rise to a favorable chemotactic response , according to which bacteria accumulate in regions with high food concentration ., Moreover , the slope increases as the separation between and grows ., We emphasize that there is no incompatibility between strong steady-state chemotaxis and large drift velocity ., In fact , in the case of an adaptive response function , strong chemotaxis occurs only when the drift velocity is large ., For a bilobe response kernel , approximated by a superposition of many delta functions ( Fig . 1 ) , the slope , , can be calculated similarly and in Fig . 
3 we compare our calculation to the simulation results ., We find close agreement in the case of a linear model with a bilobe response kernel and , in fact , also in the case of a non-linear model ( see Text S1 ) ., The experimental bilobe response kernel is a smooth function , rather than a finite sum of singular kernels over a set of discrete values ( as in Fig . 1 ) ., Formally , we integrate singular kernels over a continuous range of to obtain a smooth response kernel ., If we then integrate the expression for the drift velocity obtained by de Gennes , according to this procedure , we find an overall drift velocity , for the concentration gradient considered ( ) ., By scaling up the concentration gradient by a factor of , the value of can also be scaled up by and can easily account for the experimentally measured velocity range ., We carried out a detailed analysis of steady-state bacterial chemotaxis in one dimension ., The chemotactic performance in the case of a linear concentration profile of the chemoattractant , , was measured as the slope of the bacterium probability density profile in the steady state ., For a singular impulse response kernel , , the slope was a scaling function of , which vanished at the origin , increased monotonically , and saturated at large argument ., To understand these results we proposed a simple coarse-grained model in which bacterial motion was described as a biased random walk with drift velocity , , and diffusivity , ., We found that for small enough values of , was independent of and varied linearly with nutrient concentration ., By contrast , was spatially uniform and its value decreased monotonically with and vanished for ., We presented a simple formula for the steady-state slope in terms of and ., The prediction of our coarse-grained model agreed closely with our numerical results ., Our description is valid when is small enough , and all our results are derived to linear order in ., We assume is always satisfied ., 
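The coarse-grained steady state described above can be checked directly: with reflecting walls the current vanishes, J = vP − ∂x(DP) = 0, which gives P′/P = (v − D′(x))/D(x) and hence the small-gradient slope of Eq (9). The numerical values below (drift magnitude, diffusivity, and its spatial gradient) are illustrative stand-ins, not the parameters of the paper's simulations:

```python
import numpy as np

# Zero-current steady state of the coarse-grained model:
#   J = v*P - d(D*P)/dx = 0   =>   d(log P)/dx = (v - D'(x)) / D(x)
L = 1.0
v = 0.002                   # constant chemotactic drift velocity
D0, dDdx = 0.1, -0.01       # D(x) = D0 + dDdx*x, decreasing with nutrient
x = np.linspace(0.0, L, 1001)
D = D0 + dDdx*x

dx = x[1] - x[0]
logP = np.cumsum((v - dDdx)/D)*dx   # integrate the log-derivative
P = np.exp(logP)
P /= P.mean()                       # normalize the mean density to 1

slope_numeric = (P[-1] - P[0])/L
slope_eq9 = (v - dDdx)/D0           # small-gradient prediction, Eq (9)
```

Setting `v = 0` isolates the diffusivity-driven accumulation relevant to non-adaptive kernels, while setting `dDdx = 0` recovers the adaptive case in which the slope is proportional to the drift alone.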
Our results for an impulse response kernel can be easily generalized to the case of response kernels with arbitrary shapes in the linear model ., For an adaptive response kernel , the spatial dependence of the diffusivity , , cancels out but a positive drift velocity , , ensures bacterial accumulation in regions with high nutrient concentration , in the steady state ., In this case , the slope is directly proportional to the drift velocity ., As the delay between the positive and negative peaks of the response kernel grows , the velocity increases , with consequent stronger chemotaxis ., Earlier studies of chemotaxis 13–16 put forth a coarse-grained model different from ours ., In the model first proposed by Schnitzer for a single chemotactic bacterium 14 , he argued that , in order to obtain favorable bacterial accumulation , tumbling rate and ballistic speed of a bacterium must both depend on the direction of its motion ., In his case , the continuity equation reads ( 11 ) where is the ballistic speed and is the tumbling frequency of a bacterium moving toward the left ( right ) ., For E . coli , as discussed above , , a constant independent of the location ., In that case , Eq ., 11 predicts that in order to have a chemotactic response in the steady state , one must have a non-vanishing drift velocity , i . e . 
, ., This contradicts our findings for non-adaptive response kernels , according to which a drift velocity only hinders the chemotactic response ., The spatial variation of the diffusivity , instead , causes the chemotactic accumulation ., This is not captured by Eq ., 11 ., In the case of adaptive response kernels , the diffusivity becomes uniform while the drift velocity is positive , favoring chemotaxis ., Comparing the expression of the flux , , obtained from Eqs ., 7 and 8 with that from Eq ., 11 , and matching the respective coefficients of and , we find and ., As we argued above in discussing the coarse-grained model for adaptive response kernels , both and are spatially independent ., This puts strict restrictions on the spatial dependence of and ., For example , as in E . coli chemotaxis , our coarse-grained description is recovered only if and are also independent of ., We comment on a possible origin of the discrepancy between our work and earlier treatments ., In Ref ., 14 , a continuity equation was derived for the coarse-grained probability density of a bacterium , starting from a pair of approximate master equations for the probability density of a right-mover and a left-mover , respectively ., As the original process is non-Markovian , one can expect a master equation approach to be valid only at scales that exceed the scale over which spatiotemporal correlations in the behavior of the bacterium are significant ., In particular , a biased diffusion model can be viewed as legitimate only if the ( coarse-grained ) temporal resolution allows for multiple runs and tumbles ., If so , at the resolution of the coarse-grained model , left- and right-movers become entangled , and it is not possible to perform a coarse-graining procedure on the two species separately ., Thus one cannot define probability densities for a left- and a right-mover that evolves in a Markovian fashion ., In our case , left- and right-movers are coarse-grained simultaneously , and 
the total probability density is Markovian ., Thus , our diffusion model differs from that of Ref ., 14 because it results from a different coarse-graining procedure ., The model proposed in Ref ., 14 has been used extensively to investigate collective behaviors of E . coli bacteria such as pattern formation 13 , 15 , 16 ., It would be worth asking whether the new coarse-grained description can shed new light on bacterial collective behavior . | Introduction, Models, Results, Discussion | Escherichia coli ( E . coli ) bacteria govern their trajectories by switching between running and tumbling modes as a function of the nutrient concentration they experienced in the past ., At short time one observes a drift of the bacterial population , while at long time one observes accumulation in high-nutrient regions ., Recent work has viewed chemotaxis as a compromise between drift toward favorable regions and accumulation in favorable regions ., A number of earlier studies assume that a bacterium resets its memory at tumbles – a fact not borne out by experiment – and make use of approximate coarse-grained descriptions ., Here , we revisit the problem of chemotaxis without resorting to any memory resets ., We find that when bacteria respond to the environment in a non-adaptive manner , chemotaxis is generally dominated by diffusion , whereas when bacteria respond in an adaptive manner , chemotaxis is dominated by a bias in the motion ., In the adaptive case , favorable drift occurs together with favorable accumulation ., We derive our results from detailed simulations and a variety of analytical arguments ., In particular , we introduce a new coarse-grained description of chemotaxis as biased diffusion , and we discuss the way it departs from older coarse-grained descriptions . 
| The chemotaxis of Escherichia coli is a prototypical model of navigational strategy ., The bacterium maneuvers by switching between near-straight motion , termed runs , and tumbles which reorient its direction ., To reach regions of high nutrient concentration , the run-durations are modulated according to the nutrient concentration experienced in recent past ., This navigational strategy is quite general , in that the mathematical description of these modulations also accounts for the active motility of C . elegans and for thermotaxis in Escherichia coli ., Recent studies have pointed to a possible incompatibility between reaching regions of high nutrient concentration quickly and staying there at long times ., We use numerical investigations and analytical arguments to reexamine navigational strategy in bacteria ., We show that , by accounting properly for the full memory of the bacterium , this paradox is resolved ., Our work clarifies the mechanism that underlies chemotaxis and indicates that chemotactic navigation in wild-type bacteria is controlled by drift while in some mutant bacteria it is controlled by a modulation of the diffusion ., We also propose a new set of effective , large-scale equations which describe bacterial chemotactic navigation ., Our description is significantly different from previous ones , as it results from a conceptually different coarse-graining procedure . | physics, statistical mechanics, theoretical biology, biophysics theory, biology, computational biology, biophysics simulations, biophysics | null |
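The run-and-tumble model just summarized can be sketched as a short Monte Carlo simulation: a 1D walker whose tumble rate is modulated by a two-impulse ("adaptive") response kernel comparing the attractant concentration at two past times. The kernel amplitudes, delays, box size, and reorientation probability below are illustrative choices, not values fitted to E. coli data:

```python
import numpy as np

def run_and_tumble(T1=0.5, T2=5.0, A=1.0, g=0.5, lam0=1.0, v0=1.0,
                   L=10.0, dt=0.05, n_steps=200_000, seed=1):
    """1D non-Markovian run-and-tumble walker in a linear attractant
    profile c(x) = g*x, with adaptive kernel
    K(t) = A*delta(t - T1) - A*delta(t - T2) and reflecting walls."""
    rng = np.random.default_rng(seed)
    m1, m2 = int(T1/dt), int(T2/dt)
    history = np.full(m2 + 1, L/2)   # history[k] approximates x(t - k*dt)
    x, s = L/2, 1                    # position and run direction (+1/-1)
    xs = np.empty(n_steps)
    n_tumbles = 0
    for t in range(n_steps):
        # memory functional: recent vs distant-past concentration
        phi = A*g*(history[m1] - history[m2])
        p_tumble = max(0.0, lam0*(1.0 - phi))*dt   # runs extended when phi > 0
        if rng.random() < min(1.0, p_tumble):
            n_tumbles += 1
            if rng.random() < 0.5:   # reorientation probability q = 1/2
                s = -s
        x += s*v0*dt
        if x < 0.0: x, s = -x, 1     # reflecting walls
        if x > L:   x, s = 2*L - x, -1
        history = np.roll(history, 1); history[0] = x
        xs[t] = x
    return xs, n_tumbles
```

Histogramming `xs` gives the steady-state density profile whose slope measures the strength of chemotaxis; with `g = 0` the kernel receives no input and the histogram flattens.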
4 | journal.pcbi.1005644 | 2,017 | A phase transition induces chaos in a predator-prey ecosystem with a dynamic fitness landscape | In many natural ecosystems , at least one constituent species evolves quickly enough relative to its population growth that the two effects become interdependent ., This phenomenon can occur when selection forces are tied to such sudden environmental effects as algal blooms or flooding 1 , or it can arise from more subtle , population-level effects such as overcrowding or resource depletion 2 ., Analysis of such interactions within a unified theory of “eco-evolutionary dynamics” has been applied to a wide range of systems—from bacteria-phage interactions to bighorn sheep 3—by describing population fluctuations in terms of the feedback between demographic change and natural selection 4 ., The resulting theoretical models relate the fitness landscape ( or fitness function ) to population-level observables such as the population growth rate and the mean value of an adapting phenotypic trait ( such as horn length , cell wall thickness , etc ) ., The fitness landscape may have an arbitrarily complex topology , as it can depend on myriad factors ranging from environmental variability 5 , 6 , to inter- and intraspecific competition 7 , 8 , to resource depletion 9 ., However , these complex landscapes can be broadly classified according to whether they result in stabilizing or disruptive selection ., In the former , the landscape may possess a single , global maximum that causes the population of individuals to evolve towards a state in which most individuals have trait values at or near this maximum 10 ., Conversely , in disruptive selection , the fitness landscape may contain multiple local maxima , in which case the population could have a wide distribution of trait values and occupy multiple distinct niches 11 ., In eco-evolutionary models , the shape of the fitness landscape may itself depend on the population densities of the interacting 
species it describes ., Specifically , the concept that the presence of competition can lead a single-peaked fitness landscape to spontaneously develop additional peaks originates in the context of “competitive speciation” first proposed by Rosenzweig 12 ., This is formalized in genetic models in which sympatric speciation is driven by competitive pressures rather than geographic isolation 13 ., Competition-induced disruptive selection has been observed in natural populations of stickleback fish 14 , microbial communities 15 , and fruit flies 16 , 17 ., Here , we model eco-evolutionary dynamics of a predator-prey system based on first-order “gradient dynamics” 10 , 18 , a class of models that explicitly define the fitness in terms of the population growth rate r , which is taken to depend only on the mean value of the trait across the entire population , c ¯ 19 ., Despite this simplification , gradient dynamics models display rich behavior that can account for a wide range of effects observed in experimental systems—in particular , recent work by Cortez and colleagues has shown that these models can result in irregular cycles and dynamical bifurcations that depend on the standing genetic variation present in a population 20 , 21 ., In our model , gradient dynamics cause the prey fitness landscape to change as a result of predation , and we find that the resulting dynamical system exhibits chaotic dynamics ., Chaos is only possible in systems in which three or more dependent dynamical variables vary in time 22 , and previously it has been observed in predator-prey systems comprising three or more mutually interdependent species , or in which an external environmental variable ( such as seasonal variation or generic noise ) is included in the dynamics 23 , 24 ., Here we show that evolution of just one species in a two-species ecosystem is sufficient to drive the ecosystem into chaos ., Moreover , we find that chaos is driven by a density-dependent change of the 
fitness landscape from a stabilizing to a disruptive state, and that this transition has hysteretic behavior with mathematical properties that are strongly reminiscent of a first-order phase transition in a thermodynamical system. The resulting dynamics display intermittent properties typically associated with ecosystems poised at the "edge of chaos," which we suggest has implications for the study of ecological stability and speciation. Adapting the notation and formulation used by Cortez (2016) [21], we use a two-species competition model with an additional dynamical variable introduced to account for a prey trait on which natural selection may act. The most general fitness function for the prey, r, accounts for density-dependent selection on a prey trait c,

$$r(x, y, \bar{c}, c) \equiv G(x, c, \bar{c}) - D(c, \bar{c}) - f(x, y), \quad (1)$$

where x = x(t) is the time-dependent prey density, y = y(t) is the time-dependent predator density, c is a trait value for an individual in the prey population, and $\bar{c} = \bar{c}(t)$ is the mean value of the trait across the entire prey population at time t. r comprises a density-dependent birth rate G, a density-independent death rate D, and a predator-prey interaction term f, which for simplicity is assumed to depend on neither c nor $\bar{c}$. Thus the trait under selection in our model is not an explicit predator avoidance trait such as camouflage, but rather an endogenous advancement (i.e., improved fecundity, faster development, or reduced mortality) that affects the prey's ability to exploit resources in its environment, even in the absence of predation. The continuous-time "gradient dynamics" model that we study interprets the fitness r as the growth rate of the prey [19, 25]:

$$\dot{x} = x \, r(x, y, \bar{c}, c)\big|_{c \to \bar{c}} \quad (2)$$
$$\dot{y} = y \left( f(x, y) - \tilde{D}(y) \right) \quad (3)$$
$$\dot{\bar{c}} = V \left. \frac{\partial r(x, y, \bar{c}, c)}{\partial c} \right|_{c \to \bar{c}}. \quad (4)$$

Eq (2) is evaluated with all individual trait values c set to the mean value $\bar{c}$ because the total prey population density is assumed to change based on the fitness function, which in turn depends on the population-averaged value of the prey trait $\bar{c}$ [21]. The timescale of the dynamics in $\bar{c}$ is set by V, which is interpreted as the additive genetic variance of the trait [10]. While Eq (2) depends only on the mean trait value $\bar{c}$, the full distribution of individual trait values c present in a real-world population may change over time as the relative frequencies of various phenotypes change. In principle, additional differential equations of the form of Eq (4) could be added to account for higher moments of the distribution of c across an ensemble of individuals, allowing the gradient dynamics model to be straightforwardly extended to model a trait's full distribution rather than just the population mean. However, here we focus on the case where the prey density dynamics $\dot{x}$ depend only on the mean trait value to first order, and we do not include differential equations for higher-order moments of the prey trait value distribution. The use of a single Eq (4) to describe the full dynamics of the trait distribution represents an approximation that is exact only when the phenotypic trait distribution stays nearly symmetric and the prey population maintains a constant standing genetic variation V [10]. However, V may remain fixed even if the phenotypic variance changes, a property that is observed
phenomenologically in experimental systems , and which may be explained by time-dependent heritability , breeding effects , mutation , or other transmission effects not explicitly modeled here 26–29 ., More broadly , this assumption may imply that gene selection is weak compared to phenotype selection 30 , 31 ., S1D Appendix further describes the circumstances under which V remains fixed , and also provides a first-order estimate of the magnitude of error introduced by ignoring higher-order effects ( such as skewness ) in the trait distribution ., The results suggest that these effects are small for the parameter values ( and resulting range of x and y values ) used here , due in part to limitations on the maximum skewness that a hypothetical trait distribution can achieve on the fitness landscapes studied here ., In S1D Appendix , we also compare the results presented below to an equivalent model in which a full trait distribution is present , in which case Eq ( 2 ) becomes a full integro-differential equation involving averages of the trait value over the entire prey population ., Detailed numerical study of this integro-differential equation is computationally prohibitive for the long timescales studied here , but direct comparison of the contributions of various terms in the velocity field suggests general accuracy of the gradient dynamics model for the fitness landscapes and conditions we study here ., However , in general the appropriateness of the gradient dynamics model should be checked whenever using Eq ( 4 ) with an arbitrary fitness function ., Fig 1A shows a schematic summarizing the gradient dynamics model , and noting the primary assumptions underlying this formulation ., Next , we choose functional forms for f , G , D , and D ˜ in Eqs ( 2 ) and ( 3 ) ., We start with the assumption that , for fixed values of the trait c an d its mean c ¯ , the population dynamics should have the form of a typical predator-prey system in the absence of evolutionary 
effects. Because the predator dynamics are not directly affected by evolutionary dynamics, we choose a simple form for predator growth consisting of a fixed death rate and a standard Holling Type II birth rate [32],

$$f(x, y) = \frac{a_2 x y}{1 + b_2 x} \quad (5)$$
$$\tilde{D}(y) = d_2 \quad (6)$$

The predator birth rate f saturates at large values of the prey density, which is more realistic than the standard Lotka-Volterra competition term xy in cases where the prey density is large or fluctuating [22]. A saturating interaction term ensures that solutions of the system remain bounded for a wider range of parameter values, a necessity for realistic models of long-term interactions [33]. For the prey net growth rate (Eq (1), the fitness) in the absence of the predator, we use the following functional forms,

$$G(x, \bar{c}, c) = \frac{a_1 \bar{c}}{1 + b_1 \bar{c}} \left( 1 - k_1 x (c - \bar{c}) \right) \quad (7)$$
$$D(c, \bar{c}) = d_1 \left( 1 - k_2 (c^2 - \bar{c}^2) + k_4 (c^4 - \bar{c}^4) \right). \quad (8)$$

The first term in Eq (7) specifies that the prey population density growth rate $r|_{c \to \bar{c}}$ depends only on a primary saturating contribution of the mean trait to the birth rate G.
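The functional forms in Eqs (5)–(8) can be written out directly. The following is a minimal Python sketch; all parameter values are illustrative assumptions, not values taken from the study.

```python
# Functional forms of Eqs (5)-(8); parameter values below are
# illustrative assumptions, not the values used in the study.
a1, b1, d1 = 1.0, 1.0, 0.5   # prey birth/death parameters
k1, k2, k4 = 1.0, 1.0, 1.0   # trait-coupling coefficients
a2, b2 = 1.0, 1.0            # predator-prey interaction parameters

def f(x, y):
    """Holling Type II predator-prey interaction, Eq (5): saturates in x."""
    return a2 * x * y / (1.0 + b2 * x)

def G(x, c, cbar):
    """Density-dependent prey birth rate, Eq (7)."""
    return (a1 * cbar / (1.0 + b1 * cbar)) * (1.0 - k1 * x * (c - cbar))

def D(c, cbar):
    """Quartic trait-dependent prey death rate, Eq (8)."""
    return d1 * (1.0 - k2 * (c**2 - cbar**2) + k4 * (c**4 - cbar**4))

def r(x, y, c, cbar):
    """Prey fitness, Eq (1): birth minus death minus predation."""
    return G(x, c, cbar) - D(c, cbar) - f(x, y)

# At c = cbar the trait-difference terms vanish, so G and D reduce to
# their population-level values:
print(G(1.0, 1.0, 1.0))  # a1*cbar/(1 + b1*cbar) = 0.5
print(D(1.0, 1.0))       # d1 = 0.5
```

Note how the saturating Holling form bounds both the birth-rate contribution of the trait and the predation term, which is what keeps the densities from jointly diverging.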
In other models a similar effect is achieved by modifying the mean trait evolution Eq (4), such that extremal values of the trait are disadvantaged [21]; alternative coupling methods based on exponential saturation would be expected to yield similar qualitative results [19]. However, the additional series terms in Eqs (7) and (8) ensure that any individual's fitness r may differ from that of the rest of the population depending on the difference between its trait value c and the population mean $\bar{c}$. Because the functional form of this difference is unknown, its contribution is expressed as a second-order truncation of the series difference of the form $r(c, \bar{c}) = \tilde{r}|_{c \to 0} + (\tilde{r}(c) - \tilde{r}(c)|_{c \to \bar{c}})$ (where $\tilde{r}$ represents an unscaled fitness function). This ensures that when $\dot{c} = 0$ or $c = \bar{c}$, the system reduces to a standard prey model with a Holling Type II increase in birth rate in response to increasing mean trait value [25]. In the results reported below, we observe that all dynamical variables remain bounded as long as parameter values are chosen such that the predator density does not equilibrate to zero. This is a direct consequence of our use of saturating Holling Type II functional forms in Eqs (7) and (8), which prevent the fitness landscape from increasing without bound at large $c, \bar{c}$ and also ensure that the predator and prey densities do not jointly diverge. That the dynamics should stay bounded due to saturating terms is justified by empirical studies of predator-prey systems [34, 35]; moreover, other saturating functional forms are expected to yield similar results if equivalent parameter values are chosen [33, 36]. The nonlinear dependence of the mortality rate Eq (8) on the trait is based on mechanistic models of mortality with individual variation [19, 37, 38]. The specific choice of a quartic in Eq (8) allows the fitness function r to have a varying number of real roots and local maxima in the domain $c$,
c ¯ > 0 , affording the system dynamical freedom not typically possible in predator prey models with constant or linear prey fitness—in particular , for different values of k2 , k4 the fitness landscape can have a single optimal phenotype , multiple optimal phenotypes , or no optimal intermediate values ., Because any even , continuous form for the fitness landscape can be approximated using a finite number of terms in its Taylor series around c = 0 , our choice of a quartic form simply constitutes truncation of this expansion at the quartic order in order to include the simplest case in which the fitness function admits multiple local maxima—for this reason , a quartic will always represent the leading-order series expansion of a fitness landscape with multiple local maxima ., Below , we observe numerically that ∣ c - c ¯ ∣ < 1 , ex post facto justifying truncation of the higher order terms in this series expansion ., However , if the trait value c was strictly bounded to only take non-zero values on a finite interval ( as opposed to the entire real line ) , then a second-order , quadratic fitness landscape would be sufficient to admit multiple local maxima ( at the edges of the interval ) 14 ., However , the choice here of an unbounded trait value c avoids creating boundary effects , and it has little consequence due to the steep decay of the quartic function at large values of |c| , which effectively confines the possible values of c ¯ accessible by the system ., In physics , similar reasons—unbounded domains , multiple local optima , and continuity—typically justify the use of quartic free energy functions in minimal models of systems exhibiting multiple energetic optima , such as the Ginzberg-Landau free energy used in models of superconducting phase transitions 39 ., We note that the birth rate Eq ( 7 ) contributes a density-dependent term to the fitness function even in the absence of predation ( y = 0 ) 21 ., Unlike the death rate function , the effect of 
the individual trait value on this term is directional: the sign of $c - \bar{c}$ determines whether birth rates increase or decrease. As the population density x increases, these directional effects are amplified, consistent with the observed effect of intraspecific competition and crowding in experimental studies of evolution [40, 41]. The chaotic dynamics reported below arise from this density-dependent term because the term prevents the Jacobian of the system (2), (3) and (4) from having a row and column with all zeros away from the diagonal; in this case, the prey trait (and thus evolutionary dynamics) would be uncoupled from the rest of the system, and would thus relax to a stable equilibrium (as is necessary for a first-order single-variable equation). In that case, $\bar{c}$ would essentially remain fixed and the predator-prey dynamics would become two-dimensional in x and y, precluding chaos. For similar reasons, density-dependent selection has been found to be necessary for chaos in some discrete-time evolutionary models, for which chaotic dynamics require a certain minimum degree of association between the fitness and the trait frequencies [42]. Inserting Eqs (5), (7) and (8) into Eq (1) results in a final fitness function of the form

$$r(x, y, \bar{c}, c) = \frac{a_1 \bar{c}}{1 + b_1 \bar{c}} \left( 1 - k_1 x (c - \bar{c}) \right) - d_1 \left( 1 - k_2 (c^2 - \bar{c}^2) + k_4 (c^4 - \bar{c}^4) \right) - \frac{a_2 x y}{1 + b_2 x}. \quad (9)$$

This fitness landscape is shown in Fig 1B, for typical parameter values and predator and prey densities used in the numerical results below. Depending on the current predator and prey densities, the local maximum of the system can appear in two different locations, which directly affects the dynamics described in the next section. Inserting Eq (9) into Eqs (2), (3) and (4) results in a final form for the dynamical equations,

$$\dot{x} = x \left( \frac{a_1 \bar{c}}{1 + b_1 \bar{c}} - \frac{a_2 y}{1 + b_2 x} - d_1 \right) \quad (10)$$
$$\dot{y} = y \left( \frac{y_a a_2 x}{1 + b_2 x} - d_2 \right) \quad (11)$$
$$\dot{\bar{c}} = \bar{c} V \left( 2 k_2 d_1 - 4 k_4 d_1 \bar{c}^2 - \frac{a_1 k_1 x}{1 + b_1 \bar{c}} \right). \quad (12)$$

Due to the Holling coupling terms, the form of these equations qualitatively resembles models of vertical, tritrophic food webs—the mean trait value $\bar{c}$ affects the growth rate of the prey, which in turn affects the growth rate of the predator [24, 32, 43]. The coupling parameter $y_a$ introduces asymmetry into the competition when $y_a \neq 1$; however, it essentially acts as a scale factor that only affects the amplitude of the y cycles and equilibria rather than the dynamics. Additionally, because the predator-prey interaction term Eq (5) is unaffected by the trait, our model contains no triple-product $x y \bar{c}$ interaction terms, which typically stabilize the dynamics. For our analysis of the system (10), (11) and (12), we first consider the case where evolution proceeds very slowly relative to population dynamics. In the case of both no evolution (V = 0) and no predation (y = 0), the prey growth Eq (10) advances along the one-dimensional nullcline $\dot{y}, \dot{\bar{c}} = 0$, $y = 0$. Depending on whether the fixed mean trait value $\bar{c}$ exceeds a critical value ($\bar{c}^{\dagger} \equiv d_1 / (a_1 - b_1 d_1)$), the prey density will either grow exponentially ($\bar{c} > \bar{c}^{\dagger}$) or collapse exponentially ($\bar{c} < \bar{c}^{\dagger}$) because the constant $\bar{c}$ remains too low to sustain the prey population in the absence of evolutionary adaptation. The requirement that $\bar{c} > \bar{c}^{\dagger}$ carries over to the case where a predator is added to the system but evolutionary dynamics remain fixed, corresponding to a two-dimensional system advancing along the two-dimensional nullcline $\dot{\bar{c}} = 0$. In this case, as long as $\bar{c} > \bar{c}^{\dagger}$, the prey density can exhibit continuous growth or cycling depending on the relative magnitudes of the various parameters in Eqs (10) and (11). The appearance and disappearance of these cycles is determined by a series of bifurcations that depend on
the values of c ¯ and b1 , b2 relative to the remaining parameters a1 , a2 , d1 , d2 ( S1A Appendix ) ., In the full three-variable system ( 10 ) , ( 11 ) and ( 12 ) , c ¯ passes through a range of values as time progresses , resulting in more complex dynamics than those observed in the two-dimensional case ., For very small values of V , the evolutionary dynamics c ¯ ˙ are slow enough that the system approaches the equilibrium predicted by the two-variable model with c ¯ constant ., The predator and prey densities initially grow , but the prey trait value does not change fast enough for the prey population growth to sustain—eventually resulting in extinction of both the predator and prey ., However , if V takes a slightly larger value , so that the mean trait value can gradually change with a growing prey population density ( due to the density-dependent term in Eq ( 10 ) ) , then the population dynamics begin to display regular cycling with fixed frequencies and amplitudes ( Fig 2A , top ) ., This corresponds to a case where the evolutionary dynamics are slow compared to the ecological dynamics , but not so slow as to be completely negligible ., Finally , when V is the same order of magnitude as the parameters governing the ecological dynamics , the irregular cycles become fully chaotic , with both amplitudes and frequencies that vary widely over even relatively short time intervals ( Fig 2A , bottom ) ., Typically , the large V case would correspond to circumstances in which the prey population develops a large standing genetic variation 10 , 44 ., That the dynamics are chaotic , rather than quasi-periodic , is suggested by the presence of multiple broad , unevenly-spaced peaks in the power spectrum 45 ( Figure A in S1E Appendix ) , as well as by numerical tabulation of the Lyapunov spectrum ( described further below ) ., Due to the hierarchical coupling of Eqs ( 10 ) , ( 11 ) and ( 12 ) , when plotted in three-dimensions the chaotic dynamics settle onto a 
strange attractor that resembles the "teacup" attractor found in models of tritrophic food webs [24, 46] (Fig 2B). Poincaré sections through various planes of the teacup all appear linear, suggesting that the strange attractor is effectively two-dimensional—consistent with pairings of timescales associated with different dynamical variables at different points in the process (Figure B in S1E Appendix). In the "rim" of the teacup, the predator density changes slowly relative to the prey density and mean trait value. This is visible in a projection of the attractor into the $x$-$\bar{c}$ plane (Fig 2B, bottom inset). However, in the "handle" of the teacup, the mean trait value varies slowly relative to the ecological dynamics ($\dot{\bar{c}} \approx 0$), resulting in dynamics that qualitatively resemble the two-dimensional "reduced" system described above for various fixed values of $\bar{c}$ (Fig 2B, top inset). The structure of the attractor suggests that the prey alternately enters periods of evolutionary change and periods of competition with the predator. A closer inspection of a typical transition reveals that this "two timescale" dynamical separation is responsible for the appearance of chaos in the system (Fig 3A). As the system explores configuration space, it reaches a metastable configuration corresponding to a high mean trait value $\bar{c}$, which causes the prey density to nearly equilibrate to a low density due to the negative density-dependent term in Eq (10). During this period (the "rim" of the teacup), the predator density gradually declines due to the lack of prey. However, once the predator density becomes sufficiently small, the prey population undergoes a sudden population increase, which triggers a period of rapid cycling in the system (the "handle" of the teacup attractor). During this time, the predator density continuously increases, causing an equivalent decrease in the prey density that resets the cycle to the metastable
state ., The sudden increase in the prey population at low predator densities can be understood from how the fitness function r ( from Eq ( 9 ) ) changes over time ., Fig 3B shows a kymograph of the log-scaled fitness Eq ( 9 ) as a function of individual trait values c , across each timepoint and corresponding set of ( x , y , c ¯ ) values given in panel A . Overlaid on this time-dependent fitness landscape are curves indicating the instantaneous location of the local maximum ( black ) and minimum ( white ) ., By comparing panels A and B , it is apparent that the mean trait value during the “metastable” period of the dynamics stays near the local maximum of the fitness function , which barely varies as the predator density y changes ., However , when y ( t ) ≈ 0 . 25 , the fitness function changes so that the local minimum and local maximum merge and disappear from the system , leading to a new maximum spontaneously appearing at c = 0 ., Because V is large enough ( for these parameters ) that the gradient dynamics occur over timescales comparable to the competition dynamics , the system tends to move rapidly towards this new maximum in the fitness landscape , resulting in rapidly-changing dynamics in x and c ¯ ., Importantly , because of the symmetric coupling of the prey fitness landscape r to the prey density x , this rapid motion resets the fitness landscape so that the maximum once again occurs at the original value , resulting in a period of rapid cycling ., The fitness landscape at two representative timepoints in the dynamics is shown in Fig 3C ., That the maxima in the fitness Function ( 9 ) suddenly change locations with continuous variation in x , y is a direct consequence of the use of a high-order ( here , quartic ) polynomial in c to describe the fitness landscape ., The quartic represents the simplest analytic function that admits more than one local maxima in its domain , and the number of local maxima is governed by the relative signs of the 
coefficients of the ( c 2 - c ¯ 2 ) and ( c 4 - c ¯ 4 ) terms in Eq ( 9 ) , which change when the system enters the rapid cycling portion of the chaotic dynamics at t = 500 in Fig 3A ., This transition marks the mean prey trait switching from being drawn ( via the gradient dynamics ) to a single fitness peak at an intermediate value of the trait ceq ≈ 0 . 707 to being drawn instead to one of two peaks: the existing peak , or a new peak at the origin ., Thus the metastable period of the dynamics corresponds to a period of stabilizing selection: if the fitness landscape were frozen in time during this period , then an ensemble of prey would all evolve to a single intermediate trait value corresponding to the location of the global maximum ., Conversely , if the fitness landscape were held fixed in the multipeaked form it develops during a period of rapid cycling , given sufficient time an ensemble of prey would evolve towards subpopulations with trait values at the location of each local fitness maximum—representing disruptive selection ., That the fitness landscape does not remain fixed for extended durations in either a stabilizing or disruptive state—but rather switches between the two states due to the prey density-dependent term in Eq ( 9 ) — underlies the onset of chaotic cycling in the model ., Density-dependent feedback similarly served to induce chaos in many early discrete-time ecosystem models 23 ., However , the “two timescale” form of the chaotic dynamics and strange attractor here is a direct result of reversible transitions between stabilizing and disruptive selection ., If the assumptions underlying the gradient dynamics model do not strictly hold—if the additive genetic variance V slowly varies via an additional dynamical equation , or if the initial conditions are such that significant skewness would be expected to persist in the phenotypic distribution , then the chaotic dynamics studied here would be transient rather than indefinite ., While the 
general stability analysis shown above ( and in the S1 Appendix ) would still hold , additional dynamical equations for V or for high-order moments of the trait distribution would introduce additional constraints on the values of the parameters , which would ( in general ) increase the opportunities for the dynamics to become unstable and lead to diverging predator or prey densities ., However , in some cases these additional effects may actually serve to stabilize the system against both chaos and divergence ., For example , if additional series terms were included in Eq ( 8 ) such that the dependence of mortality rate on c ¯ and c had an upper asymptote 25 , then c ¯ ˙ = 0 would be true for a larger range of parameter values—resulting in the dynamical system remaining planar for a larger range of initial conditions and parameter values , precluding chaos ., The transition between stabilizing and disruptive selection that occurs when the system enters a period of chaotic cycling is strongly reminiscent of a first-order phase transition ., Many physical systems can be described in terms of a free energy landscape , the negative gradient of which determines the forces acting on the system ., Minima of the free energy landscape correspond to equilibrium points of the system , which the dynamical variables will approach with first-order dynamics in an overdamped limit ., When a physical system undergoes a phase transition—a qualitative change in its properties as a single “control” parameter , an externally-manipulable variable such as temperature , is smoothly varied—the transition can be understood in terms of how the control parameter changes the shape of the free energy landscape ., The Landau free energy model represents the simplest mathematical framework for studying such phase transitions: a one-dimensional free energy landscape is defined as a function of the control parameter and an additional independent variable , the “order parameter , ” a derived 
quantity ( such as particle density or net magnetization ) with respect to which the free energy can have local minima or maxima ., In a first-order phase transition in the Landau model , as the control parameter monotonically changes the relative depth of a local minimum at the origin decreases , until a new local minimum spontaneously appears at a fixed nonzero value of the order parameter—resulting in dynamics that suddenly move towards the new minimum , creating discontinuities in thermodynamic properties of the system such as the entropy 47 ., First-order phase transitions are universal physical models , which have been used to describe a broad range of processes spanning from superconductor breakdown 48 to primordial black hole formation in the early universe 49 ., In the predator-prey model with prey evolution , the fitness function is analogous to the free energy , with the individual trait value c serving as the “order parameter” for the system ., The control parameter for the transition is the prey density , x , which directly couples into the dynamics via the density-dependent term in Eq ( 7 ) ., Because the fitness consists of a linear combination of this term in Eq ( 7 ) and a quartic landscape Eq ( 8 ) , the changing prey density “tilts” the landscape and provokes the appearance of the additional , disruptive peak visible in Fig 3C ., The appearance and disappearance of local maxima as the system switches between stabilizing and disruptive selection is thus analogous to a first-order phase transition , with chaotic dynamics being a consequence of repeated increases and decreases of the control parameter x above and below the critical prey densities x* , x** at which the phase transition occurs ., Similar chaotic dynamics emerge from repeated first-order phase transitions in networks of coupled oscillators , which may alternate between synchronized and incoherent states that resemble the “metastable” and “rapid cycling” portions of the predator-prey 
dynamics 50 ., The analogy between a first-order phase transition and the onset of disruptive selection can be used to study the chaotic dynamics in terms of dynamical hysteresis , a defining feature of such phase transitions 47 ., For different values of x , the three equilibria corresponding to the locations of the local minima and maxima of the fitness landscape , ceq , can be calculated from the roots of the cubic in Eq ( 12 ) ., The resulting plots of ceq vs x in Fig 4 are generated by solving for the roots in the limit of fast prey equilibration , c ¯ → c e q , which holds in the vicinity of the equilibria ( S1B Appendix ) ., The entry into the transient chaotic cycling occurs when x increases gradually and shifts ceq with it; x eventually attains a critical value x* ( x* ≈ 0 . 45 for the parameters used in the figures ) , causing ceq to jump from its first critical value c* to the origin ( the red “forward” branch in Fig 4 ) ., This jump causes rapid re-equilibration of c ¯ ( t ) , resulting in the rapid entry into cycling observable in Fig 3A ., However , x cannot increase indefinitely due to predation; rather , it decreases until it reaches a second critical value x** , at which point ceq jumps back from the origin to a positive value ( the blue “return” branch in Fig 4; x** = 0 . 
192 for these parameter values ) ., This second critical point marks the return to the metastable dynamics in Fig 3A ., This asymmetry in the forward and backwards dynamics of x lead to dynamical time-irreversibility ( hysteresis ) and the jagged , sawtooth-like cycles visible in the dynamics of the full system ., Because the second jump in ceq is steeper , the parts of the trajectories associated with the “return” transition in Fig 3A appear steeper ., Additionally , the maximum value obtained by c ¯ ( t ) anywhere on the attractor , c e q m a x , is determined by the limiting value o | Introduction, Model, Results, Discussion | In many ecosystems , natural selection can occur quickly enough to influence the population dynamics and thus future selection ., This suggests the importance of extending classical population dynamics models to include such eco-evolutionary processes ., Here , we describe a predator-prey model in which the prey population growth depends on a prey density-dependent fitness landscape ., We show that this two-species ecosystem is capable of exhibiting chaos even in the absence of external environmental variation or noise , and that the onset of chaotic dynamics is the result of the fitness landscape reversibly alternating between epochs of stabilizing and disruptive selection ., We draw an analogy between the fitness function and the free energy in statistical mechanics , allowing us to use the physical theory of first-order phase transitions to understand the onset of rapid cycling in the chaotic predator-prey dynamics ., We use quantitative techniques to study the relevance of our model to observational studies of complex ecosystems , finding that the evolution-driven chaotic dynamics confer community stability at the “edge of chaos” while creating a wide distribution of opportunities for speciation during epochs of disruptive selection—a potential observable signature of chaotic eco-evolutionary dynamics in experimental studies . 
| Evolution is usually thought to occur very gradually , taking millennia or longer in order to appreciably affect a species survival mechanisms ., Conversely , demographic shifts due to predator invasion or environmental change can occur relatively quickly , creating abrupt and lasting effects on a species survival ., However , recent studies of ecosystems ranging from the microbiome to oceanic predators have suggested that evolutionary and ecological processes can often occur over comparable timescales—necessitating that the two be addressed within a single , unified theoretical framework ., Here , we show that when evolutionary effects are added to a minimal model of two competing species , the resulting ecosystem displays erratic and chaotic dynamics not typically observed in such systems ., We then show that these chaotic dynamics arise from a subtle analogy between the evolutionary concept of fitness , and the concept of the free energy in thermodynamical systems ., This analogy proves useful for understanding quantitatively how the concept of a changing fitness landscape can confer robustness to an ecosystem , as well as how unusual effects such as history-dependence can be important in complex real-world ecosystems ., Our results predict a potential signature of a chaotic past in the distribution of timescales over which new species can emerge during the competitive dynamics , a potential waypoint for future experimental work in closed ecosystems with controlled fitness landscapes . | ecology and environmental sciences, predator-prey dynamics, population dynamics, systems science, mathematics, population biology, thermodynamics, computer and information sciences, ecosystems, dynamical systems, free energy, community ecology, physics, population metrics, ecology, predation, natural selection, trophic interactions, biology and life sciences, physical sciences, population density, evolutionary biology, evolutionary processes | null |
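The eco-evolutionary system of this row (Eqs (10)–(12) in the sections above) can be integrated numerically with a simple fixed-step scheme. This is a sketch only: the parameter values and initial condition below are illustrative assumptions, not the study's values, and a short integration horizon is used.

```python
import math

# Fixed-step RK4 integration of the eco-evolutionary system, Eqs (10)-(12).
# All parameter values are illustrative assumptions, not the paper's values.
a1, b1, a2, b2 = 2.0, 1.0, 1.0, 1.0
d1, d2, ya, V = 0.5, 0.4, 1.0, 0.5
k1, k2, k4 = 1.0, 1.0, 1.0

def velocity(state):
    """Right-hand sides of Eqs (10)-(12) for state = (x, y, cbar)."""
    x, y, cbar = state
    dx = x * (a1 * cbar / (1 + b1 * cbar) - a2 * y / (1 + b2 * x) - d1)
    dy = y * (ya * a2 * x / (1 + b2 * x) - d2)
    dc = cbar * V * (2 * k2 * d1 - 4 * k4 * d1 * cbar**2
                     - a1 * k1 * x / (1 + b1 * cbar))
    return (dx, dy, dc)

def rk4_step(state, h):
    """One classical Runge-Kutta step of size h."""
    k1v = velocity(state)
    k2v = velocity(tuple(s + 0.5 * h * k for s, k in zip(state, k1v)))
    k3v = velocity(tuple(s + 0.5 * h * k for s, k in zip(state, k2v)))
    k4v = velocity(tuple(s + h * k for s, k in zip(state, k3v)))
    return tuple(s + h / 6 * (p + 2 * q + 2 * u + w)
                 for s, p, q, u, w in zip(state, k1v, k2v, k3v, k4v))

state = (0.5, 0.5, 0.5)  # (prey density, predator density, mean trait)
for _ in range(500):     # integrate to t = 5 with h = 0.01
    state = rk4_step(state, 0.01)

# Densities and trait should remain positive and finite on this horizon.
print(all(math.isfinite(v) and v > 0 for v in state))
```

Because each equation has the multiplicative form $\dot{z} = z\,g(\cdot)$, trajectories started at positive values stay positive under exact integration; diagnosing chaos itself would additionally require long trajectories and a Lyapunov-exponent or power-spectrum analysis, as described in the sections field above.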
1522 | journal.pgen.1000098 | 2008 | Evaluating Statistical Methods Using Plasmode Data Sets in the Age of Massive Public Databases: An Illustration Using False Discovery Rates | "Omic" technologies (genomic, proteomic, etc.) have led to high dimensional experiments (HDEs) that simultaneously test thousands of hypotheses. Often these omic experiments are exploratory, and promising discoveries demand follow-up laboratory research. Data from such experiments require new ways of thinking about statistical inference and present new challenges. For example, in microarray experiments an investigator may test thousands of genes aiming to produce a list of promising candidates for differential genetic expression across two or more treatment conditions. The larger the list, the more likely some genes will prove to be false discoveries, i.e. genes not actually affected by the treatment. Statistical methods often estimate both the proportion of tested genes that are differentially expressed due to a treatment condition and the proportion of false discoveries in a list of genes selected for follow-up research. Because keeping the proportion of false discoveries small ensures that costly follow-on research will yield more fruitful results, investigators should use some statistical method to estimate or control this proportion. However, there is no consensus on which of the many available methods to use [1]. How should an investigator choose? Although the performance of some statistical methods for analyzing HDE data has been evaluated analytically, many methods are commonly evaluated using computer simulations. An analytical evaluation (i.e., one using mathematical derivations to assess the accuracy of estimates) may require either difficult-to-verify assumptions about a statistical model that generated the data or a resort to asymptotic properties of a method. Moreover, for some methods an analytical evaluation may be mathematically intractable. Although evaluations using computer simulations may overcome the challenge of intractability, most simulation methods still rely on the assumptions inherent in the statistical models that generated the data. Whether these models accurately reflect reality is an open question, as is how to determine appropriate parameters for the model, what realistic "effect sizes" to incorporate in selected tests, and if and how to incorporate correlation structure among the many thousands of observations per unit [2]. Plasmode data sets may help overcome the methodological challenges inherent in generating realistic simulated data sets. Cattell and Jaspers [3] made early use of the term when they defined a plasmode as "a set of numerical values fitting a mathematico-theoretical model. That it fits the model may be known either because simulated data is produced mathematically to fit the functions, or because we have a real—usually mechanical—situation which we know with certainty must produce data of that kind." Mehta et al. (p. 946) [2] more concisely refer to a plasmode as "a real data set whose true structure is known." Plasmodes can accommodate unknown correlation structures among genes, unknown distributions of effects among differentially expressed genes, an unknown null distribution of gene expression data, and other aspects that are difficult to model using theoretical distributions. Not surprisingly, the use of plasmode data sets is gaining traction as a technique for simulating reality-based data from HDEs [4]. A plasmode data set can be constructed by spiking specific mRNAs into a real microarray data set [5]. Evaluating whether a particular method correctly detects the spiked mRNAs provides information about the method's ability to detect gene expression. A plasmode data set can also be constructed by using a current data set as a template for simulating new data sets for which some truth is known. Although in early microarray experiments sample sizes were too small (often only 2 or 3 arrays per treatment condition) to use as a basis for a population model for simulating data sets, larger HDE data sets have recently become publicly available, making their use feasible for simulation experiments. In this paper, we propose a technique to simulate plasmode data sets from previously produced data. The source-data experiment was conducted at the Center for Nutrient–Gene Interaction (CNGI, www.uab.edu/cngi) at the University of Alabama at Birmingham. We use a data set from this experiment as a template for producing a plasmode null data set, and we use the distribution of effect sizes from the experiment to select expression levels for differentially expressed genes. The technique is intuitively appealing, relatively straightforward to implement, and can be adapted to HDEs in contexts other than microarray experiments. We illustrate the value of plasmodes by comparing 15 different statistical methods for estimating quantities of interest in a microarray experiment, namely the proportion of true nulls (hereafter denoted π0), the false discovery rate (FDR) [6], and a local version of FDR (LFDR) [7]. This type of analysis enables us, for the first time, to compare key omics research tools according to their performance on data that, by definition, are realistic exemplars of the types of data biologists will encounter. The illustrations given here provide some insight into the relative performance characteristics of the 15 methods in some circumstances, but definitive claims regarding uniform superiority of one method over another would require more extensive evaluations over multiple types of data sets. Steps for plasmode creation that are described herein are relatively straightforward. First, an HDE data set is obtained that reflects the type of experiment for which statistical methods will be used to estimate quantities of interest. Data from a rat microarray experiment at CNGI were used here. Other organisms might produce data with different structural characteristics, and methods may perform differently on such data. The CNGI data were obtained from an experiment that used rats to test the pathways and mechanisms of action of certain phytoestrogens [8, 9]. In brief, rats were divided into two large groups, the first sacrificed at day 21 (typically the day of weaning for rats), the second sacrificed at day 50 (the day, corresponding to late human puberty, when rats are most susceptible to chemically induced breast cancer). Each of these groups was subdivided into smaller groups according to diet. At 21 and 50 days, respectively, the relevant tissues from these rat groups were appropriately processed, and gene expression levels were extracted using GCOS (GeneChip Operating Software). We exported the microarray image (*.CEL) files from GCOS and analyzed them with the Affymetrix package of Bioconductor/R to extract the MAS 5.0 processed expression intensities. The arrays and data were investigated for outliers using Pearson's correlation, spatial artifacts [10], and a deleted residuals approach [11]. It is important to note that only one normalization method was considered, but the methods could be compared on RMA-normalized data as well. In fact, comparisons of methods' performances on data from different normalization techniques could be done using the plasmode technique. Second, an HDE data set that compares effects of a treatment(s) is analyzed and the vector of effect sizes is saved. The effect size used here was a simple standardized mean difference (i.e., a two-sample t-statistic), but any meaningful metric could be used. Plasmodes, in fact, could be used to compare the performance of statistical methods when different statistical tests were used to produce the P-values. We chose two sets of HDE data as templates to represent two distributions of effect sizes and two different null distributions. We refer to the 21-day experiment using the control group (8 arrays) and the treatment group (EGCG supplementation, 10 arrays) as data set 1, and the 50-day experiment using the control group (10 arrays) and the treatment group (resveratrol supplementation, 10 arrays) as data set 2.
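The second step above, saving a vector of per-gene standardized mean differences, can be sketched as follows. This is a minimal illustration with synthetic stand-in expression values, not the authors' code:

```python
import numpy as np

def effect_sizes(trt, ctrl):
    """Per-gene standardized mean differences (two-sample pooled t-statistics).

    trt, ctrl: arrays of shape (n_genes, n_arrays) of expression values.
    Returns a vector delta with one effect size per gene.
    """
    n_t, n_c = trt.shape[1], ctrl.shape[1]
    mean_diff = trt.mean(axis=1) - ctrl.mean(axis=1)
    s2_t = trt.var(axis=1, ddof=1)
    s2_c = ctrl.var(axis=1, ddof=1)
    # Pooled sample variance for each gene.
    s2_pooled = ((n_t - 1) * s2_t + (n_c - 1) * s2_c) / (n_t + n_c - 2)
    return mean_diff / np.sqrt(s2_pooled * (1.0 / n_t + 1.0 / n_c))

rng = np.random.default_rng(0)
ctrl = rng.normal(0.0, 1.0, size=(1000, 8))   # e.g. 8 control arrays
trt = rng.normal(0.0, 1.0, size=(1000, 10))   # e.g. 10 treatment arrays
trt[:50] += 1.5                               # 50 genes truly shifted
delta = effect_sizes(trt, ctrl)
print(delta[:5])
```

The resulting vector delta is what the plasmode procedure later samples from when spiking effects into a null data set.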
There were 31,042 genes on each array, and two-sample pooled-variance t-tests for differential expression were used to create a distribution of P-values. Histograms of the distributions for both data sets are shown in Figure 1. The distribution of P-values for data set 1 shows a stronger signal (i.e., a larger collection of very small P-values) than that for data set 2, suggesting either that more genes are differentially expressed or that those that are expressed have a larger-magnitude treatment effect. This second step provided a distribution of effect sizes from each data set. Next, create the plasmode null data set. For each of the HDE data sets, we created a random division of the control group of microarrays into two sets of equal size. One consideration in doing so is that if some arrays in the control group are 'different' from others due to some artifact in the experiment, then the null data set can be sensitive to how the arrays are divided into two sets. Such artifacts can be present in data from actual HDEs, so this issue is not a limitation of plasmode use but rather an attribute of it; that is, plasmodes are designed to reflect actual structure (including artifacts) in a real data set. We obtained the plasmode null data set from data set 1 by dividing the day-21 control group of 8 arrays into two sets of 4, and for data set 2 by dividing the control group of 10 arrays into two sets of 5 arrays. Figure 2 shows the two null distributions of P-values obtained using the two-sample t-test on the plasmode null data sets. Both null distributions are, as expected, approximately uniform, but sampling variability allows for some deviation from uniformity. A proportion 1−π0 of effect sizes were then sampled from their respective distributions using a weighted probability sampling technique described in the Methods section. The choice of sampling probabilities can be a tuning parameter in the plasmode creation procedure. The selected effects were incorporated into the associated null distribution for a randomly selected proportion 1−π0 of genes, in a manner also described in the Methods section. The proportion of genes selected may depend upon how many genes in an HDE are expected to be differentially expressed; this may determine whether a proportion equal to 0.01 or 0.5 is chosen to construct a plasmode. Proportions between 0.05 and 0.2 were used here, as they are in the range of estimated proportions of differentially expressed genes that we have seen from the many data sets we have analyzed. Finally, the plasmode data set was analyzed using a selected statistical method. We used two-sample t-tests to obtain a plasmode distribution of P-values for each plasmode data set, because the methods compared herein all analyze a distribution of P-values from an HDE. P-values were declared statistically significant if smaller than a threshold τ. Box 1 summarizes symbol definitions. When comparing the 15 statistical methods, we used three values of π0 (0.8, 0.9, and 0.95) and two thresholds (τ = 0.01 and 0.001). For each choice of π0 and threshold τ, we ran B = 100 simulations. All 15 methods provided estimates of π0, 14 provided estimates of FDR, and 7 provided estimates of LFDR. Because the true values of π0 and FDR are known for each plasmode data set, we can compare the accuracy of estimates from the different methods. There are two basic strategies for estimating FDR, both predicated on an estimated value for π0: the first uses equation (1) below, the second a mixture-model approach. Let PK = M/K be the proportion of tests that were declared significant at a given threshold, where M and K were defined with respect to quantities in Table 1.
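The construction just described, a random split of the control arrays into a null data set followed by spiking sampled effects into a fraction 1−π0 of genes, can be sketched as below. Shifting one pseudo-group by δ times the standard error is one simple way to realize a standardized effect of size δ; the authors' exact incorporation scheme is described in their Methods, so treat this as an illustrative assumption:

```python
import numpy as np

def make_plasmode(ctrl, effects, pi0=0.9, seed=1):
    """Build one plasmode data set from a control-only expression matrix.

    ctrl:    (n_genes, n_arrays) control expression values (n_arrays even).
    effects: pool of effect sizes to sample from (e.g. observed t-statistics).
    Returns (group_a, group_b, is_alt) where is_alt marks spiked genes.
    """
    rng = np.random.default_rng(seed)
    n_genes, n_arrays = ctrl.shape
    cols = rng.permutation(n_arrays)                     # random division of arrays
    a = ctrl[:, cols[: n_arrays // 2]].copy()
    b = ctrl[:, cols[n_arrays // 2:]].copy()

    is_alt = np.zeros(n_genes, dtype=bool)
    alt_idx = rng.choice(n_genes, size=round((1 - pi0) * n_genes), replace=False)
    is_alt[alt_idx] = True

    # Spike a sampled effect into each selected gene: shift group B by
    # delta * SE so the standardized mean difference is roughly delta.
    n = b.shape[1]
    for g in alt_idx:
        delta = rng.choice(effects)
        se = ctrl[g].std(ddof=1) * np.sqrt(2.0 / n)
        b[g] += delta * se
    return a, b, is_alt

rng = np.random.default_rng(2)
a, b, is_alt = make_plasmode(rng.normal(size=(1000, 10)),
                             rng.normal(3.0, 0.5, 500), pi0=0.9)
print(is_alt.sum())  # 100 of 1000 genes spiked when pi0 = 0.9
```

Because is_alt records exactly which genes carry an effect, the true π0 and the true false discovery proportion of any subsequent analysis are known by construction.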
Then one estimate for FDR at this threshold is (1) FDR̂(τ) = π̂0 τ / PK. The mixture-model (usually a two-component mixture) approach uses a model of the form (2) f(p) = π0 f0(p) + (1 − π0) f1(p; θ), where f is a density, p represents a P-value, f0 is the density of a P-value under the null hypothesis, f1 the density of a P-value under the alternative hypothesis, π0 is interpreted as before, and θ is a (possibly vector-valued) parameter of the distribution. Since valid P-values are assumed, f0 is a uniform density. LFDR is defined with respect to this mixture model as (3) LFDR(p) = π0 f0(p) / f(p). FDR is defined similarly except that the densities in (3) are replaced by the corresponding cumulative distribution functions (CDFs), that is, (4) FDR(τ) = π0 τ / [π0 τ + (1 − π0) F1(τ)], where F1(τ) is the CDF under the alternative hypothesis, evaluated at a chosen threshold τ. (There are different definitions of FDR, and the definition in (4) is, under some conditions, the definition of a positive false discovery rate [12]. However, in cases with a large number of genes many of the variants of FDR are very close [13].) The methods are listed for quick reference in Table 2. Methods 1–8 use different estimates for π0 and, as implemented herein, proceed to estimate FDR using equation (1). Method 9 uses a unique algorithm to estimate LFDR and does not supply an estimate of FDR. Methods 10–15 are based on a mixture-model framework and estimate FDR and LFDR using equations (3) and (4), where the model components are estimated using different techniques. All methods were implemented using tuning-parameter settings from the respective paper, or ones supplied as default values with the code in cases where the code was published online. First, to compare their differences, we used the 15 methods to analyze the original two data sets, with data set 1 having a "stronger signal" (i.e., lower estimates of π0 and FDR). Estimates of π0 from methods 3 through 15 ranged from 0.742 to 0.837 for data set 1 and 0.852 to 0.933 for data set 2.
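The plug-in strategy of equation (1) can be sketched in a few lines. The π0 estimator shown (fraction of P-values above a cutoff λ, rescaled) reflects the general idea behind several of the compared methods but is not necessarily any of methods 1–15 as implemented; λ is a tuning parameter:

```python
import numpy as np

def fdr_plugin(pvals, tau, pi0_hat):
    """Equation (1): FDR-hat = pi0-hat * tau / P_K, with P_K = M / K."""
    K = len(pvals)
    M = int(np.sum(pvals <= tau))
    if M == 0:
        return 0.0
    return pi0_hat * tau / (M / K)

def pi0_simple(pvals, lam=0.5):
    """A simple pi0 estimate: fraction of P-values above lambda, rescaled.
    Large P-values come almost entirely from true nulls, which are uniform."""
    return float(np.mean(pvals > lam)) / (1.0 - lam)

rng = np.random.default_rng(0)
null_p = rng.uniform(size=9000)             # 90% true nulls: uniform P-values
alt_p = rng.beta(0.1, 5.0, size=1000)       # alternatives pile up near zero
pvals = np.concatenate([null_p, alt_p])
pi0_hat = pi0_simple(pvals)
print(round(pi0_hat, 2), round(fdr_plugin(pvals, 0.01, pi0_hat), 3))
```

On this synthetic mixture the estimate recovers π0 ≈ 0.9, and the plug-in FDR at τ = 0.01 is close to the realized false discovery proportion among the rejected tests.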
(Methods 1 and 2 are designed to control, rather than estimate, FDR and are designed to be conservative; hence, their estimates were much closer to 1.) Results of these analyses can be seen in the Supplementary Tables S1 and S2. Next, using the two template data sets we constructed plasmode data sets in order to compare the performance of the 15 methods for estimating π0 (all methods), FDR (all methods except method 9), and LFDR (methods 9–15). Figures 3 and 4 show some results based on data set 2. More results are available in Figures S1, S2, S3, S4, S5, and S6. Figure 3 shows the distribution of 100 estimates for π0 using data set 2 when the true value of π0 is equal to 0.8 and 0.9. Methods 1 and 2 are designed to be conservative (i.e., true values are overestimated). With a few exceptions, the other methods tend to be conservative when π0 = 0.8 and liberal (the true value is underestimated) when π0 = 0.9. The variability of estimates for π0 is similar across methods, but some plots show a slightly larger variability for methods 12 and 15 when π0 = 0.9. Figure 4 shows the distribution of estimates for FDR and LFDR at the two thresholds. The horizontal lines in the plots show the mean (solid line) and the minimum and maximum (dashed lines) of the true FDR value for the 100 simulations. A true value for LFDR is not known in the simulation procedure. The methods tend to be conservative (overestimate FDR) when the threshold τ = 0.01 and are more accurate at the lower threshold. Estimates of FDR are more variable for methods 11, 13, and 14, and estimates for LFDR more variable for methods 13 and 14, with the exception of a few unusual estimates obtained from method 9. The high variability of FDR estimates from method 11 may be due to a "less than optimal" choice of the spanning parameter in a numerical smoother (see also Pounds and Cheng [27]). We did not attempt to tune any of the methods for enhanced performance. Researchers have been evaluating the performance of the burgeoning number of statistical methods for the analysis of high dimensional omic data, relying on a mixture of mathematical derivations, computer simulations, and, sadly, often single-dataset illustrations or mere ipse dixit assertions. Recognizing that the latter two approaches are simply unacceptable for method validation [2] and that the first two suffer from the limitations described earlier, an increasing number of investigators are turning to plasmode datasets for method evaluation [28]. An excellent example is the Affycomp website (http://affycomp.biostat.jhsph.edu/) that allows investigators to compare different microarray normalization methods on datasets of known structure. Other investigators have also recently used plasmode-like approaches which they refer to as 'data perturbation' [29, 30], yet it is not clear that these 'perturbed datasets' can distinguish true from false positives, suggesting a greater need for articulation of principles or standards of plasmode generation. As more high dimensional experiments with larger sample sizes become available, researchers can use a new kind of simulation experiment to evaluate the performance of statistical analysis methods, relying on actual data from previous experiments as a template for generating new data sets, referred to herein as plasmodes. In theory, the plasmode method outlined here will enable investigators to choose on an empirical basis the most appropriate statistical method for their HDEs. Our results also suggest that large, searchable databases of plasmode data sets would help investigators find existing data sets relevant to their planned experiments. (We have already implemented a similar idea for planning sample size requirements in HDEs [31, 32].
) Investigators could then use those data sets to compare and evaluate several analytical methods to determine which best identifies genes affected by the treatment condition. Or investigators could use the plasmode approach on their own data sets to glean some understanding of how well a statistical method works on their type of data. Our results compare the performance of 15 statistical methods as they process the specific plasmode data sets constructed from the CNGI data. Although identifying one uniformly superior method (if there is one) is difficult within the limitations of this one comparison, our results suggest that certain methods could be sensitive to tuning parameters or different types of data sets. A comparison over multiple types of source data sets with different distributions of effect sizes could add the detail necessary to clearly recommend certain methods over others [1]. Other papers have used simulation studies to compare the performance of methods for estimating π0 and FDR (e.g., Hsueh et al. [33]; Nguyen [34]; Nettleton et al. [35]). We compared methods that use the distribution of P-values, as was done in Broberg [36] and Yang and Yang [37]. Unlike our plasmode approach, most earlier comparison studies used normal distributions to simulate gene expression data and incorporated dependence using a block-diagonal correlation structure as in Allison et al. [26]. A key implication and recommendation of our paper is that, as data from the growing number of HDEs are made publicly available, researchers may identify a previous HDE similar to one they are planning or have recently conducted and use data from these experiments to construct plasmode data sets with which to evaluate candidate statistical methods. This will enable investigators to choose the most appropriate method(s) for analyzing their own data and thus to increase the reliability of their research results. In this manner, statistical science (as a discipline that studies the methods of statistics) becomes as much an empirical science as a theoretical one. The quantities in Table 1 are those for a typical microarray experiment. Let N = A + B and M = C + D, and note that both N and M will be known, with K = N + M. However, the number of false discoveries is equal to an unknown number C.
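With plasmode truth labels in hand, the Table 1 quantities, and hence the realized false-discovery proportion C/M, can be tallied directly. A sketch (variable names are ours, not the paper's notation beyond N, M, C):

```python
import numpy as np

def table1_counts(pvals, is_alt, tau):
    """Tally the Table 1 quantities for one plasmode analysis.

    pvals:  per-gene P-values; is_alt: True where an effect was spiked.
    Returns (N, M, C, fdp): N not significant, M declared significant,
    C false discoveries among the M, and fdp = C/M (0 if M == 0).
    """
    sig = pvals <= tau
    M = int(sig.sum())               # declared significant
    N = int((~sig).sum())            # not significant
    C = int((sig & ~is_alt).sum())   # significant but truly null
    fdp = C / M if M > 0 else 0.0
    return N, M, C, fdp

pvals = np.array([0.001, 0.2, 0.004, 0.9, 0.03])
is_alt = np.array([True, False, False, False, True])
print(table1_counts(pvals, is_alt, tau=0.01))  # -> (3, 2, 1, 0.5)
```

Averaging fdp over the B = 100 plasmode replicates gives the "true FDR" against which each method's estimate is judged.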
The proportion of false discoveries for this experiment is C/M. Benjamini and Hochberg [6] defined FDR as E[(C/M) I{M>0}] = E[C/M | M>0] P(M>0), where I{M>0} is an indicator function equal to 1 if M>0 and zero otherwise. Storey [12] defined the positive FDR as pFDR = E[C/M | M>0]. Since P(M>0) ≥ 1 − (1 − τ)^K, and since K is usually very large, FDR ≈ pFDR, so we do not distinguish between FDR and pFDR as the parameter being estimated and simply refer to it as FDR, with estimates denoted FDR̂ (and pFDR̂). Suppose we identify a template data set corresponding to a two-treatment comparison of differential gene expression for K genes. Obtain a vector, δ, of effect sizes. One suggestion is the usual t-statistic, where the ith component of δ is given by (5) δi = (X̄i,trt − X̄i,ctrl) / (si √(1/ntrt + 1/nctrl)), where ntrt, nctrl are the numbers of biological replicates in the treatment and control groups, respectively; X̄i,trt, X̄i,ctrl are the mean gene expression levels for gene i in the treatment and control groups; and si² = [(ntrt − 1)s²i,trt + (nctrl − 1)s²i,ctrl]/(ntrt + nctrl − 2) is the usual pooled sample variance for the ith gene, where s²i,trt and s²i,ctrl are the two within-group sample variances. In what follows, we will use this choice for δi since it allows effects to be described by a unitless quantity, i.e., it is scaled by the standard error of the observed mean difference X̄i,trt − X̄i,ctrl for each gene. For convenience, assume that nctrl is an even number and divide the control group into two sets of equal size. Requiring nctrl ≥ 4 allows for at least two arrays in each set, thus allowing estimates of variance within each of the two sets. This will be the basis for the plasmode "null" data set. There are ways of making this division. Without loss of generality, assume that the first nctrl/2 arrays after the division are the plasmode control group and the second nctrl/2 are the plasmode treatment group. Specify a value of π0 and a threshold, τ, such that a P-value ≤ τ is declared evidence of differential expression. Execute the following steps. One can then obtain another data set and repeat the entire process to evaluate a method on a different type of data, perhaps from a different organism having a different null distribution, or a different treatment type giving a different distribution of effect sizes, δ. Alternatively, one might choose to randomly divide the control group again and repeat the entire process. This would help assess how differences in arrays within a group or possible correlation structure might affect results from a method. If some of the arrays in the control group have systematic differences among them (e.g., differences arising from variations in experimental conditions: day, operator, technology, etc.), then the null distribution can be sensitive to the random division of the original control group into the two plasmode groups, particularly if nctrl is small.
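The sensitivity of the null distribution to the random division described above can be probed by repeating the split and checking the resulting P-values for uniformity. A sketch follows; it uses a normal approximation to the t reference distribution (an assumption for brevity, adequate only for a rough check) and the maximum empirical-CDF deviation as the uniformity measure:

```python
import math
import numpy as np

def normal_two_sided_p(t):
    # Normal approximation to the t reference distribution -- a shortcut
    # for illustration, not something to use in a real analysis.
    return math.erfc(abs(t) / math.sqrt(2.0))

def split_null_pvalues(ctrl, seed=0):
    """Randomly divide control arrays into two pseudo-groups and return
    per-gene two-sample pooled-t P-values (normal approximation)."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(ctrl.shape[1])
    half = ctrl.shape[1] // 2
    a, b = ctrl[:, cols[:half]], ctrl[:, cols[half:]]
    na, nb = a.shape[1], b.shape[1]
    sp2 = ((na - 1) * a.var(axis=1, ddof=1) +
           (nb - 1) * b.var(axis=1, ddof=1)) / (na + nb - 2)
    t = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(sp2 * (1 / na + 1 / nb))
    return np.array([normal_two_sided_p(v) for v in t])

def ks_uniform(p):
    """Max deviation of the empirical CDF from the uniform CDF."""
    p = np.sort(p)
    return float(np.max(np.abs(p - np.arange(1, len(p) + 1) / len(p))))

rng = np.random.default_rng(42)
ctrl = rng.normal(size=(2000, 10))           # homogeneous toy control group
devs = [ks_uniform(split_null_pvalues(ctrl, seed=s)) for s in range(5)]
print([round(d, 3) for d in devs])
```

For a homogeneous control group all divisions give near-uniform P-values; a division that separates systematically different arrays would show up as a markedly larger deviation.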
| Introduction, Results, Discussion, Methods | Plasmode is a term coined several years ago to describe data sets that are derived from real data but for which some truth is known. Omic techniques, most especially microarray and genome-wide association studies, have catalyzed a new zeitgeist of data sharing that is making data and data sets publicly available on an unprecedented scale. Coupling such data resources with a science of plasmode use would allow statistical methodologists to vet proposed techniques empirically (as opposed to only theoretically) and with data that are by definition realistic and representative. We illustrate the technique of empirical statistics by consideration of a common task when analyzing high dimensional data: the simultaneous testing of hundreds or thousands of hypotheses to determine which, if any, show statistical significance warranting follow-on research. The now-common practice of multiple testing in high dimensional experiment (HDE) settings has generated new methods for detecting statistically significant results. Although such methods have heretofore been subject to comparative performance analysis using simulated data, simulating data that realistically reflect data from an actual HDE remains a challenge. We describe a simulation procedure using actual data from an HDE where some truth regarding parameters of interest is known. We use the procedure to compare estimates for the proportion of true null hypotheses, the false discovery rate (FDR), and a local version of FDR (LFDR) obtained from 15 different statistical methods. | Plasmode is a term used to describe a data set that has been derived from real data but for which some truth is known. Statistical methods that analyze data from high dimensional experiments (HDEs) seek to estimate quantities that are of interest to scientists, such as mean differences in gene expression levels and false discovery rates. The ability of statistical methods to accurately estimate these quantities depends on theoretical derivations or computer simulations. In computer simulations, data for which the true value of a quantity is known are often simulated from statistical models, and the ability of a statistical method to estimate this quantity is evaluated on the simulated data. However, in HDEs there are many possible statistical models to use, and which models appropriately produce data that reflect properties of real data is an open question. We propose the use of plasmodes as one answer to this question. If done carefully, plasmodes can produce data that reflect reality while maintaining the benefits of simulated data. We show one method of generating plasmodes and illustrate their use by comparing the performance of 15 statistical methods for estimating the false discovery rate in data from an HDE. | biotechnology, mathematics, science policy, computational biology, molecular biology, genetics and genomics | null |
931 | journal.pcbi.1006166 | 2018 | Variability in pulmonary vein electrophysiology and fibrosis determines arrhythmia susceptibility and dynamics | Success rates for catheter ablation of persistent atrial fibrillation (AF) patients are currently low; however, there is a subset of patients for whom pulmonary vein isolation (PVI) alone is a successful treatment strategy [1]. PVI ablation may work by preventing triggered beats from entering the left atrial body, or by converting rotors or functional reentry around the left atrial/pulmonary vein (LA/PV) junction to anatomical reentry around a larger circuit, potentially converting AF to a simpler tachycardia [2]. It is difficult to predict whether PVI represents a sufficient treatment strategy for a given patient with persistent AF [1], and it is unclear what to do for the majority of patients for whom it is not effective. Patients with AF exhibit distinct properties in effective refractory period (ERP) and conduction velocity (CV) in the PVs. For example, paroxysmal AF patients have shorter ERP and longer conduction delays compared to control patients [3]. AF patients show a number of other differences from control patients: PVs are larger [4]; PV fibrosis is increased; and fiber direction may be more disorganised, particularly at the PV ostium [5]. There are also differences within patient groups; for example, patients for whom persistent AF is likely to terminate after PVI have a larger ERP gradient compared to those who require further ablation [1, 3]. Electrical driver location changes as AF progresses; drivers (rotors or focal sources) are typically located close to the PVs in early AF, but are also located elsewhere in the atria with longer AF duration [6]. Atrial fibrosis is a major factor associated with AF and modifies conduction. However, there is conflicting evidence on the relationship between fibrosis distribution and driver location [7, 8]. It is difficult to clinically separate the individual effects of these factors on arrhythmia susceptibility and maintenance. We hypothesise that the combination of PV properties and atrial body fibrosis determines driver location and, thus, the likely effectiveness of PVI. In this study, we tested this hypothesis by using computational modelling to gain mechanistic insight into the individual contributions of PV ERP, CV, fiber direction, fibrosis and anatomy to arrhythmia susceptibility and dynamics. We incorporated data on APD (action potential duration, as a surrogate for ERP) and CV for the PVs to determine mechanisms underlying arrhythmia susceptibility, by testing inducibility from PV ectopic beats. We also predicted driver location, and PVI outcome. All simulations were performed using the CARPentry simulator (available at https://carp.medunigraz.at/carputils/). We used a previously published bi-atrial bilayer model [9], which consists of resistively coupled endocardial and epicardial surfaces. This model incorporates detailed atrial structure and includes transmural heterogeneity at a similar computational cost to surface models. We chose to use a bilayer model rather than a volumetric model incorporating thickness for this study because of the large number of parameters investigated, which was feasible with the reduced computational cost of the bilayer model. As previously described, the bilayer model was constructed from computed tomography scans of a patient with paroxysmal AF, which were segmented and meshed to create a finite element mesh suitable for electrophysiology simulations. Fiber information was included in the model using a semi-automatic rule-based method that matches histological descriptions of atrial fiber orientation [10]. The left atrium of the bilayer model consists of linearly coupled endocardial and epicardial layers, while the right atrium is an epicardial layer, with endocardial atrial structures including the pectinate muscles and
crista terminalis. The left and right atria of the model are electrically connected through three pathways: Bachmann's bundle, the coronary sinus and the fossa ovalis. Tissue conductivities were tuned to human activation mapping data from Lemery et al. [9, 11]. The Courtemanche–Ramirez–Nattel human atrial ionic model was used with changes representing electrical remodelling during persistent AF [12], together with a doubling of sodium conductance to produce realistic action potential upstroke velocities [9], and a decrease in IK1 by 20% to match clinical restitution data [13]. Regional heterogeneity in repolarisation was included by modifying ionic conductances of the cellular model, as described in Bayer et al. [14], which follows Aslanidi et al. and Seemann et al. [15, 16]. Parameters for the baseline PV model were taken from Krueger et al. [17]. The following PV properties were varied, as shown in schematic Fig 1: APD, CV, fiber direction, the inclusion of fibrosis in the PVs, and the atrial geometry. These are described in the following sections. To investigate the effects of PV length and diameter on arrhythmia inducibility and arrhythmia dynamics, bi-atrial bilayer meshes were constructed from MRI data for twelve patients. All patients gave written informed consent; this study is in accordance with the Declaration of Helsinki and was approved by the Institutional Ethics Committee at the University of Bordeaux. Patient-specific models with electrophysiological heterogeneity and fiber direction were constructed using our modelling pipeline, which uses a universal atrial coordinate system to map scalar and vector data from the original bilayer model to a new patient-specific mesh. Late gadolinium enhancement MRI (average resolution 0.625 mm x 0.625 mm x 2.5 mm) was performed using a 1.5T system (Avanto, Siemens Medical Solutions, Erlangen, Germany). These LGE-MRI data were manually segmented using the software MUSIC (Electrophysiology and Heart Modeling Institute, University of Bordeaux, Bordeaux, France, and Inria, Sophia Antipolis, France, http://med.inria.fr). The resulting endocardial surfaces were meshed (using the Medical Imaging Registration Toolkit mcubes algorithm [18]) and cut to create open surfaces at the mitral valve, the four pulmonary veins, the tricuspid valve, and each of the superior vena cava, the inferior vena cava and the coronary sinus, using ParaView software (Kitware, Clifton Park, NY, USA). The meshes were then remeshed using mmgtools meshing software (http://www.mmgtools.org/), with parameters chosen to produce meshes with an average edge length of 0.34 mm to match the resolution of the previously published bilayer model [9]. Two atrial coordinates were defined for each of the LA and RA, which allow automatic transfer of atrial structures to the model, such as the pectinate muscles and Bachmann's bundle. These coordinates were also used to map fiber directions to the bilayer model. To investigate the effects of PV electrophysiology on arrhythmia inducibility and dynamics, we varied PV APD and CV by modifying the value of the inward rectifier current (IK1) conductance and the tissue-level conductivity, respectively. IK1 conductance was chosen in this case to investigate macroscopic differences in APD [19], although several ionic conductances are known to change with AF [20]. Modifications were either applied homogeneously or followed an ostial-distal gradient. This gradient was implemented by calculating geodesic distances from the rim of mesh nodes at the distal PV boundary to all nodes in the PV, and from the rim of nodes at the LA/PV junction to all nodes in the PV. The ratio of these two distances was then used as a distance parameter from the LA/PV junction to the distal
end of the PV ( see Fig 1 ) ., IK1 conductance was multiplied by a value in the range 0 . 5–2 . 5 , resulting in PV APDs in the clinical range of 100–190ms 3 , 21 , 22 ., This rescaling was either a homogeneous change or followed a gradient along the PV length ., Gradients of IK1 conductance varied from the baseline value at the LA/PV junction , to a maximum scaling factor at the distal boundary ., PV APDs are reported at 90% repolarisation for a pacing cycle length of 1000ms ., LA APD is 185ms , measured at a LA pacing cycle length of 200ms ., To cover the clinically observed range of PV CVs , longitudinal and transverse tissue conductivities were divided by 1 , 2 , 3 or 5 , resulting in CVs , measured along the PV axis , in the range: 0 . 28–0 . 67m/s 3 , 21–24 ., To model heterogeneous conduction slowing , conductivities were varied as a function of distance from the LA/PV junction , ranging from baseline at the junction to a maximum rescaling ( minimum conductivity ) at the distal boundary ., The direction of this gradient was also reversed to model conduction slowing at the LA/PV junction 5 ., Motivated by the findings of Hocini et al . 5 , interstitial fibrosis was modelled for the PVs with a density varying along the vein , increasing from the LA/PV junction to the distal boundary ., This was implemented by randomly selecting edges of elements of the mesh with probability scaled by the distance parameter and the angle of the edge compared to the element fiber direction , where edges in the longitudinal fiber direction were four times more likely to be selected than those in the transverse direction , following our previous methodology 25 ., To model microstructural discontinuities , no flux boundary conditions were applied along the connected edge networks , following Costa et al . 26 ., An example of modelled PV interstitial fibrosis is shown in S1A Fig . 
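The distance parameter and the linear IK1 gradient described above can be sketched as follows; the per-node geodesic distances are assumed to be precomputed, and all names are illustrative:

```python
import numpy as np

def distance_parameter(d_junction, d_distal):
    """Normalised PV position: 0 at the LA/PV junction, 1 at the distal rim,
    from the two per-node geodesic distances."""
    return d_junction / (d_junction + d_distal)

def ik1_scaling(t, max_scale):
    """Linear ostial-distal gradient of the IK1 conductance multiplier:
    baseline (1.0) at the junction, max_scale at the distal boundary."""
    return 1.0 + (max_scale - 1.0) * t

# toy nodes at the junction, midpoint and distal rim of a 10 mm vein
d_junction = np.array([0.0, 5.0, 10.0])
d_distal = np.array([10.0, 5.0, 0.0])
t = distance_parameter(d_junction, d_distal)
scale = ik1_scaling(t, max_scale=2.5)  # baseline at the junction, 2.5x distally
```

A homogeneous rescaling corresponds to applying the same multiplier at every node instead of `scale`.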
For a subset of simulations, interstitial fibrosis was incorporated in the biatrial model based on late gadolinium enhancement (LGE)-MRI data, using our previously published methodology 25. In brief, the likelihood of interstitial fibrosis depended on both LGE intensity and the angle of the edge relative to the element fiber direction (see S1B Fig). LGE intensity distributions were either averaged over a population of patients 27, or taken from an individual patient. The averaged distributions were for patients with paroxysmal AF (averaged over 34 patients) or persistent AF (averaged over 26 patients). For patient-specific simulations, the model arrhythmia dynamics were compared to AF recordings from a commercially available non-invasive ECGi mapping technology (CardioInsight Technologies Inc., Cleveland, OH), for which phase mapping analysis was performed as previously described 28. PV fiber direction shows significant inter-patient variability. Endocardial and epicardial fiber direction in the four PVs was modified according to fiber arrangements described in the literature 5, 29, 30. Six arrangements were considered, as follows: (1) circular arrangement on both the endocardium and epicardium; (2) spiralling arrangement on both the endocardium and epicardium; (3) circular arrangement on the endocardium, with longitudinal epicardial fibers; (4) fibers progress from longitudinal at the distal vein to circumferential at the ostium, with identical endocardial and epicardial fibers; (5) epicardial layer fibers as per case 4, with circumferential endocardial fibers; (6) as per case 4, but with a chaotic fiber arrangement at the LA/PV junction. These fiber distributions are shown in S2 Fig.
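Both fibrosis variants above reduce to the same stochastic rule: an edge's selection probability grows with a per-edge weight (the ostial-distal distance parameter for PV fibrosis, or a normalised LGE intensity for the biatrial maps) and with its alignment to the local fiber direction, longitudinal edges being four times as likely as transverse ones. A minimal sketch, in which `base_prob` and the linear interpolation in the angle are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_fibrotic_edges(weight, cos_angle, base_prob):
    """Flag mesh edges to act as fibrotic (no-flux) barriers.

    weight    -- per-edge drive in [0, 1]: the ostial-distal distance parameter
                 for PV fibrosis, or a normalised LGE intensity for the atria
    cos_angle -- |cos| of the angle between the edge and the local fiber direction
    base_prob -- selection probability for a transverse edge at weight = 1
    Edges along the fiber direction are four times more likely to be selected
    than transverse edges, interpolating linearly in |cos|.
    """
    anisotropy = 1.0 + 3.0 * np.abs(cos_angle)  # 1x transverse .. 4x longitudinal
    p = np.clip(base_prob * weight * anisotropy, 0.0, 1.0)
    return rng.random(len(weight)) < p
```

Selected edges would then carry the no-flux boundary conditions used to represent microstructural discontinuities.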
Cases 4–6 were implemented by setting the fiber angle to be a function of the distance along the vein, measured from the LA/PV junction to the distal boundary, varying from circumferential at the junction to longitudinal at the distal end (a change of 90 degrees). The disorder in fiber direction at the LA/PV junction for case 6 was implemented by taking the fibers of case 4 and adding independent standard Gaussian perturbations scaled by the distance from the distal boundary, resulting in the largest perturbations at the ostium. Arrhythmia inducibility was tested by extrastimulus pacing from each of the four PVs individually, using a clinically motivated protocol 31, to simulate the occurrence of PV ectopics. Simulations were performed for each of the PVs to determine the effects of ectopic beat location on inducibility. Sinus rhythm was simulated by stimulating the sinoatrial node region of the model at a cycle length of 700 ms throughout the simulation. Each PV was paced individually with five beats at a cycle length of 160 ms, with coupling intervals between the first PV beat and a sinus rhythm beat in the range 200–500 ms. Thirty-two pacing protocols were applied for each model set-up: eight coupling intervals (200, 240, 280, 320, 360, 400, 440 and 480 ms) for each of the four PVs. Inducibility is reported as the proportion of cases resulting in reentry, termed the inducibility ratio. The effects of PVI were determined for model set-ups that used the original bilayer geometry and in which the arrhythmia lasted for longer than two seconds. PVI was applied two seconds post AF initiation in each case by setting the tissue conductivity close to zero (0.001 S/m) in the regions shown in S3 Fig.
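The extrastimulus protocol enumeration and the inducibility ratio described above can be sketched as follows (dictionary keys and vein labels are illustrative):

```python
from itertools import product

COUPLING_INTERVALS_MS = [200, 240, 280, 320, 360, 400, 440, 480]
PVS = ["LSPV", "LIPV", "RSPV", "RIPV"]

def pacing_protocols():
    """All extrastimulus protocols: each PV paced with five beats at a 160 ms
    cycle length, for each coupling interval to the preceding sinus beat."""
    return [{"pv": pv, "coupling_ms": ci, "n_beats": 5, "cycle_ms": 160}
            for pv, ci in product(PVS, COUPLING_INTERVALS_MS)]

def inducibility_ratio(reentry_outcomes):
    """Proportion of protocols resulting in reentry (list of booleans)."""
    return sum(reentry_outcomes) / len(reentry_outcomes)
```

Eight coupling intervals for each of four veins gives the 32 protocols per model set-up; 12 reentrant outcomes correspond to the baseline ratio of 12/32 = 0.375, reported in the Results as 0.38.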
For each case, ten seconds of arrhythmia data were analysed, starting from two seconds post AF initiation, to identify re-entrant waves and wavefront break-up using phase. The phase of the transmembrane voltage was calculated for each node of the mesh using the Hilbert transform, following subtraction of the mean 32. Phase singularities (PSs) for the transmembrane potential data were identified by calculating the topological charge of each element in the mesh 33, and PS spatial density maps were calculated using previously published methods 14. PS density maps were then partitioned into the LA body, PV regions, and the RA to assess where drivers were located in relation to the PVs (see S3 Fig). The PV region was defined as the areas enclosed by, and including, the PVI lines; the LA region was then the rest of the LA and the left atrial appendage. The PV PS density ratio was then defined as the total PV PS count divided by the total model PS count over both atria. A difference in APD between the model LA and PVs was required for AF induction. Modelling the PVs using LA cellular properties resulted in non-inducibility, whereas modelling the LA using PV cellular properties resulted in either non-inducibility or macroreentry. The effects of modifying PV APD homogeneously or following a gradient are shown in Table 1. Simulations in which PV APD was longer than LA APD were non-inducible (PV APD 191 ms). As APD was decreased below the baseline value (181 ms), inducibility initially increased and then fluctuated. Comparing cases with equal distal APD, arrhythmia inducibility was significantly higher for APD following an ostial-distal gradient than for homogeneous APD (p = 0.03 from McNemar's test). PS location was also affected by PV APD. PV PS density was low in cases of short APD, an example of which is shown in Fig 2, where reentry is no longer seen around the LA/PV junction in the case of short APD (120 ms). This change was more noticeable for cases with homogeneous PV APD than for a gradient in APD; PV reentry was observed for the baseline case and a heterogeneous APD case, but not for a homogeneous decrease in APD. Arrhythmia inducibility decreased with homogeneous CV slowing (from 0.38, i.e. 12/32, at 0.67 m/s to 0.03, i.e. 1/32, at 0.28 m/s). In the baseline model, reentry occurs close to the LA/PV junction due to conduction block when the paced PV beat encounters a change in fiber direction at the base of the PVs, together with a longer LA APD compared to the PV APD. In this case, the wavefront encounters a region of refractory tissue due to the longer APD in the LA. However, when PV CV is slowed homogeneously, the wavefront takes longer to reach the LA tissue, giving the tissue enough time to recover, such that conduction block and reentry no longer occur. Modifying conductivity following a gradient means that, unlike the homogeneous case, the time taken for the extrastimulus wavefront to reach the LA tissue is similar to the baseline case, so the LA tissue might still be refractory and conduction block might occur. In the case that conduction was slowest at the distal vein, inducibility was similar to the baseline case (see Table 2, GA; inducibility is 0.38 at baseline and 0.34 for the cases with CV slowing). Cases with the greatest conduction slowing at the LA/PV junction (see Table 2, GB) exhibit an increase in inducibility (from 0.38 to 0.53) when CV is decreased, because of the discontinuity in conductivity at the junction. Fig 2 shows that reentry is seen around the LA/PV junction in cases with both baseline and slow CV, indicating that the presence of reentry at the LA/PV junction is independent of PV CV. PV conduction properties are also affected by PV fiber direction. Modifications in fiber direction increased inducibility compared to the baseline fiber direction (baseline case: 0.38; modified fiber direction cases 1–6: 0.53–0.63). The highest inducibility occurred with circular fibers at the ostium (cases 1 and 4, 0.63), independent of fiber direction at the distal PV end. This inducibility was reduced if the epicardial fibers were not circular at the ostium (case 3, 0.56), or if fibers were spiralling (case 2, 0.56) instead of circular. Next, we investigated the interplay between PV properties and atrial fibrosis. LA fibrosis properties were varied to represent interstitial fibrosis in paroxysmal and persistent AF patients, incorporating average LGE-MRI distributions 27 into the model. These control, paroxysmal and persistent AF levels of fibrosis were then combined with PV properties varied as follows: baseline CV and APD (0.67 m/s, 181 ms), slow CV (0.51 m/s), short APD (120 ms), and slow CV with short APD. PS distributions in Fig 2 show that reentry occurred around the LA/PV junction in the case of baseline PV APD for control or paroxysmal levels of fibrosis, but not for shorter PV APD. Modifying PV CV did not affect whether LA/PV reentry was observed. Rotors were found to stabilise in regions of high fibrosis density in the persistent AF case. Models with PV fibrosis had a higher inducibility compared to the baseline case (0.47 vs. 0.38) and a higher PV PS density, since reentry localised there. Fig 3 shows an example with moderate PV fibrosis (A), in which reentry changed from around the RIPV to the LIPV later in the simulation; adding a higher level of PV fibrosis resulted in a more stable reentry around the right PVs (B). The relationship between LA fibrosis and PV properties on driver location was investigated on an individual patient basis for four patients. For patients for whom rotors were located away from the PVs (Fig 4, LA1), increasing model fibrosis from low to high increased the model agreement with clinical PS density 2.3 ± 1.0 fold (comparing the sensitivity of identifying clinical regions of high PS density using model PS density between the two simulations). For other patients, lower levels of fibrosis were more appropriate (2.1-fold increase in agreement for lower fibrosis, Fig 4, LA2), and PV isolation converted fibrillation to macroreentry in the model. Arrhythmia inducibility showed a large variation between patient geometries (0.16–0.47). Increasing PV area increased inducibility to a different degree for each vein: right superior PV (RSPV) inducibility was generally high (>0.75 for all but one geometry) independent of PV area; left superior PV (LSPV) inducibility increased with PV area (Spearman's rank correlation coefficient of 0.36, indicating positive correlation; line of best fit gradient 0.27, R2 = 0.3); left inferior PV (LIPV) and right inferior PV (RIPV) inducibility exhibited a threshold effect, in which veins were only inducible above a threshold area (Fig 5A). There was no clear relationship between PV length and inducibility. PV PS density ratio increased as PV area increased (Fig 5B, Spearman's rank correlation coefficient of 0.41, indicating positive correlation). Fig 5C shows that rotor and wavefront trajectories depend on patient geometry, exhibiting varied importance of the PVs compared to other atrial regions. PVI outcome was assessed for cases with varied PV APD (either a homogeneous change or following a gradient), with the inclusion of PV fibrosis, and with varied PV fiber direction, because these factors were found to affect the PV PS density ratio. PVI outcome was classified into three classes depending on the activity 1 second after PVI was applied in the model: termination, meaning there was no activity; macroreentry, meaning there was a macroreentry around the LA/PV junctions; and AF sustained by LA rotors, meaning there were drivers in the LA body. These classes accounted for different proportions of the outcomes: termination (27.3% of cases), macroreentry (39.4%), or AF sustained by LA rotors (33.3%). Calculating the PV PS density ratio before PVI for each of these classes shows that cases in which the arrhythmia either terminated or changed to a macroreentry are characterised by a statistically higher PV PS density ratio pre-PVI than cases sustained by LA rotors post-PVI (see Fig 6; a t-test comparing termination and LA rotors shows they are significantly different, p<0.001; comparing macroreentry and LA rotors, p = 0.01). High PV PS density ratio may indicate likelihood of PVI success. In this computational modelling study, we demonstrated that the PVs can play a large role in arrhythmia maintenance and initiation, beyond being simply sources of ectopic beats. We separated the effects of PV properties and atrial fibrosis on arrhythmia inducibility, maintenance mechanisms and the outcome of PVI, based on population or individual patient data. PV properties affect arrhythmia susceptibility from ectopic beats; short PV APD increased arrhythmia susceptibility, while longer PV APD was found to be protective. Arrhythmia inducibility increased with slower CV at the LA/PV junction, but not for cases with homogeneous CV changes or slower CV at the distal PV. The effectiveness of PVI is usually attributed to PV ectopy, but our study demonstrates that the PVs affect reentry in other ways, and this may, in part, also account for success or failure of PVI. Both PV properties and fibrosis distribution affect arrhythmia dynamics, which vary from meandering rotors to PV reentry (in cases with baseline or long APD), and then to stable rotors at regions of high fibrosis density. PS density in the PV region was high for cases with PV fibrosis. The measurement of fibrosis and PV properties may indicate patient-specific susceptibility to AF initiation and maintenance. PV PS density before PVI was higher in cases in which AF terminated or converted to a macroreentry; thus, high PV PS density may indicate likelihood of AF termination by PVI alone. Repolarisation is heterogeneous in the PVs 23, and exhibits distinct properties in AF patients, with Rostock et al. reporting a greater decrease in PV ERP than LA ERP in patients with AF, termed "AF begets AF" in the PVs 21. Jais et al.
found that PV ERP is greater than LA ERP in patients without AF, but that this gradient is reversed in AF patients 3. ERP measured at the distal PV is shorter than at the LA/PV junction during AF 5, 22. Motivated by these clinical and experimental studies, we modelled a decrease in PV APD, applied either homogeneously or as a gradient of decreasing APD along the length of the PV, with the shortest APD at the distal PV rim. An initial decrease in APD increased inducibility (Table 1), which agrees with clinical findings of increased inducibility for AF patients. Applying this change following a gradient, as observed in previous studies, led to an increased inducibility compared to a homogeneous change in APD. Similar to Calvo et al. 34, we found that rotor location depends on PV APD (Fig 2). Thus PV APD affects PVI outcome in two ways: on the one hand, decreasing APD increases inducibility, emphasising the importance of PVI in the case of ectopic beats; on the other hand, PV PS density decreases for cases with short PV APD, and PVI was less likely to terminate AF. Multiple studies have measured conduction slowing in the PVs 3, 5, 21–24. We modelled changes in tissue conductivity either homogeneously or as a function of distance along the PV. Simply decreasing conductivity, and thus decreasing CV, decreased inducibility (Table 2). Kumagai et al. reported that conduction delay was longer for the distal-to-ostial direction 22. We found that modifying conductivity following a gradient, with CV decreasing towards the LA/PV junction, resulted in an increase in inducibility in the model. This agrees with the clinical observations of Pascale et al.
1. This suggests that PVI should be performed in cases in which CV decreases towards the LA/PV junction, as these cases have high inducibility. Changes in CV may also be due to other factors, including gap junction remodelling, modified sodium conductance or changes in fiber direction 5, 29. A variety of PV fiber patterns have been described in the literature, and there is variability between patients. Interestingly, all of the PV fiber directions considered in our study showed an increased inducibility compared to the baseline model. Verheule et al. 29 documented circumferential strands that spiral around the lumen of the veins, motivating the arrangements for cases 1 and 4 in our study; Aslanidi et al. 15 reported that fibers run in a spiralling arrangement (case 2); Ho et al. 30 measured mainly circular or spiral bundles, with longitudinal bundles (cases 3 and 5); Hocini et al. 5 reported longitudinal fibers at the distal PV, with circumferential and a mixed chaotic fiber direction at the PV ostium (case 6). Using current imaging technologies, PV fiber direction cannot be reliably measured in vivo. In our study, fiber direction at the PV ostium was found to be more important than at the distal PV; the greatest inducibility was for cases with circular fibers at the ostium on both endocardial and epicardial surfaces, independent of fiber direction at the distal PV end. Similar to modelling studies by both Coleman 35 and Aslanidi 15, inducibility increased due to conduction block near the PVs. PVs may be larger in AF patients compared to controls 4, 36, and this difference may vary between veins; Lin et al.
found dilatation of the superior PVs in patients with focal AF originating from the PVs, but no difference in the dimensions of the inferior PVs compared to controls or to patients with focal AF from the superior vena cava or crista terminalis 37. We found that inducibility increased with PV area for the LSPV, LIPV and RIPV, but not for the RSPV (see Fig 5). In addition, PV PS density ratio increased with total PV area, suggesting that PVI alone is more likely to be a successful treatment strategy in the case of larger veins. However, Den Uijl et al. found no relation between PV dimensions and the outcome of PVI 38. Rotors were commonly found in areas of high surface curvature, including the LA/PV junction and left atrial appendage ostia, which agrees with the findings of Tzortzis et al. 39. However, there were differences in PS density between geometries, with varying importance of the LA/PV junction (Fig 5), demonstrating the importance of modelling the geometry of an individual patient. Myocardial tissue within the PVs is significantly fibrotic, which may lead to slow conduction and reentry 5, 30, 40. More fibrosis is found in the distal PV, with increased connective tissue deposition between myocardial cells 41. We modelled interstitial PV fibrosis with density increasing distally, and found that the inclusion of PV fibrosis increased PS density in the PV region of the model, due to increased reentry around the LA/PV junction and wave break in the areas of fibrosis. This, together with the results in Fig 6, suggests that PVI alone is more likely to be successful in cases of high PV fibrosis. There are multiple methodologies for modelling atrial fibrosis 25, 42, 43, and the choice of method may affect this localisation. Population-based distributions of atrial fibrosis were modelled for paroxysmal and persistent patients, together with varied PV properties. The presence of LA/PV reentry depends on both PV properties
and the presence of fibrosis; reentry is seen at the LA/PV junction for cases with baseline PV APD, but not for short PV APD, and stabilised to areas of high fibrosis in persistent AF, for which LA/PV reentry no longer occurred. This suggests that rotor location depends on both fibrosis and PV properties. This finding may explain the clinical findings of Lim et al., in which drivers are primarily located in the PV region in early AF, but AF complexity increases with increased AF duration, and drivers are also located at sites away from the PVs 6. During early AF, PV properties may be more important, while with increasing AF duration there is increased atrial fibrosis in the atrial body that affects driver location. This suggests that in cases with increased atrial fibrosis in the atrial body, ablation in addition to PVI is likely to be required. Simulations of models with patient-specific atrial fibrosis together with varied PV properties, performed in this study, offer a proof of concept for using this approach in future studies. The level of atrial fibrosis and the PV properties that gave the best fit of the model PS density to the clinical PS density varied between patients. Measurement of PV ERP and conduction properties using a lasso catheter before PVI could be used to tune the model properties, together with LGE-MRI or an electro-anatomic voltage map. It is difficult to predict whether PVI alone is likely to be a successful treatment strategy for a patient with persistent AF 44. This will depend on the susceptibility to AF from ectopic beats, together with electrical driver location and electrical size. Our study describes multiple factors that affect the susceptibility to AF from ectopic beats. Measurement of PV APD, PV CV and PV size will allow prediction of the susceptibility to AF from ectopic beats. Arrhythmia susceptibility increased in cases with short PV APD, slower CV at the LA/PV junction and larger veins,
suggesting the importance of PVI in these cases. The likelihood that PVI terminates AF was also found to depend on driver location, assessed using PS density. Our simulation studies suggest that high PV PS density indicates likelihood of PVI success. Thus, either measuring this clinically using non-invasive ECGi recordings, or running patient-specific simulations to estimate this value, may suggest whether ablation in addition to PVI should be performed. In a recent clinical study, Navara et al. observed AF termination during ablation near the PVs, before complete isolation, in cases where rotational and focal activity were identified close to these ablation sites 45. These data may support the PV PS density metric suggested in our study. Our simulations show that PV PS density depends on PV APD, the degree of PV fibrosis and, to a lesser extent, on PV fiber direction. To the best of the authors' knowledge, there are no previous studies on the relationship between fibrosis in the PVs, or PV fiber direction, and the success rate of PVI. Measuring atrial electrogram properties, including AF cycle length, before and after ablation may indicate changes in local tissue refractoriness 46. PV APD can be estimated clinically by pacing to find the PV ERP; and PV fibrosis may be estimated using LGE-MRI, although this is challenging, as the tissue is thin. PV | Introduction, Materials and methods, Results, Discussion | Success rates for catheter ablation of persistent atrial fibrillation patients are currently low; however, there is a subset of patients for whom electrical isolation of the pulmonary veins alone is a successful treatment strategy. It is difficult to identify these patients because there are a multitude of factors affecting arrhythmia susceptibility and maintenance, and the individual contributions of these factors are difficult to determine clinically. We hypothesised that the combination of pulmonary vein (PV)
electrophysiology and atrial body fibrosis determine driver location and effectiveness of pulmonary vein isolation (PVI). We used bilayer biatrial computer models based on patient geometries to investigate the effects of PV properties and atrial fibrosis on arrhythmia inducibility, maintenance mechanisms, and the outcome of PVI. Short PV action potential duration (APD) increased arrhythmia susceptibility, while longer PV APD was found to be protective. Arrhythmia inducibility increased with slower conduction velocity (CV) at the LA/PV junction, but not for cases with homogeneous CV changes or slower CV at the distal PV. Phase singularity (PS) density in the PV region for cases with PV fibrosis was increased. Arrhythmia dynamics depend on both PV properties and fibrosis distribution, varying from meandering rotors to PV reentry (in cases with baseline or long APD), to stable rotors at regions of high fibrosis density. Measurement of fibrosis and PV properties may indicate patient-specific susceptibility to AF initiation and maintenance. PV PS density before PVI was higher for cases in which AF terminated or converted to a macroreentry; thus, high PV PS density may indicate likelihood of PVI success.
| Atrial fibrillation is the most commonly encountered cardiac arrhythmia, affecting a significant portion of the population. Currently, ablation is the most effective treatment, but success rates are less than optimal, being 70% one year post-treatment. There is a large effort to find better ablation strategies to permanently cure the condition. Pulmonary vein isolation by ablation is more or less the standard of care, but many questions remain, since pulmonary vein ectopy by itself does not explain all of the clinical successes or failures. We used computer simulations to investigate how electrophysiological properties of the pulmonary veins can affect rotor formation and maintenance in patients suffering from atrial fibrillation. We used complex, biophysical representations of cellular electrophysiology in highly detailed geometries constructed from patient scans. We heterogeneously varied electrophysiological and structural properties to see their effects on rotor initiation and maintenance. Our study suggests a metric for indicating the likelihood of success of pulmonary vein isolation. Thus, either measuring this clinically, or running patient-specific simulations to estimate this metric, may suggest whether ablation in addition to pulmonary vein isolation should be performed. Our study provides motivation for a retrospective clinical study or experimental study into this metric. | medicine and health sciences, engineering and technology, cardiovascular anatomy, fibrosis, electrophysiology, endocardium, simulation and modeling, developmental biology, epicardium, research and analysis methods, cardiology, arrhythmia, atrial fibrillation, rotors, mechanical engineering, anatomy, physiology, biology and life sciences, heart | null |
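The phase-mapping analysis used in the row above (Hilbert transform of the mean-subtracted transmembrane voltage, and the PV PS density ratio) can be sketched in Python; the FFT construction of the analytic signal is the standard one (equivalent to `scipy.signal.hilbert`), and the regional PS counts are illustrative:

```python
import numpy as np

def instantaneous_phase(vm):
    """Phase of a mean-subtracted voltage trace via an FFT-based Hilbert transform."""
    x = np.asarray(vm, dtype=float) - np.mean(vm)
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(x) * h)  # analytic signal
    return np.angle(analytic)

def pv_ps_density_ratio(ps_counts):
    """Total PV PS count divided by the total PS count over both atria;
    ps_counts maps region name -> phase-singularity count."""
    return ps_counts["PV"] / sum(ps_counts.values())

# toy check: the phase of a sampled cosine advances linearly with time
t = np.arange(1000) / 1000.0
phase = instantaneous_phase(np.cos(2 * np.pi * 5.0 * t))
```

In the full pipeline, PSs are located from the topological charge of each mesh element before the regional counts are formed; that step is not shown here.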
900 | journal.pntd.0006075 | 2017 | Development and preliminary evaluation of a multiplexed amplification and next generation sequencing method for viral hemorrhagic fever diagnostics | Outbreaks of viral hemorrhagic fever (VHF) occur in many parts of the world 1, 2. VHFs are caused by various single-stranded RNA viruses, the majority of which are classified in the Arenaviridae, Filoviridae and Flaviviridae families and the Bunyavirales order 3. Human infections show high morbidity and mortality rates, can spread easily, and require rapid responses based on comprehensive pathogen identification 1, 3, 4. However, routine diagnostic approaches are challenged when fast and simultaneous screening for different viral pathogens in higher numbers of individuals is necessary 5. Even PCR, a widely used diagnostic method that usually provides specific virus identification, requires intense hands-on time for parallel screening of larger quantities of specimens and provides limited genetic information about the target virus. Multiplexing of different specific PCR assays aims at dealing with these drawbacks; however, until recently, it was limited to a few primer pairs in one reaction due to a lack of amplicon identification approaches for more than five targets 6, 7. Next Generation Sequencing (NGS) has provided novel options for the identification of viruses, including simultaneous and unbiased screening for different pathogens and multiplexing of various samples in a single sequencing run 8. Furthermore, the development of real-time sequencing platforms has enabled processing and analysis of individual specimens within reasonable timeframes 9. However, virus identification with NGS is also accompanied by major drawbacks, such as diminished sensitivity when viral genome numbers in the sample are insufficient and masked by unbiased sequencing of all nucleic acids present in the specimen, including the host genome 10, 11. Attempts to increase
the sensitivity of NGS-based diagnostics have focused on enrichment of virus material and libraries before sequencing, including amplicon sequencing, PCR-generated baits, and solution-based capture techniques 12–14. The strategy of ultrahigh-multiplex PCR with subsequent NGS has previously been employed for human single nucleotide polymorphism typing, genetic variations in human cardiomyopathies, and bacterial biothreat agents 15–17. In this study, we describe the development and initial evaluation of a novel method for targeted amplification and NGS-based identification of viral febrile disease and hemorrhagic fever agents, and assess the feasibility of this approach in diagnostics. The human specimens used for the evaluation of the developed panel were obtained from adults after written informed consent and in full compliance with the local ethics board approval (Ankara Research and Training Hospital, 13.07.11/0426). Viruses reported to cause VHF, as well as related strains associated with febrile disease accompanied by arthritis, respiratory symptoms, or meningoencephalitis, were included in the design to enable differential diagnosis (Table 1). For each virus strain, all genetic variants with complete or near-complete genomes deposited in GenBank (https://www.ncbi.nlm.nih.gov/genbank/) were assembled into groups of >90% nucleotide sequence identity via the Geneious software (version 9.1.3) 18. The consensus sequence of each group was included in the design. The primer sequences were deduced using the Ion AmpliSeq Designer online tool (https://ampliseq.com/browse.action), which provides a custom multiplex primer pool design for NGS (Thermo Fisher Scientific, Waltham, MA). For initial evaluation of the approach and as internal controls, human-pathogenic viruses belonging to identical and/or distinct families/genera but not associated with hemorrhagic fever or febrile disease were included in the design (Table 1). The designed primers were tested in silico for specific binding to the target virus strains, including all known genotypes and genetic variants. The primer sets were aligned to their specific target reference sequences, and relative primer orientation, amplicon size and overlap, and total mismatches for each primer were evaluated using the Geneious software 18. Pairs targeting a specific virus with fewer than two mismatches in the sense and antisense primers were defined as a hit and employed for sensitivity calculations. Unspecific binding of each primer to non-viral targets was investigated via the BLASTn algorithm, implemented within the National Center for Biotechnology Information website (https://blast.ncbi.nlm.nih.gov/Blast.cgi) 19. The sensitivity and specificity of the primer panel for each virus were determined via standard methods, as described previously 20. The performance of the novel panel for the detection of major VHF agents was evaluated via selected virus strains. For this purpose, nucleic acids from Yellow fever virus (YFV) strain 17D, Rift Valley fever virus (RVFV) strain MP-12, Crimean-Congo hemorrhagic fever virus (CCHFV) strain UCCR4401, Zaire Ebola virus (EBOV) strain Makona-G367, Chikungunya virus (CHIKV) strain LR2006-OPY1 and Junin mammarenavirus (JUNV) strain P3766 were extracted with the QIAamp Viral RNA Mini Kit (Qiagen, Hilden, Germany), with subsequent cDNA synthesis according to the SuperScript IV Reverse Transcriptase protocol (Thermo Fisher Scientific). Genome concentrations of all strains were determined by specific quantitative real-time PCRs using plasmid-derived virus standards, as described previously (protocols are available upon request). Genome equivalents (ge) of 10^0–10^3 for each virus were prepared and mixed with 10 ng of human genetic material recovered from HeLa cells. In order to compare the efficiency of amplification with the novel panel versus direct NGS, all virus cDNAs were further subjected to second-strand cDNA synthesis using the NEBNext RNA Second Strand Synthesis Module (New England BioLabs GmbH, Frankfurt, Germany) according to the manufacturer's instructions. Reagent-only mixes and HeLa cell extracts were employed as negative controls in the experiments. The performance of the panel was further tested on clinical specimens from individuals with a clinical and laboratory diagnosis of VHF 21. For this purpose, previously stored sera with quantifiable CCHFV RNA and lacking IgM or IgG antibodies were employed and processed via the High Pure Viral Nucleic Acid Kit (Roche, Mannheim, Germany) and the SuperScript IV Reverse Transcriptase (Thermo Fisher Scientific) protocols, as suggested
by the manufacturer ., Two human sera , without detectable nucleic acids of the targeted viral strains were tested in parallel as negative controls ., The specimens were amplified using the custom primer panels designed for HFVs with the following PCR conditions for each pool: 2 μl of viral cDNA mixed with human genetic material , 5 μl of primer pool , 0 . 5 mM dNTP ( Invitrogen , Karlsruhe , Germany ) , 5 μl of 10 x Platinum Taq buffer , 4 mM MgCl2 , and 10 U Platinum Taq polymerase ( Invitrogen ) with added water to a final volume of 25 μl ., Cycling conditions were 94°C for 7 minutes , 45 amplification cycles at 94°C for 20 seconds , 60°C for 1 minute , and 72°C for 20 seconds , and a final extension step for 6 minutes ( at 72°C ) ., Thermal cycling was performed in an Eppendorf Mastercycler Pro ( Eppendorf Vertrieb Deutschland , Wesseling-Berzdorf , Germany ) with a total runtime of 90 minutes ., The amplicons obtained from the virus strains were subjected to the Ion Torrent Personal Genome Machine ( PGM ) System for NGS analysis ( Thermo Fisher Scientific Inc . 
) ., Initially , the specimens were purified with an equal volume of Agencourt AMPure XP Reagent ( Beckman Coulter , Krefeld , Germany ) ., PGM libraries were prepared according to the Ion Xpress Plus gDNA Fragment Library Kit , using the “Amplicon Libraries without Fragmentation” protocol ( Thermo Fisher Scientific ) ., For direct NGS , specimens were fragmented with the Ion Shear Plus Reagents Kit ( Thermo Fisher Scientific ) with a reaction time of 8 minutes ., Subsequently , libraries were prepared using the Ion Xpress Plus gDNA Fragment Library Preparation kit and associated protocol ( Thermo Fisher Scientific ) ., All libraries were quality checked using the Agilent Bioanalyzer ( Agilent Technologies , Frankfurt , Germany ) , quantitated with the Ion Library Quantitation Kit ( Thermo Fisher Scientific ) , and pooled equimolarly ., Enriched , template-positive Ion PGM Hi-Q Ion Sphere Particles were prepared using the Ion PGM Hi-Q Template protocol with the Ion PGM Hi-Q OT2 400 Kit ( Thermo Fisher Scientific ) ., Sequencing was performed with the Ion PGM Hi-Q Sequencing protocol , using a 318 chip ., Amplicons obtained from CCHFV-infected individuals and controls were processed for nanopore sequencing via MinION ( Oxford Nanopore Technologies , Oxford , United Kingdom ) ., The libraries were prepared using the ligation sequencing kit 1D , SQK-LSK108 , R9 . 4 ( Oxford Nanopore Technologies ) ., Subsequently , the libraries were loaded on Oxford Nanopore MinION SpotON Flow Cells Mk I , R9 . 4 ( Oxford Nanopore Technologies ) using the library loading beads and run until initial viral reads were detected ., The sequences generated by PGM sequencing were trimmed to remove adaptors from each end using Trimmomatic 22 , and reads shorter than 50 base pairs were discarded ., All remaining reads were mapped against the viral reference database prepared during the design process via Geneious 9 . 1 . 
3 software 18 ., During and after MinION sequencing , all basecalled reads in fast5 format were extracted in fasta format using Poretools software 23 ., The BLASTn algorithm was employed for sequence similarity searches in the public databases when required ., The AmpliSeq design for the custom multiplex primer panel resulted in two pools of 285 and 256 primer pairs for the identification of 46 virus species causing hemorrhagic fevers , encompassing 6 , 130 genetic variants of the strains involved ., All amplicons were designed to be within a range of 125–375 base pairs ., Melting temperature values of the primers ranged from 55 . 3°C to 65 . 0°C ., No amplicons <1 , 000 base pairs with primer pairs in relative orientation and distance to each other could be identified , leading to an overall specificity of 100% for all virus species ., The primer sequences in the panels are provided in S1 Table ., The overall sensitivity of the panel reached 97 . 9% , with the primer pairs targeting 6 , 007 out of 6 , 130 genetic variants ( 1 mismatch in one or both of each primers of a primer pair accepted , as described above ) ( Fig 1 ) ., Impaired sensitivity was noted for Hantaan virus ( 0 . 05 ) ., Evaluation of all Hantaan virus variants in GenBank revealed that newly added virus sequences were divergent by up to 17% from sequences included in the panel design , leading to diminished primer binding ., These sequences could be fully covered by two sets of additional primers ., Amplification of viral targets with the multiplex PCR panel prior to NGS resulted in a significant increase of viral read numbers compared to direct NGS ( Figs 2 and 3 , S2 Table ) ., In specimens with 103 ge of the target strain , the ratio of viral reads to unspecific background increased from 1×10−3 to 0 . 25 ( CCHFV ) , 3×10−5 to 0 . 34 ( RVFV ) , 1×10−4 to 0 . 27 ( EBOV ) , and 2×10−5 to 0 . 
64 ( CHIKV ) with fold-changes of 247 , 10 , 297 , 1 , 633 , and 25 , 398 , respectively ., In direct NGS , no viral reads could be detected for CCHFV and CHIKV genomic concentrations lower than 103 , and this approach failed to identify YFV and JUNV regardless of the initial virus count ., In targeted NGS , the limit of detection was noted as 100 ge for YFV , CCHFV , RVFV , EBOV , and CHIKV and 101 ge for JUNV ., For the viruses detectable via direct NGS , amplification provided significant increases in specific viral reads over total reads ratios , from 10−4 to 0 . 19 ( CCHFV , 1 , 900-fold change ) , 2×10−5 to 0 . 19 ( RVFV , 9 , 500-fold change ) , and 3×10−4 to 0 . 56 ( EBOV , 1 , 866-fold change ) ., The average duration of the workflow of direct and targeted NGS via PGM was 19 and 20 . 5 hours , respectively ., In all patient sera evaluated via nanopore sequencing following amplification , the causative agent could be detected after 1 to 9 minutes of the NGS run ( Table 2 ) ., The characterized sequences were 89–99% identical to the CCHFV strain Kelkit L segment ( GenBank accession: GQ337055 ) known to be in circulation in Turkey 24 , 25 ., No targeted viral sequence could be observed in human sera used as negative controls during 1 hour of sequencing ., The preparation , amplification , and sequencing steps of the clinical specimens could be completed with a total sample-to-result time of less than 3 . 
5 hours ., In this study , we report the development and evaluation of an ultrahigh-multiplex PCR for the enrichment of viral targets before NGS , which aims to provide a robust molecular diagnosis in VHFs ., The panel was observed to be highly specific and sensitive and to have the capacity to detect over 97% of all known genetic variants of the targeted 46 viral species in silico ., The sensitivity of the primer panel was impaired by virus sequences not included in the original design , as noted for Hantaan virus in this study ., As 36 out of a total of 59 isolates have been published after panel design was completed , these genetic variants of Hantaan virus could not be detected with a comparable sensitivity or not at all with the current panel ., This indicates that the panel has to be adapted to newly-available sequences in public databases ., We have evaluated how the panel could be updated to accommodate these recently-added sequences and observed that two additional primer pairs could sufficiently cover all divergent entries ., Although the approach for the panel design as well as the actual design with the AmpliSeq pipeline was successful for all genetic variants included , the amplification of viral sequences significantly diverging from the panel could not be guaranteed , which may also apply for novel viruses ., Unlike other pathogenic microorganisms , viruses can be highly variable in their genome ., Only rarely do they share genes among all viruses or virus species that could be targeted as a virus-generic marker by amplification ., Our strategy for primer design and the AmpliSeq pipeline do not permit the generation of degenerated primers or the targeting of very specific consensus sequences ., However , the design of the primer panel is relatively flexible , and additional primer pairs can be appended in response to recently published virus genomes ., Moreover , an updated panel will also encompass non-viral pathogens relevant for differential 
diagnosis , and syndrome-specific panels targeting only VHF agents or virally induced febrile diseases such as West Nile fever and Chikungunya can be developed ., We have further tested the panel using quantitated nucleic acids of six well-characterized viruses responsible for VHF or severe febrile disease , with a background of human genetic material to simulate specimens likely to be submitted for diagnosis , using the semiconductor PGM sequencing platform ., The impact of amplification was evaluated with a comparison of direct and amplicon-based NGS runs ., Overall , targeted amplification prior to NGS ensured viral read detection in specimens with the lowest virus concentration ( 1 ge ) in five of the six viruses evaluated and 10 ge in the remaining strain , which is within the range of the established real-time PCR assays ., Furthermore , this approach enabled significant increases in specific viral reads over background in all of the viruses , with varying fold changes in different strains and concentrations ( Figs 2 and 3 ) ., The increased sensitivity and specificity provided with the targeted amplification suggest that it can be directly employed for the investigation of suspected VHF cases where viremia is usually short and the time point of maximum virus load is often missed 1 , 5 ., Finally , we evaluated the VHF panel by using serum specimens obtained during the acute phase of CCHFV-induced disease and employed an alternate NGS platform based on nanopore sequencing ., This approach enabled virus detection and characterization within 10 minutes of the NGS run and can be completed in less than 3 . 
5 hours in total ( Table 2 ) ., The impact of the nanopore sequencing has been revealed previously , during the EBOV outbreak in West Africa where the system provided an efficient method for real-time genomic surveillance of the causative agent in a resource-limited setting 26 ., Field-forward protocols based on nanopore sequencing have also been developed recently for pathogen screening in arthropods 27 ., Specimen processing time is likely to be further reduced via the recently developed rapid library preparation options ., While the duration of the workflow is longer , the PGM and similar platforms are well-suited for the parallel investigation of higher specimen numbers ., Although we have demonstrated in this study that targeted amplification and NGS-based characterization of VHF and febrile disease agents is an applicable strategy for diagnosis and surveillance , there are also limitations of this approach ., In addition to the requirement of primer sequence updates , the majority of the workflow requires non-standard equipment and well-trained personnel , usually out of reach for the majority of laboratories in underprivileged geographical regions mainly affected by these diseases ., However , NGS technologies are becoming widely available with reduced total costs and can be swiftly transported and set up in temporary facilities in field conditions 26 , 27 ., During outbreak investigations , where it is impractical and expensive to test for several individual agents via specific PCRs , this approach can easily provide information on the causative agent , facilitating timely implementation of containment and control measures ., Additional validation of the approach will be provided with the evaluation of well-characterized clinical specimen panels and direct comparisons with established diagnostic assays ., In conclusion , virus enrichment via targeted amplification followed by NGS is an applicable method for the diagnosis of VHFs which can be adapted for 
high-throughput or nanopore sequencing platforms and employed for surveillance or outbreak monitoring . | Introduction, Methods, Results, Discussion | We describe the development and evaluation of a novel method for targeted amplification and Next Generation Sequencing ( NGS ) -based identification of viral hemorrhagic fever ( VHF ) agents and assess the feasibility of this approach in diagnostics ., An ultrahigh-multiplex panel was designed with primers to amplify all known variants of VHF-associated viruses and relevant controls ., The performance of the panel was evaluated via serially quantified nucleic acids from Yellow fever virus , Rift Valley fever virus , Crimean-Congo hemorrhagic fever ( CCHF ) virus , Ebola virus , Junin virus and Chikungunya virus in a semiconductor-based sequencing platform ., A comparison of direct NGS and targeted amplification-NGS was performed ., The panel was further tested via a real-time nanopore sequencing-based platform , using clinical specimens from CCHF patients ., The multiplex primer panel comprises two pools of 285 and 256 primer pairs for the identification of 46 virus species causing hemorrhagic fevers , encompassing 6 , 130 genetic variants of the strains involved ., In silico validation revealed that the panel detected over 97% of all known genetic variants of the targeted virus species ., High levels of specificity and sensitivity were observed for the tested virus strains ., Targeted amplification ensured viral read detection in specimens with the lowest virus concentration ( 1–10 genome equivalents ) and enabled significant increases in specific reads over background for all viruses investigated ., In clinical specimens , the panel enabled detection of the causative agent and its characterization within 10 minutes of sequencing , with sample-to-result time of less than 3 . 
5 hours ., Virus enrichment via targeted amplification followed by NGS is an applicable strategy for the diagnosis of VHFs which can be adapted for high-throughput or nanopore sequencing platforms and employed for surveillance or outbreak monitoring . | Viral hemorrhagic fever is a severe and potentially lethal disease , characterized by fever , malaise , vomiting , mucosal and gastrointestinal bleeding , and hypotension , in which multiple organ systems are affected ., Due to modern transportation and global trade , outbreaks of viral hemorrhagic fevers have the potential to spread rapidly and affect a significant number of susceptible individuals ., Thus , urgent and robust diagnostics with an identification of the causative virus is crucial ., However , this is challenged by the number and diversity of the viruses associated with hemorrhagic fever ., Several viruses classified in Arenaviridae , Filoviridae , and Flaviviridae families and Bunyavirales order may cause symptoms of febrile disease with hemorrhagic symptoms ., We have developed and evaluated a novel method that can potentially identify all viruses and their genomic variants known to cause hemorrhagic fever in humans ., The method relies on selected amplification of the target viral nucleic acids and subsequent high throughput sequencing technology for strain identification ., Computer-based evaluations have revealed very high sensitivity and specificity , provided that the primer design is kept updated ., Laboratory tests using several standard hemorrhagic virus strains and patient specimens have demonstrated excellent suitability of the assay in various sequencing platforms , which can achieve a definitive diagnosis in less than 3 . 5 hours . 
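The enrichment metric reported above (the ratio of viral reads to total reads, and its fold-change between direct and amplicon-based NGS) can be sketched as follows; the read counts here are invented for illustration and are not the study's raw data:

```python
def read_fraction(viral_reads: int, total_reads: int) -> float:
    """Fraction of sequencing reads mapping to the target virus."""
    return viral_reads / total_reads

def fold_change(v_direct: int, t_direct: int, v_amp: int, t_amp: int) -> float:
    """Fold increase in the viral-read fraction from direct to amplified NGS,
    computed from raw counts to avoid intermediate rounding."""
    return (v_amp * t_direct) / (v_direct * t_amp)

# Hypothetical specimen: 10 viral reads out of 10,000 without enrichment
# (ratio 1e-3), 2,500 viral reads out of 10,000 after targeted amplification.
print(read_fraction(2_500, 10_000))            # 0.25
print(fold_change(10, 10_000, 2_500, 10_000))  # 250.0
```

This mirrors how a viral-read ratio of roughly 10⁻³ rising to 0.25 corresponds to a fold-change in the hundreds, as reported for CCHFV above.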
Keywords: sequencing techniques, medicine and health sciences, rift valley fever virus, pathology and laboratory medicine, togaviruses, pathogens, tropical diseases, microbiology, alphaviruses, viruses, next-generation sequencing, chikungunya virus, rna viruses, genome analysis, neglected tropical diseases, molecular biology techniques, microbial genetics, bunyaviruses, microbial genomics, research and analysis methods, viral hemorrhagic fevers, infectious diseases, viral genomics, genomics, crimean-congo hemorrhagic fever virus, medical microbiology, microbial pathogens, molecular biology, virology, viral pathogens, transcriptome analysis, genetics, biology and life sciences, viral diseases, computational biology, dna sequencing, hemorrhagic fever viruses, organisms
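The in-silico hit criterion used in the panel evaluation (a primer pair counts as a hit for a genetic variant when the sense and antisense primers each align with fewer than two mismatches) can be sketched minimally; the sequences below are invented for illustration:

```python
def mismatches(primer: str, binding_site: str) -> int:
    """Count mismatched bases between a primer and an equal-length binding site."""
    return sum(a != b for a, b in zip(primer, binding_site))

def is_hit(fwd: str, fwd_site: str, rev: str, rev_site: str, max_mm: int = 1) -> bool:
    """A primer pair is a hit when sense and antisense primers each align
    with at most max_mm mismatches (i.e., fewer than two)."""
    return mismatches(fwd, fwd_site) <= max_mm and mismatches(rev, rev_site) <= max_mm

# Toy variant: one mismatch in the forward primer, none in the reverse -> a hit.
print(is_hit("ACGTAC", "ACGTAT", "GGCATT", "GGCATT"))  # True
# Panel sensitivity is then hits / variants, e.g. 6007 / 6130 ~ 0.98 for the panel above.
```

Sensitivity over a variant set is simply the fraction of variants for which at least one pair is a hit, which is how the 97.9% figure above is obtained.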
journal.pcbi.1007284 (2019): Fast and near-optimal monitoring for healthcare acquired infection outbreaks

Since the time of Hippocrates, the "father of western medicine", a central tenet of medical care has been to "do no harm." Unfortunately, the scourge of healthcare acquired infections (HAI) challenges the medical system to honor this tenet. When patients are hospitalized they are seeking care and healing; however, they are simultaneously being exposed to risky infections from others in the hospital, and in their weakened state are much more susceptible to these infections than they would be normally. Acquiring these infections increases the chances of either dying or becoming even sicker, which also lengthens the time the patient needs to stay in the hospital (increasing costs). These infections can range from pneumonia and gastrointestinal infections like Clostridium difficile to surgical site infections and catheter-associated infections, which puts nearly any patient in the hospital at risk. Antibiotic treatments intended to aid in recovery from one infection may open the door for increased risk of infection from another. Healthcare acquired infections are a significant problem in the United States and around the world. Some estimates put the annual cost between 28 and 45 billion US dollars per year in the US 1. More importantly, they inflict a significant burden on human health. A recent study estimated more than 2.5 million new cases per year in Europe alone, inflicting a loss of just over 500 disability-adjusted life years (DALYs) per 100,000 population 2. Given their burden and cost, their prevention is a high priority for infection control specialists.

A simple approach to monitor HAI outbreaks would be to test every patient and staff member in the hospital and swab every possible location for HAI infection. However, such a naive process is too expensive to implement. A better strategy is required to efficiently monitor HAI outbreaks. A recent review article 3 included 29 hospital outbreak detection algorithms described in the literature. They found these fall into five main categories: simple thresholds, statistical process control, scan statistics, traditional statistical models, and data mining methods. Comparing the performance of these methods is challenging given the myriad diseases, definitions of outbreaks, study environments, and ultimately the purposes of the studies themselves. However, the authors identify that few of these studies were able to leverage important covariates in their detection algorithms. For example, including the culture site or antibiotic resistance was shown to boost detectability. Past simulation-based approaches 4 tackle optimal surveillance system design, by choosing clinics as sensors, to increase sensitivity and time to detection for outbreaks in a population. In contrast, our approach selects the people and locations most vulnerable to infection as sensors to detect outbreaks in a hospital setting. Different kinds of mechanistic models have also been used for studying HAI spread 5, 6, 7, 8. Most of these are differential equation based models. We refer to 9 for a review of mechanistic models of HAI transmission. On a broader level, the sensor selection problem for propagation (of contents, disease, rumors, and so on) over networks has gained much attention in the data mining community.
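The style of network sensor selection discussed here can be illustrated with a small budgeted greedy selection over simulated outbreaks. This is a simplified sketch (sensor sets rather than monitoring-rate vectors, with invented toy data), not the exact algorithm developed later in this paper:

```python
def detected(sensors: set, outbreak: set) -> bool:
    """An outbreak (set of infected nodes) is detected if any sensor node is in it."""
    return bool(sensors & outbreak)

def greedy_sensors(outbreaks: list, costs: dict, budget: int) -> set:
    """Budgeted greedy: repeatedly add the affordable node with the best
    marginal gain in detected outbreaks per unit cost."""
    sensors, spent = set(), 0
    while True:
        best, best_ratio = None, 0.0
        for v in set(costs) - sensors:
            if spent + costs[v] > budget:
                continue
            gain = sum(detected(sensors | {v}, o) - detected(sensors, o)
                       for o in outbreaks)
            if gain / costs[v] > best_ratio:
                best, best_ratio = v, gain / costs[v]
        if best is None:          # no affordable node improves detection
            return sensors
        sensors.add(best)
        spent += costs[best]

# Toy instance: three simulated outbreaks over nodes a-d, unit costs, budget 2.
outbreaks = [{"a", "b"}, {"b", "c"}, {"d"}]
costs = {"a": 1, "b": 1, "c": 1, "d": 1}
print(sorted(greedy_sensors(outbreaks, costs, 2)))  # ['b', 'd']
```

Because the detection objective has diminishing returns, this kind of greedy choice is the workhorse behind the submodularity-based guarantees developed later in the paper.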
Traditional sensor selection approaches 10, 11 typically select a set of nodes which require constant monitoring. Instead, in this paper, we select a sensor set as well as the rate at which to monitor each sensor. Hence, our approach is novel from the data mining perspective as well. Recently, Shao et al. 12 proposed selecting a set of users on social media to detect outbreaks in the general population. Similarly, Reis et al. 13 proposed an epidemiological network modeling approach for respiratory and gastrointestinal disease outbreaks. Other closely related data mining problems include selecting nodes for inhibiting epidemic outbreaks (vaccination) 14, 15, 16 and inferring missing infections in an epidemic outbreak 17. We employ a simulation and data optimization based approach to design our algorithm and to provide robust bounds on its performance. Additionally, our simulation model is richly detailed in terms of the classes of individuals and locations where sampling can occur. None of the prior works explicitly model the multiple pathways of infection for HAI outbreaks, and they fail to separate location contamination from infections in people. We formalize the sensor set problem as an optimization problem over the space of rate vectors, which represent the rates at which to monitor each location and person. We consider two objectives, namely the probability of detection and the detection time, and show that the former satisfies a mathematical property called submodularity, which enables efficient algorithms. In addition, we leverage data generated from a carefully calibrated simulation using real data collected from a local hospital. Our extensive experiments show that our approach outperforms the state-of-the-art general outbreak detection algorithm. We also show that our approach achieves the minimum outbreak detection time compared to other alternatives. To the best of our knowledge, we are the first to provide a principled data-driven optimization based approach for HAI outbreak detection. Though we validate our approach for a specific HAI, namely C. difficile, our general approach is applicable to other HAIs with a similar disease model as well.

As previously mentioned, we propose a data-driven approach to selecting the sensors. There are multiple challenges in obtaining actual HAI spread data, such as high cost, data sparsity, and the need to safeguard patient personal information. For this reason, we rely on simulated HAI contagion data. We use a highly detailed agent-based simulation that employs a mobility log obtained from local hospitals 18, 19 to produce realistic contagion data. All the steps of this methodology are described in detail in 18, and we summarize them below for completeness. Fig 1 shows a visualization of simulated HAI spread. In this simulation, people (human agents) move across various locations (static agents) as defined by the mobility log and spread HAI in a stochastic manner. The simulation was developed in the following three steps: design of an in-silico or computer-based population and its activities, conceptualization of a disease model for a pathogen of interest, and the employment of a highly detailed simulation. The following sections describe the data creation process in more detail.

Recall that our goal is to select a set of agents as sensors, and the rate at which each such sensor should be monitored, such that future HAI outbreaks are detected with high probability, and as early as possible. However, these have to be selected within given resource constraints. We start with a formalization of these problems. Finding a minimum cost sensor set is a challenging optimization problem, and we present efficient algorithms by using the notion of submodularity. We first define some notations. Let bold letters represent vectors. Let P and L denote the sets of human agents and locations, respectively;
let n = |P ∪ L| — this will be the total number of agents in our simulations. Let B denote the budget on the number of samples permitted (weighted by the cost of agents), i.e., the sum of the expected number of swabs used to detect whether a location is contaminated or a human is infected. As mentioned earlier, the mobility logs are represented as a bipartite temporal network G(P, L, E, T), with the two partitions P and L representing agents, E representing the who-visits-what-location relationship, and T representing the time/duration of each visit. We consider each agent to be a node in the temporal network G. Hence we use the terms node and agent interchangeably. Now, let c ∈ ℝⁿ be the vector of costs, i.e., c_v is the cost of monitoring node v. Let r ∈ ℝⁿ be the vector of monitoring rates, where r_v denotes the probability that node v is monitored (e.g., swabbed) each day. Finally, let Tmax denote the maximum time in each simulation instance.

Unfortunately, Problems 1 and 2 are both computationally very challenging. In fact, both problems can be proven to be NP-hard.

Lemma 1. Problem 1 is NP-hard.

Lemma 2. Problem 2 is NP-hard.

We provide the proofs of both lemmas in the supplementary material, where we show that the NP-complete SetCover problem can be viewed as a special case of both Problems 1 and 2. Since our problems are NP-hard, they cannot be solved optimally in polynomial time even for simplistic instances, unless P = NP. The instances we need to consider are quite large, so a naive exhaustive search for the optimal solution is also not feasible and would be too slow. Therefore, we focus on designing efficient near-optimal approximate solutions.

We begin with Problem 1. The function we are trying to optimize in Problem 1 is defined over a discrete lattice, i.e., over the rate vector r. Our approach is to show that this function is a submodular lattice function. The notion of submodularity, which is typically defined over set functions, can be extended to discrete lattice functions (e.g., recently in 32). Informally, submodularity means that the objective value exhibits diminishing returns for a small increase in the rate in any dimension. It is important to note that submodularity for lattice functions is more nuanced than for simple set functions (we define it formally in the Supplementary Information section). Fortunately, it turns out that this property implies that a natural greedy algorithm (which maximizes the objective marginally at each step) guarantees a (1 − 1/e)-approximation to the optimal solution. Without such a property, it is not clear how to solve Problem 1 efficiently even for a small budget. We have the following lemma.

Lemma 3. The objective in Problem 1 is a submodular lattice function.

The detailed description of the submodularity property and the proof of Lemma 3 are presented in the supplementary material. Our HaiDetect algorithm for Problem 1 selects the sensors to be monitored, and their rates, such that nodes which tend to get infected across multiple simulation instances receive higher monitoring rates. Specifically, at each step, HaiDetect selects the node v and the rate r, among all possible candidate pairs of nodes and rates, such that the average marginal gain is maximized. HaiDetect keeps adding nodes and/or increasing the rates of the selected nodes until the weighted sum of the rates equals the budget B. The detailed pseudocode is presented in Algorithm 1.
Algorithm 1 HaiDetect
Require: simulation instances I, budget B
1: for each feasible initial vector r0 do
2:   Initialize the rate vector r = r0
3:   while ∑_v r_v · c_v < B do
4:     Find a node v and rate r maximizing the average marginal gain
5:     Set r_v = r
6:     Remove all candidate pairs of nodes and rates which are no longer feasible
7: Return the best rate vector r

HaiDetect has desirable properties in terms of both effectiveness and speed. The performance guarantee of HaiDetect is given by the following lemma.

Lemma 4. HaiDetect gives a (1 − 1/e) approximation to the optimal solution.

The lemma above gives an offline bound on the performance of HaiDetect, i.e., we can state that the (1 − 1/e) approximation holds even before the computation starts. We can actually obtain a tighter bound by computing an empirical online bound (once the solution is obtained), which can be derived using the submodularity and monotonicity of Problem 1. To state the empirical bound, let us define some notation. Let the solution selected by HaiDetect for a budget B be r̂. Similarly, let the optimal vector for the same budget be r*. For simplicity, let the objective function in Problem 1 be R(·). For all nodes v and for a ∈ [0, 1], let us define Δ_v as follows:

Δ_v = max_a [ R(r̂ ∨ a·χ_{v}) − R(r̂) ]    (8)

Similarly, let us define σ_v as the argument which maximizes Δ_v:

σ_v = argmax_a [ R(r̂ ∨ a·χ_{v}) − R(r̂) ]    (9)

Now, let δ_v = Δ_v / (c_v · σ_v). Note that for each node v there is a single δ_v. Let the sequence of nodes s1, s2, …, sn be ordered in decreasing order of δ_v. Now let K be the index such that θ = ∑_{i=1}^{K−1} c_{s_i} σ_{s_i} ≤ B and ∑_{i=1}^{K} c_{s_i} σ_{s_i} > B. Now the following lemma can be stated.

Lemma 5.
The online bound on R(r*) in terms of the current rate vector r̂ assigned by HaiDetect is as follows:

R(r*) ≤ R(r̂) + ∑_{i=1}^{K−1} Δ_{s_i} + ((B − θ) / (c_{s_K} σ_{s_K})) Δ_{s_K}

The lemma above allows us to compute how far the solution given by HaiDetect is from the optimal. We compute this bound and explore the results in detail in the Results section. In addition to the performance guarantee, HaiDetect's running time complexity is as follows.

Lemma 6. The running time complexity of HaiDetect is O(c · B² (|P| + |L|)), where c is the number of unique initial vectors r0, B is the budget, P is the set of human agents, and L is the set of locations.

Note that the constant c is much smaller than the total population, i.e., c ≪ |P| + |L| in our case, as infections are sparse and we do not need to consider agents and locations which never get infected. The most expensive computational step in Algorithm 1 is the estimation of the node v and rate r that give the maximum average marginal gain (Step (i) of 1(b)). This can be expedited using lazy evaluations and memoization. Hence, the algorithm is also quite fast in practice. Moreover, it is embarrassingly parallelizable: steps (a) and (b) for each initial vector can be performed in parallel. We also propose a similar algorithm, HaiEarlyDetect, for Problem 2. The main idea here is that we assign higher rates to nodes which tend to get infected earlier in many simulation instances. The pseudocode for HaiEarlyDetect is presented in Algorithm 2.
Algorithm 2 HaiEarlyDetect
Require: simulations I, budget B
 1: for each feasible initial vector r0 do
 2:   Initialize the rate vector r = r0
 3:   while Σ_v r_v · c_v < B do
 4:     Find a node v and rate r minimizing the average detection time
 5:     Set r_v = r
 6:     Remove all candidate (node, rate) pairs that are no longer feasible
 7: Return the best rate vector r
As shown in Algorithm 2, HaiEarlyDetect optimizes the marginal gain in the objective of Problem 2 in each iteration. The objective in Problem 2 turns out not to be submodular. However, as our empirical results show, the greedy approach we propose works very well in practice and outperforms the baselines. It too runs fast in practice, as the optimization techniques discussed earlier for HaiDetect apply to HaiEarlyDetect as well. In the previous section, we discussed two types of bounds on the performance of HaiDetect. Here we show how far the solution given by HaiDetect is from the optimal value for various budgets. For this experiment, we ran HaiDetect on a set of 100 simulations and computed the value of the objective in Problem 1 for the resulting rate vector. We also computed the overall bound, based on the (1 − 1/e) approximation, and the empirical bound as per Lemma 5. Since the objective value cannot exceed the number of simulations, we also compute the lowest bound as the minimum of the two bounds and the number of simulations. We repeat the experiment for budget sizes from 1 to 50; the resulting plot is presented in Fig 5. Fig 5 highlights several interesting aspects. First of all, the online bound is always tighter than the offline bound. Moreover, as the performance of HaiDetect approaches the optimal (with increasing budget), the online bound becomes tighter and tighter until the performance and the bound are equal, indicating the values of the budget for which HaiDetect solves Problem 1 optimally. This result
demonstrates that HaiDetect can accurately find sensors which detect any observed outbreak, given sufficient budget. Given that HaiDetect is near-optimal for the observed outbreaks, we evaluate its effectiveness in detecting unseen ("future") outbreaks. Here, we compare the performance of HaiDetect and Celf with respect to the budget on "unobserved" simulations. For this experiment, we performed 5-fold cross validation on 200 simulations. Specifically, we divided the simulations into 5 groups, and at each turn we selected the sensors using the first four groups and computed the sum of outbreak detection probability, as shown in Eq 1, on the fifth group (the test set). Then we normalized the resulting sum by the total number of simulation instances in the same group. The normalized value can be intuitively described as the average probability of detecting a future outbreak. We repeat this process five times, ensuring each group is used for evaluation, and then compute the overall average and its standard error. We repeat the entire process for budgets from 1 to 50. The result of our experiment is shown in Fig 6. The first observation is that HaiDetect consistently outperforms Celf for all values of the budget; the disparity between the methods is more apparent for larger budgets. The difference in quality of the sensors can be explained by the fact that Celf only assigns rates of 0 or 1, whereas HaiDetect can strategically assign non-integer rates so as to maximize the likelihood of detection. We can also observe that the standard error for HaiDetect decreases and is negligible for larger budgets, which is not the case for Celf. This shows that not only is the quality of the sensors selected by HaiDetect better, it is also more stable. Finally, we see that the probability of an outbreak being detected by sensors selected by HaiDetect is 0.
96 when the budget equals 50, whereas it is only around 0.75 for Celf. Similarly, a budget of only 25 is required for HaiDetect to detect an outbreak with probability 0.8; for the same budget, sensors selected by Celf detect cascades with probability 0.55. The result highlights that HaiDetect produces a more reliable monitoring strategy for HAI outbreak detection. Here, we investigate the change in performance of HaiDetect and Celf as the number of simulations used to select the sensors increases. For this experiment, we used 150 distinct simulations, divided into a training set and a testing set. We used the cascades in the training set to select the sensors and the ones in the testing set to measure quality. First we fixed a budget of 10 and a training size of 10 cascades, ran both HaiDetect and Celf for this setting, and measured quality using the cascades in the testing set. We then increased the training size in steps of 10 until we reached a size of 100, and repeated the same procedure for budgets of 30 and 50. We compute the average probability of detection in the same manner as described above. Fig 7 summarizes the result. We can observe that HaiDetect outperforms Celf consistently, reinforcing the previous observation that HaiDetect selects good sensors for HAI outbreak detection. An interesting observation is that the performance tails off after a training size of 20 for larger budgets, which implies that not many cascades have to be observed before we can select good quality sensors. This is an encouraging finding, as gathering a large number of real cascades of HAI spread is not feasible. Next we study the change in performance of HaiDetect with the training size for various budget sizes. Here we tracked the performance of HaiDetect for budgets of 10, 30, and 50 for training sets of various sizes. The result is summarized in Fig 8. As shown in the figure, the
difference between the performance of HaiDetect for budgets 30 and 10 is much larger than that between budgets 50 and 30. The normalized objective, or the probability of detection, is close to 1 at budget 50, indicating that monitoring sensors at the rates assigned by HaiDetect detects almost all the HAI outbreaks. Hence, in expectation, roughly 50 swabs a day is enough to monitor an outbreak in a hospital wing. Again, we observe that the performance of HaiDetect tails off after a training size of 20, providing extra validation for the observation that a limited number of observed cascades is enough to select high quality sensors. A desirable property of sensors is that they aid in early detection of outbreaks. Here we study the average detection time of future outbreaks using the sensors and rates selected by HaiEarlyDetect. In this experiment, we first divided our simulations into equally sized training and testing sets, each having 100 simulations. We ran HaiEarlyDetect on the training set to select sensors and the rates at which to monitor them. Then, we monitored the selected sensors at the inferred rates and measured the detection time for each simulation in the testing set. We repeated the entire process for various budgets. The detection time averaged over 100 simulated outbreaks in the testing set is summarized in Fig 9, and the variance in the detection time is shown in Fig 10. As shown in the figure, the average detection time decreases as the budget increases. According to our results, the average time to detect an outbreak in the testing set while monitoring the sensors selected for a budget of 1000 is roughly six days. This is impressive considering that monitoring all agents results in a detection time of 4 days, while monitoring all of the more than 1200 nurses results in a detection time of 8 days. Hence, monitoring these sensors detects the outbreak earlier with a smaller budget. Another advantage of our sensors is that they are diverse
. A significant proportion of the selected sensors are patients and fomites, which are easier to monitor than nurses; hence, monitoring the sensors selected by HaiEarlyDetect also has an economic advantage. An interesting observation seen in Fig 10 is that the variability in average detection time decreases with increasing budget, so we expect the performance of our sensors to be fairly consistent in detecting future outbreaks for larger budgets. Moreover, the median time to detect an outbreak (as shown by the box plots) is always less than the average, so we expect the performance of HaiEarlyDetect to be generally better than that suggested by the average detection time. For a budget of 1000, the median detection time is just 5 days; note that monitoring all agents results in a detection time of 4 days. This implies that in practice our approach requires only 1000 swabs per day to detect an outbreak within a single day of the first infection. An interesting question is how many potential cases can be prevented by monitoring the sensors selected by HaiEarlyDetect. Here we study how many nodes get infected before an outbreak is detected and how many potential infections can be prevented by monitoring our sensors for various budgets. As in the previous experiment, for a given budget we leverage 100 simulations to select sensors and their monitoring rates. Once the sensors are selected, we count the number of infections that occur in a test simulation before a sensor is infected and how many further infections occur following the infection of the sensors. We then average these numbers over 100 test simulations. The results are summarized in Table 2. As shown in Table 2, for a budget of 10 samples/swabs, 4.31 potential future infections could be prevented. Note that there are only 23 infections on average per simulation. For a budget of only 200, 15.
02 infections could be prevented, which is about 66% of potential infections. The number goes up to 17, or 74%, for a budget of 1000. The result shows that even for a low budget (less than 200 swabs per day), our approach could help prevent a significant number of future infections. Next we study the types of agents that are selected by HaiDetect as sensors. For this experiment, we use 100 randomly selected simulations to select sensors for a wide range of budgets. After the sensors are selected, we sum up the rates of each category of agents, such as nurses, doctors, patients, and so on. Fig 11(a) shows the distribution of sensor allocation for each category of agents at low budgets. We observe that for a budget of 10, nearly 60% of the total budget is spent on selecting nurses. Since nurses are the most mobile agents, the result highlights the fact that HaiDetect selects the most important agents as sensors early on. Similarly, Fig 11(b) shows the distribution of sensors for higher budgets. Here we observe that nearly 35% of the budget is allocated to nurses, fomites and patients have roughly equal allocations of about 20%, and 17% of the budget is allocated to doctors; the remaining categories have minimal allocation. The distribution shows that HaiDetect selects heterogeneous sensors, including both people and objects/locations, as intended. Finally, we are also interested in the scheduling implications of the sensors selected by HaiDetect. To this end, we measure the aggregated proportion of the budget assigned to each rate for the sensors we select. The results are summarized in Fig 12. As shown in Fig 12(a), most of the sensors have a rate of 0.1, very few sensors have rates from 0.2 to 0.5, and there is a sudden spike at rate = 1.0. When we look at the rate distribution for each category separately, interestingly we observe that only nurses have rates of 1.
0. This implies that certain nurses have to be monitored each day to detect an HAI outbreak. The reason behind this unexpected behaviour can be attributed to the fact that the hospital where the mobility log was collected required all the nurses to attend a daily meeting. Hence, all the nurses were in contact with each other every day, and it is likely that nurses infect each other in case of an outbreak. There is therefore an advantage in monitoring some of the nurses every day to quickly detect an HAI outbreak. Effective and early detection of HAI outbreaks are important problems in hospital infection control, and they have not been studied systematically so far. While these are challenging problems, understanding their structure can help in designing effective algorithms and optimizing resources. Current practices in hospitals are fairly simple, and do not attempt to optimize resources. Our algorithms perform better than many natural heuristics, and our results show that a combination of data- and model-driven approaches is effective in detecting HAIs. Since there is limited data on disease incidence, good models and simulations play an important role in designing algorithms and evaluating them. | Introduction, Materials and methods, Results, Discussion | According to the Centers for Disease Control and Prevention (CDC), one in twenty-five hospital patients is infected with at least one healthcare acquired infection (HAI) on any given day. Early detection of possible HAI outbreaks helps practitioners implement countermeasures before the infection spreads extensively. Here, we develop an efficient data and model driven method to detect outbreaks with high accuracy. We leverage mechanistic modeling of C.
difficile infection, a major HAI disease, to simulate its spread in a hospital wing, and we design efficient near-optimal algorithms to select people and locations to monitor using an optimization formulation. Results show that our strategy detects up to 95% of “future” C. difficile outbreaks. We design our method by incorporating specific hospital practices (like swabbing for infections) as well. As a result, our method outperforms state-of-the-art algorithms for outbreak detection. Finally, a qualitative study of our results shows that the people and locations we select to monitor as sensors are intuitive and meaningful. | Healthcare acquired infections (HAIs) lead to significant losses of lives and result in a heavy economic burden on healthcare providers worldwide. Timely detection of HAI outbreaks will have a significant impact on the health infrastructure. Here, we propose an efficient and effective approach to detect HAI outbreaks by strategically monitoring selected people and locations (sensors). Our approach leverages outbreak data generated by calibrated mechanistic simulation of C. difficile spread in a hospital wing and a careful computational formulation to determine the people and locations to monitor. Results show that our approach is effective in detecting outbreaks. | medicine and health sciences, gut bacteria, medical personnel, sociology, social sciences, health care, simulation and modeling, health care providers, systems science, mathematics, network analysis, social networks, nosocomial infections, bacteria, allied health care professionals, research and analysis methods, clostridium difficile, infectious diseases, computer and information sciences, epidemiology, agent-based modeling, people and places, professions, nurses, population groupings, biology and life sciences, physical sciences, organisms | null |
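To make the monitoring objective above concrete, here is a small Monte-Carlo sketch of estimating the probability that at least one monitored node detects an outbreak. The Bernoulli daily-swab model, the node names and the numbers are illustrative assumptions, not the paper's exact formulation of Eq 1.

```python
import random

def detection_probability(cascades, rates, horizon, trials=2000, seed=0):
    """Monte-Carlo estimate of outbreak detection probability.

    A node monitored at daily rate r is swabbed each day independently
    with probability r; an outbreak is detected if some monitored node is
    swabbed on or after the day it became infected. `cascades` is a list
    of simulated outbreaks, each a {node: infection_day} mapping."""
    rng = random.Random(seed)
    detected = 0
    total = 0
    for cascade in cascades:
        for _ in range(trials):
            total += 1
            hit = False
            for node, day in cascade.items():
                r = rates.get(node, 0.0)
                # swab this node on each remaining day of the horizon
                for _t in range(day, horizon):
                    if rng.random() < r:
                        hit = True
                        break
                if hit:
                    break
            detected += hit
    return detected / total
```

Averaging such an indicator over many simulated cascades is exactly the kind of normalized detection probability the cross-validation experiments above report; here the simulation model is replaced by hand-written cascades.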
1,891 | journal.pcbi.1003705 | 2,014 | Rethinking Transcriptional Activation in the Arabidopsis Circadian Clock | The task of the circadian clock is to synchronize a multitude of biological processes to the daily rhythms of the environment ., In plants , the primary rhythmic input is sunlight , which acts through photoreceptive proteins to reset the phase of the clock to local time ., The expression levels of the genes at the core of the circadian clock oscillate due to mutual transcriptional and post-translational feedbacks , and the complexity of the feedbacks makes it difficult to predict and understand the response of the system to mutations and other perturbations without the use of mathematical modelling 1 ., Early modelling of the system by Locke et al . demonstrated the feasibility of gaining new biological insights into the clock through the use of model predictions 2 ., The earliest model described the system as a negative feedback loop between the two homologous MYB-like transcription factors CIRCADIAN CLOCK ASSOCIATED 1 ( CCA1 ) and LATE ELONGATED HYPOCOTYL ( LHY ) 3 , 4 on one hand and TIMING OF CAB EXPRESSION 1 ( TOC1/PRR1 ) 5 on the other ., Over the past decade , models have progressed to describing the system in terms of multiple interacting loops , still centred around LHY/CCA1 ( treated as one component ) and TOC1 ., The latest published model by Pokhilko et al . ( 2013 ) describes transcriptional and post-translational interactions between more than dozen components ., We refer to that model as P2012 6 , in keeping with the tradition of naming the Arabidopsis clock models after author and submission year ( cf . 
L2005 2 , L2006 7 , P2010 8 and P2011 9 ) ., The clock depends on several genes in the PSEUDO RESPONSE REGULATOR ( PRR ) family: PRR9 , PRR7 , PRR5 , PRR3 and TOC1/PRR1 are expressed in a clear temporal pattern , with PRR9 mRNA peaking in the morning , PRR7 and PRR5 before and after noon , respectively , and PRR3 and TOC1 near dusk 10 ., PRR9 , PRR7 and PRR5 act to repress expression of CCA1 and LHY during the day 11 , but , until recently , TOC1 was thought to be a nightly activator of CCA1 and LHY , acting through some unknown intermediate ., However , TOC1 has firmly been shown to be a repressor of both CCA1 and LHY , and it now takes its place in the models as the final repressor of the “PRR wave” 9 , 12–14 ., PRR3 has yet to be included in the clock models and the roles of the other PRRs are being reevaluated following the realization that TOC1 acts as a repressor 15 ., The GIGANTEA ( GI ) protein has long been thought to form part of the clock 16 , whereas EARLY FLOWERING 3 ( ELF3 ) was known to affect clock function 17 but was only more recently found to be inside the clock , rather than upstream of it 18 , 19 ., GI and ELF3 interact with each other and with other clock-related proteins such as the E3 ubiquitin-ligase COP1 20 ., GI plays an important role in regulating the level and activity of ZEITLUPE ( ZTL ) 21 , which in turn affects the degradation of TOC1 22 and PRR5 23 but not of the other PRRs 24 ., The clock models by Pokhilko et al . 
include GI and ZTL; GI regulates the level of ZTL by sequestering it in a GI-ZTL complex during the day and releasing it at night 8 ., Together with EARLY FLOWERING 4 ( ELF4 ) and LUX ARRHYTHMO ( LUX ) , ELF3 is necessary for maintaining rhythmicity in the clock 25–27 ., The three proteins are localized to the nucleus , and ELF3 is both necessary and sufficient for binding ELF4 and LUX into a complex termed the evening complex ( EC ) 19 ., In recent models , EC is a major repressor; it was introduced in P2011 to repress the transcription of PRR9 , LUX , TOC1 , ELF4 and GI 9 ., We here present a model ( F2014 ) of the circadian clock in Arabidopsis , extending and revising the earlier models by Pokhilko et al . ( P2010–P2012 ) ., To incorporate as much as possible of the available knowledge about the circadian clock into the framework of a mathematical model , we have compiled a large amount of published data to use for model fitting ., These curated data are made available for download as described in Methods ., The aim of this work is to clarify the role of transcriptional activation in the Arabidopsis circadian clock ., Specifically , we use modelling to test whether the available data are compatible with models with and without activation ., There is no direct experimental evidence for any of the activators postulated in earlier models , and as a crucial step in remodelling the system we have removed all transcriptional activation from the equations ., Instead , we have added a major clock component missing from earlier models: the transcription factor REVEILLE 8 ( RVE8 ) , which positively regulates the expression of a large fraction of the clock genes 28 , 29 ., A further addition is the nightly transcription factor NOX/BROTHER OF LUX ARRHYTHMO ( NOX/BOA ) , which is similar to LUX but may also act as an activator of CCA1 30 ., By examining transcriptional activation within the framework of our model , we have clarified the relative contributions of the 
activators to their different targets ., Overexpression of ELF3 rescues clock function in the otherwise arrhythmic elf4-1 mutant 27 ., This suggests that the function of ELF4 is to amplify the effects of ELF3 through the ELF3-ELF4 complex , which led us to consider an evening complex ( EC ) where free ELF3 protein can play the role of ELF3-ELF4 , albeit with highly reduced efficacy ., This , together with our aim to add the NOX protein in parallel with LUX , as described in the next section , prompted us to rethink how to model this part of the clock ., EC is not given its own variable in the differential equations , unlike in the earlier models ., Instead , EC activity is seen as rate-limited by LUX and NOX on one hand and by ELF3-ELF4 and free ELF3 on the other ., In either pair , the first component is given higher importance , in accordance with previous knowledge ., For details , see the equations in Text S1 ., This simplified description requires few parameters , which was desirable because the model had to be constrained using time course data for the individual components of EC , mainly at the mRNA level ., The effects of our changes to EC are illustrated in Figure 2 , which shows EC and related model components in the transition from cycles of 12 h light , 12 h dark ( LD 12:12 ) to constant light ( LL ) ., ELF3 , which is central to EC in our model , behaved quite differently at the mRNA level compared with the P2011 and P2012 models , and more closely resembled the available experimental data , with a broad nightly peak and a trough in the morning at zeitgeber time ( ZT ) 0–4 ( Figure 2A ) ., The differences in the dynamics of the EC components between our eight parameter sets demonstrate an interesting and more general point: The components that are most reliably constrained are not always those that were fitted to measured data ., In our case , the model was fitted to data for the amount of ELF3 mRNA ( Figure 2A ) and total ELF3 protein ( not shown ) ,
but the distribution between free ELF3 and ELF3 bound in the ELF3-ELF4 complex was not directly constrained by any data ., As expected , the variation between parameter sets was indeed greater for the levels of free ELF3 protein and the ELF3-ELF4 complex , as shown in Figure 2B–C ., However , the predicted level of EC ( Figure 2D ) showed less variation than even the experimentally constrained ELF3 mRNA ., This indicates that the shape and timing of EC were of such importance that the EC profile was , in effect , tightly constrained by data for the seven EC repression targets ( PRR9 , PRR7 , PRR5 , TOC1 , GI , LUX and ELF4 ) ., NOX is a close homologue of LUX , with a highly similar DNA-binding domain and a similar expression pattern which peaks in the evening ., Like LUX , NOX can form a complex with ELF3 and ELF4 , but it is only partially redundant with LUX , which has a stronger clock phenotype 31 ., The recruitment of ELF3 to the PRR9 promoter is reduced in the lux-4 mutant and abolished in the LUX/NOX double amiRNA line 32 ., To explain these findings , we introduced NOX into the model as a component acting in parallel with LUX; we assumed that NOX and LUX play similar roles as transcriptional repressors in the evening complex ., There is evidence that NOX binds to the promoter of CCA1 ( and possibly LHY ) in vivo and activates its transcription ., Accordingly , the peak level of CCA1 expression is higher when NOX is overexpressed , and the period of the clock is longer 30 ., This possible role of NOX as an activator fits badly with its reported redundancy with LUX as a repressor ., In an attempt to resolve this issue , we first modelled the system with NOX only acting as a repressor in EC , and then investigated the effects of adding the activation of CCA1 expression ., Figure 3 illustrates the role of NOX in the model in comparison with LUX ., The differences in their expression profiles ( Figure 3A–B ) reflect the differences in their transcriptional 
regulation ( cf . Figure 1 ) ., CCA1 expression is decreased only marginally in the nox mutant ( Figure 3C–D ) but more so in lux ( Figure 3E ) ., Because of the redundancy between NOX and LUX , the model predicted that the double mutant lux;nox has a stronger impact on circadian rhythms , with CCA1 transcription cut at least in half compared with lux ( Figure S2A ) ., According to the model , the loss of LUX and NOX renders the evening complex completely ineffective , which in turn allows the PRR genes ( including TOC1 ) to be expressed at high levels and thereby repress LHY and CCA1 ., A comparison with the P2011 and P2012 models , which include LUX but not NOX , is shown in Figure 3B , C and E . Here , the most noticeable improvement in our model was the more accurate peak timing after entry into LL , where in the earlier models the clock phase was delayed during the first subjective night 33 ., Period lengthening and increased CCA1 expression were observed in NOX-ox only for some of the parameter sets ( Figure 3F ) ., The four parameter sets with increased CCA1 all had a very weakly repressing NOX whose main effect was to counter LUX by taking its place in EC ., Removing NOX from EC in the equations and reoptimizing a relevant subset of the parameters worsened the fit to the data ( Figure S3 ) ., These results support the idea of NOX acting through EC in a manner that makes it only partially redundant with LUX ., The possibility that NOX is a transcriptional activator of CCA1 and LHY was probed by adding an activating term to the equations ( see Text S1 ) and reoptimizing the parameters that control transcription of CCA1 and LHY ., The resulting activation was very weak in all parameter sets , and had negligible effect on the expression of CCA1 in NOX-ox ( Figure S2B–C ) ., Accordingly , the addition of the activation term did not improve the fit to data as measured by the cost function described in Methods ( Figure S3 ) ., In earlier models that included the PRR
genes , the PRRs were described as a series of activators; during the day , PRR9 activated the transcription of PRR7 , which similarly activated PRR5 ., These interactions improved the clock's entrainability to different LD cycles 8 ., However , this sequential activation disagrees with experimental data for prr knockout mutants , which indicate that loss of function of one PRR leaves the following PRR virtually unaffected ., For instance , experiments have shown that the expression levels of PRR5 and TOC1 ( as well as LHY and CCA1 ) are unaffected in both prr9-1 and prr7-3 knockout mutants 11 , 34 ., Instead , direct interactions between the PRRs have been found to be negative and directed from the later PRRs in the sequence to the earlier ones 15 , 35 ., A strong case has been made for TOC1 as a repressor of the PRR genes 9 , 14 ., As in P2012 , we modelled transcription of PRR9 , PRR7 and PRR5 as repressed by TOC1 , but we also included negative auto-regulation of TOC1 , as suggested by the ChIP-seq data that identified the TOC1 target genes 14 ., Likewise , PRR5 directly represses expression of PRR9 and PRR7 35 , and we have added these interactions to the model ., As illustrated in Figure 4A–C , this reformulation of the PRR wave is compatible with correct timing of the expression of the PRRs in the wild type , and the timing and shape of the expression curves were improved compared with the P2012 model ., An earlier version of our model gave similar profiles despite missing the repression by PRR5 , which suggests that such repression is not of great importance to the clock ., A nightly repressor appears to be acting on the PRR7 promoter , as seen in the rhythmic expression of PRR7 in LD in the cca1-11;lhy-21;toc1-21 mutant 36 ., An observed increase in PRR7 expression at ZT 0 in the lux-1 mutant relative to wild type 29 points to EC as a possible candidate ., Although Helfer et al .
report that LUX does not bind to the LUX binding site motif found in the PRR7 promoter 31 , we included EC among the repressors of PRR7 ., This interaction was confirmed by Mizuno et al . while this manuscript was in review 37 , demonstrating the power of modelling and of timely publication of models ., We further let EC repress PRR5 ., We are not aware of any evidence for such a connection , but the parameter fitting consistently assigned a high value to the connection strength , as was also the case with PRR7 ., This result hints that nightly repression of PRR5 is of importance , whether it is caused by EC or some related clock component ., The real test of the model came with knocking out members of the PRR wave ., Here , the model generally outperformed the P2012 model , as judged by eye , but we are missing data for some important experiments such as PRR7 in prr9 ., As an example , Figure 4D shows the level of PRR5 protein in the prr9;prr7 double mutant , where half of our parameter sets predict the correct profile and peak phase ., In the earlier models , the only remaining inputs to PRR5 were ( a hypothetical delayed LHY/CCA1 ) , TOC1 ( in P2012 only ) and light ( which stabilized the protein ) , and these were unable to shape the PRR5 profile correctly ., The crucial difference in our model was the repression of PRR5 by CCA1 and LHY , as described in the next section ., CCA1 and LHY appear to work as transcriptional repressors in most contexts in the clock ( see e . g . 
38 ) , but knockdown and overexpression experiments seem to suggest that they act as activators of PRR9 and PRR7 34 ., Accordingly , previous models have used activation by LHY/CCA1 , combined with an acute light response , to accomplish the rapid increase observed in PRR9 mRNA in the morning ., However , with the misinterpretation of TOC1 regulation of CCA1 12 in mind , we were reluctant to assume that the activation is a direct effect ., To investigate this issue , we modelled the clock with CCA1 and LHY acting as repressors of all four PRRs ., If repression was incompatible with the data for any of the PRRs , parameter fitting should reduce the strength of that repression term to near zero ., As is shown in Figure 4E , the model consistently made CCA1 and LHY strongly repress PRR5 and TOC1 ., PRR7 was also repressed , but in a narrower time window that acted to modulate the phase of its expression peak ., In contrast , PRR9 was virtually unaffected; CCA1 and LHY do not directly repress PRR9 in the model ., Even though CCA1 and LHY were not modelled as activators , the model reproduced the reduction in PRR9 expression observed in the cca1-11;lhy-21 double mutant ( Figure 4F and Figure S4 ) ., PRR7 behaved similarly to PRR9 in both experiments and model ., Conversely , in the P2011 and P2012 models , where LHY/CCA1 was supposed to activate PRR9 , there was no reduction in the peak level of PRR9 mRNA in cca1;lhy compared to wild type ( Figure S5A ) ., To explore whether CCA1 and LHY may be activating PRR9 transcription , we temporarily added an activation term to the equations ( see Text S1 ) and reoptimized the relevant model parameters ., The activation term came to increase PRR9 expression around ZT 2 at least twofold in two of the eight parameter sets , and by a smaller amount in several ( Figure S5B ) ., This would seem to suggest that activation improved the fit between data and model ., Surprisingly , there was no improvement as measured by the cost function 
(Figure S3). With the added activation, PRR9 was reduced only marginally more in cca1;lhy than in the original model (Figure S5C). A likely explanation is that feedbacks through EC and TOC1, which repress PRR9, almost completely negate the removed activation of PRR9 in the cca1;lhy mutant. Thus the model neither requires nor rules out activation of PRR9 by CCA1 and LHY. Like CCA1 and LHY, RVE8 is a morning-expressed MYB-domain transcription factor. However, unlike CCA1 and LHY, RVE8 functions as an activator of genes with the evening element motif, and its peak activity in the afternoon is strongly delayed in relation to its expression 28. Based on experimentally identified targets, we introduced RVE8 into our model as an activator of the five evening-expressed clock components PRR5, TOC1, GI, LUX and ELF4, as well as the morning-expressed PRR9 29. PRR5 binds directly to the promoter of RVE8 to repress its transcription 35, and it is likely that PRR7 and PRR9 share this function 28, 29. Using only these three PRRs as repressors of RVE8 was sufficient to capture the expression profile and timing of RVE8, both in LL and LD (Figure 5A). RVE8 is partially redundant with RVE4 and RVE6 28, which led us to model the rve8 mutant as a 60% reduction in the production of RVE8. To clearly see the effects of RVE8 in the model, we instead compared with the rve4;rve6;rve8 triple mutant, which we modelled as a total knockout of RVE8 function. The phase of the clock was delayed in LD, and the period lengthened by approximately two hours in LL in the simulated triple mutant, in agreement with data for LHY (Figure 5B–C), though we note that CAB::LUC showed a greater period lengthening in experiments 29. To investigate the significance of RVE8 as an activator in the model, we made a version of the model without RVE8. The model parameters were reoptimized against the time course data (excluding data for RVE8 and from rve
mutants). As with NOX, we found that removing the activation had no clear effect on the costs of the parameter sets after refitting (Figure S3). It appears that activators such as RVE8 are not necessary for clock function. Still, the effects of the rve mutants can only be explained when RVE8 is present in the model, motivating its inclusion. The model used RVE8 as an activator for four of its targets in a majority of the parameter sets (Figure 5D–F). The exceptions were TOC1 and ELF4. Although TOC1 is a binding target of RVE8 in vivo, TOC1 expression is not strongly affected by RVE8-ox or rve8-1 28, 39. This was confirmed by our model, where the parameter fitting disfavoured the activation of TOC1 in most of the parameter sets (Figure 5E). The eight parameter sets may not represent an exhaustive exploration of the parameter space, but the results nevertheless support the notion that the effect of RVE8 on TOC1 is of marginal importance. Constraining the many parameters in our model requires a cost function based on a large number of experiments. To this end, we compiled time course data from the published literature, mainly by digitizing data points from figures using the free software package g3data 40. We extracted more than 11000 data points from 800 time courses in 150 different mutants or light conditions, from 59 different papers published between 1998 and 2013. The median time resolution was 3 hours. The list of time courses and publications can be found in Text S2, and the raw time course data and parameter values are available for download from http://cbbp.thep.lu.se/activities/clocksim. Most of the compiled data refer to the mRNA level, from measurements using Northern blots or qPCR, but there are also data at the protein level (67 time courses) and measurements of gene expression using luciferase assays (12 time courses). About one third of the time courses can be considered replicates, mainly from wild type plants in the most common light conditions. Many of these data are controls for different mutants. Where wild type and mutant data were plotted with the same normalization, we made note of this, as their relative levels provide crucial information that is lost if the curves are individually normalized. To find suitable values for the model parameters, we constructed a minimalistic cost function based on the mean squared error between simulations and time course data. This approach was chosen to allow the model to capture as many features of the gene expression profiles as possible, with a minimum of human input. The cost function consists of two parts, corresponding to the profiles and levels of the time course data, respectively. For each time course with experimental data points, the corresponding simulated data were obtained from the model. The simulations were performed with the mutant background represented in the model equations, with entrainment for up to 50 days in light/dark cycles followed by measurements, all in the experimental light conditions. The cost for the concentration profile was computed as (1). Since the profile levels are thus normalized, eq. (1) is independent of the units of measurement. The weighting parameters (see Text S2 for values) allowed us to weight time courses to reflect their relative importance, e.g. where less data was available to constrain some part of the model. Where several experimental time courses had the same normalization, e.g.
in comparisons between wild type and mutants, the model should reproduce the relative changes in expression levels between the time courses. For each group of time courses, we could minimize the sum (2). Unlike eq. (1), the numerators in this sum are guaranteed to be non-zero, which allows us to operate in log-space, where fold changes up or down from the mean are equally penalized. Replacing each term with its logarithm, we write the final scaling cost for a group as (3). This cost term thus penalizes non-uniform scaling between experiment and data within the group. The total cost to minimize was (4), where a weighting parameter sets the balance between fitting the simulation to the profile or the level of the data. A downside to our approach is that period and phase differences between different data sets result in fitting to a mean behaviour that is more damped than any individual data set. To reduce this problem, we removed the most obvious outliers from the fitting procedure. We also considered distorting the time axis (e.g. dynamic time warping) to normalize the period of oscillations in constant conditions, in order to better capture the effects of mutants relative to the wild type. This process would be cumbersome and arbitrary, which is why it was deemed outside the scope of our efforts. Compared to previous models by Pokhilko et al.
, fewer parameters were manually constrained in our model. In the P2010–P2012 models, roughly 40% of the parameters were constrained based on the experimental data 6, 8, 9, and the remaining free parameters were fitted to mRNA profiles in LD and the free-running period in LL and DD (constant dark) in wild type and mutants 9. For the F2014 model, we completely constrained 16 parameters in order to obtain correct dynamics for parts of the system where we lacked sufficient time course data. Specifically, the parameters governing COP1 were taken from P2011, where they were introduced, whereas the parameters for the ZTL and GI proteins (except the GI production and transport rates) were fitted by hand to the figures in 41. All other parameters were fitted to the collected time course data through the cost function. The eight parameter sets presented here were selected from a group of 30, where each was independently seeded from the best of 1000 random points in parameter space, then optimized using parallel tempering for a large number of iterations at four different temperatures, which were gradually lowered. The resulting parameter values, which are listed in Text S1, typically span at least an order of magnitude between the different parameter sets (Figure S6). The sensitivity of the cost function to parameter perturbations is presented in Figure S7 and further discussed in Text S1. Plots of the single best parameter set against all experimental data are shown in Figure S8. To simulate the system and evaluate the cost function rapidly enough for parameter optimization to be feasible, we developed a C++ program that implements ODE integration and parameter optimization using the GNU Scientific Library 42. Evaluating the cost function for a single point in parameter space, against the full set of experiments and data, took about 10 seconds on a 3 GHz Intel Core i7 processor. Our software is released under the GNU General Public License (GPL) 43
and is available from http://cbbp.thep.lu.se/activities/clocksim/. Accurately modelling the circadian clock as a network of a dozen or more genes is challenging. Previous modelling work (e.g. P2010–P2012) 6, 8, 9 has drawn on existing data and knowledge to constrain the models, but as the amount of data increases it becomes ever more difficult to keep track of the effects of mutations and other perturbations. For a system as large as the plant circadian clock, it is desirable to automate the parameter search as much as possible, but encoding the uncertainties surrounding experimental data in a computer-evaluated cost function is not trivial. Our modelling demonstrates the feasibility of fitting a model of an oscillating system against a large set of data without the construction of a complicated cost function based on qualitative aspects of the model output, such as entrainability, free-running period or amplitude. Instead, we relied on the large amount of compiled time course data to constrain the model, using a direct comparison between simulations and data. This minimalistic cost function had the additional advantage of allowing the use of time courses that span a transition in environmental conditions, e.g. from rhythmic to constant light, where the transient behaviour of the system may contain valuable information. Consequently, our model correctly reproduces the phase of the clock after such transitions (see e.g. Figure 3C). Our approach makes it easy to add new data, at the price of ignoring previous knowledge (e.g.
, clock period) from reporters that are not represented in the model. Accordingly, our primary modelling goal was not to reproduce the correct periods of different clock mutants, but rather to capture the profiles of mRNA and protein curves, and the changes in amplitude and profile between mutants and different light conditions. Compiling a large amount of data from different sources has allowed us to see patterns in expression profiles that were not apparent without independent replication. For example, the TOC1 mRNA profile shows a secondary peak during the night in many data sets (see examples in Figure 4B). All collected time course data were used in fitting the parameters. To validate the model, we instead used independently obtained period data from clock period mutants. The results are shown in Text S1. In brief, most predictions in LL are in good agreement with experiments, with the exception of elf4, where the period changes in the wrong direction. To experimentally measure a specific parameter value, such as the nuclear translocation rate of a protein, is exceptionally challenging. Hence, constraining a model with measured parameters can introduce large uncertainties in the model predictions, especially when the understanding of the full system is incomplete. Fitting the model with free parameters can instead give a large spread in individual parameter values, but result in a set of models that make well-constrained predictions. For this reason, we have based our results on an ensemble of independently optimized parameter sets, as recommended by Gutenkunst et al.
44. At the cost of computational time, this approach gives a more accurate picture of the uncertainties in the model and its predictions, rather than focusing on individual parameter values. Based on our experience of curating time course data, we offer some suggestions for how data can be compiled and treated to be more useful to modellers. These points arose in the context of the circadian clock, but they apply to experiments that are to be used for modelling in a broader context. Two of these suggestions concern the preservation of information about the relative expression levels between experiments. One example of the value of such information comes from the dramatic reduction in PRR9 expression in cca1;lhy (Figure 4F). As implied in the section on PRR9 activation in Results, clock models ought to be able to explain both the shape and level of expression curves in such mutant experiments, but this is only possible if that information is present in the data. Based on the current knowledge of the clock, most clock components are exclusively or primarily repressive, and RVE8 sets itself apart by functioning mainly (or solely) as an activator. According to our model, RVE8 has only a marginal effect on the expression of TOC1, but activates PRR5 and other genes more strongly, in agreement with earlier interpretations of the experimental data 29. We note that all six targets of RVE8 in the model (PRR9, PRR5, TOC1, GI, LUX and ELF4) are also binding targets of TOC1 14. This may be a coincidence, because TOC1 is a repressor of a majority of the genes in the model. It is conceivable, however, that activation by RVE8 around noon is gated by TOC1 to confer sensitivity to the timing of RVE8 relative to TOC1 in a controlled fashion. We were surprised by the ease with which we could remove RVE8 from the model. After reoptimization of the parameters, the cost was decreased in three of the eight parameter sets compared with the original model (Figure S3). Thus, the clock is not dependent on activation for its function (although it should be noted that the model without RVE8 lost the ability to explain any RVE8-related experiments). This result indicates that the model possesses a high degree of flexibility, whereby the remaining components and parameters are able to adjust and restore the behaviour of the system. Such flexibility challenges our ability to test hypotheses about individual interactions in the model, but we argue that predictions can also be made based on entropy. Even if an alteration to the model, such as the addition of RVE8, does not result in a significant change in the cost function, it may open up new parts of the high-dimensional parameter space. If, following local optimization, most parameter sets indicate that a certain interaction is activating, we may conclude that the activation is likely to be true. The parameter space is sampled in accordance with the prior belief that the model should roughly minimize the cost function, and the same reasoning motivates the use of an ensemble of parameter sets to explore the model. The conclusion about activation is indeed strengthened by the use of multiple parameter sets, because we learn whether it is valid in different areas of the parameter space. Our model agrees with a majority of the compiled data sets, but like earlier models it also fails to fit the data for some mutants. This indicates that important clock components or interactions may yet be unknown or misinterpreted. We here give a few examples. | Introduction, Results, Methods, Discussion | Circadian clocks are biological timekeepers that allow living cells to time their activity in anticipation of predictable daily changes in light and other environmental factors. The complexity of the circadian clock in higher plants makes it difficult to understand the role of individual genes or molecular interactions, and mathematical
modelling has been useful in guiding clock research in model organisms such as Arabidopsis thaliana. We present a model of the circadian clock in Arabidopsis, based on a large corpus of published time course data. It appears from experimental evidence in the literature that most interactions in the clock are repressive. Hence, we remove all transcriptional activation found in previous models of this system, and instead extend the system by including two new components, the morning-expressed activator RVE8 and the nightly repressor/activator NOX. Our modelling results demonstrate that the clock does not need a large number of activators in order to reproduce the observed gene expression patterns. For example, the sequential expression of the PRR genes does not require the genes to be connected as a series of activators. In the presented model, transcriptional activation is exclusively the task of RVE8. Predictions of how strongly RVE8 affects its targets are found to agree with earlier interpretations of the experimental data, but generally we find that the many negative feedbacks in the system should discourage intuitive interpretations of mutant phenotypes. The dynamics of the clock are difficult to predict without mathematical modelling, and the clock is better viewed as a tangled web than as a series of loops.
| Like most living organisms, plants are dependent on sunlight, and evolution has endowed them with an internal clock by which they can predict sunrise and sunset. The clock consists of many genes that control each other in a complex network, leading to daily oscillations in protein levels. The interactions between genes can be positive or negative, causing target genes to be turned on or off. By constructing mathematical models that incorporate our knowledge of this network, we can interpret experimental data by comparing with results from the models. Any discrepancy between experimental data and model predictions will highlight where we are lacking in understanding. We compiled more than 800 sets of measured data from published articles about the clock in the model organism thale cress (Arabidopsis thaliana). Using these data, we constructed a mathematical model which compares favourably with previous models for simulating the clock. We used our model to investigate the role of positive interactions between genes, whether they are necessary for the function of the clock and if they can be identified in the model. | systems biology, physiological processes, computer and information sciences, network analysis, physiology, chronobiology, biology and life sciences, regulatory networks, computational biology, computerized simulations | null |
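The two-part cost function described in the Methods text above (eqs. 1–4: a unit-independent profile term plus a log-space scaling term for groups of time courses that share a normalization) can be sketched in code. This is a minimal illustration only, not the authors' implementation (which is a C++ program using the GNU Scientific Library); the function names are our own, and `alpha` stands in for the balance parameter whose value is not preserved in this text.

```python
import numpy as np

def profile_cost(sim, data, weight=1.0):
    """Profile term: weighted mean squared error between a simulated time
    course and experimental data, with each curve normalized to its own
    mean so the cost is independent of measurement units."""
    sim = np.asarray(sim, dtype=float)
    data = np.asarray(data, dtype=float)
    return weight * np.mean((sim / sim.mean() - data / data.mean()) ** 2)

def scaling_cost(sim_group, data_group):
    """Scaling term for a group of time courses sharing one normalization:
    penalizes non-uniform scaling between simulation and data, computed in
    log-space so fold changes up or down from the mean are penalized equally."""
    ratios = [np.mean(s) / np.mean(d) for s, d in zip(sim_group, data_group)]
    log_ratios = np.log(ratios)
    return np.mean((log_ratios - log_ratios.mean()) ** 2)

def total_cost(sim_courses, data_courses, groups, alpha=1.0):
    """Total cost: profile terms for every time course plus alpha-weighted
    scaling terms for each normalization group (lists of course indices)."""
    cost = sum(profile_cost(s, d) for s, d in zip(sim_courses, data_courses))
    for group in groups:
        cost += alpha * scaling_cost([sim_courses[i] for i in group],
                                     [data_courses[i] for i in group])
    return cost
```

Normalizing each curve by its own mean in the profile term makes it insensitive to units, as the text notes for eq. (1), while the log-space scaling term penalizes only relative level differences within a group, matching the stated role of eq. (3).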
521 | journal.pcbi.1006080 | 2018 | Bamgineer: Introduction of simulated allele-specific copy number variants into exome and targeted sequence data sets | The emergence and maturation of next-generation sequencing technologies, including whole genome sequencing, whole exome sequencing, and targeted sequencing approaches, has enabled researchers to perform increasingly complex analyses of copy number variants (CNVs) 1. While genome sequencing-based methods have long been used for CNV detection, these methods can be confounded when applied to exome and targeted sequencing data due to the non-contiguous and highly variable nature of coverage and other biases introduced during enrichment of target regions 1–5. In cancer, this analysis is further challenged by bulk tumor samples that often yield nucleic acids of variable quality and are composed of a mixture of cell types, including normal stromal cells, infiltrating immune cells, and subclonal cancer cell populations. Circulating tumor DNA presents further challenges due to a multimodal DNA fragment size distribution and low amounts of tumor-derived DNA in blood plasma. Therefore, development of CNV calling methods on arbitrary sets of tumor-derived data from public repositories may not reflect the type of tumor specimens encountered at an individual centre, particularly formalin-fixed paraffin-embedded tissues routinely profiled for diagnostic testing. Due to the lack of a ground truth for validating CNV callers, many studies have used simulation to model tumor data 6. Most often, simulation studies are used in an ad-hoc manner using customized formats to validate specific tools and settings, with limited adaptability to other tools. More generalizable approaches aim at the de novo generation of sequencing reads according to a reference genome (e.g.
wessim 3, Art-illumina 7, and dwgsim 8). However, de novo simulated reads do not necessarily capture subtle features of empirical data, such as read coverage distribution, DNA fragment insert size, quality scores, error rates, strand bias and GC content 6; factors that can be more variable for exome and targeted sequencing data, particularly when derived from clinical specimens. Recently, Ewing et al. developed a tool, BAMSurgeon, to introduce synthetic mutations into existing reads in a Binary Alignment Mapping (BAM) file 9. BAMSurgeon provides support for adjusting variant allele fractions (VAF) of engineered mutations based on prior knowledge of overlapping CNVs, but does not currently support direct simulation of CNVs themselves. Here we introduce Bamgineer, a tool to modify existing BAM files to precisely model allele-specific and haplotype-phased CNVs (Fig 1). This is done by introducing new read pairs sampled from existing reads, thereby retaining biases of the original data such as local coverage, strand bias, and insert size. As input, Bamgineer requires a BAM file and a list of non-overlapping genomic coordinates to introduce allele-specific gains and losses. The user may explicitly provide known haplotypes or choose to use the BEAGLE 10 phasing module that we have incorporated within Bamgineer. We implemented parallelization of the Bamgineer algorithm for both standalone and high-performance computing cluster environments, significantly improving the scalability of the algorithm. Overall, Bamgineer gives investigators complete control to introduce CNVs of arbitrary size, magnitude, and haplotype into an existing reference BAM file. We have uploaded all software code to a public repository (http://github.com/pughlab/bamgineer). For all proof-of-principle experiments, we used exome sequencing data from a single normal (peripheral blood lymphocyte) DNA sample. DNA was captured using the Agilent SureSelect Exome v5+UTR kit and sequenced to 220X median coverage as part of a study of neuroendocrine tumors. Reads were aligned to the hg19 build of the human genome reference sequence and processed using the Genome Analysis Toolkit (GATK) Best Practices pipeline. Following the validation of our tool for readily detected chromosome- and arm-level events, we next used Bamgineer to simulate CNV profiles mimicking 3 exemplar tumors from each of 10 different cancer types profiled by The Cancer Genome Atlas using the Affymetrix SNP6 microarray platform: lung adenocarcinoma (LUAD); lung squamous cell (LUSC); head and neck squamous cell carcinoma (HNSC); glioblastoma multiforme (GBM); kidney renal cell carcinoma (KIRC); bladder (BLCA); colorectal (CRC); uterine cervix (UCEC); ovarian (OV), and breast (BRCA) cancers (Table 1). To select 3 exemplar tumors for each cancer type, we chose profiles that best represented the copy number landscape of each cancer type. First, we addressed over-segmentation of the CNV calls from the microarray data by merging segments of <500 kb in size with the closest adjacent segment and removing the smaller event from overlapping gain and loss regions. We then assigned a score to each tumor that reflects its similarity to other tumors of the same cancer type (S7 Fig). This score integrates the total number of CNV gains and losses (Methods, Eq 6), the median size of each gain and loss, and the overlap of CNV regions with GISTIC peaks for each cancer type as reported by The Cancer Genome Atlas (Table 1). We selected three high-ranking tumors for each cancer type such that, together, all significant GISTIC 15 peaks for that tumor type were represented. A representative profile from a single
tumor is shown in Fig 2C. Subsequently, for each of the 30 selected tumor profiles (3 for each of 10 cancer types), we introduced the corresponding CNVs at 5 levels of tumor cellularity (20, 40, 60, 80, and 100%), resulting in 150 BAM files in total. For each BAM file, we used Sequenza to generate allele-specific copy number calls as done previously. Tumor/normal log2 ratios are shown in Fig 3 for one representative from each cancer type. From this large set of tumors, we next set out to compare Picard metrics and CNV calls as we did for the arm- and chromosome-level pilot. We evaluated Bamgineer using several metrics: tumor allelic ratio, SNP phasing consistency, and tumor-to-normal log2 ratios (Fig 4). As expected, across all regions of a single copy gain, the tumor allelic ratio was ~0.66 (interquartile range: 0.62–0.7) for the targeted haplotype and 0.33 (interquartile range: 0.3–0.36) for the other haplotype. As purity was decreased, we observed a corresponding decrease in allelic ratios, from 0.66 down to 0.54 (interquartile range: 0.5–0.57) for the targeted haplotype, and an increase (from 0.33) to 0.47 (interquartile range: 0.43–0.5) for the other haplotype at 20% purity (Fig 4A and 4B). These changes correlated directly with decreasing purity (R2 > 0.99) for both haplotypes. Similarly, for single copy loss regions, as purity was decreased from 100% to 20%, the allelic ratio linearly decreased (R2 > 0.99) from ~0.99 (interquartile range: 0.98–1.0) to ~0.55 (interquartile range: 0.51–0.58) for the targeted haplotype, and increased from 0 to ~0.43 (interquartile range: 0.4–0.46) for the other haplotype (Fig 4B). The results for log2 tumor-to-normal depth ratios of segments normalized for average ploidy were also consistent with the expected values (Methods, Eq 2). For CNV gain regions, the log2 ratio decreased from ~0.58 (log2 of 3/2) to ~0.13 as purity was decreased from 100% to 20%. For CNV loss regions, as purity was decreased from 100% to 20%, the log2 ratio increased from -1 (log2 of 1/2) to -0.15, consistent with Eq 2 (Fig 4C; S1–S4 Figs for individual cancers). Ultimately, we wanted to assess whether Bamgineer was introducing callable CNVs consistent with segments corresponding to the exemplar tumor set. To assess this, we calculated an accuracy metric (Fig 4D) as: accuracy = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FP and FN represent the number of calls from Sequenza corresponding to true positives (perfect matches to desired CNVs), true negatives (regions without CNVs introduced), false positives (CNV calls outside of target regions) and false negatives (target regions without CNVs called). TP, TN, FP and FN were calculated by comparing Sequenza absolute copy number (predicted) to the target regions for introduction of 1 Mb CNV bins across the genome. As tumor content decreased, accuracy for both gains and losses decreased, as false negatives became increasingly prevalent due to small shifts in log2 ratios. We note that (as expected), decreasing cancer purity from 100% to 20% generally decreases the segmentation accuracy. Additionally, we observe that segmentation accuracy is, on average, significantly higher for gain regions than for loss regions at tumor purity levels below 40% (Fig 4D). This is consistent with previous studies showing that the sensitivity of CNV detection from sequencing data is slightly higher for CNV gains than for CNV losses 16. We also note that with decreasing cancer purity, the decline in segmentation accuracy follows a linear pattern for gain regions and an abrupt stepwise decline for loss regions (Fig 4D; segmentation accuracies are approximately similar for 40% and 20% tumor purities). Finally, we observed a degree of variation in segmentation accuracy across individual
cancer types (S1–S4 Figs). Segmentation accuracy was lower for LUAD, OV and UCEC compared to the other simulated cancer types in this study. The relative decline in performance is seen in cancer types where CNV gains and losses cover a sizeable portion of the genome; hence, the original loss and gain events sampled from TCGA had significant overlaps. As a result, after resolving overlapping gain and loss regions (S7 Fig), the final target regions constitute, on average, a larger number of small (<200 kb) loss regions immediately followed by gain regions and vice versa, making accurate segmentation challenging for the CBS (circular binary segmentation) algorithm implemented by Sequenza, which relies on the presence of heterozygous SNPs. This can cause uncertainties in the assignment of segment boundaries. In summary, application of an allele-specific caller to BAMs generated by Bamgineer recapitulated CNV segments consistent with >95% (medians: 95.1% for losses and 97.2% for gains) of those input to the algorithm. However, we note some discrepancies between the expected and called events, primarily due to small CNVs as well as large segments of unprobed genome between exonic sequences. To evaluate the use of Bamgineer for circulating tumor DNA analysis, we simulated the presence of an EGFR gene amplification in read alignments from a targeted 5-gene panel (18 kb) applied to cell-free DNA from a healthy donor and sequenced to >50,000X coverage. To mirror concentrations of tumor-derived fragments commonly encountered in cell-free DNA 17, 18, we introduced a gain of an EGFR haplotype at frequencies of 100, 10, 1, 0.1, and 0.01%. This haplotype included 3 SNPs covered by our panel, which were phased and subject to allele-specific gain accordingly. As with the exome data, we observed shifts in coverage of specific allelic variants and haplotype representation consistent with the targeted allele frequencies (Fig 5A, Supplemental S1 Table). Furthermore, read pairs introduced to simulate gene amplification retain the bimodal insert size distribution characteristic of cell-free DNA fragments (Fig 5B and 5C). While this experiment showcases the ability of Bamgineer to faithfully represent features of the original sequencing data while controlling allelic amplification at the level of individual reads, these subtle shifts are currently beyond the sensitivity of conventional CNV callers when applied to small, deeply covered gene panels. Therefore, it is our hope that Bamgineer may be of value to aid development of new methods capable of detecting copy number variants supported by a small minority of DNA fragments in a specimen. Bamgineer is computationally intensive, and the runtime of the program is dictated by the number of reads that must be processed, a function of the coverage of the genomic footprint of target regions. To ameliorate the computational intensiveness of the algorithm, we employed a parallelized computing framework to maximize use of a high-performance compute cluster environment when available. We took advantage of two features in designing the parallelization module. First, we required that added CNVs are independent for each chromosome (although nested events can likely be engineered through serial application of Bamgineer). Second, since we did not model interchromosomal CNV events, each chromosome can be processed independently. As such, CNV regions for each chromosome can be processed in parallel and aggregated as a final step. S8 Fig shows the runtimes for The Cancer Genome Atlas simulation experiments. Using a single node
with 12 cores and 128 GB of RAM , each synthetic BAM took less than 3 . 5 hours to generate ., We also developed a version of Bamgineer that can be launched from Sun Grid Engine cluster environments ., It uses the Python pipeline management package ruffus to parallelize tasks automatically and log runtime events ., It is highly modular and easily updatable ., If disrupted during a run , the pipeline can continue to completion without re-running previously completed intermediate steps ., Here , we introduced Bamgineer , a tool to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping ( BAM ) file , obtained from exome and targeted sequencing experiments ., As proof of principle , we generated , from a single high coverage ( mean: 220X ) BAM file derived from a human blood sample , a series of 30 new BAM files containing a total of 1 , 693 simulated copy number variants ( on average , 56 CNVs comprising 1800Mb i . e . ~55% of the genome per tumor ) corresponding to profiles from exemplar tumors for each of 10 cancer types ., To demonstrate quantitative introduction of CNVs , we further simulated 4 levels of tumor cellularity ( 20 , 40 , 60 , 80% purity ) resulting in an additional 120 new tumor BAM files ., We validated our approach by comparing CNV calls and inferred purity values generated by an allele-specific CNV-caller ( Sequenza 14 ) as well as a focused comparison of allelic variant ratios , haplotype-phasing consistency , and tumor/normal log2 ratios for inferred CNV segments ( S1–S4 Figs ) ., In every case , inferred purity values were within ±5% of the targeted purity; and the majority of engineered CNV regions were correctly called by Sequenza ( accuracy > 94%; S1–S4 Figs ) ., Allele variant ratios were also consistent with the expected values for both the targeted and the other haplotypes ( Median within ±3% of expected value ) ., Median tumor/normal log2 ratios were within ±5% of the expected values ., To
demonstrate feasibility beyond exome data , we next evaluated these same metrics in a targeted 5-gene panel applied to a cell-free DNA sequencing library generated from a healthy blood donor and sequenced to >10 , 000X coverage 17 ., To simulate concentrations of tumor-derived fragments typically encountered in cancer patients , we introduced EGFR amplifications at frequencies of 100 , 10 , 1 , 0 . 1 , and 0 . 01% ., As with the exome data , we observed highly specific shifts in allele variant ratios , log2 coverage ratios , and haplotype representation consistent with the targeted allele frequencies ., Our method also retained the bimodal DNA insert size distribution observed in the original read alignment ., However , it is worth noting that these minute shifts are currently beyond the sensitivity of existing CNV callers when applied to small , deeply covered gene panels ., Consequently , we anticipate that Bamgineer may be of value to aid development of new methods capable of detecting copy number variants supported by a small minority of DNA fragments ., In the experiments conducted in this study , we limited ourselves to autosomes and to a maximum total copy number of 4 ., Naturally , Bamgineer can readily simulate higher-level copy number states and alter sex chromosomes as well ( S10 Fig ) ., While chromosome X in diploid state ( e . g . XX in normal female ) is treated identically to autosomes , for both X and Y chromosomes beginning in a haploid state ( e . g . XY in normal male ) , the haplotype phasing step is skipped and Bamgineer samples all reads on these chromosomes independently ., For high-level amplifications , the ability of Bamgineer to faithfully retain the features of the input BAM file ( e . g .
DNA fragment insert size , quality scores and so on ) , depends on intrinsic factors such as the length of the desired CNV , mean depth of coverage and fragment length distribution of the original input BAM file ( see Materials and Methods ) ., The significance of this work in the context of CNV inference in cancer is twofold: , 1 ) users can simulate CNVs using their own locally-generated alignments so as to reflect lab- , biospecimen- , or pipeline-specific features; , 2 ) bioinformatic methods development can be better supported by ground-truth sequencing data reflecting CNVs without reliance on generated test data from suboptimal tissue or plasma specimens ., Bamgineer addresses both problems by creating files in a standardized sequencing alignment format ( BAM files ) harbouring user-defined CNVs that can readily be used for algorithm optimization , benchmarking and other purposes ., We expect our approach to be applicable for tuning algorithms for detection of subtle CNV signals such as somatic mosaicism or circulating tumor DNA ., As these subtle shifts are beyond the sensitivity of many CNV callers , we expect our tool to be of value for the development of new methods for detecting such events trained on conventional DNA sequencing data ., By providing the ability to create customized user-generated reference data , Bamgineer will prove valuable in development and benchmarking of CNV calling and other sequence data analysis tools and pipelines ., The work presented herein can be extended in several directions ., First , Bamgineer is not able to reliably perform interchromosomal operations such as chromosomal translocation , as our focus has been on discrete regions probed by exome and targeted panels ., Second , while Bamgineer is readily applicable to whole genome sequence data , sufficient numbers of reads are required for re-pairing when introducing high-level amplifications ., As such , shallow ( 0 .
1-1X ) or conventional ( ~30X ) whole genome sequence data may only be amenable to introduction of arm-level alterations as smaller , focal targets may not contain sufficient numbers of reads to draw from to simulate high-level amplifications ., Additionally , in our current implementation , we limited the simulated copy numbers to non-overlapping regions ., Certainly , such overlapping CNV regions occur in cancer and iterative application of Bamgineer may enable introduction of complex , nested events ., Finally , introduction of compound , serially acquired CNVs may be of interest to model subclonal phylogeny developed over time in bulk tumor tissue samples ., The user provides 2 mandatory inputs to Bamgineer as command-line arguments: 1 ) a BAM file containing aligned paired-end sequencing reads ( “Normal . bam” ) , 2 ) a BED file containing the genome coordinates and type of CNVs ( e . g . allele-specific gain ) to introduce ( “CNV regions . bed” ) ., Bamgineer can be used to add four broad categories of CNVs: Balanced Copy Number Gain ( BCNG ) , Allele-specific Copy Number Gain ( ASCNG ) , Allele-specific Copy Number Loss ( ACNL ) , and Homozygous Deletion ( HD ) ., For example , consider a genotype AB at a genomic locus where A represents the major and B represents the minor allele ., Bamgineer can be applied to convert that genomic locus to any of the following copy number states: , {A , B , ABB , AAB , AABB , AAAB , ABBB , …} , An optional VCF file containing phased germline calls can be provided ( phased_het . vcf ) ., If this file is not provided , Bamgineer will call germline heterozygous single nucleotide polymorphisms ( SNPs ) using the GATK HaplotypeCaller and then categorize alleles likely to be co-located on the same haplotypes using BEAGLE and population reference data from the HapMap project ., To obtain paired reads in CNV regions of interest , we first intersect Normal .
bam with the targeted regions overlapping user-defined CNV regions ( roi . bed ) ., This operation generates a new BAM file ( roi . bam ) ., Subsequently , depending on whether the CNV event is a gain or loss , the algorithm performs two separate steps as follows ., To introduce copy number gains , Bamgineer creates new read-pairs constructed from existing reads within each region of interest ., This approach thereby avoids introducing pairs that many tools would flag as molecular duplicates due to read 1 and read 2 having start and end positions identical to an existing pair ., If desired , these read pairs can be restricted to reads meeting a specific SAM flag ., For our exome experiments , we used read pairs with a SAM flag equal to 99 , 147 , 83 , or 163 , i . e . read paired , read mapped in proper pair , mate reverse ( forward ) strand , and first ( second ) in pair ., To enable support for the bimodal distribution of DNA fragment sizes in ctDNA , we removed the requirement for “read mapped in proper pair” and used read pairs with a SAM flag equal to 97 , 145 , 81 , or 161 ., Users considering engineering of reads supporting large inserts or intrachromosomal read pairs may also want to remove the requirement for “read mapped in proper pair” ., Additionally , we required that the selection of the newly paired read is within ±50% ( ±20% for ctDNA ) of the original read size ., The newly created read-pairs are provided unique read names to avoid confusion with the original input BAM file ., To enable inspection of these reads , these newly created read pairs are stored in a new BAM file , gain_re_paired_renamed . bam , prior to merging into the final engineered BAM ., Since we only consider high quality reads ( i . e .
properly paired reads , primary alignments and mapping quality > 30 ) , the newly created BAM file contains fewer reads compared to the input file ( ~90–95% in our proof-of-principle experiments ) ., As such , at every transition we log the ratio of the number of reads between the input and output files ., High-level copy number amplification ( ASCN > = 4 ) ., To achieve copy number amplifications higher than 4 , during the read/mate pairing step , we pair each read with more than one mate read ( Fig 1 ) to generate more new reads ( to accommodate the desired copy number state ) ., However , since , as stated , a small portion of newly created paired reads do not meet the inclusion criteria , we aim to create more reads than necessary in the initial phase and use sampling to adjust them in a later phase ., For instance , to simulate copy number of 6 , in theory we need to create two new read pairs for every input read ., Hence , in the initial “re-pairing” step we aim to create four paired reads per read ( instead of 3 ) , so that the newly created BAM file includes a sufficient number of reads ( as a rule of thumb , we use a read-pairing window size ~20% higher than the theoretical value ) ., It should be noted that the maximum copy number amplification that can faithfully retain the features of the input BAM file ( e . g . DNA fragment insert size , quality scores and so on ) , depends on intrinsic factors such as the length of the desired CNV , mean depth of coverage and fragment length distribution of the original input BAM file ., Introduction of mutations according to haplotype state ., To ensure newly constructed read-pairs match the desired haplotype , we alter the base at heterozygous SNP locations ( phased_het . vcf ) within each read according to the haplotype provided by the user or inferred using the BEAGLE algorithm ., To achieve this , we iterate through the set of re-paired reads used to increase coverage ( gain_re_paired_renamed .
bam ) and modify bases overlapping SNPs corresponding to the target haplotype ( phased_het . vcf ) ., We then write these reads to a new BAM file ( gain_re_paired_renamed_mutated . bam ) prior to merging into the final engineered BAM ( S9 Fig ) ., As an illustrative example consider two heterozygous SNPs , AB and CD both with allele frequencies of ~0 . 5 in the original BAM file ( i . e . approximately half of the reads supporting reference bases and the other half supporting alternate bases ) ., To introduce a 2-copy gain of a single haplotype , reads to be introduced must match the desired haplotype rather than the two haplotypes found in the original data ., If heterozygous AB and CD are both located on a haplotype comprised of alternative alleles , at the end of this step , 100% of the newly re-paired reads will support alternate base-pairs ( e . g . BB and DD ) ., Based on the haplotype structure provided , other haplotype combinations are possible including AA/DD , BB/CC , etc ., Sampling of reads to reflect desired allele fraction ., Depending on the absolute copy number desired for the CNV gain regions , we sample the BAM files according to the desired copy number state ., We define the conversion coefficient as the ratio of total reads in the created BAM from the previous step ( gain_repaired_mutated . bam ) to the total reads extracted from the original input file ( roi . bam ) : , ρ = ( no . of reads in gain_re_paired_mutated . bam ) / ( no . of reads in roi . bam ) , ( 1 ) , According to the maximum number of absolute copy number ( ACN ) for simulated CNV gain regions ( defined by the user ) , two scenarios are conceivable as follows ., Copy number gain example ., For instance , to achieve a single copy gain ( ACN = 3 , e . g . ABB copy state ) , the file in the previous step ( gain_re_paired_renamed_mutated . bam ) , should be sub-sampled such that the average depth of coverage is half that of extracted reads from the target regions from the original input normal file ( roi .
bam ) ., Thus , the final sampling rate is calculated by dividing ½ ( 0 . 5 ) by ρ ( subsample gain_re_paired_renamed_mutated . bam such that we have half of the roi . bam depth of coverage for the region; in practice adjusted sampling rate is in the range of 0 . 51–0 . 59 i . e . 0 . 85 < ρ < 1 for CN = 3 ) and the new reads are written to a new BAM file ( gain_re_paired_renamed_mutated_sampled . bam ) that we then merge with the original reads ( roi . bam ) to obtain gain_final . bam ., Similarly to obtain three copy number gain ( ACN = 5 ) and the desired genotype ABBBB , the gain_re_paired_renamed_mutated . bam is subsampled such that depth of coverage is 3/2 ( 1 . 5 ) that of extracted reads from the target regions from the original input normal file ( note that as explained during the new paired-read generation step , we have already created more reads than needed ) ., To introduce CNV losses , Bamgineer removes reads from the original BAM corresponding to a specific haplotype and does not create new read pairs from existing ones ., To diminish coverage in regions of simulated copy number loss , we sub-sample the BAM files according to the desired copy number state and write these to a new file ., The conversion coefficient is defined similarly as the number of reads in loss_mutated . bam divided by number of reads in roi_loss . bam ( > ~0 . 98 ) ., Similar to CNV gains , the sampling rate is adjusted such that after the sampling , the average depth of coverage is half that of extracted reads from the target regions ( calculated by dividing 0 . 5 by conversion ratio , as the absolute copy number is 1 for loss regions ) ., Finally , we subtract the reads in CNV loss BAMs from the input . bam ( or input_sampled . bam ) and merge the results with CNV gain BAM ( gain_final . 
bam ) to obtain the final output BAM file harbouring the desired copy number events ., To validate that the new paired-reads generated from the original BAM files show a similar probability distribution , we used the two-sided Kolmogorov–Smirnov ( KS ) test ., The critical D-values were calculated for α = 0 . 01 as follows: , Dα = c ( α ) √ ( ( n1 + n2 ) / ( n1 n2 ) ) , ( 2 ) , where coefficient c ( α ) is obtained from the table of critical values for the KS test ( https://www . webdepot . umontreal . ca/Usagers/angers/MonDepotPublic/STT3500H10/Critical_KS . pdf; 1 . 63 for α = 0 . 01 ) and n1 and n2 are the number of samples in each dataset ., To assess tumor allelic ratio consistency , for each SNP the theoretical allele frequency parameter was used as a reference point ( Eq 3 ) ., Median , interquartile range and mean were drawn from the observed values for each haplotype-event pair for all the SNPs ., The boxplot distribution of the allele frequencies was plotted and compared against the theoretical reference point ., To assess the segmentation accuracy , we used log2 tumor to normal depth ratios of segments normalized for mean ploidy as the metric , where the mean ploidy is defined below ( Eqs 4 and 5 ) ., To benchmark the performance of segmentation , we used accuracy as the metric ., Statistical analysis was performed with the functions in the R statistical computing package using RStudio ., Theoretical expected values ., The expected value for tumor allelic frequencies at heterozygous SNP loci for tumor purity level of p ( 1-p: normal contamination ) is calculated as follows: , AF ( snp ) = ( p AFt cnt + ( 1−p ) AFn cnn ) / ( p cnt + ( 1−p ) cnn ) , ( 3 ) , where AFt and AFn represent the expected allele frequencies for tumor and normal and cnt and cnn the expected copy number for tumor and normal at specific SNP loci ., For the CNV events used in this experiment , AFt is ( 1/3 or 2/3 ) for gain and ( 1 or 0 ) for loss CNVs according to the haplotype information ( whether or not they are located on the haplotype that is
affected by each CNV ) ., The expected value for the average ploidy ( ∅^ ) is calculated as follows , ∅^ = ( 1/W ) ( ∑i=1n cng wgi + ∑j=1m cnl wlj + cnn ( W−G−L ) ) , ( 4 ) , where cng , cnl , cnn , wg and wl represent the expected ploidy for gain , loss and normal regions , and the length of individual gain and loss events respectively ., G , L , and W represent the total length ( in base pairs ) of gain regions , loss regions , and the entire genome ( ~ 3e9 ) ., The expected log2ratio for each segment is calculated as follows , log2ratio ( seg ) = log2 ( ( p × cnseg + ( 1−p ) × cnn ) / ∅^ ) , ( 5 ) , where cnseg is the segment mean from Sequenza output , p is the tumor purity and ∅^ is the average ploidy calculated above ., cnn is the copy number of the copy neutral region ( i . e . 2 ) ., Similarity score to rank TCGA tumors ., The similarity score for a specific cancer type ( c ) and a sampled tumor ( t ) is calculated as follows: , S ( c , t ) = 1 / ( |2gt−Gc−Go| + |2lt−Lc−Lo| + ϵ ) , ( 6 ) , where gt , Gc , Go represent the total number of gains for a specific tumor sampled from The Cancer Genome Atlas ( after merging adjacent regions and removing overlapping regions ) , the median number of gains for a specific tumor type , and the number of gain events overlapping with GISTIC peaks respectively; lt , Lc , Lo represent the above quantities for CNV loss regions ( ϵ is an arbitrary small positive value to avoid a zero denominator ) ., The higher the score , the closer the sampled tumor is to an exemplar tumor from a specific cancer type .
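The expected-value formulas above ( Eqs 3–6 ) can be written compactly in code ., The following is an illustrative sketch , not the authors' R analysis code; function names are my own , and the equations are transcribed directly from the text:

```python
import math

def expected_af(p, af_t, af_n, cn_t, cn_n=2):
    """Eq 3: expected tumor allele frequency at a heterozygous SNP
    for purity p (1 - p is the normal contamination)."""
    return (p * af_t * cn_t + (1 - p) * af_n * cn_n) / (p * cn_t + (1 - p) * cn_n)

def average_ploidy(gains, losses, W, cn_n=2):
    """Eq 4: gains/losses are lists of (copy_number, length_bp) tuples;
    W is the total genome length (~3e9)."""
    G = sum(w for _, w in gains)
    L = sum(w for _, w in losses)
    total = sum(cn * w for cn, w in gains) + sum(cn * w for cn, w in losses)
    return (total + cn_n * (W - G - L)) / W

def expected_log2_ratio(p, cn_seg, ploidy, cn_n=2):
    """Eq 5: expected tumor/normal log2 ratio for a CNV segment."""
    return math.log2((p * cn_seg + (1 - p) * cn_n) / ploidy)

def similarity_score(g_t, G_c, G_o, l_t, L_c, L_o, eps=1e-6):
    """Eq 6: rank a sampled TCGA tumor against a cancer-type exemplar."""
    return 1.0 / (abs(2 * g_t - G_c - G_o) + abs(2 * l_t - L_c - L_o) + eps)

# Example: single-copy gain (cn_t = 3) of the alternate-allele haplotype
# (AF_t = 2/3) at purity 0.6, with AF_n = 0.5:
af = expected_af(p=0.6, af_t=2 / 3, af_n=0.5, cn_t=3)  # af ≈ 0.615
```

This follows the text's convention that the copy-neutral copy number cnn is 2 and that purity p weights the tumor contribution against ( 1−p ) normal contamination .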
| Introduction, Results, Discussion, Materials and methods | Somatic copy number variations ( CNVs ) play a crucial role in development of many human cancers ., The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data ., However , systematic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets ., To address this need , we have developed Bamgineer , a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping ( BAM ) file , with a focus on targeted and exome sequencing experiments ., As input , this tool requires a read alignment file ( BAM format ) , lists of non-overlapping genome coordinates for introduction of gains and losses ( bed file ) , and an optional file defining known haplotypes ( vcf format ) ., To improve runtime performance , Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster ., As proof-of-principle , we applied Bamgineer to a single high-coverage ( mean: 220X ) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels ( 20–100% , 150 BAM files in total ) ., To demonstrate feasibility beyond exome data , we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA ( 10 , 1 , 0 . 1 and 0 .
01% ) while retaining the multimodal insert size distribution of the original data ., We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications ., The source code is freely available at http://github . com/pughlab/bamgineer . | We present Bamgineer , a software program to introduce user-defined , haplotype-specific copy number variants ( CNVs ) at any frequency into standard Binary Alignment Mapping ( BAM ) files ., Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest ., This approach retains biases of the original data such as local coverage , strand bias , and insert size ., Deletions are simulated by removing reads corresponding to one or both haplotypes ., In our proof-of-principle study , we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples ., We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data ., Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types . | sequencing techniques, alleles, genetic mapping, genome analysis, copy number variation, molecular genetics, molecular biology techniques, research and analysis methods, sequence analysis, genome complexity, sequence alignment, bioinformatics, molecular biology, genetic loci, haplotypes, dna sequence analysis, heredity, database and informatics methods, genetics, biology and life sciences, genomics, dna sequencing, computational biology | null |
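The gain-sampling arithmetic described in the Methods above ( the conversion coefficient ρ of Eq 1 and the ACN-dependent subsampling rate ) can be sketched as follows ., This is a minimal illustrative sketch with hypothetical helper names , not code from the Bamgineer repository:

```python
def conversion_coefficient(n_repaired, n_roi):
    """Eq 1: reads in gain_re_paired_mutated.bam / reads in roi.bam.
    Slightly below 1 because some re-paired reads fail quality filters."""
    return n_repaired / n_roi

def gain_sampling_rate(acn, rho):
    """Fraction of re-paired reads to keep so that the added coverage is
    (ACN - 2)/2 of the original diploid region coverage, e.g. 1/2 for
    ACN = 3 (ABB) and 3/2 for ACN = 5 (ABBBB), as stated in the text."""
    target_fraction = (acn - 2) / 2.0
    return target_fraction / rho

# Example: single-copy gain (ACN = 3) with rho = 0.92, i.e. ~8% of the
# newly re-paired reads were dropped by quality filtering:
rate = gain_sampling_rate(3, conversion_coefficient(920, 1000))
# rate ≈ 0.543, inside the 0.51-0.59 range the Methods report for ACN = 3
```

The division by ρ compensates for reads lost during re-pairing , so the subsampled file still contributes exactly the coverage needed for the target absolute copy number .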
2,225 | journal.pcbi.1006772 | 2,019 | A component overlapping attribute clustering (COAC) algorithm for single-cell RNA sequencing data analysis and potential pathobiological implications | Single cell ribonucleic acid sequencing ( scRNA-seq ) offers advantages for characterization of cell types and cell-cell heterogeneities by accounting for dynamic gene expression of each cell across biomedical disciplines , such as immunology and cancer research 1 , 2 ., Recent rapid technological advances have expanded considerably the single cell analysis community , such as The Human Cell Atlas ( THCA ) 3 ., The single cell sequencing technology offers high-resolution cell-specific gene expression for potentially unraveling of the mechanism of individual cells ., The THCA project aims to describe each human cell by the expression level of approximately 20 , 000 human protein-coding genes; however , the representation of each cell is high dimensional , and the human body has trillions of cells ., Furthermore , scRNA-seq technologies have suffered from several limitations , including low mean expression levels in most genes and higher frequencies of missing data than bulk sequencing technology 4 ., Development of novel computational technologies for routine analysis of scRNA-seq data are urgently needed for advancing precision medicine 5 ., Inferring gene-gene relationships ( e . g . 
, regulatory networks ) from large-scale scRNA-seq profiles remains limited ., Traditional approaches to gene co-expression network analysis are not suitable for scRNA-seq data due to a high degree of cell-cell variability ., For example , LEAP ( Lag-based Expression Association for Pseudotime-series ) is an R package for constructing gene co-expression networks using different time points at the single cell level 6 ., The partial information decomposition ( PID ) algorithm aims to predict gene-gene regulatory relationships 7 ., Although these computational approaches are designed to infer gene co-expression networks from scRNA-seq data , they suffer from low resolution at the single-cell or single-gene levels ., In this study , we introduced a network-based approach , termed Component Overlapping Attribute Clustering ( COAC ) , to infer novel gene-gene subnetworks in individual components ( the subset of whole components ) representing multiple cell types and cell phases of scRNA-seq data ., Each gene co-expression subnetwork represents the co-expression relationship occurring in certain cells ., The scoring function identifies co-expression networks by quantifying uncoordinated gene expression changes across the population of single cells ., We showed that gene subnetworks identified by COAC from scRNA-seq profiles were highly correlated with the survival rate of melanoma patients and drug responses in cancer cell lines , indicating a potential pathobiological application of COAC ., If broadly applied , COAC can offer a powerful tool for identifying gene-gene networks from large-scale scRNA-seq profiles in multiple diseases in the on-going development of precision medicine ., In this study , we present a novel algorithm for inferring gene-gene networks from scRNA-seq data ., Specifically , a gene-gene network represents the co-expression relationship of certain components ( genes ) , which indicates the localized ( cell subpopulation ) co-expression from large-scale
scRNA-seq profiles ( Fig 1 ) ., Specifically , each gene subnetwork is represented by one or multiple feature vectors , which are learned from the scRNA-seq profile of the training set ., For the test set , each gene expression profile can be transformed to a feature value by one or several feature vectors which measure the degree of coordination of gene co-expression ., Since the feature vectors are learned from the relative expression of each gene , batch effects can be eliminated by normalization of relatively co-expressed genes ( see Methods ) ., In addition to showing that COAC can be used for batch effect elimination , we further validated COAC by illustrating three potential pathobiological applications: ( 1 ) cell type identification in two large-scale human scRNA-seq datasets ( 43 , 099 and 43 , 745 cells respectively , see Methods ) ; ( 2 ) gene subnetworks identified from melanoma patients-derived scRNA-seq data showing high correlation with survival of melanoma patients from The Cancer Genome Atlas ( TCGA ) ; ( 3 ) gene subnetworks identified from scRNA-seq profiles which can be used to predict drug sensitivity/resistance in cancer cell lines ., We collected scRNA-seq data generated from 10x scRNA-seq protocol 7 , 8 ., In total , 14 , 032 cells extracted from peripheral blood mononuclear cells ( PBMC ) in systemic lupus erythematosus ( SLE ) patients were used as the case group and 29 , 067 cells were used as the control group ( see Methods ) ., For the case group , we used 12 , 277 cells for the training set and the remaining 1 , 755 cells for the validation set ., For the control group , we used 25 , 433 cells for the training set and 3 , 634 for the validation set ., After filtering with average correlation and average component ratio thresholds ( see Methods ) , we obtained 93 , 951 co-expression subnetworks ( gene clusters with components ) by COAC ., We transformed these co-expression gene clusters to feature vectors ., Features whose variance 
distribution was significantly different in the case group versus the control group were kept ( see Methods ) ., Using a t-SNE algorithm implemented in the R package-tsne 9 , we found that the single cells ( from the case group ) which were retrieved directly from the patients can be more robustly separated from the control group cells ( Fig 2B ) , comparing to the original data ( Fig 2A ) without applying COAC ., Thus , the t-SNE analysis reveals that batch effects can be significantly reduced by COAC ( Fig 2 ) ., We next turned to examine whether COAC can be used for cell type identification ., We collected a scRNA-seq dataset of 14 , 448 single cells in an IFN-β stimulated group and 14 , 621 single cells in the control group 8 ., To remove factors caused by the stimulation conditions or experimental batch effects , we selected 13 , 003 cells in the IFN-β stimulated group and 13 , 158 cells in the control group as the training set to obtain homogeneous feature vectors for each cell ., The remaining scRNA-seq data are used as the validation set ., We generated the gene subnetworks by COAC and transformed the subnetworks into feature vectors for individual cells ( see Methods ) ., We found that cells from IFN-β stimulated and control groups were separated significantly ( Fig 3A ) by t-SNE 9 ., However , without applying COAC cells from the IFN-β stimulated and control groups are uniformly distributed in the whole space ( Fig 3B ) , suggesting that components which separate IFN-β stimulated cells from control cells were eliminated from the feature vector identified by COAC ., We further collected a scRNA-seq dataset including a total of 43 , 745 cells with well-defined cell types from a previous study 10 ., We built a training set ( 21 , 873 cells ) and a validation set ( 21 , 872 cells ) with approximately equivalent size ., In the training set , we generated co-expression subnetworks as the feature vector by COAC ., For the validation set , we grouped the total 
cells into five main categories as described previously 10 ., Fig 3C shows that COAC-inferred subnetworks can be used to distinguish five different cell types with high accuracy ( cell types for 83 . 05% of cells were identified correctly ) in the t-SNE analysis , indicating that COAC can identify cell types from heterogeneous scRNA-seq profiles ., We next inspected potential pathobiological applications of COAC in identifying possible prognostic biomarkers or pharmacogenomics biomarkers in cancer ., We next turned to inspect whether COAC-inferred gene co-expression subnetworks can be used as potential prognostic biomarkers in clinical samples ., We identified gene subnetworks from scRNA-seq data of melanoma patients 11 ., Using a feature selection pipeline , we filtered the original subnetworks according to the difference of means and variances between two different groups ( e . g . , malignant cells versus control cells ) to prioritize top gene co-expression subnetworks ( S1A Fig ) ., We collected the bulk gene expression data and clinical data for 458 melanoma patients from the TCGA website 12 ., Applying COAC , we identified two gene co-expression subnetworks with the highest co-expression correlation in malignant cells compared to control cells ( S1B Fig ) ., For each subnetwork , we then calculated the co-expression correlation in bulk RNA-seq profiles of melanoma patients ., Using the rank of co-expression values of melanoma patients , the top 32 patients were selected as group 1 and the bottom 32 patients were selected as group 2 ., The log rank test was employed to compare the survival rates of the two groups 13 ., We found that gene subnetworks identified by COAC from melanoma patients-derived scRNA-seq data can predict patient survival rate ( Fig 4A and Fig 4B ) ., KRAS is an oncogene in multiple cancer types 14 , including melanoma 15 ., Herein we found that co-expression among KRAS , HADHB , and PSTPIP1 can significantly predict patient survival rate ( P-value
= 4 . 09×10−5 , log rank test , Fig 4B ) ., Thus , regulation of KRAS-HADHB-PSTPIP1 may offer a new pathobiological pathway and potential biomarkers for predicting patient’s survival in melanoma ., We next focused on gene co-expression subnetworks in several known melanoma-related pathways , such as the MAPK , cell-cycle , DNA damage response , and cell death pathways 16 by comparing the differences in means and variances between T cell and other cells using COAC ( see Methods ) ., For each gene co-expression subnetwork identified by COAC , we selected 32 patients who had enriched co-expression correlation and 32 patients who had lost a co-expression pattern ., We found that multiple COAC-inferred gene subnetworks significantly predicted melanoma patient survival rate ( Fig 4C–4F ) ., For example , we found that BRAF-PSMB3-SNRPD2 significantly predicts survival ( P-value = 0 . 0058 , log rank test . Fig 4C ) , revealing new potential disease pathways for BRAF melanoma ., CDKN2A , encoding cyclin-dependent kinase inhibitor 2A , plays important roles in melanoma 17 ., Herein we found a potential regulatory subnetwork , RBM6-CDKN2A-MRPL10-MARCKSL , which is highly correlated with melanoma patients’ survival rate ( P-value = 0 . 019 , log rank test . Fig 4F ) ., We also identified several new potential regulatory subnetworks for TP53 , which are highly correlated with patients’ survival rate ( Fig 4D and 4E ) ., Multiple novel COAC-inferred gene co-expression subnetworks that are significantly associated with patients’ survival rate are provided in S2 Fig .
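The patient grouping used for the log-rank comparisons above , ranking patients by subnetwork co-expression and taking the top and bottom 32 , can be sketched as follows ., This is a minimal sketch with a hypothetical function name , not the authors' pipeline; the survival comparison itself ( log rank test ) would then be run on the two returned groups:

```python
def split_by_coexpression(scores, k=32):
    """scores: dict mapping patient_id -> subnetwork co-expression score.
    Returns (top_k_ids, bottom_k_ids), the two groups compared by the
    log rank test (k = 32 in the paper)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k], ranked[-k:]

# Example with 6 synthetic patients and k = 2:
scores = {"P1": 0.9, "P2": 0.1, "P3": 0.7, "P4": 0.3, "P5": 0.8, "P6": 0.2}
top, bottom = split_by_coexpression(scores, k=2)
# top -> ["P1", "P5"], bottom -> ["P6", "P2"]
```

Group 1 ( enriched co-expression ) and group 2 ( lost co-expression ) in the text correspond to the top and bottom lists respectively .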
Altogether, gene regulatory subnetworks identified by COAC can shed light on new disease mechanisms, uncovering possible functional consequences of known melanoma genes, and offer potential prognostic biomarkers in melanoma. COAC-inferred prognostic subnetworks should be further validated in multiple independent cohorts before clinical application. To examine the potential pharmacogenomic applications of COAC, we collected robust multi-array (RMA) gene expression profiles and drug response data (IC50, the half-maximal inhibitory concentration) across 1,065 cell lines from the Genomics of Drug Sensitivity in Cancer (GDSC) database 18. We selected six drugs based on two criteria: (i) the highest variances of IC50 across over 1,000 cell lines, and (ii) drug targets across diverse pathways: SNX-2112 (a selective Hsp90 inhibitor), BX-912 (a PDK1 inhibitor), Bleomycin (induces DNA strand breaks), PHA-793887 (a pan-CDK inhibitor), PI-103 (a PI3K and mTOR inhibitor), and WZ3105 (also named GSK-2126458 and Omipalisib, a PI3K inhibitor). We first identified gene co-expression subnetworks from melanoma patients' scRNA-seq data 11 by COAC. The COAC-inferred subnetworks, combined with RMA gene expression profiles of bulk cancer cell lines, were then transformed into a matrix: each column represents a feature vector and each row represents a cancer cell line from the GDSC database 18. We then trained an SVM regression model using the LIBSVM 19 R package with default parameters and a linear kernel (see Methods). We defined cell lines whose IC50 was higher than 10 μM as drug-resistant (non-antitumor effects), and the rest as drug-sensitive (potential antitumor effects). As shown in Fig 5A–5F, the area under the receiver operating characteristic curve (AUC) ranges from 0.728 to 0.783 across the six drugs in 10-fold cross-validation, revealing high accuracy for prediction of drug responses by COAC-inferred gene subnetworks. To illustrate the underlying drug resistance mechanisms, we show two subnetworks identified by COAC for SNX-2112 (Fig 5G) and BX-912 (Fig 5H). SNX-2112, a selective inhibitor of Hsp90 (encoded by HSP90B1), has been reported to have potential antitumor effects in preclinical studies, including melanoma 20, 21. We found that several genes co-expressed with HSP90B1 in scRNA-seq data (such as CDC123, LPXN, and GPX1) may be involved in SNX-2112 resistance pathways (Fig 5G). GPX1 22 and LPXN 23 have been reported to play crucial roles in multiple cancer types, including melanoma. BX-912, a PDK1 inhibitor, has been shown to suppress tumor growth in vitro and in vivo 24. Fig 5H shows that several genes co-expressed with PDK1 (such as TEX264, NCOA5, ANP32B, and RWDD3) may mediate the underlying mechanisms of BX-912 responses in cancer cells. NCOA5 25 and ANP32B 26 were previously reported in various cancer types. Collectively, COAC-inferred gene co-expression subnetworks from individual patients' scRNA-seq data point to potential underlying mechanisms and new biomarkers for assessing drug responses in cancer cells. In this study, we proposed a network-based approach to infer gene-gene relationships from large-scale scRNA-seq data. Specifically, COAC identifies novel gene-gene co-expression in individual components (subsets of the whole component set) representing multiple cell types and cell phases, which can overcome the high degree of cell-cell variability in scRNA-seq data. We found that COAC reduced batch effects (Fig 2) and identified specific cell types with high accuracy (83%, Fig 3C) in two large-scale human scRNA-seq datasets. More importantly, we showed that gene co-expression subnetworks identified by COAC from scRNA-seq data were highly correlated with patients' survival in TCGA data and with drug responses in cancer cell lines. In summary, COAC offers a powerful computational tool for identification of gene-gene regulatory networks from scRNA-seq data, suggesting potential applications in the development of precision medicine. There are several improvements in COAC compared to traditional gene co-expression network analysis approaches for bulk RNA-seq data. Gene co-expression subnetwork identification by COAC is nearly unsupervised, and only a few parameters need to be determined. Since gene overlap among co-expression subnetworks is allowed, the number of co-expression subnetworks is an order of magnitude larger than the number of genes. Gene co-expression subnetworks identified by COAC can capture the underlying information of cell states or cell types. In addition, gene subnetworks identified by COAC shed light on underlying disease pathways (Fig 4) and offer potential pharmacogenomic biomarkers with well-defined molecular mechanisms (Fig 5). We acknowledge several potential limitations of the current study. First, the number of predicted gene co-expression subnetworks is huge; it remains a daunting task to select a few biologically relevant subnetworks from a large number of COAC-predicted subnetworks. Second, as COAC is a gene co-expression network analysis approach, subnetworks identified by COAC are not entirely independent; thus the features used for computing similarities among cells are not strictly orthogonal. In the future, we may improve the accuracy of COAC by integrating human protein-protein interactome networks and additional, already known, gene-gene networks, such as pathway information 27–29. In addition, we could improve COAC further by applying deep learning approaches 30 to large-scale scRNA-seq data analysis. In summary, we reported a novel network-based tool, COAC, for gene-gene network
identification from large-scale scRNA-seq data. COAC accurately identifies cell types and offers potential diagnostic and pharmacogenomic biomarkers in cancer. If broadly applied, COAC would offer a powerful tool for identifying gene-gene regulatory networks from scRNA-seq data in immunology and human diseases for the development of precision medicine. In COAC, a subnetwork is represented by the eigenvectors of its adjacency correlation matrix. In practice, the gene regulatory relationships represented by each subnetwork are not always unique: those that occur in each subnetwork represent a superposition of two or several regulatory relationships, each with a weight in the gene subnetworks, as shown in S3A Fig. We thereby used multiple components (i.e., the top eigenvectors with large eigenvalues) to represent the co-expression subnetworks. As shown in S3B Fig, a regulatory relationship between two genes can be captured in different co-expression subnetworks. Here, we integrated matrix factorization 31 into the workflow of closed frequent pattern mining 32. Specifically, the set of closed frequent patterns contains the complete itemset information of the corresponding frequent patterns 32; a pattern is closed in the sense that if two itemsets appear in the same samples, only the superset is kept. For a general gene expression matrix, to obtain a sparse distribution of genes in each latent variable, a matrix factorization method such as sparse principal component analysis (PCA) 33 can be chosen. In this study, because the scRNA-seq data matrix is highly sparse, singular value decomposition (SVD) was chosen for matrix factorization (i.e., the SVD of A is given by A = UσV*). The robust rank r is defined in S1 Text. Components that are greater than rank r are selected, and each attribute is then treated as the linearly weighted sum of components (Di = wi1P1 + wi2P2 + wi3P3 + … + wirPr). The projection of gene distribution i onto principal component j can be expressed as DiᵀPj/(‖Di‖‖Pj‖), where ‖Pj‖ = 1. Then

D(i,j) = DiᵀPj/(‖Di‖‖Pj‖) = DiᵀPj/‖Di‖ = wij/‖Di‖, with −1 < DiᵀPj/‖Di‖ < 1.

The projection of each attribute distribution onto each principal component distribution is illustrated in S4A Fig. In practice, single-cell data are always sparse: for a component j, most elements in the collection D(i,j)|j are zero. Several thresholds are determined by an F-distribution. For a component j, let the mean and variance of the collection D(i,j)|j be m and s². Then the F-statistic with 1 and N−1 degrees of freedom (N is the number of attributes) is:

F(1,N−1)(x) = (x − m)²/s²  (1)

The P-value for an element x in the collection D(i,j)|j is the extreme upper-tail probability of this F-distribution. The collection D(i,j)|j is divided into two groups by a threshold; in one group, the P-value of every element must be below a pre-defined cutoff. The detailed process for obtaining the thresholds is described in S1 Text. Here, the P-value cutoff for the F-distribution ranges from 0.01 to 0.05. Subsequently, we defined the mapping rule using these thresholds:

D(x,j) = 1 if thresholdPj < DxᵀPj/‖Dx‖ < 1 (Gain); 0 if thresholdNj < DxᵀPj/‖Dx‖ < thresholdPj (Non-effect); −1 if −1 < DxᵀPj/‖Dx‖ < thresholdNj (Loss)  (2)

The pipeline is shown in S4B and S4C Fig.
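The F-statistic of Eq (1) and the ternary mapping rule of Eq (2) can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the threshold values and toy projections are hypothetical, and in the actual method the thresholds are derived from F-distribution P-values as described in S1 Text.

```python
# Sketch of Eqs (1)-(2): each projection d = D_i^T P_j / ||D_i|| is compared
# against an upper and a lower threshold and mapped to +1 (Gain),
# 0 (Non-effect), or -1 (Loss).

def f_statistic(x, mean, var):
    """F(1, N-1) statistic of Eq (1): (x - m)^2 / s^2."""
    return (x - mean) ** 2 / var

def map_projection(d, thr_pos, thr_neg):
    """Ternary mapping of a projection value, per Eq (2)."""
    if d > thr_pos:
        return 1    # Gain
    if d < thr_neg:
        return -1   # Loss
    return 0        # Non-effect

# toy projections of one gene onto four components; assumed thresholds +-0.5
projections = [0.8, 0.05, -0.6, 0.2]
codes = [map_projection(d, thr_pos=0.5, thr_neg=-0.5) for d in projections]
```

The resulting (1/0/−1) codes form the sparse binary matrix on which the closed-association-rule mining of the next paragraph operates.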
In the (1/0) sparse matrix, each row represents a component and each column represents an attribute (gene). An association rule consists of: (i) an attribute (gene) collection and (ii) a component collection. The position in the binary distribution matrix of any pair from the Cartesian product of the two collections is always 1; this is shown in S4D and S4E Fig. For each association rule, the attribute collection should have the maximal component collection. For example, among the association rules {X Y Z} {M}, {X Y} {M}, and {X Y} {M N}, only the maximal {X Y} {M N} is allowed. The closed-association-rule condition states that if two rules have the same component collection, only the maximal attribute collection is kept. Among the association rules {X Y Z} {M N}, {X Y} {M N}, {Y Z} {M N}, and {X Z} {M N}, which share the component collection {M, N}, only the maximal {X Y Z} {M N} is kept, whereas the others are removed. The process of efficiently enumerating all significant association rules (gene subnetworks) is described in S1 Text. The subnetwork and gene distribution of selected components are obtained directly by applying the association rule, and the gene subnetwork is taken as the largest connected component (graph) of the co-expression network of scRNA-seq profiles. Finally, two metrics are introduced for filtering. The average correlation among genes in each subnetwork measures the homogeneity of genes with the selected components. The average component ratio denotes, on average, how much of the whole component space is occupied by the selected components:

AverageCorrelation = (1/(n(n−1))) Σ_{i,j ∈ {X,Y,Z}, i≠j} Correlation(Ai, Aj)|M,N  (3)

ComponentRatio of Ai = ‖Ai‖²|selected components / ‖Ai‖²  (4)

AverageComponentRatio = (1/N) Σ ComponentRatio of Ai (Ai ∈ attribute collection of a closed association rule)  (5)

The processes of obtaining the average
correlation and the average component ratio are provided in S1 Text. The final largest-connected-component subnetwork is represented by several eigenvectors with large eigenvalues, calculated from the correlation matrix. These eigenvectors are used to map each record of the gene expression profile into individual numerical values (feature vectors):

Feature vector = SFᵗ/‖S‖₂ (‖F‖₂ = 1)  (6)

where S is the gene expression vector for each cell and F is the first eigenvector of the component matrix. If several principal components exist, the feature value becomes the sum of the components multiplied by attenuation coefficients:

Feature vector = SF₁ᵗ/‖S‖₂ + (σ₂/σ₁)SF₂ᵗ/‖S‖₂ + (σ₃/σ₁)SF₃ᵗ/‖S‖₂ + … (‖F₁‖₂ = 1, ‖F₂‖₂ = 1, …)  (7)

where σ₁, σ₂, σ₃, …, σᵥ are the eigenvalues of the gene clustering (subnetwork) correlation matrix, and F₁, F₂, … are its eigenvectors. The purpose of cell type alignment is to label the cell type of each cell under different conditions. Cell types with the same labels under each condition are then clustered, and differential expression analyses are performed across the conditions of each cell type. Finally, surrogate variable analyses 34 are performed to remove batch effects. We used the limma 35 method (S5B Fig) for the differential expression analysis of cell types under different conditions. The scRNA-seq data (GEO accession ID: GSE96583) used to test batch effect elimination were collected from PBMCs (peripheral blood mononuclear cells) of SLE patients 7, 8. In total, 14,032 cells with 13 aligned PBMC subpopulations under resting and interferon-β (IFN-β)-stimulated conditions were collected 8. In addition, we collected 29,067 cells from two controls as the control group 7. For the training dataset, the variances of the feature vectors (COAC-identified subnetworks) between the case group and the
control group were calculated and regarded as differential variances. The variances of the feature vectors of the merged case and control groups were regarded as background variances. For each feature, the ratio of the differential variance to the background variance was defined as the F-score, which measures how well the feature distinguishes cells in the case group from those in the control group. The F-score distribution for 93,951 features is described in S6 Fig. Using a critical point of 2.4 as a threshold (S6 Fig), 8,331 features with F-scores above the threshold were kept. For comparison, we used 2,657 genes previously used as biomarkers 8 as the feature vector. The scRNA-seq data of mouse kidney with well-annotated cell types were collected from a previous study 10. After the stringent quality controls described previously 10, a total of 43,745 cells selected from the original 57,979 cells were used in this study. The dataset was randomly divided into a training set (21,873 cells) and a test set (21,872 cells). The details of prediction model construction can be found in the cell type alignment pipeline (S5 Fig). For validation, cell types were predicted using the trained model: for each cell, scores for the cell types were calculated, all cells were plotted with the t-SNE algorithm 9, and the results of cell type prediction were displayed in a confusion matrix. We collected melanoma patients' scRNA-seq data with well-annotated cell types from a previous study 11. The bulk RNA-seq data and clinical profiles for melanoma patients were collected from the TCGA website 13. Gene expression values in the scRNA-seq dataset were transformed as log(TPMij + 1), where TPMij refers to the transcripts-per-million (TPM) of gene i in cell j; gene expression values in the bulk RNA-seq dataset were transformed in the same way. The subnetwork list was obtained from
the melanoma scRNA-seq dataset 11 by COAC. Subnetworks were then transformed into feature vectors. The two top subnetworks with the highest co-expression correlation in the melanoma cell type and the one top subnetwork with the highest co-expression correlation in T cells were evaluated. Co-expression values were calculated from the RNA-seq gene expression of melanoma patients from TCGA 13. Survival analysis was conducted using the R survival package 36. We downloaded drug response data (defined by IC50 values) and bulk gene expression profiles of cancer cell lines from the GDSC database 18. Component co-expression subnetworks were identified from the melanoma patients' scRNA-seq data with well-annotated cell types from a previous study 11. For the scRNA-seq data, genes expressed in fewer than 3% of cells were removed. Here, we kept the top 0.1–0.01 percent of subnetworks with the highest correlation as feature vectors. We predicted each drug's IC50 value with the LIBSVM 19 R package using default parameters and a linear kernel. The ROC curves for the drug response results were plotted using the R package.
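The drug-response labelling step described above (IC50 above 10 μM counts as drug-resistant, the rest as sensitive) can be sketched as follows. This is an illustrative sketch, not the authors' code; the cell line names and IC50 values are toy data.

```python
# Sketch of the GDSC labelling rule: cell lines with IC50 > 10 uM are treated
# as drug-resistant (label 0, non-antitumor effect), the rest as drug-sensitive
# (label 1), before evaluating the SVM regression predictions via ROC curves.

RESISTANCE_CUTOFF_UM = 10.0

def label_cell_lines(ic50_by_line, cutoff=RESISTANCE_CUTOFF_UM):
    """ic50_by_line: dict cell_line -> IC50 in micromolar.
    Returns dict cell_line -> 1 (sensitive) or 0 (resistant)."""
    return {line: int(ic50 <= cutoff) for line, ic50 in ic50_by_line.items()}

labels = label_cell_lines({"A375": 2.3, "SKMEL28": 54.0, "HT144": 9.9})
```

These binary labels, together with the predicted IC50 values, are what the ROC/AUC evaluation in Fig 5A–5F operates on.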
| Introduction, Results, Discussion, Methods and materials | Recent advances in next-generation sequencing and computational technologies have enabled routine analysis of large-scale single-cell ribonucleic acid sequencing (scRNA-seq) data. However, scRNA-seq technologies suffer from several technical challenges, including low mean expression levels in most genes and higher frequencies of missing data than bulk population sequencing technologies. Identifying functional gene sets and their regulatory networks that link specific cell types to human diseases and therapeutics from scRNA-seq profiles is a daunting task. In this study, we developed a Component Overlapping Attribute Clustering (COAC) algorithm to perform localized (cell subpopulation) gene co-expression network analysis from large-scale scRNA-seq profiles. Gene subnetworks that represent specific gene co-expression patterns are inferred from the components of a decomposed matrix of scRNA-seq profiles. We showed that single-cell gene subnetworks identified by COAC from multiple time points within cell phases can be used for cell type identification with high accuracy (83%). In addition, COAC-inferred subnetworks from melanoma patients' scRNA-seq profiles are highly correlated with survival rate in The Cancer Genome Atlas (TCGA). Moreover, the localized gene subnetworks identified by COAC from individual patients' scRNA-seq data can be used as pharmacogenomic biomarkers to predict drug responses (the area under the receiver operating characteristic curve ranges from 0.728 to 0.783) in cancer cell lines from the Genomics of Drug Sensitivity in Cancer (GDSC) database. In summary, COAC offers a powerful tool to identify potential network-based diagnostic and pharmacogenomic biomarkers from large-scale scRNA-seq profiles. COAC is freely available at https://github.com/ChengF-Lab/COAC .
| Single-cell RNA sequencing (scRNA-seq) can reveal complex and rare cell populations, uncover gene regulatory relationships, track the trajectories of distinct cell lineages in development, and identify cell-cell variability in human diseases and therapeutics. Although experimental methods for scRNA-seq are increasingly accessible, computational approaches to infer gene regulatory networks from raw data remain limited. From a single-cell perspective, the stochastic features of a single cell must be properly embedded into gene regulatory networks. However, it is difficult to identify technical noise (e.g., low mean expression levels and missing data), and cell-cell variability remains poorly understood. In this study, we introduced a network-based approach, termed Component Overlapping Attribute Clustering (COAC), to infer novel gene-gene subnetworks in individual components (subsets of whole components) representing multiple cell types and phases of scRNA-seq data. We showed that COAC can reduce batch effects and identify specific cell types in two large-scale human scRNA-seq datasets. Importantly, we demonstrated that gene subnetworks identified by COAC from scRNA-seq profiles highly correlate with patients' survival and drug responses in cancer, offering a novel computational tool for advancing precision medicine. | biotechnology, medicine and health sciences, clinical research design, engineering and technology, statistics, gene regulation, computational biology, cancers and neoplasms, biomarkers, oncology, research design, mathematics, network analysis, pharmacology, pharmacogenomics, research and analysis methods, bioengineering, computer and information sciences, mathematical and statistical techniques, gene expression, melanomas, survival analysis, biochemistry, gene regulatory networks, genetics, biology and life sciences, physical sciences, genomics, statistical methods, genomic medicine | null |
2,296 | journal.pcbi.1000259 | 2,009 | State Based Model of Long-Term Potentiation and Synaptic Tagging and
Capture | It is widely believed that synaptic potentiation, as demonstrated by the physiological phenomenon of long-term potentiation (LTP), plays an important rôle in memory formation in the brain 1, 2. This has triggered a vast number of experiments in which this phenomenon has been recorded, both in vivo and in vitro. Typically, LTP can be elicited in a population of CA1 neurons by placing an electrode into an input pathway in the stratum radiatum and applying a burst of high-frequency stimulation. One major result that has emerged is that there are at least two distinct “phases” of LTP; see 3 for a review. Firstly, there is an “early”, transient phase (e-LTP) that can be induced by a single, brief burst of high-frequency stimulation (weak HFS). The lifetime of this phase is around three hours in slice experiments, and its expression does not require protein synthesis 4–6. Secondly, there is late-phase LTP (l-LTP), which is stable for at least the eight-hour time-span of a typical slice experiment, but which can last up to months in vivo 7–9. l-LTP can be induced by repeated (typically three) bursts of HFS, separated by 10-minute intervals (strong HFS). Thus, notably, more stimulation does not increase the amount of synaptic weight change at individual synapses (as often assumed in models), but rather increases the duration of weight enhancement. It has been shown that protein synthesis is triggered at the time of induction and is necessary for l-LTP 4, 5, although a more complicated rôle for protein synthesis in LTP has been implied 10, 11. Interestingly, e-LTP at one synapse can be converted to l-LTP if repeated bursts of HFS are given to other inputs of the same neuron during a short period before or after the induction of e-LTP at the first synapse 12–14. This discovery led to the hypothesis that HFS initiates the creation of a “synaptic tag” at the stimulated synapse, which is thought to be able to capture plasticity-related proteins (PRPs). The PRPs are believed to be synthesized in the cell body, although recent data suggest they may be manufactured more locally in dendrites 15. The general framework for these hetero-synaptic effects is called “synaptic tagging and capture” (STC). Which proteins are involved in each stage of STC has not been fully elucidated yet. Current data suggest that, at least in apical dendrites, calcium/calmodulin-dependent kinase II (CaMKII) is specifically involved in signaling the tag in LTP induction 15 and protein kinase Mζ is involved in the late maintenance of potentiated synapses 6, 16. The counterpart of LTP, long-term depression (LTD), can be induced by stimulating CA1 hippocampal neurons with low-frequency stimulation (LFS) 17, 18. LTD states appear to have properties analogous to the LTP states discussed above. The early phase, which we call e-LTD, lasts around three hours, is not dependent on protein synthesis, and can be induced by weak LFS, consisting of, for example, 900 stimuli at 1 Hz. For induction of the late phase, l-LTD, a stronger form of LFS is required, for example 900 bursts of three stimuli at 20 Hz, with an inter-burst interval of one second 19. Like l-LTP, l-LTD is stable for the duration of most experiments and is protein synthesis dependent 20. Moreover, e-LTD at one synapse can be converted to l-LTD if strong LFS is given to a second synapse of the same neuron within an interval of around one hour 19. The setting of LTD tags appears to be mediated by mitogen-activated protein kinases 15, but no specific PRP is yet known. It turns out that LTP and LTD are not independent processes and that an interaction known as “cross-capture” can occur between synapses tagged for LTP and synapses tagged for LTD 19. Thus 1) e-LTD at one synapse can be converted to l-LTD by giving strong HFS to a second synapse shortly before or after the induction of
e-LTD at the first synapse; 2) e-LTP can be converted to l-LTP in an analogous manner. Cross-capture suggests that strong HFS and strong LFS both trigger synthesis of both LTP-related and LTD-related proteins. A separate strand of research has put forward the idea that plasticity protocols cause synapses to make discrete jumps between weak and strong states 21, 22. Discrete synapses have a number of interesting theoretical properties, for example: 1) old memories become at risk of being erased as new ones are stored (e.g., 23); 2) synaptic saturation, important in preventing run-away activity, is automatically included, while storage capacity can be high 24. There have been several biochemical models that posit binary synapses 25–31. Induction and maintenance of activity-dependent plasticity have been successfully incorporated into a recent study 31, and the longevity of evoked synaptic changes has been investigated 28, 29. There is however great divergence between most network-level plasticity models and the experimental observations outlined above. Network models typically ignore interaction between synapses, use graded weights, and assume that the stimulus determines only the amount of weight change and not its longevity. Given the limited knowledge of the processes involved, a detailed model seems at present out of reach. Instead, the model we present in this paper aims to integrate the key results from experiments on induction, maintenance and STC into a concise model, whilst remaining simple enough to be useful for neural network modeling. The model posits a set of possible physical states in which a synapse can exist, including, in particular, states with a tag present. The states are characterized by their synaptic strength, and also by their resistance to potentiation and depression. These characteristics are assumed to be determined by the number of AMPA receptors present in the membrane 32, and by the configuration of proteins within the post-synaptic density (PSD) 33. In our model, a synapse existing in one state evolves by making stochastic transitions between the different states, with the probability per unit time of any given transition specified explicitly by the model. High- or low-frequency stimulation is assumed to change these transition probabilities. The model does not, at this stage, include the complete biochemical machinery involved in the induction, expression and maintenance of synaptic plasticity. Instead, for reasons of computational efficiency, we develop a high-level model that abstracts these processes and concentrates on the quantities important for network behavior, namely the induction protocols and the resulting weight changes. The model reproduces sufficient agreement with real data to render it useful in exploring further the functional consequences of STC in network modeling. We have used our model to simulate several electrophysiology experiments with multiple populations of synapses. More specifically, we consider stimulation of multiple independent synaptic inputs to the same neuronal population in CA1, such that a protein synthesis-triggering stimulus (i.e.
, strong HFS or strong LFS) to one input affects all populations of synapses, and leads to STC interactions between populations. The stimulation protocol for the experiment sets the transition rates for synaptic state transitions within each population. In all cases we assume that at the start of the simulation there have been no recent stimulation protocols, and that the system is in equilibrium. Thus, initially, all transition rates are at their resting values, and all synapses occupy one of the basal states. Moreover, within each population, 80% of the synapses occupy the weak, as opposed to the strong, basal state (see Results). Note, though, that in a real experiment not all synapses will be in basal states, because they might have experienced strong stimuli earlier in life. As a result, some synapses may already be in the l-LTP or l-LTD state before the experiment is started. These will however remain in those states throughout the experiment and not interfere with other synapses, so they can be ignored. The presence of such synapses would, however, reduce the observed amount of LTP/D, both in model and experiment. The actual number of synapses measured in experiments using extra-cellular recordings is not known and probably varies considerably between experiments. The results we obtain come from taking 1000 synapses in each population. Starting from the initial equilibrium condition, we update state occupancy numbers at each time-step by random sampling in accordance with the transition rates. Then, for each population, we can find the relative field excitatory post-synaptic potential by expressing the summed synaptic weight at time t as a percentage of the initial summed synaptic weight:

relative fEPSP(t) = 100 × Σᵢ wᵢnᵢ(t) / Σᵢ wᵢnᵢ(0)  (1)

where nᵢ(t) denotes, for a given population, the occupancy number of state i at time t, with the states numbered as in Figure 1, and where the weight wᵢ of states 4, 5, 6 is twice that of states 1, 2, 3. In addition to stochastically simulating experiments, it is possible to calculate mean results as well as the inter-trial standard deviation for each experiment we simulate. Consider a single population of synapses within an experiment, and let pᵢ(t) denote the probability that a particular synapse is in state i at time t. The time evolution of the pᵢ is given by equation (2), where the matrices appearing in (2) are defined by (3), and rᵢⱼ(t) denotes the transition rate from state i to state j at time t. Using equations (2) and (3), and the fact that at all times the occupancy numbers follow a multinomial distribution with parameters given by the total synapse number and the pᵢ(t), it is straightforward to obtain equations (4)–(6) for the moments of the occupancy numbers. From these equations, together with equation (1), we obtain equations (7) and (8), where wᵢ is the weight associated with state i. Numerical integration of equations (4), (7) and (8) from appropriate initial conditions enables us to plot the mean and the standard deviation of the relative fEPSP; the appropriate initial conditions follow from the equilibrium multinomial distribution. Our model is designed to reproduce as much pre-existing electrophysiological data on long-term plasticity and STC as possible, whilst at the same time remaining as simple as possible for its purpose. In drawing up a list of states, a trade-off must be made between having few states and complicated transition rate dynamics or having many states and simple transition rate dynamics. Our convention is to say that states are distinct if they differ either in their synaptic strength or in the expected time it will take them to potentiate or depress in the absence of any plasticity protocols. This leads us to a six-state model, containing three weak and three strong states: weak basal, strong basal, e-LTD, e-LTP, l-LTD, and l-LTP. The reactions that are triggered by plasticity protocols are incorporated via time-variable transition rates between
these six states. Figure 1 shows schematic drawings of the synaptic states of the model, together with the allowed transitions between states. The rate parameter associated with the transition from one state to another gives the probability per unit time of a synapse in that state making the transition. Equivalently, the inverse of the rate parameter is the average time it takes the synapse to make the transition (assuming no other transition is available). In our simulations we model populations of synapses, with each individual synapse behaving independently with respect to making transitions between states. In mathematical terms, our model is a stochastic Markov process. Effects of stimulation protocols are modeled by transient changes to the transition rates. To model STC, certain stimulation protocols given to just one population of synapses can affect the transition rates of multiple populations. These hetero-synaptic effects reflect the capturing component of STC. In the absence of stimuli, synapses fluctuate between a weak and a strong basal state. The weak basal state is assigned an arbitrary synaptic weight, whilst the strong basal state is taken to have a higher synaptic weight. These could correspond to the two states probed in the experiments of Ref. 22, in which it was found that the pairing of a brief steady current injection with an appropriate depolarization led to switch-like approximate doubling or halving of synaptic efficacy. The difference in efficacy between the two states is assumed to come about from AMPA receptor insertion/deletion. The transition rates for changes from weak to strong efficacy and from strong to weak efficacy are set to fixed values. The values of these parameters are chosen (a) to fit the observation that 80% of synapses occupy the weak basal state when the population is in equilibrium 22; (b) for the model to reproduce data on e-LTP/D decay to good agreement (via decay from the e-LTP/D state followed by equilibration between the two basal states). These rates are comparable with AMPA receptor recycling times 34. The other strong synaptic states are the e-LTP and late-LTP states. They have the same efficacy as the strong basal state, but are considered potentiated states due to their increased resistance to depression. Choosing all potentiated states to have the same weight is motivated by the data, which show that in experiments all LTP forms exhibit very similar amounts of weight change. This is actually surprising given the wide variety of mechanisms that underlie the different forms of LTP. Transitions into the potentiated states only occur during intervals following certain stimulation protocols, which we discuss below. Once a synapse enters the e-LTP state it will decay back into the strong basal state, with a fixed decay rate, unless it has the opportunity to move into the late-LTP state. The motivation for this decay rate comes from experimental results on e-LTP decay. Furthermore, it is assumed there is a tag present in the e-LTP state, since data suggest synapses in an e-LTP state convert to a late-LTP state whenever PRPs become available for capture 12, 13. Although we do not model the biochemistry explicitly, we suggest that when a synapse is in the e-LTP state, the CaMKII in the synapse is in a phosphorylated state 15. When a synapse enters the late-LTP state, it becomes very stable, as the only transition is a very slow decay to the strong basal state. Synapses in the late-LTP state are assumed to have captured PRPs 6, 16. Although there is some evidence that decay from the late-LTP state is an active process rather than passive decay 8, 35, detailed knowledge of this is still lacking, so we did not attempt to include it. The given decay rate is not intended to be precise, but is intentionally of a smaller order of magnitude than the other time-constants
of the model. Finally, the model is symmetric in potentiation and depression, and so the LTD states are analogous to the LTP states. The model has ten transitions in total; however, setting some rates identical leaves a total of seven transition rate parameters (Figure 1). We have so far mentioned the two rates responsible for fluctuations between the basal states, as well as the decay rates for the e-LTP/D and late-LTP/D states, respectively. In addition, there are three further parameters for transitions into the e-LTP/D and late-LTP/D states. These are only switched on following a plasticity-inducing protocol. Note that of these seven parameters, only the two decay rates are constant; the other five change transiently after stimulation. In this section and the next we discuss the effects of LTP-inducing protocols on the transition rates; the effect on synaptic weight dynamics is discussed in later sections. We model induction in a direct way, focusing on the effects of specific plasticity-inducing stimuli rather than introducing additional stimulus parameters (such as strength, frequency or duration). Specifically, we consider 1) for e-LTP, a single one-second burst of HFS (weak HFS); 2) for late-LTP, three repeated bursts of HFS, separated by 10-minute time-intervals (strong HFS). The time courses for the transition rates have been chosen so that the model matches the electrophysiological data that the model aims to reproduce. After any burst of HFS is applied, the following two changes occur. Firstly, the rate from the weak to the strong basal state increases to some very large value for a short period, before returning to its original value. Mathematically, this rate is modeled as a brief, large-amplitude pulse at the stimulus time. This, in effect, moves all synapses occupying the weak basal state into the strong basal state. This rapid switching is motivated by the above-mentioned observations at the single-synapse level 22, and is assumed to come about from AMPA receptor insertion. Secondly, transitions from the strong basal state into the e-LTP state are transiently turned on. Following a stimulus, the rate of these transitions is given by an alpha-function. Thus the rate takes a few minutes to grow to a significant level, peaks ten minutes after stimulation, and then decays back toward zero (Figure 2). Alpha-functions arise naturally in chemical reaction dynamics. In general, a chain of first-order reactions will lead to a difference of exponentials, while two subsequent reactions with identical rates will yield an alpha-function. Here the alpha-function is assumed to arise from the biochemical induction process in the PSD. The time-course is motivated by evidence that a synaptic tag takes a few minutes to form 36. Biophysically, the transitions to the e-LTP state might correspond to the phosphorylation of serine-831 of the GluR1 AMPA receptor sub-units during LTP induction 33, which is higher 30 minutes after LTP induction than immediately post-stimulus 2. Serine-831 phosphorylation is driven by CaMKII phosphorylation, which happens on a faster time-scale than that of tag stabilization 36. A highly simplified model of this cascade would yield an alpha-function. Alternatively, the CaMKII phosphorylation itself might correspond to tag formation and the transition to e-LTP. In addition to evoking the rate changes described above, a synapse subject to strong HFS must incur additional changes resulting from the triggering of protein synthesis and diffusion 4, 5. This translates in our model into the triggering of the transition rate from the e-LTP state into the late-LTP state. As discussed above, this might correspond to the capture of PRPs. We assume that the second burst of HFS crosses the threshold for protein synthesis and the rate begins to change. Simulations are not sensitive to the precise course this rate takes, nor is this tightly constrained by
experimental data. We assume a plausible functional form in which the rate begins to change when the second burst of HFS arrives and reaches its maximum value some time later. The precise conditions for protein synthesis are not known. The strong HFS protocol described here is not the only protocol that leads to late-LTP; sometimes a strong, single burst of HFS is used 11. In that instance, we would need to assume that protein synthesis starts sooner. In general, this could be achieved by integrating the stimulation and thresholding it. The same rate also governs transitions from the e-LTD state to the late-LTD state, which enables the model to describe "cross-capture", whereby e-LTD of one synapse induced by weak LFS can be converted to late-LTD by applying strong HFS to a different synapse 19. We discuss this further in the section "Modeling synaptic tagging and capture". Figure 2 summarizes the effects of weak and strong HFS on the transition rates in our model, including their time courses. The effects of LFS are analogous to those of HFS. Both weak and strong LFS affect the rate from the strong basal to the weak basal state, and the rate from the weak basal to the e-LTD state, in the same way that HFS affects the corresponding LTP rates. The only difference is that the basal rate is held very high for an extended period of four minutes, to reflect the longer duration of an LFS protocol. The induction rate follows the corresponding time-course, with its onset at the time of stimulation. This transition could correspond to the de-phosphorylation of serine S-845 33. As mentioned above, the rate from the e-LTD state to the late-LTD state is given by the same parameter as the rate from the e-LTP state to the late-LTP state. Strong LFS triggers this parameter in the same way as strong HFS, i.e., following strong LFS starting at stimulus onset. (This can be taken to start at stimulus onset since the strong LFS we consider consists of triple pulses separated by just one-second intervals. This is in contrast to our strong HFS protocol, for which the bursts are separated by 10-minute time-intervals.) In the above discussion, we have focused on stimulation of a single population of synapses. However, STC relates to interactions between different populations of synapses. In our model, transitions from the weak basal state to the strong basal state, or from the strong basal state to the e-LTP state, reflect synapse-specific changes; namely changes in the number of AMPA receptors, and configurational changes in the PSD 32, 33. These transition rates are only modified in stimulated synapses, and hence weak HFS only affects synapses to which it is applied. However, transition from the e-LTP to the late-LTP state results in cell-wide changes, i.e., protein synthesis. Thus, after one population of synapses has received strong HFS, many populations of synapses will see a change in the rate of these transitions. Consistent with experiments, synapses in an unstimulated population have little chance of being in the e-LTP state, and will not be affected by the strong HFS; no tags are present. But if another population of synapses has received weak HFS and moved into the tagged (e-LTP) state, then they have a chance to move into the late-LTP state; proteins are captured by tags. The STC process for LTD is analogous to that for LTP. Note that there is evidence that the STC interaction has limited range, and cannot occur between far-away synapses, such as between a basal dendrite synapse and an apical dendrite synapse 15, 37. In this work we assume that when two different populations within the same neuron are stimulated, they are close enough to interact via STC. However, extension to compartmentalized STC is possible (
see Discussion). As we demonstrate below, the model also accounts for "cross-capture" in a straightforward way by using the same parameter for transitions from the e-LTP state to the late-LTP state and from the e-LTD state to the late-LTD state. Thus, for example, after one population of synapses has received strong HFS, synapses from a second population that find themselves in the tagged e-LTD state will have a chance to change into the late-LTD state as a result of LTD tags capturing proteins. In addition to reproducing single-trial experiments, the model makes novel predictions about the theoretical mean and inter-trial standard deviation of the fEPSP. Figures 7A and 7B illustrate this for populations of 1000 synapses given weak HFS and weak LFS, whilst graphs C+D illustrate this for strong HFS and strong LFS. We see that when e-LTP is established the standard deviation is greater than at baseline, whilst when e-LTD is established the standard deviation is less (Figure 7B). In the former case, the increase is a result of variability in the number of synapses that make it into the e-LTP state. Although all synapses are initially moved into the strong basal state by the HFS (resulting briefly in zero fEPSP variability), while the tag-forming reaction in the PSD is still incomplete a variable number of synapses drop into the weak basal state, from where they can no longer access the e-LTP state (Figure 1). Although an analogous process occurs during the onset of e-LTD, the standard deviation remains less in this case since the transition rate from the weak to the strong basal state is much less than that from the strong to the weak basal state. The standard deviation is also less when the late phase is established (Figure 7D). This is because strong HFS/LFS enables almost all the synapses to enter first the e-LTP/D state, and then the late state, in which the weight becomes stable. The theoretical predictions above can be used in a similar way to the noise analysis technique used to extract properties of voltage- and ligand-gated channels from measurements of their mean current and current fluctuations 39. In all cases the transition matrix determines not only the evolution of the mean but also the fluctuations around the mean. In principle this means that a more accurate estimate of the transition matrix can be obtained by fitting both the mean and the fluctuations. In analogy with standard noise analysis, here the fluctuations in the basal state are inversely proportional to the number of synapses, the spectrum of the fluctuations can be used to determine the rate constants, and changes to the fluctuations as compared to baseline can be used to calculate how many synapses have made a transition. Although we have attempted to perform this type of analysis on data recorded by Roger Redondo, we found that too many additional noise sources, as well as non-stationarity, make this analysis currently unsuitable. We have presented a model of synaptic plasticity at hippocampal synapses which reproduces several slice experiments. It contains just six distinct states, yet gives rise to a rich set of electrophysiological properties. The model incorporates the two observed flavors of LTP and LTD, namely the early and late phases, and de-potentiation, as well as the interaction between these two phases, known commonly as synaptic tagging and capture. The model has a number of key features: because all three LTP and all three LTD states have the same weight associated with them, a given synapse has a binary weight. This is reminiscent of a number of models that have proposed bistable synapses to stabilize memories, often using CaMKII as a switch 25–31. In the current model, synapses have three levels of stability (basal, early-phase and late-phase), with the early- and late-phase being stable up to hours. It is likely that on a biochemical
level, bistable switches underlie these more stable states and slow down the transition rates, consistent with those earlier models. Another key postulate of the model is the existence of a single state that corresponds both to the synapse exhibiting e-LTP and to the presence of an LTP tag (and similarly for LTD). They go hand in hand; under natural conditions there is no mechanism by which a tag can be removed whilst still retaining e-LTP, or indeed vice-versa (Figure 6). If tag formation is incomplete, de-potentiation (from LFS) can occur and tag formation is halted, but if tag formation is complete, de-potentiation cannot occur and the tag cannot be destroyed, consistent with data in 36. Pharmacological 15 and genetic manipulations (reviewed in 40) can interfere with tag setting and capture. The reverse, tag setting without e-LTP, has not (yet) been observed. Finally, the model makes predictions about the noise level in the fEPSP during a period of potentiation (or depression) followed by a return to baseline value. In particular, it predicts that the noise level increases during a period of e-LTP, but decreases during a period of e-LTD or of the late phase (Figure 7).
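The prediction that inter-trial variability rises during e-LTP but falls once the late phase is established follows directly from binomial occupancy statistics. The following sketch illustrates this; only the 80% weak-state baseline occupancy is taken from the model fit, while the e-LTP and late-phase occupancy probabilities (0.6 and 0.97) and the two weight values are assumed purely for illustration:

```python
import math
import random

random.seed(7)

def fepsp_stats(n_syn, p_strong, w_weak=1.0, w_strong=2.0, trials=1000):
    """Mean and inter-trial SD of the summed synaptic weight (a stand-in for
    the fEPSP) when each synapse independently occupies the strong-weight
    state with probability p_strong and the weak state otherwise."""
    samples = []
    for _ in range(trials):
        n_strong = sum(random.random() < p_strong for _ in range(n_syn))
        samples.append(n_strong * w_strong + (n_syn - n_strong) * w_weak)
    mean = sum(samples) / trials
    sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (trials - 1))
    return mean, sd

# Baseline: 80% of synapses occupy the weak basal state (from the model fit).
_, sd_base = fepsp_stats(1000, p_strong=0.20)
# e-LTP: a variable fraction of synapses reaches the tagged state (assumed 0.6).
_, sd_eltp = fepsp_stats(1000, p_strong=0.60)
# Late phase: nearly all synapses captured into the stable state (assumed 0.97).
_, sd_late = fepsp_stats(1000, p_strong=0.97)
```

Because the trial-to-trial variance of the summed weight scales as N·p·(1−p), it peaks at p = 0.5: partial occupancy of the tagged state during e-LTP therefore adds noise relative to baseline, whereas near-complete capture into the stable late state removes it.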
The source of this noise is purely the random nature of the transitions between states. As experimental noise is not taken into account by the model, a test of these predictions would require systematic removal of experimental noise from a data set. The reason for the decreased variability during the late phase is that many synapses occupy a state that is immune to weight change. An alternative, more complicated model would allow for the possibility of a synapse in a "strong" state becoming even stronger, say by insertion of even more AMPA receptors. If this were the case, then a greater level of noise could occur during late-LTP as a result of synapses fluctuating between the late-LTP state of our current model and an extra "even stronger" state. Note however that this would be inconsistent with experimental evidence that synapses have only two stable levels of efficacy, e.g. 2, 22. Next, we discuss shortcomings and potential extensions of the model. In general, it is likely that adding extra states and more complex dynamics would refine the agreement with experimental data. However, doing this incurs the cost of making the model more cumbersome to fit and computationally more expensive. Extra states could, for example, enable us to incorporate the biochemistry of the PSD, leading to a more realistic description of the flow from the basal states into the LTP and LTD states 41. A recent model of LTP by Smolen 42 indeed incorporates continuous variables for the state of the tag and for protein expression, together with modeling of calcium dynamics. Protein synthesis probably plays a more subtle role in LTP than our model incorporates. For example, immunity to de-potentiation does not require protein synthesis in our model, even though some data suggest it does 43. Other data suggest that, at high levels of synaptic activation, protein synthesis can be involved in e-LTP as well as in late-LTP 10. We have not considered such regimes of
reduced protein synthesis, in which there could be competition for the capture of the proteins available 44. To reduce the level of protein synthesis, one could simply decrease the post-strong-stimulus growth and peak of the transition rate corresponding to the availability of PRPs. Competition could then be incorporated by reducing the value of this rate further every time a synapse makes the transition into a late state. Both these effects would reduce the number of synapses that enter the late states, and the long-term change in the fEPSP would be reduced. Another extension would involve specifying the distances of the site of protein synthesis from the two stimulated populations. Our results are not sensitive to the precise time-course of this transition rate, and so our model does not make predictions about this. The time-course could however be made to reflect the distance of the site of protein synthesis from the stimulated synapses. For very local protein synthesis, the rate would grow faster and larger than for more distant protein synthesis. In particular, if different populations were at different distances from the site of protein synthesis, then the rate would differ between the two populations. For example, suppose protein synthesis took place near a population of synapses given strong HFS. Then a second popu | Introduction, Methods, Results, Discussion | Recent data indicate that plasticity protocols have not only synapse-specific but also more widespread effects. In particular, in synaptic tagging and capture (STC), tagged synapses can capture plasticity-related proteins, synthesized in response to strong stimulation of other synapses. This leads to long-lasting modification of only weakly stimulated synapses. Here we present a biophysical model of synaptic plasticity in the hippocampus that incorporates several key results from experiments on STC. The model specifies a set of physical states in which a
synapse can exist, together with transition rates that are affected by high- and low-frequency stimulation protocols. In contrast to most standard plasticity models, the model exhibits both early- and late-phase LTP/D, de-potentiation, and STC. As such, it provides a useful starting point for further theoretical work on the role of STC in learning and memory. | It is thought that the main biological mechanism of memory corresponds to long-lasting changes in the strengths, or weights, of synapses between neurons. The phenomenon of long-term synaptic weight change has been particularly well documented in the hippocampus, a crucial brain region for the induction of episodic memory. One important result that has emerged is that the duration of synaptic weight change depends on the stimulus used to induce it. In particular, a certain weak stimulus induces a change that lasts for around three hours, whilst stronger stimuli induce changes that last longer, in some cases as long as several months. Interestingly, if separate weak and strong stimuli are given in reasonably quick succession to different synapses of the same neuron, both synapses exhibit long-lasting change. Here we construct a model of synapses in the hippocampus that reproduces various data associated with this phenomenon. The model specifies a set of abstract physical states in which a synapse can exist, as well as probabilities for making transitions between these states. This paper provides a basis for further studies into the function of the described phenomena. | computational biology/computational neuroscience, neuroscience/theoretical neuroscience | null |
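The basal dynamics underlying the six-state scheme summarized above reduce to a two-state master equation whose equilibrium is set by the ratio of the two basal transition rates. A minimal sketch follows; the absolute rate values are assumptions, and only their 4:1 ratio, which yields the reported 80% weak-state occupancy at equilibrium, is constrained by the text:

```python
def evolve_weak_fraction(p_weak, a, b, dt, steps):
    """Forward-Euler integration of the two-state master equation
    dp_weak/dt = -a*p_weak + b*(1 - p_weak), where a is the weak->strong
    rate and b the strong->weak rate (values here are illustrative)."""
    for _ in range(steps):
        p_weak += (-a * p_weak + b * (1.0 - p_weak)) * dt
    return p_weak

# With b/a = 4 the equilibrium weak fraction is b/(a+b) = 0.8, matching the
# observation that 80% of synapses occupy the weak basal state at rest.
p_eq = evolve_weak_fraction(0.5, a=0.05, b=0.20, dt=0.1, steps=5000)
```

Starting from any initial occupancy, the population relaxes to the 80/20 split with time constant 1/(a+b); transient stimulation effects in the model correspond to temporarily perturbing a or b away from these resting values.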
1,547 | journal.pcbi.1003599 | 2,014 | Feedback Signals in Myelodysplastic Syndromes: Increased Self-Renewal of the Malignant Clone Suppresses Normal Hematopoiesis | Myelodysplastic syndromes are clonal disorders which are characterized by ineffective hematopoiesis, peripheral cytopenia and a high risk of disease progression towards acute myeloid leukemia (AML) 1–3. They arise from an aberrant HSC that gains a growth advantage over normal hematopoiesis, resulting in clonal expansion 4, 5. The pathogenesis of this disease is still unclear, and no curative treatment has been developed with the exception of stem cell transplantation 6–8. So far, research has particularly focused on cell-intrinsic modifications of MDS cells: mutations and molecular aberrations have been identified which seem to increase proliferation of the malignant clone 9, 10. On the other hand, defects might also emerge as a result of an abnormal microenvironment 11–13. Mesenchymal stromal cells (MSCs) show intrinsic growth deficiency in MDS 14 and fail to support hematopoiesis 13. It has been suggested that MDS is also associated with increased apoptosis rates of normal bone marrow cells 3, 15. So far, the mechanisms that suppress normal hematopoiesis remain unclear, as there is no evidence that the bone marrow niche is completely filled by the malignant clone 4, 16. Self-renewal and differentiation of HSCs need to be tightly controlled according to physiological needs 17. For this purpose, feedback signals may either be derived from the immediate bone marrow microenvironment or from systemically released factors. The highest self-renewal rate is expected for the long-term repopulating HSCs (LT-HSCs), which predominantly remain dormant under steady-state conditions 18. Yet, self-renewal and differentiation are also prerequisites of short-term repopulating stem cells (ST-HSCs), multipotent progenitor cells (MPPs), committed progenitor cells (CPCs) and
precursors 19, 20. In analogy, cells derived from the aberrant MDS clone may also display a hierarchy of self-renewal and differentiation: this is in line with the concept of cancer stem cells – or tumor-initiating cells – which then reveal further differentiation and heterogeneity 21, 22. It is generally anticipated that proliferation rates are higher in malignant cells. On the other hand, several mutations seem to affect self-renewal in MDS 23, 24 – yet this is difficult to study under in vivo conditions. Mathematical modeling is a powerful tool to study the interaction of different cell types and the impact of feedback signals 18, 25, 26. Based on the biological context, several models have been proposed to study the impact of feedback signals on system stability and regenerative properties. Theoretical and experimental studies on the olfactory epithelium 27, 28, as well as theoretical considerations of self-renewing cell lineages 29, demonstrate the necessity of feedback signals for system stability and efficient regeneration. We have recently proposed mathematical models describing activation of the HSC pool upon hematopoietic stem cell transplantation (HSCT). These models indicated that feedback signals for self-renewal and proliferation are important. In particular, increased self-renewal rates of immature cells facilitate efficient hematopoietic reconstitution 18, 30. Similar results have been obtained for the olfactory epithelium 27. Subsequently, we have shown that patient serum obtained during aplasia after HSCT has an impact on hematopoietic progenitor cells (HPCs) in vitro: it significantly increased proliferation, maintenance of the primitive immunophenotype and expansion of colony forming units (CFUs) 31. These findings supported the notion that systemically released factors contribute to the regulation of stem cell function. In the current work, we conceived a mathematical model to simulate development of MDS
with particular focus on self-renewal and proliferation of the aberrant clone. MDS is a very heterogeneous disease. Furthermore, multiple mutations contribute to a complex clonal hierarchy during disease progression, and many parameters are so far not well defined in specific cellular subpopulations. In this regard, we aimed for a conceptual approximation of how the malignant clone interferes with normal hematopoiesis – irrespective of specific MDS subtypes or hematopoietic cell lineages, as well as multiple mutation scenarios. Existence of the proposed feedback signals was then substantiated using serum of MDS patients. The use of all human materials was performed after written consent and according to the guidelines approved by the local Ethics Committees: CD34+ cells were isolated from umbilical cord blood (CB; Permit Number: EK187/08; RWTH Aachen University); CD34+ cells and MSCs were also isolated from bone marrow (BM) during surgical intervention (Permit Numbers: EK300/13 and EK128/09; RWTH Aachen University); and serum from MDS patients or healthy controls was collected in Düsseldorf and Aachen, respectively (Permit numbers: 2972 and EK206/09). The mathematical model developed in this study considers the interaction of normal hematopoiesis and myelodysplastic cells in the bone marrow. It is based on a previously proposed model of the hematopoietic system 18 that was extended to describe the dynamics of aberrant clones, as in MDS development 26. The model is based on a system of ordinary differential equations describing the flux of cells through different maturation stages for both normal and malignant cells. The structure of the model is depicted in Figure 1 and a detailed description of the model is given in Text S1. CD34+ cells were isolated from fresh umbilical cord blood using the human CD34 MicroBead Kit (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany) as described before 31. Alternatively, CD34+ cells were isolated from
human bone marrow aspirate from the femur obtained during orthopaedic surgery. MSCs were isolated from the caput femoris and cultured as described before 32, 33. For co-culture experiments, we have used MSCs of passage 3 to 6 (10–15 population doublings). Serum samples from 57 MDS patients and 5 healthy controls were obtained from the Department of Hematology of Heinrich Heine University in Düsseldorf. Additionally, serum of 12 healthy controls was obtained from the Department of Gynaecology at RWTH Aachen University. Generation of serum was performed as described in detail before 31. Relevant patient data are summarized in Table 1 in Text S1. Hematopoietic progenitor cells were expanded for up to seven days as described previously 34 in StemSpan culture medium supplemented with 10 ng/mL stem cell factor (SCF; PeproTech GmbH, Hamburg, Germany), 20 ng/mL thrombopoietin (TPO; PeproTech), 10 ng/mL fibroblast growth factor 1 (FGF-1; PeproTech) and 10 µg/mL heparin (Roche GmbH, Mannheim, Germany) 32. For co-culture experiments, addition of cytokines was not performed, as MSCs alone activate proliferation. Culture medium was always supplemented with 10% serum of individual MDS patients or control samples, as described in our previous work 31. Freshly isolated CD34+ cells (either from CB or BM) were labelled with carboxyfluorescein diacetate N-succinimidyl ester (CFSE; Sigma-Aldrich, Hamburg, Germany) to monitor cell divisions as previously described 34. After five days, CFSE intensity was measured by flow cytometry. For immunophenotypic analysis, cells were stained with CD34-allophycocyanin, CD133-phycoerythrin and CD45-V500 and analyzed using a FACS Canto II (BD) 32. Further details on immunophenotypic analysis are provided in Text S1. Colony forming unit (CFU) frequency was determined to estimate the culture expansion of HPCs. In brief, 12,500 CD34+ cells were grown for seven days in StemSpan medium
supplemented with SCF, TPO, FGF, heparin and 10% patient serum. The progeny was harvested and analyzed in the CFU assay as described before 31. Concentrations of SCF, TPO and FGF in patient serum were determined with RayBio Human ELISA Kits (RayBiotech, Norcross, GA, USA) according to the manufacturer's instructions. The concentration of erythropoietin (EPO) was measured by the laboratory diagnostic center of RWTH Aachen University with a chemiluminescent immunometric assay (IMMULITE 1000 EPO). All results are expressed as mean ± standard deviation (SD) or ± standard error of the mean (SEM). To estimate the probability of differences, we have adopted the two-sided Student's t-test. A probability value of p<0.05 denoted statistical significance. We propose a mathematical model to address the relevance of self-renewal and proliferation rates for MDS development. The model describes the interaction of 1) normal hematopoietic cells, which progress along long-term repopulating stem cells (LT-HSCs), short-term repopulating stem cells (ST-HSCs), multipotent progenitor cells (MPPs), committed progenitor cells (CPCs), precursors and mature cells (Figure 1A), with 2) cells of the MDS clone, which progress through analogous steps of differentiation except for mature cells (MDS-LT-HSCs, MDS-ST-HSCs, MDS-MPPs, MDS-CPCs and dysplastic precursors; Figure 1B). We assume that proliferation is regulated in normal and malignant cells by feedback signals acting on all developmental stages – it is inversely correlated with the number of mature cells in peripheral blood (PB). On the other hand, we assume that self-renewal is regulated by cellular density in a virtual stem cell niche occupied exclusively by the more primitive cells in the marrow – it is inversely correlated with the number of cells in the three more primitive compartments (LT-HSCs, ST-HSCs, MPPs, MDS-LT-HSCs, MDS-ST-HSCs, and MDS-MPPs; Figure 1C). A wide
range of values of each parameter has been examined ., The simulations consistently demonstrate that high self-renewal of MDS-initiating cells is crucial for MDS development ., Only if MDS-LT-HSCs have a higher self-renewal potential than normal LT-HSCs , they eventually outcompete healthy hematopoiesis ., In contrary , increased proliferation of MDS-cells alone is not sufficient ., Notably , we have assumed that the proliferation rate of MDS-HSCs is lower than in normal LT-HSCs ( maximal cell division rate every 100 versus every 50 days , respectively ) - even then the MDS clone gains predominance if the self-renewal rate is higher than in normal LT-HSCs ( maximal self-renewal rate 90% versus 70% , respectively ) ., Nevertheless , high proliferation rates in MDS cells - although not required for establishment of the disease – would accelerate expansion of a cell population if self-renewal is also increased ., Our results indicate that increased self-renewal is most essential for MDS , whereas an additional increase of proliferation accelerates the impairment of hematopoiesis ., MDS is usually a slow progressive disease which occurs particularly in elderly people ., Simulated examples with input parameters derived from our previous work 18 , 35 demonstrated that clonally derived MDS cells may increase over approximately 15 to 17 years without clinically relevant changes in bone marrow or blood counts ( Figure 2A ) ., After 17 years , the BM will contain about 1 . 66×10−6% MDS-LT-HSCs , 0 . 39% MDS-ST-HSCs , 1 . 12% MDS-MPPs , 5 . 38% MDS-CPCs and 8 . 
15% dysplastic progenitors ., Then , within a few years , the number of mature cells in PB drops significantly ( Figure 2B ) ., Correspondingly , the percentage of normal hematopoietic cells in the bone marrow declines ( Figure 2C ) ., The simulated dynamics of disease development are as follows:, 1 ) Initially , a single MDS cell expands very slowly due to higher self-renewal compared to normal LT-HSCs ., 2 ) Consequently , the number of cells in the bone marrow niche increases which leads , via feedback signaling , to reduced self-renewal of cells in the niche ., 3 ) This indirectly results in suppression of normal hematopoiesis and cytopenia ., 4 ) The low number of mature cells triggers proliferation of normal and malignant cells and thereby enhances disease progression ( Figure 2D ) ., In this model , we consider apoptosis rates of mature cells and of dysplastic progenitors only ., However , due to the increasing number of dysplastic precursors which die within 10 days , the percentage of apoptotic cells in the bone marrow increases to 5 . 6% ( under the assumption that apoptosis takes 24 h; Figure 2E ) ., Alternatively , we modeled MDS including further maturation of MDS cells and high apoptosis on the level of committed progenitors ., We assume that MDS-derived mature cells have higher apoptosis rates than normal mature cells ( half-life time only 16 h ) , which is in line with higher apoptosis rates in bone marrow and peripheral blood observed in MDS patients 3 , 15 , 36 ., These simulations lead to qualitatively similar results: in all cases , enhanced self-renewal of disease initiating cells is crucial for establishment of the disease ., This indicates that increased apoptosis is compatible with - but not required for - MDS development in our approach ( Figure 1 in Text S1 ) ., The percentage of CD34+ cells in healthy bone marrow , low-risk MDS , and high-risk MDS was 1 . 4±0 . 2% , 3 . 4±0 . 7% and 7 . 8±1 .
9% , respectively ( Figure 2 in Text S1 ) ., In our model , we assume that LT-HSCs , ST-HSCs , and MPPs , as well as MDS-LT-HSCs , MDS-ST-HSCs , and MDS-MPPs correspond to CD34+ cells – they are not pure stem cell fractions but they are all influenced by the self-renewal signal ., The percentage of primitive cells is compatible with dynamics of the mathematical model , but it rapidly increases over time ., It has been previously suggested that the percentage of blasts , defined as CD117+ or CD34+ cells , has prognostic value for survival 2 , 37 ., In this regard , it might be speculated that high-risk MDS is characterized by higher cell-intrinsic self-renewal ., Based on our mathematical model , we assumed that serum of MDS patients might comprise signaling molecules related to the systemic feedback which stimulate proliferation of CD34+ cells ., These cells can be expanded in vitro – particularly if co-cultured with MSCs – but this is associated with further loss of stemness ( Figure 3 in Text S1 ) ., We isolated serum of 57 MDS patients and 12 healthy controls ., CB-derived CD34+ cells were then stained with CFSE and cultured in parallel with culture media supplemented with 10% of individual serum samples ., After five days , the cells were analyzed by flow cytometry ( Figure 3A ) ., Overall , the proliferation rate of CD34+ cells , and hence dilution of CFSE , was significantly higher in MDS serum ( p = 0 . 007 ) ., When we subdivided MDS patients into high risk ( sAML , RAEBI and RAEBII ) , low risk ( RCMD and RCMD-RS ) , CMML I , and 5q chromosomal deletion , increased proliferation was particularly observed using serum of low-risk MDS ( p = 0 . 041; Figure 3B ) ., These results were reproduced with all patient sera using HPCs of three different cord blood samples ., Especially serum derived from leukopenic and anemic patients enhanced proliferation of HPCs ( p = 0 . 05 and p = 0 .
004 , respectively ) , whereas this trend was less pronounced with serum from thrombopenic patients ( Figure 4 ) ., However , under co-culture conditions with MSCs , the growth-supporting effect of MDS serum was obscured by the overall growth-stimulation of stromal cells , even though we did not use cytokines in these experiments ( Figure 4 in Text S1 ) ., MDS is predominantly observed in elderly patients and it is conceivable that age-matched HPCs respond differently to feedback signals ., Therefore , we performed two additional experiments with HPCs from adult bone marrow using all patient sera ., In analogy to our results with CB-derived HPCs , BM-derived CD34+ cells revealed significantly higher proliferation if stimulated with serum from MDS-patients ( del ( 5q ) : p = 0 . 0005; high-risk MDS: p = 0 . 0007; low-risk MDS: p = 0 . 0019; Figure 5 in Text S1 ) ., Overall , the results support the notion that the number of mature cells is inversely correlated with the proliferative effect of patient serum - which is in agreement with our model ., Computer simulations demonstrated that our mathematical model recapitulates clinical observations under the assumption that the feedback signal for self-renewal decays if malignant cells accumulate in the stem cell niche ., Therefore , we reasoned that MDS patient serum might also impair maintenance of the primitive immunophenotype in vitro ., To this end , we have only measured expression of CD34 and CD133 in CB-HPCs which underwent five cell divisions to exclude bias by proliferation ., In fact , CD34 and CD133 expression was moderately decreased with MDS serum ( Figure 3B ) ., In contrast , expression of CD45 was not influenced by MDS serum ., A similar effect was also observed using BM-HPCs ( Figure 5 in Text S1 ) ., Although effects of MDS serum on immunophenotype were rather moderate , they are in agreement with the proposed decrease of the self-renewal signal ., The impact of MDS serum was
further analyzed with regard to maintenance of colony forming units ( CFUs ) : CD34+ CB-HPCs were cultured in vitro for 7 days , and this was performed in parallel with medium supplemented with 10% of individual serum samples ., The cells were then reseeded in methylcellulose medium and 12 to 14 days later , colony types and numbers were detected ., Comparing MDS serum and control serum , no significant differences were found in colony-initiating cells ( CFU-G , CFU-M , CFU-GM , and CFU-GEMM ) ., Only the number of erythroid colonies ( BFU-E and CFU-E ) was significantly increased when exposed to serum of MDS patients ( p = 0 . 004 and p = 0 . 02 respectively; Figure 5A ) ., Thus , colony assays provide no support for the presence of circulating factors in MDS patient serum that increase colony formation initiated by the most primitive hematopoietic progenitors ., The proposed feedback signals may involve growth factors ., Therefore , we have analyzed serum levels of stem cell factor ( SCF ) , thrombopoietin ( TPO ) and fibroblast growth factor ( FGF ) which support expansion of CD34+ cells in vitro 32 , and of erythropoietin ( EPO ) which stimulates hematopoietic differentiation ., Concentrations of SCF , TPO and FGF were higher in MDS serum than in control serum , but this trend did not reach statistical significance ( Figure 6 in Text S1 ) ., However , the EPO-concentration was significantly higher in MDS patient serum and this is in line with previous reports ( Figure 5B ) 12 , 38 ., Our study indicates that increased self-renewal of MDS-initiating cells is the most critical parameter to initiate MDS development ., This may also explain why the disease seems to be stem cell derived as stem cells already reveal relatively high self-renewal rates ., The central question in this process is the nature of feedback signals regulating hematopoiesis ., Our models suggest that cure of MDS would only be achieved if the self-renewal rate can be
specifically down-regulated in the malignant cells - particularly in the tumor-initiating MDS-LT-HSCs ., Therefore , better understanding of the MDS-niche interaction is crucial to identify new therapeutic targets . | Introduction, Methods, Results, Discussion | Myelodysplastic syndromes ( MDS ) are triggered by an aberrant hematopoietic stem cell ( HSC ) ., It is , however , unclear how this clone interferes with physiologic blood formation ., In this study , we followed the hypothesis that the MDS clone impinges on feedback signals for self-renewal and differentiation and thereby suppresses normal hematopoiesis ., Based on the theory that the MDS clone affects feedback signals for self-renewal and differentiation and hence suppresses normal hematopoiesis , we have developed a mathematical model to simulate different modifications in MDS-initiating cells and systemic feedback signals during disease development ., These simulations revealed that the disease initiating cells must have higher self-renewal rates than normal HSCs to outcompete normal hematopoiesis ., We assumed that self-renewal is the default pathway of stem and progenitor cells which is down-regulated by an increasing number of primitive cells in the bone marrow niche – including the premature MDS cells ., Furthermore , the proliferative signal is up-regulated by cytopenia ., Overall , our model is compatible with clinically observed MDS development , even though a single mutation scenario is unlikely for real disease progression which is usually associated with complex clonal hierarchy ., For experimental validation of systemic feedback signals , we analyzed the impact of MDS patient derived serum on hematopoietic progenitor cells in vitro: in fact , MDS serum slightly increased proliferation , whereas maintenance of primitive phenotype was reduced ., However , MDS serum did not significantly affect colony forming unit ( CFU ) frequencies indicating that regulation of self-renewal may involve local 
signals from the niche ., Taken together , we suggest that initial mutations in MDS particularly favor aberrant high self-renewal rates ., Accumulation of primitive MDS cells in the bone marrow then interferes with feedback signals for normal hematopoiesis – which then results in cytopenia . | Myelodysplastic syndromes are diseases which are characterized by ineffective blood formation ., There is accumulating evidence that they are caused by an aberrant hematopoietic stem cell ., However , it is yet unclear how this malignant clone suppresses normal hematopoiesis ., To this end , we generated mathematical models under the assumption that feedback signals regulate self-renewal and proliferation of normal and diseased stem cells ., The simulations demonstrate that the malignant cells must have particularly higher self-renewal rates than normal stem cells – rather than higher proliferation rates ., On the other hand , down-regulation of self-renewal by the increasing number of malignant cells in the bone marrow niche can explain impairment of normal blood formation ., In fact , we show that serum of patients with myelodysplastic syndrome , as compared to serum of healthy donors , stimulates proliferation and moderately impacts on maintenance of hematopoietic stem and progenitor cells in vitro ., Thus , aberrant high self-renewal rates of the malignant clone seem to initiate disease development; suppression of normal blood formation is then caused by a rebound effect of feedback signals which down-regulate self-renewal of normal stem and progenitor cells as well . 
| blood cells, medicine and health sciences, cell cycle and cell division, cell processes, hematologic cancers and related disorders, mathematical computing, mathematics, stem cells, cell growth, mesenchymal stem cells, adult stem cells, computer and information sciences, cell potency, animal cells, stem cell niche, computing methods, myelodysplastic syndromes, hematopoietic progenitor cells, hematology, hematopoietic stem cells, cell biology, biology and life sciences, cellular types, physical sciences, molecular cell biology, hematopoiesis | null |
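The feedback scheme proposed in this article (self-renewal down-regulated by the number of primitive cells in the niche, proliferation down-regulated by the number of mature cells in blood, and an MDS clone with higher maximal self-renewal but lower maximal division rate) can be illustrated with a toy simulation. This is not the authors' implementation: the 90% vs 70% maximal self-renewal and the division every 100 vs 50 days are taken from the Results above, while the niche feedback strength, the mature-cell amplification and the turnover constants are invented for illustration.

```python
def simulate(days=10000, dt=1.0):
    """Toy Euler simulation of a normal stem cell clone (n) competing with an
    MDS clone (m) for a shared niche signal, plus mature blood cells (c)."""
    n, m, c = 100.0, 1.0, 1000.0      # initial cell counts (arbitrary units)
    a_n_max, a_m_max = 0.7, 0.9       # maximal self-renewal (normal vs MDS, from the text)
    p_n_max, p_m_max = 1 / 50, 1 / 100  # maximal division rate per day (from the text)
    k_niche = 0.004                   # niche feedback strength (illustrative)
    c_half, amp, d_c = 1000.0, 50.0, 0.05  # mature-cell feedback, amplification, turnover
    for _ in range(int(days / dt)):
        s = 1.0 / (1.0 + k_niche * (n + m))  # self-renewal signal falls as niche fills
        p = 1.0 / (1.0 + c / c_half)         # proliferation signal falls with mature cells
        a_n, a_m = a_n_max * s, a_m_max * s  # realized self-renewal fractions
        p_n, p_m = p_n_max * p, p_m_max * p  # realized division rates
        dn = (2 * a_n - 1) * p_n * n         # net stem cell growth: self-renew vs differentiate
        dm = (2 * a_m - 1) * p_m * m
        dc = amp * 2 * (1 - a_n) * p_n * n - d_c * c  # only the normal clone yields mature cells
        n = max(n + dt * dn, 0.0)
        m = max(m + dt * dm, 0.0)
        c = max(c + dt * dc, 0.0)
    return {"normal": n, "mds": m, "mature": c}

end = simulate()
# the MDS clone dominates the niche and mature-cell output collapses (cytopenia)
print(end)
```

Despite dividing half as often, the clone with the higher self-renewal fraction takes over the niche; the falling niche signal then pushes the normal clone's self-renewal below the maintenance threshold, and mature-cell output collapses, which is the rebound effect described in the summary.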
1,192 | journal.pcbi.1000832 | 2,010 | RNAcontext: A New Method for Learning the Sequence and Structure Binding Preferences of RNA-Binding Proteins | RBPs act in the post-transcriptional regulation ( PTR ) of gene expression by binding to target RNAs to control splicing , stability , localization and translation ., Recent draft networks of RBP-transcript physical interaction in yeast 1 , fruit flies 2 , and humans 3 reveal a complex and combinatorial pattern of RBP targeting and support an RNA regulon model 4 in which cis-regulatory transcript sequence dictates the post-transcriptional fate of an mRNA at multiple , distinct stages of regulation ., Deciphering this operon code as well as the role of individual RBPs in post-transcriptional regulation requires the detailed characterization of the binding preferences of RBPs ., We have recently introduced the RNAcompete assay 5 , a microarray-based in vitro method to estimate the binding affinity of selected RBPs to a defined population of short RNA sequences ., RNAcompete , along with in vivo methods such as RIP-seq 6 and CLIP-seq 7 , can be used to determine binding preferences of individual RBPs for a large number of RNA sequences ., Motif representations generated from these data can be used to scan mRNA transcripts to identify potential RBP binding sites ., However , this step can prove challenging because many RBPs show a preference for both specific sequences and secondary structure contexts in their binding sites 8–12 ., Despite these structural preferences , motif finding algorithms that ignore RNA secondary structure work surprisingly well for some RBPs ., This approach has been successful for both in vitro and in vivo binding data 1 , 2 , 5 , 13 , 14 ., For example , structure-naive motif finding applied to mRNAs targeted by yeast proteins Puf3p and Puf4p recovers sequence preferences confirmed by crystal structures of the RBP-RNA complexes 15 , 16; and motif models for YB-1 , SF2 and PTB fit to in vitro
binding data from the RNAcompete assay predict their in vivo targets with high accuracy 5 ., However , this approach can give misleading results when an RBP has non-trivial structural preferences ., For example , Vts1p is a yeast RBP that preferentially binds loop sequences within RNA hairpins 17; however , this binding preference can be difficult to detect without consideration of this structural preference ( e . g . , 1 ) ., RBP motif finding can be made more reliable by training structure-naive algorithms only on RNA sequence likely to be in the preferred context 9 , 18 ., For example , Foat and Stormo 18 could reliably extract the Vts1p sequence binding preferences from in vivo binding data by using only loop sequences ( from likely hairpin loops ) to train the MatrixREDUCE 19 motif finding algorithm ., Similarly , the MEMERIS 9 algorithm adapts the MEME 20 motif finding algorithm to search for RNA motifs enriched in single-stranded regions by assigning a prior to each word according to its structural accessibility ., MEMERIS predicts binding sites more accurately than MEME for a number of proteins , including the mammalian stem-loop binding RBP U1A ., However , applying this strategy only allows a single , pre-defined structural preference to be queried ., Ideally , an RBP motif finding method should consider multiple possible structural contexts simultaneously , and detect the relative preferences of a particular RBP for each ., Covariance models ( CMs ) 21 are RNA motif models often used for modeling families of ncRNAs ( e . g .
, 22 ) and have the capacity , in theory , to represent both the sequence and ( arbitrary ) structure preferences of RBPs ., However , CMs have a reported tendency to overpredict secondary structure 23 ., Indeed , recent CM-based motif models of Puf3p , Puf4p , and HuR 24 , 25 predict they preferentially bind RNA hairpins and contradict structural , in vitro and in vivo evidence 5 , 12 , 26 , 27 , that they bind unstructured ssRNA ., We present a new strategy for modeling RBP binding sites that learns both the sequence and structure binding preferences of an RBP ., Our method assumes that the primary role of RNA secondary structure in RBP binding is to establish a structural context ( e . g . , loop or unstructured ) for the RNA sequence recognized by the RBP ., As such , we annotate each nucleotide in terms of its secondary structure context ( e . g . , paired , in a hairpin loop or bulge ) ., Cognizant of the fact that a given RNA sequence can have multiple , distinct stable secondary structures , this annotation takes the form of a distribution over all its possible contexts ., These distributions are estimated using computational models of RNA folding ., Our new model can be discriminatively trained ( as 19 , 28 , 29 ) thus facilitating its use with either binding affinity data or sets of bound sequences ., We apply RNAcontext to several RNA-binding affinity datasets , demonstrating that it can infer the RBP structure and sequence-binding preferences with greater accuracy than other motif-finding methods ., RNAcontext recovers previously reported sequence and structure binding preferences for well-characterized RBPs including Vts1p , HuR , and PTB and predicts new structure binding preferences for FUSIP1 , SF2/ASF , SLM2 , and RBM4 ., We use computational algorithms to predict RNA secondary structures though our algorithm can use experimentally determined RNA secondary structures when they are available ., Instead of focusing on the single minimum free energy
structure which is often not representative of the full ensemble of possible structures 30 , we consider the ensemble of secondary structures that the RNA can form ., In the experiments reported here , we used SFOLD 30 to estimate the marginal distribution at each nucleotide over structural contexts ( e . g . paired , unpaired , hairpin loop ) for each position of the sequence by sampling a large number of structures for the sequence according to the Boltzmann distribution ., We annotated each base in each structure using our context annotation alphabet ( described below ) and then we set the structural context distribution ( hereafter called the annotation profile ) to be the empirical annotation frequencies for that base across these samples ., In all experiments described herein we used 1 , 000 samples ., Our motif model can use any annotation alphabet ., However , in this manuscript , we only use the alphabet P , L , U , M indicating that the nucleotide is paired ( P ) , in a hairpin loop ( L ) , or in an unstructured ( or external ) region ( U ) ., The last annotation , M , stands for miscellaneous because we combine the remaining unpaired contexts ( i . e . 
, the nucleotide is in a bulge , internal loop or multiloop ) ., This group of structural contexts is expressive enough to distinguish most known RBP structure preferences ., Figure 1 shows an overview of our method ., A set of sequences together with SFOLD predicted structure annotation profiles serve as input to the model ., Each input RNA molecule is scored using the sequence and structure parameters ., Formally , let S = { s_1 , … , s_N } represent the input set of sequences and let P = { P_1 , … , P_N } be a set of real-valued matrices that represent the annotation profiles of the corresponding sequences ., We use A to represent the alphabet which is composed of the structure features and associate each annotation in A with one of the rows of each P_t ., The columns of P_t correspond to the positions in sequence s_t and are discrete probability distributions over the annotations in the alphabet A . Let θ = ( K , Θ , Γ , b_seq , b_str ) represent the model parameters where K is the width of the binding site , Θ is a position weight matrix ( PWM ) of sequence features with dimensions 4 × K , and Γ is a vector of structure annotation parameters with one element Γ_α for each letter α in the alphabet ., For instance if A = { P , L , U , M } then Γ will consist of parameters ( Γ_P , Γ_L , Γ_U , Γ_M ) for the structure annotations P , L , U and M , respectively ., Lastly , b_seq and b_str stand for the bias terms in the sequence affinity model and the structural context model , respectively ., We use θ to assign a score , f ( s , P ) , to a sequence s and its corresponding annotation profile P ., For an RBP with a binding site of width K , following 31 , we define f as the probability that at least one of its subsequences of length K ( which we call K-mers ) is bound by the RBP , that is: ( 1 ) f ( s , P ) = 1 − ∏_{i=1}^{|s|−K+1} ( 1 − P_bound ( s_{i:i+K−1} , P_{i:i+K−1} ) ) where P_bound ( a , Π ) is an estimate of the probability that the K-mer with base content a and with structural context defined by the probability profile matrix Π is bound ., Here , s_{i:j} indicates the subsequence of s between the i-th element and the j-th element , inclusive , and P_{i:j} is a matrix whose columns are the annotation distributions for each of the bases between the i-th and j-th position ., We set P_bound to be the product between a term that depends only on its base content , Ψ ( a ) , and one that depends only upon its structural context , Φ ( Π ) , i . e . : ( 2 ) P_bound ( a , Π ) = Ψ ( a ) · Φ ( Π ) We interpret the term Φ ( Π ) as an estimate of the probability that the RBP will bind in the ideal structural context ., We use a standard biophysical model 28 , 31 , 32 to define Ψ ( please see Protocol S1 for more details on this model ) : ( 3 ) Ψ ( a ) = logistic ( b_seq + Σ_{j=1}^{K} Θ_{a_j , j} ) where logistic ( x ) = 1 / ( 1 + e^{−x} ) is the well-known logistic function ., The logistic function takes value 1/2 at x = 0 where it is an approximately linear function of x , but it quickly saturates toward 0 for negative x and 1 for positive x ., We also model the structural context term using a logistic function of the sum of the structure parameters weighted by corresponding profile values plus a bias term b_str : ( 4 ) Φ ( Π ) = logistic ( b_str + Σ_{j=1}^{K} Σ_{α∈A} Γ_α Π_{α , j} ) where Π_{α , j} represents the probability that the base at position j of the K-mer has structural annotation α ., In a preferred structural context , as represented by an annotation α associated with large positive values of Γ_α , the score for a K-mer approximately equals Ψ ( a ) and is thus determined by the base content ., Whereas in a highly disfavored structural context , as represented by highly negative values of Γ_α , Φ ( Π ) ≈ 0 and therefore the score P_bound ( a , Π ) ≈ 0 regardless of a because Ψ ( a ) is bounded above by 1 for all a ., So , the context term licenses binding in favored structural contexts ., In the following section , we describe how to estimate the parameters of our motif model from binding data ., However , in theory , our motif model has the flexibility to represent many different modes of RBP binding ., For example , the binding preferences of RBPs , like HuR and Vts1p , that bind their preferred sequences within a specific structural context , unstructured ( U ) 33 and hairpin loop ( L ) 17 respectively , can be represented by setting Θ to match their sequence binding preferences and Γ to have negative elements except for the element that corresponds to their preferred structural context ( either Γ_U or Γ_L , respectively ) ., The binding preferences of RBPs , like U1A , that have multiple preferred contexts ( e . g . , hairpin loops 34 or unstructured ssRNA 35 ) can be captured by setting Γ_L and Γ_U to large positive values ., RBPs , like Staufen , that bind dsRNA without obvious sequence preferences 36 , can be represented by setting the elements of Θ to constant values , and setting Γ_P to a large positive value ., Similarly , RBPs without strong structure preferences can be represented by setting the elements of Γ to zero and setting b_str to a large positive value ., Our model thus extends previous efforts 8 , which model RBP binding preferences by associating each RBP with a single preferred structural context that is required for binding ., In the next section , we describe how we can estimate the sequence and structure preferences of new RBPs by training our model using RBP binding or RBP binding affinity data for short RNA sequences ., We learn θ by using our model to attempt to reproduce the observed affinity data given the associated sequences ., In particular , we model the affinity y_t of a sequence s_t as a linear function of the sequence score f ( s_t , P_t ) with unknown slope β_1 and y-intercept β_0 and search for settings of θ , β_1 , and β_0 that minimize the sum of the squared differences between the measured affinity and our predicted affinities ., When we only know whether or not a given sequence is bound we use y_t = 1 for all bound sequences and y_t = 0 for sequences not bound ., This formulation leads to the following least squares cost function , F , that we attempt to minimize with respect to θ , β_1 , and β_0 using the L-BFGS method 37: ( 5 ) F ( θ , β_0 , β_1 ) = Σ_{t=1}^{N} ( y_t − β_0 − β_1 f ( s_t , P_t ) )² + λ ( ‖Θ‖² + ‖Γ‖² ) Here , we have added a regularization term scaled by a small constant λ to avoid indeterminacy thus ensuring a unique global minimum ., We use the same value of this constant in all experiments ., We use the bound constraints feature of the L-BFGS-B package to constrain β_1 to take positive values so that the estimated affinity increases as a function of the sequence score ., The cost function optimized by RNAcontext is multimodal , so different initializations can generate different results ., For the
experiments reported here , we used ten different initializations for each motif width ., For motif lengths longer than the minimum length , two of these initial settings are generated by taking the optimal sequence parameter matrix learned for the next smaller motif width and adding a column of zeros to its left and right sides , respectively ., The elements of the sequence parameter matrix for the other initializations are randomly sampled uniformly between −0 . 05 and 0 . 05 ., In all cases , the other parameters ( the structure parameters , the two bias terms , and the slope and intercept ) are randomly sampled uniformly between −0 . 05 and 0 . 05 ., We evaluated our motif model on RNAcompete-derived datasets 5 comprised of the measured binding preferences of nine RBPs ( i . e . , HuR , Vts1p , PTB , FUSIP1 , U1A , SF2/ASF , SLM2 , RBM4 and YB1 ) to a pool of 213 , 130 unique short ( 29- to 38-nt ) RNA sequences ( see GEO record GSE15769 and/or Agilent array design: AMADID # 022053 for the array design and data ) ., RNAcompete estimates an RBP's binding affinity for each sequence in an RNA pool based on the relative enrichment of that RNA sequence in the bound fraction versus the total RNA pool ( as measured by transformed microarray intensity ratios ) ., The RNA pool can be divided into two separate sets , Set A and Set B , that each individually satisfy the following constraints:, ( i ) each loop of length 3 to 7 ( inclusive ) is represented on at least one sequence flanked by RNA stems of 10 bases; and, ( ii ) a population of “weakly structured RNAs” wherein each possible 7-mer is represented in at least 64 different sequences that have high folding free energy , and therefore are linear or form weak secondary structures ., We call the group satisfying the first constraint the stem-loop sequences ., This group also contains 60% of the possible length eight loops ., We call the sequences satisfying the second constraint the weakly structured sequences ., There is no overlap between the stem-loop and weakly structured sequences ., So in summary , there are two different groups of stem-loops ,
one in Set A and one in Set B , and similarly , two different groups of weakly structured sequences ., It is important to note two things ., First , though we attempted to design these sequences to be linear or hairpins , there are many unintended structures represented in the pool ., For example , some of the sequences contain bulge or internal loops and some of the weakly structured sequences contain stem-loops ., Second , no two sequences within the pool share a common subsequence more than 12 nt long ., The design and properties of these sequences are described in greater detail in 5 ., The division of the RNA sequence pool into Set A and Set B provides a natural strategy for evaluating our motif models using two-fold cross-validation: we train our algorithm on one of the two sets and test its predictive power on the other set ., This strategy provides us with two independent measurements of performance on non-overlapping training sets ., Table S1 contains more information on the sizes and compositions of the sequences used for training and testing ., The categorizations “Positive” , “Negative” , and “Other” that appear in this table are described below ., Note that due to stringent RNAcompete quality controls , some affinity data is missing for some of the sequences , so the numbers in the table do not add up to 213 , 130 for each RBP ., We evaluated RNAcontext against two other motif finding methods: MEMERIS 9 and MatrixREDUCE 19 ., MEMERIS and RNAcontext use similar approaches to model the structural context of an RNA binding site except that MEMERIS only models a single structural context whereas RNAcontext considers multiple contexts simultaneously ., In contrast , MatrixREDUCE does not consider the structural context of RBP binding sites and therefore can help determine the value of considering structural context in RNA motif finding ., Additionally , MatrixREDUCE outperforms many standard DNA motif finding algorithms on a similar experimental assay 38 and
therefore provides a strong baseline against which to compare RNAcontext and MEMERIS ., Also , like RNAcontext , MatrixREDUCE learns its motif model by trying to predict RNA sequence affinity whereas MEMERIS searches for motif models enriched in a set of bound sequences ., In this subsection we describe our protocol for using the training data to fit the MEMERIS , MatrixREDUCE and RNAcontext motif models ., Note that for all three methods , we fit all parameters , including those of the motif models and any free parameters ( like motif width ) , using the training data ., One of the free parameters that we consider for each method is whether it is better to train their motif model on the whole training set , or a defined subset of the training set ., All of the free parameters that we consider for each method are described below ., For every setting of the free parameters , we fit one motif model ., The “best” motif model for each method was selected based on its ability to correctly classify “Positive” and “Negative” RNA sequences in the training set , as defined in the next paragraph ., The final result of training is a single motif model for each method that we then evaluate on the test set ., The parameters of some motif models are fit using subsets of the training set because:, ( i ) MatrixREDUCE does not model RNA secondary structure and it is possible that its performance would degrade when trained on stem-loop sequences ( most of whose bases are paired ) ; and, ( ii ) MEMERIS takes as input a set of “bound” sequences that contain RBP binding sites ., For MEMERIS , “bound” sequences are selected using a manual cutoff that captures the right tail of the distribution of the RNAcompete affinity estimates ., We used a different cutoff for each RBP and each training set and the number of bound sequences ranged between 234 and 792 for the RBPs analyzed ., Additionally , we used these bound sequences as the “Positive” sequences for Area Under the
Precision-Recall Curve ( AUC-PR ) ., For the “Negative” sequences required by the AUC-PR calculation , we used those with estimated affinities below the median affinity of the training set ., Any sequence not deemed a “Positive” or “Negative” is labeled as “Other” in Table S1 ., We score each motif model's performance by using it to estimate RNA-binding affinities for the “Positive” and “Negative” sequences and then evaluating classification accuracy using the AUC-PR ., Because each algorithm models RBP binding preferences in a slightly different manner , in this section , we also describe how we estimate RNA-binding affinity for each sequence using the motif models for each algorithm ., For each method , we trained two sets of motif models ., One set of models was fit using the full training set which consists of all RNA sequences in the training set for MatrixREDUCE and RNAcontext and all bound RNA sequences in the training set for MEMERIS ., The other set of models was fit using only the weakly structured sequences in the training set ( i . e . , removing the stem-loops ) ., We consider a wide range of combinations of free parameters for MEMERIS ., In particular , we tried all possible combinations of the following free parameter choices: the EF and PU options for measurement of single-strandedness; OOPS , ZOOPS and TCM options for the expected number of motifs per sequence ( see Protocol S1 for details on these options ) ; motif lengths between 4 and 12 nts ( inclusive ) ; different values for the pseudocount parameter ( i . e . 0 . 1 , 1 and 3 ) ; and selecting the training set using a permissive cutoff ( i . e . , the bound sequences ) or a stringent cutoff ( i . e . , the top half of bound sequences ) ., The final option means that we consider four different subsets of the training set for each setting of the other free parameters ( i . e .
permissive/full , stringent/full , permissive/weak , stringent/weak ) ., In total , we fit 648 different motif models for MEMERIS for each training set ., We estimate affinity for each RNA sequence using a MEMERIS Position Frequency Matrix ( PFM ) motif model by following an approach similar to that used by MotifRegressor 39 ., Namely , we calculated the foreground probability of a K-mer under the product-multinomial distribution defined by the PFM and calculated the background probability using a third-order Markov model trained on either the full training set ( or test set , as appropriate ) ., As explained in Protocol S1 , the ratio of the foreground and background probabilities is an estimate of the relative affinity of the RBP for that K-mer ., For some RBPs , when it led to a performance increase , we also multiplied this affinity by the probability that the site was accessible , as determined using the optimized settings of the EF/PU and pseudocount parameters for that training set ., To estimate the affinity of the entire sequence , we summed its k-mer relative affinities ., Note that we also tried MAST 40 to score the sequences using MEMERISs motif models but test set performance decreased ( data not shown ) ., We used MatrixREDUCE to generate single motifs with widths ranging from to by setting to ., The MatrixREDUCE program automatically selects the appropriate motif width , so we only needed to choose between two different MatrixREDUCE motifs on each training set ( one trained on the full set and the other only on the weakly structured sequences ) ., Note that MatrixREDUCEs PSAM motif model directly estimates relative binding affinity of the RBP for each k-mer , so to estimate RNA sequence affinity , we summed PSAM scores for each constituent k-mer ., We ran RNAcontext with motifs width ranging from to , thus creating 18 motif models per training set , and used equation ( 1 ) to score RNA sequences using these models ., For all three methods , for each 
training set , we used the AUC-PR on training set "Positives" and "Negatives" to select the best single model among the fitted models ., The free parameter settings for the selected models are in Table S2 ., RNAcontext achieved higher average AUC-PR values than MEMERIS and MatrixREDUCE on all of the nine RBPs analyzed ( Table 1 ) ., It also had significantly higher AUC-PRs than either method on 15 of the 18 test sets encompassing seven of the nine RBPs ( the largest P-value was , Wilcoxon's sign-rank test on the AUC-PR values of 1 , 000 bootstrap samples; see Table S3 for the complete results of the bootstrap analysis ) ., The improvement in AUC-PR of RNAcontext compared with MatrixREDUCE is largest for proteins whose preferred structural context is less common in the RNA pool , reflecting the fact that these are the hardest binding sites for MatrixREDUCE to predict ., For example , RNAcontext performs much better than MatrixREDUCE on Vts1p , which binds to CNGG in the loop of an RNA stem-loop ., This sequence appears frequently outside of a loop context in the RNA pool ., We also see large improvements for RBM4 , which binds to CG-containing sequences in an unpaired context , likely because these sequences often appear in stems ., In contrast , HuR's binding site is U-rich and , as such , is rarely paired in either the training or test set ., In this circumstance , MatrixREDUCE's lack of a structural model does little harm to its performance ., Although MEMERIS has higher average AUC-PR than MatrixREDUCE for the stem-loop binding proteins Vts1p and U1A , reflecting the value of its model of structural context , its average AUC-PR was otherwise worse than that of MatrixREDUCE and always worse than that of RNAcontext ., This is likely due to its inability to make use of the affinity data associated with each sequence ., One consequence of this is that it can only be trained on a small subset of the data ., Some of the loss in AUC-PR on the test set may also be due to overfitting because
of the large number of parameter combinations that needed to be considered ., Having established that RNAcontext can capture RBP binding preferences better than comparable motif models that either do not model RNA secondary structure ( MatrixREDUCE ) or use a limited representation ( MEMERIS ) , we then attempted to confirm that the added predictive value was due to the incorporation of structural context rather than differences in how we estimate sequence affinity ., To do this , we compared our model based on the structural annotation alphabet to a simplified version of our model whose alphabet only contains a single letter ( i . e . all bases have an identical structural annotation ) ., As in previous sections , the two models were fit to the data for each of the nine RBPs using a variety of motif widths ( 4–12 ) ., Also , as before , we used training set AUC-PR to choose the optimal motif width and to choose between the full training set and only the weakly structured sequences ., After selecting the single best model for the two methods , we compared RNAcontext against the structure-naive model using AUC-PR on the full test set ., To assess the significance of the difference in AUC-PR , we used the 95% confidence interval of the difference estimated from 1 , 000 bootstrap samples ., Figure 2 shows these differences for nine RBPs on the two cross-validation test sets ., Using structural context led to a significant improvement in AUC-PR for eight of the nine RBPs ., In some cases , the difference was dramatic , particularly for Vts1p , RBM4 , FUSIP1 and U1A ., We then sought to assess the accuracy of position-specific scoring matrix ( PSSM ) approximations of RNA-sequence binding preferences by comparing the predictive power of inferred 7-mer affinities to that of the three PSSM-based models ., We trained a "fully-specified 7-mer model" that estimates the binding affinity of an RBP for every 7-mer by taking a trimmed average of the transformed intensity ratios of
the weakly-structured sequences that contain the 7-mer in the training set ( see 5 for more details of this model ) ., We then used these estimated affinities to assign a score to RNA sequences longer than seven nucleotides , by taking the mean of the affinities of each 7-mer in each sequence in the test set ., We also trained and evaluated RNAcompete , MatrixREDUCE and MEMERIS motif models as previously described except that we always restricted the training and test sets to the weakly-structured sequences ., We used only the weakly-structured sequences in this comparison so that we could more readily evaluate the ability of PSSM models to assess sequence binding preferences separately from each methods ability to capture RBP structure binding preferences ., Figure 3 compares the 7-mer model against the three methods with respect to average AUC-PR on the test sets ., PSSM-based motif models perform significantly better than the 7-mer model for every RBP except U1A ( and only on test set A ) , YB1 , and SF2/ASF ( the Wilcoxon sign-rank P-values for the best PSSM motif model are all less than ) ., Notice that because MatrixREDUCE performs significantly better than the RNAcompete method for five of the nine RBPs , this performance gain can not be explained by the incorporation of structural context in RNAcontext ., Having established that RNAcontext accurately predicts the in vitro affinity for seven of the nine RBPs ( with the exception of YB-1 and U1A ) , we applied RNAcontext to the entire dataset to make the best possible prediction for their binding preferences ., The results are shown in Figure 4 and Figure 5 ., Figure 4 shows the relative structural context preference of each RBP ., RNAcontexts predicted structural preferences are consistent with co-crystal structures for Vts1p 17 ( loop ) and PTB 41 ( ssRNA ) and in vitro and in vivo binding data for HuR 5 , 8 , 12 ., RNAcontext also predicts new structural preferences for SLM2 , RBM4 and SF2/ASF ., Of 
particular interest , is that RNAcontext predicts that SF2/ASF has a slight preference for RNA binding sites in bulges , internal loops , and/or multiloops ( the M annotation ) ., For FUSIP1 , we report the motif model trained using only the weakly structured sequences even though the model trained on the full set ( shown in Figure S1 ) had higher AUC-PR ., As mentioned in the legend of Figure S1 , we could not rule out the possibility that this model reflected an artifact of our pool design despite the fact that the two models both suggest that FUSIP1 prefers its binding site to be 5′ to an RNA stem ., Figure 5 compares the motif logo representations ( generated by Enologos software42 ) of RNAcontexts parameters with previously reported motifs for those RBPs ., To derive the energy parameters required by Enologos , we uniformly rescaled the elements of the matrix so that of the optimal binding site , , would be 0 . 5 ( as suggested by 31 ) ., Underneath each of the logos for the RNAcontext motifs , we have displayed an estimate of the preferred structural context for each base ., In order to identify this context , we found the top 20 best scoring k-mers in the test set under each motif model , averaged the annotation profiles for these 20 k-mers and deemed the annotation with the highest average frequency to be the preferred context for each position in the k-mer ., These estimates recover the fact that the Vts1p binding site ( CNGG ) occurs at the 5′ end of the hairpin loop ., Our RNAcontext motifs match previously reported binding sites 12 , 17 , 43–45 and the motifs that we have previously derived from the RNAcompete data5 ., In both Figure 4 and Figure 5 , we observe a preference for the M structural context for the SF2/ASF motif ., This preference has not been previously reported for SF2/ASF 43 ., To confirm this unusual preference , we collected data on the in vivo targets of SF2/ASF from 13 ., These targets were generated using the CLIP-Seq assay and 
consist of 296 short RNA fragments that cross-link to the protein in cultured cells which we call “bound”; and 314 transcript sequences not observed to cross-link which we call “unbound” ., These data supported our inferred structure preferences for SF2/ASF ., In particular , by manual inspection , we discovered a number of cases of the RNAcontext motif within bulge and internal loops within the bound sequences ., Also , using our model trained on the RNAcompete data , we were able to distinguish between bound and unbound sequences with higher accuracy using our model ( AUC-PR 0 . 915 ) compared with the version of our model with a single letter annotation alphabet ( AUC-PR 0 . 898 ) and MatrixREDUCE ( AUC-PR 0 . 898 ) ., Furthermore , when we train our RNAcontext model on the in vivo data , assigning bound sequences an affinity of 1 and unbound ones an affinity of −1 , we recover the same structural preference for SF2/ASF ( Figure S2 ) ., We have demonstrated that RNAcontext represents an advance over existing methods for modeling mRNA-binding protein binding preferences ., Motifs learned by RNAcontext more accurately predicted a held out in vitro binding dataset for all of the nine RBPs tested ., Seven of these differences were statistically significant ., As expected , the size of an improvement depends on the relative representation of the preferred binding site in the preferred structural context ( or contexts ) in the RNAcontext dataset ., RNAcontext motif models reflect previously reported sequence and structure preferences for well-studied RBPs like HuR , Vts1p and PTB and predict new structure binding preferences for SLM2 , RBM4 and SF2/ASF ., RNAcontexts predictions are supported by in vivo binding data for SF2/ASF: the RNAcontext in vitro motif model more ac | Introduction, Methods, Results, Discussion | Metazoan genomes encode hundreds of RNA-binding proteins ( RBPs ) ., These proteins regulate post-transcriptional gene expression and have critical 
roles in numerous cellular processes including mRNA splicing , export , stability and translation ., Despite their ubiquity and importance , the binding preferences for most RBPs are not well characterized ., In vitro and in vivo studies , using affinity selection-based approaches , have successfully identified RNA sequence associated with specific RBPs; however , it is difficult to infer RBP sequence and structural preferences without specifically designed motif finding methods ., In this study , we introduce a new motif-finding method , RNAcontext , designed to elucidate RBP-specific sequence and structural preferences with greater accuracy than existing approaches ., We evaluated RNAcontext on recently published in vitro and in vivo RNA affinity selected data and demonstrate that RNAcontext identifies known binding preferences for several control proteins including HuR , PTB , and Vts1p and predicts new RNA structure preferences for SF2/ASF , RBM4 , FUSIP1 and SLM2 ., The predicted preferences for SF2/ASF are consistent with its recently reported in vivo binding sites ., RNAcontext is an accurate and efficient motif finding method ideally suited for using large-scale RNA-binding affinity datasets to determine the relative binding preferences of RBPs for a wide range of RNA sequences and structures . 
| Many disease-associated mutations do not change the protein sequence of genes; instead they change the instructions on how a genes mRNA transcript should be processed ., Translating these instructions allows us to better understand the connection between these mutations and disease ., RNA-binding proteins ( RBP ) perform this translation by recognizing particular “phrases” that occupy short regions of the transcript ., Recognition occurs by the binding of the RBP to the phrase ., The set of phrases bound by a particular RBP is defined by the RNA base content of the binding site as well as the 3D configuration of these bases ., Because it is impossible to assess RBP binding to every possible phrase , we have developed a mathematical model called RNAcontext that can be trained by measuring RBP binding strength on one set of phrases ., Once trained , this model can then be used to accurately predict binding strength to any possible phrase ., Compared to previously described methods , RNAcontext learns a more precise description of the 3D shapes of binding sites ., This precision translates into more accurate generalization of RBP binding preferences to new phrases and allows us to make new discoveries about the binding preferences of well-studied RBPs . | molecular biology/rna-protein interactions, computational biology/sequence motif analysis | null |
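The MEMERIS scoring protocol described in the methods above (relative k-mer affinity as a ratio of foreground probability under the PFM's product-multinomial distribution to a background probability, summed over all k-mers in the sequence) can be sketched as follows. This is a minimal illustration under stated assumptions, not the published implementation: the PFM values and the 0th-order per-base background frequencies are hypothetical stand-ins, whereas the text uses a third-order Markov background model.

```python
def kmer_affinity(kmer, pfm, bg):
    """Relative affinity of one k-mer: probability under the PFM
    (product-multinomial, one column of base probabilities per position)
    divided by its background probability. `bg` is a simplified 0th-order
    background (per-base frequencies); the paper uses a third-order
    Markov model here."""
    idx = {"A": 0, "C": 1, "G": 2, "U": 3}
    fg = 1.0
    p_bg = 1.0
    for pos, base in enumerate(kmer):
        fg *= pfm[pos][idx[base]]
        p_bg *= bg[idx[base]]
    return fg / p_bg


def sequence_affinity(seq, pfm, bg):
    """Estimate whole-sequence affinity by summing the relative
    affinities of every constituent k-mer window."""
    k = len(pfm)
    return sum(kmer_affinity(seq[i:i + k], pfm, bg)
               for i in range(len(seq) - k + 1))
```

With a uniform PFM and a uniform background, every k-mer scores 1.0, so a sequence's score is simply its number of k-mer windows; real PFMs concentrate probability mass on the preferred bases, so matching windows dominate the sum.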
995 | journal.pcbi.1007044 | 2,019 | Mathematical model studies of the comprehensive generation of major and minor phyllotactic patterns in plants with a predominant focus on orixate phyllotaxis | Plants bear leaves around the stem in a regular arrangement; this is termed phyllotaxis ., Across diverse plant species , phyllotaxis has common characteristics , which are often described mathematically and are reflected in a limited variety of phyllotactic patterns , including the distichous , decussate , tricussate , and Fibonacci spiral ( spiral with a divergence angle close to the golden angle of 137 . 5° ) patterns 1 ., The origin of the regularity of , and the few particular patterns that are allowed in , phyllotaxis have long been fascinating questions for botanists ., In the early days , morphological studies attributed phyllotactic patterning to Hofmeister’s axiom , which claims that , on the periphery of the shoot apical meristem ( SAM ) , a new leaf primordium is formed in the largest gap between existing primordia and as far away as possible from them 2 ., Following this axiom , many theoretical models have been proposed to explain the generation of phyllotactic patterns 3–21 ., Such theoretical models are based on a common concept: the existence of an inhibitory field created by a repulsive , either physical or chemical , interaction between leaf primordia , which conforms to Hofmeister’s axiom ., Among them , the two mathematical models proposed by Douady and Couder 15–18 are particularly notable ( they will be referred to as DC1 and DC2 hereafter ) ., The key assumptions shared by DC models are that each individual leaf primordium emits a constant power that inhibits the production of a new primordium near it and that the inhibitory effect of this power decreases as the distance from the emission point increases ., In DC1 , it is additionally assumed that leaf primordia are formed one by one at a constant time interval , i . e . 
, plastochron; thus , DC1 deals only with alternate phyllotaxis 15 , 16 ., In contrast , DC2 does not deny the simultaneous formation of leaf primordia or temporal changes of the plastochron and can deal with both alternate and whorled phyllotaxis 17 ., Computer simulations using DC models demonstrated that they can generate various major standard phyllotactic patterns as stable patterns that depend on parameter settings 15–17 ., In the early 2000s , experimental studies showed that auxin determines the initiation of shoot lateral organs and that its polar transport serves as a driving force of phyllotactic patterning 22–24 ., Briefly , the auxin efflux carrier PIN1 , which is localized asymmetrically in epidermal cells of the shoot apex , polarly transports auxin to create auxin convergence , thus directing the position of lateral organ initiation ., Subsequently , assuming the existence of a positive feedback regulatory loop between the auxin concentration gradient and PIN1 localization , a novel mathematical model was developed to explain the spontaneous formation of the auxin convergence ., It was further shown by computer simulation analysis that these models can produce several typical patterns of standard phyllotaxis 25 , 26 ., In the auxin-transport-based models , auxin polar transport toward the auxin convergence removes auxin from its surroundings , which prevents the formation of a new , vicinal auxin convergence ., This effect is considered to correspond to the repulsive interaction between primordia described in the previous models ., The parameters of the auxin-transport-based model were mapped on the parameters of DC2 27 , 28 , which shows that DC2 can be treated as an abstract model of the auxin-transport-based models ., DC models and the auxin-transport-based models , DC2 in particular , have been studied extensively regarding the ability to produce the various phyllotactic patterns that are observed in nature 15–17 , 25–26; however , several types 
were never addressed in the studies that used these models ., An interesting example is orixate phyllotaxis , which is named after Orixa japonica ( Rutaceae , Sapindales ) 29 ., Orixate phyllotaxis is a tetrastichous alternate phyllotaxis that is characterized by the periodic repetition of a sequence of different divergence angles: 180° , 90° , −180° ( 180° ) , and −90° ( 270° ) ., Although plant species that show orixate phyllotaxis are uncommon , they are found in several distant taxa ( Fig 1 ) ., Many species of Kniphofia ( Asphodelaceae , Asparagales ) display a tetrastichous arrangement of leaves 30 , and K . uvaria , K . pumila , and K . tysonii exhibit orixate phyllotaxis 31 , 32 ., Lagestroemia indica ( Lythraceae , Myrtales ) and Berchemiella berchemiaefolia ( Rhamnaceae , Rosales ) are also known as species with orixate phyllotaxis 29 ., The rare and sporadic distribution of orixate phyllotaxis among plants suggests that this peculiar phyllotaxis occurred independently a few times during plant evolution ., Therefore , it is likely that orixate phyllotaxis is generated by a common regulatory mechanism of leaf-primordium formation under some particular condition rather than by an orixate-unique mechanism ., If this is true , mathematical models that account fully for the spatial regulation of leaf-primordium formation should be able to produce not only major phyllotactic patterns , but also orixate phyllotaxis ., In this study , we re-examined the original DC models exhaustively under various parameter conditions , to test whether they can produce orixate phyllotaxis ., We then expanded DC models by introducing primordial age-dependent changes in the inhibitory power ., Our results indicate that a late and slow increase in the inhibitory power is critical for the establishment of orixate phyllotaxis and imply that changing the inhibitory power is generally an important component of the mechanism of phyllotactic patterning ., Terminal winter buds of O . 
japonica that had been collected in July from nine plants growing at the Koishikawa Botanical Gardens , Graduate School of Science , The University of Tokyo were used for morphological analyses ., The winter buds were fixed with 5% v/v formalin , 5% v/v acetic acid , 50% v/v ethanol ( FAA ) , dehydrated in an ethanol series , and finally infiltrated in 100% ethanol ., For light microscopic observation , the dehydrated samples were embedded in Technovit 7100 , cut into 5-μm-thick sections using a rotary microtome , and stained with 0 . 5% w/v toluidine blue ., The center of gravity was determined for each leaf primordium on the section with ImageJ ( https://imagej . nih . gov ) and was used as its position when measuring morphometric data ., For scanning electron microscopy ( SEM ) , the dehydrated samples were infiltrated once with a 1:1 v/v mixture of ethanol and isoamyl acetate and twice with isoamyl acetate ., Subsequently , the samples were critical point dried , sputter coated with gold–palladium , and observed using SEM ( Hitachi S-3400N ) ., The essential points of the DC1 model are as follows 15 , 16 ., At the time when the nth primordium Ln is arising , for a position ( R0 cos θ , R0 sin θ ) on the circle M , the inhibitory field strength I ( θ ) is calculated by summing the inhibitory effects from all preceding primordia , L1 to Ln−1 , as follows: I(\theta) \equiv \sum_{m=1}^{n-1} k \left( d_m(\theta) \right)^{-\eta} = k \sum_{m=1}^{n-1} \left( R_0^2 + r_m^2 - 2 R_0 r_m \cos(\theta - \theta_m) \right)^{-\eta/2} , ( 1 ) , where dm is the distance between the position ( R0 cos θ , R0 sin θ ) and the mth primordium ( rm cos θm , rm sin θm ) and k is a proportional coefficient ( Fig 2A ) ., In this equation , the inhibitory field strength is assumed to be inversely proportional to the ηth power of the distance from the point emitting the inhibitory power ., Considering assumptions 5 and 7 , the distance from the center of the shoot apex to the mth primordium ( rm ) is expressed with the initial radial velocity V0 as: r_m = R_0 e^{(V_0/R_0)(n-m)T} ., ( 2 ) , The total inhibitory field strength I is expressed as: I(\theta) = \frac{k}{R_0^{\eta}} \sum_{m=1}^{n-1} \left\{ 1 + e^{2(n-m)G} - 2 e^{(n-m)G} \cos(\theta - \theta_m) \right\}^{-\eta/2} , ( 3 ) , where G is defined as G ≡ V0T/R0 = ln ( rm/rm+1 ) ., Morphometrically , rm/rm+1 is identical to the "plastochron ratio" introduced by Richards 34 ., The point ( R0 cos θ , R0 sin θ ) where I ( θ ) is smallest is chosen for the position of a new primordium ., Note that η and G are the only relevant parameters that influence the behavior of I ( θ ) in DC1 ., The essential points of the DC2 model are as follows 17 ., Positions on the conical surface are expressed in spherical coordinates ( r , ψ/2 , θ ) ( Fig 2B ) ., The inhibitory field strength I ( θ ) at the position ( R0 , ψ/2 , θ ) on M is calculated by summing the inhibitory effects from all preceding primordia , L1 to Ln−1 , as follows: I(\theta) \equiv \sum_{m=1}^{n-1} E\left( \frac{d_m(\theta)}{d_0} \right) , ( 4 ) , where dm is the distance between the mth primordium and the position ( R0 , ψ/2 , θ ) , d0 is the maximum distance within which an existing primordium excludes a new primordium , and E is the inhibitory effect from the preceding primordium , which is defined as a monotonically decreasing , downward-convex function: E(x) \equiv E_s \, \frac{-1 + (\tanh \alpha x)^{-1}}{-1 + (\tanh \alpha)^{-1}} , ( 5 ) , where , if I ( θ ) <Es , a new primordium is placed at the position ( R0 , ψ/2 , θ ) ., Throughout this study , Es = 1 ., Because of assumption 6 , the distance from the center of the shoot apex to the mth primordium on the conical surface ( rm ) is expressed with the time after its emergence Tm and the initial radial velocity V0 as: r_m = R_0 e^{(V_0/R_0) T_m} ., ( 6 ) , By using tm≡TmV0/R0 , a standardized age of the mth primordium defined as the product of Tm and the relative SAM growth rate V0/R0 , rm is more simply expressed as: r_m = R_0 e^{t_m} ., ( 7 ) , The DC2 model is characterized by three parameters: α , N ≡ sin ( ψ/2 ) , and Γ ≡ d0 / ( R0 N ) ., These parameters represent the steepness of the decline of the inhibitory effect around the threshold , the flatness of the
shoot apex , and the ratio of the inhibition range to the SAM size , respectively ., In DC2 , as a distance between points ( r ( 1 ) , ψ/2 , θ ( 1 ) ) and ( r ( 2 ) , ψ/2 , θ ( 2 ) ) on the conical surface , instead of the true Euclidean distance , its slightly modified version ( as defined in the following equation ) was used to avoid the discontinuity problem 17: d \equiv \sqrt{ \frac{ \left( r^{(1)} - r^{(2)} \right)^2 }{N} + 2 N r^{(1)} r^{(2)} \left\{ 1 - \cos\left( \theta^{(1)} - \theta^{(2)} \right) \right\} } ., ( 8 ) Model simulations were implemented in C++ with Visual C++ in Microsoft Visual Studio 2015 as an integrated development environment ., Contour mapping was performed using OpenCV ver . 3 . 3 . 1 ( https://opencv . org/ ) ., Computer simulations using DC2 and DC2-derived models were initiated by placing a single primordium or two primordia at a central angle of 120° on the SAM periphery ., In the former initial condition , the second primordium arises at a certain time or immediately after the first primordium , depending on parameter settings , at the opposite position , and in some cases , more primordia are immediately inserted at middle positions ., Thus computer simulations with this condition substantially cover situations starting with 1×2^x primordia ( x = 0 , 1 , 2⋯ ) evenly distributed on the SAM periphery ., Similarly , simulations with the latter condition substantially cover situations starting with 3×2^x primordia ( x = 0 , 1 , 2⋯ ) ., We also tested simulations with another initial condition , in which two primordia were placed at opposite positions with a central angle of 180° , but they returned exactly the same results as simulations initiated by placing a single primordium and are therefore omitted ., Computer simulations were performed with an angle resolution of 0 . 1° ., DC2 and DC2-derived models were simulated with a time step of Δtm = 0 .
001 ., In all model simulations , calculation was iterated until the total number of primordia reached 100 ., For alternate patterns generated by simulation , the last nine primordia were used to judge the stability and regularity of divergence angles ., For the other patterns , the last two nodes were used to judge the stability of the number of primordia per node ., Then the patterns were categorized and displayed as shown in Fig 3 ., First , we performed an anatomical analysis of the apical winter buds of O . japonica , to characterize morphologically its phyllotaxis ., In the transverse sections of the winter buds , there was a very obvious tetrastichous pattern of leaf primordia , which were arranged in opposite pairs on either of two orthogonal lines ( Fig 4A ) ., This pattern looked similar to decussate phyllotaxis; however , unlike decussate phyllotaxis , it was not symmetric ., Opposite pairs of primordia varied in size and radial distance and , in each pair , a smaller primordium was positioned closer to the center of the shoot apex ., Such asymmetry was also clearly recognized in the longitudinal sections and by observations performed using SEM ( Fig 4B and 4C ) ., Importantly , SEM observations detected incipient primordia that were not paired ( Fig 4C ) ., Therefore , the asymmetric arrangement of leaves was attributed to the alternate initiation of leaf primordia instead of the secondary displacement of originally decussate leaf primordia ., The divergence angle between successive primordia changed in the sequence of approximately 180° , 90° , −180° ( 180° ) , and −90° ( 270° ) , and this cycle was repeated a few times in the winter bud ( Fig 4D ) ., These results confirmed that the phyllotaxis of O . 
japonica is genuinely an “orixate phyllotaxis” ., Richards’ plastochron ratio was found to oscillate in relation to the divergence angle ., Plastochron ratios measured from the adjacent pairs of primordia with a divergence angle of approximately ±90° were significantly larger than those measured from the opposite pairs with a divergence angle of approximately ±180° ( Fig 4E ) ., A similar relationship between divergence angles and plastochron ratios had been , albeit fragmentarily , described for the orixate phyllotaxis of K . uvaria 32; thus , it is likely to be a common feature of orixate phyllotaxis ., DC1 is an inhibitory field model specialized for alternate phyllotactic patterning ., DC1 assumes one-by-one formation of leaf primordia at a constant time interval , which strongly limits the model flexibility 16 ., Nevertheless , as this constraint makes the patterning process simple and possible to be dealt with theoretically , it is worth investigating DC1 as a primary model for generation of any types of alternate phyllotaxis ., To test whether DC1 can produce orixate phyllotaxis , we re-examined this established model via detailed computer simulation analysis using exhaustive combinations of the determinant parameters , η and G . As reported previously 15 , 16 , distichous and relatively major spiral phyllotactic patterns , i . e . , alternate patterns with a regular divergence angle near 180° , a Fibonacci angle ( 137 . 5° ) , or a Lucas angle ( 99 . 5° ) , were generated as stable patterns over broad ranges of η and G in these simulations ( Fig 5 ) ., Of note , when η and G were set to 1–3 and about 0 . 
2 , respectively , tetrastichous patterns were formed that resembled orixate phyllotaxis , as they showed a four-cycle periodic change of the divergence angle in the order of p , q , −p , and −q ( −180°≤p≤180° , |p|>|q| ) ( Fig 5A ) ., In these patterns , however , the larger absolute value of the divergence angle deviated considerably from 180° , whereas it should be very close to 180° in orixate phyllotaxis ( Fig 5B ) ., These patterns showed nonorthogonal tetrastichy , which is distinct in appearance from the orthogonal tetrastichy of orixate phyllotaxis ( Fig 5C ) ., Therefore , we concluded that the tetrastichous patterns found in simulations with DC1 are not orixate and that DC1 does not generate the orixate phyllotactic pattern at any parameter setting ., The absence of normal orixate phyllotaxis , the divergence angles of which are exactly ±180° and ±90° , in the context of DC1 can be explained analytically ( S1 Text ) ., Next , we examined whether modification of DC1 could enable it to produce orixate phyllotaxis ., In an attempt to modify DC1 , we focused on the inhibitory power of each leaf primordium against new primordium formation , which is assumed to be constant in DC models but may possibly change during leaf development , and expanded DC1 by introducing age-dependent , sigmoidal changes in the inhibitory power ., In this expanded version of DC1 ( EDC1 ) , the inhibitory field strength I ( θ ) was redefined as the summation of the products of the age-dependent change in the inhibitory power and the distance-dependent decline of its effect: I(\theta) \equiv \sum_{m=1}^{n-1} \left\{ k \left( d_m(\theta) \right)^{-\eta} F(n-m) \right\} ., ( 9 ) , F is defined as: F(\Delta t) \equiv \frac{1}{1 + e^{-a(\Delta t - b)}} , ( 10 ) , where parameters a and b are constants that represent the rate and timing of the age-dependent changes in the inhibitory power , respectively ., Under this equation , in an age-dependent manner , the inhibitory power increases at a>0 and decreases at a<0 ., In the present study , η
was fixed at 2 for EDC1 ., Prior to computer simulation analysis with EDC1 , we searched for parameters of EDC1 that can fit the requirements of normal orixate phyllotaxis ., When the normal pattern of orixate phyllotaxis is stably maintained , a rectangular coordinate system with the origin at the center of the shoot apex can be set such that all primordia lie on the coordinate axes , and every fourth primordium is located on the same axis in the same direction , i . e . , the position of any primordium ( mth primordium ) can be expressed as ( rm cos θ_{m−4i} , rm sin θ_{m−4i} ) for integers i ., Under this condition , we considered whether a new primordium ( nth primordium ) is produced at the position ( R0 cos θ_{n−4i} , R0 sin θ_{n−4i} ) , to keep the normal orixate phyllotactic pattern ., In EDC1 , as in DC1 , new primordium formation at ( R0 cos θ_{n−4i} , R0 sin θ_{n−4i} ) implies that the inhibitory field strength I ( θ ) on the circle M has a minimum at θ_{n−4i} ., For this reason , we first attempted to solve the following equation: \left. \frac{dI(\theta)}{d\theta} \right|_{\theta - \theta_{n-4i} = 0} = 0 ., ( 11 ) , This equation was numerically solved under two geometrical situations of primordia: the divergence angle between the newly arising primordium and the last primordium is ±90° ( situation 1 ) or ±180° ( situation 2 ) ( S1A Fig ) ., The solutions obtained identified parameter sets that satisfied the above equation under both these two situations ( Fig 6A , S1B Fig ) ., The calculation of I ( θ ) using the identified parameter sets showed that I ( θ ) has a local and global minimum around θ_{n−4i} with large values of G , such as 0 . 5 or 1 , while it has a local maximum instead of a minimum around θ_{n−4i} with small G values , such as 0 .
1 ( S1C Fig ) ., This result indicates the possibility that EDC1 can form orixate phyllotaxis as a stable pattern under a particular parameter setting with large G values ., We conducted computer simulations using EDC1 over broad ranges of parameters and found that EDC1 could generate tetrastichous alternate patterns in addition to distichous and spiral patterns ( Fig 6B ) ., The tetrastichous patterns included orthogonal tetrastichous ones with a four-cycle divergence angle change of approximately 180° , 90° , −180° , and −90° , which can be regarded as orixate phyllotaxis ( Fig 6C , S2 Fig ) ., Under conditions assuming an age-dependent increase in the inhibitory power ( a>0 ) , these orixate patterns were formed within a rather narrow parameter range of G = 0 . 5~1 , a = 1~2 , and b = 4~9 , around the parameter settings determined by numerical solution to fit the requirements for the stable maintenance of normal orixate phyllotaxis ( Fig 6B and 6C ) ., When assuming an age-dependent decrease in the inhibitory power ( a<0 ) , orixate phyllotaxis appeared at a point of G = 0 . 1 , a≈−10 , and b≈3 .
5 ( Fig 6B and 6C ) ., These values of a and b represent a very sharp drop in the inhibitory power at the primordial age corresponding to approximately three plastochron units ., Around this parameter condition , there were no numerical solutions for normal orixate phyllotaxis; however , patterns that were substantially orixate , although they were not completely normal , could be established ., The orixate patterns that were generated under the conditions in which the inhibitory power increased and decreased were visually characterized by sparse primordia around the small meristem and dense primordia around the large meristem , respectively ( Fig 6C ) ., In the results of computer simulations with EDC1 , besides the orixate patterns , we also found peculiar patterns with an x-cycle change in the divergence angle consisting of 180° followed by an ( x−1 ) -fold repeat of 0° ( S3 Fig ) ., Such patterns were generated when all the parameters a , b , and G were set to relatively large values and are displayed as a periodic distribution of black regions in the upper right area of the middle and right panels of Fig 6B ., In these patterns , as b is increased , the number of repetitions of 0° increases , resulting in the shift from x-cycle to ( x+1 ) -cycle ., This shift is mediated by the occurrence of spiral patterns with a small divergence angle , and the transitions from x-cycle to spiral and from spiral to ( x+1 ) -cycle take place suddenly in response to a slight change of b ( S3 Fig ) ., DC2 , like DC1 , is an inhibitory field model but is more generalized 17 ., Unlike DC1 , DC2 does not assume one-by-one formation of primordia at a constant time interval and thus does not exclude whorled phyllotactic patterning ., Indeed , DC2 was shown to produce all major patterns of either alternate or whorled phyllotaxis depending on parameter conditions 17 ., To test whether DC2 can generate orixate phyllotactic patterns , we carried out extensive computer
simulation analyses using this model ., Our computer simulations confirmed that major phyllotactic patterns , such as distichous , Fibonacci spiral , Lucas spiral , decussate , and tricussate patterns , are formed as stable patterns in wide ranges of parameters , and also showed formation of tetrastichous alternate patterns with a four-cycle change of the divergence angle at N = 1 and Γ≈1 . 8 when initiated by placing a single primordium at the SAM periphery ( Fig 7A ) ., The possible inclusion of orixate phyllotaxis in these tetrastichous four-cycle patterns was carefully examined based on the ratio of plastochron times and the ratio of absolute values of divergence angles , which should be much larger than 0 and close to 0 . 5 , respectively , in orixate phyllotaxis ., Although all the tetrastichous four-cycle patterns detected here had a divergence angle ratio near 0 . 5 , their ratios of plastochron times were too small to be regarded as orixate phyllotaxis , and the overall characters indicated that they are rather similar to decussate phyllotaxis ( Fig 7B and 7C ) ., These results led to the conclusion that the DC2 system does not generate orixate phyllotaxis under any parameter conditions ., Similar to the approach used for DC1 , we expanded DC2 by introducing primordial age-dependent changes in the inhibitory power ., In this expanded version of DC2 ( EDC2 ) , the inhibitory field strength I ( θ ) was redefined as the summation of the products of the age-dependent change in the inhibitory power and the distance-dependent decrease of its effect:, I ( θ ) ≡ ∑_{m=1}^{n−1} { E ( dm ( θ ) /d0 ) F ( tm ) } ,, ( 12 ), where F is a function expressing a temporal change in the inhibitory power , defined as:, F ( t ) ≡ 1 / ( 1 + e^{−A ( t−B ) } ) ., ( 13 ), Computer simulations using EDC2 were first conducted under a wide range of combinations of A and B at three different settings of Γ ( Γ = 1 , 2 , or 3 ) and fixed conditions for α and N ( α = 1 , N = 1/3 ) ( S4 Fig ) ., In this analysis
, tetrastichous four-cycle patterns were formed within the parameter window where A was 3–7 and B was 0 . 4–1 , which represents a late and slow increase in the inhibitory power during primordium development ( Fig 8A ) ., Further analysis performed by changing Γ , α , and N showed that small values of α , which indicate that the distance-dependent decrease in the inhibitory effect is gradual , and large values of Γ , which indicate that the maximum inhibition range of a primordium is large , are also important for the formation of tetrastichous four-cycle patterns ( Fig 9 , S5 Fig ) ., All of these four-cycle patterns were found to be almost orthogonal and to have a sufficiently large ratio of successive plastochron times , thus fitting the criterion of orixate phyllotaxis ( Fig 8B , S7 Fig ) ., Furthermore , the plots of these patterns lay within the cloud of the data points of real orixate phyllotaxis , and therefore we concluded that they are orixate ., A typical example of such orixate patterns was obtained by simulation using the parameters A = 4 . 8 , B = 0 . 72 , Γ = 2 .
8 , N = 1/3 , and α = 1 , and is presented as a contour map of the inhibitory field strength in Fig 10A , which clearly depicts orixate phyllotactic patterning ., Under this parameter condition , the inhibitory field strength on the SAM periphery was calculated to have a minimum close to the threshold at 0° at the time of new primordium formation when the preceding primordia were placed at 0° , 180° , and ±90° ( S8 Fig ) ., This landscape of the inhibitory field stabilizes the orixate arrangement of primordia ., In summary , our analysis demonstrated that orixate phyllotaxis comes into existence in the EDC2 system when the inhibitory power of each primordium increases at a late stage and slowly to a large maximum and when its effect decreases gradually with distance ., In the orixate phyllotactic patterns generated by EDC2 , the plastochron time oscillated between two values together with a cyclic change in the divergence angle: the longer plastochron was observed for the adjacent pairs of primordia with a divergence angle of ±90° and the shorter plastochron was recorded for the opposite pairs with a divergence angle of ±180° ( Fig 10B , S1 Movie ) ., This relationship between the plastochron and the divergence angle agreed with the real linkage observed for the plastochron ratios and divergence angles in the winter buds of O . 
japonica ( Fig 4E ) ., Based on a comprehensive survey of the results of the computer simulations performed using EDC2 , we examined the distribution of various phyllotactic patterns and the possible relationships between them in the parameter space of EDC2 ( Figs 8A and 9 , S4 , S5 , S9 and S10 Figs ) ., Major phyllotactic patterns , such as the distichous , Fibonacci spiral , and decussate patterns , occupied large areas in the parameter space , and the Lucas spiral pattern occupied some areas ., Depending on the initial condition , the tricussate pattern also occupied a considerable fraction of the space ., In the parameter space , the distichous pattern adjoined the Fibonacci spiral pattern , while the Fibonacci spiral adjoined the distichous , Lucas spiral , decussate , and tricussate patterns ., The regions where the orixate pattern was generated were located next to the regions of the decussate , Fibonacci spiral , Lucas spiral , and/or two-cycle alternate patterns ., This positional relationship suggests that orixate phyllotaxis is more closely related to the decussate and spiral patterns than it is to the distichous pattern ., The two-cycle patterns formed in a narrow parameter space next to the region of orixate phyllotaxis and had a divergence angle ratio of approximately 0 . 55 and a plastochron time ratio of approximately 0 .
2 ( Fig 8B , S6A Fig ) ; thus , they are similar to semi-decussate phyllotaxis , which is an alternate arrangement characterized by the oscillation of the divergence angle between 180° and 90° ( S6B Fig ) ., These semi-decussate-like patterns were not observed in the computer simulations performed using DC2 ( Fig 7B and 7C ) ; rather , they were produced only after its expansion into EDC2 ., The overall distributions of major phyllotactic patterns in the parameter space were compared between DC2 and EDC2 using color plots drawn from the results of simulations conducted for EDC2 with various settings of the inhibition range parameter Γ and the inhibitory power change parameter A ( Fig 9 ) ., In these simulations , large A values accelerated the age-dependent increase in the inhibitory power of each primordium; if A is sufficiently large , the inhibitory power is almost constant during primordium development and the EDC2 system is almost the same as DC2 ., Therefore , the colors along the top side of each panel of Fig 9 , where A was set to 20 , which is a high value , show the phyllotactic pattern distribution against Γ in DC2 , while the colors over the two-dimensional panel show the phyllotactic pattern distribution against Γ and A in EDC2 ., The order of distribution of the distichous , Fibonacci spiral , and decussate patterns was unaffected by decreasing A and , thus , did not differ between DC2 and EDC2 ., As reported in the previous study of DC2 17 , on the top side of Fig 9 , the stable pattern changed from distichous to Fibonacci spiral , and then turned into decussate as Γ decreased ., In the parameter space of EDC2 , this order of distribution of major phyllotactic patterns was not affected much by decreasing A to moderate values; however , when A was further decreased , the orixate pattern appeared in the region of the Fibonacci spiral ( Fig 9 , S10 Fig ) ., As A decreased , the range of Γ that produced a Fibonacci spiral became wider and the transition 
zone between the distichous and Fibonacci spiral patterns , where the divergence angle gradually changed from 180° to 137 . 5° , became narrower ( Fig 9 ) ., This result indicated that Fibonacci spiral phyllotaxis is more dominant when assuming a delay in the primordial age-dependent increase in the inhibitory power ., Orixate phyllotaxis is a special kind of alternate phyllotaxis with orthogonal tetrastichy resulting from a four-cycle change in the divergence angle in the order of approximately 180° , 90° , −180° ( 180° ) , and −90° ( 270° ) ; this phyllotaxis occurs in a few plant species across distant taxa 29–32 ., In the present study , we investigated a possible theoretical framework behind this minor but interesting phyllotaxis on the basis of the inhibitory field models proposed by Douady and Couder 16 , 17 , which were shown to give a simple and robust explanation for the self-organization process of major phyllotactic patterns by assuming that each existing leaf primordium emits a constant level of inhibitory power against the formation of a new primordium and that its effect decreases with distance from the primordium ., Re-examination of the original versions of Douady and Couder’s models ( DC1 and DC2 ) via exhaustive computer simulations revealed that they do not generate the orixate pattern at any parameter condition ., The inability of DC models to produce orixate phyllotaxis prompted us to expand them to account for a more comprehensive generation of phyllotactic patterns ., In an attempt to modify DC models , we introduced a temporal change in the inhibitory power during primordium development , instead of using a constant inhibitory power ., Such changes of the inhibitory power were partly considered in several previous studies ., Douady and Couder assessed the effects of “the growth of the element’s size” , which is equivalent to the primordial age-dependent increase in the inhibitory power and found that it stabilizes whorled phyllotactic 
patterns 17 ., Smith et al . assumed in their mathematical model that the inhibitory power of each primordium decays exponentially with age and stated that this decay promoted phyllotactic pattern formation de novo , as well as pattern transition , and allowed the maintenance of patterns for wider ranges of parameters 9 ., A DC1-based model equipped with a primordial age-dependent change in the inhibitory power was also used to investigate floral organ arrangement 35 , 36 ., In these studies , however , temporal changes in the inhibitory power were examined under limited ranges of parameters focusing on particular aspects of phyllotactic patterning , and the possibility of the gener | Introduction, Material, methods, and models, Results, Discussion | Plant leaves are arranged around the stem in a beautiful geometry that is called phyllotaxis ., In the majority of plants , phyllotaxis exhibits a distichous , Fibonacci spiral , decussate , or tricussate pattern ., To explain the regularity and limited variety of phyllotactic patterns , many theoretical models have been proposed , mostly based on the notion that a repulsive interaction between leaf primordia determines the position of primordium initiation ., Among them , particularly notable are the two models of Douady and Couder ( alternate-specific form , DC1; more generalized form , DC2 ) , the key assumptions of which are that each leaf primordium emits a constant power that inhibits new primordium formation and that this inhibitory effect decreases with distance ., It was previously demonstrated by computer simulations that any major type of phyllotaxis can occur as a self-organizing stable pattern in the framework of DC models ., However , several phyllotactic types remain unaddressed ., An interesting example is orixate phyllotaxis , which has a tetrastichous alternate pattern with periodic repetition of a sequence of different divergence angles: 180° , 90° , −180° , and −90° ., Although the term orixate 
phyllotaxis was derived from Orixa japonica , this type is observed in several distant taxa , suggesting that it may reflect some aspects of a common mechanism of phyllotactic patterning ., Here we examined DC models regarding the ability to produce orixate phyllotaxis and found that model expansion via the introduction of primordial age-dependent changes of the inhibitory power is absolutely necessary for the establishment of orixate phyllotaxis ., The orixate patterns generated by the expanded version of DC2 ( EDC2 ) were shown to share morphological details with real orixate phyllotaxis ., Furthermore , the simulation results obtained using EDC2 fitted better the natural distribution of phyllotactic patterns than did those obtained using the previous models ., Our findings imply that changing the inhibitory power is generally an important component of the phyllotactic patterning mechanism . | Phyllotaxis , the beautiful geometry of plant-leaf arrangement around the stem , has long attracted the attention of researchers of biological-pattern formation ., Many mathematical models , as typified by those of Douady and Couder ( alternate-specific form , DC1; more generalized form , DC2 ) , have been proposed for phyllotactic patterning , mostly based on the notion that a repulsive interaction between leaf primordia spatially regulates primordium initiation ., In the framework of DC models , which assume that each primordium emits a constant power that inhibits new primordium formation and that this inhibitory effect decreases with distance , the major types ( but not all types ) of phyllotaxis can occur as stable patterns ., Orixate phyllotaxis , which has a tetrastichous alternate pattern with a four-cycle sequence of the divergence angle , is an interesting example of an unaddressed phyllotaxis type ., Here , we examined DC models regarding the ability to produce orixate phyllotaxis and found that model expansion by introducing primordial age-dependent changes of 
the inhibitory power is absolutely necessary for the establishment of orixate phyllotaxis ., The simulation results obtained using the expanded version of DC2 ( EDC2 ) fitted well the natural distribution of phyllotactic patterns ., Our findings imply that changing the inhibitory power is generally an important component of the phyllotactic patterning mechanism . | plant anatomy, buds, computerized simulations, mathematical models, hormones, plant science, plant hormones, microscopy, plants, flowering plants, research and analysis methods, computer and information sciences, mathematical and statistical techniques, scanning electron microscopy, leaves, biochemistry, plant biochemistry, computer modeling, eukaryota, electron microscopy, biology and life sciences, auxins, organisms | null |
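The inhibitory-field mechanism summarized above can be illustrated with a minimal numeric sketch: each existing primordium contributes an inhibition that declines with distance (here a power law with exponent η = 2, as in DC1/EDC1) multiplied by a sigmoidal age-dependent factor, and the next primordium is placed at the angular minimum of the total field on the meristem circle. The simplified planar geometry, the function names, and the parameter values below are illustrative assumptions, not the exact implementation used in the paper.

```python
import math

def inhibition(theta, primordia, eta=2.0, a=2.0, b=5.0):
    """Inhibitory field strength I(theta) on the unit meristem circle.

    primordia: list of (angle_deg, radial_distance, age) tuples for
    existing primordia. Each contributes F(age) * d^(-eta), where F is
    the sigmoidal age-dependent factor and d is the distance from the
    candidate point on the circle to the primordium (assumed forms).
    """
    total = 0.0
    for phi, r, age in primordia:
        dx = math.cos(math.radians(theta)) - r * math.cos(math.radians(phi))
        dy = math.sin(math.radians(theta)) - r * math.sin(math.radians(phi))
        d = max(math.hypot(dx, dy), 1e-9)  # primordia lie outside the circle, so d > 0
        f_age = 1.0 / (1.0 + math.exp(-a * (age - b)))  # sigmoidal age factor
        total += f_age * d ** (-eta)
    return total

def next_primordium_angle(primordia, step=1.0):
    """Place the new primordium where I(theta) is minimal on an angular grid."""
    angles = [i * step for i in range(int(360 / step))]
    return min(angles, key=lambda t: inhibition(t, primordia))
```

With a single primordium at 0°, the field is maximal nearby and minimal on the opposite side of the circle, so the sketch reproduces the basic repulsive placement rule (a 180° divergence), the starting point from which the richer four-cycle patterns in the paper emerge.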
1,388 | journal.pcbi.1006859 | 2,019 | Conformational ensemble of native α-synuclein in solution as determined by short-distance crosslinking constraint-guided discrete molecular dynamics simulations | α-Synuclein is involved in the pathogenesis of misfolding-related neurodegenerative diseases , in particular Parkinson’s disease 1 , 2 ., A misfolding event leads to the formation of oligomers which are believed to result in cell toxicity and which eventually lead to the death of neuronal cells 3 ., α-Synuclein is thought to interact with lipid vesicles in vivo 4 and the toxicity is thought to be mediated via membrane disruption by misfolded oligomers 5 ., Moreover , a prion-like spread of the pathology via the conversion of native α-synuclein molecules by toxic oligomers has been suggested 6 ., Native α-synuclein is considered to be an intrinsically disordered protein , although there is evidence that some globular structure exists in solution , which may serve as a basis for understanding the mis-folding and oligomerization pathways ., A number of biophysical methods , such as NMR , EPR , FRET , and SAXS—in combination with computational methods—have been applied to the study of intrinsically disordered proteins , including the structure of α-synuclein in solution 7–10 ., In all of these cases , even a limited amount of experimental structural data was helpful in the characterization of the conformational ensemble of α-synuclein in solution ., Recently , we developed a method for determination of protein structures , termed short-distance crosslinking constraint-guided discrete molecular dynamics simulations ( CL-DMD ) , where the folding process is influenced by short-distance experimental constraints which are incorporated into the DMD force field 11 ., Adding constraints to DMD simulations results in a reduction of the possible conformational space and allows the software to achieve protein folding on a practical time scale ., We have tested this approach on 
well-structured proteins including myoglobin and FKBP and have observed clear separation of low-energy clusters and a narrow distribution of structures within the clusters ., The conformational flexibility of intrinsically disordered proteins , such as α-synuclein , brings additional challenges to the computational process 12 ., In such cases , proteins exist as a collection of inter-converting conformational states , and crosslinking data represents multiple conformations of a protein rather than a single structure ., In addition , recent research indicates that traditional force fields with their parametrization are not ideal for providing an accurate description of disordered proteins , and tend to produce more compact structures 13 ., Recently , research has focused on improving traditional state-of-the-art force fields and their ability to predict structures of disordered proteins without losing their accuracy for structured proteins 14 ., In this work we use the Medusa force field 15–17 , which is utilized in DMD simulations and is discretized to mimic continuous potentials ., DMD uses a united atom representation for the protein where all heavy atoms and polar hydrogens are explicitly accounted for ., The solvation energy is described in terms of the discretized Lazaridis-Karplus implicit solvation model 18 and inter-atomic interactions , such as van der Waals and electrostatics , are approximated by a series of multistep square-well potentials ., Other additional potentials , such as pair-wise distance constraints 19 , 20 and solvent accessibility information 21 , 22 , can also be readily integrated ., During CL-DMD simulations there are no continuous forces that would drive the atoms to satisfy all constraints; rather , conformational ensembles that satisfy an optimal number of the constraints are generated ., This , to some degree , naturally resolves conflicting experimental constraints ., Thus , CL-DMD simulations are a viable computational platform
for the structural analysis of intrinsically disordered proteins 23 in general , and α-synuclein in particular ., Here , we used the CL-DMD approach 11 to determine conformational ensembles of the α-synuclein protein in solution ., During this process , α-synuclein was crosslinked with a panel of short-range crosslinkers , crosslinked proteins were enzymatically digested , crosslinked residues were determined by LC-MS/MS analysis , and the resulting data on inter-residue distances were introduced into the DMD force field as external constraints ., To experimentally validate the predicted structures , we analyzed α-synuclein using surface modification ( SM ) , circular dichroism ( CD ) , hydrogen-deuterium exchange ( HDX ) , and long-distance crosslinking ( LD-CL ) ., α-Synuclein was crosslinked with a panel of short-range reagents: azido-benzoic acid succinimide ( ABAS-12C6/13C6 ) , succinimidyl 4 , 4-azipentanoate ( SDA ) 24 , triazidotriazirine ( TATA-12C3/13C3 ) 25 , and 1-ethyl-3- ( 3-dimethylaminopropyl ) carbodiimide ( EDC ) 26 ., ABAS and SDA are hetero-bifunctional amino group-reactive and photo-reactive reagents , TATA is a homo-bifunctional photo-reactive reagent , and EDC is a zero-length carboxyl-to-amino group crosslinker ., Crosslinked proteins were digested with proteinase K or trypsin proteolytic enzymes , and the digest was analyzed by LC-MS/MS to identify crosslinked peptides ( S1 Table ) ., We used an equimolar mixture of 14N- and 15N-metabolically labeled α-synuclein to exclude potential inter-protein crosslinks from the analysis and to facilitate the assignment of crosslinked residues based on the number of nitrogen atoms in the crosslinked peptides and the MS/MS fragments 27 ., The distances between crosslinked residues are based on the length of the crosslinker reagents , and were introduced as constraints into the DMD potentials ( see section below and 11 for additional details ) ., A total of 30 crosslinking constraints were used in these DMD
simulations ( S1 Table ) ., In addition , α-synuclein was characterized by top-down ECD- and UVPD-FTMS HDX and CD to determine the secondary-structure content ( Fig 1 and S1 Fig ) ., Quantitative differential surface modification experiments were performed with and without 8 M urea to determine the characteristics of the residues as exposed or buried ( S2 Table ) ., LD-CL was used to estimate the overall protein topology ( S3 Table ) ., α-Synuclein was expressed using a pET21a vector provided by Dr . Carol Ladner of the University of Alberta ., The protein was expressed in E . coli BL21 ( DE3 ) bacteria and was purified as in 25 ., Briefly , the protein was overexpressed with 1 mM IPTG in 1L LB cultures of BL21DE3 E . coli for 4 hours at 30°C ., Cells were lysed with a French press and the lysate was heated at 70°C for 10 minutes and then centrifuged at 14000 g for 30 minutes ., The soluble fraction was precipitated for 1 hour in 2 . 1 M ( NH4 ) 2SO4 ., α-Synuclein was then purified by fast protein liquid chromatography on a Mono Q 4 . 6/100 SAX column ( GE Life Science ) , using a gradient from 50–500 mM NaCl , in 50 mM Tris at pH 8 . 0 ., Elution fractions containing α-synuclein were further purified by size exclusion on a Superdex 200 30/100 GL column ( GE Life Science ) ., For the expression of metabolically labeled 15N α-synuclein , 1L of M9 Minimal media was prepared with 1 g/L 15NH4Cl ( Cambridge Isotopes ) as the sole source of nitrogen ., BL21 ( DE3 ) cells were grown overnight in 50mL of this media , then seeded into 1 L , grown to an A600 of approximately 0 . 
8 , and induced using 1 mM IPTG ., After expression overnight at 30°C , 15N α-synuclein was purified as described above ., Unlabeled and 15N metabolically-labeled α-synuclein were mixed in a 1:1 ratio at a concentration of 20 μM in 50 mM Na2HPO4 and incubated overnight at room temperature prior to crosslinking ., α-synuclein aliquots of 38 μL were then crosslinked using either 1 mM of the ABAS-12C6/13C6 crosslinker ( Creative Molecules ) or 30 mM of the EDC crosslinker ., ABAS crosslinking reaction mixtures were incubated for 10 minutes in the dark to allow the NHS-ester reaction to take place , followed by 10 minutes of UV irradiation under a 25 W UV lamp ( Model UVGL-58 Mineralight lamp , UVG ) with a 254 nm wavelength filter ., ABAS reaction mixtures were quenched with 10 mM ammonia bicarbonate ., EDC reaction mixtures were incubated for 20 minutes ., A portion of each crosslinking reaction mixture was checked by SDS-PAGE gel to see the extent of potential intermolecular crosslinked products ., Aliquots were subsequently split and digested with either trypsin or proteinase K at an enzyme: protein ratio of 1:10 ., Digestion was quenched using a final concentration of 10 mM AEBSF ( ApexBio ) , and samples were then acidified with formic acid for analysis by mass spectrometry ., For TATA , 100 μM synuclein in 50 mM sodium phosphate buffer was reacted with 0 . 5 mM TATA-12C3/13C3 ( Creative Molecules ) ., Samples were incubated for 5 minutes with 254 nm UV light from the same lamp as was used for the ABAS reactions ., Samples were then split and digested with either proteinase K or trypsin at an enzyme: protein ratio of 1:20 ., For SDA reactions , 20 μL of 1mg/mL α-synuclein was crosslinked using 1 mM SDA ( Creative Molecules , Inc . 
) ., Aliquots were incubated for 15 minutes in the dark prior to incubation under the same UV lamp as used previously for ABAS reactions but changing the wavelength to 366 nm ., Samples were then run on an SDS-PAGE gel , and bands representing the α-synuclein monomer were excised and subjected to in-gel trypsin digestion ., After in-gel digestion , samples were acidified using formic acid prior to mass spectrometric analysis ., The CBDPS crosslinking reaction mixture consisted of 238 μL of 50 μM α-synuclein , with 0 . 12 mM CBDPS ., Samples were split and digested with either proteinase K or trypsin at an enzyme: protein ratio of 1:10 ., Digests were quenched with 10 mM AEBSF and samples were enriched using monomeric avidin beads ( Thermo Scientific ) ., Enriched samples were acidified for mass spectrometric analysis using formic acid ., Mass spectrometric analysis was then performed using a nano-HPLC system ( Easy-nLC II , ThermoFisher Scientific ) , coupled to the ESI-source of an LTQ Orbitrap Velos or Fusion ( ThermoFisher Scientific ) , using conditions described previously 11 ., Briefly , samples were injected onto a 100 μm ID , 360 μm OD trap column packed with Magic C18AQ ( Bruker-Michrom , Auburn , CA ) , 100 Å , 5 μm pore size ( prepared in-house ) and desalted by washing with Solvent A ( 2% acetonitrile:98% water , both 0 . 1% formic acid ( FA ) ) ., Peptides were separated with a 60-min gradient ( 0–60 min: 4–40% solvent B ( 90% acetonitrile , 10% water , 0 . 1% FA ) , 60–62 min: 40–80% B , 62–70 min: 80% B ) , on a 75 μm ID , 360 μm OD analytical column packed with Magic C18AQ 100 Å , 5 μm pore size ( prepared in-house ) , with IntegraFrit ( New Objective Inc . , Woburn , MA ) and equilibrated with solvent A . 
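The 60-min separation gradient described above is a piecewise-linear solvent program. As a rough sketch, assuming linear ramps between the stated set points (the function name and segment encoding are my own, not from the authors' acquisition software), the solvent B fraction at any run time can be computed as:

```python
def percent_b(t_min):
    """Solvent B (%) at time t_min for the gradient described in the text:
    0-60 min: 4-40% B, 60-62 min: 40-80% B, 62-70 min: hold at 80% B.
    Linear interpolation within each segment is an assumption."""
    segments = [
        (0.0, 60.0, 4.0, 40.0),   # (t_start, t_end, %B_start, %B_end)
        (60.0, 62.0, 40.0, 80.0),
        (62.0, 70.0, 80.0, 80.0),
    ]
    for t0, t1, b0, b1 in segments:
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 70-min gradient program")
```

For example, halfway through the main ramp (30 min) the mobile phase is 22% B, and after 62 min the program holds at 80% B for the column wash.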
MS data were acquired using a data-dependent method ., The data dependent acquisition also utilized dynamic exclusion , with an exclusion window of 10 ppm and exclusion duration of 60 seconds ., MS and MS/MS events used 60000- and 30000-resolution FTMS scans , respectively , with a scan range of m/z 400–2000 in the MS scan ., For MS/MS , the CID collision energy was set to 35% ., Data were analyzed using the 14N15N DXMSMS Match program from the ICC-CLASS software package 27 ., SDA crosslinking data was analyzed using Kojak 28 and DXMSMS Match ., For scoring and assignment of the MS/MS spectra , b- and y-ions were primarily used , with additional confirmation from CID-cleavage of the crosslinker where this was available ., Chemical surface modification with pyridine carboxylic acid N-hydroxysuccinimide ester ( PCAS ) ( Creative Molecules ) was performed as described previously 29 ., Briefly , α-synuclein was prepared at 50 μM in 8 M urea in PBS , pH 7 . 4 ( unfolded state ) , or in only PBS ( folded state ) ., Either the light or the heavy form of the 13C-isotopically-coded reagent ( PCAS-12C6 or PCAS-13C6 ) was then added to give a final concentration of 10 mM ., Reaction mixtures were incubated for 30 minutes and quenched with 50 mM ammonium bicarbonate ., Samples were then mixed at a 1:1 ratio , combining folded ( PCAS-12C ) with unfolded ( PCAS-13C ) samples , as well as in reverse as a control ., Samples were acidified with 150 mM acetic acid and digested with pepsin at a 20:1 protein: enzyme ratio overnight at 37°C ., After digestion samples were prepared for mass spectrometry analysis using C18 zip-tips ( Millipore ) ., Zip-tips were equilibrated with 30 μL 0 . 1% TFA , sample was introduced , then washed with 30 μL 0 . 1% TFA and eluted with 2 μL of 0 . 
1% formic acid/50% acetonitrile ., Samples were analyzed by LC-MS/MS as described above ., Top-down ECD-FTMS hydrogen/deuterium exchange was performed as described previously 30 ., Briefly , protein solution and D2O from separate syringes were continuously mixed in a 1:4 ratio ( 80% D2O final ) via a three-way tee which was connected to a 100 μm x 5 cm capillary , providing a labeling time of 2 s ., The outflow from this capillary was mixed with a quenching solution containing 0 . 4% formic acid in 80% D2O from the third syringe via a second three-way tee and injected into a Bruker 12 T Apex-Qe hybrid Fourier Transform mass spectrometer , equipped with an Apollo II electrospray source ., In-cell ECD fragmentation experiments were performed using a cathode filament current of 1 . 2 A and a grid potential of 12 V . Approximately 800 scans were accumulated over the m/z range 200–2000 , corresponding to an acquisition time of approximately 20 minutes for each ECD spectrum ., Deuteration levels of the amino acid residues were determined using the HDX Match program 31 ( S1 Fig ) ., Synuclein UVPD spectra were collected on a Thermo Scientific Orbitrap Fusion Lumos Tribrid mass spectrometer equipped with a 2 . 5-kHz repetition rate ( 0 . 4 ms/pulse ) 213 nm Nd:YAG ( neodymium-doped yttrium aluminum garnet ) laser ( CryLas GmbH ) with pulse energy of 1 . 5 ± 0 . 2 μJ/pulse and output power of 3 . 75 ± 0 . 
5 mW for UVPD ., The solution was exchanged with deuterium using the same three-way tee setup , although in this case a 50 μm x 7cm capillary provided a labeling time of ~1s ., Spectra were acquired for 8 or 12 ms , and resultant spectra were averaged and used for the data analysis with the HDX Match program as above ., CD spectra were recorded on Jasco J-715 spectrometer under a stream of nitrogen ., The content of α-helical and β-sheet structures was calculated using BeStSel web server 32 ., Crosslink guided discrete molecular dynamics ( CL-DMD ) simulations were performed according to the protocol described in our previous work 11 ., Briefly , discrete molecular dynamics ( DMD ) is a physically based and computationally efficient approach for molecular dynamics simulations of biological systems 16 , 17 ., In DMD , continuous inter-atom interaction potentials are replaced with their discretized analogs , allowing the representation of interactions in the system as a series of collision events where atoms instantaneously exchange their momenta according to conservation laws ., This approach significantly optimizes computations by replacing integration of the motion equations at fixed time steps with the solution of conservation-law equations at event-based time points 33 ., In order to incorporate experimental data for inter-residue distances between corresponding atoms into DMD simulations , we introduced a series of well-shape potentials that energetically penalize atoms whose interatomic distance do not satisfy experimentally determined inter-atom proximity constraints ., The widths of these potentials are determined by the cross-linker spacer length and side chain flexibility 11 ., Starting from the completely unfolded structure of α-synuclein molecule , we performed an all-atom Replica Exchange ( REX ) 34 simulations of the protein where 24 replicas with temperatures equally distributed in the range from 0 . 375 to 0 . 
605 kcal/ ( mol kB ) , are run for 6 x 106 DMD time steps ( S2 Fig ) ., The simulation temperature of each of the replicates periodically exchanged according to the Metropolis algorithm allowing the protein to overcome local energetic barriers and increase conformational sampling ., During the simulations we monitored the convergence of the system energy distribution specific heat curve , calculated by Weighted Histogram Analysis Method ( WHAM ) 35 which was used as the indicator of system equilibration ., We discarded the first 2 x 106-time steps of system equilibration during the analysis ., Next , we ranked all of the structures among all of the trajectories , and selected the ones with lowest 10% of the energies , as determined by the DMD Medusa force field 36 ., These structures were then clustered using the GROMACS 37 distance- based algorithm described by Daura et al . 38 ., It uses root-mean-square deviation ( RMSD ) between backbone Cα atoms as a measure of structural similarities between the cluster representatives ., A RMSD cut-off was chosen to correspond to the peak of the distribution of pair-wise RMSDs for all of the low-energy structures ., Because the energies of the resulting centroids representative of the clusters are very close to each other ( S3 Fig ) and picking one of them would potentially introduce a bias related to our scoring energy function , we presented them all as our predicted models of the α-synuclein globular structure ., We then calculated the root-mean-square deviation of atomic positions within each cluster and used this as a measure of fluctuations of the structures of corresponding centroids ( Figs 2 and 3 ) ., In order to obtain information on the global folding of α-synuclein , we performed clustering analysis on the lowest-energy structures obtained during CL-DMD simulations ., In summary , we have determined de novo the conformational ensemble of native α-synuclein in solution by short-distance crosslinking 
constraint-guided DMD simulations , and validated this structure with experimental data from CD , HDX , SM , and LD-CL experiments ., The predicted conformational ensemble is represented by rather compact globular conformations with transient secondary structure elements ., The obtained structure can serve as a starting point for understanding the mis-folding and oligomerization of α-synuclein . | Introduction, Methods, Results and discussion | Combining structural proteomics experimental data with computational methods is a powerful tool for protein structure prediction ., Here , we apply a recently-developed approach for de novo protein structure determination based on the incorporation of short-distance crosslinking data as constraints in discrete molecular dynamics simulations ( CL-DMD ) for the determination of conformational ensemble of the intrinsically disordered protein α-synuclein in the solution ., The predicted structures were in agreement with hydrogen-deuterium exchange , circular dichroism , surface modification , and long-distance crosslinking data ., We found that α-synuclein is present in solution as an ensemble of rather compact globular conformations with distinct topology and inter-residue contacts , which is well-represented by movements of the large loops and formation of few transient secondary structure elements ., Non-amyloid component and C-terminal regions were consistently found to contain β-structure elements and hairpins . 
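The distance-based clustering used above to pick representative conformations (the neighbor-counting algorithm of Daura et al. over pairwise backbone Cα RMSDs) can be sketched in a few lines. This is a minimal illustrative re-implementation operating on a precomputed toy RMSD matrix; the study itself used the GROMACS implementation, and the function name and input layout here are assumptions of the sketch:

```python
def daura_cluster(rmsd, cutoff):
    """Greedy neighbor-counting clustering (Daura et al.).

    rmsd   -- square matrix (list of lists) of pairwise RMSD values
    cutoff -- RMSD cutoff defining "neighbors"

    Repeatedly takes the structure with the most neighbors within the
    cutoff as a cluster centroid, assigns those neighbors to its
    cluster, removes them from the pool, and repeats until every
    structure is assigned. Returns a list of (centroid, members).
    """
    remaining = set(range(len(rmsd)))
    clusters = []
    while remaining:
        # Count neighbors (including self) among unassigned structures.
        counts = {i: sum(1 for j in remaining if rmsd[i][j] <= cutoff)
                  for i in sorted(remaining)}
        centroid = max(counts, key=counts.get)
        members = sorted(j for j in remaining if rmsd[centroid][j] <= cutoff)
        clusters.append((centroid, members))
        remaining.difference_update(members)
    return clusters
```

With the cutoff chosen at the peak of the pairwise-RMSD distribution, as described above, each returned centroid is the representative conformation of its cluster.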
| As the population ages , neurodegenerative diseases such as Parkinson’s disease will become an increasing problem in many countries ., Aggregation of the protein α-synuclein is the primary cause of Parkinson’s disease , but there is still a dearth of structural information pertaining to the native , non-aggregating form of this protein ., A better understanding the structural state of the native protein may prove useful for the design of new therapeutics to combat this disease ., In order to obtain more structural information on this protein , we have recently modelled the native α-synuclein protein ., These models were generated using a novel approach which combines protein crosslinking and discrete molecular dynamics simulations ., We have found that the α-synuclein protein can adopt several shapes , all with a similar topology , resembling a three fingered closed claw ., A region of the protein important for aggregation was found to be protected from the surrounding biological environment in these conformations , and the stabilization of these structures may be a fruitful avenue for future drug research into mitigating the cause and effect of Parkinson’s disease . | chemical bonding, molecular dynamics, protein structure prediction, protein structure, intrinsically disordered proteins, physical chemistry, protein structure determination, proteins, chemistry, cross-linking, molecular biology, protein structure comparison, biochemistry, biochemical simulations, biology and life sciences, physical sciences, computational chemistry, computational biology, macromolecular structure analysis | null |
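The replica-exchange step used in the simulations above periodically swaps temperatures between replicas according to the Metropolis criterion. The following schematic sketch uses the standard parallel-tempering acceptance expression rather than code from the study; the function names and replica data layout are hypothetical, and temperatures are in kcal/(mol·kB) with the Boltzmann constant folded in, as in the simulation setup:

```python
import math
import random

def swap_accept(E_i, E_j, T_i, T_j):
    """Metropolis acceptance probability for exchanging the temperatures
    of two replicas with energies E_i, E_j at temperatures T_i, T_j
    (Boltzmann constant folded into T)."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return min(1.0, math.exp(delta))

def try_swap(replicas, i, j, rng=random.random):
    """Attempt a temperature swap between replicas i and j;
    return True if the swap was accepted."""
    p = swap_accept(replicas[i]["E"], replicas[j]["E"],
                    replicas[i]["T"], replicas[j]["T"])
    if rng() < p:
        replicas[i]["T"], replicas[j]["T"] = replicas[j]["T"], replicas[i]["T"]
        return True
    return False
```

With 24 replicas spanning 0.375–0.605 kcal/(mol·kB), repeated swap attempts between neighboring temperatures are what let low-temperature replicas overcome local energetic barriers, as described in the methods.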
421 | journal.pntd.0007577 | 2,019 | Kankanet: An artificial neural network-based object detection smartphone application and mobile microscope as a point-of-care diagnostic aid for soil-transmitted helminthiases | Soil-transmitted helminths ( STH ) such as Ascaris lumbricoides , hookworm , and Trichuris trichiura affect more than a billion people worldwide 1–3 ., However , due to lack of access to fecal processing materials , diagnostic equipment , and trained personnel for diagnosis , the mainstay of STH control remains mass administration of antihelminthic drugs 4 ., To diagnose STH in residents of rural areas , the present standard is the Kato-Katz technique ( estimated sensitivity of 0 . 970 for A . lumbricoides , 0 . 650 for hookworm , and 0 . 910 for T . trichiura; estimated specificity of 0 . 960 for A . lumbricoides , 0 . 940 for hookworm , and 0 . 940 for T . trichiura ) 5 ., However , this method is time-sensitive due to rapid degeneration of hookworm eggs 5 ., Other methods , including fecal flotation through FLOTAC and mini-FLOTAC still have higher sensitivity ( 0 . 440 ) than direct fecal examination ( 0 . 360 ) , but require centrifugation equipment , which is expensive and difficult to transport 6 ., Multiplex quantitative PCR analysis for these three species is a high sensitivity and specificity technique ( 0 . 870–1 . 00 and 0 . 830–1 . 00 , respectively ) , but can only be performed with expensive laboratory equipment 7 , 8 ., Spontaneous sedimentation technique in tube ( SSTT ) analysis has been found in preliminary studies to be not inferior to Kato-Katz in A . lumbricoides , T . 
trichiura , and hookworm 9 , 10 ., Since it requires no special equipment and few materials , it has the potential to be a cost-effective stool sample processing method in the field ., Mass drug administration campaigns are the prevailing strategy employed to control high rates of STH ., Such campaigns , however , are focused on treating children and do not necessarily address the high infection prevalence rates of STH in adults , which in turn may contribute to the high reinfection rates 11 , 12 ., Technology that facilitates point-of-care diagnosis could enable mass drug administration programs to screen adults for treatment , monitor program efficacy , aid research , and map STH prevalence ., In areas close to STH elimination , such a tool could facilitate a test-and-treat model for STH control ., One avenue for point-of-care diagnostic equipment is smartphone microscopy ., Numerous papers have already demonstrated the viability of using smartphones 13–15 and smartphone-compatible microscopy attachments ( USB Video Class , or UVC ) 16 as cheap point-of-care diagnostic tools ., Studies have tried direct imaging , as with classical parasitological diagnosis 17 , fluorescent labeling 14 , and digital image processing algorithms to aid diagnosis 18 ., To address the need for trained parasitologists to make the STH diagnosis , this study investigated artificial neural network-based technology ( ANN ) ., ANN , a framework from machine learning , a subfield of artificial intelligence , has seen a rapid explosion in range of applications , from object detection to speech recognition to translation ., Rather than traditional software , which relies on a set of human-written rules for image classification , a method explored in other studies 19 , ANN image processing stacks thousands of images together and uses backpropagation , a recursive algorithm to create its own rules to classify images ., A previous study has applied ANN-based systems to diagnostic microscopy of 
STH with moderate sensitivity , using a device of comparable price to a smartphone to image samples and applying a commercially available artificial intelligence algorithm ( Web Microscope ) to classify the samples ., However , such a device requires internet connection to function and was only validated on 13 samples 20 , 21 ., Another study has created and patented an ANN-based system to identify T . trichiura based on a small dataset of sample images ( n<100 ) 22 ., However , there is no precedent in current literature for extensive ( n>1 , 000 ) ANN-based object detection system training for multiple STH species , nor use in smartphones , nor offline use ( disconnected from the internet ) , nor field testing in specimens ., This study developed such a system , named Kankanet from the English word network and the Malagasy word for intestinal worms , kankana ., This study also uses a smartphone-compatible mobile microscope , or UVC , with a simple X-Y slide stage ., As a proof-of-concept pilot study for ANN-assisted microscopy , this project aimed to address two key obstacles to point-of-care diagnosis of STH in rural Madagascar: ( 1 ) the lack of portable and inexpensive microscopy , and ( 2 ) the limited capacity and expertise to read microscope images ., This project evaluated the efficacy for diagnosis of three species of STH of ( 1 ) a UVC and ( 2 ) Kankanet , an object-detection ANN-based system deployed through smartphone application ., This study was a part of a larger study on the Assessment of Integrated Management for Intestinal Parasites control: study of the impact of routine mass treatment of Helminthiasis and identification of risk areas of transmission in two villages in the district of Ifanadiana , Madagascar ., This study has received institutional review board approval from the Stony Brook University ( ID: 874952–13 ) and the national ethics review board of Madagascar: Comité d’Éthique de la Recherche Biomédicale Auprès du Ministère de la Santé 
Publique de Madagascar ( 41-MSANP/CERBM , June 8 , 2017 ) ., As a prospective study , data collection was planned before any diagnostic test was performed ., In accordance with cultural norms , consent was first required from the local leaders before engaging in any activities within their purview ., All participants received oral information about the study in Malagasy; written informed consent was obtained from adult participants or parents/legal guardians for the children ., Since this study was meant to evaluate diagnostic methods and did not produce definitive results , no diagnostic results from this study were reported to the patients ., All inhabitants of the two study villages were given their annual dose of 400 mg albendazole one year before this study , and received another 400 mg albendazole dose within a month of the conclusion of the study by the national mass drug administration effort ., A unique identifier was assigned to each participant to allow grouping of analysis data for each patient ., All data was stored on an encrypted server , to which only investigators had access ., The two villages under study , Mangevo and Ambinanindranofotaka ( geographic coordinates: 21°27S , 47°25E and 21°28S , 47°24E ) , are rural villages situated on the edge of Ranomafana National Park , about 275 km south of Antananarivo , the capital of Madagascar ., Over 95% of households in Ambinanindranofotaka ( total population , n = 327 ) and Mangevo ( total population , n = 238 ) engage in subsistence farming and animal husbandry ., The villages , accessible only by 14 hours’ worth of footpaths , are tucked between mountain ridges covered with secondary-growth rainforest ., The study was conducted between 8 Jun 2018 and 18 Jun 2018 ., All residents of each village were given a brief oral presentation about the public health importance , symptoms and prevention of STH; subjects above age 16 , the Madagascar cut-off age for adulthood , who gave voluntary consent to 
participate in the study were given containers and gloves to collect their own fecal samples ., Parents gave consent for their assenting children and collected their fecal samples ., One fecal sample from each participant was submitted between the hours of sunrise and sunset ., Samples were processed for analysis within 20 minutes of production by participant ., Cognitively impaired subjects were excluded ., Each fecal sample produced three slides for microscopic analysis: ( 1 ) one slide was prepared according to Kato-Katz ( KK ) technique from fresh stool; ( 2 ) one slide was prepared according to spontaneous sedimentation technique in tube ( SSTT ) from 10% formalin-preserved stool; ( 3 ) one slide was prepared according to Merthiolate-Iodine-Formaldehyde ( MIF ) technique from 10% formalin-preserved stool ., As a reference test , a modified gold standard was defined as any positive result ( at least one egg positively identified in a sample ) from standard microscopy by trained parasitologists using ( 1 ) KK , ( 2 ) SSTT , and ( 3 ) MIF techniques ., Intensity of infection ( measured by eggs/gram ) of A . lumbricoides , T . 
trichiura, and hookworm was obtained by standard microscopy reading of the KK slides, multiplying the egg count per slide by the standard coefficient of 24. The composite gold standard was defined in this way to increase the sensitivity of the reference test. SSTT processing followed the standard protocol [23]. A standard Android smartphone was attached to a UVC (Magnification Endoscope, Jiusion Tech; Digital Microscope Stand, iTez) for microscopic analysis of KK and SSTT slides in the field (Fig 1). Clinical information and results from any other analyses of the fecal samples were not made available to slide readers during their analysis. TensorFlow is an open-source machine learning framework developed by Google Brain. Using the TensorFlow repository, this study developed Kankanet, an ANN-based object detection system built upon a Single Shot Detection meta-architecture and a MobileNet feature extractor, a convolutional neural network developed for mobile vision applications [24, 25]. Based on a dataset of 2,078 images of STH eggs, Kankanet was trained to recognize three STH species: A. lumbricoides, T. trichiura, and hookworm [26]. Of these, 597 egg pictures were taken with a standard microscope and 1,481 with the UVC. The efficacy of Kankanet diagnosis was evaluated with a separate dataset of 186 images with a comparable distribution of species and imaging modalities. The detailed breakdown of these image sets is given in Table 1, which reports percentage distributions by species and imaging modality to show the concordance in image distribution between the training and evaluation sets. The following hyperparameters were used: initial learning rate = 0.004; decay steps = 800720; decay factor = 0.95, according to the default configuration used to train open-source models released online. To improve the robustness of the model, the dataset was augmented using the default methods of random cropping and horizontal flipping. The loss rate was monitored until it averaged less than 0.01, as shown in Fig 2, after which the model was frozen in a format suitable for use in a mobile application. Based on this protocol, two models were trained; it took Model 1 around 81 epochs and Model 2 around 12 epochs (iterations through the entire training dataset) to reach a loss rate of less than 0.01. These models were then validated on randomly selected images from the evaluation image set (n = 185), images that were not included in the training set. Once trained, the models analyze images in real time, project a bounding box over each detected object, and display the name of the detected object along with a confidence rating (Fig 3 and Fig 4). The true reading of each image in the training and test image sets was determined by a trained parasitologist. The Kankanet models were then used to read the test set images: correctly identified eggs were counted as true positives, non-egg objects identified as eggs as false positives, undetected eggs as false negatives, and images with neither eggs nor detected objects as true negatives. Evaluation of model sensitivity and specificity was performed with the test image sets described above. The open-source TensorFlow library contains a demo Android application that includes an object-detection module. Following the protocol for migrating a TensorFlow model to Android [27], the original object detection model in the app was swapped out for the Kankanet model. As in the original app, the threshold for reporting detected objects was set at 0.60 confidence. The intended sample size was calculated based on June 2016 prevalence rates in Ifanadiana, Madagascar (n = 574): A. lumbricoides 71.3% (95% CI 67.7–75.1); T. trichiura 74.7% (95% CI 71.1–78.2); hookworm 33.1% (95% CI 29.2–36.9) [28]. Following the calculations for a binary diagnostic test for the species with the lowest prevalence, hookworm, with a predicted test sensitivity of 90% and a 10% margin of error, the required sample size for adequate power was determined to be 115. For A. lumbricoides and T. trichiura, which have higher prevalence rates, a sample size of 115 gave sufficient power to support a sensitivity of 70% with a margin of error of 10%. This study used a sample size of 113 fecal samples. Readings from the UVC on KK and SSTT slides were compared against the modified gold standard, defined as any positive result from a standard microscopy reading of KK, SSTT, and MIF preparations by a parasitologist. In SPSS, sensitivity and specificity of the UVC reading were calculated for each species with KK, SSTT, and combined analysis. Separate analyses were calculated for different intensities of infection, classified according to WHO guidelines [4]. Cohen’s Kappa coefficient (K) was calculated for each fecal processing method to determine comparability with the modified gold standard reading. Results from Kankanet interpretation were compared to visual interpretation of the same images by a trained parasitologist. The two models were evaluated for sensitivity, specificity, positive predictive value, and negative predictive value using SPSS. There were no samples with missing results from any of the tests run. The numbers of positive samples identified by standard microscopy through the Kato-Katz, MIF, and SSTT preparation methods are shown in Table 2, as well as the composite reading used as the modified gold standard in this study of the three
tests ., The number of samples of A . lumbricoides and T . trichiura at each intensity level is reported in Table 3 ., There were no participants heavily infected with T . trichiura ., Since it was not possible for the KK slides to be transported to the laboratory in time for quantification of hookworm eggs , we were unable to detect the intensity of infection of these cases ., The UVC performed best at imaging A . lumbricoides ( Tables 4 and 5 ) , demonstrating higher sensitivity in SSTT preparations ( 0 . 829 , 95% CI . 744- . 914 ) than in KK ( 0 . 579 , 95% CI . 468- . 690 ) , and high specificity in both SSTT and KK ( 0 . 971 , 95% CI . 915–1 . 03; 0 . 971 , 95% CI . 915–1 . 03 ) ., These sensitivity numbers increased with increasing infection intensity ( Fig 5 ) ., UVC imaging of SSTT slide preparations of samples with AL showed a substantial level of concordance with the modified gold standard reading , which was obtained through standard microscopy ( K = 0 . 728 ) , and UVC imaging of KK slide preparations demonstrated moderate concordance with the modified gold standard ( K = 0 . 439 ) ., For T . trichiura , the UVC demonstrated low overall sensitivity through SSTT and KK ( 0 . 224 , 95% CI . 141- . 307; 0 . 235 , 95% CI . 151- . 319 , respectively ) , but high specificity ( 0 . 917 , 95% CI . 761–1 . 07; 1 , 95% CI 1 . 00–1 . 00 ) ., As infection intensity of T . trichiura increased , however , sensitivity increased ( Fig 5 ) ., According to WHO categories for infection intensity , sensitivity for low-intensity infections was 0 . 164 , which increased to 0 . 435 in moderate-intensity infections ., There was little agreement with the modified gold standard ( K = 0 . 038 for SSTT , K = 0 . 063 for KK ) ., The UVC also demonstrated low sensitivity to hookworm eggs in both SSTT ( 0 . 318 , 95% CI . 123- . 513 ) and KK ( 0 . 381 , 95% CI . 173- . 
589 ) preparations ., Model 1 , which was trained and evaluated on microscope images only , demonstrated high sensitivity ( 1 . 00; 95% CI 1 . 00–1 . 00 ) and specificity ( 0 . 910; 95% CI 0 . 831–0 . 989 ) for T . trichiura , low sensitivity ( 0 . 571; 95% CI 0 . 423–0 . 719 ) and specificity ( 0 . 500; 95% CI 0 . 275–0 . 725 ) for A . lumbricoides , and low sensitivity ( 0 . 00; 95% CI 0 . 00–0 . 00 ) and specificity ( 0 . 800; 95% CI 0 . 693–0 . 907 ) for hookworm ., Table 6 shows the full breakdown of sensitivity , specificity , positive predictive value , and negative predictive value of the different analyses performed by Model 1 and Model 2 ., Though Model 1 was also evaluated for its performance on UVC pictures of STH , it failed to recognize any , and thus the results are not tabulated ., Model 2 was trained on images taken both with microscopes and with UVC , and was tested with both types of images ., It outperformed Model 1 in every parameter , with high sensitivity and specificity for microscope images all across the board and for UVC images of A . lumbricoides and hookworm ., It performed poorly on UVC images of T . trichiura ( sensitivity 0 . 093 , 95% CI -0 . 138–0 . 304; specificity 0 . 969 , 95% CI 0 . 934–1 . 00 ) , but had moderate PPV and NPV values ( 0 . 667 and 0 . 800 , respectively ) ., This study found that UVC imaging of SSTT slides , though of low quality , still could be read by trained parasitologists with a high sensitivity ( 0 . 829 , 95% CI . 744- . 914 ) and specificity ( 0 . 971 , 95% CI . 915–1 . 03 ) in A . lumbricoides , which is comparable to literature estimates of KK sensitivity at 0 . 970 and specificity of 0 . 960 5 ., The UVC showed lower sensitivity for KK preparations ( 0 . 579 , 95% CI . 468- . 690 ) ., This UVC does not have sufficient image quality to be used with T . trichiura or hookworm diagnosis , which have thinner and more translucent membranes ., Despite UVC imaging having high sensitivity for A . 
lumbricoides , the 14% difference in sensitivity needs improvement , with a goal of reaching similar sensitivity to standard microscopy , before it can be feasibly used in large-scale STH control efforts ., UVC’s specificity of 0 . 971 ( 95% CI 0 . 915–1 . 03 ) surpasses that of standard microscopy KK’s 0 . 960 specificity ., Though currently shown to have insufficient sensitivity or specificity for use with T . trichiura or hookworm diagnosis , these are limitations believed to be related to the particular microscope peripheral used in this study ., This UVC achieved maximum magnification of approximately 215X at 600 px/mm; its resolution was 640x480 pixels ., The magnification level with this peripheral is sufficient , as other studies have shown success with T . trichiura with magnification levels as low as 60X 29 ., However , for the purposes of STH imaging , improvement of resolution and light source in this UVC may be necessary ., Another study successfully imaged T . trichiura and hookworm at a resolution of 2595x1944 pixels , which is substantially higher than the 640x480 with this peripheral 20 ., This UVC’s light source comes from the same direction as the camera , rather than shining through the sample as in most microscopy , which may have reduced image quality and imaging ability ., Development of a proprietary microscope is another solution , which many other studies have employed: a mobile phone microscope developed by Coulibaly et al . has demonstrated similarly high sensitivity for Schistosoma mansoni ( 0 . 917; 95% CI 0 . 598–0 . 996 ) , Schistosoma haematobium ( 0 . 811; 95% CI 0 . 712–0 . 883 ) and Plasmodium falciparum ( 0 . 802 , 1 . 
00) [30, 31]; other studies that employ ball lenses or low-cost foldable chassis show slightly lower sensitivity/specificity values [29, 32]. Independent development of a smartphone microscope could substantially improve the sensitivity and specificity of these devices to an acceptable level for healthcare use, that is, not inferior to standard microscopy, while simultaneously decreasing the cost per microscope. However, the advantages of using a commercially available microscope are ease of access for rapid, large-scale implementation and feasibility for low-income rural areas with a heavy burden of STH. In the context of these villages in rural Madagascar, where STH prevalence was measured in 1998 at up to 93.0% for A. lumbricoides, 55.0% for T. trichiura, and 27.0% for hookworm [33], yet only school-aged children are targeted for mass drug administration, a rule-in test with high specificity, which this UVC achieves, can be useful to reliably identify adults who would also require antihelminthics. Another context in which this tool may be especially useful is areas close to elimination of STH, where it could reduce the amount of antihelminthics needed for STH control [34]. Though Kankanet interpretation of UVC and microscope images yielded lower sensitivity than trained parasitologist readings of the same images, Kankanet Model 2 still achieved high sensitivity for A. lumbricoides (0.696; 95% CI 0.625–0.767) and hookworm (0.714; 95% CI 0.401–1.027) on both microscope and UVC images. Model 2 showed high sensitivity for T. trichiura in microscope images (1.00; 95% CI 1.00–1.00), but low sensitivity in UVC images (0.083; 95% CI -0.138–0.304). Model 1 achieved lower sensitivity and specificity for all species, and could not accurately interpret UVC images. Model 2’s overall per-egg sensitivities for A. lumbricoides, T. trichiura, and hookworm (0.696, 0.154, and 0.714, respectively) may not seem very high at first. However, these are sensitivity results for recognizing individual eggs. As an indication for treatment with antihelminthics would require only one egg per fecal sample slide to be positively identified, the real likelihood of this ANN-based object detection model giving an accurate reading is much higher than the per-egg sensitivity cited here. For example, even in an A. lumbricoides infection at the middle of the range considered low-intensity (2,500 eggs per gram), a slide would contain 104 eggs, making the sensitivity of detecting the infection in the slide nearly 1.00. The difference in sensitivity and specificity between the models can be explained by the differences in the image sets used for training. Model 2 was trained with an image set over twice the size of Model 1’s; its image set also contained images from both UVC and standard microscopy modalities. It was a robust model, accurately detecting STH in images containing multiple examples of multiple species, despite being trained on an image set composed mostly of A. lumbricoides. It demonstrated a very low rate of false positives, considering the amount of debris typical of fecal samples. The Kankanet models can be improved by developing a larger image dataset, exploring other object detection meta-architectures, and optimizing file size and computational requirements. A greater number and a more even distribution of images of parasite species would improve object detection model sensitivity. Standard laboratory processing and diagnosis of STH is extremely time-consuming and expensive and hence not often practical for rural low-income communities. As smartphone penetration will only increase in the coming years, medical technology should leverage smartphones as portable computational equipment, since use and distribution of such software requires no additional cost. Because it attaches to a smartphone and requires no external power source other than the smartphone itself, the UVC is a suitable microscopy option for point-of-care diagnosis. In addition, the smartphone application used in this study did not require internet access, unlike those of previous studies [20]. UVC and Kankanet are cost-effective, with only the initial cost of $69.82 for the microscope and stage setup, as well as the negligible cost of fecal analysis reagents. In the case of SSTT, only microscope slides and Lugol’s iodine would be needed for fecal processing. These initial costs are readily defrayed by the thousands of analyses performed with just one unit, the work-hours gained by timely treatment of STH and prevention of STH re-infection, and the reduction of unnecessary drug administration and concomitant drug resistance. A detailed cost analysis comparing the cost of standard microscopy and the Kankanet system for 2-sample Kato-Katz testing of 10 villages in rural Madagascar (estimated 3,000 people total) is shown in Table 7. Whereas standard microscopy ends up costing around 1.
33 USD per person tested , the Kankanet system costs around 0 . 56 USD per person tested ., ANN-based object detection systems such as the one introduced here can be useful for screening STH-endemic communities in the context of research , mass drug administrations and STH mapping programs ., In addition , Kankanet , rather than replacing human diagnosis , could be a useful diagnostic training aid for healthcare workers and field researchers ., With sustained use of such a tool , these workers may more quickly learn how to identify such eggs themselves ., Limitations of this study include that the UVC used was of insufficient image quality to produce accurate imaging of T . trichiura and hookworm ., The Kankanet models employed used a dataset limited to two imaging modalities: standard microscopy and UVC , and with images of only three species of STH; in addition , images for this dataset were only taken of samples prepared under KK conditions , so the efficacy of this system can only be assessed for those conditions ., We conclude that parasitologist interpretation of UVC imaging of SSTT slides can be a field test comparable to standard microscopy of KK for A . lumbricoides ., Second , we conclude that ANN interpretation is a feasible avenue for development of a point-of-care diagnostic aid ., With 85 . 7% sensitivity and 87 . 5% specificity for A . lumbricoides , 100 . 0% sensitivity and 100 . 0% specificity for T . trichiura , and 66 . 7% sensitivity , 100 . 0% specificity for hookworm , Kankanet Model 2 has demonstrated stellar results in interpreting UVC images , even though it was trained with a limited proof-of-concept dataset ., We hope that continued expansion of the Kankanet image database , improved imaging technology , and improvement of machine learning technology will soon enable Kankanet to achieve rates comparable to those of parasitologists . 
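The slide-level argument above (one positively identified egg is enough to indicate treatment) can be made concrete: if each of the n eggs on a slide is an independent detection opportunity with per-egg sensitivity p, the chance that at least one egg is flagged is 1 − (1 − p)^n. A minimal sketch; the independence assumption and the 41.7 mg Kato-Katz template mass are our illustrative assumptions, not stated in the text:

```python
def slide_sensitivity(per_egg_sensitivity: float, eggs_on_slide: int) -> float:
    """P(at least one of n eggs is detected), assuming independent per-egg detections."""
    return 1.0 - (1.0 - per_egg_sensitivity) ** eggs_on_slide

# 2,500 eggs per gram on an assumed 41.7 mg Kato-Katz template
eggs = round(2500 * 0.0417)  # ≈ 104 eggs, matching the figure quoted above
print(eggs)  # 104

# With the per-egg sensitivity reported for A. lumbricoides (69.6%),
# the slide-level sensitivity is effectively 1.00:
print(round(slide_sensitivity(0.696, eggs), 6))  # 1.0
```

Even a much lower per-egg sensitivity would leave the slide-level figure near 1.00 at this egg count, which is the point the discussion is making.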
| Introduction, Methods, Results, Discussion | Endemic areas for soil-transmitted helminthiases often lack the tools and trained personnel necessary for point-of-care diagnosis. This study pilots the use of smartphone microscopy and an artificial neural network-based (ANN) object detection application named Kankanet to address those two needs. A smartphone was equipped with a USB Video Class (UVC) microscope attachment and Kankanet, which was trained to recognize eggs of Ascaris lumbricoides, Trichuris trichiura, and hookworm using a dataset of 2,078 images. It was evaluated for interpretive accuracy based on 185 new images. Fecal samples were processed using Kato-Katz (KK), spontaneous sedimentation technique in tube (SSTT), and Merthiolate-Iodine-Formaldehyde (MIF) techniques. UVC imaging and ANN interpretation of these slides was compared to parasitologist interpretation of standard microscopy. Relative to a gold standard defined as any positive result from parasitologist reading of KK, SSTT, and MIF preparations through standard microscopy, parasitologists reading UVC imaging of SSTT achieved a sensitivity (82.9%) and specificity (97.1%) for A. lumbricoides comparable to standard KK interpretation (97.0% sensitivity, 96.0% specificity). The UVC could not accurately image T. trichiura or hookworm. Though Kankanet interpretation was not quite as sensitive as parasitologist interpretation, it still achieved high sensitivity for A. lumbricoides and hookworm (69.6% and 71.4%, respectively). Kankanet showed high sensitivity for T. trichiura in microscope images (100.0%), but low in UVC images (50.0%). The UVC achieved comparable sensitivity to standard microscopy with only A. lumbricoides. With further improvement of image resolution and magnification, UVC shows promise as a point-of-care imaging tool. In addition to smartphone microscopy, ANN-based object detection can be developed as a diagnostic aid. Though trained with a limited dataset, Kankanet accurately interprets both standard microscope and low-quality UVC images. Kankanet may achieve sensitivity comparable to parasitologists with continued expansion of the image database and improvement of machine learning technology. | For rainforest-enshrouded rural villages of Madagascar, soil-transmitted helminthiases are more the rule than the exception. However, the microscopy equipment and lab technicians needed for diagnosis are a distance of several days’ hike away. We piloted a solution for these communities by leveraging resources the villages already had: a traveling team of local health care workers, and their personal Android smartphones. We demonstrated that an inexpensive, commercially available microscope attachment for smartphones could rival the sensitivity and specificity of a regular microscope using standard field fecal sample processing techniques. We also developed an artificial neural network-based object detection Android application, called Kankanet, based on open-source programming libraries. Kankanet was used to detect eggs of the three most common soil-transmitted helminths: Ascaris lumbricoides, Trichuris trichiura, and hookworm. We found Kankanet to be moderately sensitive and highly specific for both standard microscope images and low-quality smartphone microscope images. This proof-of-concept study demonstrates the diagnostic capabilities of artificial neural network-based object detection systems. Since the programming frameworks used were all open-source and user-friendly even for computer science laymen, artificial neural network-based object detection shows strong potential for development of low-cost, high-impact diagnostic aids essential to health care and field research in resource-limited communities. | invertebrates, medicine and health sciences, engineering and technology, helminths, tropical diseases, hookworms, geographical locations, parasitic diseases, animals, cell phones, neuroscience, artificial neural networks, ascaris, ascaris lumbricoides, pharmaceutics, artificial intelligence, computational neuroscience, drug administration, neglected tropical diseases, africa, computer and information sciences, madagascar, communication equipment, people and places, helminth infections, eukaryota, equipment, nematoda, biology and life sciences, drug therapy, soil-transmitted helminthiases, computational biology, organisms | null |
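The accuracy figures quoted above are standard sensitivity/specificity computations against a composite gold standard: a sample counts as truly positive if any of the parasitologist KK, SSTT, or MIF readings is positive. A small sketch of that bookkeeping (the toy sample values are illustrative, not data from the study):

```python
def composite_gold(kk: bool, sstt: bool, mif: bool) -> bool:
    """Gold standard: positive if any parasitologist reading is positive."""
    return kk or sstt or mif

def sensitivity_specificity(pred, gold):
    """pred, gold: parallel lists of booleans, one entry per sample."""
    tp = sum(p and g for p, g in zip(pred, gold))
    tn = sum((not p) and (not g) for p, g in zip(pred, gold))
    fn = sum((not p) and g for p, g in zip(pred, gold))
    fp = sum(p and (not g) for p, g in zip(pred, gold))
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 3 truly positive samples, 2 truly negative
gold = [True, True, True, False, False]
pred = [True, False, True, False, True]  # one miss, one false alarm
sens, spec = sensitivity_specificity(pred, gold)
print(sens, spec)  # 0.6666666666666666 0.5
```

Using "any positive reading" as the gold standard raises the bar for the test under evaluation: a method that misses infections detected by any of the three preparations is penalized in sensitivity.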
1,519 | journal.pcbi.1000300 | 2009 | Alu Exonization Events Reveal Features Required for Precise Recognition of Exons by the Splicing Machinery | How are short exons, embedded within vast intronic sequences, precisely recognized and processed by the splicing machinery? Despite decades of molecular and bioinformatic research, the features that allow recognition of exons remain poorly understood. Various factors are thought to be of importance. These include the splicing signals flanking the exon at both ends, known as the 5′ and 3′ splice sites (5′ss and 3′ss, respectively), auxiliary cis-elements known as exonic and intronic splicing enhancers and silencers (ESE/Ss and ISE/Ss) that promote or repress splice-site selection, respectively [1], [2], and exon [3] and intron length [4]. There is an increasing body of evidence that secondary structure is a powerful modifier of splicing events [5]–[12]. Secondary structure is thought to present binding sites for auxiliary splicing factors, correctly juxtapose widely separated cis-elements, and directly affect the accessibility of the splice sites. However, only very few studies have used bioinformatic approaches to broadly study the effects of secondary structure on splicing [13]–[15]. Many of the above-listed factors have been subjected to analysis in the context of comparison between constitutively and alternatively spliced exons. It has been found, for example, that constitutively spliced exons are flanked by stronger splicing signals, that they contain more ESEs but fewer ESSs, and are longer but flanked by shorter introns with respect to their alternatively spliced counterparts (reviewed in [16]). However, to what extent do these features contribute to the selection of exons and allow discrimination between true exons and “non-exons”, i.e.
sequences resembling exons but not recognized by the splicing machinery? This question is fundamental for understanding the process of exon selection by the spliceosome, and yet has not been subjected to much analysis. This is presumably because, unlike alternatively and constitutively spliced exons, both of which are relatively easy to define computationally, defining a non-exon or a pseudo-exon is more of a challenge. One approach is to compare exons to sequences of up to a certain length which are flanked by splicing signals exceeding a certain threshold [17], [18]. Although this approach is powerful and has contributed to the discovery of the “vocabulary” of exons, it is also limited. The primary limitation is that it is circular: for the mere definition of pseudo-exons, we are forced to fix various features—such as minimal splice site strength and exon length—that we would prefer to infer. To circumvent these obstacles, we have studied Alu exonization events. Alu elements are primate-specific retroelements present at about 1.1 million copies in the human genome. A large portion of Alu elements reside within introns [19]. Alus are dimeric, with two homologous but distinct monomers, termed left and right arms [20]–[22]. During evolution, some intronic Alus accumulated mutations that led the splicing machinery to select them as internal exons, a process termed exonization [23]–[25]. Such exonization events may occur either from the right or the left arm of the Alu sequence, but are observed predominantly in the antisense orientation relative to the mRNA precursor. Almost invariably, such events give birth to an alternatively spliced exon, as a constitutively spliced exon would compromise the original transcriptomic repertoire and hence probably be deleterious [19], [24], [26], [27]. The fact that exonizing and non-exonizing Alus have retained high sequence similarity but are perceived as different by the splicing machinery makes them excellent candidates for studying the factors required for precise recognition of exons by the spliceosome. The natural control group of non-exonizing Alus obviates the need to fix different parameters in the control set, and the high degree of sequence similarity shared by all Alus, regardless of whether they do or do not undergo exonization, enables direct comparison of a wide array of features. Based on the comparison between Alu exons and their non-exonizing counterparts, we were able to identify several key features that characterize Alu exons and to determine the relative importance of these features in the process of Alu exonization. A novel result of this comparison was the importance of pre-mRNA secondary structure: more thermodynamically stable predicted secondary structure in an Alu arm harboring a potential Alu exon decreases the probability of an exonization event originating from this Alu. Thus, this study is among the first to provide wide-scale statistical proof of the importance of secondary structure in the context of exon selection. We identified numerous further factors differentiating between Alu exons and non-exons, and integrated them in a machine learning classification model. This model displayed a high performance in classifying Alu exons and non-exons. Moreover, the strength of predictions by this model correlated with biological inclusion levels, and higher probabilities of exonization were given by the model to constitutive exons than to alternative ones. These findings indicate that the features identified in this study may form the basis for precise exon selection, and make the difference between a non-selected element, an alternatively selected element, and a constitutively selected one. We set out to determine the features underlying the recognition of Alu exons by the splicing machinery. We therefore required datasets of Alus that undergo and that do not undergo exonization. We took advantage of the fact that Alu elements may exonize either from the right or from the left arm, and composed three core datasets (Figure 1A): (1) a dataset of 313 Alu exons (AEx) that are exonized from the right Alu arm, termed AEx-R; (2) a dataset of 77 Alus that undergo exonization in the left arm, termed AEx-L; (3) a dataset of 74,470 intronic Alus lacking any evidence of exonization, called No AEx. In all these datasets, Alus had to be embedded in the antisense orientation within genes, since most exonization events of Alus occur in this orientation [19], [23], [28]. Finally, to allow direct comparison between parallel positions in different Alus, we used pairwise alignments to align each Alu in each of the datasets against an Alu consensus sequence. We next computationally searched for the optimal borders, or splice sites, of non-exons within both the right arm and the left arm of the sequences in the No AEx dataset. This was done in two steps: (1) we first empirically determined the positional windows in which the selected 3′ss and 5′ss appeared within exonizing Alus; (2) we next searched the above-determined positional windows for the highest scoring splicing signals (see Materials and Methods). We found that computational selection of the highest scoring splicing signal yielded a high extent of congruence (ranging between 74%–96%, depending on the arm and on the signal) with the “true” splicing signal based on EST data. Since the congruence was not perfect, we created two control datasets based on the AEx-R and AEx-L groups, termed AEx-R(c) and AEx-L(c), respectively, in which exon borders were searched for computationally as in the No AEx dataset. These two subsets were used to verify that differences between the exonizing and non-exonizing datasets were not due to the manner in which exons and non-exons were derived (ESTs versus computational predictions). To complete the picture, we computationally searched for non-exons within the right arm of the AEx-L group and in the left arm of the AEx-R group. Notably, we demanded that all exons within all datasets have a minimal potential 3′ss (AG) and 5′ss (GT/GC), because lacking such minimal conditions Alus cannot undergo exonization at all. Thus, our analyses are based on three core and two control sets of Alus, with two sets of start and end coordinates mapped for each Alu—one in the right arm and one in the left (see Materials and Methods for further details). Previous studies, based on much smaller datasets, implicated the 3′ss [24] and the 5′ss [26] splicing signals as major factors determining exonization events. To assess whether this held for our dataset as well, we calculated the strength of the 5′ss and 3′ss of the exons/non-exons in the right and in the left arms in each of the five datasets. Indeed, we found that in the right arms the 3′ss and the 5′ss scores were highest among those Alus that underwent exonization (Figure 1B and 1C, respectively). Similarly, in the left arms, the
scores of the 3′ss and the 5′ss are highest among the exonizing Alus (Figure 1D and 1E, respectively). These results were highly statistically significant (see Text S1). Moreover, these differences are even more pronounced when comparing the two control datasets to their non-exonizing counterparts (compare the results for AEx-R and AEx-L to AEx-R(c) and AEx-L(c), respectively, in Figure 1B–E). Thus, these analyses fit in with previous analyses emphasizing the role of the two major splicing signals. We were interested in assessing the role of secondary structure in the context of Alu exonization events. We therefore began by computing the thermodynamic stabilities of the secondary structures predicted for the Alus in each of the core datasets. We used RNAfold [29] to calculate the secondary structure partition function; but rather than use this metric directly, we used a dinucleotide randomization approach to yield a Z-score that is not sensitive to sequence length or nucleotide composition (see Materials and Methods). We found that Alus that gave rise to exonization events, regardless of whether from the left or from the right arm, were characterized by weaker secondary structures than Alus that do not undergo exonization (Figure 2A). This was highly significant in the case of exonizations originating from the right arm (AEx-R vs. No AEx, p = 9.8E−12) and of borderline significance for the left arm exonizations (AEx-L vs. No AEx, p = 0.07). This provided the first indication that strong secondary structures might prevent Alu exonizations. To pinpoint the subsequences to which the differences in strength of secondary structure could be attributed, we next calculated secondary structure Z-scores for each of the two Alu arms separately. We found that the secondary structures of the right and the left arms were weakest in cases in which these arms undergo exonization (Figure 2B and 2C, respectively). These changes relative to the No AEx group were highly significant (p = 2E−15 and p = 1.08E−5, respectively). Interestingly, the non-exonizing arm tended to have weaker secondary structure in those cases in which the opposite arm underwent exonization (p = 0.001 when comparing the left arm of the AEx-R to the No AEx dataset, and p = 0.055 when comparing the right arm of the AEx-L to the No AEx dataset). These observations suggested that secondary structures have a detrimental effect on the recognition of Alu exons primarily when the structure incorporates sequence from the exon itself, but also when stable structures are located in relative proximity to the exon. Secondary structure has been shown to impair exon recognition by affecting the accessibility of splice sites [8], [9], [11], [12], [30]. To examine whether sequestration of splice sites within secondary structures plays a role in the context of Alu exonizations, we used a measure indicating the probability that all bases in a motif are unpaired (denoted probability unpaired, or PU value) [31]. Briefly, this measure indicates the probability that a motif, located within a longer sequence, is not participating in a secondary structure. Higher values indicate that the motif is more likely to be single stranded and lower values indicate a greater likelihood of participating in a secondary structure (see Materials and Methods). We assessed the single strandedness of the two most
frequently selected 5′ss in the right arm, located at positions 156 and 176 relative to the consensus (also termed sites B and C [28]), and the most frequently selected 5′ss of the left arm, located at position 291 (see Figure 2A). We found that 5′ss selected in exonization events are characterized by significantly higher PU values than their non-exonizing counterparts, indicating that selected 5′ss have a lower tendency to participate in secondary structures (see Figure 2E–G). We repeated this analysis for the two most frequently selected 3′ss in the right arm and the most frequently selected 3′ss in the left arm, but did not observe higher single-strandedness in the selected 3′ss with respect to their non-selected counterparts (data not shown). However, this finding may also be attributed to the fact that all Alus, regardless of whether they undergo exonization or not, are characterized by relatively strong 3′ss, due to the poly-T stretch characterizing them (see Discussion). See Text S1 for description of a control analysis. Intron-exon architecture has well-documented effects on splicing. Therefore, we compared the lengths of the Alu exons to their counterpart non-exons (diagram in Figure 3A). We found that exons were ∼10 nt longer than their non-exonizing counterparts (Figure 3C and 3D). Exons in the right arm of the AEx-R dataset were 112 nt long, on average, whereas non-exons were only 102 nt long in the No AEx dataset. The same trend was observed in the AEx-L dataset: exons in the left arm of the AEx-L dataset were 88 nt long, whereas the non-exons in the No AEx group were 78 nt long. In both cases, the differences were highly statistically significant (see Text S1). This indicates that increased exon length is an advantage in terms of exonization of Alu elements. Analyzing the lengths of the flanking introns, we found that introns flanking Alu exons were almost 50% shorter than those flanking their non-exonizing counterparts. Introns upstream of Alu exons in the AEx-R or AEx-L dataset were 7,216 and 9,497 nt long, respectively, on average (Figure 3B), but 14,458 nt long upstream of the non-exons in the No AEx group. These differences were highly significant (No AEx vs. AEx-R, p = 1.38E−13; No AEx vs. AEx-L, p = 0.0047). Highly significant findings were observed in the downstream intron as well. These introns were 7,844 and 9,210 nt long for exons in the AEx-R and AEx-L datasets, respectively, but 14,808 nt long for Alus in the No AEx dataset (Figure 3E). Taken together, these results indicate that recognition of exons by the splicing machinery correlates positively with exon length but negatively with intron length, yielding insight on the constraints and the mechanism of the splicing machinery (see Discussion). Based on both biologic and bioinformatic methodologies, datasets of exonic splicing enhancers (ESEs) and silencers (ESSs) have been compiled; these sequences are believed to increase or decrease, respectively, the spliceosome's ability to recognize exons. Indeed, exons were found to be enriched in ESRs with respect to pseudo-exons or introns [32]–[34]. Thus, our next step was to determine the densities of ESEs and ESSs in exons and non-exons. We made use of four datasets of exonic splicing regulators (ESRs): the groups of SR-protein binding sites in ESEfinder [35], the dataset of ESEs from Fairbrother et al. [36], the exonic splicing regulatory sequences compiled by Goren et al. that consists mostly of ESEs [37], and the ESS dataset compiled by Wang et al.
[38]. For each exon (or non-exon) in the two Alu arms (Figure 4A) in the three core and two control datasets, we calculated the ESR density for the four groups of ESRs. The ESR density was calculated as the total number of nucleotides within an exon that overlap with motifs from a given dataset, divided by the length of the exon. We found that Alu exons showed a marked tendency for enrichment in ESEs and depletion in ESSs with respect to their non-exonizing counterparts. Right-arm Alu exons had significantly higher densities of ESEfinder ESEs than their counterparts in the No AEx group (Figure 4B, p = 0.00007) and higher densities of ESEs from Fairbrother et al. (Figure 4C, p = 0.00009). Higher densities were also observed in terms of ESEs found in Goren et al. (Figure 4D), whereas slightly lower densities were observed for the ESSs of Wang et al. (Figure 4E); however, the trends for the latter two datasets were not statistically significant. In the left arms, similar tendencies were observed: exons originating from this arm were highly enriched in ESEs of Goren et al. (Figure 4H, p = 0.0001) and depleted in ESSs of Wang et al. (Figure 4I, p = 0.0003). They also tended to be enriched in ESEs of Fairbrother et al. (Figure 4G), although this was not significant (p = 0.12); and in this arm no differences were found in terms of ESEs of ESEfinder (Figure 4F, p = 0.72). To summarize, in all cases in which significant differences were observed, these differences reflect an increase in ESE densities in parallel with a decrease in ESS densities in exons relative to non-exons. Since the splicing machinery is able to differentiate between exonizing and non-exonizing Alus, we were interested in discovering whether the features identified here can give rise to such precise classification. Toward these aims, we used Support Vector Machine (SVM) machine learning, which has shown excellent empirical performance in a wide range of applications in science, medicine, engineering, and bioinformatics [39]. We created two classifiers: one discriminating between non-exonizing Alus and Alus exonizing from the right arm, and one discriminating between non-exonizing Alus and Alus exonizing from the left arm. Receiver-operator curves (ROC curves) were used to test performance. Briefly, ROC curves measure the tradeoff between sensitivity and specificity of a given classification. A perfect classification with 100% sensitivity and 100% specificity will yield an area under the curve (AUC) of 1, whereas a random classification will yield an AUC of 0.5 (see Materials and Methods for complete details of the SVM protocol used). Fourteen features were selected for the machine learning. These were divided into 5 clusters: 5′ss strength (1 feature: 5′ss score), 3′ss strength (1 feature: 3′ss score), secondary structure (5 features: Z-scores for the stability of secondary structure of the entire Alu and of each of the two Alu arms, PU values of the 5′ss, and PU values of the 3′ss), exon-intron architecture (3 features: lengths of upstream intron, of Alu exon, and of downstream intron), and ESRs (4 features: density in terms of each of the 4 groups of ESRs). Based on the above-described features, we were able to achieve a high degree of classification between exonizing and non-exonizing Alus. Figure 5A presents the ROC curves and AUC values for the classification between Alus exonizing from the right arm and non-exonizing Alus, and Figure 5B presents these values for the classification between the Alus exonizing from the left arm and the non-exonizing ones. The AUC values of ∼0.91 demonstrate that our features achieve a high degree of accuracy in discriminating between true exons and non-exons, thus mimicking the role of the splicing machinery. If selection of an Alu exon is indeed determined by this set of features, then this same set of features may well also determine the inclusion level of an Alu exon. A “strong” set of features will lead to a high selection rate by the spliceosome, and hence to high inclusion levels, whereas “weaker” features may lead to a more reduced selection rate by the spliceosome and to lower inclusion levels. Indeed, we found a positive, highly significant correlation between probabilities of exonization based on the SVM model and inclusion levels of exons based on EST data in the case of right-arm Alu exons (Pearson, r = 0.28, p = 6.35e−07). For the sake of comparison, the correlation between 5′ss scores and inclusion levels is considerably lower and less significant (r = 0.15, p = 0.007). Thus, although the computational model was explicitly trained on the basis of a dichotomous input (Alus were labeled either as exonizing or as non-exonizing), the model managed to capture the more stochastic nature of the spliceosomal recognition of exons. A positive correlation existed in the left arm as well, but this correlation was not significant, presumably due to the smaller number of Alus in the AEx-L dataset. Although our model was trained on Alus, and specifically on comparing non-exonizing Alus to mostly alternatively recognized Alus, we reasoned that the same set of features which make the difference between a non-recognized and an alternatively recognized Alu exon might also make the difference between an alternatively recognized exon and a constitutively recognized one. We therefore applied the SVM model to datasets of constitutive and cassette exons. For this purpose, we generated a dataset of 55,037 constitutive and 3,040 cassette exons based on EST data (see Materials and Methods). For each of these exons, we first extracted all above-described features, and then applied the SVM model to them. Our model classified constitutive and alternative exons as different in a highly statistically significant manner. The mean probability of undergoing exonization, provided by the logistic regression-transformed SVM model, was 73% for the constitutive exons, but only 60% for the alternative ones (Mann-Whitney, p < 2.2e−16). In addition, 82% of the constitutive exons were classified as “exonizing”, in comparison to only 63% of the alternative exons. These results demonstrate that the features learned by the SVM model are relevant for exonization in general, and control not only the shift of non-exons to alternative ones, but also of alternative exons to constitutive ones. Finally, we were interested in assessing the importance of different features in allowing correct discrimination between exonizing and non-exonizing elements. For this purpose, we used ΔAUC to measure the contribution of each feature cluster. This measure compares the performance of the classification with and without each cluster of features, with greater differences indicating greater contribution of a given cluster of features to precise classification. The feature with the highest contribution, both in the right arm (Figure 5C) and in the left arm (Figure 5D), was the strength of the 5′ss, in concordance with previous bioinformatic findings [26]. However, much information is included in the other features as well. The second most important feature in both the left and the right arm was exon-intron architecture. Secondary structure and the 3′ss had a comparable contribution in the right and left arms. Despite the differences in terms of ESR densities between the different datasets, this feature cluster had a negligible contribution to classification in the right arm, and a slightly higher one in the left arm. Using a mutual information-based metric to measure the contribution of the different features yielded similar, consistent results (see Text S1). In this study, we sought to determine how the splicing machinery distinguishes true exons from non-exons. Alu exonization provided a powerful model for approaching this question. Exonizing Alus have retained high sequence similarity to their non-exonizing counterparts but are perceived differently by the
splicing machinery ., Past studies have emphasized mainly the splice sites , but our results indicate the importance additional features that lead to exonization ., These features , which include splicing signals ( splice sites and ESRs ) , exon-intron architecture , and secondary structural features , achieved a high degree of classification between true Alu exons and non-exons , demonstrating the biological relevance of these layers in determining and controlling exonization events ., Perhaps the most interesting result to emerge from this study is that secondary structure is critical for exon recognition ., It has been assumed that pre-RNA is coated in vivo by proteins 10 and that these RNA-protein interactions either prevent pre-mRNAs from folding into stable secondary structures 40 or provide pre-mRNAs with a limited time span for folding 41 ., However , an increasing number of studies are finding that secondary structure plays a crucial role in the regulation of splicing ., Secondary structures involving entire exons ( e . g . , 5–7 ) , the splice sites only ( e . g . , 8 , 11 , 12 ) , or specific regulatory elements 42 , 43 were shown to be involved in the regulation of alternative splicing ., Hiller et al . 
14 recently found that regulatory elements within their natural pre-mRNA context were significantly more single stranded than controls ., Our current study puts these findings into a broad context , and provides bioinformatic evidence for the notion that the structural context of splicing motifs is part of the splicing code ., Such a structure , as we have shown , is detrimental for exonization in general , and specifically if it overlaps the 5′ss ., Several intriguing observations can be made when merging our results based on the exonizing and non-exonizing Alus with those of the alternative and constitutive datasets ., In terms of inclusion level , these four groups form a continuum , with non-exonizing Alus having a 0% inclusion level , exonizing Alus having a mean inclusion level of 10% , cassette exons having a mean inclusion level of 25% , and constitutive exons being included in 100% of the cases ., Gradual changes when moving from non-exonizing Alus , to exonizing Alus , to alternative exons , to constitutive ones are observed in several additional features: The strength of the 5′ss gradually increases from non-exonizing Alus to constitutive exons , the strength of the secondary structure gradually decreases , lengths of the upstream and downstream introns gradually decrease while length of the exons gradually increase ( see Figure 6 for detailed values ) ., These gradual changes are all coherent in biological terms: Stronger 5′ splice sites allow higher affinity of binding between the spliceosomal snRNAs and the 5′ss , and have well documented effects in increasing exon selection 28 , 44; stronger secondary structure can sequester binding sites of spliceosomal components; And it has been previously shown that longer flanking introns profoundly increase the likelihood that an exon is alternatively spliced 4 , and that alternative exons tend to be shorter than their constitutive counterparts ( reviewed by 16 ) , presumably due to spliceosomal constraints ., 
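The ΔAUC feature-contribution measure described earlier (classification performance with versus without each feature cluster) can be sketched roughly as follows. Everything here is a synthetic stand-in: the data, the cluster-to-column assignment, and a least-squares linear classifier used in place of the study's SVM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 400 elements x 6 features; label 1 = "exonizing".
y = rng.integers(0, 2, 400)
X = rng.normal(size=(400, 6))
X[:, 0] += 1.5 * y          # column 0 (think: 5'ss strength) is informative
X[:, 1] -= 0.8 * y          # column 1 (think: structure stability) is informative

# Hypothetical grouping of feature columns into clusters.
clusters = {"5ss": [0], "structure": [1], "architecture": [2, 3], "ESRs": [4, 5]}

def auc(scores, labels):
    # Probability that a random positive outscores a random negative (Mann-Whitney).
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

def auc_with_columns(cols):
    # In-sample least-squares linear classifier, a stand-in for the study's SVM.
    A = np.column_stack([X[:, cols], np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return auc(A @ w, y)

full_auc = auc_with_columns(list(range(6)))
for name, cols in clusters.items():
    kept = [c for c in range(6) if c not in cols]
    print(f"dAUC({name}) = {full_auc - auc_with_columns(kept):+.3f}")
```

A large drop when a cluster is removed marks that cluster as important for the discrimination, which is how the 5′ss emerged as the top contributor in both arms.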
In addition , our finding that selective constraints are simultaneously applied both on the lengths of the exons and of their flanking introns suggests that the exon and its flanking introns are recognized , to some extent , as a unit ., This challenges the more traditional exon-definition and intron-definition models 3 , 45 , according to which either the exon , or its flanking introns , but not both , are recognized by the splicing machinery ., Notably , in our search for features differentiating between exonizing and non-exonizing Alus , we focused only on features which can potentially be mechanistically employed by the splicing machinery to differentiate between exons and introns ., For this reason , we did not use phylogenetic conservation , nor the age of the Alu exons , nor the location of the exonization event ( CDS vs . UTR ) as features ., Although these features are informative as well ( see Text S1 , and 32 ) , and thus may potentially boost the performance of our classifier , these cannot be directly sensed by the spliceosome ., Rather , these elements reflect the evolutionary pressures to which an exonizing Alu element is subjected ., In our study we found that introns flanking exonizing Alus are dramatically shorter than the introns flanking their non-exonizing counterparts ., These results appear to contradict recent results 46 according to which there is a tendency for new exons to form within longer introns ., However , two points must be borne in mind in this context: First , the introns flanking exonizing Alus are longer than average introns , and thus our results are consistent with the above study in that exonizations occur in longer introns ., Second , our findings may reflect an upper bound in terms of intron length within which exonization optimally occurs , and introns longer than a certain threshold may cease to be good candidates for exonization ., Our results indicate that the Alu-trained model could be applied to a more general 
context of alternative and constitutive exons , where it yielded coherent results ., This does not , however , imply that all findings made in the context of Alus can be directly extrapolated to exons in general ., For Alu sequences , we found the 5′ss to be the most informative feature for correctly predicting exonization events , in agreement with previous findings 26 , 28 ., We found , however , that the 3′ss , which was also found to play a major role in exonization 24 , is less critical ., This finding may not necessarily hold for all exons ., The relatively low contribution of the 3′ss to Alu exonization may reflect the general tendency of Alus to have relatively strong splice signals at their 3′ end , regardless of whether they undergo exonization or not ., This is since the poly-T track , present in all Alus in the antisense orientation , serves as a strong polypyrimidine tract 24 , 47 ., On the other hand , our results regarding the importance of ESRs are consistent with several previous studies that have found exons to be enriched in ESRs with respect to pseudo-exons , more poorly recognized exons , and introns 32–34 ., Thus , while the importance of different features may vary from one exon to another , our results provide a general understanding of the features impacting on exon recognition ., It is noteworthy , that the majority of Alu exonization events in our two exonizing datasets presumably reflect either errors of the splicing machinery or newly born exons , which presumably do not give rise to functional proteins ( see also 48 ) ., This is indicated by the low inclusion level of the Alu exons , averaging 13% and 10% in the AEx-R and AEx-L groups , respectively ., In addition , the symmetry of the Alu exons ( i . e . 
, divisibility-by-three ) , at least in the AEx-R dataset , is very low: Only 23% of the exons are symmetric ( in the AEx-L dataset 55% of the Alus are symmetric ) ., Thus , the majority of Alus in this dataset insert a frame-shift mutation ., These numbers contrast with the 73% symmetry found in alternative events conserved between human and mouse 49 ., However , since our objective in this research was to understand the requirements of the spliceosome , the potential function of the transcript is irrelevant ., Moreover , newly born alternatively spliced Alu exons are the raw materials for future evolution: Given the right conditions and time , further mutations might generate a functional reading frame ., The features identified here provided good , but not perfect , classification using machine learning ., A number of factors underlie the non-perfect classification: For example , EST data is very noisy and far from providing a comprehensive coverage of all genes in all tissues 50 ., Therefore , many Alus categorized as non-e | Introduction, Results, Discussion, Materials and Methods | Despite decades of research , the question of how the mRNA splicing machinery precisely identifies short exonic islands within the vast intronic oceans remains to a large extent obscure ., In this study , we analyzed Alu exonization events , aiming to understand the requirements for correct selection of exons ., Comparison of exonizing Alus to their non-exonizing counterparts is informative because Alus in these two groups have retained high sequence similarity but are perceived differently by the splicing machinery ., We identified and characterized numerous features used by the splicing machinery to discriminate between Alu exons and their non-exonizing counterparts ., Of these , the most novel is secondary structure: Alu exons in general and their 5′ splice sites ( 5′ss ) in particular are characterized by decreased stability of local secondary structures with respect to their 
non-exonizing counterparts ., We detected numerous further differences between Alu exons and their non-exonizing counterparts , among others in terms of exon–intron architecture and strength of splicing signals , enhancers , and silencers ., Support vector machine analysis revealed that these features allow a high level of discrimination ( AUC = 0 . 91 ) between exonizing and non-exonizing Alus ., Moreover , the computationally derived probabilities of exonization significantly correlated with the biological inclusion level of the Alu exons , and the model could also be extended to general datasets of constitutive and alternative exons ., This indicates that the features detected and explored in this study provide the basis not only for precise exon selection but also for the fine-tuned regulation thereof , manifested in cases of alternative splicing . | A typical human gene consists of 9 exons around 150 nucleotides in length , separated by introns that are ∼3 , 000 nucleotides long ., The challenge of the splicing machinery is to precisely identify and ligate the exons , while removing the introns ., We aimed to understand how the splicing machinery meets this momentous challenge , based on Alu exonization events ., Alus are transposable elements , of which approximately one million copies exist in the human genome , a large portion of which reside within introns ., Throughout evolution , some intronic Alus accumulated mutations and became recognized by the splicing machinery as exons , a process termed exonization ., Such Alus remain highly similar to their non-exonizing counterparts but are perceived as different by the splicing machinery ., By comparing exonizing Alus to their non-exonizing counterparts , we were able to identify numerous features in which they differ and which presumably lead to the recognition only of the former by the splicing machinery ., Our findings reveal insights regarding the role of local RNA secondary structures , exon–intron
architecture constraints , and splicing regulatory signals ., We integrated these features in a computational model , which was able to successfully mimic the function of the splicing machinery and discriminate between true Alu exons and their intronic counterparts , highlighting the functional importance of these features . | computational biology/alternative splicing | null |
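The reported correlation between computationally derived exonization probabilities and biological inclusion levels can be illustrated with a toy sketch. The decision values and inclusion levels below are synthetic, and the Platt-style logistic link is an assumption, not necessarily what the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classifier decision values for 200 exonizing Alus, and
# synthetic EST-based inclusion levels on a 0-1 scale for the same exons.
decision_values = rng.normal(0.5, 1.0, 200)
inclusion = 1 / (1 + np.exp(-(decision_values + rng.normal(0, 0.8, 200))))

# Map decision values to exonization probabilities (Platt-style logistic link).
p_exonization = 1 / (1 + np.exp(-decision_values))

def spearman(a, b):
    # Spearman rho = Pearson correlation of the ranks (no ties in continuous data).
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(p_exonization, inclusion)
print(f"Spearman rho = {rho:.2f}")
```

A positive rank correlation of this kind is what supports the claim that the same features govern not only whether an Alu is exonized but also how strongly the resulting exon is included.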
1,428 | journal.pcbi.1006328 | 2,018 | Inter-trial effects in visual pop-out search: Factorial comparison of Bayesian updating models | In everyday life , we are continuously engaged in selecting visual information to achieve our action goals , as the amount of information we receive at any time exceeds the available processing capacity ., The mechanisms mediating attentional selection enable us to act efficiently by prioritizing task-relevant , and deprioritizing irrelevant , information ., Of importance for the question at issue in the present study , the settings that ensure effective action in particular task episodes are , by default , buffered by the attentional control system and carried over to subsequent task episodes , facilitating performance if the settings are still applicable and , respectively , impairing performance if they no longer apply owing to changes in the task situation ( in which case the settings need to be adapted accordingly ) ., In fact , in visual search tasks , such automatic carry-over effects may account for more of the variance in the response times ( RTs ) than deliberate , top-down task set 1 ., A prime piece of evidence in this context is visual search for so-called singleton targets , that is , targets defined by being unique relative to the background of non-target ( or distractor ) items , whether they differ from the background by one unique feature ( simple feature singletons ) or a unique conjunction of features ( conjunction singletons ) : singleton search is expedited ( or slowed ) when critical properties of the stimuli repeat ( or change ) across trials ., Such inter-trial effects have been found for repetitions/switches of , for example , the target-defining color 2 , 3 , size 4 , position 5 , and , more generally , the target-defining feature dimension 6 , 7 ., The latter has been referred to as the dimension repetition/switch effect , that is: responding to a target repeated from the same dimension ( e . g . 
, color ) is expedited even when the precise target feature is different across trials ( e . g . , changing from blue on one trial to red on the next ) , whereas a target switch from one dimension to another ( e . g . , from orientation to color ) causes a reaction time cost ( ‘dimension repetition effect’ , DRE ) 8–10 ., While inter-trial effects have been extensively studied , the precise nature of the processes that are being affected remains unclear ., Much of the recent work has been concerned with the issue of the processing stage ( s ) at which inter-trial effects arise ( for a review , see 11 ) ., Müller and colleagues proposed that inter-trial effects , in particular the dimension repetition effect , reflect facilitation of search processes prior to focal-attentional selection ( at a pre-attentive stage of saliency computation ) 10 ., However , using a non-search paradigm with a single item presented at a fixed ( central ) screen location , Mortier et al . 12 obtained a similar pattern of inter-trial effects–leading them to conclude that the DRE arises at the post-selective stage of response selection ., Rangelov and colleagues 13 demonstrated that DRE effects can originate from distinct mechanisms in search tasks making different task demands ( singleton feature detection and feature discrimination ) : pre-attentive weighting of the dimension-specific feature contrast signals and post-selective stimulus processing–leading them to argue in favor of a multiple weighting systems hypothesis ., Based on the priming of pop-out search paradigm , a similar conclusion 11 has also been proposed , namely , inter-trial effects arise from both attentional selection and post-selective retrieval of memory traces from previous trials 4 , 14 , favoring a dual-stage account 15 ., It is important to note that those studies adopted very different paradigms and tasks to examine the origins of inter-trial effects , and their analyses are near-exclusively based on differences
in mean RTs ., Although such analyses are perfectly valid , much information about trial-by-trial changes is lost ., Recent studies have shown that the RT distribution imposes important constraints on theories of visual search 16 , 17 ., RT distributions in many different task domains have been successfully modeled as resulting from a process of evidence accumulation 18 , 19 ., One influential evidence accumulation model is the drift-diffusion model ( DDM ) 20–22 ., In the DDM , observers sequentially accumulate multiple pieces of evidence , each in the form of a log likelihood ratio of two alternative decision outcomes ( e . g . , target present vs . absent ) , and make a response when the decision information reaches a threshold ( see Fig 1 ) ., The decision process is governed by three distinct components: a tendency to drift towards either boundary ( drift rate ) , the separation between the decision boundaries ( boundary separation ) , and a starting point ., These components can be estimated for any given experimental condition and observer by fitting the model to the RT distribution obtained for that condition and observer ., Estimating these components makes it possible to address a question that is related to , yet separate from the issue of the critical processing stage ( s ) and that has received relatively less attention: do the faster RTs after stimulus repetition reflect more efficient stimulus processing , for example: expedited guidance of attention to more informative parts of the stimulus , or rather a bias towards giving a particular one of the two alternative responses or , respectively , a tendency to require less evidence before issuing either response ., The first possibility , more efficient processing , would predict an increase in the drift rate , that is , a higher speed of evidence accumulation ., A bias towards one response or a tendency to require less evidence would , on the other hand , predict a decreased distance between the
starting point and the decision boundary associated with that response ., In the case of bias , this would involve a shift of the starting point towards that boundary , while a tendency to require less evidence would be reflected in a decrease of the boundary separation ., While response bias is more likely associated with changes at the post-selective ( rather than pre-attentive ) processing stage , the independence of the response selection and the attentional selection stage has been challenged 23 ., For simple motor latencies and simple-detection and pop-out search tasks 24 , there is another parsimonious yet powerful model , namely the LATER ( Linear Approach to Threshold with Ergodic Rate ) model 25 , 26 ., Unlike the drift-diffusion model , which assumes that evidence strength varies across the accumulative process , the LATER model assumes that evidence is accumulated at a constant rate during any individual perceptual decision , but that this rate varies randomly across trials following a normal distribution ( see Fig 1 ) ., Such a pattern has been observed , for instance , in the rate of build-up of neural activity in the motor cortex of monkeys performing a saccade-to-target task 27 ., Similar to the DDM , the LATER model has three important parameters: the ergodic rate, ( r ) , the boundary separation ( θ ) , and a starting point ( S0 ) ., However , the boundary separation and starting point are not independent , since the output of the model is completely determined by the rate and the separation between the starting point and the boundary; thus , in effect , the LATER model has only two parameters ., The evidence accumulation process can be interpreted in terms of Bayesian probability theory 26 , 28 ., On this interpretation , the linear approach to threshold with ergodic rate represents the build-up of the posterior probability that results from adding up the log likelihood ratio ( i . e . 
, evidence ) of a certain choice being the correct one and the initial bias that derives from the prior probability of two choices ., The prior probability should affect the starting point S0 of the evidence accumulation process: S0 should be the closer to the boundary the higher the prior probability of the outcome that boundary represents ., The drift rate , by contrast , should be influenced by any factor that facilitates or impedes efficient accumulation of task-relevant sensory evidence , such as spatial attentional selection ., The present study was designed to clarify the nature of the inter-trial effects for manipulations of target presence and the target-defining dimension as well as inter-trial dimension repetitions and switches ., If inter-trial effects reflect a decision bias , this should be reflected in changes of the decision boundary and/or the starting point ., By contrast , if inter-trial effects reflect changes in processing efficiency , which might result from allocating more attentional resources ( or weight ) to the processing of the repeated feature/dimension 6 , the accumulation rate r should be changed ., Note that neither the DDM nor the LATER model provides any indication of how the initial starting point might change across trials ., Given that the inter-trial effects are indicative of the underlying trial-by-trial dynamics , we aimed to further analyze trial-wise changes of the prior and the accumulation rate , and examine how a new prior is learned when the stimulus statistics change , as reflected in changes of the starting point to decision boundary separation during the learning process ., To address these inter-trial dynamics , we adopted the Dynamic Belief Model ( DBM ) 29 ., The DBM has been successfully used to explain why performance on many tasks is better when a stimulus matches local patterns in the stimulus history even in a randomized design where it is not actually possible to use stimulus history for ( better-than-chance 
) prediction ., Inter-trial effects arise naturally in the DBM ., This is because the DBM assumes a prior belief about non-stationarity , that is: participants are updating their beliefs about the current stimulus statistics while assuming that these can change at any time ., The assumption of non-stationarity leads to something similar to exponential discounting of previous evidence , that is , the weight assigned to previous evidence decreases exponentially with the time ( or number of updating events ) since it was acquired ., Consequently , current beliefs about what is most likely to happen on an upcoming trial will always be significantly influenced by what occurred on the previous trial , resulting in inter-trial effects ., Thus , here we combine a belief-updating model closely based on the DBM , for modelling the learning of the prior , with the DDM and , respectively , the LATER model for predicting RTs ., A very similar model has previously been proposed to explain results in saccade-to-target experiments 30 ., We also consider the possibility that the evidence accumulation rate as well as the starting point may change from trial to trial ., To distinguish between different possible ways in which stimulus history could have an influence via updating of the starting point and/or the rate , we performed three visual search experiments , using both a detection and a discrimination task and manipulating the probability of target presence , as well as the target-defining dimension ., Based on the RT data , we then performed a factorial model comparison ( cf . 
31 ) , where both the response history and the history of the target dimension can affect either the starting point or the rate ., The results show that the model that best explains both the effects of our probability manipulation and the inter-trial effects is the one in which the starting point is updated based on response history and the rate is updated based on the history of the target dimension ., The singleton search was quite easy , with participants making few errors overall: mean error rates were 1 . 5% , 2 . 5% , and 3 . 3% in Experiments 1 , 2 , and 3 respectively ( Fig 2 ) ., Despite the low average error rates , error rates differed significantly between blocks in both Experiments 1 and 2 F ( 1 . 34 , 14 . 78 ) =11 . 50 , p<0 . 01 , ηp2=0 . 51 , BF=8372 , and F ( 2 , 22 ) =12 . 20 , p<0 . 001 , ηp2=0 . 53 , BF=3729 , respectively: as indicated by post-hoc comparisons ( S1 Text ) , error rates were higher in the low-frequency blocks compared to the medium- and high-frequency blocks , without a significant difference between the latter ., In addition , in Experiment 1 , error rates were overall higher for target-present than for target-absent trials , that is , there were more misses than false alarms , F ( 1 , 11 ) =11 . 43 , p<0 . 01 , ηp2=0 . 51 , BF=75 ., In contrast , there was no difference in error rates between color and orientation targets in Experiment 2 , F ( 1 , 11 ) = 0 . 70 , p = 0 . 42 , BF = 0 . 33 ., In Experiment 3 , there was no manipulation of target ( or dimension ) frequency , but like in Experiment 1 , error rates were higher on target-present than on target-absent trials , t ( 11 ) = 4 . 25 , p < 0 . 01 , BF = 30 . 7; and similar to Experiment 2 , there was no significant difference in error rates between color and orientation targets , t ( 11 ) = 1 . 51 , p = 0 . 16 , BF = 0 . 
71 ., Given the low error rates , we analyzed only RTs from trials with a correct response , though excluding outliers , defined as trials on which the inverse RT ( i . e . , 1/RT ) was more than three standard deviations from the mean for any individual participant ., Fig 3 presents the pattern of mean RTs for all three experiments ., In both Experiments 1 and 2 , the main effect of frequency was significant F ( 2 , 22 ) =10 . 25 , p<0 . 001 , ηp2=0 . 48 , BF=73 , and , respectively , F ( 1 . 27 , 13 . 96 ) =29 . 83 , p<0 . 01 , ηp2=0 . 73 , BF=8 . 7*108 ., Post-hoc comparisons ( see S2 Text ) confirmed RTs to be faster in high-frequency compared to low-frequency blocks , indicative of participants adapting to the stimulus statistics in a way such as to permit faster responses to the most frequent type of trial within a given block ., In addition , in Experiment 1 , RTs were faster for target-present than for target-absent trials F ( 1 , 11 ) =5 . 94 , p<0 . 05 , ηp2=0 . 35 , BF=51 , consistent with the visual search literature ., In contrast , there was no difference between color- and orientation-defined target trials in Experiment 2 , and no interaction between target condition and frequency in either Experiment 1 or 2 ( S2 Text ) –suggesting that the effect of frequency is independent of the target stimuli ., Comparing the error rates depicted in Fig 2 and the mean RTs in Fig 3 , error rates tended to be lower for those frequency conditions for which RTs were faster ., While this rules out simple speed-accuracy trade-offs , it indicates that participants were adapting to the statistics of the stimuli in a way that permitted faster and more accurate responding to the most frequent type of trial within a given block , at the cost of slower and less accurate responding on the less frequent trial type ., A possible explanation of these effects is a shift of the starting point of a drift-diffusion model towards the boundary associated with the response associated 
with the most frequent type of trial; as will be seen below ( in the modeling section ) , the shapes of the RT distributions were consistent with this interpretation ., Without a manipulation of frequency , Experiment 3 yielded a standard outcome: all three types of trial yielded similar mean RTs , F ( 2 , 22 ) = 2 . 15 , p = 0 . 14 , BF = 0 . 71 ., This is different from Experiment 1 , in which target-absent RTs were significantly slower than target-present RTs ., This difference was likely obtained because the target-defining dimension was kept constant within short mini-blocks in Experiment 1 , but varied randomly across trials in Experiment 3 , yielding a dimension switch cost and therefore slower average RTs on target-present trials ( see modeling section for further confirmation of this interpretation ) ., Given our focus on inter-trial dynamic changes in RTs , we compared trials on which the target condition was switched to trials on which it was repeated from the previous trial ., Fig 4 illustrates the inter-trial effects for all three experiments ., RTs were significantly faster on target-repeat than on target-switch trials , in all experiments: Experiment 1 F ( 1 , 11 ) =6 . 13 , p<0 . 05 , ηp2=0 . 36 , BF=0 . 81 , Experiment 2 F ( 1 , 11 ) =71 . 29 , p<0 . 001 , ηp2=0 . 87 , BF=2 . 6*107 , and Experiment 3 F ( 1 , 11 ) =32 . 68 , p<0 . 001 , ηp2=0 . 75 , BF=625 ., Note that for Experiment 1 , despite the significant target-repeat/switch effect , the ‘inclusion’ BF ( see Methods ) suggests that this factor is negligible compared to other factors; a further post-hoc comparison of repeat versus switch trials has a BF of 5 . 88 , compatible with the ANOVA test ., The target repetition effect in all three experiments is consistent with trial-wise updating of an internal model ( see the modeling section ) ., The target repetition/switch effect was larger for target-absent responses ( i . e . 
, comparing repetition of target absence to a switch from target presence to absence ) than for target-present responses in Experiment 3 ( interaction inter-trial condition x target condition , F ( 1 , 11 ) =14 . 80 , p<0 . 01 , ηp2=0 . 57 , BF=18 ) , while there was no such a difference in Experiment 1 , F ( 1 , 11 ) = 2 . 55 , p = 0 . 14 , BF = 0 . 43 , and also no interaction between target dimension and inter-trial condition in Experiment 2 , F ( 1 , 11 ) = 0 . 014 , p = 0 . 91 , BF = 0 . 76 ., These findings suggest that , while the target repetition/switch effect as such is stable across experiments , its magnitude may fluctuate depending on the experimental condition ., The interaction between target condition and inter-trial condition seen in Experiment 3 , but not in Experiment 1 , is likely attributable to the fact that color and orientation targets were randomly interleaved in Experiment 3 , so that target-present repetitions include trials on which the target dimension did either repeat or change–whereas the target dimension was invariably repeated on consecutive target-present trials in Experiment 1 ., The effects of repeating/switching the target dimension are considered further below ., Note that in all experiments , we mapped two alternative target conditions to two fixed alternative responses ., The repetition and switch effects described above may be partly due to response repetitions and switches ., To further examine dimension repetition/switch effects when both dimensions were mapped to the same response , we extracted those target-present trials from Experiment 3 on which a target was also present on the immediately preceding trial ., Fig 5 depicts the mean RTs for the dimension-repeat versus -switch trials ., RTs were faster when the target dimension repeated compared to when it switched , F ( 1 , 11 ) =25 . 06 , p<0 . 001 , ηp2=0 . 
70 , BF=1905 , where this effect was of a similar magnitude for color- and orientation-defined targets interaction target dimension x dimension repetition , F ( 1 , 11 ) = 0 . 44 , p = 0 . 84 , BF = 0 . 33 ., There was also no overall RT difference between the two types of target main effect of target dimension , F ( 1 , 11 ) = 0 . 16 , p = 0 . 69 , BF = 0 . 34 , indicating that the color and orientation targets were equally salient ., This pattern of dimension repetition/switch effects is in line with the dimension-weighting account 8 ., Of note , there was little evidence of a dimension repetition benefit from two trials back , that is , from trial n-2 to trial n: the effect was very small ( 3 ms ) and not statistically significant t ( 23 ) = 0 . 81 , p = 0 . 43 , BF = 0 . 38 ., In addition to inter-trial effects from repetition versus switching of the target dimension , there may also be effects of repeating/switching the individual target-defining features ., To examine for such effects , we extracted those trials on which a target was present and the target dimension stayed the same as on the preceding trial , and examined them for ( intra-dimension ) target feature repetition/switch effects ., See Fig 6 for the resulting mean RTs ., In Experiments 1 and 3 , there was no significant main effect of feature repetition/switch Exp . 1: F ( 1 , 11 ) = 0 . 30 , p = 0 . 593 , BF = 0 . 30 , Exp . 3: F ( 1 , 11 ) = 3 . 77 , p = 0 . 078 , BF = 0 . 76 , nor was there an interaction with target dimension Exp . 1: F ( 1 , 11 ) = 2 . 122 , p = 0 . 17 , BF = 0 . 44 , Exp . 3: F ( 1 , 11 ) = 0 . 007 , p = 0 . 93 , BF = 0 . 38 ., In contrast , in Experiment 2 ( which required an explicit target dimension response ) , RTs were significantly faster when the target feature repeated compared to when it switched within the same dimension , F ( 1 , 11 ) =35 . 535 , p<0 . 001 , ηp2=0 . 
764 , BF=13 , and this effect did not differ between the target-defining , color and orientation , dimensions , F ( 1 , 11 ) = 1 . 858 , p = 0 . 2 , BF = 0 . 57 ., Note though that , even in Experiment 2 , this feature repetition/switch effect was smaller than the effect of dimension repetition/switch ( 20 vs . 54 ms , t ( 11 ) = 5 . 20 , p<0 . 001 , BF = 122 ) ., In summary , the results revealed RTs to be expedited when target presence or absence or , respectively , the target-defining dimension ( on target-present trials ) was repeated on consecutive trials ., However , the origin of these inter-trial effects is unclear: The faster RTs for cross-trial repetitions could reflect either more efficient stimulus processing ( e . g . , as a result of greater ‘attentional ‘weight’ being assigned to a repeated target dimension ) or a response bias ( e . g . , an inclination to respond ‘target present’ based on less evidence on repeat trials ) , or both ., In the next section , we will address the origin ( s ) of the inter-trial effects by comparing a range of generative computational models and determining which parameters are likely involved in producing these effects ., Because feature-specific inter-trial effects , if reliable at all ( they were significant only in Exp . 2 , which required an explicit target dimension response ) , were smaller than the inter-trial effects related to either target presence/absence or the target-defining dimension ( e . g . , in Exp . 3 , a significant dimension-based inter-trial effect of 39 ms compares with a non-significant feature-based effect of 11 ms ) , we chose to ignore the feature-related effect in our modeling attempt ., With the full combination of the four factors , there were 144 ( 2 x 2 x 6 x 6 ) models altogether for comparison: non-decision time ( with/without ) , evidence accumulation models ( DDM vs . 
LATER), RDF-based updating (6 factor levels), and TDD-based updating (6 factor levels). We fitted all models to individual-participant data across the three experiments, which, with 12 participants per experiment, yielded 5184 fitted models (see S7 Text for RT distributions and model fits for the factor levels with no updating but with a non-decision time). Several data sets could not be fitted with the full memory version of the starting point updating level (i.e., Level 2) of the dimension-based updating factor, due to the parameter updating to an extreme. We therefore excluded this level from further comparison. To obtain a better picture of the best model predictions, we plotted predicted versus observed RTs in Fig 11. Each point represents the average RT over all trials from one ratio condition, one trial condition, and one inter-trial condition in a single participant. There are 144 points each for Experiments 1 and 2 (12 participants x 3 ratios x 2 trial conditions x 2 inter-trial conditions) and 108 for Experiment 3 (12 participants x 3 trial conditions x 3 inter-trial conditions). The predictions were made based on the best model for each experiment, in terms of the average AIC (see Figs 8, 9 and 10). The r2 value of the best linear fit is 0.85 for Experiment 1, 0.86 for Experiment 2, 0.98 for Experiment 3, and 0.89 for all the data combined. Fig 12 presents examples of how the starting point (S0) and rate were updated according to the best model (in AIC terms) for each experiment. For all experiments, the best model used starting point updating based on the response-defining feature (Fig 12A, 12C and 12E, left panels). In Experiments 1 and 2, the trial samples shown were taken from blocks with an unequal ratio; so, for the starting point, the updating results are biased towards the (correct) response on the most frequent type of trial (Fig 12A and 12C). In Experiment 3, the ratio was equal; so, while the starting point exhibits a small bias on most trials (Fig 12E), it is equally often biased towards either response. Since, in a block with unequal ratio, the starting point becomes biased towards the most frequent response, the model predicts that the average starting-point-to-boundary separation for each response will be smaller in blocks in which that response is more frequent. This predicts that RTs to a stimulus requiring a particular response should become faster with increasing frequency of that stimulus in the block, which is what we observed in our behavioral data. In addition, since, after each trial, the updating rule moves the starting point towards the boundary associated with the response on that trial, the separation between the starting point and the boundary will be smaller on trials on which the same response was required on the previous trial, compared to a response switch. This predicts faster RTs when the same response is repeated, in line with the pattern in the behavioral data. The forgetting mechanism used in the best models ensures that such inter-trial effects will occur even after a long history of previous updates. In Experiment 1, the best model did not use any updating of the drift rate, but a different rate was used for each dimension and for target-absent trials (Fig 12B). In Experiment
2, the best model updated the rate based on the ‘Rate with decay’ rule described above. The rate is increased when the target-defining dimension is repeated, and decreased when the dimension switches, across trials, and these changes can build up over repetitions/switches, though with some memory decay (Fig 12D). Since the target dimension was (also) the response-defining feature in Experiment 2, the rate updating would contribute to the ‘response-based’ inter-trial effects. In Experiment 3, the best model involved the ‘Weighted rate’ rule. Note that the rate tends to be below the baseline level (dashed lines) after switching from the other dimension, but grows larger when the same dimension is repeated (Fig 12F). This predicts faster RTs after a dimension repetition compared to a switch, which is what we observed in the behavioral data. In three experiments, we varied the frequency distribution over the response-defining feature (RDF) of the stimulus in a visual pop-out search task, that is, target presence versus target absence (Experiments 1 and 3) or, respectively, the dimension, color versus orientation, along which the target differed from the distractors (Experiment 2). In both cases, RTs were overall faster to stimuli of that particular response-defining feature that occurred with higher frequency within a given trial block. There were also systematic inter-trial ‘history’ effects: RTs were faster both when the response-defining feature and when the target-defining dimension repeated across trials, compared to when either of these changed. Our results thus replicate previous findings of dimension repetition/switch effects 6, 9. In contrast to studies on ‘priming of pop-out’ (PoP) 3, 32–34, we did not find significant feature-based repetition/switch effects (consistent with 6), except for Experiment 2, in which the target dimension was also the response-defining feature. The dimension
repetition/switch effects that we observed were also not as ‘long-term’ compared to PoP studies, where significant feature ‘priming’ effects emerged from as far as eight trials back from the current trial. There are (at least) two differences between the present study and the PoP paradigms, which likely contributed to these differential effect patterns. First, we employed dense search displays (with a total of 39 items, maximizing local target-to-non-target feature contrast), whereas PoP studies typically use much sparser displays (e.g., in the ‘prototypical’ design of Maljkovic & Nakayama 3, 32–34: 3 widely spaced items, one target and two distractors). Second, the features of our distractors remained constant, whereas in PoP studies the search-critical features of the target and the distractors are typically swapped randomly across trials. There is evidence indicating that, in the latter displays, the target is actually not the first item attended on a significant proportion of trials (according to 35, on some 20% up to 70%), introducing an element of serial scanning, especially on feature-swap trials, on which there is a tendency for attention (and the eye) to be deployed to a distractor that happens to have the same (color) feature as the target on the previous trial (for eye movement evidence, see, e.g., 36, 37). Given this happens frequently, feature checking would become necessary to ensure that it is the (odd-one-out) target item that is attended and responded to, rather than one of the distractors. As a result, feature-specific effects would come to the fore, whereas these would play only a minor role when the target can be reliably found based on strong (local) feature contrast 38. For this reason, we opted to start our modeling work with designs that, at least in our hands, optimize pop-out (see also 39), focusing on simple target detection and ‘non-compound’ discrimination tasks in the first instance. Another difference is that we used simple detection and ‘non-compound’ discrimination tasks in our experiments, while PoP experiments typically employ ‘compound’ tasks, in which the response-defining feature is independent of the target-defining feature. We do not believe that the latter difference is critical, as reliable dimension repetition/change effects have also been observed with compound-search tasks (e.g., 40), even though, in terms of the final RTs, these are weaker compared to simple response tasks because they are subject to complex interactions arising at a post-selective processing stage (see below and 41, 42). To better understand the basis of the effects we obtained, we analyzed the shape of the RT distributions, using the modified LATER model 26 and the DDM 21, 22. Importantly, in addition to fitting these models to the RT distribution across trials, we systematically compared and contrasted different rules of how two key parameters of the LATER/DDM models, the starting point (S0) or the rate (r) of the evidence accumulation process, might be dynamically adapted, or updated, based on trial history. We assumed two aspects of the stimuli to be potentially relevant for updating the evidence accumulation parameters: the response-defining feature (RDF) and the target-defining dimension (TDD; in Experiment 2, RDF and TDD were identical). Thus, in our full factorial model comparison, trial-by-trial updating was based on either the response-defining feature or the target dimension (factor 1), combined with updating of either the starting point or the rate of evidence accumulation (factor 2), with a number of different possible updating rules for each of these (6 factor levels each). An additional factor (factor 3) in our model comparison was the evidence accumulation model used to predict RT distributions: either the DDM or the LATER model. Finally, to compare the DDM and LATER models on as equal terms as possible, we modified the original LATER model by adding a non-decision time component. Thus, the fourth and final factor concerned w | Introduction, Results, Discussion, Methods | Many previous studies on visual search have reported inter-trial effects, that is, observers respond faster when some target property, such as a defining feature or dimension, or the response associated with the target repeats versus
changes across consecutive trial episodes. However, what processes drive these inter-trial effects is still controversial. Here, we investigated this question using a combination of Bayesian modeling of belief updating and evidence accumulation modeling in perceptual decision-making. In three visual singleton (‘pop-out’) search experiments, we explored how the probability of the response-critical states of the search display (e.g., target presence/absence) and the repetition/switch of the target-defining dimension (color/orientation) affect reaction time distributions. The results replicated the mean reaction time (RT) inter-trial and dimension repetition/switch effects that have been reported in previous studies. Going beyond this, to uncover the underlying mechanisms, we used the Drift-Diffusion Model (DDM) and the Linear Approach to Threshold with Ergodic Rate (LATER) model to explain the RT distributions in terms of decision bias (starting point) and information processing speed (evidence accumulation rate). We further investigated how these different aspects of the decision-making process are affected by different properties of stimulus history, giving rise to dissociable inter-trial effects. We approached this question by (i) combining each perceptual decision making model (DDM or LATER) with different updating models, each specifying a plausible rule for updating of either the starting point or the rate, based on stimulus history, and (ii) comparing every possible combination of trial-wise updating mechanism and perceptual decision model in a factorial model comparison. Consistently across experiments, we found that the (recent) history of the response-critical property influences the initial decision bias, while repetition/switch of the target-defining dimension affects the accumulation rate, likely reflecting an implicit ‘top-down’ modulation process. This provides strong evidence of a dissociation between response- and dimension-based inter-trial effects. | When a perceptual task is performed repeatedly, performance becomes faster and more accurate when there is little or no change of critical stimulus attributes across consecutive trials. This phenomenon has been explored in previous studies on visual ‘pop-out’ search, showing that participants can find and respond to a unique target object among distractors faster when properties of the target are repeated across trials. However, the processes that underlie these inter-trial effects are still not clearly understood. Here, we approached this question by performing three visual search experiments and applying mathematical modeling to the data. We combined models of perceptual decision making with Bayesian updating rules for the parameters of the decision making models, to capture the processing of visual information on each individual trial as well as possible mechanisms through which an influence can be carried forward from previous trials. A systematic comparison of how well different combinations of models explain the data revealed the best model to assume that perceptual decisions are biased based on the response-critical stimulus property on recent trials, while repetition of the visual dimension in which the target differs from the distractors (e.g., color or orientation) increases the speed of stimulus processing. | learning, decision making, reaction time, social sciences, neuroscience, learning and memory, cognitive neuroscience, cognitive psychology, mathematics, probability distribution, computer vision, cognition, memory, vision, computer and information sciences, target detection, probability theory, psychology, biology and life sciences, sensory perception, physical sciences, cognitive science | null |
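The modified LATER model described in this entry, a linear rise of evidence to threshold at a normally distributed rate plus an added non-decision time, can be sketched as follows. This is a minimal illustration; the parameter names and values are assumptions for demonstration, not the fitted values from the study.

```python
import random

def later_rt(rng, d=1.0, mu_r=5.0, sigma_r=1.0, t0=0.3):
    """Draw one RT (seconds) from a LATER-style model with non-decision time.

    Evidence rises linearly from a starting point a distance d below the
    threshold, at a rate r ~ Normal(mu_r, sigma_r); the decision time is
    d / r, and t0 is the non-decision component added to make the LATER
    model comparable with the DDM. Non-positive rates (no decision on
    that draw) are resampled.
    """
    r = rng.gauss(mu_r, sigma_r)
    while r <= 0:
        r = rng.gauss(mu_r, sigma_r)
    return t0 + d / r

# A smaller starting-point-to-threshold distance d (a bias toward this
# response) or a larger mean rate mu_r both shorten simulated RTs.
rng = random.Random(1)
rts = [later_rt(rng) for _ in range(5000)]
```

Trial-history effects of the kind compared in the factorial analysis would then correspond to updating d (starting point) or mu_r (rate) from trial to trial before each draw.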
1,855 | journal.pbio.2005952 | 2018 | Spatiotemporal coordination of cell division and growth during organ morphogenesis | The development of an organ from a primordium typically involves two types of processes: increase in cell number through division, and change in tissue shape and size through growth. However, how these processes are coordinated in space and time is unclear. It is possible that spatiotemporal regulation operates through a single control point: either on growth with downstream effects on division, or on division with downstream effects on growth. Alternatively, spatiotemporal regulation could act on both growth and division (dual control), with cross talk between them. Distinguishing between these possibilities is challenging because growth and division typically occur in a context in which the tissue is continually deforming. Moreover, because of the correlations between growth and division it can be hard to distinguish cause from effect 1. Plant development presents a tractable system for addressing such problems because cell rearrangements make little or no contribution to morphogenesis, simplifying analysis 2. A growing plant organ can be considered as a deforming mesh of cell walls that yields continuously to cellular turgor pressure 3, 4. In addition to this continuous process of mesh deformation, new walls are introduced through cell division, allowing mesh strength to be maintained and limiting cell size. It is thus convenient to distinguish between the continuous expansion and deformation of the mesh, referred to here as growth, and the more discrete process of introducing new walls causing increasing cell number, cell division 5–8. The developing Arabidopsis leaf has been used as a system for studying cell division control within a growing and deforming tissue. Developmental snapshots of epidermal cells taken at various stages of leaf development reveal a complex pattern of cell sizes and shapes across the leaf, comprising both stomatal and non-stomatal lineages 9. Cell shape analysis suggests that there is a proximal zone of primary proliferative divisions that is established and then abolished abruptly. Expression analysis of the cell cycle reporter construct cyclin1 Arabidopsis thaliana β-glucuronidase (cyc1At-GUS) 10 shows that the proximal proliferative zone extends more distally in the subepidermal as compared with the epidermal layer. Analysis of the intensity of cyc1At-GUS, which combines both epidermal and subepidermal layers, led to a one-dimensional model in which cell division is restricted to a corridor of fixed length in the proximal region of the leaf 11. The division corridor is specified by a diffusible factor generated at the leaf base, termed mobile growth factor, controlled by expression of Arabidopsis cytochrome P450/CYP78A5 (KLUH). Two-dimensional models have been proposed based on growth and cell division being regulated in parallel by a morphogen generated at the leaf base 12, 13. These models assume either a constant cell area at division, or constant cell cycle duration. The above models represent important advances in understanding the relationships between growth and division, but leave open many questions, such as the relations of divisions to anisotropic growth, variations along both mediolateral and proximodistal axes, variation between cell layers, variation between genotypes with different division patterns, and predictions in relation to mutants that modify organ size, cell numbers, and cell sizes 14. Addressing these issues can be greatly assisted through the use of live confocal imaging to directly quantify growth and division 15–22. Local rates and orientations of growth can be estimated by the rate that landmarks, such as cell vertices, are displaced away from each other. Cell division can be monitored by the appearance of new walls within cells. This approach has been used to
measure growth rates and orientations for developing Arabidopsis leaves and has led to a tissue-level model for its spatiotemporal control 16. Live tracking has also been used to follow stomatal lineages and inform hypotheses for stomatal division control 23. It has also been applied during a late stage of wild-type leaf development after most divisions have ceased 24. However, this approach has yet to be applied across an entire leaf for extended periods to compare different cell layers and genotypes. Here, we combine tracking and modelling of 2D growth in different layers of the growing Arabidopsis leaf to study how growth and division are integrated during organ morphogenesis. We exploit the speechless (spch) mutant to allow divisions to be followed in the absence of stomatal lineages, and show how the distribution and rates of growth and cell division vary in the epidermal and subepidermal layers along the proximodistal and mediolateral axes and in time. We further compare these findings to those of wild-type leaves grown under similar conditions. Our results reveal spatiotemporal variation in both growth rates and cell properties, including cell sizes, shapes, and patterns of division. By developing an integrated model of growth and division, we show how these observations can be accounted for by a model in which core components of both growth and division are under spatiotemporal control. Varying parameters of this model illustrates how changes in organ size, cell size, and cell number are likely interdependent, providing a framework for evaluating growth and division mutants. Tracking cell vertices on the abaxial epidermis of spch seedlings imaged at about 12-h intervals allowed cells at a given developmental stage to be classified into those that would undergo division (competent to divide, green, Fig 1A), and those that did not divide for the remainder of the tracking period (black, Fig 1A). During the first time
interval imaged (Fig 1A, 0–14 h), division competence was restricted to the basal half of the leaf, with a distal limit of about 150 μm (all distances are measured relative to the petiole-lamina boundary, Fig 1). To visualise the fate of cells at the distal limit, we identified the first row of nondividing cells (orange) and displayed them in all subsequent images. During the following time intervals, the zone of competence extended together with growth of the tissue to a distance of about 300 μm, after which it remained at this position, while orange boundary cells continued to extend further through growth. Fewer competent cells were observed in the midline region at later stages. Thus, the competence zone shows variation along the proximodistal and mediolateral axes of the leaf, initially extending through growth to a distal limit of about 300 μm and disappearing earlier in the midline region. To monitor execution of division, we imaged spch leaves at shorter intervals (every 2 h). At early stages, cells executed division when they reached an area of about 150 μm2 (Fig 2A, 0–24 h). At later stages, cells in the proximal lamina (within 150 μm) continued to execute division at about this cell area (mean = 151 ± 6.5 μm2, Fig 2B), while those in the more distal lamina or in the midline region executed divisions at larger cell areas (mean = 203 ± 9.7 μm2 or 243.0 ± 22.4 μm2, respectively, Fig 2A, 2B and 2D). Cell cycle duration showed a similar pattern, being lowest within the proximal 150 μm of the lamina (mean = 13.9 ± 0.8 h) and higher distally (mean = 19.4 ± 1.8 h) or in the midline region (18.9 ± 2.1 h, Fig 2C and 2E). Within any given region, there was variation around both the area at time of division execution and the cell cycle duration (Fig 2F and 2G). For example, the area at execution of division within the proximal 150 μm of the lamina had a mean of about 150 μm2, with standard deviation of about 40 μm2 (Fig 2F). The same region had a cell cycle duration with a mean of about 14 h and a standard deviation of about 3 h. Thus, both the area at which cells execute division and cycle duration show variation around a mean, and the mean varies along the proximodistal and mediolateral axes of the leaf. These findings suggest that models in which either cell area at the time of division or cell cycle duration are fixed would be unable to account for the observed data. To determine how cell division competence and execution are related to leaf growth, we measured areal growth rates (relative elemental growth rates 25) for the different time intervals, using cell vertices as landmarks (Fig 1B). Areal growth rates varied along both the mediolateral and proximodistal axis of the leaf, similar to variations observed for competence and execution of division. The spatiotemporal variation in areal growth rate could be decomposed into growth rates in different orientations. Growth rates parallel to the midline showed a proximodistal gradient, decreasing towards the distal leaf tip (Fig 1C and S1A Fig). By contrast, mediolateral growth was highest in the lateral lamina and declined towards the midline, becoming very low there in later stages (Fig 1D and S1B Fig). The region of higher mediolateral growth may correspond to the marginal meristem 26. Regions of low mediolateral growth (i.e.
, the proximal midline) showed elongated cell shapes. Models for leaf growth therefore need to account not only for the spatiotemporal pattern of areal growth rates but also the pattern of anisotropy (differential growth in different orientations) and correlated patterns of cell shape. Cell size should reflect both growth and division: growth increases cell size while division reduces cell size. Cell periclinal areas were estimated from tracked vertices (Fig 1E). Segmenting a sample of cells in 3D showed that these cell areas were a good proxy for cell size, although factors such as leaf curvature introduced some errors (for quantifications see S5 Fig, and ‘Analysis of cell size using 3D segmentation’ in Materials and methods). At the first time point imaged, cell areas were about 100–200 μm2 throughout most of the leaf primordium (Fig 1E, left). Cells within the proximal 150 μm of the lamina remained small at later stages, reflecting continued divisions. In the proximal 150–300 μm of the lamina, cells were slightly larger, reflecting larger cell areas at division execution. Lamina cells distal to 300 μm progressively enlarged, reflecting the continued growth of these nondividing cells (Fig 1E and Fig 3A). Cells in the midline region were larger on average than those in the proximal lamina, reflecting execution of division at larger cell areas (Fig 1E and Fig 3C). Thus, noncompetent cells increase in area through growth, while those in the competence zone retain a smaller size, with the smallest cells being found in the most proximal 150 μm of the lateral lamina. Visual comparison of areal growth rates (Fig 2B) with cell sizes (Fig 2E) suggested that regions with higher growth rates had smaller cell sizes. Plotting areal growth rates against log cell area confirmed this impression, revealing a negative correlation between growth rate and cell size (Fig 4B). Thus, rapidly growing regions tend to undergo more divisions. This relationship is reflected in the pattern of division competence: mean areal growth rates of competent cells in the lamina were higher than those of noncompetent cells, particularly at early stages (Fig 3I). However, there was no fixed threshold growth rate above which cells were competent, and for the midline region there was no clear difference between growth rates of competent and noncompetent cells (Fig 3I). Plotting areal growth rates for competent and noncompetent cells showed considerable overlap (S6 Fig), with no obvious switch in growth rate when cells no longer divide (become noncompetent). Thus, high growth rate broadly correlates with division competence, but the relationship is not constant for different regions or times. To determine how the patterns and correlations observed for the epidermis compared with those in other tissues, we analysed growth and divisions in the subepidermis. The advantage of analysing an adjacent connected cell layer is that unless intercellular spaces become very large, the planar cellular growth rates will be very similar to those of the attached epidermis (because of tissue connectivity and lack of cell movement). Comparing the epidermal and subepidermal layers therefore provides a useful system for analysing division behaviours in a similar spatiotemporal growth context. Moreover, by using the spch mutant, one of the major distinctions in division properties between these layers (the presence of stomatal lineages in the epidermis) is eliminated. Divisions in the abaxial subepidermis were tracked by digitally removing the overlying epidermal signal (the distalmost subepidermal cells could not be clearly resolved). As with the epidermis, 3D segmentation showed that cell areas were a good proxy for cell size, although average cell thickness was greater (S11 Fig, see also ‘Analysis of cell size using 3D segmentation’ in Materials and methods). Unlike the epidermis,
intercellular spaces were observed for the subepidermis. As the tissue grew, subepidermal spaces grew and new spaces formed (Fig 5A–5D). Similar intercellular spaces were observed in subepidermal layers of wild-type leaves, showing they were not specific to spch mutants (S8 Fig). Vertices and intercellular spaces in the subepidermis broadly maintained their spatial relationships with the epidermal vertices (Fig 5C, 5E and 5F). Comparing the cellular growth rates in the plane for a patch of subepidermis with the adjacent epidermis showed that they were similar (S9 Fig), although the subepidermal rates were slightly lower because of the intercellular spaces. This correlation is expected, because unless the intercellular spaces become very large, the areal growth rates of the epidermal and subepidermal layers are necessarily similar. The most striking difference between subepidermal and epidermal datasets was the smaller size of the distal lamina cells of the subepidermis (compare Fig 6A with Fig 1E, and Fig 3A with Fig 3B). For the epidermis, these cells attain areas of about 1,000 μm2 at later stages, while for the subepidermis they remain below 500 μm2. This finding was consistent with the subepidermal division competence zone extending more distally (Fig 6B), reaching a distal limit of about 400 μm compared with 300 μm for the epidermis. A more distal limit for the subepidermis has also been observed for cell cycle gene expression in wild type 10. Moreover, at early stages, divisions occurred throughout the subepidermis rather than being largely proximal, as observed in the epidermis, further contributing to the smaller size of distal subepidermal cells (S10 Fig). Despite these differences in cell size between layers, subepidermal cell areal growth rates showed similar spatiotemporal patterns to those of the overlying epidermis, as expected because of tissue connectivity (compare Fig 6C with Fig 1B).
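The areal growth rates discussed here are derived from the displacement of tracked cell vertices between time points. Below is a minimal sketch of one standard formulation: cell area from vertex coordinates via the shoelace formula, and a relative elemental growth rate computed as ln(A2/A1)/Δt. The function names are illustrative, and the paper's actual pipeline may differ in detail.

```python
import math

def polygon_area(vertices):
    """Area of a cell outline from its (x, y) vertex coordinates
    (shoelace formula)."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def areal_growth_rate(verts_t1, verts_t2, dt_hours):
    """Relative elemental growth rate (per hour) for one cell tracked
    between two time points: ln(A2 / A1) / dt."""
    a1 = polygon_area(verts_t1)
    a2 = polygon_area(verts_t2)
    return math.log(a2 / a1) / dt_hours

# Example: a cell whose tracked vertices have moved so that its area
# doubles over 12 h has an areal growth rate of ln(2)/12 per hour.
```

Because vertex positions are shared between the epidermis and the attached subepidermis, this construction makes explicit why the planar growth rates of the two layers are necessarily similar.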
Consequently, correlations between growth rate and cell size were much lower for the subepidermis than for the epidermis (Fig 4B and 4C). This difference in the relationship between growth and cell size in different cell layers was confirmed through analysis of cell division competence. In the subepidermis, at early stages there was no clear difference between mean growth rates for competent and noncompetent cells (Fig 3J cyan, green), in contrast to what is observed in the epidermis (Fig 3I cyan, green), while at later stages noncompetent cells had a slightly lower growth rate (Fig 3J yellow, red). To determine how the patterns of growth and division observed in spch related to those in wild type, we imaged a line generated by crossing a spch mutant rescued by a functional SPCH protein fusion (pSPCH:SPCH-GFP) to wild type expressing the PIN3 auxin transporter (PIN3:PIN3-GFP), which marks cell membranes in the epidermis 23. The resulting line allows stomatal lineage divisions to be discriminated from non-stomatal divisions (see below) in a SPCH context. At early stages, wild-type and spch leaves were not readily distinguishable based on cell size (S12 Fig). However, by the time leaf primordia attained a width of about 150 μm, the number and size of cells differed dramatically. Cell areas in wild type were smaller in regions outside the midline region, compared with corresponding cells in spch (Fig 7A). Moreover, cell divisions in wild type were observed throughout the lamina that was amenable to tracking (Fig 7B, 0–12 h), rather than being largely proximal. Divisions were observed over the entire lamina for subsequent time intervals, including regions distal to 300 μm (Fig 7B, 12–57 h). These results indicate that SPCH can confer division competence in epidermal cells outside the proximal zone observed in spch mutants. To further clarify how SPCH influences cell division, we used SPCH-GFP signal to
classify wild-type cells into two types: (1) Stomatal lineage divisions, which include both amplifying divisions (cells express SPCH strongly around the time of division and retain expression in one of the daughter cells) (S1 Video, orange/yellow in Fig 7C) and guard mother cell divisions (SPCH expression is bright and diffuse during the first hours of the cycle, transiently switched on around the time of division, and then switched off in both daughters). (2) Non-stomatal divisions, in which SPCH expression is much weaker, or only lasts <2 h, and switches off in both daughter cells (S2 Video, light/dark green in Fig 7C). If cells with inactive SPCH behave in a similar way in wild-type or spch mutant contexts, we would expect non-stomatal divisions to show similar properties to divisions in the spch mutant. In the first time interval, non-stomatal divisions (green) were observed within the proximal 150 μm (Fig 7C, 0–12 h), similar to the extent of the competence zone in spch (Fig 1A, 0–14 h). The zone of non-stomatal divisions then extended to about 250 μm and became restricted to the midline region. After leaf width was greater than 0.45 mm, we did not observe further non-stomatal divisions in the midline region, similar to the situation in spch leaves at a comparable width (Fig 1A, 58–74 h, 0.48 mm). These results suggest that similar dynamics occur in the non-stomatal lineages of wild type and the spch mutant. To determine how SPCH modulates division, we analysed stomatal and non-stomatal divisions in the lamina. Considerable variation was observed for both the area at which cells divide (25–400 μm2) and cell cycle duration (8–50 h) (S13 Fig). The mean area at which cells execute division was greater for non-stomatal divisions (about 165 ± 28 μm2, ±1.96 × standard error) than stomatal divisions (about 80 ± 6 μm2) (S13 Fig). Similarly, cell cycle durations were longer for non-stomatal divisions (about 25 ± 3 h) compared with stomatal divisions (about 18 ± 1 h). These results suggest that in addition to conferring division competence, SPCH acts cell autonomously to promote division at smaller cell sizes and/or for shorter cell cycle durations. Given the alteration in cell sizes and division patterns in wild type compared to spch, we wondered if these may reflect alterations in growth rates. When grown on agar plates, spch mutant leaves grow more slowly than wild-type leaves (S14A Fig). The slower growth of spch could reflect physiological limitations caused by the lack of stomata, or an effect of cell size on growth (larger cells in spch causing slower growth). However, the tracking data and cell size analysis of spch and wild type described above were carried out on plants grown in a bio-imaging chamber in which nutrients were continually circulated around the leaves. Growth rates for wild type and spch leaves grown in these conditions were comparable for much of early development, and similar to those observed for wild type on plates (compare Fig 7D with Fig 1B, S14 Fig). These results suggest that the reduced growth rates of spch compared with wild type at early stages on plates likely reflect physiological impairment caused by a lack of stomata rather than differences in cell size. As a further test of this hypothesis, we grew fama (basic helix-loop-helix transcription factor bHLH097) mutants, as these lack stomata but still undergo many stomatal lineage divisions 27. We found that fama mutants attained a similar size to spch mutants on plates, consistent with the lack of stomata being the cause of reduced growth in these conditions (S14 Fig). Plots of cell area against growth rates of tracked leaves grown in the chamber showed that, for similar growth
rates , cells were about three times smaller in wild type compared with spch ( compare Fig 4A with Fig 4B ) ., Thus , the effects of SPCH on division can be uncoupled from effects on growth rate , at least at early stages of development ., At later stages ( after leaves were about 1 mm wide ) , spch growth in the bio-imaging chamber slowed down compared with wild type , and leaves attained a smaller final size ., This later difference in growth rate might be explained by physiological impairment of spch because of the lack of stomata , and/or by feedback of cell size on growth rates ., This change in later behaviour may reflect the major developmental and transcriptional transition that occurs after cell proliferation ceases 9 ., The above results reveal that patterns of growth rate , cell division , and cell size and shape exhibit several features in spch: ( 1 ) a proximal corridor of cell division competence , with an approximately fixed distal limit relative to the petiole-lamina boundary; ( 2 ) the distal limit is greater for subepidermal ( 400 μm ) than epidermal tissue ( 300 μm ) ; ( 3 ) a further proximal restriction of division competence in the epidermis at early stages that extends with growth until the distal limit of the corridor ( 300 μm ) is reached; ( 4 ) larger and narrower cells in the proximal midline region of the epidermis; ( 5 ) a proximodistal gradient in cell size in the epidermal lamina; ( 6 ) a negative correlation between cell size and growth rate that is stronger in the epidermis than subepidermis; ( 7 ) variation in both the size at which cells divide and cell cycle duration along both the proximodistal and mediolateral axes; and ( 8 ) variation in growth rates parallel or perpendicular to the leaf midline ., In wild-type plants , these patterns are further modulated by the expression of SPCH , which leads to division execution at smaller cell sizes and extension of competence , without affecting growth rates at early stages ., Thus , 
growth and division rates exhibit different relations in adjacent cell layers , even in spch , in which epidermal-specific stomatal lineages are eliminated , and division patterns can differ between genotypes ( wild type and spch ) without an associated change in growth rates ., These observations argue against spatiotemporal regulators acting solely on the execution of division , which then influences growth , as this would be expected to give conserved relations between division and growth ., For the same reason , they argue against a single-point-of-control model in which spatiotemporal regulators act solely on growth , which then secondarily influences division ., Instead , they suggest dual control , with spatiotemporal regulators acting on both growth and division components ., With dual control , growth and division may still interact through cross-dependencies , but spatiotemporal regulation does not operate exclusively on one or the other ., To determine how a hypothesis based on dual control may account for all the observations , we used computational modelling ., We focussed on the epidermal and subepidermal layers of the spch mutant , as these lack the complications of stomatal lineages ., For simplicity and clarity , spatiotemporal control was channelled through a limited set of components for growth and division ( Fig 8A ) ., There were two components for growth under spatiotemporal control: specified growth rates parallel and perpendicular to a proximodistal polarity field ( Kpar and Kper , respectively ) 16 ., Together with mechanical constraints of tissue connectivity , these specified growth components lead to a pattern of resultant growth and organ shape change 28 ., There were two components for cell division under spatiotemporal control: competence to divide ( CDIV ) , and a threshold area for division execution that varies around a mean ( Ā ) ., Controlling division execution by a threshold cell size ( Ā ) introduces a cross-dependency between 
growth and division , as cells need to grow to attain the local threshold size before they can divide ., The cross-dependency is indicated by the cyan arrow in Fig 8A , feeding information back from cell size ( which depends on both growth and division ) to division ., An alternative to using Ā as a component of division-control might be to use a mean cell cycle duration threshold ., However , this would bring in an expected correlation between high growth rates and large cell sizes ( for a given cell cycle duration , a faster-growing cell will become larger before cycle completion ) , which is the opposite trend of what is observed ., Spatiotemporal regulators of growth and division components can be of two types: those that become deformed together with the tissue as it grows ( fixed to the tissue ) and those that maintain their pattern to some extent despite deformation of the tissue by growth ( requiring mobile or diffusible factors ) 28 ., In the previously published growth model , regulatory factors were assumed , for simplicity , to deform with the tissue as it grows 16 ., These factors comprised a graded proximodistal factor ( PGRAD ) , a mediolateral factor ( MID ) , a factor distinguishing lamina from petiole ( LAM ) , and a timing factor ( LATE ) ( S15A and S15B Fig ) ., However , such factors cannot readily account for domains with limits that remain at a constant distance from the petiole-lamina boundary , such as the observed corridors for division competence ., This is because the boundary of a domain that is fixed to the tissue will extend with the tissue as it grows ., We therefore introduced a mobile factor , proximal mobile factor ( PMF ) , that was not fixed to the tissue to account for these behaviours ., This motivation is similar to that employed by others 11–13 ., PMF was generated at the petiole-lamina boundary and with appropriate diffusion and decay coefficients such that PMF initially filled the primordium and then showed a graded 
distribution as the primordium grew larger , maintaining a high concentration in the proximal region and decreasing towards the leaf tip ( S15C and S15D Fig ) ., This profile was maintained despite further growth , allowing thresholds to be used to define domains with relatively invariant distal limits ., Further details of the growth model are given in Materials and methods , and the resultant growth rates are shown in S16 Fig ( compare with Fig 1B and 1D ) ., Cells were incorporated by superimposing polygons on the initial tissue or canvas ( S15A Fig , right ) ., The sizes and geometries of these virtual cells ( v-cells ) were based on cells observed at corresponding stages in confocal images of leaf primordia 16 ., The vertices of the v-cells were anchored to the canvas and displaced with it during growth ., Cells divided according to Errera’s rule: the shortest wall passing through the centre of the v-cell 29 , with noise in positioning of this wall incorporated to capture variability ., V-cells were competent to divide if they expressed factor CDIV , and executed division when reaching a mean cell target area , Ā ., As the observed area at time of division was not invariant ( Fig 2F ) , we assumed the threshold area for division varied according to a standard deviation of σ = 0 . 2Ā around the mean ., CDIV and Ā are the two core components of division that are under the control of spatiotemporal regulators in the model ( Fig 8A , 8C and 8D ) ., Variation between epidermal and subepidermal patterns reflects different interactions controlling cell division ( interactions colour coded red and blue , respectively , in Fig 8C and 8D ) ., We first modelled cell divisions in the subepidermis , as this layer shows a more uniform pattern of cell sizes ( Fig 3B and Fig 6A ) ., Formation of intercellular spaces was simulated by replacing a random selection of cell vertices with small empty equilateral triangles , which grew at a rate of 2 . 
5% h−1 , an average estimated from the tracking data ., To account for the distribution of divisions and cell sizes , we assumed that v-cells were competent to divide ( express CDIV ) where PMF was above a threshold value ., This value resulted in the competence zone extending to a distal limit of about 400 μm ., To account for the proximodistal pattern of cell areas in the lamina ( Fig 3B and Fig 6A ) and larger cells in the midline ( Fig 3D and Fig 6A ) , we assumed that Ā was modulated by the levels of PMF , PGRAD , and MID ( Fig 8D , black and blue ) ., These interactions gave a pattern of average v-cell areas and division competence that broadly matched those observed ( compare Fig 8E and 8F with Fig 6A and 6B , and Fig 3F and 3H with 3B and 3D , S3 Video ) ., For the epidermis , the zone of division competence was initially in the proximal region of the primordium and then extended with the tissue as it grew ( Fig 1A ) ., We therefore hypothesised that in addition to division being promoted by PMF , there was a further requirement for a proximal factor that extended with the tissue as it grew ., We used PGRAD to achieve this additional level of control , assuming CDIV expression requires PGRAD to be above a threshold level ( Fig 8C , red and black ) ., V-cells with PGRAD below this threshold were not competent to divide , even in the presence of high PMF ., Thus , at early stages , when PMF was high throughout the primordium , the PGRAD requirement restricted competence to the proximal region of the leaf ( Fig 8H ) ., At later stages , as the PGRAD domain above the threshold extended beyond 300 μm , PMF became limiting , preventing CDIV from extending beyond about 300 μm ., To account for the earlier arrest of divisions in the midline region ( Fig 1A ) , CDIV was inhibited by MID when LATE reached a threshold value ( Fig 8C , red ) ., As well as CDIV being regulated , the spatiotemporal pattern of Ā was modulated by factors MID and PMF ( Fig 8D black ) ., 
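The division rule set out above — competence ( CDIV ) gated by a factor threshold , with execution at a target area drawn around a mean Ā with σ = 0 . 2Ā — can be sketched as a minimal simulation ., All names and parameter values below ( growth rate , PMF level , thresholds ) are illustrative assumptions , not the model's actual implementation:

```python
import math
import random

def draw_division_threshold(a_bar, rng, rel_sd=0.2):
    """Per-cell target area: mean threshold A-bar with sigma = 0.2 * A-bar."""
    return max(rng.gauss(a_bar, rel_sd * a_bar), 0.0)

def simulate_cell(area0, growth_rate, a_bar, pmf, pmf_threshold,
                  dt=1.0, t_max=60.0, seed=0):
    """Grow one cell exponentially; divide when competent and above target.

    Returns the list of (time, area) pairs at which divisions executed.
    """
    rng = random.Random(seed)
    competent = pmf >= pmf_threshold      # CDIV expressed only above threshold
    target = draw_division_threshold(a_bar, rng)
    area, t, divisions = area0, 0.0, []
    while t < t_max:
        area *= math.exp(growth_rate * dt)  # relative areal growth per hour
        t += dt
        if competent and area >= target:
            divisions.append((t, area))
            area /= 2.0                     # daughters inherit half the area
            target = draw_division_threshold(a_bar, rng)
    return divisions

# A cell above the PMF threshold divides repeatedly; one below never does.
divs = simulate_cell(80.0, 0.03, a_bar=160.0, pmf=1.0, pmf_threshold=0.5)
none = simulate_cell(80.0, 0.03, a_bar=160.0, pmf=0.1, pmf_threshold=0.5)
```

Because execution depends on reaching a size threshold rather than on elapsed time , faster-growing competent cells divide more often at similar sizes instead of becoming larger — the cross-dependency between growth and division referred to above .,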
With these assumptions , the resulting pattern of epidermal divisions and v-cell sizes broadly matched those observed experimentally for the epidermis ( compare Fig 8G with Fig 1E , S4 Video ) ., In particular , the model accounted for the observed increases in cell sizes with distance from the petiole-lamina boundary , which arise because of the proximal restrictions in competence ( compare Fig 3E and 3G with Fig 3A and 3C ) ., The model also accounted for the elongated cell shapes observed in the midline region , which arise through the arrest of division combined with low specified growth rate perpendicular to the polarity ., Moreover , the negative correlations between growth rates and cell size , not used in developing the model , were similar to those observed experimentally ( Fig 4B and 4D ) ., These correlations arise because both growth and division are promoted in proximal regions ., We also measured the cell topology generated by the epidermal model ., It has previously been shown that the frequency of six-sided neighbours observed experimentally for the spch leaf epidermis is very low compared with that for other plant and animal tissues and also with that generated by a previous implementation | Introduction, Results, Discussion, Materials and methods | A developing plant organ exhibits complex spatiotemporal patterns of growth , cell division , cell size , cell shape , and organ shape ., Explaining these patterns presents a challenge because of their dynamics and cross-correlations , which can make it difficult to disentangle causes from effects ., To address these problems , we used live imaging to determine the spatiotemporal patterns of leaf growth and division in different genetic and tissue contexts ., In the simplifying background of the speechless ( spch ) mutant , which lacks stomatal lineages , the epidermal cell layer exhibits defined patterns of division , cell size , cell shape , and growth along the proximodistal and mediolateral axes ., 
The patterns and correlations are distinct from those observed in the connected subepidermal layer and also different from the epidermal layer of wild type ., Through computational modelling we show that the results can be accounted for by a dual control model in which spatiotemporal control operates on both growth and cell division , with cross-connections between them ., The interactions between resulting growth and division patterns lead to dynamic distributions of cell sizes and shapes within a deforming leaf ., By modulating parameters of the model , we illustrate how phenotypes with correlated changes in cell size , cell number , and organ size may be generated ., The model thus provides an integrated view of growth and division that can act as a framework for further experimental study . | Organ morphogenesis involves two coordinated processes: growth of tissue and increase in cell number through cell division ., Both processes have been analysed individually in many systems and shown to exhibit complex patterns in space and time ., However , it is unclear how these patterns of growth and cell division are coordinated in a growing leaf that is undergoing shape changes ., We have addressed this problem using live imaging to track growth and cell division in the developing leaf of the mustard plant Arabidopsis thaliana ., Using subsequent computational modelling , we propose an integrated model of leaf growth and cell division , which generates dynamic distributions of cell size and shape in different tissue layers , closely matching those observed experimentally ., A key aspect of the model is dual control of spatiotemporal patterns of growth and cell division parameters ., By modulating parameters in the model , we illustrate how phenotypes may correlate with changes in cell size , cell number , and organ size . 
| skin, cell physiology, plant anatomy, medicine and health sciences, integumentary system, cell division analysis, cell cycle and cell division, cell processes, brassica, cell polarity, plant science, model organisms, network analysis, experimental organism systems, epidermis, bioassays and physiological analysis, seedlings, plants, research and analysis methods, arabidopsis thaliana, computer and information sciences, cell analysis, animal studies, regulatory networks, leaves, eukaryota, plant and algal models, cell biology, anatomy, biology and life sciences, organisms | null |
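The proximal mobile factor ( PMF ) in the leaf model above is generated at the petiole-lamina boundary and , through diffusion and decay , maintains a proximally high , distally falling profile as the leaf grows ., A minimal 1D source-diffusion-decay sketch illustrates why such a profile yields domain limits at a roughly fixed distance from the source ( the coefficients , grid , and function name here are assumptions for illustration , not taken from the paper ) :

```python
import math

def pmf_profile(length_um, dx, diffusion, decay, source, n_steps):
    """Relax dC/dt = D * C'' - k * C to steady state on a 1D grid,
    with C fixed at the proximal source and a no-flux distal end."""
    n = int(length_um / dx) + 1
    c = [0.0] * n
    dt = 0.2 * dx * dx / diffusion          # safely below the stability limit
    for _ in range(n_steps):
        c[0] = source                        # petiole-lamina boundary source
        new = c[:]
        for i in range(1, n - 1):
            lap = (c[i - 1] - 2.0 * c[i] + c[i + 1]) / (dx * dx)
            new[i] = c[i] + dt * (diffusion * lap - decay * c[i])
        new[-1] = new[-2]                    # no-flux boundary at the leaf tip
        c = new
    return c

D, k = 100.0, 0.01    # um^2/h and 1/h -- illustrative values only
profile = pmf_profile(600.0, 10.0, D, k, source=1.0, n_steps=5000)
# Steady state approaches C(x) = C0 * exp(-x / lam) with lam = sqrt(D / k),
# so a concentration threshold marks a near-constant distance from the source.
lam = math.sqrt(D / k)  # ~100 um decay length for these values
```

Because the profile re-equilibrates as the tissue deforms , a threshold on such a factor defines a competence corridor whose distal limit stays near a fixed distance from the petiole-lamina boundary , unlike a factor fixed to the tissue whose domain boundary extends with growth .,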
54 | journal.pcbi.1000090 | 2,008 | CSMET: Comparative Genomic Motif Detection via Multi-Resolution Phylogenetic Shadowing | We concern ourselves with uncovering motifs in eukaryotic cis-regulatory modules ( CRM ) from multiple evolutionarily related species , such as the members from the Drosophila clade ., Due to high degeneracy of motif instances , and complex motif organization within the CRMs , pattern-matching-based motif search in higher eukaryotes remains a difficult problem , even when representations such as the position weight matrices ( PWMs ) of the motifs are given ., Extant methods that operate on a single genome or simpler organisms such as yeast often yield a large number of false positives , especially when the sequence to be examined spans a long region ( e . g . , tens of thousands of bps ) beyond the basal promoters , where possible CRMs could be located ., As in gene finding , having orthologous sequences from multiple evolutionarily related taxa can potentially benefit motif detection because a reasonable alignment of these sequences could enhance the contrast of sequence conservation in motifs with respect to that of the non-motif regions , However , the alignment quality of non-coding regions is usually significantly worse than that of the coding regions , so that the aligned motif sequences are not reliably orthologous ., This is often unavoidable even for the best possible local alignment software because of the short lengths and weak conservation of TFBSs ., When applying a standard shadowing model on such alignments , motif instances aligned with non-orthologous sequences or gaps can be hard to identify due to low overall shadowing score of the aligned sequences ( Figure 1A ) ., In addition to the incomplete orthology due to imperfect alignment , a more serious concern comes from a legitimate uncertainty over the actual functional orthology of regions that are alignment-wise orthologous ., A number of recent investigations have shown 
that TFBS loss and gain are fairly common events during genome evolution 8 , 12 ., For example , Patel et al 13 showed that aligned “motif sites” in orthologous CRMs in the Drosophila clade may have varying functionality in different taxa ., Such cases usually occur in regions with reduced evolutionary constraints , such as regions where motifs are abundant , or near a duplication event ., The sequence dissimilarities of CRMs across taxa include indel events in the spacers , as well as gains and losses of binding sites for TFs such as the bcd-3 and hb-1 motifs in the evenskipped stripe 2 ( eve2 ) ( Figure 1B ) ., A recent statistical analysis of the Zeste binding sites in several Drosophila taxa also revealed existence of large-scale functional turnover 12 ., Nevertheless , the fact that sequence similarity is absent does not necessarily mean that the overall functional effect of the CRM as a whole is vastly different ., In fact , for the Drosophila clade , despite the substantial sequence dissimilarity in gap-gene CRMs such as eve2 , the expression of these gap genes shows similar spatio-temporal stripe patterns across the taxa 8 , 13 ., Although a clear understanding of the evolutionary dynamics underlying such inter- and intra-taxa diversity is still lacking , it is hypothesized that regulatory sequences such as TFBSs and CRMs may undergo adaptive evolution via stabilizing selections acting synergistically on different loci within the sequence elements 8 , 12 , which causes site evolution to be non-iid and non-isotropic across all taxa ., In such a scenario , it is crucial to be able to model the evolution of biological entities not only at the resolution of individual nucleotides , but also at more macroscopic levels , such as the functionality of whole sequence elements such as TFBSs over lineages ., To our knowledge , so far there have been few attempts along this line , especially in the context of motif detection ., The CSMET model presented in this paper 
intends to address this issue ., Orthology-based motif detection methods developed so far are mainly based on nucleotide-level conservation ., Some of the methods do not resort to a formal evolutionary model 14 , but are guided by either empirical conservation measures 15–17 , such as parsimonious substitution events or window-based nucleotide identity , or by empirical likelihood functions not explicitly modeling sequence evolution 4 , 18 , 19 ., The advantage of these non-phylogeny based methods lies in the simplicity of their design , and their non-reliance on strong evolutionary assumptions ., However , since they do not correspond to explicit evolutionary models , their utility is restricted to purely pattern search , and not for analytical tasks such as ancestral inference or evolutionary parameter estimation ., Some of these methods employ specialized heuristic search algorithms that are difficult to scale up to multiple species , or generalize to aligned sequences with high divergence ., Phylogenetic methods such as EMnEM 20 , MONKEY 21 , and our in-house implementation of PhyloHMM ( originally implemented in 1 for gene finding , but in our own version tailored for motif search ) explicitly adopt a complete and independent shadowing model at the nucleotide level ., These methods are all based on the assumption of homogeneity of functionality across orthologous nucleotides , which is not always true even among relatively closely related species ( e . g . 
, of divergence less than 50 mya in Drosophila ) ., Empirical estimation and simulation of turnover events is an emerging subject in the literature 12 , 22 , but to our knowledge , no explicit evolutionary model for functional turnover has been proposed and brought to bear in comparative genomic search of non-conserved motifs ., Thus our CSMET model represents an initial foray in this direction ., Closely related to our work , two recent algorithms , rMonkey 12—an extension over the MONKEY program , and PhyloGibbs 9—a Gibbs sampling based motif detection algorithm , can also explicitly account for differential functionality among orthologs , both using the technique of shuffling or reducing the input alignment to create well conserved local subalignments ., But in both methods , no explicit functional turnover model has been used to infer the turnover events ., Another recent program , PhyME 10 , partially addresses the incomplete orthology issue via a heuristic that allows motifs only present in a pre-chosen reference taxon to be also detectable , but it is not clear how to generalize this ability to motifs present in arbitrary combination of other taxa , and so far no well-founded evolutionary hypothesis and model is provided to explain the heuristic ., Non-homogeneous conservation due to selection across aligned sites has also been studied in DLESS 23 and PhastCons 24 , but unlike in CSMET , no explicit substitution model for lineage-specific functional evolution was used in these algorithms , and the HMM-based model employed there makes it computationally much more expensive than CSMET to systematically explore all possible evolutionary hypotheses ., A notable work in the context of protein classification proposed a phylogenomic model over protein functions , which employs a regression-like functional to model the evolution of protein functions represented as feature vectors along lineages in a complete phylogeny 25 , but such ideas have not been explored so 
far for comparative genomic motif search ., Various nucleotide substitution models , including the Jukes-Cantor 69 ( JC69 ) model 26 , and the Felsenstein 81 ( F81 ) model 27 , have been employed in current phylogenetic shadowing or footprinting algorithms ., PhyloGibbs and PhyME use an analogue of F81 proposed in 28 , which is one of the simplest models to handle arbitrary stationary distributions , necessary to model various specific PWMs of motifs ., Both PhyME and PhyloGibbs also offer an alternative to use a simplified star-phylogeny to replace the phylogenetic tree when dealing with a large number of taxa , which corresponds to an even simpler substitution process ., Our CSMET model differs from these existing methods in several important ways ., First , it uses a different evolutionary model based on a coupled-set of both functional and nucleotide substitution processes , rather than a single nucleotide substitution model to score every alignment block ., Second , it uses a more sophisticated and popular nucleotide substitution process based on the Felsenstein84 ( F84 ) model 29 , which captures the transition/transversion bias ., Third , it employs a hidden Markov model that explicitly models autocorrelation of evolutionary rates on successive sites in the genome ., Fourth , it uses an efficient deterministic inference algorithm that is linear to the length of the input sequence and either exponential ( under a full functional phylogeny ) or linear ( under a star-shaped functional phylogeny ) to the number of the aligned taxa , rather than the Monte Carlo or heuristic search algorithms that require long convergence times ., Essentially , CSMET is a context-dependent probabilistic graphical model that allows a single column in a multiple alignment to be modeled by multiple evolutionary trees conditioned on the functional specifications of each row ( i . e . 
, the functional identity of a substring in the corresponding taxon ) ( Figure 2 ) ., When conjoined with a hidden Markov model that auto-correlates the choices of different evolutionary rates on the phylogenetic trees at different sites , we have a stochastic generative model of phylogenetically related CRM sequences that allows both binding site turnover in arbitrary subsets of taxa , and coupling of evolutionary forces at different sites based on the motif organizations within CRMs ., Overall , CSMET offers an elegant and efficient way to take into consideration complex evolutionary mechanisms of regulatory sequences during motif detection ., When such a model is properly trained on annotated sequences , it can be used for comparative genomic motif search in all aligned taxa based on a posterior probabilistic inference algorithm ., This model can be also used for de novo motif finding as programs such as PhyloGibbs and PhyME , with a straightforward extension of the inference procedure that couples the training and prediction routines in an expectation-maximization ( EM ) iteration on unannotated sequence alignments ., In this paper , we focus on supervised motif search in higher eukaryotic genomes ., We compare CSMET with representative competing algorithms , including EMnEm , PhyloHMM , PhyloGibbs , and a mono-genomic baseline Stubb ( which uses an HMM on single species ) on both simulated data , and a pre-aligned Drosophila dataset containing 14 developmental CRMs for 11 aligned Drosophila species ., Annotations for motif occurrences in D . 
melanogaster of 5 gap-gene TFs - Bicoid , Caudal , Hunchback , Kruppel and Knirps - were obtained from the literature ., We show that CSMET outperforms the other methods on both synthetic and real data , and identifies a number of previously unknown occurrences of motifs within and near the study CRMs ., The CSMET program , the data used in this analysis , and the predicted TFBS in Drosophila sequences , are available for download at http://www . sailing . cs . cmu . edu/csmet/ ., At present , biologically validated orthologous motifs and CRMs across multiple taxa are extremely rare in the literature ., In most cases , motifs and CRMs are only known in some well-studied reference taxa such as the Drosophila melanogaster; and their orthologs in other species are deduced from multiple alignments of the corresponding regulatory sequences from these species according to the positions and PWMs of the “reference motifs” in the reference taxon ., This is a process that demands substantial manual curation and biological expertise; rarely are the outcomes from such analysis validated in vivo ( but see 8 for a few such validations in some selected Drosophila species where the transgenic platforms have been successfully developed ) ., At best , these real annotations would give us a limited number of true positives across taxa , but they are not suitable for a systematic performance evaluation based on precision and recall over true motif instances ., Thus we first compare CSMET with a carefully chosen collection of competing methods on simulated CRM sequences , where the motif profiles across all taxa are completely known ., We choose to compare CSMET with 3 representative algorithms for comparative genomic motif search , PhyloGibbs , EMnEM , PhyloHMM; and the program Stubb , which is specialized for motif search in eukaryotic CRMs , and in our paper , set to operate in mono-genomic mode ., The rationale for choosing these 4 benchmarks is detailed in the Material and Methods 
., We applied CSMET and competing methods to a multi-specific dataset of Drosophila early developmental CRMs and motifs compiled from the literature 38 ., However , in this situation , we score accuracy only on the motifs annotated in Drosophila melanogaster ( rather than in all taxa ) , because they are the only available gold-standard ., Upon concluding this section , we also report some interesting findings by CSMET of putative motifs , some of which only exist in other taxa and do not have known counterparts in melanogaster ., CSMET is a novel phylogenetic shadowing method that can model biological sequence evolution at both nucleotide level at each individual site , and functional level of a whole TFBS ., It offers a principled way of addressing the problem that can seriously compromise the performance of many extant conservation-based motif finding algorithms: motif turnover in aligned CRM sequences from different species , an evolutionary event that results in functional heterogeneity across aligned sequence entities and shatters the basis of conventional alignment scoring methods based on a single function-specific phylogeny ., CSMET defines a new evolution-based score that explicitly models functional substitution along the phylogeny that causes motif turnover , and nucleotide divergence of aligned sites in each taxa under possibly different function-specific phylogenies conditioning on the turnover status of the site in each taxon ., In principle , CSMET can be used to estimate the rate of turnover of different motifs , which can elucidate the history and dynamics of functional diversification of regulatory binding sites ., But we notice that experimentally validated multi-species CRM/TFBS annotations that support an unbiased estimate of turnover rates are yet to be generated , as currently almost all biologically validated motifs only exist in a small number of representative species in each clade of the tree of life , such as melanogaster in the 
Drosophila clade ., Manual annotation on CRM alignments , as we used in this paper , tends to bias the model toward conserved motifs ., Thus , at this time , the biological interpretation of evolutionary parameters on the functional phylogeny remains preliminary ., Nevertheless , these estimated parameters do offer important utility from a statistical and algorithmic point of view , by elegantly controlling the trade-off between two competing molecular substitution processes—that of the motif sequence and of the background sequence—at every aligned site across all taxa beyond what is offered in any existing motif evolution model ., Empirically , we find that such modelling is useful in motif detection ., On both synthetic data and 14 CRMs from 11 Drosophila taxa , we find that the CSMET performs competitively against the state-of-the-art comparative genomic motif finding algorithm , PhyloGibbs , and significantly outperforms other methods such as EMnEM , PhyloHMM and Stubb ., In particular , CSMET demonstrates superior performance in certain important scenarios , such as cases where aligned sequences display significant divergence and motif functionalities are apparently not conserved across taxa or over multiple adjacent sites ., We also find that both CSMET and PhyloGibbs significantly outperform Stubb when the latter is naively applied to sequences of all taxa without exploiting their evolutionary relationships ., Our results suggest that a careful exploration of various levels of biological sequence evolution can significantly improve the performance of comparative genomic motif detection ., Recently , some alignment-free methods 19 have emerged which search for conserved TFBS rich regions across species based on a common scoring function , e . g . 
, distribution of word frequencies ( which in some ways mirrors the PWM of a reference species ) ., One may ask , given perhaps in the future a perfect search algorithm ( in terms of only computational efficiency ) , do we still need explicit model-based methods such as CSMET ?, We believe that even if exhaustive search of arbitrary string patterns becomes possible , models such as CSMET still offer important advantage not only in terms of interpretability and evolutionary insight as discussed above , but possibly also in terms of performance because of the more plausible scoring schemes they use ., This is because it is impractical to obtain the PWM of a motif in species other than a few reference taxa , thus the scores of putative motif instances in species where their own versions of the PWM are not available can be highly inaccurate under the PWM from the reference species due to evolution of the PWM itself in these study species with respect to the PWM in the reference species ., The CSMET places the reference PWM only at the tree root as an equilibrium distribution; for the tree leaves where all study species are placed , the nucleotide substitution model along tree branches allows sequences in each species to be appropriately scored under a species-specific distribution that is different from the reference PWM , thereby increasing its sensitivity to species-specific instantiations of motifs ., A possible future direction for this work lies in developing better approximate inference techniques for posterior inference under the CSMET model , especially under the scenarios of studying sequences from a large clade with many taxa , and/or searching for multiple motifs simultaneously ., It is noteworthy that our methods can be readily extended for de novo motif detection , for which an EM or a Monte Carlo algorithm can be applied for model-estimation based on the maximum likelihood principle ., Currently we are exploring such extensions ., Also we intend to 
develop a semi-supervised training algorithm that does not need manual annotation of motifs in other species on the training CRM alignment, so that we can obtain a less biased estimate of the evolutionary parameters of the CSMET model. A problem with most of the extant motif finders, including the proposed CSMET, is that the length variation of aligned motifs (e.g., alignments with gaps) cannot be accommodated. In our model, while deletion events may be captured as gaps in the motif alignment, insertion events cannot be captured, as the length of the motif is fixed. This is because in a typical HMM sequence model the state transitions between sites within motifs are designed to be deterministic. Thus stochastically accommodating gaps (insertion events) within motifs is not feasible. Hence, some of the actual motifs missed by the competing algorithms were “gapped” motifs. These issues deserve further investigation. We use the Felsenstein 1984 model (F84) 29, which is similar to the Hasegawa–Kishino–Yano 1985 model (HKY85) 44 and widely used in the phylogenetic inference and footprinting literature 5, 29, for nucleotide substitution in our motif and background phylogenies. Formally, F84 is a five-parameter model, based on a stationary distribution π ≡ (πA, πT, πG, πC)′ (which constitutes three free parameters, as the equilibrium frequencies sum to 1) and the additional parameters κ and ι, which impose the transition/transversion bias. According to this model, the nucleotide-substitution probability from an internal node c to its descendent c′ along a tree branch of length b can be expressed as follows:

PN(j | i, b) = e^{−b(κ+ι)} δij + e^{−bκ}(1 − e^{−bι}) εij πj / Σk εik πk + (1 − e^{−bκ}) πj,    (3)

where i and j denote nucleotides, δij represents the Kronecker delta function, and εij is a function similar to the Kronecker delta function which is 1 if i and j are both pyrimidines or both purines, but 0 otherwise. The summation in the denominator concisely computes the purine frequency or the pyrimidine frequency. A more
intuitive parameterization for F84 involves the overall substitution rate per site μ and the transition/transversion ratio ρ, which can be easily estimated or specified. We can compute the transition matrix PN from μ and ρ using Equation 3 based on the relationship between (κ, ι) and (μ, ρ). To model functional turnover of aligned substrings along the functional phylogeny Tf, we additionally define a substitution process over two characters (0 and 1) corresponding to presence or absence of functionality. Here we use the single-parameter JC69 model 26 for functional turnover due to its simplicity and straightforward adaptability to an alphabet of size 2. The transition probability along a tree branch of length β (which represents the product of the substitution rate μ and the evolution time t, which are not identifiable independently) is defined by:

PF(j | i, β) = ½(1 + e^{−2β}) if i = j, and ½(1 − e^{−2β}) otherwise.    (4)

We estimate the evolutionary parameters from training data based on maximum likelihood; details are available in the Text S1. A complete phylogenetic tree T ≡ {τ, π, β, λ} with internal nodes {Vi; i = 1:K′} and leaf nodes {Vi; i = K′+1:K}, where K denotes the total number of nodes (i.e., current and ancestral species) instantiated in the tree and the node indexing follows a breadth-first traversal from the root, defines a joint probability distribution over all-node configurations (i.e., the nucleotide contents at an aligned site in all species instantiated in the tree), which can be written as the following product of nt-substitution probabilities along tree branches:

P(V1, …, VK | T) = P(V1) ∏_{i=2}^{K} PN(Vi | Vpa(i)),    (5)

where Vpa(i) denotes the parent node of node i in the tree, and the substitution probability PN(·) is defined by Equation 3. For each position l of the multiple alignment, computing the probability of the entire column, denoted by Al, of aligned nucleotides from species corresponding to the leaves of a phylogenetic tree T(l) defined on position l, i.e.
, P(Al | T(l)), where Al corresponds to an instantiation of the leaf nodes {Vi; i = K′+1:K}, takes exponential time if performed naively, since it involves the marginalization of all the internal nodes in the tree, i.e.,

P(Al | T(l)) = Σ_{V1, …, VK′} P(V1, …, VK | T(l)).    (6)

We use the Felsenstein pruning algorithm 30, which is a dynamic programming method that computes the probability of a leaf configuration under a tree from the bottom up. At each node of the tree, we store the probability of the subtree rooted at that node, for each possible nucleotide at that node. At the leaves, only the probability for the particular nucleotide instantiated in the corresponding taxon is non-zero, and for all the other nucleotides, it is zero. Unlike the naive algorithm, the pruning algorithm requires an amount of time that is proportional to the number of leaves in the tree. We use a simple extension of this algorithm to compute the probabilities of a partial alignment defined earlier under a marginal phylogeny, which is required in the coupled-pruning algorithm for CSMET, by considering only the leaves instantiated in (but not in) that is under a subtree T′(l) that forms the marginal phylogeny we are interested in. Specifically, let correspond to possible instantiations of the subset of nodes we need to marginalize out. Since we already know how to compute P(Al | T(l)) via marginalization over internal nodes, we simply further this marginalization over leaf nodes that correspond to taxa instantiated in, i.e.
, (7) where denotes the leaves instantiated in. This amounts to replacing the leaf-instantiation step, which was originally operated on all leaves in the Felsenstein pruning algorithm, by a node-summation step over those leaves in. In fact, it can be easily shown that this is equivalent to performing the Felsenstein pruning only on the partial tree T′(l) that directly shadows, which is a smaller tree than the original T(l), and only requires time. Under the CSMET model, to perform the forward-backward algorithm for either motif prediction or unsupervised model training, we need to compute the emission probability given each functional state at every alignment site. This is nontrivial because a CSMET is defined on an alignment block containing whole motifs across taxa rather than on a single alignment column. We adopt a “block-approximation” scheme, where the emission probability of each state at a sequence position, say, t, is defined on an alignment block of length L started at t, i.e., where At ≡ (A1(t), A2(t), …, AL(t)), and Al(t) denotes the lth column in an alignment block started from position t. The conditional likelihood of At given the nucleotide-evolutionary trees T and Tb coupled by the annotation tree Ta under a particular HMM state st is also hard to calculate directly, because the leaves of the two nucleotide trees are connected by the leaves of the annotation tree (Figure 2B). However, if the leaf-states of the annotation tree are known, the probability components coming from the two trees become conditionally independent and factor out (see Equation 2). Recall that for a motif of length L, the motif tree actually contains L site-specific trees, and the choice of these trees for every site in the same row (i.e.
, taxon), say, in the alignment block At, is coupled by a common annotation state. Hence, given an annotation vector Zt for all rows of At, we actually calculate the probability of two subsets of the rows given two subtrees (i.e., marginal phylogenies) of the original phylogenetic trees for motif and background, respectively (Figure 2B). The subset is constructed by simply stacking the DNA bases of those taxa for which the annotation variables indicate that they were generated from the motif tree. The subtree is constructed by simply retaining the set of nodes which correspond to the chosen subset, and the ancestors thereof. Similarly we have and. Hence, we obtain (8) The probability of a particular leaf-configuration of a tree, be it a partial or complete nucleotide tree, or an annotation tree, can be computed efficiently using the pruning algorithm. Thus for each configuration of zt, we can readily compute and. The block emission probability under CSMET can be expressed as: (9) where we use , , and to make explicit the dependence of the partial blocks and marginal trees on the functional indicator vector zt. We call this algorithm a coupled-pruning algorithm. Note that in this algorithm we need to sum over a total number of 2^M configurations of zt, where M is the total number of taxa (i.e., rows) in matrix At. It is possible to reduce the computational complexity using a full junction tree algorithm on CSMET, which will turn the graphical model underlying CSMET into a clique tree of width (i.e., maximum clique size) possibly smaller than M.
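As a concrete toy illustration of this exhaustive sum, the sketch below enumerates all 2^M functional configurations z_t of an M-row block and combines three likelihood terms in the spirit of Equation 9. The functions `annotation_prob`, `motif_prob`, and `background_prob` are hypothetical stand-ins with made-up numbers; in CSMET each would instead be computed by Felsenstein pruning on the annotation tree and on the marginal motif/background phylogenies.

```python
import itertools
import math

# Hypothetical stand-ins for the three pruning-based likelihood terms.
def annotation_prob(z, ancestral_state):
    # Toy i.i.d. turnover model: retention probability depends on the
    # ancestral functional state (0.8 if functional, 0.1 otherwise).
    p_keep = 0.8 if ancestral_state == 1 else 0.1
    return math.prod(p_keep if zi == 1 else 1.0 - p_keep for zi in z)

def motif_prob(rows):
    # Placeholder for P(motif rows | marginal motif tree).
    return 0.3 ** len(rows)

def background_prob(rows):
    # Placeholder for P(background rows | marginal background tree).
    return 0.25 ** len(rows)

def block_emission(block, ancestral_state):
    """Sum over all 2^M functional configurations z_t of the block."""
    M = len(block)  # number of taxa (rows) in the alignment block
    total = 0.0
    for z in itertools.product((0, 1), repeat=M):
        motif_rows = [row for row, zi in zip(block, z) if zi == 1]
        bg_rows = [row for row, zi in zip(block, z) if zi == 0]
        total += (annotation_prob(z, ancestral_state)
                  * motif_prob(motif_rows)
                  * background_prob(bg_rows))
    return total

block = ["ACGT", "ACGA", "TCGT"]  # M = 3 aligned rows, block length L = 4
print(block_emission(block, ancestral_state=1))
```

Because the toy terms above factorize per taxon, this example reduces to (0.8·0.3 + 0.2·0.25)^3; the real tree-structured terms do not factorize across taxa, which is exactly why the 2^M enumeration (or a junction-tree algorithm) is needed.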
But the junction-tree algorithm is complicated and breaks the modularity of the tree-likelihood calculation offered by the coupled-pruning algorithm. In typical comparative genomic analysis, we expect that M will not be prohibitively large, so our algorithm may still be a convenient and easy-to-implement alternative to the junction-tree algorithm. Also, this computation can be done off-line and in parallel. Given the emission probabilities for each ancestral functional state at each site, we use the forward-backward algorithm for posterior decoding of the sequence of ancestral functional states along the input CRM alignment of length N. The procedure is the same as in a standard HMM applied to a single sequence, except that now the emission probability at each site, say with index t, is defined by the CSMET probability over an alignment block At at that position under an ancestral functional state, rather than the conditional probability of a single nucleotide observed at position t as in the standard HMM. The complexity of this FB algorithm is O(Nk²), where k denotes the total number of functional states. In this paper, we only implemented a simple HMM with one type of motif allowed on either strand, so that k = 3.
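The decoding step described above can be sketched as a standard scaled forward-backward pass over precomputed block emissions. The numbers below (transition matrix, initial distribution, emissions) are made up for illustration; in CSMET, `emis[t, s]` would hold the coupled-pruning block probability of A_t under ancestral functional state s.

```python
import numpy as np

def posterior_decode(emis, trans, init):
    """Scaled forward-backward posterior decoding for a k-state HMM.

    emis[t, s]: emission probability of the observation block at position t
    under hidden state s; trans: k-by-k transition matrix; init: start dist.
    """
    N, k = emis.shape
    fwd = np.zeros((N, k))
    bwd = np.zeros((N, k))
    fwd[0] = init * emis[0]
    fwd[0] /= fwd[0].sum()  # rescale each step to avoid numerical underflow
    for t in range(1, N):
        fwd[t] = (fwd[t - 1] @ trans) * emis[t]
        fwd[t] /= fwd[t].sum()
    bwd[-1] = 1.0
    for t in range(N - 2, -1, -1):
        bwd[t] = trans @ (bwd[t + 1] * emis[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

k = 3  # background + one motif type on either strand
trans = np.array([[0.98, 0.01, 0.01],
                  [0.10, 0.90, 0.00],
                  [0.10, 0.00, 0.90]])
init = np.array([1.0, 0.0, 0.0])
# Made-up positive block emissions for an alignment of length N = 50.
emis = np.random.default_rng(0).random((50, k)) + 1e-3
post = posterior_decode(emis, trans, init)
print(post.shape)  # one posterior distribution over the k states per site
```

Each position costs O(k²) work in the two recursions, giving the O(Nk²) total stated above.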
We defer a more elaborate implementation that allows multiple motifs and encodes sophisticated CRM architecture as in LOGOS 33 to a future extension. Given an estimate of, we can infer the MAP estimates of, the functional annotation of every site t in every taxon i of the alignment. Specifically, the posterior probability of a column of functional states Zt under ancestral functional state can be expressed as: (10) Recall that in the coupled-pruning algorithm, we can readily compute all three conditional probability terms in the above equation. Performing posterior inference allows us to make motif predictions in two ways. A simple way is to look at blocks in the alignment at which the posterior inference produces ones, and predict those to be motifs. Alternatively, we can also use the inferred state of the alignment block together with the inferred ancestral state to compute a probability score (as a heuristic) based on the functional annotation tree. The score for the block is the sum of the probabilities of each block element being one. Given blocks of aligned substrings {At} containing motif instances in at least one of the aligned taxa, in principle we can estimate both the annotation tree Tf ≡ {α, τf, βf} and the motif trees Tm ≡ {θ, τm, βm, λm} based on a maximum likelihood principle. But since in our case most training CRM sequences do not have enough motif data to warrant correct estimation of the motif and function trees, we use the topology and branch lengths of a tree estimated by fastDNAml 36 from the entire CRM sequence alignment (containing both motif and background) as the common basis to build Tf and Tm. Specifically, fastDNAml estimates a maximum likelihood tree under the F84 model from the entire CRM alignment; we then scale the branch lengths of this tree to get the sets of branch lengths for Tf and Tm by doing a simple linear search (see below) for the scaling coefficient that maximizes the likelihood of
aligned motif sequences and aligned annotation sequences, under the Tm and Tf (scaled based on the coefficients), respectively. For simplicity, we estimate the background tree Tb ≡ {θ, τb, βb, λb} separately from only aligned background sequences that are completely orthologous (i.e., containing no motifs in any taxon). For both the motif and background phylogenies, the Felsenstein rate parameter μ for the corresponding nucleotide substitution models must also be estimated from the training data. More technically, note that for Tm the scaling coefficient β and the rate parameter μ form a product in the expression of the substitution probability (see Equation 3) and are not identifiable independently. Thus we only need to estimate the compound rate parameter μ′ = μβ. Ideally, the optimal value of μ′ should be obtained by performing a gradient descent on the likelihood under the corresponding phylogeny with respect to μ′. However, due to the phylogenetic tree probability terms involved in the likelihood computation, there is no closed-form expression for the gradient that can be evaluated for a specific value of the compound rate parameter to determine the direction to choose for optimization. Therefore, to find an approximation to the optimal value of μ′, we perform a simple linear search in the space of μ′ as follows: and are lower and upper bounds, respectively, on the space of μ′ that is searched, and are heuristically chosen based on observation. The step δ can be chosen to be as small as desired or is allowable | Introduction, Results, Discussion, Materials and Methods | Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at the nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of
aligned sequence entities ., As a result , comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge , especially in higher eukaryotes , where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms , which can be difficult to generalize and hard to interpret based on phylogenetic principles ., We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees , or CSMET , which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon ., The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides , but of the overall functionality ( e . g . , functional retention or loss ) of the aligned sequence segments over lineages ., Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome , CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection , and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover ., On both simulated and real Drosophila cis-regulatory modules , CSMET outperforms other state-of-the-art comparative genomic motif finders . 
| Functional turnover of transcription factor binding sites ( TFBSs ) , such as whole-motif loss or gain , are common events during genome evolution , and play a major role in shaping the genome and regulatory circuitry of contemporary species ., Conventional methods for searching non-conserved motifs across evolutionarily related species have little or no probabilistic machinery to explicitly model this important evolutionary process; therefore , they offer little insight into the mechanism and dynamics of TFBS turnover and have limited power in finding motif patterns shaped by such processes ., In this paper , we propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees , or CSMET , which uses a mathematically elegant and computationally efficient way to model biological sequence evolution at both nucleotide level at each individual site , and functional level of a whole TFBS ., CSMET offers the first principled way to take into consideration lineage-specific evolution of TFBSs and CRMs during motif detection , and offers a readily computable analytical form of the posterior distribution of motifs under TFBS turnover ., Its performance improves upon current state-of-the-art programs ., It represents an initial foray into the problem of statistical inference of functional evolution of TFBS , and offers a well-founded mathematical basis for the development of more realistic and informative models . | computational biology/evolutionary modeling, computational biology/comparative sequence analysis, computational biology/sequence motif analysis | null |
1,752 | journal.pcbi.1005724 | 2,017 | Quantifying the effects of antiangiogenic and chemotherapy drug combinations on drug delivery and treatment efficacy | The abnormal structure of tumor vasculature is one of the leading causes of insufficient and spatially heterogeneous drug delivery in solid tumors ., Tortuous and highly permeable tumor vessels along with the lack of a functional lymphatic system cause interstitial fluid pressure ( IFP ) to increase within tumors ., This elevated IFP results in the inefficient penetration of large drug particles into the tumor , whose primary transport mechanism is convection 1 , 2 ., The abnormalities in tumor vasculature are caused by dysregulation of angiogenesis ., Tumors initiate angiogenesis to form a vascular network that can provide oxygen and nutrients to sustain its rapid growth ., The production of VEGF , a growth factor that promotes angiogenesis , is triggered by the chronic hypoxic conditions that are prevalent in tumors ., Besides inducing angiogenesis , it leads to hyperpermeable blood vessels by enlarging pores and loosening the junctions between the endothelial cells that line the capillary wall 3 , 4 ., Subsequently , excessive fluid extravasation from these vessels results in a uniformly elevated IFP in the central region of tumor nearly reaching the levels of microvascular pressure ( MVP ) while at the tumor periphery , IFP falls to normal tissue levels 1 , 5 , 6 ., This common profile of IFP within tumors has been identified as a significant transport barrier to therapeutic agents and large molecules 1 , 7 ., When IFP approaches MVP , pressure gradients along vessels are diminished and blood flow stasis occurs , diminishing the functionality of existing vessels 8–10 ., Furthermore , uniformity of IFP in interior regions of tumors terminates the convection within tumor interstitium , hindering the transportation of large drugs 1 ., While the lack of a transvascular pressure gradient inhibits convective 
extravasation of drugs , sharp IFP gradient at tumor periphery creates an outward fluid flow from tumors that sweeps drugs away into normal tissues 1 ., Together these factors lead to the decreased drug exposure of tumor cells ., It has been revealed that the application of antiangiogenic agents can decrease vessel wall permeability and vessel density , transiently restoring some of the normal function and structure of abnormal tumor vessels 4 , 11 , 12 ., This process , which is called vascular normalization , is associated with a decrease in IFP and an increase in perfusion ., Therefore , this state of vasculature enables increased delivery of both drug and oxygen/nutrients to the targeted tumor cells 11 , 13 ., Normalization enhances convection of drug particles from vessels into tumor interstitium by restoring transvascular pressure gradients through IFP reduction 11 , 14 , 15 ., It has shown some favorable results in preclinical and clinical trials regarding the enhancement of the delivery of large therapeutics such as nanoparticles 14 , 16 , 17 ., Since nanoparticles benefit from the enhanced permeability and retention effect ( EPR ) , they are distributed in higher amounts to tumors relative to normal tissue ., Accumulation of nanoparticles in normal tissues is relatively small compared to the standard small molecule chemotherapies , leading to decreased toxicity and side effects ., However , the main transport mechanism for large drugs is convection in tumor microenvironment ., Hence , when IFP is high , extravasation via convection is inhibited ., Normalization due to its ability to decrease IFP seems promising in drug delivery for large drugs with its potential of restoring convective transportation ., In both clinical and preclinical studies , it has been shown that antiangiogenic drugs demonstrate anti-tumor effects in various cancer types 18 ., However , rather than using antiangiogenic agents alone , studies reveal that the combination of these agents 
with chemotherapy drugs yields favorable results with increased therapeutic activity ., In some clinical studies 19–21 , bevacizumab combined with conventional chemotherapy has increased the survival and response rates among patients with gastrointestinal cancer compared to bevacizumab alone ., This finding that antiangiogenic therapy in combination with chemotherapy can improve the efficacy of treatment has been observed for patients with various cancers including non-small cell lung cancer 22 , 23 , breast cancer 24–26 and ovarian cancer 27 ., However , it is evident that there is a transient time window for vessel normalization 28 , 29 ., In order to improve drug delivery , chemotherapy should coincide with this transient state of improved vessel integrity ., Prolonged or excessive application of antiangiogenic agents can reduce microvascular density to the point that drug delivery is compromised 30 ., Therefore , dosing and scheduling of combined therapy with antiangiogenic agents must be carefully tailored to augment the delivery and response to chemotherapy 12 ., It is suggested that rather than uninterrupted application , intermittent cycles which can create re-normalization should be employed for antiangiogenic agent scheduling 31 ., Due to the complex and interdisciplinary nature of the subject , there is a considerable amount of computational efforts on tumor vascularization and its consequences for the tumor microenvironment and drug delivery ., Development of vasculature and intravascular flow dynamics are studied comprehensively 32–37 and in many studies chemotherapy is given through the discrete vessel system in order to calculate drug delivery to capillaries and tumor 33 , 34 , 37–39 ., Mathematical models have included transvascular and interstitial delivery of drugs 37–39 ., In addition to that , Wu et al . 
added tumor response to chemotherapy by applying nanoparticles and evaluating the decrease in tumor radius during chemotherapy under different microenvironmental conditions 39. There are also some studies on the optimization of combination therapy in tumors 40. In studies by the groups of Urszula Ledzewicz and Heinz Schättler, changes in tumor volume after the administration of cytotoxic and antiangiogenic agents have been investigated by proposing a mathematical model and seeking optimal solutions for different treatment cases 41, 42. Compartment models have also been used to explore how antiangiogenic agents may assist chemotherapy agents in reducing the volume of drug-resistant tumors, and by using a bifurcation diagram it was shown that the co-administration of antiangiogenic and chemotherapy drugs can reduce tumor size more effectively than chemotherapy alone 43. Applications of chemotherapy drugs together with antiangiogenic agents have been studied by Panovska et al. to cut the supply of nutrients 44. Stephanou et al. showed that random pruning of vessels by antiangiogenic agents improves drug delivery, using 2-D and 3-D vessel networks 45. However, they did not associate this benefit with normalization of vasculature. Jain and colleagues laid out the general groundwork for the relations between vessel normalization and IFP by relating vessel properties and interstitial hydraulic conductivity to changes in the pressure profile due to normalization 15. The subject was further investigated by Wu et al., who built a 3-D model of angiogenesis and added intravascular flow to the computational framework 32. They observed slow blood flow within the tumors due to an almost constant MVP and elevated IFP profile. They showed the coupling between intravascular and transvascular flux. Kohandel et al.
showed that normalization enhances tumor response to chemotherapy and identified the most beneficial scheduling for combined therapy in terms of tumor response 46 ., The size range of nanoparticles that could benefit from normalization has also been investigated 16 ., In this study , following the continuous mathematical model developed by Kohandel et al . 46 which couples tumor growth and vasculature , we built a framework for tumor dynamics and its microenvironment including IFP ., We use this system to evaluate the improvement in nanoparticle delivery resulting from vessel normalization ., As the tumor grows , a homogeneous distribution of vessels is altered by the addition of new leaky vessels to the system , representing angiogenesis ., As a consequence of angiogenesis and the absence of lymph vessels , IFP starts to build up inside the tumor inhibiting the fluid exchange between vessels and tumor and inhibiting nanoparticle delivery ., Simulations give the distribution of the nanoparticles in the tumor in a time-dependent manner as they exit the vessels and are transported through interstitium ., The activity of the drugs on tumor cells is determined according to the results of experimental trials by Sengupta et al . 
47 ., We apply drugs in small doses given in subsequent bolus injections ., During drug therapy , both vessels and tumor respond dynamically ., After injections of antiangiogenic agents , a decrease in vessel density accompanies the changes in vessel transport parameters , initiating the normalized state ., Combining chemotherapy with applications of antiangiogenic agents , we are able to identify the benefits of a normalized state by observing the effects of different scheduling on IFP decrease , extravasation of drugs and tumor shrinkage ., We found that in adjuvant combination of drugs , IFP and vessel density decrease together resulting in an increase in the average extravasation of nanoparticles per unit area in the interior region of tumor ., In concurrent combination of drugs , IFP decrease is higher but vessel decrease is higher as well , creating a smaller enhancement in average extravasation per unit tumor area ., However , even though average extravasation is smaller in this case , we observe an increase in homogeneity in drug distribution ., Nanoparticles begin to extravasate even in the center of tumor through sparsely distributed vessels due to the sharp decrease in IFP ., Therefore normalization enabled the drugs to reach deeper regions of the tumor ., Following Kohandel et al . 
46, the Eqs (1) and (2) are used to model the spatio-temporal distribution of tumor cells and the heterogeneous tumor vasculature. In Eq (1), the first term models the diffusion of tumor cells, where Dn is the diffusion coefficient, and the second term describes the tumor growth rate, where nlim is the carrying capacity and r is the growth rate. In the absence of the third and fourth terms, Eq (1) has two fixed points: an unstable fixed point at n = 0, where there is no cell population, and a stable fixed point at n = nlim, where the population reaches its maximal density. The coupling terms αmn n(x, t) m(x, t) and dr n(x, t) d(x, t) indicate the interactions of tumor cells with the vasculature and the chemotherapy drug, respectively. Tumor cells proliferate at an increased rate αmn when they have vessels supplying them with nutrients, and tumor cells are eliminated at rate dr if the chemotherapy drug d(x, t) is present:

∂n(x, t)/∂t = Dn ∇²n(x, t) + r n (1 − n/nlim) + αmn n(x, t) m(x, t) − dr n(x, t) d(x, t).    (1)

The tumor vasculature network exhibits abnormal dynamics with tortuous and highly permeable vessels which are structurally and functionally different from normal vasculature. In order to create this heterogeneous structure, a coarse-grained model is used to produce islands of vessels. In Eq (2), the average blood vessel distribution is represented by m(x, t) and the equation is formulated to produce islands of vascularized space with the term m(x, t)(α + βm(x, t) + γm(x, t)²), which has two stable points, m = 1 and m = 0, corresponding to the presence and absence of vessels, respectively. The representation of tumor-induced angiogenesis is modified in this model by recruiting the terms αnm n(1 − n/nlim) m and βnm ∇·(m∇n). Here, the former attains positive values at the tumor periphery due to the low cell density, and in the central regions, when cell
density exceeds nlim, the term becomes negative, creating a behavior that resembles real tumors, which generally have high vascularization at the periphery and low vessel density in the center due to growth-induced stresses 48. The latter term leads the vessels that are produced in the periphery towards the tumor core. In this novel form, the parameters related to angiogenesis, βnm and αnm, are changed to 0.5 and 0.25, respectively. The remaining set of parameters related to tumor and vessel growth can be found in Kohandel et al. 46. Ar m(x, t) A(x, t) is the reaction of tumor vessels to the antiangiogenic agent A(x, t), which results in the elimination of vessels in the presence of the antiangiogenic agent:

∂m(x, t)/∂t = Dm ∇²m(x, t) + m(x, t)(α + βm(x, t) + γm(x, t)²) + βnm ∇·(m∇n) + αnm n(1 − n/nlim) m − Ar m(x, t) A(x, t).    (2)

For the initial configuration of tumor cells, a Gaussian distribution is assumed, while the initial vascular distribution is obtained by starting from a random, positively distributed initial condition of tumor vessels. Darcy’s law is used to describe the interstitial fluid flow within the tissue: u = −K∇P, where K is the hydraulic conductivity of the interstitium (mm²/s/mmHg) and P is the interstitial fluid pressure (IFP). For steady-state fluid flow, the continuity equation is:

∇·u = Γb − Γℓ,    (3)

where Γb (1/s) represents the supply of fluid from the blood vessels into the interstitial space and Γℓ (1/s) represents the fluid drainage from the interstitial space into the lymph vessels. Starling’s law is used to determine the source and sink terms:

Γb = λb m(x, t) [Pv − P(x, t) − σv(πc − πi)],    (4)
Γℓ = λℓ P(x, t).    (5)

The parameters in these equations are the hydraulic conductivities of the blood vessels λb and the lymphatics λℓ, the vascular pressure Pv, the interstitial fluid pressure P and the osmotic
reflection coefficient σv. The capillary and interstitial oncotic pressures are denoted by πc and πi, respectively. The hydraulic conductivities of the blood and lymph vessels are related to the hydraulic conductivity of the vessel wall (Lp) and the vessel surface density (S/V) through the relation λb,ℓ = Lp S/V. The osmotic pressure contribution of the lymph vessels is neglected because the lymphatics are highly permeable. Also, the pressure inside the lymphatics is taken to be 0 mm Hg 49. By substituting Darcy’s law and Starling’s law into the continuity equation, we obtain the equation for the IFP in a solid tumor: −K∇²P(x, t) = λb m(x, t)[Pv − P(x, t) − σv(πc − πi)] − λℓ P(x, t). (6) The pressure is initially taken to be the normal tissue value Pv, and the initial pressure profile is set from the solution of the above equation with the initial condition for the tumor vasculature. The boundary condition ensures that the pressure reduces to the normal value Pv in host tissue. For the transport of the antiangiogenic agent A(x, t), a diffusion equation is used: ∂A(x, t)/∂t = DA ∇²A(x, t) + λA m(x, t)(Av − A(x, t)) − Γℓ A(x, t) − kA A(x, t), (7) where DA is the diffusion coefficient of the antiangiogenic agent in tissue, λA is its transvascular diffusion coefficient, Av is its plasma concentration and kA is its decay rate. The terms on the right-hand side represent the diffusion of the antiangiogenic agent in the interstitium, diffusion through the vessel walls, drainage into the lymph vessels and decay, respectively. We consider liposomal delivery vehicles for the chemotherapy drug, with concentration denoted by d(x, t). Since they are relatively large (∼100 nm), a convection-diffusion equation is used for the transport of these drug molecules: ∂d(x, t)/∂t = Dd ∇²d(x, t) + ∇·(kE d(x, t) K∇P) + Γb(1 − σd) dv − Γℓ d(x, t) − dr d(x, t) n(x, t) − kd d(x, t), (8) where Dd is the diffusion coefficient of the drug in the tissue, kE is the retardation coefficient for interstitial convection, dv is the plasma drug concentration, σd is the solvent drag reflection coefficient, dr is the rate of drug elimination as a result of reaction with tumor cells and kd is the decay rate of the drug. The terms on the right-hand side represent the diffusion and convection of the drug in the interstitium, convection of the drug through the vessel walls, drainage of the drug into the lymphatics, consumption of the drug through tumor cell interaction and decay of the drug, respectively. Diffusion of the drug across the blood vessel walls is assumed to be negligible, since transvascular transport of large drugs is convection-dominated. Since the time scale of tumor growth is much larger than the time scale for the transport and distribution of the drug molecules, both the antiangiogenic agent and the chemotherapy drug equations are solved in steady state, i.e.
∂d(x, t)/∂t = ∂A(x, t)/∂t = 0. Both drugs are administered to the plasma as bolus injections, each administration following an exponential decay: Av(t) = A0 exp(−t/t1/2,A), (9) dv(t) = d0 exp(−t/t1/2,d), (10) where A0, d0 and t1/2,A, t1/2,d denote the peak plasma concentrations and the plasma half-lives of the antiangiogenic agent and the chemotherapy drug, respectively. No-flux boundary conditions are used for the antiangiogenic agent and the chemotherapy drug. Parameters related to the transport of interstitial fluid and the transport of liposomes and antiangiogenic agents are listed in Tables 1 and 2, respectively. Some of the effective parameters in the equations above change dynamically to mimic the changes in the tumor and its microenvironment. As the tumor grows, lymph vessels are diminished to ensure that there are no lymph vessels inside the tumor. Without the presence of a tumor, vessel density can increase up to a specific value (the dimensionless value of 1). A vessel density greater than 1 implies that the excess vessels were produced by angiogenesis and are leaky, so their hydraulic conductivity is increased to the levels observed in tumors. During antiangiogenic treatment, vessel density decreases, and when it falls below 1, normalization occurs and the hydraulic conductivity returns to normal tissue levels. We started the simulations with a small tumor (0.2 mm radius) and left it to grow for 30 days to an approximate radius of 13.
5 mm. Vessels, which were initially set as randomly distributed islands in the computational domain, evolved into a heterogeneous state throughout the simulations due to the presence of tumor cells (Fig 1, vessel density). As the tumor grows, vessel islands become sparse in the interior region, but their density increases by angiogenesis and they become leaky. By the end of the simulation, the leakiness of the tumor vessels and the lack of lymphatic drainage inside the tumor cause elevated pressure in the interior region of the tumor, very similar to that suggested in the literature 1, 5 (Fig 1, IFP/Pe). We experimented with various drug regimens. To illustrate the improvement in drug delivery, we designed the cases given in Fig 2. Dimensionless dose values are fixed in order to replicate the treatment response observed in 47. Antiangiogenic treatment is adjusted such that at the end of the administrations there is approximately a 50% decrease in MVD (microvessel density) inside the tumor. A fixed chemotherapy drug dose is administered on days 23, 25 and 27, while we vary the day on which antiangiogenic agent administration starts (days 15, 17, 19, 21 and 23) and continue to give the agents every other day in 4 or 5 pulses. We decrease the dose of antiangiogenic agents throughout the therapy because a better response in drug delivery is obtained in this way in our simulations. We present here four cases: antiangiogenic agents alone starting on day 23, chemotherapy drug alone on day 23, neoadjuvant therapy with antiangiogenic agents on day 19 and chemotherapy drug on day 23, and finally concurrent therapy with both drugs starting on day 23. The greatest benefit in terms of the amount of drug that extravasates in the interior parts of the tumor is obtained when the antiangiogenic treatment starts on day 19 (case-3 in Fig 2). As expected, antiangiogenic agents do not have a profound effect on tumor cell density when they are applied alone (Fig 3,
case-1). In all cases, we observed greater drug extravasation near the tumor rim due to the decreasing IFP in that region (Fig 3). Fluid flow from the vessels into the tumor is poor in the interior region for case-2, but it is enhanced in the same region in case-3 and case-4. The main reason for this change is the introduction of a pressure gradient in the tumor center, which restores drug convection. Therefore, in both case-3 and case-4, tumor cell density is decreased in the interior region (Figs 3 and 4b) as a consequence of the increased drug extravasation in the interior region of the tumor. We calculate the spatial average of cell density and IFP at each time step. The average cell density is calculated as ∫∫Aint n(x, y, t) dx dy / ∫∫Aint dx dy (11) over the area Aint, whose boundary is set by the condition n(x, y, t) > 1 and which represents the interior region of the tumor (corresponding to r < 6 mm for a tumor of radius 10 mm). The average IFP is calculated as ∫∫A P(x, y, t) dx dy / ∫∫A dx dy (12) over the area A, whose boundary is set by the condition n(x, y, t) > 0.
1 which represents the value over whole tumor ., When we evaluate average pressure over the entire area of the tumor , we observe a synergistic effect in reducing pressure arising from the combined application of antiangiogenic agent and chemotherapy which can be seen in Fig 4a , especially for case-4 ., This synergistic effect also exhibits itself in tumor cell density in a less pronounced manner that can be observed from Fig 4b ., This indicates improved combination treatment efficacy as an indirect result of decreasing IFP ., According to our results , drug extravasation from vessels in the interior region of the tumor is nearly doubled for combination cases ( Fig 5a , case-3 and case-4 compared to case-2 ) ., However , this improvement is not directly reflected on drug exposure due to reduced vessel density by antiangiogenic agents ., Total drug exposure of unit area in tumor during treatment only improves approximately 20–25% ., IFP during the applications of chemotherapy drug was the lowest for concurrent therapy ( case-4 ) ., However , regarding tumor regression adjuvant therapy ( case-3 ) performed better , agreeing with the results of Kohandel et al . 
46 ., Even though decrease in vessel density and leakiness cuts off the supply of drugs , the decrease in IFP appearing for the same reasons seems to compensate in the interior region of tumor , resulting in better drug extravasation ., When two drugs are given closer temporally , the resulting IFP decrease is maximized ., This enables the convective extravasation of nanoparticles deep into tumors to places that are not exposed to drugs without combination therapy ., In order to evaluate the effect of chemotherapy drugs that target tumor cell proliferation , we modified Eq 1 such that the chemotherapy drugs would directly act on tumor growth ., The terms responsible for tumor growth ( 2nd and 3rd terms in the right-hand side of Eq 1 ) are multiplied by ( 1 − d ( x , t ) /dmax ) where dmax is maximum drug concentration that extravasated inside the tumor ., In this scenario , small changes are seen in tumor cell densities between combination therapy and chemotherapy alone ., However , we observe that in this form , extravasation of drugs is also increased in the central region as seen in Fig 6 , implying that normalization is also beneficial in this scenario ., Using a mathematical model , we assess whether antiangiogenic therapy could increase liposome delivery due to normalization of tumor vessels ., In order to do that , we first created a dynamic vessel structure that exhibits properties of tumor vessels created by angiogenesis as well as inherent vessels in the tissue ., As the tumor grows , vessels in the central region begin to disappear due to increased tumor cell density in that region ., Angiogenesis occurs in the tumor creating additional leaky vessels ., The emergent vessel density is consistent with that observed in 59 , with decreasing density towards the tumor center along with randomly appearing clusters of vessels ., IFP is found to be elevated throughout the tumor up to the levels of MVP and decreases sharply around the tumor rim as it is observed 
in various studies in the literature 1, 5, 6. We apply antiangiogenic agents in various regimens combined with chemotherapy and focus on large drugs (liposomes) whose delivery mainly depends on convection. As a result of the decrease in vessel density and leakiness due to the antiangiogenic activity, we expect a decrease in pressure, which brings about a higher pressure difference between tumor and vessels. Transvascular convection depends on this pressure difference, on the hydraulic conductivity and on the density of vessels per unit area. Since antiangiogenic agents decrease hydraulic conductivity (i.e., leakiness) and vessel density, thereby cutting the supply of drugs, the resulting increase in the pressure difference should compensate for these effects, restoring extravasation in the remaining vessels. In all simulations, liposome extravasation predominantly occurs in the tumor periphery due to the low IFP levels there, hence drugs preferentially accumulate in this area. Our result is confirmed by experimental studies of drug distribution using large drugs such as micelles 60, 61, nanoprobes 62 and liposomes 59, 63–66, in which peripheral accumulation is observed. As the time between the applications of antiangiogenic agents and liposomes becomes shorter, the resulting decrease in IFP is maximized. This enables the convective extravasation of nanoparticles deep into tumors, to places that could not previously be exposed to drugs, and liposome extravasation begins to appear in the central region. However, this does not consistently bring about maximum accumulation of liposomes at all times. There is a trade-off between total drug accumulation and how deeply the drug can penetrate into the tumor. In our study, we find a balance between these two situations. It also shows that IFP and drug accumulation are not always correlated; rather, the maximum accumulation is achieved through the complex interplay between IFP, vessel density and leakiness.
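The qualitative IFP profile invoked here, elevated in the tumor core and dropping sharply across the rim, can be reproduced with a minimal one-dimensional finite-difference sketch of the steady-state balance in Eq (6). All numerical values below are hypothetical placeholders rather than the paper's parameters, the oncotic terms are dropped for brevity, and lymphatic drainage is restricted to host tissue:

```python
import numpy as np

# Minimal 1-D sketch of the steady-state IFP balance of Eq (6),
#   -K P'' = lam_b * m * (Pv - P) - lam_l * P,
# with oncotic terms dropped and m = 1 everywhere. Values are illustrative.
N, dx = 201, 0.1                          # grid points, spacing (mm)
x = np.arange(N) * dx
K, lam_b, Pv = 0.01, 1.0, 1.0             # conductivity, vessel exchange, vascular pressure
tumor = np.abs(x - x.mean()) < 5.0        # 10 mm-wide tumor in the centre
lam_l = np.where(tumor, 0.0, 10.0)        # lymphatic drainage only in host tissue

# Finite differences give a tridiagonal system A P = b; far-field P = 0.
off = np.full(N - 1, -K / dx**2)
A = np.diag(2 * K / dx**2 + lam_b + lam_l) + np.diag(off, 1) + np.diag(off, -1)
b = np.full(N, lam_b * Pv)
A[0, :] = A[-1, :] = 0.0                  # Dirichlet rows at the domain ends
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0
P = np.linalg.solve(A, b)                 # plateaus near Pv inside, low in host
```

With these choices the solved profile plateaus near Pv inside the tumor (no lymphatic sink) and falls to a low host-tissue value within a thin boundary layer at the rim, matching the qualitative picture described above.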
A recent study 63 also supports this view; in their mouse experiments, the authors point out that IFP is correlated with perfusion, perfusion is correlated with accumulation, and the relationship between IFP and liposome accumulation is limited. In another significant study, tumor-bearing animals were subjected to combination therapy with liposomes and the antiangiogenic agent pazopanib in order to evaluate the effect of normalization by imaging the drug distribution 65. As a result of the decrease in MVP, they also observed a decrease in IFP. Similar to our results, IFP was not the determinant of drug accumulation in their work. They found that the decreased leakiness of vessels inhibits delivery even though there is an IFP decrease as a result of antiangiogenic therapy. They collected data for a single time point and observed a decrease in Doxil penetration under combination therapy. They also point out that functional measures of normalization may not occur simultaneously, which is also the case in our study. Throughout the combination therapy, we likewise observe periods where drug extravasation is limited and others where it is improved. They found vessel permeability to be the limiting factor in their study; however, MVD 67 and tumor blood flow and blood volume 68 are also determinants of large-drug accumulation. This shows that these measures of normalization are tumor-type dependent, and even within the same tumor they are dynamic, which leads to variation in drug distribution. Among many different schedules, most of our trials did not show improvement in drug accumulation. We see that the dose of antiangiogenic agents should be carefully determined to ensure any delivery benefit. As stated by 30, when we apply a large dose of antiangiogenic agents, a significant IFP decrease is observed, but the decrease in vessel permeability and the lack of vessel density lead to impaired liposome extravasation. At the other
extreme, when we give small amounts of antiangiogenic agents, the IFP decrease is not sufficient to significantly improve liposome extravasation. In this model, intravascular flow is approximated as uniform in order to focus on the transvascular delivery benefit of normalization. Due to their abnormal vasculature, tumors are known to have impaired blood perfusion 69, caused by the simultaneous presence of functional and non-functional vessels. In this work, we simulate the structural normalization of vessels without considering functional normalization, which is associated with intravascular flow and results in increased perfusion 30. Vessels within the tumor in this model have uniform functionality in terms of supplying blood flow. Hence, by decreasing vessel density in the microenvironment through antiangiogenic activity, we are decreasing blood perfusion. By contrast, normalization is expected to enhance intravascular flow by decreasing pore size, which restores intravascular pressure gradients, and by pruning non-functional vessels that interrupt circulation. Therefore, normalization brings about improved blood perfusion, whereas here we decrease perfusion and improve delivery only through improved convective extravasation due to decreased IFP. In our simulations, the delivery benefit is thus underestimated, since we decrease blood perfusion as part of the antiangiogenic activity. In 65, it was observed that the MVD decrease did not change liposome accumulation, because the eliminated vessels are thought to be the nonfunctional ones. In our previous study, we constructed a spherical tumor with uniform vessel density to investigate the benefit from normalization therapy, and the results showed increased delivery in the interior regions of tumors of certain sizes 70. In animal studies, it has been shown that the bulk accumulation of liposomes is not representative of efficacy, since it is not informative about the drug
accumulation within specific regions of tumors 67, 71, and heterogeneous drug accumulation may result in tumor repopulation 72. Therefore, it is important to understand the factors that yield heterogeneous accumulation and to avoid them in order to generate effective treatments. According to our results, normalization should be most useful when administering targeted therapies that use large drugs, since it can provide simultaneous access to both the tumor rim and the center. The dose of chemotherapy should be increased in order to ensure similar drug exposure despite the sparser vessel density caused by the antiangiogenic activity. This is why targeted therapies are better suited to seize the benefits of normalization, as they can be applied at greater doses without harming healthy tissue. When convective extravasation is restored in the central region, drugs can immediately reach the tumor center and increase the probability of treatment success and tumor eradication .
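The region averages of Eqs (11) and (12) reduce, on a discrete grid, to masked integrals: sum the field over the cells selected by a density threshold and divide by that region's area. A small sketch, where the function name and the toy grid values are hypothetical:

```python
import numpy as np

def region_average(field, mask, dx, dy):
    """Discrete analogue of Eqs (11)-(12): integrate `field` over the
    region selected by `mask` and divide by that region's area."""
    area = mask.sum() * dx * dy
    return (field * mask).sum() * dx * dy / area

# Toy example: average cell density over the 'interior' region n > 1,
# the boundary condition used in Eq (11).
n = np.array([[0.5, 1.2, 0.8],
              [1.4, 2.0, 1.6],
              [0.9, 1.1, 0.7]])
interior = n > 1.0
avg = region_average(n, interior, dx=0.1, dy=0.1)
```

The same helper evaluates Eq (12) by passing the pressure field with the whole-tumor mask n > 0.1 instead of the interior mask.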
| Introduction, Methods, Results, Discussion | Tumor-induced angiogenesis leads to the development of leaky tumor vessels devoid of structural and morphological integrity ., Due to angiogenesis , elevated interstitial fluid pressure ( IFP ) and low blood perfusion emerge as common properties of the tumor microenvironment that act as barriers for drug delivery ., In order to overcome these barriers , normalization of vasculature is considered to be a viable option ., However , insight is needed into the phenomenon of normalization and in which conditions it can realize its promise ., In order to explore the effect of microenvironmental conditions and drug scheduling on normalization benefit , we build a mathematical model that incorporates tumor growth , angiogenesis and IFP ., We administer various theoretical combinations of antiangiogenic agents and cytotoxic nanoparticles through heterogeneous vasculature that displays a similar morphology to tumor vasculature ., We observe differences in drug extravasation that depend on the scheduling of combined therapy; for concurrent therapy , total drug extravasation is increased but in adjuvant therapy , drugs can penetrate into deeper regions of tumor . 
| Tumor vessels being very different from their normal counterparts are leaky and lack organization that sustains blood circulation ., As a result , insufficient blood supply and high fluid pressure begin to appear inside the tumor which leads to a reduced delivery of drugs within the tumor , especially in tumor center ., A treatment strategy that utilizes anti-vascular drugs is observed to revert these alterations in tumor vessels , making them more normal ., This approach is suggested to improve drug delivery by enhancing physical transport of drugs ., In this paper , we build a mathematical model to simulate tumor and vessel growth as well as fluid pressure inside the tumor ., This framework enables us to simulate drug treatment scenarios on tumors ., We use this model to find whether the delivery of the chemotherapy drugs is enhanced by application of anti-vascular drugs by making vessels more normal ., Our simulations show that anti-vascular drug not only enhances the amount of drugs that is released into tumor tissue , but also enhances drug distribution enabling drug release in the central regions of tumor . | medicine and health sciences, vesicles, cardiovascular physiology, engineering and technology, cancer treatment, clinical oncology, drugs, chemotherapeutic agents, oncology, angiogenesis, developmental biology, clinical medicine, pharmaceutics, nanoparticles, nanotechnology, oncology agents, pharmacology, cellular structures and organelles, cancer chemotherapy, liposomes, drug delivery, chemotherapy, cell biology, physiology, biology and life sciences, drug therapy, combination chemotherapy | null |
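For completeness, the bolus administration model of Eqs (9) and (10) above, in which every injection contributes a peak plasma concentration that then decays exponentially, can be sketched as follows. The schedule and dose values are hypothetical, and note that the paper's form uses the half-life directly as the decay time constant:

```python
import math

def plasma_concentration(t, schedule, t_half):
    """Superposed bolus injections, cf. Eqs (9)-(10): each dose decays as
    exp(-(t - t0)/t_half) after its administration time t0.
    `schedule` lists (administration_day, peak_concentration) pairs."""
    return sum(c0 * math.exp(-(t - t0) / t_half)
               for t0, c0 in schedule if t >= t0)

# Hypothetical chemotherapy schedule on days 23, 25 and 27 (cf. Fig 2)
chemo = [(23.0, 1.0), (25.0, 1.0), (27.0, 1.0)]
```

Because the doses superpose linearly, the residue of earlier injections simply adds to each new peak, which is how repeated administrations raise the trough concentration over the course of a schedule.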
1,292 | journal.pcbi.1000929 | 2,010 | Instantaneous Non-Linear Processing by Pulse-Coupled Threshold Units | Understanding the dynamics of single neurons , recurrent networks of neurons , and spike-timing dependent synaptic plasticity requires the quantification of how a single neuron transfers synaptic input into outgoing spiking activity ., If the incoming activity has a slowly varying or constant rate , the membrane potential distribution of the neuron is quasi stationary and its steady state properties characterize how the input is mapped to the output rate ., For fast transients in the input , time-dependent neural dynamics gains importance ., The integrate-and-fire neuron model 1 can efficiently be simulated 2 , 3 and well approximates the properties of mammalian neurons 4–6 and more detailed models 7 ., It captures the gross features of neural dynamics: The membrane potential is driven by synaptic impulses , each of which causes a small deflection that in the absence of further input relaxes back to a resting level ., If the potential reaches a threshold , the neuron emits an action potential and the membrane potential is reset , mimicking the after-hyperpolarization ., The analytical treatment of the threshold process is hampered by the pulsed nature of the input ., A frequently applied approximation treats synaptic inputs in the diffusion limit , in which postsynaptic potentials are vanishingly small while their rate of arrival is high ., In this limit , the summed input can be replaced by a Gaussian white noise current , which enables the application of Fokker-Planck theory 8 , 9 ., For this approximation the stationary membrane potential distribution and the firing rate are known exactly 8 , 10 , 11 ., The important effect of synaptic filtering has been studied in this limit as well; modelling synaptic currents as low-pass filtered Gaussian white noise with non-vanishing temporal correlations 12–15 ., Again , these results are strictly valid only if the 
synaptic amplitudes tend to zero and their rate of arrival goes to infinity ., For finite incoming synaptic events which are excitatory only , the steady state solution can still be obtained analytically 16 , 17 and also the transient solution can efficiently be obtained by numerical solution of a population equation 18 ., A different approach takes into account non-zero synaptic amplitudes to first calculate the free membrane potential distribution and then obtain the firing rate by solving the first passage time problem numerically 19 ., This approach may be extendable to conductance based synapses 20 ., Exact results for the steady state have so far only been presented for the case of exponentially distributed synaptic amplitudes 21 ., The spike threshold renders the model an extremely non-linear unit ., However , if the synaptic input signal under consideration is small compared to the total synaptic barrage , a linear approximation captures the main characteristics of the evoked response ., In this scenario all remaining inputs to the neuron are treated as background noise ( see Figure 1A ) ., Calculations of the linear response kernel in the diffusion limit suggested that the integrate-and-fire model acts as a low-pass filter 22 ., Here spectrum and amplitude of the synaptic background input are decisive for the transient properties of the integrate-and-fire model: in contrast to white noise , low-pass filtered synaptic noise leads to a fast response in the conserved linear term 12 ., Linear response theory predicts an optimal level of noise that promotes the response 23 ., In the framework of spike-response models , an immediate response depending on the temporal derivative of the postsynaptic potential has been demonstrated in the regime of low background noise 24 ., The maximization of the input-output correlation at a finite amplitude of additional noise is called stochastic resonance and has been found experimentally in mechanoreceptors of crayfish 25 , 
in the cercal sensory system of crickets 26 , and in human muscle spindles 27 ., The relevance and diversity of stochastic resonance in neurobiology was recently highlighted in a review article 28 ., Linear response theory enables the characterization of the recurrent dynamics in random networks by a phase diagram 22 , 29 ., It also yields approximations for the transmission of correlated activity by pairs of neurons in feed-forward networks 30 , 31 ., Furthermore , spike-timing dependent synaptic plasticity is sensitive to correlations between the incoming synaptic spike train and the firing of the neuron ( see Figure 1 ) , captured up to first order by the linear response kernel 32–38 ., For neuron models with non-linear membrane potential dynamics , the linear response properties 39 , 40 and the time-dependent dynamics can be obtained numerically 41 ., Afferent synchronized activity , as it occurs e . g . in primary sensory cortex 42 , easily drives a neuron beyond the range of validity of the linear response ., In order to understand transmission of correlated activity , the response of a neuron to fast transients with a multiple of a single synaptic amplitude 43 hence needs to be quantified ., In simulations of neuron models with realistic amplitudes for the postsynaptic potentials , we observed a systematic deviation of the output spike rate and the membrane potential distribution from the predictions by the Fokker-Planck theory modeling synaptic currents by Gaussian white noise ., We excluded any artifacts of the numerics by employing a dedicated high accuracy integration algorithm 44 , 45 ., The novel theory developed here explains these observations and lead us to the discovery of a new early component in the response of the neuron model which linear response theory fails to predict ., In order to quantify our observations , we extend the existing Fokker-Planck theory 46 and hereby obtain the mean time at which the membrane potential first reaches the 
threshold; the mean first-passage time ., The advantage of the Fokker-Planck approach over alternative techniques has been demonstrated 47 ., For non-Gaussian noise , however , the treatment of appropriate boundary conditions for the membrane potential distribution is of utmost importance 48 ., In the results section we develop the Fokker-Planck formalism to treat an absorbing boundary ( the spiking threshold ) in the presence of non-zero jumps ( postsynaptic potentials ) ., For the special case of simulated systems propagated in time steps , an analog theory has recently been published by the same authors 49 , which allows to assess artifacts introduced by time-discretization ., Our theory applied to the integrate-and-fire model with small but finite synaptic amplitudes 1 , introduced in section “The leaky integrate-and-fire model” , quantitatively explains the deviations of the classical theory for Gaussian white noise input ., After reviewing the diffusion approximation of a general first order stochastic differential equation we derive a novel boundary condition in section “Diffusion with finite increments and absorbing boundary” ., We then demonstrate in section “Application to the leaky integrate-and-fire neuron” how the steady state properties of the model are influenced: the density just below threshold is increased and the firing rate is reduced , correcting the preexisting mean first-passage time solution 10 for the case of finite jumps ., Turning to the dynamic properties , in section “Response to fast transients” we investigate the consequences for transient responses of the firing rate to a synaptic impulse ., We find an instantaneous , non-linear response that is not captured by linear perturbation theory in the diffusion limit and that displays marked stochastic resonance ., On the network level , we demonstrate in section “Dominance of the non-linear component on the network level” that the non-linear fast response becomes the most important 
component in case of feed-forward inhibition ., In the discussion we consider the limitations of our approach , mention possible extensions and speculate about implications for neural processing and learning ., Consider a leaky integrate-and-fire model 1 with membrane time constant and resistance receiving excitatory and inhibitory synaptic inputs , as they occur in balanced neural networks 50 ., We aim to obtain the mean firing rate and the steady state membrane potential distribution ., The input current is modeled by point events , drawn from homogeneous Poisson processes with rates and , respectively ., The membrane potential is governed by the differential equation ., An excitatory spike causes a jump of the membrane potential by , an inhibitory spike by , so , where is a constant background current ., Whenever reaches the threshold , the neuron emits a spike and the membrane potential is reset to , where it remains clamped for the absolute refractory time ., The approach we take is to modify the existing Fokker-Planck theory in order to capture the major effects of the finite jumps ., To this end , we derive a novel boundary condition at the firing threshold for the steady state membrane potential distribution of the neuron ., We then solve the Fokker-Planck equation obtained from the standard diffusion approximation 8 , 10 , 11 , 22 , 23 given this new condition ., The membrane potential of the model neuron follows a first order stochastic differential equation ., Therefore , in this section we consider a general first order stochastic differential equation driven by point events ., In order to distinguish the dimensionless quantities in this section from their counterparts in the leaky integrate-and-fire model , we denote the rates of the two incoming Poisson processes by ( excitation ) and ( inhibition ) ., Each incoming event causes a finite jump ( the excitatory synaptic weight ) for an increasing event and ( the inhibitory synaptic weight ) for a 
decreasing event. The stochastic differential equation takes the form ( 1 ) where captures the deterministic time evolution of the system ( with for the leaky integrate-and-fire neuron ). We follow the notation in 46 and employ the Kramers-Moyal expansion with the infinitesimal moments. The first and second infinitesimal moments evaluate to and , where we introduced the shorthand and . The time evolution of the probability density is then governed by the Kramers-Moyal expansion, which we truncate after the second term to obtain the Fokker-Planck equation ( 2 ) where denotes the probability flux operator. In the presence of an absorbing boundary at , we need to determine the resulting boundary condition for the stationary solution of ( 2 ). Without loss of generality, we assume the absorbing boundary at to be the right end of the domain. A stationary solution exists if the probability flux exiting at the absorbing boundary is reinserted into the system. For the example of an integrate-and-fire neuron, reinsertion takes place by resetting the neuron to the same potential after each threshold crossing. This implies a constant flux through the system between the point of insertion and the threshold. Rescaling the density by this flux as results in the stationary Fokker-Planck equation, which is a linear inhomogeneous differential equation of first order ( 3 ) with . First we consider the diffusion limit, in which the rate of incoming events diverges while the amplitude of the jumps goes to zero, such that mean and fluctuations remain constant. In this limit, the Kramers-Moyal expansion truncated after the second term becomes exact 51. This route has been taken before by several authors 8 , 22 , 23; here we review these results to consistently present our extension of the theory. In the above limit, equation ( 3 ) needs to be solved with the boundary conditions. Moreover, a finite probability flux demands the density to be a continuous
function , because of the derivative in the flux operator ., In particular , the solution must be continuous at the point of flux insertion ( however , the first derivative is non-continuous at due to the step function in the right hand side of ( 3 ) ) ., Continuity especially implies a vanishing density at threshold ., Once the solution of ( 3 ) is found , the normalization condition determines the stationary flux ., Now we return to the problem of finite jumps ., We proceed along the same lines as in the diffusion limit , seeking the stationary solution of the Fokker-Planck equation ( 2 ) ., We keep the boundary conditions at and at as well as the normalization condition as before , but we need to find a new self-consistent condition at threshold , because the density does not necessarily have to vanish if the rate of incoming jumps is finite ., The main assumption of our work is that the steady state solution satisfies the stationary Fokker-Planck equation ( 3 ) based on the diffusion approximation within the interval , but not necessarily at the absorbing boundary , where the solution might be non-continuous ., To obtain the boundary condition , we note that the flux over the threshold has two contributions , the deterministic drift and the positive stochastic jumps crossing the boundary ( 4 ) ( 5 ) with ., To evaluate the integral in ( 5 ) , for small we expand into a Taylor series around ., This is where our main assumption enters: we assume that the stationary Fokker-Planck equation ( 3 ) for is a sufficiently accurate characterization of the jump diffusion process ., We solve this equation for It is easy to see by induction , that the function and all its higher derivatives , can be written in the form , whose coefficients for obey the recurrence relation ( 6 ) with the additional values and , as denotes the function itself ., Inserting the Taylor series into ( 5 ) and performing the integration results in ( 7 ) which is the probability mass moved across 
threshold by a perturbation of size and hence also quantifies the instantaneous response of the system ., After dividing ( 4 ) by we solve for to obtain the Dirichlet boundary condition ( 8 ) If is small compared to the length scale on which the probability density function varies , the probability density near the threshold is well approximated by a Taylor polynomial of low degree; throughout this work , we truncate ( 7 ) and ( 12 ) at ., The boundary condition ( 8 ) is consistent with in the diffusion limit , in which the rate of incoming jumps diverges , while their amplitude goes to zero , such that the first ( ) and second moment ( ) stay finite ., This can be seen by scaling , , with such that the mean is kept constant 51 ., Inserting this limit in ( 8 ) , we find ( 9 ) since , and vanishes for , is bounded and ., The general solution of the stationary Fokker-Planck equation ( 3 ) is a sum of a homogeneous solution that satisfies and a particular solution with ., The homogeneous solution is , where we fixed the integration constant by chosing ., The particular solution can be obtained by variation of constants and we chose it to vanish at the threshold as ., The complete solution is a linear combination , where the prefactor is determined by the boundary condition ( 8 ) in the case of finite jumps , or by for Gaussian white noise The normalization condition determines the as yet unknown constant probability flux through the system ., We now apply the theory developed in the previous section to the leaky integrate-and-fire neuron with finite postsynaptic potentials ., Due to synaptic impulses , the membrane potential drifts towards and fluctuates with the diffusion constant ., This suggests to choose the natural units for the time and for the voltage to obtain the simple expressions for the drift- and for the diffusion-term in the Fokker-Planck operator ( 2 ) ., The probability flux operator ( 2 ) is then given as ., In the same units the stationary 
probability density scaled by the flux reads where is the flux corresponding to the firing rate in units of ., As is already scaled by the flux , application of the flux operator yields unity between reset and threshold and zero outside ( 10 ) The steady state solution of this stationary Fokker-Planck equation ( 11 ) is a linear superposition of the homogeneous solution and the particular solution ., The latter is chosen to be continuous at and to vanish at ., Using the recurrence ( 6 ) for the coeffcients of the Taylor expansion of the membrane potential density , we obtain and , where starts from ., The first important result of this section is the boundary value of the density at the threshold following from ( 8 ) as ( 12 ) The constant in ( 11 ) follows from ., The second result is the steady state firing rate of the neuron ., With being the fraction of neurons which are currently refractory , we obtain the rate from the normalization condition of the density as ( 13 ) The normalized steady state solution Figure 2A therefore has the complete form ( 14 ) Figure 2B , D shows the steady state solution near the threshold obtained by direct simulation to agree much better with our analytical approximation than with the theory for Gaussian white noise input ., Even for synaptic amplitudes ( here ) which are considerably smaller than the noise fluctuations ( here ) , the effect is still well visible ., The oscillatory deviations with periodicity close to reset observable in Figure 2A are due to the higher occupation probability of voltages that are integer multiples of a synaptic jump away from reset ., The modulation washes out due to coupling of adjacent voltages by the deterministic drift as one moves away from reset ., The oscillations at lower frequencies apparent in Figure 2A are due to aliasing caused by the finite bin width of the histogram ( ) ., The synaptic weight is typically small compared to the length scale on which the probability density function 
varies ., So the probability density near the threshold is well approximated by a Taylor polynomial of low degree; throughout this work , we truncate the series in ( 12 ) at ., A comparison of this approximation to the full solution is shown in Figure 2E ., For small synaptic amplitudes ( shown ) , below threshold and outside the reset region ( Figure 2A , C ) the approximation agrees with the simulation within its fluctuation ., At the threshold ( Figure 2B , D ) our analytical solution assumes a finite value whereas the direct simulation only drops to zero on a very short voltage scale on the order of the synaptic amplitude ., For larger synaptic weights ( , see Figure 2F ) , the density obtained from direct simulation exhibits a modulation on the corresponding scale ., The reason is the rectifying nature of the absorbing boundary: A positive fluctuation easily leads to a threshold crossing and absorption of the state in contrast to negative fluctuations ., Effectively , this results in a net drift to lower voltages within the width of the jump distribution caused by synaptic input , visible as the depletion of density directly below the threshold and an accumulation further away , as observed in Figure 2F ., The second term ( proportional to ) appearing in ( 13 ) is a correction to the well known firing rate equation of the integrate-and-fire model driven by Gaussian white noise 10 ., Figure 3 compares the firing rate predicted by the new theory to direct simulation and to the classical theory ., The classical theory consistently overestimates the firing rate , while our theory yields better accuracy ., Our correction resulting from the new boundary condition becomes visible at moderate firing rates when the density slightly below threshold is sufficiently high ., At low mean firing rates , the truncation of the Kramers-Moyal expansion employed in the Fokker-Planck description may contribute comparably to the error ., Our approximation captures the dependence on 
the synaptic amplitude correctly for synaptic amplitudes of up to ( Figure 3B ) ., The insets in Figure 3C , D show the relative error of the firing rate as a function of the noise amplitude ., As expected , the error increases with the ratio of the, synaptic effect compared to the amplitude of the noise fluctuations ., For low noise , our theory reduces the relative error by a factor of compared to the classical diffusion approximation ., We now proceed to obtain the response of the firing rate to an additional -shaped input current ., Such a current can be due to a single synaptic event or due to the synchronized arrival of several synaptic pulses ., In the latter case , the effective amplitude of the summed inputs can easily exceed that of a single synapse ., The fast current transient causes a jump of the membrane potential at and ( 2 ) suggests to treat the incident as a time dependent perturbation of the mean input ., First , we are interested in the integral response of the excess firing rate ., Since the perturbation has a flat spectrum , up to linear order in the spectrum of the excess rate is , where is the linear transfer function with respect to perturbing at Laplace frequency ., In particular , ., As is the DC susceptibility of the system , we can express it up to linear order as ., Hence , ( 15 ) We also take into account the dependence of on to calculate from ( 13 ) and obtain ( 16 ) Figure 4D shows the integral response to be in good agreement with the linear approximation ., This expression is consistent with the result in the diffusion limit : Here the last term becomes , where we used , following from ( 10 ) with ., This results in , which can equivalently be obtained directly as the derivative of ( 13 ) with respect to setting ., Taking the limit , however , does not change significantly the integral response compared to the case of finite synaptic amplitudes ( Figure 4D , Figure 5A ) ., The instantaneous response of the firing rate to an 
impulse-like perturbation can be quantified without further approximation ., The perturbation shifts the probability density by so that neurons with immediately fire ., This results in the finite firing probability of the single neuron within infinitesimal time ( 5 ) , which is zero for ., This instantaneous response has several interesting properties: For small it can be approximated in terms of the value and the slope of the membrane potential distribution below the threshold ( using ( 7 ) for ) , so it has a linear and a quadratic contribution in ., Figure 4A shows a typical response of the firing rate to a perturbation ., The peak value for a positive perturbation agrees well with the analytical approximation ( 7 ) ( Figure 4C ) ., Even in the diffusion limit , replacing the background input by Gaussian white noise , the instantaneous response persists ., Using the boundary condition our theory is applicable to this case as well ., Since the density just below threshold is reduced , ( 5 ) yields a smaller instantaneous response ( Figure 4C , Figure 5B ) which for positive still exhibits a quadratic , but no linear , dependence ., The increasing and convex dependence of the response probability on the amplitude of the perturbation is a generic feature of neurons with subthreshold mean input that also persists in the case of finite synaptic rise time ., In this regime , the membrane potential distribution has a mono-modal shape centered around the mean input , which is inherited from the underlying superposition of a large number of small synaptic impulses ., The decay of the density towards the threshold is further enhanced by the probability flux over the threshold: a positive synaptic fluctuation easily leads to the emission of a spike and therefore to the absorption of the state at the threshold , depleting the density there ., Consequently , the response probability of the neuron is increasing and convex as long as the peak amplitude of the postsynaptic 
potential is smaller than the distance of the peak of the density to the threshold ., It is increasing and concave beyond this point ., At present the integrate-and-fire model is the simplest analytically tractable model with this feature ., The integral response ( 15 ) as well as the instantaneous response ( 5 ) both exhibit stochastic resonance; an optimal level of synaptic background noise enhances the transient ., Figure 5A shows this noise level to be at about for the integral response ., The responses to positive and negative perturbations are symmetric and the maximum is relatively broad ., The instantaneous response in Figure 5B displays a pronounced peak at a similar value of ., This non-linear response only exists for positive perturbations; the response is zero for negative ones ., Though the amplitude is reduced in the case of Gaussian white noise background , the behavior is qualitatively the same as for noise with finite jumps ., Stochastic resonance has been reported for the linear response to sinusoidal periodic stimulation 23 ., Also for non-periodic signals that are slow compared to the neurons dynamics an adiabatic approximation reveals stochastic resonance 52 ., In contrast to the latter study , the rate transient observed in our work is the instantaneous response to a fast ( Dirac ) synaptic current ., Due to the convex nature of the instantaneous response ( Figure 4C ) its relative contribution to the integral response increases with ., For realistic synaptic weights the contribution reaches percent ., An example network in which the linear non-instantaneous response cancels completely and the instantaneous response becomes dominant is shown in Figure 6A ., At two populations of neurons simultaneously receive a perturbation of size and respectively ., This activity may , for example , originate from a third pool of synchronous excitatory and inhibitory neurons ., It may thus be interpreted as feed-forward inhibition ., The linear contributions 
to the pooled firing rate response of the former two populations hence is zero ., The instantaneous response , however , causes a very brief overshoot at ( Figure 6B ) ., Figure 6C reveals that the response returns to baseline within ., Figure 6D shows that the dependence of peak height on still exhibits the supra-linearity ., The quite exact cancellation of the response for originates from the symmetry of the response functions for positive and negative perturbations in this interval ( shown in Figure 4A , B ) ., The pooled firing rate of the network is the sum of the full responses: the instantaneous response at does not share the symmetry and hence does not cancel ., This demonstrates that the result of linear perturbation theory is a good approximation for and that the instantaneous response at the single time point completes the characterization of the neuronal response ., In this work we investigate the effect of small , but non-zero synaptic impulses on the steady state and response properties of the integrate-and-fire neuron model ., We obtain a more accurate description of the firing rate and the membrane potential distribution in the steady state than provided by the classical approximation of Gaussian white noise input currents 10 ., Technically this is achieved by a novel hybrid approach combining a diffusive description of the membrane potential dynamics far away from the spiking threshold with an explicit treatment of threshold crossings by synaptic transients ., This allows us to obtain a boundary condition for the membrane potential density at threshold that captures the observed elevation of density ., Our work demonstrates that in addition to synaptic filtering , the granularity of the noise due to finite non-zero amplitudes does affect the steady state and the transient response properties of the neuron ., Here , we study the effect of granularity using the example of a simple neuron model with only one dynamic variable ., The quantitatively 
similar increase of the density close to threshold observed if low-pass filtered Gaussian white noise is used as a model for the synaptic current has a different origin ., It is due to the absence of a diffusion term in the dynamics of the membrane potential 12 , 13 , 15 ., The analytical treatment of finite synaptic amplitudes further allows us to characterize the probability of spike emission in response to synaptic inputs for neuron models with a single dynamical variable and renewal ., Alternatively , this response can be obtained numerically from population descriptions 18 , 39–41 or , for models with one or more dynamic variables and gradually changing inputs , in the framework of the refractory density approximation 15 ., Here , we find that the response can be decomposed into a fast , non-linear and a slow linear contribution , as observed experimentally about a quarter of a century ago 53 in motor neurons of cat cortex in the presence of background noise ., The existence of a fast contribution proportional to the temporal change of the membrane potential was predicted theoretically 54 ., In the framework of the refractory density approach 15 , the effective hazard function of an integrate-and-fire neuron also exhibits contributions to spike emission due to two distinct causes: the diffusive flow through the threshold and the movement of density towards the threshold ., The latter contribution is proportional to the temporal change of the membrane potential and is corresponding to the instantaneous response reported here , but for the case of a gradually increasing membrane potential ., Contemporary theory of recurrent networks so far has neglected the transient non-linear component of the neural response , an experimentally observed feature 53 that is generic to threshold units in the presence of noise ., The infinitely fast rise of the postsynaptic potential in the integrate-and-fire model leads to the immediate emission of a spike with finite probability 
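The finite spike probability described here equals the stationary probability mass lying within one impulse amplitude below threshold, and can be illustrated with a small Monte Carlo sketch. All parameter values below are invented for this illustration and only mimic the subthreshold regime of the model:

```python
import random

# Illustrative sketch only: an LIF neuron driven by excitatory and
# inhibitory Poisson jumps; parameters are made up, not from the paper.
TAU, W, G = 10.0, 0.05, 1.0        # time constant (ms), jump sizes
NU_E, NU_I = 2.2, 0.4              # Poisson rates (1/ms)
V_TH, V_RESET, T_REF = 1.0, 0.0, 2.0

def stationary_samples(t_max=20000.0, dt=0.05, seed=7):
    """Collect membrane-potential samples from one long simulation."""
    rng = random.Random(seed)
    v, t_free, samples = V_RESET, 0.0, []
    for k in range(int(t_max / dt)):
        t = k * dt
        if t >= t_free:                       # not refractory
            v += -v / TAU * dt                # leak
            if rng.random() < NU_E * dt:      # excitatory jump (thinning,
                v += W                        # at most one event per step)
            if rng.random() < NU_I * dt:      # inhibitory jump
                v -= G * W
            if v >= V_TH:                     # spike: reset, go refractory
                v = V_RESET
                t_free = t + T_REF
            samples.append(v)                 # non-refractory samples only
    return samples

def instant_response(samples, s):
    """Fraction of states pushed over threshold by an impulse of size s."""
    return sum(1 for v in samples if v > V_TH - s) / len(samples)

samples = stationary_samples()
p_small = instant_response(samples, 0.1)
p_big = instant_response(samples, 0.2)
```

With these settings, doubling the impulse amplitude more than doubles the instantaneous response, because the membrane-potential density is depleted directly below the absorbing threshold.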
., For excitatory inputs , this probability depends supra-linearly on the amplitude of the synaptic impulse and it is zero for inhibitory impulses ., The supra-linear increase for small positive impulse amplitudes relates to the fact that the membrane potential density decreases towards threshold: the probability to instantaneously emit a spike equals the integral of the density shifted over the threshold ., The detailed shape of the density below threshold therefore determines the response properties ., For Gaussian white noise synaptic background , the model still displays an instantaneous response ., However , since in this case the density vanishes at threshold , the response probability to lowest order grows quadratically in the amplitude of a synaptic impulse ., This is the reason why previous work based on linear response theory did not report on the existence of an instantaneous component when modulating the mean input and on the contrary characterized the nerve cell as a low-pass in this case 22 , 23 ., Modulation of the noise amplitude , however , has been shown to cause an instantaneous response in linear approximation in the diffusion limit 23 , confirmed experimentally in real neurons 55 ., While linear response theory has proven extremely useful to understand recurrent neural networks 29 , the categorization of the integrate-and-fire neurons response kernel as a low-pass is misleading , because it suggests the absence of an immediate response ., Furthermore we find that in addition to the nature of the background noise , response properties also depend on its amplitude: a certain level of noise optimally promotes the spiking response ., Hence noise facilitates the transmission of the input to the output of the neuron ., This is stochastic resonance in the general sense of the term as recently suggested 28 ., As noted in the introduction , stochastic resonance of the linear response kernel has previously been demonstrated for sinusoidal input currents 
and Gaussian white background noise 23 ., Furthermore , also slow aperiodic transients are facilitated by stochastic resonance in the integrate-and-fire neuron 52 ., We extend the known results in two respects ., Firstly , we show that the linear response shows aperiodic stochastic resonance also for fast transients ., Secondly , we demonstrate tha | Introduction, Model, Results, Discussion | Contemporary theory of spiking neuronal networks is based on the linear response of the integrate-and-fire neuron model derived in the diffusion limit ., We find that for non-zero synaptic weights , the response to transient inputs differs qualitatively from this approximation ., The response is instantaneous rather than exhibiting low-pass characteristics , non-linearly dependent on the input amplitude , asymmetric for excitation and inhibition , and is promoted by a characteristic level of synaptic background noise ., We show that at threshold the probability density of the potential drops to zero within the range of one synaptic weight and explain how this shapes the response ., The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled networks of threshold units . 
| Our work demonstrates a fast-firing response of nerve cells that remained unconsidered in network analysis , because it is inaccessible by the otherwise successful linear response theory . For the sake of analytic tractability , this theory assumes infinitesimally weak synaptic coupling . However , realistic synaptic impulses cause a measurable deflection of the membrane potential . Here we quantify the effect of this pulse-coupling on the firing rate and the membrane-potential distribution . We demonstrate how the postsynaptic potentials give rise to a fast , non-linear rate transient present for excitatory , but not for inhibitory , inputs . It is particularly pronounced in the presence of a characteristic level of synaptic background noise . We show that feed-forward inhibition enhances the fast response on the network level . This enables a mode of information processing based on short-lived activity transients . Moreover , the non-linear neural response appears on a time scale that critically interacts with spike-timing dependent synaptic plasticity rules . Our results are derived for biologically realistic synaptic amplitudes , but also extend earlier work based on Gaussian white noise . The novel theoretical framework is generically applicable to any threshold unit governed by a stochastic differential equation driven by finite jumps . Therefore , our results are relevant for a wide range of biological , physical , and technical systems . | biophysics/theory and simulation, neuroscience/theoretical neuroscience, computational biology/computational neuroscience | null |
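The firing-rate result summarized in this row can be reproduced in outline: simulate the jump-driven integrate-and-fire neuron directly and compare with the classical diffusion-limit (Gaussian white noise) rate prediction. This is an illustrative sketch; the parameter values are invented and the diffusion-limit formula is the standard Siegert-type expression, not the finite-jump correction derived in the paper:

```python
import math
import random

# Illustrative parameters (not taken from the paper).
TAU = 10.0                  # membrane time constant (ms)
W, G = 0.05, 1.0            # excitatory jump, relative inhibitory weight
NU_E, NU_I = 2.2, 0.4       # Poisson rates (1/ms)
V_TH, V_RESET, T_REF = 1.0, 0.0, 2.0

def simulate_rate(t_max=20000.0, dt=0.05, seed=1):
    """Monte Carlo firing rate of the LIF neuron driven by finite jumps."""
    rng = random.Random(seed)
    v, t, t_next, n_spikes = V_RESET, 0.0, 0.0, 0
    while t < t_max:
        if t >= t_next:                     # not refractory
            v += -v / TAU * dt
            if rng.random() < NU_E * dt:    # thinning: at most one
                v += W                      # event per channel per step
            if rng.random() < NU_I * dt:
                v -= G * W
            if v >= V_TH:
                n_spikes += 1
                v = V_RESET
                t_next = t + T_REF
        t += dt
    return n_spikes / t_max                 # spikes per ms

def siegert_rate(n_grid=4000):
    """Classical diffusion-limit (Gaussian white noise) rate prediction."""
    mu = TAU * (W * NU_E - G * W * NU_I)
    sigma = math.sqrt(TAU * (W ** 2 * NU_E + G ** 2 * W ** 2 * NU_I))
    lo, hi = (V_RESET - mu) / sigma, (V_TH - mu) / sigma
    du = (hi - lo) / n_grid                 # midpoint-rule integration
    integral = sum(math.exp(u * u) * (1.0 + math.erf(u)) * du
                   for u in (lo + (k + 0.5) * du for k in range(n_grid)))
    return 1.0 / (T_REF + TAU * math.sqrt(math.pi) * integral)

r_sim = simulate_rate()
r_th = siegert_rate()
```

For weak synapses the two estimates agree in order of magnitude; the residual discrepancy is exactly the regime that the finite-jump boundary condition of the paper addresses.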
2,010 | journal.pcbi.1000776 | 2,010 | Assimilating Seizure Dynamics | A universal dilemma in understanding the brain is that it is complex , multiscale , nonlinear in space and time , and we never have more than partial experimental access to its dynamics ., To better understand its function one not only needs to encompass the complexity and nonlinearity , but also estimate the unmeasured variables and parameters of brain dynamics ., A parallel comparison can be drawn in weather forecasting 1 , although atmospheric dynamics are arguably less complex and less nonlinear ., Fortunately , the meteorological community has overcome some of these issues by using model based predictor-controller frameworks whose development derived from computational robotics requirements of aerospace programs in 1960s 2 , 3 ., A predictor-controller system employs a computational model to observe a dynamical system ( e . g . weather ) , assimilate data through what may be relatively sparse sensors , and reconstruct and estimate the remainder of the unmeasured variables and parameters in light of available data ., The result of future measured system dynamics is compared with the model predicted outcome , the expected errors within the model are updated and corrected , and the process repeats iteratively ., For this recursive initial value problem to be meaningful one needs computational models of high fidelity to the dynamics of the natural systems , and explicit modeling of the uncertainties within the model and measurements 3–5 ., The most prominent of the model based predictor-controller strategies is the Kalman filter ( KF ) 2 ., In its original form , the KF solves a linear system ., In situations of mild nonlinearity , the extended forms of the KF were used where the system equations could be linearized without losing too much of the qualitative nature of the system ., Such linearization schemes are not suitable for neuronal systems with nonlinearities of the scale of action 
potential spike generation ., With the advent of efficient nonlinear techniques in the 1990s such as the ensemble Kalman filter 6 , 7 and the unscented Kalman filter ( UKF ) 8 , 9 , along with improved computational models for the dynamics of neuronal systems ( incorporating synaptic inputs , cell types , and dynamic microenvironment ) 10 , the prospects for biophysically based ensemble filtering from neuronal systems are now strong ., The general framework of the UKF differs from the extended KF in that it integrates the fundamental nonlinear models directly , along with iterating the error and noise expectations through these nonlinear equations ., Instead of linearizing the system equations , UKF performs the prediction and update steps on an ensemble of potential system states ., This ensemble gives a finite sampling representation of the probability distribution function of the system state 3 , 11–15 ., Our hypothesis is that seizures arise from a complex nonlinear interaction between specific excitatory and inhibitory neuronal sub-types 16 ., The dynamics and excitability of such networks are further complicated by the fact that a variety of metabolic processes govern the excitability of those neuronal networks ( such as potassium concentration ( ) gradients and local oxygen availability ) , and these metabolic variables are not directly measurable using electrical potential measurements ., Indeed , it is becoming increasingly apparent that electricity is not enough to describe a wide variety of neuronal phenomena ., Several seizure prediction algorithms , based only on EEG signals , have achieved reasonable accuracy when applied to static time-series 17–19 ., However , many techniques are hindered by high false positive rates , which render them unsuitable for clinical use ., We presume that there are aspects of the dynamics of seizure onset and pre-seizure states that are not captured in current models when applied in real-time ., In light of the dynamic 
nature of epilepsy , an approach that incorporates the time evolution of the underlying system for seizure prediction is required ., As one cannot see much of an anticipatory signature in EEG dynamics prior to seizures , the same can be said of a variety of oscillatory transient phenomena in the nervous system ranging from up states 20 , spinal cord burst firing 21 , cortical oscillatory waves 22 , in addition to animal 23 and human 24 epileptic seizures ., All of these phenomena share the properties that they are episodic , oscillatory , and have apparent refractory periods following which small stimuli can both start and stop such events ., It has recently been shown that the interrelated dynamics of and sodium concentration ( ) affect the excitability of neurons , help determine the occurrence of seizures , and affect the stability of persistent states of neuronal activity 10 , 25 ., Competition between intrinsic neuronal ion currents , sodium-potassium pumps , glia , and diffusion can produce slow and large-amplitude oscillations in ion concentrations similar to what is observed physiologically in seizures 26 , 27 ., Brain dynamics emerge from within a system of apparently unique complexity among the natural systems we observe ., Even as multivariable sensing technology steadily improves , the near infinite dimensionality of the complex spatial extent of brain networks will require reconstruction through modeling ., Since at present , our technical capabilities restrict us to only one or two variables at a restricted number of sites ( such as voltage or calcium ) , computational models become the “lens” through which we must consider viewing all brain measurements 28 ., In what follows , we will show the potential power of fusing physiological measurements with computational models ., We will use reconstruction to account for unmeasured parts of the neuronal system , relating micro-domain metabolic processes to cellular excitability , and validating cellular 
dynamical reconstruction against actual measurements ., Model inadequacy is an issue of intense research in the data assimilation community – no model does exactly what nature does ., To deal with inadequate models , researchers in areas such as meteorology have developed various strategies to account for the inaccuracies in the models for weather forecasting 4 , 5 , 29 ., In complex systems such as neuronal networks , the need to account for model inadequacy is critical ., To demonstrate that UKF can track neuronal dynamics in the face of moderate inadequacy , we impaired our model by setting the sodium current rate constant instead of using the actual complex function of , ( see equation ( 2 ) for the functional form of ) , and tracked it as a parameter ( Figure 3 ) ., That is , we deleted the relevant function for from the model and allowed UKF to update it as a parameter ., The model with fixed is by itself unable to spike , but when it is allowed to float when voltage is assimilated through UKF using the data from hippocampal pyramidal cells ( PCs ) , it is capable of tracking the dynamics of the cell reasonably well ., The tracked by the filter is sufficiently close to its functional form values ( within 25% ) so that spiking dynamics can be reconstructed ( Figure 3C and 3D ) ., This occurs because Kalman filtering constantly estimates the trade off between model accuracy and measurements , expressed in the filter gain function 2 , 3 ., This is an excellent demonstration of the robustness of this framework ., Looking at the estimated values of it also becomes clear that in fact should be assigned the functional form rather than a constant value ., Despite decades of effort neuroscientists lack a unifying dynamical principle for epilepsy ., An incomplete knowledge of the neural interactions during seizures makes the quest for unifying principles especially difficult 30 ., Here we show that UKF can be employed to track experimentally inaccessible neuronal 
dynamics during seizures ., Specifically , we used UKF to assimilate data from pairs of simultaneously impaled pyramidal cells and oriens-lacunosum moleculare ( OLM ) interneurons ( INs ) in the CA1 area of the hippocampus 23 ., We then used biophysical ionic models to estimate extra- and intracellular potassium , sodium , and calcium ion concentrations and various parameters controlling their dynamics during seizures ( Figure 4 ) ., In Figure 4A we show an intracellular recording from a pyramidal cell during seizures , and plot the estimated extracellular potassium concentration ( ) in Figure 4B ., As is clear from the figure the extracellular potassium concentration oscillates as the cell goes into and out of seizures ., The potassium concentration begins to rise as the cell enters seizures and peaks with the maximal firing frequency , followed by decreasing potassium concentration as the firing rate decreases and the seizure terminates ., Higher makes the PC more excitable by raising the reversal potential for currents ( equation 7 ) ., The increased reversal potential causes the cell to burst-fire spontaneously ., Whether the increased causes the cells to seize or is the result of seizures has been an old question 31 whose resolution will likely take place from better understanding of the coupled dynamics ., For present purposes , it is known that increased in experiments can support the generation , and increase the frequency and propagation velocity of seizures 32 , 33 ., Changes in the concentration of intracellular sodium ions , , are closely coupled with the changes of ( Figure 4C ) ., As shown in panels ( 4D–F ) we reconstructed the parameters controlling the microenvironment of the cell ., These parameters included the diffusion constant of in the extracellular space , buffering strength of glia , and concentration in the reservoir of the perfusing solution in vitro ( or in the vasculature in vivo ) during seizures ., Note that the ionic concentration in 
the distant reservoir differs from the more rapid dynamics within the smaller connected extracellular space near the single cell, where excitability is determined. We were also able to track other variables and parameters such as the extracellular calcium concentration and ion channel conductances. In Figure 5, we show an expanded view of a single cell's response during a single seizure from Figure 4. Extracellular potassium concentration increases severalfold above baseline values during seizures 31. During a single seizure, [K+]o starts rising from a baseline value of 3.0 mM as the seizure begins and peaks at 7 mM in the middle of the seizure (Figure 5). Interestingly, the [K+]o estimated by the UKF matches very closely the measured [K+]o seen in in vitro studies 34. Considering the slow time scale of seizure evolution (periods of more than 100 seconds in our experiments), we tested the importance of slow variables such as ion concentrations for seizure tracking. As shown in Figure 6, we found that including the dynamic intra- and extracellular ion concentrations in the model is necessary for accurate tracking of seizures. Using Hodgkin-Huxley type ionic currents with fixed intra- and extracellular concentrations of Na+ and K+ ions fails to track seizure dynamics in pyramidal cells (Figure 6C). We used physiologically normal concentrations of 4 mM and 18 mM for extracellular K+ and intracellular Na+, respectively, for these simulations. The conclusion remains the same when higher concentrations are used. A similar tracking failure is found while tracking the dynamics of OLM interneurons during seizures (not shown). To further emphasize the importance of ion concentration dynamics for tracking seizures, we calculated the Akaike information criterion (AIC) for the two models used in Figure 6, i.e.
the model with and without ion concentration dynamics. The AIC is a measure of the goodness of fit of a model and offers a measure of the information lost when a given model is used to describe experimental observations. Loosely speaking, it describes the tradeoff between the precision and the complexity of the model 35. We used equation (29) for the AIC measure. The AIC for the model without ion concentration dynamics is substantially larger than that for the model with ion concentration dynamics, indicating the importance of ion concentration dynamics for tracking seizures. Pyramidal cells and interneurons in the hippocampus reside in different layers with different cell densities. To investigate whether there are significant differences in the microenvironment surrounding these two cell types, we assimilated membrane potential data from OLM interneurons in the hippocampus and reconstructed the K+ and Na+ ion concentrations inside and outside the cells. As shown in Figure 7, both the baseline level and the peak [K+]o near the interneurons must be much higher than those seen for the pyramidal cells (cf.
Figure 4B). This is an important prediction in light of the recently observed interplay between pyramidal cells and interneurons during in vitro seizures 23; in these experiments, pyramidal cells were silent while the interneurons were firing intensely. Following intense firing, the interneurons entered a state of depolarization block simultaneously with the emergence of intense epileptiform firing in pyramidal cells. Such a novel pattern of interleaving neuronal activity has been proposed as a possible mechanism for the sudden drop in inhibition during seizures: it may be permissive of runaway excitatory activity. The mechanism leading to such interplay, specifically the reason for the differential firing patterns in pyramidal cells and interneurons, is unknown. Our results here indicate the potential role of the neuronal microenvironment in producing such interplay. Our findings suggest that the K+ buffering mechanism in the OLM layer is weaker than that in the pyramidal layer, thus causing higher [K+]o in the OLM layer. The higher [K+]o surrounding the interneurons increases the excitability of the cells by raising the reversal potential for K+ currents (higher than in the pyramidal cells; see equation 7). The higher reversal potential for K+ currents causes the interneurons to spontaneously burst-fire at higher frequency and eventually drives them to transition into depolarization block when firing peaks. As the INs enter the depolarized state, the inhibitory synaptic input from the INs to the PCs drops substantially, releasing the PCs to generate the intense excitatory activity of seizures (equation 8, Figure S3). The collapse of inhibition due to the entrance of INs into a depolarized state also helps explain the sudden decrease in inhibition at seizure onset in neocortex described by Trevelyan et al.
36 as the loss of the inhibitory veto. As shown in Figure S1, we also tracked the remaining variables for the INs. Since the interaction of neurons determines network patterns of activity, it is within such interactions that we seek unifying principles for epilepsy. To demonstrate that the UKF framework can be utilized to study cellular interactions, we reconstructed the dynamics of one cell type by assimilating the measured data from another cell type in the network. In Figure 8 we show only the estimated membrane potentials, but we also reconstructed the remaining variables and parameters of both cells (Figures S2 and S3). We first assimilated the membrane potential of the PC to estimate the dynamics of the same cell and also the dynamics of a coupled IN (Figure 8A–D). Conversely, we estimated the dynamics of the PC from the simultaneously measured membrane potential of the IN (Figure 8D–F). As is evident from Figure 8, the filter framework is successful at reciprocally reconstructing and tracking the dynamics of these different cells within this network. In Figure S2, we show the intracellular Na+ concentration and the gating variables of the Na+ and K+ channels in PCs for the simulation in Figure 8A–D. The variables modeling the synaptic inputs for both INs and PCs in Figure 8A–D are shown in Figure S3. As is clear from Figure S3(D), the synaptic variable (equation 8) reaches very high values when the INs lock into depolarization block, shutting off the inhibitory inputs from the INs to the PCs. In conclusion, we have demonstrated the feasibility of data assimilation within neuronal networks using detailed biophysical models. In particular, we demonstrated that estimation of the neuronal microenvironment and neuronal interactions can be performed by embedding our improving biophysical neuronal models within a model-based state estimation framework. This approach can provide a more complete understanding of otherwise incompletely observed neuronal dynamics
during normal and pathological brain function ., We used two-compartmental models for the pyramidal cells and interneurons: a cellular compartment and the surrounding extracellular microenvironment ., The membrane potentials of both cells were modeled by Hodgkin-Huxley equations containing sodium , potassium , calcium-gated potassium ( after-hyperpolarization ) , and leak currents ., For the network model , the two cell types are coupled synaptically and through diffusion of potassium ions in the extracellular space ., A schematic of the model is shown in Figure 9 ., To estimate and track the dynamics of the neuronal networks , we applied a nonlinear ensemble version of the Kalman filter , the unscented Kalman filter ( UKF ) 8 , 9 ., The UKF uses known nonlinear dynamical equations and observation functions along with noisy , partially observed data to continuously update a Gaussian approximation for the neuronal state and its uncertainty ., At each integration step , perturbed system states that are consistent with the current state uncertainty , sigma points , are chosen ., The UKF consists of integrating the system from the sigma points , estimating mean state values , and then updating the covariance matrix that approximates the state uncertainty ., The Kalman gain matrix updates the new most likely state of the system based on the estimated measurements and the actual partially measured state ., The estimated states ( filtered states ) are used to estimate the experimentally inaccessible parameters and variables by synchronizing the model equations to the estimated states ., To estimate the system parameters from data , we introduced the unknown parameters as extra state variables with trivial dynamics ., The UKF with random initial conditions for the parameters will converge to an optimal set of parameters , or in the case of varying parameters , will track them along with the state variables 11–13 ., Given a function describing the dynamics of the system ( 
equations 1–10 in our case), and an observation function G contaminated by uncertainty characterized by the covariance matrix R, for an n-dimensional state vector x with mean \bar{x} the UKF generates the sigma points x_1, …, x_{2n} so that their sample mean and sample covariance are \bar{x} and P. The sigma points are the rows of the matrix

x_i = [ \bar{x} + (\sqrt{nP})^T ; \bar{x} - (\sqrt{nP})^T ]_i ,  i = 1, …, 2n   (11)

The index i on the left-hand side corresponds to the row taken from the matrix in parentheses on the right-hand side. The square root sign denotes the matrix square root and ^T indicates the transpose of the matrix. Sigma points can be envisioned as sample points at the boundaries of a covariance ellipsoid. In what follows, a superscript tilde (\tilde{ }) represents the a priori values of variables and parameters, i.e. the values at a given time-step when observations up to the previous time-step are available, while a hat (\hat{ }) represents the a posteriori quantities, i.e. the values at a time-step when observations up to that time-step are available. Applying one step of the dynamics to the sigma points and calling the results \tilde{x}_i, and denoting the observations of the new states by \tilde{y}_i = G(\tilde{x}_i), we define the means

\tilde{x} = (1/2n) \sum_i \tilde{x}_i ,   \tilde{y} = (1/2n) \sum_i \tilde{y}_i   (12)

where \tilde{x} and \tilde{y} are the a priori state and measurement estimates, respectively. Now define the a priori covariances

\tilde{P}^{xx} = (1/2n) \sum_i (\tilde{x}_i - \tilde{x})(\tilde{x}_i - \tilde{x})^T ,  \tilde{P}^{xy} = (1/2n) \sum_i (\tilde{x}_i - \tilde{x})(\tilde{y}_i - \tilde{y})^T ,  \tilde{P}^{yy} = (1/2n) \sum_i (\tilde{y}_i - \tilde{y})(\tilde{y}_i - \tilde{y})^T + R   (13)

of the ensemble members. The Kalman filter estimates of the new state and uncertainty are given by the a posteriori quantities

\hat{x} = \tilde{x} + K (y - \tilde{y})   (14)

and

\hat{P} = \tilde{P}^{xx} - K (\tilde{P}^{xy})^T   (15)

where K = \tilde{P}^{xy} (\tilde{P}^{yy})^{-1} is the Kalman gain matrix and y is the actual observation 3, 8, 9, 11–13. Thus \hat{x} and \hat{P} are the updated estimated state and covariance for the next step. The a posteriori estimate of the observation is recovered by \hat{y} = G(\hat{x}). Thus by augmenting the observed state variables with unobserved state variables and system parameters, the UKF can estimate and track both unobserved variables and system parameters.
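The sigma-point prediction and update cycle described above (equations 11–15) can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the dynamics function f, observation function h, and the noise covariances Q and R passed in are illustrative stand-ins for the full biophysical model and its uncertainty terms.

```python
import numpy as np

def ukf_step(f, h, x_mean, P, y_obs, Q, R):
    """One predict/update cycle of the sigma-point filter sketched above.

    f: dynamics map, state vector -> next state (stand-in for eqs. 1-10)
    h: observation map, state vector -> measurement vector
    x_mean (n,), P (n,n): current state estimate and its covariance
    y_obs (m,): the actual noisy measurement
    Q, R: assumed process and measurement noise covariances
    """
    n = x_mean.size
    # Sigma points: rows of [x̄ + sqrt(nP); x̄ - sqrt(nP)]  (eq. 11);
    # Cholesky factor is one valid choice of matrix square root.
    S = np.linalg.cholesky(n * P)
    X = np.vstack([x_mean + S.T, x_mean - S.T])        # shape (2n, n)
    # Propagate sigma points through the dynamics and observation maps
    Xp = np.array([f(x) for x in X])
    Yp = np.array([h(x) for x in Xp])
    # A priori means (eq. 12)
    x_pred = Xp.mean(axis=0)
    y_pred = Yp.mean(axis=0)
    # A priori covariances of the ensemble members (eq. 13)
    dX = Xp - x_pred
    dY = Yp - y_pred
    Pxx = dX.T @ dX / (2 * n) + Q
    Pxy = dX.T @ dY / (2 * n)
    Pyy = dY.T @ dY / (2 * n) + R
    # Kalman gain and a posteriori update (eqs. 14-15)
    K = Pxy @ np.linalg.inv(Pyy)
    x_new = x_pred + K @ (y_obs - y_pred)
    P_new = Pxx - K @ Pxy.T
    return x_new, P_new
```

Parameter estimation follows by augmenting the state vector with the unknown parameters and giving them trivial dynamics (f maps them to themselves), so the same update tracks them alongside the state variables.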
| Introduction, Results, Discussion, Materials and Methods | Observability of a dynamical system requires an understanding of its state—the collective values of its variables ., However , existing techniques are too limited to measure all but a small fraction of the physical variables and parameters of neuronal networks ., We constructed models of the biophysical properties of neuronal membrane , synaptic , and microenvironment dynamics , and incorporated them into a model-based predictor-controller framework from modern control theory ., We demonstrate that it is now possible to meaningfully estimate the dynamics of small neuronal networks using as few as a single measured variable ., Specifically , we assimilate noisy membrane potential measurements from individual hippocampal neurons to reconstruct the dynamics of networks of these cells , their extracellular microenvironment , and the activities of different neuronal types during seizures ., We use reconstruction to account for unmeasured parts of the neuronal system , relating micro-domain metabolic processes to cellular excitability , and validate the reconstruction of cellular dynamical interactions against actual measurements ., Data assimilation , the fusing of measurement with computational models , has significant potential to improve the way we observe and understand brain dynamics . 
| To understand a complex system such as the weather or the brain , one needs an exhaustive detailing of the system variables and parameters ., But such systems are vastly undersampled from existing technology ., The alternative is to employ realistic computational models of the system dynamics to reconstruct the unobserved features ., This model based state estimation is referred to as data assimilation ., Modern robotics use data assimilation as the recursive predictive strategy that underlies the autonomous control performance of aerospace and terrestrial applications ., We here adapt such data assimilation techniques to a computational model of the interplay of excitatory and inhibitory neurons during epileptic seizures ., We show that incorporating lower scale metabolic models of potassium dynamics is essential for accuracy ., We apply our strategy using data from simultaneous dual intracellular impalements of inhibitory and excitatory neurons ., Our findings are , to our knowledge , the first validation of such data assimilation in neuronal dynamics . | neuroscience/theoretical neuroscience, computational biology/computational neuroscience, neurological disorders/epilepsy | null |
516 | journal.pcbi.1000352 | 2,009 | Statistical Methods for Detecting Differentially Abundant Features in Clinical Metagenomic Samples | The increasing availability of high-throughput , inexpensive sequencing technologies has led to the birth of a new scientific field , metagenomics , encompassing large-scale analyses of microbial communities ., Broad sequencing of bacterial populations allows us a first glimpse at the many microbes that cannot be analyzed through traditional means ( only ∼1% of all bacteria can be isolated and independently cultured with current methods 1 ) ., Studies of environmental samples initially focused on targeted sequencing of individual genes , in particular the 16S subunit of ribosomal RNA 2–5 , though more recent studies take advantage of high-throughput shotgun sequencing methods to assess not only the taxonomic composition , but also the functional capacity of a microbial community 6–8 ., Several software tools have been developed in recent years for comparing different environments on the basis of sequence data ., DOTUR 9 , Libshuff 10 , ∫-libshuff 11 , SONs 12 , MEGAN 13 , UniFrac 14 , and TreeClimber 15 all focus on different aspects of such an analysis ., DOTUR clusters sequences into operational taxonomic units ( OTUs ) and provides estimates of the diversity of a microbial population thereby providing a coarse measure for comparing different communities ., SONs extends DOTUR with a statistic for estimating the similarity between two environments , specifically , the fraction of OTUs shared between two communities ., Libshuff and ∫-libshuff provide a hypothesis test ( Cramer von Mises statistics ) for deciding whether two communities are different , and TreeClimber and UniFrac frame this question in a phylogenetic context ., Note that these methods aim to assess whether , rather than how two communities differ ., The latter question is particularly important as we begin to analyze the contribution of the microbiome to human 
health ., Metagenomic analysis in clinical trials will require information at individual taxonomic levels to guide future experiments and treatments ., For example , we would like to identify bacteria whose presence or absence contributes to human disease and develop antibiotic or probiotic treatments ., This question was first addressed by Rodriguez-Brito et al . 16 , who use bootstrapping to estimate the p-value associated with differences between the abundance of biological subsytems ., More recently , the software MEGAN of Huson et al . 13 provides a graphical interface that allows users to compare the taxonomic composition of different environments ., Note that MEGAN is the only one among the programs mentioned above that can be applied to data other than that obtained from 16S rRNA surveys ., These tools share one common limitation — they are all designed for comparing exactly two samples — therefore have limited applicability in a clinical setting where the goal is to compare two ( or more ) treatment populations each comprising multiple samples ., In this paper , we describe a rigorous statistical approach for detecting differentially abundant features ( taxa , pathways , subsystems , etc . ) between clinical metagenomic datasets ., This method is applicable to both high-throughput metagenomic data and to 16S rRNA surveys ., Our approach extends statistical methods originally developed for microarray analysis ., Specifically , we adapt these methods to discrete count data and correct for sparse counts ., Our research was motivated by the increasing focus of metagenomic projects on clinical applications ( e . g . Human Microbiome Project 17 ) ., Note that a similar problem has been addressed in the context of digital gene expression studies ( e . g . SAGE 18 ) ., Lu et al . 
19 employ an overdispersed log-linear model and Robinson and Smyth 20 use a negative binomial distribution in the analysis of multiple SAGE libraries. Both approaches can be applied to metagenomic datasets. We compare our tool to these prior methodologies through comprehensive simulations, and demonstrate the performance of our approach by analyzing publicly available datasets, including 16S surveys of human gut microbiota and random sequencing-based functional surveys of infant and mature gut microbiomes and microbial and viral metagenomes. The methods described in this paper have been implemented as a web server and are also available as free source code (in R) from http://metastats.cbcb.umd.edu. To account for different levels of sampling across multiple individuals, we convert the raw abundance measure to a fraction representing the relative contribution of each feature to each of the individuals. This results in a normalized version of the matrix described above, where the cell in the i-th row and the j-th column (which we shall denote f_{ij}) is the proportion of taxon i observed in individual j. We chose this simple normalization procedure because it provides a natural representation of the count data as a relative abundance measure; however, other normalization approaches can be used to ensure observed counts are comparable across samples, and we are currently evaluating several such approaches. For each feature i, we compare its abundance across the two treatment populations by computing a two-sample t statistic. Specifically, we calculate the mean proportion \bar{p}_i^t and variance (s_i^t)^2 of each treatment t from which n_t subjects (columns in the matrix) were sampled:

\bar{p}_i^t = (1/n_t) \sum_j f_{ij} ,   (s_i^t)^2 = (1/(n_t - 1)) \sum_j (f_{ij} - \bar{p}_i^t)^2

We then compute the two-sample t statistic:

t_i = (\bar{p}_i^1 - \bar{p}_i^2) / \sqrt{ (s_i^1)^2/n_1 + (s_i^2)^2/n_2 }

Features whose t statistic exceeds a specified threshold can be inferred to be differentially abundant across the two treatments (two-sided t test). The threshold for the t statistic is chosen so as to minimize the number of
false positives (features incorrectly determined to be differentially abundant). Specifically, we try to control the p-value, the likelihood of observing a given t statistic by chance. Traditional analyses compute the p-value using the t distribution with an appropriate number of degrees of freedom. However, an implicit assumption of this procedure is that the underlying distribution is normal. We do not make this assumption, but rather estimate the null distribution of t_i non-parametrically using a permutation method as described in Storey and Tibshirani 21. This procedure, also known as the nonparametric t test, has been shown to provide accurate estimates of significance when the underlying distributions are non-normal 22, 23. Specifically, we randomly permute the treatment labels of the columns of the abundance matrix and recalculate the t statistics. Note that the permutation maintains that there are n_1 replicates for treatment 1 and n_2 replicates for treatment 2. Repeating this procedure for B trials, we obtain B sets of t statistics: t_1^{0b}, …, t_M^{0b}, b = 1, …, B, where M is the number of rows in the matrix. For each row (feature), the p-value associated with the observed t statistic is calculated as the fraction of permuted tests with a t statistic greater than or equal to the observed t_i:

p_i = #\{ b : |t_i^{0b}| \ge |t_i| \} / B

This approach is inadequate for small sample sizes, in which there are a limited number of possible permutations of all columns. As a heuristic, if fewer than 8 subjects are used in either treatment, we pool all permuted t statistics together into one null distribution and estimate p-values as:

p_i = #\{ (j, b) : |t_j^{0b}| \ge |t_i| \} / (B \cdot M)

Note that the choice of 8 for the cutoff is simply a heuristic based on experiments during the implementation of our method. Our approach is specifically targeted at datasets comprising multiple subjects; for small datasets, approaches such as that proposed by Rodriguez-Brito et al.
16 might be more appropriate ., Unless explicitly stated , all experiments described below used 1000 permutations ., In general , the number of permutations should be chosen as a function of the significance threshold used in the experiment ., Specifically , a permutation test with B permutations can only estimate p-values as low as 1/B ( in our case 10−3 ) ., In datasets containing many features , larger numbers of permutations are necessary to account for multiple hypothesis testing issues ( further corrections for this case are discussed below ) ., Precision of the p-value calculations is obviously improved by increasing the number of permutations used to approximate the null distribution , at a cost , however , of increased computational time ., For certain distributions , small p-values can be efficiently estimated using a technique called importance sampling ., Specifically , the permutation test is targeted to the tail of the distribution being estimated , leading to a reduction in the number of permutations necessary of up to 95% 24 , 25 ., We intend to implement such an approach in future versions of our software ., For complex environments ( many features/taxa/subsystems ) , the direct application of the t statistic as described can lead to large numbers of false positives ., For example , choosing a p-value threshold of 0 . 
05 would result in 50 false positives in a dataset comprising 1000 organisms. An intuitive correction involves decreasing the p-value cutoff in proportion to the number of tests performed (a Bonferroni correction), thereby reducing the number of false positives. This approach, however, can be too conservative when a large number of tests are performed 21. An alternative approach aims to control the false discovery rate (FDR), which is defined as the proportion of false positives within the set of predictions 26, in contrast to the false positive rate, defined as the proportion of false positives within the entire set of tests. In this context, the significance of a test is measured by a q-value, an individual measure of the FDR for each test. We compute the q-values using the following algorithm, based on Storey and Tibshirani 21. This method assumes that the p-values of truly null tests are uniformly distributed, an assumption that holds for the methods used in Metastats. Given an ordered list of p-values, p_(1) \le p_(2) \le … \le p_(m) (where m is the total number of features), and a range of values \lambda = 0, 0.01, 0.02, …, 0.90, we compute

\hat{\pi}_0(\lambda) = #\{ p_(j) > \lambda \} / ( m (1 - \lambda) )

Next, we fit \hat{\pi}_0(\lambda) with a cubic spline with 3 degrees of freedom, which we denote \hat{f}, and let \hat{\pi}_0 = \hat{f}(0.90). Finally, we estimate the q-value corresponding to each ordered p-value. First, \hat{q}(p_(m)) = \hat{\pi}_0 \cdot p_(m). Then for i = m-1, m-2, …, 1,

\hat{q}(p_(i)) = min( \hat{\pi}_0 \cdot m \cdot p_(i) / i , \hat{q}(p_(i+1)) )

Thus, the hypothesis test with p-value p_(i) has a corresponding q-value of \hat{q}(p_(i)). Note that this method yields conservative estimates of the true q-values, i.e. \hat{q}(p_(i)) \ge q(p_(i)). Our software provides users with the option to use either p-value or q-value thresholds, irrespective of the complexity of the data. For low-frequency features, e.g.
low-abundance taxa, the nonparametric t test described above is not accurate 27. We performed several simulations (data not shown) to determine the limitations of the nonparametric t test for sparsely sampled features. Correspondingly, our software only applies the test if the total number of observations of a feature in either population is greater than the total number of subjects in the population (i.e. the average across subjects of the number of observations for a given feature is greater than one). We compare the differential abundance of sparsely sampled (rare) features using Fisher's exact test. Fisher's exact test models the sampling process according to a hypergeometric distribution (sampling without replacement). The frequencies of sparse features within the abundance matrix are pooled to create a 2×2 contingency table (Figure 2), which acts as input for a two-tailed test. Using the notation from Figure 2 (with the four cell counts written here as f_{11}, f_{12}, f_{21}, f_{22} and total count n), the null hypergeometric probability of observing a 2×2 contingency table is:

P = ( (f_{11}+f_{12})! (f_{21}+f_{22})! (f_{11}+f_{21})! (f_{12}+f_{22})! ) / ( n! f_{11}! f_{12}! f_{21}! f_{22}! )

By calculating this probability for a given table, and for all tables more extreme than that observed, one can calculate the exact probability of obtaining the original table by chance, assuming that the null hypothesis (i.e.
no differential abundance ) is true 27 ., Note that an alternative approach to handling sparse features is proposed in microarray literature ., The Significance Analysis of Microarrays ( SAM ) method 28 addresses low levels of expression using a modified t statistic ., We chose to use Fishers exact test due to the discrete nature of our data , and because prior studies performed in the context of digital gene expression indicate Fishers test to be effective for detection of differential abundance 29 ., The input to our method , the Feature Abundance Matrix , can be easily constructed from both 16S rRNA and random shotgun data using available software packages ., Specifically for 16S taxonomic analysis , tools such as the RDP Bayesian classifier 30 and Greengenes SimRank 31 output easily-parseable information regarding the abundance of each taxonomic unit present in a sample ., As a complementary , unsupervised approach , 16S sequences can be clustered with DOTUR 9 into operational taxonomic units ( OTUs ) ., Abundance data can be easily extracted from the “* . list” file detailing which sequences are members of the same OTU ., Shotgun data can be functionally or taxonomically classified using MEGAN 13 , CARMA 32 , or MG-RAST 33 ., MEGAN and CARMA are both capable of outputting lists of sequences assigned to a taxonomy or functional group ., MG-RAST provides similar information for metabolic subsystems that can be downloaded as a tab-delimited file ., All data-types described above can be easily converted into a Feature Abundance Matrix suitable as input to our method ., In the future we also plan to provide converters for data generated by commonly-used analysis tools ., Human gut 16S rRNA sequences were prepared as described in Eckburg et al . and Ley et al . 
( 2006 ) and are available in GenBank , accession numbers: DQ793220-DQ802819 , DQ803048 , DQ803139-DQ810181 , DQ823640-DQ825343 , AY974810-AY986384 ., In our experiments we assigned all 16S sequences to taxa using a naïve Bayesian classifier currently employed by the Ribosomal Database Project II ( RDP ) 30 ., COG profiles of 13 human gut microbiomes were obtained from the supplementary material of Kurokawa et al . 34 ., We acquired metabolic functional profiles of 85 metagenomes from the online supplementary materials of Dinsdale et al . ( 2008 ) ( http://www . theseed . org/DinsdaleSupplementalMaterial/ ) ., As outlined in the introduction , statistical packages developed for the analysis of SAGE data are also applicable to metagenomic datasets ., In order to validate our method , we first designed simulations and compared the results of Metastats to Students t-test ( with pooled variances ) and two methods used for SAGE data: a log-linear model ( Log-t ) by Lu et al . 19 , and a negative binomial ( NB ) model developed by Robinson and Smyth 20 ., We designed a metagenomic simulation study in which ten subjects are drawn from two groups - the sampling depth of each subject was determined by random sampling from a uniform distribution between 200 and 1000 ( these depths are reasonable for metagenomic studies ) ., Given a population mean proportion p and a dispersion value φ , we sample sequences from a beta-binomial distribution Β ( α , β ) , where α\u200a=\u200ap ( 1/φ−1 ) and β\u200a= ( 1−p ) ( 1/φ−1 ) ., Note that data from this sampling procedure fits the assumptions for Lu et al . as well as Robinson and Smyth and therefore we expect them to do well under these conditions ., Lu et al . designed a similar study for SAGE data , however , for each simulation , a fixed dispersion was used for both populations and the dispersion estimates were remarkably small ( φ\u200a=\u200a0 , 8e-06 , 2e-05 , 4 . 
3e-05 ) ., Though these values may be reasonable for SAGE data , we found that they do not accurately model metagenomic data ., Figure 3 displays estimated dispersions within each population for all features of the metagenomic datasets examined below ., Dispersion estimates range from 1e-07 to 0 . 17 , and rarely do the two populations share a common dispersion ., Thus we designed our simulation so that φ is chosen for each population randomly from a uniform distribution between 1e-08 and 0 . 05 , allowing for potential significant differences between population distributions ., For each set of parameters , we simulated 1000 feature counts , 500 of which are generated under p1\u200a=\u200ap2 , the remainder are differentially abundant where a*p1\u200a=\u200ap2 , and compared the performance of each method using receiver-operating-characteristic ( ROC ) curves ., Figure 4 displays the ROC results for a range of values for p and a ., For each set of parameters , Metastats was run using 5000 permutations to compute p-values ., Metastats performs as well as other methods , and in some cases is preferable ., We also found that in most cases our method was more sensitive than the negative binomial model , which performed poorly for high abundance features ., Our next simulation sought to examine the accuracy of each method under extreme sparse sampling ., As shown in the datasets below , it is often the case that a feature may not have any observations in one population , and so it is essential to employ a statistical method that can address this frequent characteristic of metagenomic data ., Under the same assumptions as the simulation above , we tested a\u200a=\u200a0 and 0 . 
01 , thereby significantly reducing observations of a feature in one of the populations ., The ROC curves presented in Figure 5 reveal that Metastats outperforms other statistical methods in the face of extreme sparseness ., Holding the false positive rate ( x-axis ) constant , Metastats shows increased sensitivity over all other methods ., The poor performance of Log-t is noteworthy given it is designed for SAGE data that is also potentially sparse ., Further investigation revealed that the Log-t method results in a highly inflated dispersion value if there are no observations in one population , thereby reducing the estimated significance of the test ., Finally , we selected a subset of the Dinsdale et al . 6 metagenomic subsystem data ( described below ) , and randomly assigned each subject to one of two populations ( 20 subjects per population ) ., All subjects were actually from the same population ( microbial metagenomes ) , thus the null hypothesis is true for each feature tested ( no feature is differentially abundant ) ., We ran each methodology on this data , recording computed p-values for each feature ., Repeating this procedure 200 times , we simulated tests of 5200 null features ., Table 1 displays the number of false positives incurred by each methodology given different p-value thresholds ., The results indicate that the negative binomial model results in an exceptionally high number of false positives relative to the other methodologies ., Students t-test and Metastats perform equally well in estimating the significance of these null features , while Log-t performs slightly better ., These studies show that Metastats consistently performs as well as all other applicable methodologies for deeply-sampled features , and outperforms these methodologies on sparse data ., Below we further evaluate the performance of Metastats on several real metagenomic datasets ., In a recent study , Ley et al . 
35 identified gut microbes associated with obesity in humans and concluded that obesity has a microbial element , specifically that Firmicutes and Bacteroidetes are bacterial divisions differentially abundant between lean and obese humans ., Obese subjects had a significantly higher relative abundance of Firmicutes and a lower relative abundance of Bacteroidetes than the lean subjects ., Furthermore , obese subjects were placed on a calorie-restricted diet for one year , after which the subjects' gut microbiota more closely resembled that of the lean individuals ., We obtained the 20 , 609 16S rRNA genes sequenced in Ley et al . and assigned them to taxa at different levels of resolution ( note that 2 , 261 of the 16S sequences came from a previous study 36 ) ., We initially sought to re-establish the primary result from this paper using our methodology ., Table 2 illustrates that our method agreed with the results of the original study: Firmicutes are significantly more abundant in obese subjects ( P = 0 . 003 ) and Bacteroidetes are significantly more abundant in the lean population ( P<0 . 001 ) ., Furthermore , our method also detected Actinobacteria to be differentially abundant , a result not reported by the original study ., Approximately 5% of the sample was composed of Actinobacteria in obese subjects and was significantly less frequent in lean subjects ( P = 0 . 004 ) ., Collinsella and Eggerthella were the most prevalent Actinobacterial genera observed , both of which were overabundant in obese subjects ., These organisms are known to ferment sugars into various fatty acids 37 , further strengthening a possible connection to obesity ., Note that the original study used Student's t-test , leading to a p-value for the observed difference within Actinobacteria of 0 . 037 , 9 times larger than our calculation ., This highlights the sensitivity of our method and explains why this difference was not originally detected ., To explore whether we could refine the broad conclusions of the initial study , we re-analyzed the data at more detailed taxonomic levels ., We identified three classes of organisms that were differentially abundant: Clostridia ( P = 0 . 005 ) , Bacteroidetes ( P<0 . 001 ) , and Actinobacteria ( P = 0 . 003 ) ., These three were the dominant members of the corresponding phyla ( Firmicutes , Bacteroides , Actinobacteria , respectively ) and followed the same distribution as observed at a coarser level ., Metastats also detected nine differentially abundant genera accounting for more than 25% of the 16S sequences sampled in both populations ( P≤0 . 01 ) ., Syntrophococcus , Ruminococcus , and Collinsella were all enriched in obese subjects , while Bacteroides on average were eight times more abundant in lean subjects ., For taxa with several observations in each subject , we found good concordance between our results ( p-value estimates ) and those obtained with most of the other methods ( Table 2 ) ., Surprisingly , we found that the negative binomial model of Robinson and Smyth failed to detect several strongly differentially abundant features in these datasets ( e . g . the hypothesis test for Firmicutes results in a p-value of 0 . 87 ) ., This may be due in part to difficulties in estimating the parameters of their model for our datasets and further strengthens the case for the design of methods specifically tuned to the characteristics of metagenomic data ., For cases where a particular taxon had no observations in one population ( e . g . 
Terasakiella ) , the methods proposed for SAGE data seem to perform poorly ., Targeted sequencing of the 16S rRNA can only provide an overview of the diversity within a microbial community but cannot provide any information about the functional roles of members of this community ., Random shotgun sequencing of environments can provide a glimpse at the functional complexity encoded in the genes of organisms within the environment ., One method for defining the functional capacity of an environment is to map shotgun sequences to homologous sequences with known function ., This strategy was used by Kurokawa et al . 34 to identify clusters of orthologous groups ( COGs ) in the gut microbiomes of 13 individuals , including four unweaned infants ., We examined the COGs determined by this study across all subjects and used Metastats to discover differentially abundant COGs between infants and mature ( >1 year old ) gut microbiomes ., This is the first direct comparison of these two populations as the original study only compared each population to a reference database to find enriched gene sets ., Due to the high number of features ( 3868 COGs ) tested for this dataset and the limited number of infant subjects available , our method used the pooling option to compute p-values ( we chose 100 permutations ) , and subsequently computed q-values for each feature ., Using a threshold of Q≤0 . 05 ( controlling the false discovery rate to 5% ) , we detected 192 COGs that were differentially abundant between these two populations ( see Table 3 for a listing of the most abundant COGs in both mature and infant microbiomes . 
Full results are presented as supplementary material in Table S1 ) ., The most abundant enriched COGs in mature subjects included signal transduction histidine kinase ( COG0642 ) , outer membrane receptor proteins , such as Fe transport ( COG1629 ) , and Beta-galactosidase/beta-glucuronidase ( COG3250 ) ., These COGs were also quite abundant in infants , but depleted relative to mature subjects ., Infants maintained enriched COGs related to sugar transport systems ( COG1129 ) and transcriptional regulation ( COG1475 ) ., This over-abundance of sugar transport functions was also found in the original study , strengthening the hypothesis that the unweaned infant gut microbiome is specifically designed for the digestion of simple sugars found in breast milk ., Similarly , the depletion of Fe transport proteins in infants may be associated with the low concentration of iron in breast milk relative to cow's milk 38 ., Despite this low concentration , infant absorption of iron from breast milk is remarkably high , and becomes poorer when infants are weaned , indicating an alternative mechanism for uptake of this mineral ., The potential for a different mechanism is supported by the detection of a Ferredoxin-like protein ( COG2440 ) that was 11 times more abundant in infants than in mature subjects , while Ferredoxin ( COG1145 ) was significantly enriched in mature subjects ., A recent study by Dinsdale et al . profiled 87 different metagenomic shotgun samples ( ∼15 million sequences ) using the SEED platform ( http://www.theseed.org ) 6 to see if biogeochemical conditions correlate with metagenome characteristics ., We obtained functional profiles from 45 microbial and 40 viral metagenomes analyzed in this study ., Within the 26 subsystems ( abstract functional roles ) analyzed in the Dinsdale et al . study , we found 13 to be significantly different ( P≤0 . 05 ) between the microbial and viral samples ( Table 4 ) ., Subsystems for RNA and DNA metabolism were significantly more abundant in viral metagenomes , while nitrogen metabolism , membrane transport , and carbohydrates were all enriched in microbial communities ., The high levels of RNA and DNA metabolism in viral metagenomes illustrate their need for a self-sufficient source of nucleotides ., Though the differences described by the original study did not include estimates of significance , our results largely agreed with the authors' qualitative conclusions ., However , due to the continuously updated annotations in the SEED database since the initial publication , we found several differences between our results and those originally reported ., In particular we found virulence subsystems to be less abundant overall than previously reported , and could not find any significant differences in their abundance between the microbial and viral metagenomes ., We have presented a statistical method for handling frequency data to detect differentially abundant features between two populations ., This method can be applied to the analysis of any count data generated through molecular methods , including random shotgun sequencing of environmental samples , targeted sequencing of specific genes in a metagenomic sample , digital gene expression surveys ( e . g . SAGE 29 ) , or even whole-genome shotgun data ( e . g . 
comparing the depth of sequencing coverage across assembled genes ) ., Comparisons on both simulated and real datasets indicate that the performance of our software is comparable to other statistical approaches when applied to well-sampled datasets , and outperforms these methods on sparse data ., Our method can also be generalized to experiments with more than two populations by substituting the t-test with a one-way ANOVA test ., Furthermore , if only a single sample from each treatment is available , a chi-squared test can be used instead of the t-test 27 ., In the coming years metagenomic studies will increasingly be applied in a clinical setting , requiring new algorithms and software tools to be developed that can exploit data from hundreds to thousands of patients ., The methods described above represent an initial step in this direction by providing a robust and rigorous statistical method for identifying organisms and other features whose differential abundance correlates with disease ., These methods , associated source code , and a web interface to our tools are freely available at http://metastats.cbcb.umd.edu . | Introduction, Materials and Methods, Results, Discussion | Numerous studies are currently underway to characterize the microbial communities inhabiting our world ., These studies aim to dramatically expand our understanding of the microbial biosphere and , more importantly , hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora ., An important prerequisite for such discoveries is computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them ., We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data ( e . g . 
as obtained through sequencing ) to detect differentially abundant features ., Our method , Metastats , employs the false discovery rate to improve specificity in high-complexity environments , and separately handles sparsely-sampled features using Fisher's exact test ., Under a variety of simulations , we show that Metastats performs well compared to previously used methods , and significantly outperforms other methods for features with sparse counts ., We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes , COG functional profiles of infant and mature gut microbiomes , and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes ., The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study ., For the COG and subsystem datasets , we provide the first statistically rigorous assessment of the differences between these populations ., The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects ., Our methods are robust across datasets of varied complexity and sampling level ., While designed for metagenomic applications , our software can also be applied to digital gene expression studies ( e . g . SAGE ) ., A web server implementation of our methods and freely available source code can be found at http://metastats.cbcb.umd.edu/ . | The emerging field of metagenomics aims to understand the structure and function of microbial communities solely through DNA analysis ., Current metagenomics studies comparing communities resemble large-scale clinical trials with multiple subjects from two general populations ( e . g . 
sick and healthy ) ., To improve analyses of this type of experimental data , we developed a statistical methodology for detecting differentially abundant features between microbial communities , that is , features that are enriched or depleted in one population versus another ., We show our methods are applicable to various metagenomic data ranging from taxonomic information to functional annotations ., We also provide an assessment of taxonomic differences in gut microbiota between lean and obese humans , as well as differences between the functional capacities of mature and infant gut microbiomes , and those of microbial and viral metagenomes ., Our methods are the first to statistically address differential abundance in comparative metagenomics studies with multiple subjects , and we hope will give researchers a more complete picture of how exactly two environments differ . | computational biology/metagenomics | null |
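The comparison pipeline described in this record (per-feature significance tests on relative abundances, permutation-based p-values, and false discovery rate control via q-values) can be sketched as follows. This is a minimal illustration rather than the Metastats implementation: all function names are ours, a Welch t statistic with label-shuffling p-values stands in for the paper's test and its Fisher's exact handling of sparse features, and Benjamini-Hochberg q-values are a simple stand-in for the paper's q-value computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def welch_t(x, y):
    """Unequal-variance t statistic for the difference in group means."""
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def perm_pvalue(x, y, n_perm=999):
    """Two-sided permutation p-value for one feature (label shuffling)."""
    obs = abs(welch_t(x, y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        s = rng.permutation(pooled)
        if abs(welch_t(s[:len(x)], s[len(x):])) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one so p is never exactly 0

def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values: suffix minima of sorted p * m / rank."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out

def compare_populations(counts1, counts2, fdr=0.05):
    """counts1, counts2: features x subjects count matrices.
    Returns per-feature q-values and a differential-abundance mask."""
    rel1 = counts1 / counts1.sum(axis=0)   # per-subject relative abundances
    rel2 = counts2 / counts2.sum(axis=0)
    pvals = [perm_pvalue(rel1[i], rel2[i]) for i in range(rel1.shape[0])]
    q = bh_qvalues(pvals)
    return q, q <= fdr

# toy data: 3 features x 8 subjects per population; feature 0 is spiked
# threefold in the second population
c1 = rng.poisson(100, size=(3, 8)).astype(float)
c2 = rng.poisson([[300]] + [[100]] * 2, size=(3, 8)).astype(float)
q, hits = compare_populations(c1, c2)
```

Note that because abundances are relative, spiking one feature also shifts the apparent abundance of the others; this compositional effect is worth keeping in mind when interpreting any test of this kind.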
2,388 | journal.pcbi.1003549 | 2014 | Within-Host Bacterial Diversity Hinders Accurate Reconstruction of Transmission Networks from Genomic Distance Data | A bacterial population of size N , which is initially genetically homogeneous , diversifies over time due to the random introduction of mutations at rate μ per genome per generation ., While there are many measures of diversity , we consider the expected pairwise genetic distance ( eg . number of single nucleotide polymorphisms ( SNPs ) ) observed when sampling two random isolates from the population , E ( d ) = Σij fi fj d ( i , j ) , where d ( i , j ) is the genetic distance between variants i and j , whose respective frequencies are fi and fj ., Under neutral assumptions , the expected pairwise SNP distance at equilibrium is 2Neμ 9 , where Ne is the effective population size , and μ is the mutation rate ., However , equilibrium dynamics cannot typically be assumed for within-host carriage of a bacterial pathogen ., An initially clonal population takes a considerable amount of time to reach equilibrium levels of diversity ( Figure S1 ) ., Evidence has recently emerged that in some pathogens within-host genetic diversity is common ., In principle , an individual may harbor a diverse pathogen population due to one or more of the following: infection with a diverse inoculum , diversification of the population due to mutation or other genetic change during infection , and multiple infections from different sources ., Studies of Staphylococcus aureus have revealed carriage of multiple sequence types , likely caused by independent transmission events 10 , 11 , as well as diversification over time in long-term carriers 12 , 13 , and the coexistence of several genotypes , differing by several SNPs 14 , 15 ., Streptococcus pneumoniae populations in an individual may harbor genetically divergent lineages , as has long been appreciated 16 ., Within-host diversity of other bacterial pathogens has been studied less frequently , although there is some evidence for heterogeneous carriage of 
Helicobacter pylori 17 , Pseudomonas aeruginosa 18 , Burkholderia dolosa 19 and Klebsiella pneumoniae 20 ., A transmission event involves passing a sample ( inoculum ) of bacteria from a carrier to a susceptible individual ., This is an example of a population bottleneck , as a small fraction of the original population is allowed to independently grow and mutate in a new environment ., Assuming the inoculum is a random sample of size greater than 1 , it can be shown that the expected sample diversity is equal to that of the original population regardless of the size of the bottleneck ( see Supporting Information ) ., However , the variance of the expected diversity is inversely proportional to the size of the bottleneck ( Figure S2 ) , demonstrating that small bottlenecks may generate considerably different levels of diversity in the recipient due to stochastic effects ., Estimating the bottleneck size associated with transmission is challenging , not least because estimates of pathogen diversity pre- and post-bottleneck will be based on a finite sample , and will themselves be uncertain ., A wide bottleneck has previously been implicated in the transmission of equine 21 and avian 22 influenza , while inoculum size for bacterial pathogens may vary dramatically 23 ., There have been several studies aiming to reconstruct transmission links using genetic data ., Many have relied on a phylogenetic reconstruction of available isolates , under the assumption that the transmission network will be topologically similar to the estimated phylogeny 5 , 8 , 24 , 25 , 26 ., However , the phylogenetic tree will not generally correspond to the transmission network based on samples collected during an outbreak 27 , 28 , 29 ., Furthermore , within-host diversity and heterogeneous transmission – the transmission of a genetically heterogeneous inoculum to a new host – will typically complicate such an approach , as isolates from one individual may potentially be interspersed within 
the same clade as those from other carriers ., Under certain assumptions , the molecular clock can be used to dictate the plausibility of a transmission event ., As the estimated time to the most recent common ancestor ( TMRCA ) between isolates sampled from two carriers gets further from the estimated time of infection , the probability of direct transmission falls , and a cutoff can be specified , beyond which transmission is deemed impossible ( eg . 30 ) ., This approach requires homogeneous transmission and a robust estimate of the mutation rate ., Other network reconstruction approaches have used weighted graph optimization 4 , as well as Markov chain Monte Carlo ( MCMC ) algorithms to sample over all possible transmission links 6 , 7 ., Several variables may affect the outcome of such analyses ., Firstly , the method and frequency of sampling is of great importance ., Taking one sample per case ignores within-host diversity and could lead to poor estimates of the genetic distance between cases ., Asymptomatic infections may not be detected , or may only be detected long after the time of infection – this can lead to greater uncertainty in the estimated network ., Secondly , the bottleneck size plays a crucial role in the amount of diversity established in the newly infected host ., Thirdly , the infectious period affects the degree of diversity that may accumulate within-host , and therefore gets passed on to susceptible individuals ., Using phylogenetic reconstruction as a means to estimate transmission is often inappropriate 29 , and even when combined with additional analytical methods designed to infer transmission , produces highly uncertain networks 31 ., Furthermore , such methodology typically cannot account for diverse founding populations ., We instead used a genetic distance-based approach to determine how informative genomic data can be when used to estimate routes of transmission ., Many methods aiming to reconstruct either phylogenetic trees or 
transmission networks are based on a function of a pairwise genetic distance matrix ., These include graph optimization 4 , the MCMC sampling approaches 6 , 7 , and various tree reconstruction methods ( eg ., neighbor joining , unweighted pair group method with arithmetic mean ( UPGMA ) , minimum spanning tree ) ., As such , we used a generalized weighting function based on genetic distance to reconstruct networks , in order to provide a framework flexible enough to be similar ( or , in some cases , equivalent ) to these methods ., We investigated how accurately transmission networks could be recovered , and how accuracy was affected by factors such as bottleneck size , transmission rate and mutation rate ., We simulated disease outbreaks under a variety of scenarios , reflecting various sampling strategies ., Our approach could accommodate within-host diversity and variable bottleneck sizes , in order to investigate their effect on network reconstruction ., Full details are given in Materials and Methods ., We first simulated diversification within a single host , using S . aureus as an example , and compared our findings with estimates of diversity based on published samples ., The expected genetic pairwise distance for S . aureus carriage has been estimated at 4 . 12 SNPs 15 ., S . aureus has a mutation rate of approximately 5×10−4 per genome per bacterial generation ( given a rate of 3×10−6 per nucleotide per year 1 , 12 and a generation time of 30 minutes 32 , 33 , 34 ) ., Nasal carriage of S . aureus has been estimated to have an effective population size in the range 50–4000 12 , 15 ., Figure 1 shows the accumulation of diversity over time under these parameters ., Our simulations indicate that if we assume a host acquires a homogeneous transmission , the expected colonization period required for previously observed levels of diversity to emerge under neutral evolution is typically long ( ∼1 year ) ., While S . 
aureus may be carried for a number of years 35 , observing high diversity from recently infected individuals suggests that alternative explanations may be more realistic ., First , repeated exposure to infection may result in the introduction of new strains to a host , potentially resulting in rapid establishment of diversity ., Second , the transmitted inoculum may not be a single genotype , but rather a sample of genotypes from the source ., This was investigated in detail in the next simulation experiments ., We assessed the effect of bottleneck size in a disease outbreak by firstly considering a simple transmission chain , where each infected individual transmits to exactly one susceptible individual ., We considered an initial bacterial population of 10 genotypes , which had an expected pairwise distance of 5 SNPs , which could represent a long-term carrier , or the recipient of a diverse infection ., We then simulated a transmission event by selecting an inoculum of a fixed size ., We allowed the new founding population to reach equilibrium population size and imposed another bottleneck after 1000 generations ., We repeated this process for 25 transmission events ., Figure S3 shows six realizations of our simulations under different inoculum sizes ., Clearly , while diversity rapidly drops away for small bottlenecks , larger sizes ( >10 cells ) allow diversity to persist for several bottlenecks ., With sufficient mutation between transmission events , diversity can be maintained ( Figure S4 ) ., If bacterial specimens taken from disease carriers in an outbreak are sequenced , we can attempt to estimate the routes of transmission based on the genetic similarity of the isolates ., There are a number of additional factors that may inform our estimate of the transmission route , such as location , contact patterns and exposure time , but we examined the information to be gained from sequence data alone ., More than one isolate may be taken from a carrier , sampled either 
simultaneously or at various time points during infection , necessitating a choice of how to describe the genetic distance between populations of isolates from two cases ., We considered both the mean pairwise distance and the centroid distance to summarize the genetic distance between groups of isolates , but found that both resulted in very similar network reconstructions ., Network edges were given a weighting which we assume is inversely proportional to the genetic distance ( see Materials and Methods for detailed specification of weighting functions ) ., The single transmission chain provides an idealized scenario to reconstruct transmission links ., Furthermore , we assumed that the order of infection is known ., As such , the potential source for each individual can only be one of the preceding generations , which , intuitively at least , should become more genetically distant as one goes farther back in time ., Transmission events occur every 1000 bacterial generations , and one cell is selected randomly from each individual's bacterial carriage at regular intervals ( possibly more frequent than the transmission process ) for sequencing ., Figure 2 shows reconstructed networks for a range of scenarios ., We repeated this for several simulations under each scenario , and plotted receiver-operating characteristic ( ROC ) curves to assess the accuracy of the reconstructed network ( Figure 3 ) ., We observed that there was an optimal bottleneck size in this setting which allows the network to be resolved with a relatively high level of accuracy; for the scenario considered here , networks reconstructed using a bottleneck size of 10 clearly outperform those constructed using both larger and smaller inoculum sizes ., In this setting , larger bottlenecks allow a very similar bacterial population to be established within each new infective , while smaller bottlenecks rapidly result in a single dominant strain being carried and transmitted by the infected population 
., The optimal bottleneck size depends on the outbreak size , as well as the expected change in pathogen diversity within-host between time of infection and onward transmission ., We found that infrequent sampling ( eg . one sample per infected individual ) can lead to a reconstruction that is no better than selecting sources at random , and sometimes worse ., We next considered a more general susceptible-infectious-removed ( SIR ) epidemic , in order to determine how network accuracy is affected by transmission and mutation rate , and sampling strategy ., We again estimated the transmission network based upon observed sequence data alone under the assumption that the order of infection was known ., Both the centroid and pairwise distance metrics were used , but we found that the performance of both was very similar ., For this reason , all results shown here have been derived using the pairwise distance measure ., We simulated epidemics under a variety of scenarios and found that generally for larger outbreaks , such that several infective individuals were present at any one time , the power to determine the routes of transmission was low ., We supposed that we did not know the infection or removal times , only observing the correct order of infection ., Table S1 gives area under the ROC curve values for estimated networks based on a selection of simulated datasets ., In many cases , particularly for higher rates of infection and removal , we found that the ROC curve indicated no improvement on guessing transmission sources at random ., However , we saw that distinct groups of individuals , representing large branches of the transmission network , may be distinguished from one another , indicating that gross features of the transmission network may be determined ., Figure 4A shows a simulated epidemic in which nodes are colored according to their observed mean distance from the origin ., Clearly later infections can be discriminated from cases further in the past 
, but a great deal of uncertainty exists among contemporary cases ., Network reconstruction was more successful in scenarios where higher diversity could be established between host and recipient ., As such , network reconstruction improved for long carriage times , low transmission rates , and high mutation rates ( Table S1 ) ., Network entropy may be used to evaluate the uncertainty arising under the network reconstruction approach ( see Materials and Methods ) ., As the outbreak progresses , the entropy of most nodes increases and is only modestly lower than that obtained from assigning an even probability to all preceding cases ( Figure 4B ) ., However , certain nodes are markedly less uncertain than the surrounding ones , indicating that for them , incorporating genetic distance considerably reduces the uncertainty of who infected them ., In this outbreak , for example , the entropy distribution is bimodal , with 99 of the 112 nodes having entropy within one bit of random guessing ., In Figure 4 , the infector of each node was identified with probability proportional to the inverse of the genetic distance between the populations , guaranteeing that some positive probability is assigned to the true infector ., Entropy may be reduced ( possibly at the expense of lowering the estimated probability of infection by the true infector ) by increasing the relative probability of infection from nodes that are genetically close ., Importance of similar nodes can be increased up to the point at which the closest node is selected with certainty , and the maximum directed spanning tree is selected ( equivalent to the SeqTrack method of network reconstruction 4 ) , resulting in zero entropy ., Figure 5 shows the same network estimated with a varying importance factor ., While some correct edges are estimated with a higher probability , several false connections are also estimated with little uncertainty ., Precision is often increased at the expense of accuracy , and 
indeed increasing the importance factor for this network reduces the area under the ROC curve ., Table S2 gives values for the area under the ROC curve for estimated networks under a particular simulated dataset , showing how accuracy declines as closer nodes are weighted more heavily ., The true parent of a node has no guarantee of being the closest node , but is likely to belong to a group of genetically similar potential sources ., Sampling strategies play an important role in the accuracy of the estimated network – while it is unsurprising that more frequent sampling results in reduced uncertainty , it is notable that even with perfect sampling , the uncertainty typically remains much too large to identify individual transmission routes ., Figure 6 shows the same simulated outbreak , colored according to two different sampling strategies; firstly sequencing one isolate from each individual every 1000 bacterial generations , and secondly sequencing isolates ten times at each time point ., In each plot , an arbitrarily chosen reference node is marked , to which each other node is compared ., The second plot shows that the ‘neighborhood’ , to which the reference node and its true source belong , may be discerned , genetically distinct from the rest of the outbreak ., Increasing sampling frequency beyond this level does not considerably improve discrimination ., Selecting a single isolate per individual typically leads to a poor estimation of the transmission network ., We found that the initial genotype often persisted throughout an epidemic , and remained the dominant genotype for a large number of infected individuals ., Selecting a single isolate from each infective would result in a large number of individuals with an apparently genetically identical infection , providing little information about transmission ., Multiple samples can reveal minor genotypes , which may be more informative ., We found that in most reasonable settings , the reconstructed network 
based on single isolates was uncertain and inaccurate , sometimes worse than a random network ., Our work suggests that under a range of plausible scenarios considered here , it is not possible to determine transmission routes based upon sampled bacterial genetic distance data alone ., For every infected individual in a large outbreak , there are several other individuals harboring a similar pathogen population who may be the true source of infection ., Existing distance-based methods typically assume that a single isolate is obtained from each host , in which case the distance between hosts is simply the number of SNPs separating the two isolates ., Sampling only one isolate per case can lead to poor estimates of genetic distance between individuals , and therefore inaccurate identification of transmission routes , often little better than assigning links at random ., Increasing the sampling to obtain more than one sample per host may partially alleviate this problem; in this case , the genetic distance between two hosts may be estimated as the mean distance between isolates from one host and isolates from the other ., The amount of sampling required depends on what one hopes to gain from the sequence data ., Single isolates may be sufficient to rule out infection sources for individuals , based on large observed genetic distances ., Repeated sampling may be used to identify clusters of infected individuals who host very similar bacterial populations , and therefore are likely to be close neighbors in the transmission network ., This allows us to investigate more general trends in the progression of the outbreak , eg ., spread between communities or countries , while individual events remain obscure ., A considerable degree of diversity is transmitted with even a small inoculum from the source , under the assumption that the inoculum is sampled randomly from the pathogen population infecting the source ., We believe that this highlights the importance of 
establishing the degree of within-host diversity through multiple samples before attempting to infer transmission routes ., Such sampling will also further our understanding of the transmission bottleneck for bacterial pathogens , as well as the effective population size ., Many of the parameters in our simulations are difficult to estimate for bacteria in vivo , and as such , few estimates exist ., Moreover , population structure within a host may lead to divergence between the census and effective population sizes in each host 36 ., To obtain results that would be widely applicable in spite of these uncertainties , we simulated transmission and carriage under a wide range of plausible parameter values for bacterial pathogens ., Bottleneck size is a key factor in the onward transmission of diversity and network recovery – too small and resulting infections are homogeneous , too large and recipients share the same genotype distribution as the source ., In our inference of transmission routes , we have measured the average genetic distance between individuals across the span of the infectious period ., If the removal rate is sufficiently low relative to the mutation rate , the genetic makeup of the pathogen population in an individual will vary considerably over time ., As such , while a source and recipient may be genetically similar at the time of infection , the mean distance between observed samples may be higher ., It may be possible to either restrict or weight the range of samples used in order to gauge the distribution of genotypes at a particular time; however , this comes at the expense of excluding potentially useful data ., Using the mean genetic distance is not unreasonable if the length of carriage is small compared to the time required to accumulate significant diversity ., We have considered different sampling strategies , but have supposed that a large coverage of the infected population can be achieved ., This may be reasonable for an outbreak in a 
small community, but inevitably there may still be some missing links, especially when asymptomatic carriage could go undetected. Furthermore, we assumed that the order of infections is known. We have demonstrated that the reconstructed network accuracy is typically poor, even in the best-case scenario of near-perfect observation. We did not consider the possibility of repeated infectious contact, leading to infection from multiple sources. This could serve to increase within-host diversity, further complicating the inference of transmission routes. In many settings, it is reasonable to assume that infectious individuals may come into contact with each other, and potentially transmit. In the case of vector-borne diseases, the vector (e.g. a healthcare worker in nosocomial S. aureus transmission) may transiently carry multiple strains collected from one or more carriers, and pass this diversity on to recipients. If novel SNPs are introduced via reinfection at some rate, then the equilibrium level of diversity is correspondingly increased. If the type(s) introduced upon reinfection are sufficiently dissimilar to the existing population, it may be possible to infer reinfection events. However, if the rate of infectious contact is high, most bacterial populations may contain artifacts from several disparate sources, preventing any kind of transmission analysis. The ability to reconstruct transmission networks is dependent on both data and methodological limitations. While we cannot rule out the possibility of alternative methods using genetic distance data to provide superior network reconstructions, the framework we use here is flexible enough to investigate a range of relationships between genetic distance and transmission, under the widely used assumption that individuals hosting genetically similar pathogens are more likely to have been involved in a transmission event than those infected by more distantly related
organisms ., In this study , we have made a number of assumptions ., Firstly , we have used a discrete model of bacterial growth in which cells simultaneously divide and die at generational intervals ., We have specified that a cell must divide or die at each generation , such that persisting without reproduction is not possible ., Under this model , the effective population size is equal to the actual population size - incorporating cell survival without reproduction would only serve to reduce the effective population size , and therefore , the accumulation of diversity ., Secondly , we have assumed neutral evolution; that is , there is no fitness advantage or cost associated with any mutation ., Selection is likely to decrease the amount of instantaneous diversity within a population ., The emergence of fitter mutations is likely to reduce the expected diversity , since fitter strains are more likely to tend towards fixation , eliminating weaker variants and their associated diversity ., However , the effect of selective sweeps over time could increase the observed diversity in a longitudinal sample ., Thirdly , we have assumed that an inoculum is composed of a random sample of bacteria from the entire colony ., If the inoculum is not a random sample , the degree of diversity that is transmitted upon infectious contact may be much smaller ., The suitability of this assumption may vary depending on the mode of transmission ., However , we could consider the bottleneck size used here to represent the effective population size of the inoculum , rather than the true size ., Finally , we have ignored the possibility of recombination ., Further work would be required to explore the effect of each of these aspects in detail ., The observation of rare variants in cross-sectional samples from individual hosts may offer an alternative approach to identifying the transmission network ., Each observation of a particular genotype must arise from a shared ancestor , assuming 
homoplasy is not possible ., With perfect sampling , a genotype carried by only two individuals under these conditions indicates a transmission event between the pair ., However , many isolates would need to be sequenced to detect such variation which is by definition rare ., Such sampling is typically infeasible via standard genome sequencing , although deep sequencing may reveal uncommon SNPs , suggesting transmission between carriers ., Metagenomic sampling may potentially be of great use in such an approach ., Furthermore , such sampling may provide significant practical and financial advantages over collection and sequencing several individual samples ., Future work may be conducted to investigate the performance of such an approach under a variety of scenarios , for viral as well as bacterial pathogens ., It may be possible to develop a genetic distance threshold such that any observed pair of isolates exceeding this value are deemed , to a given level of confidence , not to have arisen from directly linked cases ., Such a threshold will depend on the bottleneck size , effective population size and mutation rate ., As yet , no such limit has been justified theoretically , and appropriate data to investigate this are lacking ., This work highlights the need to better understand bacterial carriage and transmission at a cellular and molecular level ., As yet , few studies have sequenced repeated samples from infected people , so the scale of within-host diversity is still unclear ., Furthermore , key parameters such as effective population size and inoculum size are either highly uncertain or unknown for bacterial pathogens ., If feasible , we recommend multiple isolates be sequenced per individual when collecting data to assess transmission routes ., While our work casts some doubt on the use of bacterial sequence data to identify individual transmission routes , there is certainly still much scope for its use in the analysis of disease transmission dynamics ., 
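The genetic-distance threshold discussed above can be illustrated with a small sketch. The cutoff value, the toy genomes, and the genomes-as-aligned-strings representation are illustrative assumptions — as the text notes, no such threshold has yet been justified theoretically:

```python
from itertools import product

def snp_distance(a, b):
    """Hamming distance between two aligned genomes of equal length."""
    return sum(x != y for x, y in zip(a, b))

def rule_out_direct_link(isolates_a, isolates_b, cutoff=10):
    """Return True if even the closest cross-host isolate pair exceeds the
    cutoff, i.e. a direct transmission link can tentatively be excluded."""
    min_dist = min(snp_distance(a, b) for a, b in product(isolates_a, isolates_b))
    return min_dist > cutoff

# Two hosts, two sampled isolates each (toy 12-nt "genomes")
host1 = ["AAAAAAAAAAAA", "AAAAAAAAAAAT"]
host2 = ["AAAATTTTTTTT", "AAAATTTTTTTA"]
print(rule_out_direct_link(host1, host2, cutoff=5))  # True: minimum cross-host distance is 7
```

Using the minimum rather than the mean cross-host distance makes the rule conservative: a link is excluded only when no pair of sampled isolates is close, which is the direction in which single-isolate sampling is most misleading.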
Uncovering clusters of genetically similar isolates can be greatly informative for the spread of a disease between various subpopulations , such as households , schools and hospitals ., By combining genomic data with additional information , such as estimated infection and removal times , contact patterns , social groups and geographic location , it may be possible to narrow the pool of potential sources down considerably ., Genomic data and traditional ‘shoe-leather epidemiology’ methods may complement each other; each eliminating links that the other cannot rule out ., Our simulation studies were based around a discrete-time bacterial fission model ., We supposed that bacteria cells died at random with probability , where is the bacterial population in the previous generation , and is the equilibrium population ., The remaining cells divided , creating a mutant daughter cell with probability , otherwise creating a genetically identical copy of the parent cell ., Mutations introduced one nucleotide substitution at a random position in the genome , such that the genetic distance from parent to mutant was always one SNP ., Neutral evolution is assumed ., Under this model , the effective population size is equal to the size of the population; that is , 37 ., In the event of an infectious contact , an inoculum of size was separated from the original population , and allowed to grow and diversify independently ., The inoculum was assumed to be a random sample from the original population ., In the epidemic simulations , we used a standard SIR model , in which each susceptible individual is exposed to an infection rate of at time , where is the proportion of infected individuals at time ., Infected individuals are then removed ( through recovery or death ) at a rate ., As we operated in a discrete-time framework , we used Poisson approximations to generate times of infection ., For generation , a given susceptible individual avoids infection with probability ., An 
individual infected in one generation may transmit to another individual from the following generation onwards. The source of a new infection is chosen uniformly at random from the pool of current infectives. We assumed that the order of infection was known, and that all infective individuals were observed. Failure to identify routes of infection under these optimal conditions would provide little confidence that this could be achieved in a real-world setting, where such information is rarely available. The relationships between isolates may be considered either directly from the sequence data, or from a matrix of observed genetic distances. The former category encompasses methods explicitly considering the evolutionary process, such as maximum likelihood and parsimony tree construction. Neighbor joining, UPGMA, minimum spanning tree construction and SeqTrack all belong to the latter. In this study, we were primarily interested in the relationship between individuals, rather than between bacterial specimens, and as such did not adopt a phylogenetic approach. We instead weighted network edges according to the genetic distance matrix, supposing that the likelihood of direct infection having occurred was inversely related to the genetic distance. Given that the infective population is fully observed, a function may be defined to provide a weight for each potential network edge. We assume this weight is inversely related to the genetic distance between the two nodes. This distance may be specified in various ways – here, we consider the mean genetic pairwise distance and the distance between the centroids of each group. The mean genetic distance between two individuals is given by averaging the pairwise distances between their observed sequences. Alternatively, taking the proportion of samples carrying each nucleotide at each locus, a distance between the centroids of the two sets of samples can be defined from the absolute differences of these proportions summed over the genome. Unlike the pairwise
distance , the centroid distance has the desirable property that for all ; however , the converse is not true ., We calculate the relative probability that a particular transmission event occurred by considering the inverse of the chosen distance function ;then we can define our weighting function aswhere is a constant to determine the relative probability of a connection between individuals with identical genotype distributions , and is a proximity factor by which the importance of close connections may be | Introduction, Results, Discussion, Materials and Methods | The prospect of using whole genome sequence data to investigate bacterial disease outbreaks has been keenly anticipated in many quarters , and the large-scale collection and sequencing of isolates from cases is becoming increasingly feasible ., While sequence data can provide many important insights into disease spread and pathogen adaptation , it remains unclear how successfully they may be used to estimate individual routes of transmission ., Several studies have attempted to reconstruct transmission routes using genomic data; however , these have typically relied upon restrictive assumptions , such as a shared topology of the phylogenetic tree and a lack of within-host diversity ., In this study , we investigated the potential for bacterial genomic data to inform transmission network reconstruction ., We used simulation models to investigate the origins , persistence and onward transmission of genetic diversity , and examined the impact of such diversity on our estimation of the epidemiological relationship between carriers ., We used a flexible distance-based metric to provide a weighted transmission network , and used receiver-operating characteristic ( ROC ) curves and network entropy to assess the accuracy and uncertainty of the inferred structure ., Our results suggest that sequencing a single isolate from each case is inadequate in the presence of within-host diversity , and is likely to 
result in misleading interpretations of transmission dynamics – under many plausible conditions , this may be little better than selecting transmission links at random ., Sampling more frequently improves accuracy , but much uncertainty remains , even if all genotypes are observed ., While it is possible to discriminate between clusters of carriers , individual transmission routes cannot be resolved by sequence data alone ., Our study demonstrates that bacterial genomic distance data alone provide only limited information on person-to-person transmission dynamics . | With the advent of affordable large-scale genome sequencing for bacterial pathogens , there is much interest in using such data to identify who infected whom in a disease outbreak ., Many methods exist to reconstruct the phylogeny of sampled bacteria , but the resulting tree does not necessarily share the same structure as the transmission tree linking infected persons ., We explored the potential of sampled genomic data to inform the transmission tree , measuring the accuracy and precision of estimated networks based on simulated data ., We demonstrated that failing to account for within-host diversity can lead to poor network reconstructions - even with repeated sampling of each carrier , there is still much uncertainty in the estimated structure ., While it may be possible to identify clusters of potential sources , identifying individual transmission links is not possible using bacterial sequence data alone ., This work highlights potential limitations of genomic data to investigate transmission dynamics , lending support to methods unifying all available data sources . | ecological metrics, mutation, population size, ecology, effective population size, population modeling, evolutionary modeling, genetics, biology and life sciences, population genetics, infectious disease modeling, computational biology, evolutionary biology | null |
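The discrete-time fission model described in the Materials and Methods of the row above lost its symbols during extraction; the sketch below is a minimal reimplementation under stated assumptions. In particular, the death probability n/(2K) is an assumed functional form chosen so the census size equilibrates at K, and the numeric parameter values are illustrative, not the paper's:

```python
import random

rng = random.Random(1)
K = 500          # assumed equilibrium population size
MU = 0.02        # assumed per-division mutation probability
BOTTLENECK = 5   # assumed inoculum size
_next_snp = [0]  # counter giving every new mutation a unique label

def generation(pop):
    """One discrete generation. Each cell dies with probability n/(2K)
    (assumed form stabilising the census size at K); each survivor divides,
    keeping the parent genotype and producing a daughter that carries one
    new SNP with probability MU (infinite-sites: each mutation gets a fresh
    integer label, so SNP distance is the symmetric set difference)."""
    n = len(pop)
    out = []
    for cell in pop:
        if rng.random() < n / (2 * K):
            continue                      # death
        out.append(set(cell))             # parent copy
        daughter = set(cell)
        if rng.random() < MU:
            daughter.add(_next_snp[0])    # mutant daughter
            _next_snp[0] += 1
        out.append(daughter)
    return out

def transmit(pop):
    """Infectious contact: the inoculum is a random sample of the source."""
    return [set(c) for c in rng.sample(pop, BOTTLENECK)]

pop = [set() for _ in range(K)]           # clonal founder population
for _ in range(200):
    pop = generation(pop)
recipient = transmit(pop)

sample = rng.sample(pop, min(20, len(pop)))
diversity = sum(len(a ^ b) for a in sample for b in sample) / len(sample) ** 2
print(len(pop), round(diversity, 2))
```

Applying `generation` repeatedly to `recipient` would mimic the paper's independent growth of the inoculum after the bottleneck, letting one track whether transmitted minor variants persist in the new host.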
2,005 | journal.pcbi.1003495 | 2,014 | Bidirectional Control of Absence Seizures by the Basal Ganglia: A Computational Evidence | Absence epilepsy is a generalized non-convulsive seizure disorder of the brain, mainly occurring in the childhood years 1. A typical attack of absence seizures is characterized by a brief loss of consciousness that starts and terminates abruptly, while an electrophysiological hallmark, i.e. bilaterally synchronous spike and wave discharges ( SWDs ) at a slow frequency of approximately 2–4 Hz, can be observed on the electroencephalogram ( EEG ) of patients 1, 2. There is a broad consensus that the generation of SWDs during absence seizures is due to abnormal interactions between the cerebral cortex and thalamus, which together form the so-called corticothalamic system. The direct evidence in support of this view is based on simultaneous recordings of cortex and thalamus from both rodent animal models and clinical patients 3–5. Recent computational modelling studies of this prominent brain disorder also supported this viewpoint and provided deeper insights into the possible generation mechanism of SWDs in the corticothalamic system 6–13. The basal ganglia comprise a group of interconnected subcortical nuclei and, as a whole, represent one fundamental processing unit of the brain. It has been reported that the basal ganglia are highly associated with a variety of brain functions and diseases, such as cognitive 14 and emotional functions 15, motor control 16, Parkinson's disease 17, 18, and epilepsy 19, 20. Anatomically, the basal ganglia receive multiple projections from both the cerebral cortex and thalamus, and in turn send both direct and indirect output projections to the thalamus. These connections enable the activities of the basal ganglia to influence the dynamics of the corticothalamic system. Therefore, it is naturally expected that the basal ganglia may provide an active
role in mediating between seizure and non-seizure states in absence epilepsy patients. This hypothesis has been confirmed by both previous animal experiments 19, 21–23 and recent human neuroimaging data 20, 24, 25. Nevertheless, due to the complicated interactions between basal ganglia and thalamus, the underlying neural mechanisms of how the basal ganglia control absence seizure activities remain unclear. From the anatomical perspective, the substantia nigra pars reticulata ( SNr ) is one of the major output nuclei of the basal ganglia to the thalamus. Previous experimental studies using various rodent animal models have demonstrated that suitable changes in the firing of SNr neurons can modulate the occurrence of absence seizures 21–23, 26. Specifically, it has been found that pharmacological inactivation of the SNr by injecting γ-aminobutyric acid ( GABA ) agonists or glutamate antagonists suppresses absence seizures 21, 22. This antiepileptic effect was attributed to the overall inhibitory effect of the indirect pathway from the SNr to the thalamic reticular nucleus ( TRN ) relaying at the superior colliculus 21, 22. In addition to this indirect inhibitory pathway, it is known that the SNr also contains GABAergic neurons directly projecting to the TRN and the specific relay nuclei ( SRN ) of the thalamus 27, 28. Theoretically, changing the activation level of the SNr may also significantly impact the firing activities of SRN and TRN neurons 28, 29. This contribution might further interrupt the occurrence of SWDs in the corticothalamic system, thus providing an alternative mechanism for regulating typical absence seizure activities. To our knowledge, however, the precise roles of these direct basal ganglia-thalamic pathways in controlling absence seizures have not been completely established. To address this question, we develop a realistic mean-field model of the basal ganglia-corticothalamic ( BGCT ) network in the
present study. Using various dynamic analysis techniques, we show that absence seizures are controlled and modulated either by the isolated SNr-TRN pathway or by the isolated SNr-SRN pathway. Under suitable conditions, these two types of modulation are observed to coexist in the same network. Importantly, in this coexistence region, both low and high activation levels of SNr neurons can suppress the occurrence of SWDs, owing to the competition between these two direct inhibitory basal ganglia-thalamic pathways. These findings clearly outline a bidirectional control of absence seizures by the basal ganglia, a novel phenomenon that had not been identified in previous experimental or modelling studies. Our results, on the one hand, further improve the understanding of the significant role of the basal ganglia in controlling absence seizure activities and, on the other hand, provide testable hypotheses for future experimental studies. We build a biophysically based model that describes the population dynamics of the BGCT network to investigate the possible roles of the basal ganglia in the control of absence seizures. The network framework of this model is inspired by recent modelling studies on Parkinson's disease 30, 31, and is shown schematically in Fig.
1 ., The network totally includes nine neural populations , which are indicated as follows: excitatory pyramidal neurons ( EPN ) ; inhibitory interneurons ( IIN ) ; TRN; SRN; striatal D1 neurons; striatal D2 neurons; SNr; globus pallidus external ( GPe ) segment; subthalamic nucleus ( STN ) ., Similar to other modelling studies 30–32 , we do not model the globus pallidus internal ( GPi ) segment independently but consider SNr and GPi as a single structure in the present study , because they are reported to have closely related inputs and outputs , as well as similarities in cytology and function ., Three types of neural projections are contained in the BGCT network ., For sake of clarity , we employ different line types and heads to distinguish them ( see Fig . 1 ) ., The red lines with arrow heads denote the excitatory projections mediated by glutamate , whereas the blue solid and dashed lines with round heads represent the inhibitory projections mediated by and , respectively ., It should be noted that in the present study the connections among different neural populations are mainly inspired by previous modelling studies 30 , 31 ., Additionally , we also add the connection sending from SNr to TRN in our model , because recent anatomical findings have provided evidence that the SNr also contains GABAergic neurons directly projecting to the TRN 27–29 ., The dynamics of neural populations are characterized by the mean-field model 9 , 11 , 33–35 , which was proposed to study the macroscopic dynamics of neural populations in a simple yet efficient way ., The first component of the mean-field model describes the average response of populations of neurons to changes in cell body potential ., For each neural population , the relationship between the mean firing rate and its corresponding mean membrane potential satisfies an increasing sigmoid function , given by ( 1 ) where indicate different neural populations , denotes the maximum firing rate , r represents the 
spatial position , is the mean firing threshold , and is the threshold variability of firing rate ., If exceeds the threshold , the neural population fires action potentials with an average firing rate ., It should be noted that the sigmoid shape of is physiologically crucial for this model , ensuring that the average firing rate cannot exceed the maximum firing rate ., The changes of the average membrane potential at the position r , under incoming postsynaptic potentials from other neurons , are modeled as 9 , 33–36 ( 2 ) ( 3 ) where is a differential operator representing the dendritic filtering of incoming signals ., and are the decay and rise times of cell-body response to incoming signals , respectively ., is the coupling strength between neural populations of type and type ., is the incoming pulse rate from the neural population of type to type ., For simplicity , we do not consider the transmission delay among most neural populations in the present work ., However , since the functions via second messenger processes , a delay parameter is introduced to its incoming pulse rate ( i . e . 
, ) to mimic its slow synaptic kinetics. This results in a delay differential equation in the final mathematical description of the BGCT model. Note that a similar modelling method has also been used in several previous studies 13, 37. In our system, each neural population gives rise to a field of pulses, which travels to other neural populations at a mean conduction velocity. In the continuum limit, this type of propagation can be well approximated by a damped wave equation 9, 33–35, 38: ( 4 ) Here is the Laplacian operator ( the second spatial derivative ), is the characteristic range of axons of type, and governs the temporal damping rate of pulses. In our model, only the axons of cortical excitatory pyramidal neurons are assumed to be sufficiently long to yield a significant propagation effect. For the other neural populations, the axons are too short to support wave propagation on the relevant scales. This gives ( ). Moreover, as a typical generalized seizure, the dynamical activities of absence seizures are believed to occur simultaneously throughout the brain. A reasonable simplification is therefore to assume that the spatial activities are uniform in our model, which has been shown to be the least stable mode in models of this class 33, 34, 36. To this end, we ignore the spatial derivative and set in Eq. ( 4 ). Accordingly, the propagation effect of the cortical excitatory axonal field is finally given by 33, 34, 36: ( 5 ) where. For the population of cortical inhibitory interneurons, the BGCT model can be further reduced by using and, which is based on the assumption that intracortical connectivities are proportional to the numbers of synapses involved 9, 13, 33–36. It has been demonstrated that by making these reductions, the developed BGCT model becomes computationally more tractable without significantly deteriorating the precision of the numerical results. We then rewrite the above equations in the
first-order form for all neural populations ., Following above assumptions , we use Eqs ., ( 1 ) – ( 3 ) and ( 5 ) for modelling the dynamics of excitatory pyramidal neurons , and Eqs ., ( 1 ) – ( 3 ) for modelling the dynamics of other neural populations ., This yields the final mathematical description of the BGCT model given as follows: ( 6 ) ( 7 ) ( 8 ) ( 9 ) where ( 10 ) ( 11 ) ( 12 ) In Eq ., ( 10 ) , the superscript T denotes transposition ., The detailed expression of for different neural populations is represented by , and , given by ( 13 ) with ( 14 ) ( 15 ) ( 16 ) Here the variable in Eq ., ( 15 ) denotes the delay and the parameter in Eq ., ( 16 ) represents the constant nonspecific subthalamic input onto SRN ., The parameters used in our BGCT model are compatible with physiological experiments and their values are adapted from previous studies 9 , 11 , 13 , 30 , 31 , 36 ., Unless otherwise noted , we use the default parameter values listed in Table 1 for numerical simulations ., Most of the default values of these parameters given in Table 1 are based on either their nominal values or parameter ranges reported in above literature ., A small number of parameters associated with the basal ganglia ( i . e . 
, , and ) are adjusted slightly, but still within their normal physiological ranges, to ensure that the developed model can generate stable 2–4 Hz SWDs under certain conditions. Note that, due to a lack of quantitative data, the coupling strength of the SNr-TRN pathway needs to be estimated. Considering that the SNr sends GABAergic projections to both SRN and TRN, and that both of these nuclei lie within the thalamus, it is reasonable to infer that the coupling strengths of these two pathways are comparable; for simplicity, we set them equal by default. In the following studies, we also decrease or increase this value by several fold, employing a scale factor ( see below ), to examine how inhibition from the SNr-TRN pathway regulates absence seizures. Additionally, several other critical parameters are also varied within certain ranges to obtain different dynamical states and investigate their possible effects on the modulation of absence seizures. In the present study, several data analysis methods are employed to quantitatively evaluate the dynamical states as well as the properties of the SWDs generated by the model. To reveal critical transitions between different dynamical states, we perform bifurcation analysis for several key parameters of the model. For one specific parameter, the bifurcation diagram is simply obtained by plotting the "stable" local minimum and maximum values of the cortical excitatory axonal field ( i . e .
, ) over changes in this parameter 11 , 39 ., To this end , all simulations are executed for sufficiently long time ( 10 seconds of simulation time , after the system reaches its stable time series ) , and only the local minimum and maximum values obtained from the latter stable time series are used ., Using the above bifurcation analysis , we can also easily distinguish different dynamical states for combined parameters ., Such analysis technique allows us to further identify different dynamical state regions in the two-parameter space ( for example , see Fig . 2D ) ., On the other hand , the power spectral analysis is used to estimate the dominant frequency of neural oscillations ., To do this , the power spectral density is obtained from the time series ( over a period of 10 seconds ) by using the fast Fourier transform ., Then , the maximum peak frequency is defined as the dominant frequency of neural oscillations ., It should be noted that , by combining the results of both the state and frequency analysis , we can outline the SWD oscillation region that falls into the 2–4 Hz frequency range in the two-parameter space ( for example , see the asterisk region in Fig . 
2E ). Moreover, we calculate the mean firing rates ( MFRs ) for several key neural populations in some figures. To compute the MFRs, all corresponding simulations are performed for up to 25 seconds and the data from 5 to 25 seconds are used for statistical analysis. To obtain convincing results, we carry out 20 independent simulations with different random seeds for each experimental setting, and report the averaged result as the final result. Finally, in some cases, we also compute the low and high triggering mean firing rates ( TMFRs ) for SNr neurons. In the following simulations, we find that the mean firing rate of SNr neurons increases with the growth of the excitatory coupling strength, which serves as a control parameter to modulate the activation level of the SNr in our work ( see the Results section ). Based on this property, the low and high TMFRs can be determined from the mean firing rates of SNr neurons occurring at the boundaries of the typical region of 2–4 Hz SWDs ( for example, see the black dashed lines in Fig. 3B ). All network simulations are written and performed under the MATLAB environment. The aforementioned dynamical equations are integrated using the standard fourth-order Runge-Kutta method, with a fixed temporal resolution of 40. In additional simulations, it turns out that the chosen integration step is sufficiently small to ensure the numerical accuracy of our developed BGCT model. The computer codes used in the present study are available on ModelDB ( https://senselab.med.yale.edu/ModelDB/showmodel.asp?model=152113 ). The fundamental implementation of the BGCT model is provided as supplementary information to this paper ( we also provide an XPPAUT code for comparison 41; see Text S1 and S2 ). Previous studies have suggested that the slow kinetics of receptors in the TRN are a candidate pathological factor contributing to the generation of absence seizures, both in animal experiments and in biophysical models of the corticothalamic network 7, 13, 42, 43. To explore whether this mechanism also applies to the developed BGCT model, we perform one-dimensional bifurcation analysis for the inhibitory coupling strength and the delay parameter, respectively. The corresponding bifurcation diagrams and typical time series are depicted in Figs. 2A–2C, which reveal that different dynamical states emerge in our system for different values of and. When the coupling strength is too weak, the inhibition from the TRN cannot effectively suppress the firing of the SRN. In this case, due to the strong excitation from pyramidal neurons, the firing of the SRN rapidly reaches a high level after the beginning of the simulation. Such a high activation level of the SRN in turn drives the firing of cortical neurons to their saturation states within one or two oscillation periods ( region I ). As the coupling strength grows, the inhibition from the TRN starts to affect the firing of the SRN. For sufficiently long, this causes our model to successively undergo two different oscillation patterns. The first is the SWD oscillation pattern, in which multiple pairs of maximum and minimum values are found within each periodic complex ( region II ). Note that this oscillation pattern has been extensively observed in the EEG recordings of real patients during absence seizures 1. The other is the simple oscillation pattern, in which only one pair of maximum and minimum values appears within each periodic complex ( region III ). However, if the coupling strength is too strong, the firing
of SRN is almost completely inhibited by TRN ., In this situation , the model is kicked into the low firing region and no oscillation behavior can be observed anymore ( region IV ) ., Additionally , we also find that the model dynamics are significantly influenced by the delay , and only sufficiently long can ensure the generation of SWDs in the developed model ( see Fig . 2B ) ., To check whether our results can be generalized within a certain range of parameters , we further carry out the two-dimensional state analysis in the ( ) panel ., As shown in Fig . 2D , the whole ( ) panel is divided into four state regions , corresponding to those regions identified above ., Unsurprisingly , we find that the BGCT model can generate the SWD oscillation pattern only for appropriately intermediate and sufficiently long ., This observation is in consistent with our above finding , demonstrating the generalizability of our above results ., To estimate the frequency characteristics of different oscillation patterns , we compute the dominant frequency based on the spectral analysis in the ( ) panel ., For both the simple and SWD oscillation patterns , the dominant frequency is influenced by and , and increasing their values can both reduce the dominant frequency of neural oscillations ( Fig . 2E ) ., However , compared to , our results indicate that the delay may have a more significant effect on the dominant oscillation frequency ( Fig . 
2E ) ., By combining the results in Figs ., 2D and 2E , we roughly outline the SWD oscillation region that falls into the 2–4 Hz frequency range ( asterisk region ) ., It is found that most of , but not all , the SWD oscillation region is contained in this specific region ., Here we emphasize the importance of this specific region , because the SWDs within this typical frequency range is commonly observed during the paroxysm of absence epilepsy in human patients 1 , 2 ., Why can the slow kinetics of receptors in TRN induce absence seizure activities ?, Anatomically , the SRN neurons receive the TRN signals from the inhibitory pathway mediated by both and receptors ., Under suitable condition , the double suppression caused by these two types of GABA receptors occurring at different time instants may provide an effective mechanism to create multiple firing peaks for the SRN neurons ( see below ) ., Such firing pattern of SRN in turn impacts the dynamics of cortical neurons , thus leading to the generation of SWDs ., It should be noted that , during the above processes , both and play critical roles ., In each oscillation period , after the -induced inhibition starts to suppress the firing of SRN neurons , these neurons need a certain recovery time to restore their mean firing rate to the rising state ., Theoretically , if this recovery time is shorter than the delay , another firing peak can be introduced to SRN neurons due to the latter -induced inhibition ., The above analysis implies that our model requires a sufficient long delay to ensure the occurrence of SWDs ., However , as described above , too long is also a potential factor which may push the dominant frequency of SWDs beyond the typical frequency range ., For a stronger , the inhibition caused by is also strong ., In this situation , it is obvious that the SRN neurons need a longer time to restore their firing rate ., As a consequent , a relatively longer is required for the BGCT model to ensure the 
occurrence of SWDs for stronger ( see Fig . 2D ) ., These findings provide consistent evidence that our developed BGCT model can replicate the typical absence seizure activities utilizing previously verified pathological mechanism ., Because we do not change the normal parameter values for basal ganglia during above studies , our results may also indicate that , even though the basal ganglia operate in the normal state , the abnormal alteration within the corticothalamic system may also trigger the onset of absence epilepsy ., Throughout the following studies , we set for all simulations ., For this choice , the delay parameter is within the physiological range and modest , allowing the generation of SWD oscillation pattern while preserving its dominant frequency around 3 Hz in most considered parameter regions ., It should be noted that , in additional simulations , we have shown that by slightly tuning the values of several parameters our developed BGCT model is also powerful to reproduce many other typical patterns of time series , such as the alpha and beta rhythms ( see Fig . S1 ) , which to a certain extent can be comparable with real physiological EEG signals 9 , 36 ., Using the developed BGCT model , we now investigate the possible roles of basal ganglia in controlling absence seizure activities ., Here we mainly concentrate on how the activation level of SNr influence the dynamics generated by the model ., This is because , on the one hand , the SNr is one of chief output nucleus of the basal ganglia to thalamus , and on the other hand , its firing activity has been found to be highly associated with the regulation of absence seizures 21 , 22 ., To this end , the excitatory coupling strength is employed to control the activation level of SNr and a three-step strategy is pursued in the present work ., In this and next subsections , we assess the individual roles of two different pathways emitted from SNr to thalamus ( i . e . 
, the SNr-TRN and SNr-SRN pathways) in the control of absence seizures and discuss their corresponding biophysical mechanisms, respectively. In the final two subsections, we further analyze the combined effects of these two pathways on absence seizure control and extend our results to more general cases. To explore the individual role of the SNr-TRN pathway, we estimate both the state regions and the frequency characteristics in the corresponding two-parameter panel. Note that during these investigations the SNr-SRN pathway is artificially blocked (i.e., its coupling strength is set to zero). With this “naive” method, the modulation of absence seizure activities by the SNr-SRN pathway is removed and the effect caused by the SNr-TRN pathway is theoretically amplified to the extreme. Similar to previous results, we find that the whole panel can again be divided into four different regions (Fig. 3A). These regions are the same as those defined above. For weak inhibitory coupling strength, increasing the excitatory coupling strength moves the model dynamics from the SWD oscillation state to the saturation state. Here we have to note that the saturation state is a non-physiological brain state even though it does not belong to typical seizure activities. In the strong-coupling region, the suppression of SWDs is observed by decreasing the excitatory coupling strength, suggesting that inactivation of SNr neurons may result in seizure termination through the SNr-TRN pathway (Fig. 3A, right side). For strong enough inhibitory coupling strength, this suppression effect is so remarkable that sufficiently low activation of SNr can even kick the network dynamics into the low firing region (compare the results in Figs. 3C and 3D). The SWD suppression induced by the SNr-TRN pathway is complicated, and its biophysical mechanism is presumably due to competition-induced collision. On the one side, the decrease of the excitatory coupling strength inactivates the SNr (Fig. 3E, top panel), which should potentially enhance the firing of TRN neurons. On the other side, however, increasing the activation level of TRN tends to suppress the firing of SRN, which significantly reduces the firing of cortical neurons and in turn inactivates the TRN neurons. Furthermore, the inactivation of cortical neurons also tends to reduce the firing level of TRN neurons. As the excitatory coupling strength is decreased, the collision caused by such complicated competition and information interactions finally leads to the inactivation of all the TRN, SRN, and cortical neurons (Fig. 3E, bottom panel), which potentially provides an effective mechanism to destabilize the original pathological balance within the corticothalamic system, thus causing the suppression of SWDs. Indeed, we find that not only the dynamical state but also the oscillation frequency is greatly impacted by the activation level of SNr through the SNr-TRN pathway. For both the simple and SWD oscillation patterns, increasing the excitatory strength can enhance their dominant frequencies. The combined results of Figs. 3A and 3B reveal that, for a fixed inhibitory coupling strength, whether the model can generate SWDs within the typical 2–4 Hz range is determined by at least one, and often two, critical values of the excitatory coupling strength (Fig. 3B, asterisk region). Because the activation level of SNr increases with this excitatory strength, this finding further indicates that, due to the effect of the SNr-TRN pathway, the model may exhibit corresponding low and high triggering mean firing rates (TMFRs) for SNr neurons (Fig. 3E, dashed lines). If the long-term mean firing rate of SNr neurons falls into the region between these two TMFRs, the model is highly likely to generate the typical 2–4 Hz SWDs observed in the EEG recordings of absence epilepsy patients. In Fig. 3F, we plot both the low and high TMFRs as a function of the inhibitory coupling strength. As the inhibitory coupling strength increases, the high TMFR grows rapidly at first and then reaches a plateau region, whereas the low TMFR increases almost linearly during this process. Consequently, these two critical TMFRs approach each other as the inhibitory coupling strength is increased, until they almost reach an identical value (Fig. 3F). The above findings indicate that the SNr-TRN pathway may play a vital role in controlling absence seizures and that appropriately reducing the activation level of SNr neurons can suppress the typical 2–4 Hz SWDs. A similar antiepileptic effect induced by inactivating the SNr has been widely reported in previous electrophysiological experiments based on both genetic absence epilepsy rats and tottering mice 21–23, 26. Note, however, that in the literature this antiepileptic effect of reducing the activation of SNr is presumed to be accomplished through the indirect SNr-TRN pathway relaying at the superior colliculus 21, 22. Our computational results are the first to suggest that this antiepileptic process can also be triggered by the direct SNr-TRN GABAergic projections. Combining these results, we postulate that in real absence epilepsy patients both of these pathways might work synergistically and together provide a stable mechanism to terminate the onset of absence epilepsy. We next turn on the SNr-SRN pathway and investigate whether this pathway is also effective in the control of absence seizures. Similar to the previous method, we artificially block the SNr-TRN pathway (i.e., set its coupling strength to zero) to enlarge the effect of the SNr-SRN pathway to the extreme. Fig. 4A shows the two-dimensional state analysis in the corresponding two-parameter panel, and again the whole panel is divided into four different state regions. Compared to the results in Fig.
3A, the suppression of SWDs appears in a relatively weaker inhibitory-coupling region as the excitatory coupling strength is increased. This finding suggests that an increase in the activation of SNr can also terminate the SWDs, but through the SNr-SRN pathway. For relatively weak inhibitory coupling within the suppression region, the suppression induced by the SNr-SRN pathway is somewhat strong. In this case, the high activation level of SNr directly kicks the network dynamics into the low firing region, without undergoing the simple oscillation state (Fig. 4C2; compare with Fig. 4C3). Note that this type of state transition is a novel one that has not been observed in the SWD suppression caused by the SNr-TRN pathway. For relatively strong inhibitory coupling within the suppression region, the double-peak characteristic of the SWDs generated by our model is weak. In this situation, as the inhibitory coupling strength is increased, we observe that the network dynamics first transition from the SWD oscillation state to the simple oscillation state, and then to the low firing state (Fig. 4C3). To understand how the SNr-SRN pathway induces SWD suppression, we present the mean firing rates of several key neural populations within the corticothalamic system, as shown in Fig. 4D. It can be seen that increasing this excitatory strength significantly raises the activation level of SNr (Fig. 4D, top panel), which in turn reduces the firing of SRN neurons (Fig. 4D, bottom panel). The inactivation of SRN neurons further suppresses the mean firing rates of both cortical and TRN neurons (Fig. 4D, bottom panel). These chain reactions lead to an overall inhibition of firing activities in the corticothalamic system, which weakens the double-peak shaping effect due to the slow kinetics of GABA_B receptors in TRN. For strong excitatory coupling, this weakening effect is considerable, thus causing the suppression of SWDs. Our results provide computational evidence that high activation of SNr can also effectively terminate absence seizure activities through the strong inhibitory effect of the SNr-SRN pathway. Compared to the SWD suppression induced by the SNr-TRN pathway, it is obvious that the corresponding biophysical mechanism caused by the SNr-SRN pathway is simpler and more direct. Moreover, our two-dimensional frequency analysis indicates that the dominant frequency of the neural oscillations depends on the excitatory coupling strength (see Fig. 4B). For a constant inhibitory coupling strength, a progressive increase of the excitatory strength reduces the dominant frequency, but not in a very significant fashion. Thus, we find that almost all of the SWD oscillation region identified in Fig. 4A falls into the typical 2–4 Hz frequency range (Fig. 4B, asterisk region). Unlike the corresponding results presented in the previous subsection, the combined results of Figs. 4A and 4B demonstrate that the BGCT model modulated by the isolated SNr-SRN pathway exhibits only one TMFR for SNr neurons. For a suitably fixed inhibitory strength, the generation of SWDs is highly likely when the mean firing rate of SNr neurons is lower than this critical firing rate (Fig. 4D, dashed line). With increasing coupling strength, we observe that this TMFR rapidly reduces from a hi | Introduction, Materials and Methods, Results, Discussion | Absence epilepsy is believed to be associated with abnormal interactions between the cerebral cortex and thalamus. Besides the direct coupling, anatomical evidence indicates that the cerebral cortex and thalamus also communicate indirectly through an important intermediate bridge, the basal ganglia. It has thus been postulated that the basal ganglia might play key roles in the modulation of absence seizures, but the relevant biophysical mechanisms are still not completely established. Using a biophysically based model, we demonstrate here that typical absence seizure activities can be controlled and modulated by the direct GABAergic projections from the substantia nigra pars reticulata (SNr) to either the thalamic reticular nucleus (TRN) or the specific relay nuclei (SRN) of the thalamus, through different biophysical mechanisms. Under certain conditions, these two types of seizure control are observed to coexist in the same network. More importantly, due to the competition between the inhibitory SNr-TRN and SNr-SRN pathways, we find that both decreasing and increasing the activation of SNr neurons from the normal level may considerably suppress the generation of spike-and-slow wave discharges in the coexistence region. Overall, these results highlight the bidirectional functional roles of the basal ganglia in controlling and modulating absence seizures, and might provide novel insights into the therapeutic treatment of this brain disorder.
| Epilepsy is a general term for conditions with recurring seizures. Absence seizures are one of several kinds of seizure, characterized by typical 2–4 Hz spike-and-slow wave discharges (SWDs). There is accumulating evidence that absence seizures are due to abnormal interactions between the cerebral cortex and thalamus, and that the basal ganglia may take part in controlling this brain disease via the indirect basal ganglia-thalamic pathway relaying at the superior colliculus. In fact, the basal ganglia not only send indirect signals to the thalamus but also communicate with several key nuclei of the thalamus through multiple direct GABAergic projections. Nevertheless, whether and how these direct pathways regulate absence seizure activities still remains unknown. By computational modelling, we predicted that two direct inhibitory basal ganglia-thalamic pathways emanating from the substantia nigra pars reticulata may also participate in the control of absence seizures. Furthermore, we showed that these two types of seizure control can coexist in the same network and that, depending on the instantaneous network state, both lowering and increasing the activation of SNr neurons may inhibit the SWDs due to the existence of this competition. Our findings emphasize the bidirectional modulation effects of the basal ganglia on absence seizures and might have physiological implications for the treatment of absence epilepsy. | theoretical biology, biology | null |
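The state classification and dominant-frequency analysis used throughout the seizure-model results above (distinguishing saturation, SWD, simple-oscillation, and low-firing regimes, and checking whether oscillations fall in the 2–4 Hz band) can be sketched as follows. This is a minimal illustration, not the authors' code: the peak-counting heuristic and the near-constant-signal thresholds are assumptions introduced here.

```python
import numpy as np

def dominant_frequency(x, dt):
    """Dominant frequency (Hz) of a signal via the periodogram."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0  # ignore any residual DC component
    return freqs[np.argmax(power)]

def classify_pattern(x, dt, high=0.95):
    """Crude classifier mirroring regions I-IV of the state analysis:
    near-constant signals are 'saturation' or 'low firing'; oscillations
    with >1 local maximum per fundamental period are 'SWD' (multiple
    extrema per periodic complex), otherwise 'simple'. The 1.5 cutoff
    and the 'high' level for saturation are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    if x.max() - x.min() < 1e-3:  # effectively constant signal
        return "saturation" if x.mean() > high else "low firing"
    f0 = dominant_frequency(x, dt)
    n_periods = f0 * len(x) * dt  # fundamental periods in the window
    peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))  # local maxima
    return "SWD" if peaks / n_periods > 1.5 else "simple"
```

Scanning such a classifier over a grid of coupling strengths and delays would reproduce the kind of two-parameter state panel described for Fig. 2D, with the 2–4 Hz SWD region obtained by additionally thresholding the dominant frequency.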
891 | journal.pbio.1001125 | 2011 | Combining Genome-Wide Association Mapping and Transcriptional Networks to Identify Novel Genes Controlling Glucosinolates in Arabidopsis thaliana | Biologists across fields possess a common need to identify the genetic variation causing natural phenotypic variation. Genome-wide association (GWA) studies are a promising route to associate phenotypes with genotypes, at a genome-wide level, using “unrelated” individuals 1. In contrast to the traditional use of structured mapping populations derived from two parent genomes, GWA studies allow a wide sampling of the genotypes present within a species, potentially identifying a greater proportion of the variable loci contributing to polygenic traits. However, the uneven distribution of this increased genotypic diversity across populations (population structure), as well as the sheer number of statistical tests performed in a genome-wide scan, can cause detection of a high rate of “false-positive” genotype-phenotype associations that may make it difficult to distinguish loci that truly affect the tested phenotype 1–5. Epistasis and natural selection can also lead to a high false-negative rate, wherein loci with experimentally validated effects on the focal trait are not detected by GWA tests 4–5. Repeated detection of a genotype-phenotype association across populations or experiments has been proposed to increase support for the biological reality of that association, and has even been proposed as a requirement for validation of trait-phenotype associations 2. However, replication across populations or experiments is not solely dependent upon genotypes, but also upon differences in environment and development that significantly influence quantitative traits 5–8. Thus, validation of a significant association through replication, while at face value providing a stringent criterion for significance, may bias studies against detection of causal associations that show significant Genotype×Environment interactions 9. In this study we employed replicated genotypes to test the conditionality of GWA results upon the environment or developmental stage within which the phenotype was measured. Integrating GWA mapping results with additional forms of genome-scale data, such as transcript profiling or proteomics datasets, has also been proposed to strengthen support for detected gene-trait associations and reduce the incidence of false-positive associations 10. To date, network approaches have largely focused upon comparing GWA results with natural variation in gene expression across genotypes in transcriptomic datasets (i.e., expression quantitative trait loci (eQTLs)) 11–13. This requires that candidate genes show natural variation in transcript accumulation, which is not always the functional level at which biologically relevant variation occurs 14. Another network approach maps GWA results onto previously generated interaction networks within a single genotype, such as a protein-protein interaction network, enhancing support for associations that cluster within the network 15. This network filtering approach has yet to be tested with GWA data where the environment or tissue is varied. To evaluate the influence of environmentally or developmentally conditional genetics on GWA mapping, and the utility of network filtering in identifying candidate causal genes, we focused on defense metabolism within the plant Arabidopsis thaliana. A. thaliana has become a key model for advancing genetic technologies and analytical approaches for studying complex quantitative genetics in wild species 16. These advances include experiments testing the ability of genome resequencing and transcript profiling to elucidate the genetics of complex expression traits 17–19 and querying the complexity of genetic epistasis in laboratory and natural populations 20–26. Additionally, A. thaliana has long provided a model system for applying concepts surrounding GWA mapping 3–5, 27–30. As a model set of phenotypes, we used the products of two related A. thaliana secondary metabolite pathways, responsible for aliphatic and indolic glucosinolate (GSL) biosynthesis. These pathways have become useful models for quantitative genetics and ecology (Figure 1) 31. Aliphatic, or methionine-derived, GSL are critical determinants of fitness for A. thaliana and related cruciferous species via their ability to defend against insect herbivory and non-host pathogens 32–35. Indolic GSL, derived from tryptophan, play important roles in resistance to pathogens and aphids 36–40. A. thaliana accessions display significant natural genetic variation controlling the types and amounts of both classes of GSL produced, with direct impacts on plant fitness in the field 33, 41–47. Additionally, GSL display conditional genetic variation dependent upon both the environment and the developmental stage of measurement 48–51. GSL thus provide an excellent model to explore the impact of conditional genetics upon GWA analysis. While the evolutionary and ecological importance of GSL is firmly established, the nearly complete description of the GSL biosynthetic pathways provides an additional practical advantage to studying these compounds 52–54. A large number of QTL and genes controlling GSL natural variation have been cloned from A. thaliana using a variety of network biology approaches similar to network filtering in GWA studies (Figure 1) 55–59. These provide a set of positive control genes of known natural variability and importance to GSL phenotypes, enabling empirical assessment of the level of false-positive and false-negative associations. Within this study, we measure GSL phenotypes in two developmental stages and stress conditions/treatments using a collection of wild A. thaliana accessions to test the relative influence of these components upon GWA. In agreement with previous analyses from structured mapping populations, we found that differences in development have the greater impact in conditioning genetic variation in A. thaliana GSL accumulation. This is further supported by our observation that GWA-identified candidate genes show a non-random distribution across the three datasets, with the GWA candidates from the two developmental stages analyzed overlapping less than expected. The large list of candidate genes identified via GWA was refined with a network co-expression approach, identifying a number of potential networks. A subset of loci from these networks was validated for effects on GSL phenotypes. Even for adaptive traits like GSL accumulation, these analyses suggest the influence of numerous small-effect loci affecting the phenotype at levels that are potentially exposed to natural selection. We measured GSL from leaves of 96 A. thaliana accessions at 35 d post-germination 27–28, using either untreated leaves or leaves treated with AgNO3 (silver) to mimic pathogen attack. In addition, we measured seedling glucosinolates from the same accessions to provide a tissue comparison as well as a treatment comparison. Seedlings were measured at 2 d post-germination, at a stage where the GSL are largely representative of the GSL present within the mature seed 48, 60. GSL from both foliar and seedling tissue grown under these conditions have been measured in multiple independent QTL experiments that used recombinant inbred line (RIL) populations generated from subsets of these 96 accessions, thus providing independent corroboration of the observed GSL phenotypes 41, 51, 61. For the untreated leaves, this analysis detected 18 aliphatic GSL compounds and four indolic GSL compounds. These combined with an additional 21 synthetic variables that describe discrete components of the biochemical pathway to total
43 GSL traits for analysis 4, 61–62. For the AgNO3-treated samples, we detected only 16 aliphatic GSL and four indolic GSL, but we were also able to measure camalexin, which is related to indolic GSL (Table S3); in combination with derived measures this provided us with 42 AgNO3-treated GSL traits 61. For the seedling GSL samples, we detected 19 aliphatic GSLs, two indolic GSLs, and three seedling-specific phenylalanine GSLs (Table S4), which in combination with derived descriptive variables gave us a total of 46 GSL traits 61. Population stratification has previously been noted in this set of A. thaliana accessions, where eight subpopulations were proposed to describe the accessions' genetic differences 27–28. Less explored is the joint effect of population structure and environmental factors, both external (exogenous treatment) and internal (tissue comparison), on GSL. We used our three glucosinolate datasets to test for potential confounding effects of environmental variation, population structure, and their various interaction terms upon the GSL phenotypes (Figure 2). On average, 36% (silver versus control) and 23% (seedling versus control) of the phenotypic variance in GSL traits was solely attributable to accession. An additional 7% (silver versus control) and 14% (seedling versus control) of phenotypic variance was attributable to an interaction between accession and treatment or tissue. This suggests that, on average and given the statistical power of the experiments, 30%–50% of the detectable genetically controlled variance is stable across conditions, while at least 20% of the variance is conditional on treatment and/or tissue. In contrast, population structure by itself accounted for 10%–15% of the total variance in GSL (Figure 2). Interestingly, significantly less variance (<5%) could be attributed to the interaction of treatment or tissue with population structure. This suggests that, for GSL, large-effect polymorphisms that may be linked with population structure are stable across treatment and tissue, while the polymorphisms with conditional effects are less related to the species' demographic structure (Figure 2). This is consistent with QTL studies using RILs that find greater repeatability of large-effect QTL across populations and conditions than of treatment-dependent loci 41, 51, 61, 63. This is further supported by the fact that we utilized replication of defined genotypes across all conditions and tissues, and as such have better power to detect these effects than in systems where it is not possible to replicate genotypes. As such, controlling for population structure will reduce the number of false-positives detected but lead to an elevated false-negative rate, given this significant association between the measured phenotypes and population structure. Interestingly, developmental effects (average of 15%) accounted for three times more of the variation in GSL than environmental effects (average 5%). In particular, only three GSL traits (two indolic GSL, I3M and 4MOI3M, and total indolic GSL) were affected more strongly by AgNO3 treatment than by accession (Table S1 and Figure S1), whereas 11 GSL traits were found to be influenced more by tissue type than by accession (Table S2). This agrees with these indolic GSL being regulated by the defense response 36, 64. Similarly, twice as much GSL variation could be attributed to the interaction between accession and tissue type as to the interaction between accession and AgNO3 treatment. Thus, it appears that intraspecific genetic variation has a greater impact on GSL in relation to development than in response to simulated pathogen attack. Using the 229,940 SNPs available for this collection of 96 accessions, we conducted GWA mapping for GSL traits in both the Seedling and Silver datasets using a maximum likelihood approach that accounts for genetic similarity (EMMA) 65.
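The accession/treatment variance partition reported above can be illustrated with a fixed-effects sums-of-squares decomposition. This is a sketch under simplifying assumptions, not the study's actual analysis: it assumes a fully balanced accession × treatment × replicate design, and the function name is mine.

```python
import numpy as np

def variance_partition(y):
    """Percent of the total sum of squares attributable to accession (axis 0),
    treatment (axis 1), their interaction, and residual error, for a balanced
    design y of shape (accessions, treatments, replicates)."""
    n_a, n_t, n_r = y.shape
    g = y.mean()
    a_means = y.mean(axis=(1, 2))            # per-accession means
    t_means = y.mean(axis=(0, 2))            # per-treatment means
    cell = y.mean(axis=2)                    # accession x treatment cell means
    ss_a = n_t * n_r * np.sum((a_means - g) ** 2)
    ss_t = n_a * n_r * np.sum((t_means - g) ** 2)
    ss_i = n_r * np.sum((cell - a_means[:, None] - t_means[None, :] + g) ** 2)
    ss_e = np.sum((y - cell[:, :, None]) ** 2)
    ss_tot = np.sum((y - g) ** 2)            # equals ss_a + ss_t + ss_i + ss_e
    return {"accession": 100 * ss_a / ss_tot,
            "treatment": 100 * ss_t / ss_tot,
            "interaction": 100 * ss_i / ss_tot,
            "residual": 100 * ss_e / ss_tot}
```

In the terms used above, the accession share corresponds to genetically controlled variance that is stable across conditions, while the accession-by-treatment interaction share corresponds to the conditional genetic variance.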
This identified a large number of significant SNPs and genes for both datasets ( Table 1 ) ., We tested the previously published criteria used to assess significance of candidate genes to ensure that different treatments or tissues did not bias the results produced under these criteria 4 ., These criteria required ≥1 SNP , ≥2 SNPs , or ≥20% of SNPs within a gene to show significant association with a specific GSL trait ., This test was independently repeated for all GSL traits in both datasets ( Tables S5 and S6 ) ., As previously found using the control leaf GSL data , the more stringent ≥2 SNPs/gene criterion greatly decreased the overall number of significant genes identified while not overtly influencing the false-negative rate when using a set of GSL genes known to be naturally variable and causal within the 96 accessions ( Tables 2 and 3 ) ., Interestingly , including multiple treatments and tissues did not allow us to decrease the high empirical false-negative rate ( ∼75% ) in identifying validated causal candidate genes ( Table, 3 ) 4 , 31 ., Using the ≥2 SNPs/gene criterion identified 898 genes for GSL accumulation in silver-treated leaves and 909 genes for the seedling GSL data ., As previously found , the majority of these candidate genes were specific to a subset of GSL phenotypes and no gene was linked to all GSL traits within any dataset ( Figure S2 ) 4 ., We estimated the variance explained by the candidate GWA genes identified in this study using a mixed polygenic model of inheritance for each phenotype within each dataset using the GenABEL package in R 66–67 ., This showed that , on average , the candidate genes explained 37% of the phenotypic variation with a range of 1% to 99% ( Table S10 ) ., Interestingly , if the phenotypes are separated into their rough biosynthetic classes of indolic , long-chain , or short-chain aliphatic 68 , there is evidence for different levels of explained phenotypic variation where indolic has the highest percent 
variance at 45% while short-chain has the lowest at 25% ( p\u200a=\u200a0 . 001 ) ., This is not explainable by differential heritability as the short-chain aliphatic GSLs have the highest heritability in numerous studies including this one ( Tables S1 and S2 ) 4 , 41 , 61 ., This is instead likely due to the fact that short-chain aliphatic GLS show higher levels of multi-locus epistasis that complicates the ability to estimate the explained variance within GWA studies 31 , 41 , 61 ., Previous work with untreated GSL leaf samples showed that candidate genes clustered in hotspots , with the two predominant hotspots surrounding the previously cloned AOP and MAM loci 4 , where multiple polymorphisms surrounding the region of these two causal genes significantly associate with multiple GLS phenotypes ., We plotted GWA-identified candidate genes for GSL accumulation from the silver and seedling datasets to see if treatment or tissue altered this pattern ( Figure 3 ) ., Both datasets showed statistically significant ( p<0 . 05; Figure, 3 ) hotspots of candidate genes that clustered predominantly around the AOP and MAM loci with some minor treatment- or tissue-specific hotspots containing fewer genes ., This phenomenon is observed across multiple GLS traits ( Figure 3 ) ., The AOP and MAM hotspots are known to be generated by local blocks of linkage disequilibrium ( LD ) wherein a large set of non-causal genes are physically linked with the causal AOP2/3 and MAM1/3 genes 4 ., Interestingly , while the silver and control leaf GWA datasets showed similar levels of clustering around the AOP and MAM loci , the hotspot at the MAM locus was much more pronounced than the AOP locus in the seedling GWA dataset ( Figure 3 ) , suggesting more seedling GLS traits are associated with the MAM locus ., This agrees with QTL-mapping results in structured RIL populations of A . 
thaliana that have shown that the MAM/Elong locus has stronger effects upon seedling GSL phenotypes in comparison to leaves, whereas the effect of the AOP locus is stronger in leaves than seedlings [41], [62]–[63]. In addition, the relationship of GSL phenotypes across accessions is highly similar in the two leaf datasets, while the phenotypic relationships across accessions are shifted when comparing the seedling to the leaf (Figure 4). Together, this suggests greater similarity in the genetic variation affecting GSL phenotypic variation between the two leaf datasets than between leaf and seedling datasets, indicating that GSL variation is impacted more by development than by simulated pathogen attack. This is further supported by the analysis of variance (Figure 2). To further test whether measuring the same phenotypes in different tissues or treatments identifies similar GWA mapping candidates, we investigated the overlap of GWA candidate genes identified across the three datasets. For this analysis we excluded genes within the known AOP and MAM LD blocks, as previous research has shown that all of these genes except the AOP and MAM genes are likely false-positives and would bias our overlap analysis [4], [69]–[71]. The remaining GWA mapping candidate genes showed more overlap between the two leaf datasets than between leaf and seedling datasets (Figure 5). Interestingly, the overlap between GWA-identified candidate gene sets from seedling and leaf data was smaller than would be expected by chance (χ² p<0.001 for all three sectors) (Figure 5). This suggests that outside of the AOP and MAM loci, distinct sets of genetic variants may contribute to the observed phenotypic diversity in GSL across these tissues, which agrees with QTL-mapping studies identifying distinct GSL QTL for seedling and leaf [41], [62]–[63]. As such, focusing simply on GWA mapping candidates independently identified in multiple treatments or tissues to call true significant associations will overlook genes whose genotype-to-phenotype association is conditional upon differences in the experiments. Similarly, the amount of phenotypic variance explained by the candidates differed between the datasets, with control and treated leaves having the highest average explained variance, 39% and 41%, respectively. In contrast, the seedling dataset had the lowest explained variance at 32%, similarly suggesting that altering the conditions of the experiments will change commonly reported summary variables such as explained variance. GWA studies generally produce large lists of candidate genes, presumed to contain a significant fraction of false-positive associations. One proposed strategy refines these results by searching for enrichment of candidate genes within pre-defined proteomic or transcriptomic networks [15]. To test the applicability of this approach to our GWA study, we overlaid our list of 2,436 candidate genes (excluding genes showing proximal LD to the causal AOP2/3 and MAM1/2/3 genes [4]) that associated with at least one GSL phenotype in at least one of the three datasets (Figure 5) onto a previously published co-expression network [72]. If the network filtering approach is valid and there are true causal genes within the candidate gene lists, then the candidate genes should show tighter network linkages to previously validated causal genes than the average gene. Measuring the distances between all candidate genes and all known GSL causal genes within the co-expression
network showed that, for all datasets, the GWA candidate genes were on average closer to known causal genes than non-candidates (Figure S4). Interestingly, the GWA mapping candidate genes actually showed closer linkages to the cysteine, homocysteine, and glutathione biosynthetic pathways than to the core GSL biosynthetic pathways, suggesting that natural variation in these pathways may impact A. thaliana secondary metabolism (Figure S4 and Dataset S1). The network proximity of GWA mapping candidates to known causal genes supports the utility of the network filtering approach in identifying true causal genes among the long list of GWA mapping candidate genes. To determine if this network filtering approach finds whole co-expression networks or isolated genes, we extended the co-expression network to include known and predicted GSL causal genes (Table S7). The largest network obtained from this analysis centered on the core biosynthetic genes for the aliphatic and tryptophan-derived GSL as well as sulfur metabolism genes (Figures 6 and S3). Interestingly, this large network linked to a defense signaling network represented by CAD1, PEN2, and EDS1 (Figure 6) [73]. The defense signaling pathway associated with PEN2 and, more recently, CAD2 and EDS1 had previously been linked to altered GSL accumulation via both signaling and biosynthetic roles [36], [39], [74]–[75]. However, the current network analysis has identified new candidate participants in this network altering GSL accumulation. To test these predicted linkages, we obtained a mutant line possessing a T-DNA insertional disruption of the previously undescribed locus At4g38550, which is linked to both CAD1 and PEN2 (Figure 6, Table S9). This mutant had elevated levels of all aliphatic GSL within the rosette leaves as well as 4-methoxyindol-3-ylmethyl GSL, shown to mediate non-host resistance (Table S9) [36], [39]. These results suggest a role for At4g38550 in either defense responses or GSL accumulation. Network analysis also identified several previously described (RML1) and novel candidate (ATSFGH, At1g06640, and At1g04770) genes that were associated with the core biosynthetic part of the network. RML1 (synonymous with PAD2, CAD2), a biosynthetic enzyme for glutathione, has previously been shown to control GSL accumulation either via a signaling role or via actual biosynthesis of glutathione [74]–[75]. To test whether ATSFGH (S-formylglutathione hydrolase, At2g41530), At1g06640 (unknown 2-oxoacid-dependent dioxygenase, 2-ODD), or At1g04770 (tetratricopeptide-containing protein) may play a role in GSL accumulation, we obtained insertional mutants. This showed that the disruption of At1g06640 led to significantly increased accumulation of the short-chain methylsulfinyl GSL but not the corresponding methylthio or long-chain GSL (Table S9). In contrast, the AtSFGH mutant had elevated levels of all short-chain GSL along with a decreased accumulation of the long-chain 8-MTO GSL (Table S9). The At1g04770 mutant showed no altered GSL levels other than a significantly decreased accumulation of 8-MTO GSL (Table S9). This suggests that these genes alter GSL accumulation, although the specific molecular mechanism remains to be identified. Interestingly, network membership is not sufficient to predict a GSL impact, as T-DNA disruption of homoserine kinase (At2g17265), a gene co-expressed with the GSL core but not a candidate from the GWA analysis, had no detectable impact upon GSL accumulation (Table S9). Thus, the network filtering approach identified genes closely linked to the GSL biosynthetic network that can control GSL accumulation and are GWA-identified candidate genes. The above analysis shows that GWA candidate genes which co-express with known GSL genes are likely to influence GSL accumulation. However, networks might influence GSL accumulation independent of co-expression with
known GSL genes. To test this, we investigated several co-expression networks that involved solely GWA-identified candidate genes and genes not previously implicated in influencing GSL accumulation (Figure 7). Three of these networks included genes that affect natural variation in non-GSL phenotypes within A. thaliana, namely PHOTOTROPIN 2 (PHOT2), Erecta (ER) [76], and ELF3/GI (Figure 7) [77], [78]. The fourth network did not involve any genes previously linked to natural variation (Figure 7). We obtained A. thaliana seed stocks with mutations in a subset of genes for each of these three networks to test whether loss of function at these loci affects GSL accumulation. The largest network containing no previously known GSL-related genes that we examined is a blue light/gibberellin signaling pathway represented by PHOT2 (Figure 7A). This pathway had not previously been ascribed any role in GSL accumulation in A. thaliana. We tested this GWA-identified association by measuring GSL in the single and double PHOT1/PHOT2 mutants [79]. PHOT1 was included as it has been shown to function either redundantly or epistatically with PHOT2 [79]. The single phot1 or phot2 mutation had no significant effect upon GSL accumulation (Table S9). The double phot1/phot2 knockout plants showed a significant increase in the production of detected methylthio GSL as well as a decrease in the accumulation of 3-carbon GSL compared to control plants. Thus, it appears that GSL are influenced by the PHOT1/PHOT2 signaling pathway, possibly in response to blue light signaling (Table S9). This agrees with previous reports from Raphanus sativus that blue light controls GSL [80], [81]. The second non-GSL network we examined contains the ER gene (Figure 7B). The ER (Erecta) network, and specifically the ER locus, had previously been queried for the ability to alter GSL accumulation using two Arabidopsis RIL populations (Ler×Col-0 and Ler×Cvi) that segregate for a loss-of-function allele at the ER locus [41], [51], [63], [82]–[86]. In these analyses, the ER locus was linked to seed/seedling GSL accumulation in only one of the two populations and was not linked to mature leaf GSL accumulation [41], [86]. Analysis of the ER mutant within the Col-0 genotype showed that the Erecta gene does influence GSL content within leaves, as suggested by the GWA results (Table S9, Figure 7A). Plants with loss of function at Erecta showed increased levels of methylthio GSL, long-chain GSL, and 4-substituted indole GSL (Table S9). Interestingly, the ER network contains a number of chromatin remodeling genes. We obtained A. thaliana lines with loss-of-function mutations in three of these genes (Table S9) to test if the extended network also alters GSL accumulation. Mutation of two of the three genes (At5g18620, CHR17, and At4g02060, PRL) was associated with increased levels of short-chain aliphatic GSL and a corresponding decrease in long-chain aliphatic GSL (Table S9). This shows that the Erecta network has the capacity to influence GSL accumulation. Two smaller networks containing the ELF3 and GI genes were of interest as these two genes are associated with natural variation in the A.
thaliana circadian clock (Figure 7C) [77], [87], [88]. GSL analysis showed that both the elf3 and gi mutants had lower levels of aliphatic GSL than controls (Table S9). Comparing multiple gi mutants from both the Col-0 and Ler genetic backgrounds showed that only gi mutants in the Col-0 background altered GSL accumulation (Table S9). This suggests that GI's link to glucosinolates is epistatic to other naturally variable loci within the genome, as previously noted for natural GI alleles in relation to other phenotypes (Table S9) [78]. An analysis of the elf4 mutant, which has morphological similarities to elf3-1 but was not a GWA-identified candidate, showed that this mutation did not alter GSL accumulation. Thus, elf3/gi affects GSL via a more direct mechanism than altering plant morphology. Given that two genes in the circadian clock network directly affect GSL accumulation, and that the expression of these two genes is correlated with other genes in the network, it is reasonable to hypothesize that the circadian clock plays a role in GSL accumulation. While the GSL phenotypes of the above laboratory-generated mutants suggest that variation in the circadian clock plays a role in GSL accumulation, they do not prove that the natural alleles at these genes affect GSL accumulation. To validate this, we leveraged germplasm developed in the course of previous research showing that natural variation at the ELF3 locus controls numerous phenotypes, including circadian clock periodicity and flowering time [77]. We utilized quantitative complementation lines to test if natural variation at ELF3 also generates differences in GSL content [77]. This showed that the ELF3 allele from the Bay-0 accession was associated with a higher level of short-chain aliphatic GSL accumulation in comparison to plants containing the Sha allele (Table S9). In contrast, both Bay-0 and Sha allele-bearing plants had elevated levels of 8-MTO GSL in comparison to Col-0 (Tables S8 and S9).
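Comparisons like the ones above test many GSL phenotypes per mutant line, so significance calls must be corrected for multiple testing (as done throughout Table S9; the specific procedure is not named in this excerpt). A minimal sketch of one common choice, the Benjamini-Hochberg step-up procedure, applied to hypothetical per-phenotype p-values:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return booleans marking which hypotheses are rejected while
    controlling the false discovery rate at alpha (BH step-up)."""
    m = len(pvals)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # Reject the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values for ten GSL phenotypes in one mutant line.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.368]
flags = benjamini_hochberg(pvals, alpha=0.05)
print(flags)  # only the two smallest p-values survive correction
```

Note that a naive per-test threshold of 0.05 would call five of these ten phenotypes significant, while the FDR-controlled procedure retains only two.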
Thus, ELF3 is a polymorphic locus that contains multiple distinct alleles that influence GSL content within the plant, and the ELF3/GI network causes natural variation in GSL content. The final network examined here, represented by CLPX (CLP protease), is likely involved in chlorophyll catabolism and possibly also chloroplast senescence [89]. This network is uncharacterized and has not previously been associated with GSL accumulation or natural variation in any phenotype, but participation in chloroplast degradation is suggested by transcriptional correlation of CLPX with several catabolism genes. Analysis of mutants deficient in function for two of these genes showed that they all possessed increased aliphatic GSL in comparison to wild-type controls. These results suggest that natural variation in this putative network could influence GSL content in A. thaliana. The majority (12 of 13) of genes in this network show significant variation in transcript abundance across A. thaliana accessions, a significantly greater proportion than expected by chance (χ² p<0.001) [90]–[92], further suggesting that this network may contribute to GSL variation across the accessions. Finally, we tested a single two-gene network found in the co-expression data wherein both genes had been annotated but not previously linked to GSL content. This network involved AtPTR3 (a putative peptide transporter, At5g46050) and DPL1 (a dihydrosphingosine lyase, At1g27980). T-DNA mutants in both genes appeared to be lethal, as we could not identify homozygous progeny. However, comparison of the heterozygous progeny to wild-type homozygotes showed that mutants in both genes led to elevated levels of aliphatic GSL (Table S9). Thus, there are likely more networks causal for GSL variation within this dataset that remain to be tested. While GSL are considered "secondary" metabolites, these compounds are affected by many aspects of plant metabolism; thus GSL phenotyping is sensitive to any genetic perturbation that affects plant physiology. As such, we identified six genes that were expressed in mature leaves but did not show any significant association of DNA sequence polymorphism with GSL phenotypes and were additionally not identified within any of the above co-expression networks. Insertional mutants disrupted at these loci were designated as random mutant controls (Table S9). Analyzing GSL within these six lines showed that on average 13%±4% of the GSL were affected in the random control mutant set even after correction for multiple testing. While this suggests that GSL may be generally sensitive to mutations affecting genes expressed within the leaf, this incidence of significant GSL effects is much lower than observed for the T-DNA mutants selected to test GWA mapping-identified pathways (CLPX: 78%±11%, PTR3: 61%±6%, Erecta: 45%±10%, GSL: 46%±11%, ELF3/GI: 53%±17%). In all cases the mutants deficient in GWA pathway-identified gene function showed significantly greater numbers of altered GSL
phenotypes than the negative control T-DNA mutant set (χ², p<0.001), suggesting that combining GWA-identified candidate genes with co-expression networks successfully identifies genes with the capacity to cause natural variation in GSL content. Identifying the specific mechanisms involved will require significant future research. A limiting factor for the utility of GWA studies has been the preponderance of false-positive and false-negative associations, which makes the accurate prediction of biologically valid genotype-phenotype associations very difficult. In this report, we describe the implementation and validation of a candidate gene co-expression filter that has given us a high success rate in candidate gene validation | Introduction, Results, Discussion, Materials and Methods | Genome-wide association (GWA) is gaining popularity as a means to study the architecture of complex quantitative traits, partially due to the improvement of high-throughput low-cost genotyping and phenotyping technologies. Glucosinolate (GSL) secondary metabolites within Arabidopsis spp. can serve as a model system to understand the genomic architecture of adaptive quantitative traits. GSL are key anti-herbivory defenses that impart adaptive advantages within field trials. While little is known about how variation in the external or internal environment of an organism may influence the efficiency of GWA, GSL variation is known to be highly dependent upon the external stresses and developmental processes of the plant, making it an excellent model for studying conditional GWA. To understand how development and environment can influence GWA, we conducted a study using 96 Arabidopsis thaliana accessions, >40 GSL phenotypes across three conditions (one developmental comparison and one environmental comparison) and ∼230,000 SNPs. Developmental stage had dramatic effects on the outcome of GWA, with each stage identifying different loci associated with GSL
traits. Further, while the molecular bases of numerous quantitative trait loci (QTL) controlling GSL traits have been identified, there is currently no estimate of how many additional genes may control natural variation in these traits. We developed a novel co-expression network approach to prioritize the thousands of GWA candidates and successfully validated a large number of these genes as influencing GSL accumulation within A. thaliana using single-gene isogenic lines. Together, these results suggest that complex traits imparting environmentally contingent adaptive advantages are likely influenced by up to thousands of loci that are sensitive to fluctuations in the environment or developmental state of the organism. Additionally, while GWA is highly conditional upon genetics, the use of additional genomic information can rapidly identify causal loci en masse. | Understanding how genetic variation can control phenotypic variation is a fundamental goal of modern biology. A major push has been made using genome-wide association mapping in all organisms to attempt to rapidly identify the genes contributing to phenotypes such as disease and nutritional disorders. But a number of fundamental questions have not been answered about the use of genome-wide association: for example, how does the internal or external environment influence the genes found? Furthermore, the simple question of how many genes may influence a trait is unknown. Finally, a number of studies have identified significant false-positive and false-negative issues within genome-wide association studies that are not solvable by direct statistical approaches. We have used genome-wide association mapping in the plant Arabidopsis thaliana to begin exploring these questions. We show that both external and internal environments significantly alter the identified genes, such that using different tissues can lead to the identification of nearly completely different gene sets.
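The tissue-dependent disagreement between candidate lists described here can be quantified with basic set arithmetic; a minimal sketch using hypothetical gene identifiers (not the study's actual candidate lists):

```python
def overlap_stats(set_a, set_b):
    """Summarize agreement between two candidate gene sets."""
    shared = set_a & set_b
    union = set_a | set_b
    return {
        "shared": len(shared),
        "only_a": len(set_a - set_b),
        "only_b": len(set_b - set_a),
        # Jaccard index: 1.0 = identical sets, 0.0 = disjoint sets.
        "jaccard": len(shared) / len(union) if union else 0.0,
    }

# Hypothetical candidate lists from a leaf and a seedling GWA run.
leaf = {"AOP2", "MAM1", "AT4G38550", "AT1G06640", "ER"}
seedling = {"MAM1", "ELF3", "GI", "AT1G04770"}
stats = overlap_stats(leaf, seedling)
print(stats)
```

Comparing such overlap counts to the overlap expected under random draws from the genome (e.g. with a χ² test, as in the Results above) distinguishes genuinely condition-specific candidate sets from sampling noise.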
Given the large number of potential false-positives, we developed an orthogonal approach to filtering the possible genes by identifying co-functioning networks using the nominal candidate gene list derived from genome-wide association studies. This allowed us to rapidly identify and validate a large number of novel and unexpected genes that affect Arabidopsis thaliana defense metabolism within phenotypic ranges that have been shown to be selectable within the field. These genes and the associated networks suggest that Arabidopsis thaliana defense metabolism is more readily similar to the infinite gene hypothesis, according to which there is a vast number of causative genes controlling natural variation in this phenotype. It remains to be seen how frequently this is true for other organisms and other phenotypes. | genome-wide association studies, functional genomics, plant biology, population genetics, metabolic networks, plant science, genome complexity, genetic polymorphism, plant genetics, biology, systems biology, plant biochemistry, genetics, genomics, evolutionary biology, gene networks, computational biology, genetics and genomics | Genome-wide association mapping is highly sensitive to environmental changes, but network analysis allows rapid causal gene identification. |
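The network filtering idea summarized above — rank GWA candidates by their proximity to validated genes in a co-expression graph — can be sketched with a multi-source breadth-first search. The edges and gene names below are hypothetical placeholders; the study itself used a previously published co-expression network:

```python
from collections import deque

def shortest_distances(adj, sources):
    """Multi-source BFS: hop distance from each node to the nearest source."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Hypothetical undirected co-expression edges and known causal genes.
edges = [("MAM1", "AOP2"), ("AOP2", "GENE_X"), ("GENE_X", "GENE_Y"),
         ("MAM1", "GENE_Z"), ("GENE_Y", "GENE_W")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

known_causal = ["MAM1", "AOP2"]
dist = shortest_distances(adj, known_causal)

# Rank candidates by proximity to any validated gene; closer = higher priority.
candidates = ["GENE_X", "GENE_Y", "GENE_Z", "GENE_W"]
ranked = sorted(candidates, key=lambda g: dist.get(g, float("inf")))
print([(g, dist.get(g)) for g in ranked])
```

Candidates unreachable from any validated gene receive infinite distance and fall to the bottom of the ranking, mirroring the observation that true candidates sit closer to known causal genes than the average gene does.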
53 | journal.pcbi.1002714 | 2012 | Confidence-based Somatic Mutation Evaluation and Prioritization | Next generation sequencing (NGS) has revolutionized our ability to determine genomes and compare, for example, tumor to normal cells to identify somatic mutations. However, the platform is not error-free, and various experimental and algorithmic factors contribute to the false positive rate when identifying somatic mutations [1]. Indeed, recent studies report validation rates of 54% [2]. Error sources include PCR artifacts, biases in priming [3], [4] and targeted enrichment [5], sequence effects [6], base calling causing sequence errors [7], variations in coverage, and uncertainties in read alignments [8], such as around insertions and deletions (indels) [9]. Reflecting the rapid development of bench and computational methods, algorithms to identify somatic mutations from NGS data are still evolving rapidly. Remarkably, the congruence of identified mutations between current algorithms is less than 50% (below). Given the large discrepancies, one is left wondering which mutations to select, such as for clinical decision making or ranking for follow-up experiments. Ideal would be a statistical value, such as a p-value, indicating the confidence of each mutation call. Error sources have been addressed by examining bulk sets of mutations, such as computational methods to measure the expected amount of false positive mutation calls utilizing the transition/transversion ratio of a set of variations [10], [11], machine learning [12], and inheritance errors when working with family genomes [13] or pooled samples [14], [15]. Druley et al. [13] optimized variation calls using short plasmid sequence fragments. The accuracy of calling germline variations, i.e.
single nucleotide polymorphisms (SNPs), has been addressed by validating SNPs using other techniques such as genotyping microarrays [15]. Thus, these methods enable a comparison of methods to identify and characterize error sources, but they do not assign a ranking score to individual mutations. Several NGS mutation identification algorithms do output multiple parameters for each mutation call, such as coverage, genotype quality and consensus quality. However, it is not clear if and how to interpret these metrics with regard to whether a mutation call is correct. Furthermore, multiple parameters are generated for each mutation call, and thus one simply cannot rank or prioritize mutations using the values. Instead, researchers often rely on personal experience and arbitrary filtering thresholds to select mutations. In summary, (a) there is a low level of congruence between somatic mutations identified by different algorithms and sequencing platforms, and (b) there is no method to assign a single accuracy estimate to individual mutations. Here, we develop a methodology to assign a confidence value, a false discovery rate (FDR), to individual identified mutations. This algorithm does not identify mutations but rather estimates the accuracy of each mutation. The method is applicable both to the selection and prioritization of mutations and to the development of algorithms and methods. Using Illumina HiSeq reads and the algorithms GATK, SAMtools and SomaticSNiPer, we identified 4,078 somatic mutations in B16 melanoma cells. We assigned an FDR to each mutation and show that 50 of 50 mutations with low FDR (high confidence) validated, while 0 of 44 with high FDR (low confidence) validated. To discover mutations, DNA from tail tissue of three black6 mice, all litter mates, and DNA from three B16 melanoma samples was extracted, and exon-encoding sequences were captured, resulting in six samples. RNA was extracted from B16 cells in
triplicate. Single-end 50 nt (1×50 nt) and paired-end 100 nt (2×100 nt) reads were generated on an Illumina HiSeq 2000 (Supplementary Table S1 in Text S1). Each sample was sequenced on an individual lane, resulting in an average of 104 million reads per lane. DNA reads were aligned to the mouse reference genome using the Burrows-Wheeler Alignment Tool (bwa) [16], and RNA reads were aligned with bowtie [17]. Using the 1×50 nt reads, 97% of the targeted nucleotides were covered at least once, the mean/median targeted nucleotide coverage was 38×/30×, and 70–73% of target nucleotides had 20× or higher coverage. Using the 2×100 nt reads, 98% of the targeted nucleotides were covered at least once, the mean/median targeted nucleotide coverage was 165×/133×, and 97% of target nucleotides had 20× coverage. Somatic mutations were independently identified using the software packages SAMtools [18], GATK [11] and SomaticSNiPer [19] (Figure 2) by comparing the single nucleotide variations found in B16 samples to the corresponding loci in the black6 samples (B16 cells were originally derived from a black6 mouse). The potential mutations were filtered according to recommendations from the respective software tools (SAMtools and GATK) or by selecting an appropriate threshold for the somatic score of SomaticSNiPer (Methods). Considering only those mutations found in all tumor-normal pairings, the union of B16 somatic mutations identified by the three algorithms was 4,078 (Figure 3a). However, substantial differences between the sets of mutations identified by each program exist, even when considering those mutations found in all tumor-normal pairings (Figure 3a). While 1,355 mutations are identified by all three programs (33% of 4,078), the agreement between results is low. Of the 2,484 mutations identified by GATK, only 1,661 (67%) are identified by SAMtools and 1,469 (60%) are identified by SomaticSNiPer. Of the 3,109 mutations identified by SAMtools, only 53% and 66% are identified by GATK and SomaticSNiPer, respectively. Of the 2,302 mutations identified by SomaticSNiPer, only 64% and 89% are identified by GATK and SAMtools, respectively. The number of 1,355 mutations identified by all three algorithms reflects only 55% (GATK), 44% (SAMtools) and 59% (SomaticSNiPer) of the mutations found by the individual programs, respectively. We want to assign each somatic mutation a single quality score Q that could be used to rank mutations based on confidence. However, it is not straightforward to assign a single value, since most mutation detection algorithms output multiple scores, each reflecting a different quality aspect. Thus, we generated a random forest classifier [20] that combines multiple scores, resulting in a single quality score Q (Methods). All identified somatic mutations, whether from the "same versus same" or "tumor versus normal" comparison, are thus assigned a single value predicting accuracy. Note that the classifier training needs to be performed separately for each program, due to the differences in the set of scores which are returned by the individual programs. After defining a relevant quality score, we sought to re-define the score into a statistically relevant false discovery rate (FDR). We determined, at each Q value, the number of mutations with a better Q score in the "same versus same" pair and the number of mutations with a better Q score in the "tumor versus normal" pair. For a given mutation with quality score Q detected in the "tumor versus normal" comparison, we estimate the false discovery rate by computing the ratio of "same versus same" mutations with a score of Q or better to the overall number of mutations found in the tumor comparison with a score of Q or better. A potential bias in comparing methods is differential coverage; we thus normalize the false discovery rate for the number of bases
covered by NGS reads in each sample. We calculate the common coverage by counting all bases of the reference genome which are covered by data of the tumor and normal sample, or by both "same versus same" samples, respectively. After assigning our FDR to each mutation, the FDR-sorted list of somatic mutations shows a clear preference for mutations found by three programs in the low FDR region (Figure 3b; see Supplementary Dataset S1 for a complete list). This observation fits the naïve assumption that the consensus of multiple different algorithms is likely to be correct. We identified 50 mutations with a low FDR (high confidence) for validation, including 41 with an FDR less than 0.05 (Figure 3c). All 50 were validated by a combination of Sanger resequencing and inspection of the B16 RNA-Seq sequence reads. Table 1 lists the ten somatic mutations with the best FDRs, all of which validated. We selected 44 mutations identified by at least one detection algorithm, present in only one B16 sample and assigned a high FDR (>0.5) by our algorithm (Figure 3c). In contrast to the low-FDR mutations, none of the 44 high-FDR samples validated, neither by Sanger sequencing nor by inspection of the RNA alignments. 37 of those mutations were clear false positives (no mutation by Sanger or RNA-Seq), while the remaining seven loci neither yielded sequencing reactions nor were covered by RNA-Seq reads. Figure 4 shows representative mutations together with the Sanger sequencing traces. In the case of the false positive mutation, the three programs identified this position in black6 as a sequencing error (and did not output a mutation at this locus), but failed in the single B16 case (marked with the red box). If a real experiment had included only this single sample, it would have produced a false positive mutation call, despite using the consensus of three programs. To test mutations with less extreme FDRs, we selected 45 somatic mutations, which were distributed evenly across the FDR spectrum from 0.1 to 0.6. Validation using both Sanger sequencing and inspection of the RNA-Seq reads resulted in 15 positive validations (either Sanger sequencing or RNA-Seq reads), 22 negative validations (neither Sanger sequencing nor RNA-Seq reads) and 8 non-conclusive results (failed sequencing reactions and no RNA-Seq coverage). See the Supplementary Dataset S2 for a detailed table showing the results of the validation of those 45 mutations. We computed a receiver operating characteristic (ROC) curve for all 131 validated mutations (Figure 5a), resulting in an area under the curve (AUC) [21] of 0.96. As this analysis might be biased due to the relatively large set sizes of the high- and low-FDR mutations, we randomly sampled 10 mutations each, added the 37 validated mutations with the intermediate FDRs, calculated the ROC-AUC, and repeated this 1000 times in order to get a more robust performance estimate. The resulting mean AUC is 0.797 (±0.002). A systematic test of FDR thresholds ranging from zero to one with a step size of 0.05 implies that an optimal threshold for using the FDR as a binary classifier should be ≤0.2. ROC curves and the corresponding AUC are useful for comparing classifiers and visualizing their performance [21]. We extended this concept for evaluating the performance of experimental and computational procedures. However, plotting ROC graphs requires knowledge of all true and false positives (TP and FP) in a dataset, information which is usually not given and hard to establish for high-throughput data (such as NGS data). Thus, we used the calculated FDRs to estimate the respective TP and FP rates, plot a ROC curve, and calculate the AUC (Figure 1c). Figure 5b shows the ROC curve comparing the FDR versus the percent of 50 validated mutations and percent of total. ROC curves and the associated AUC values can be compared across experiments, lab protocols, and algorithms. For the following comparisons, we used all somatic mutations found by any algorithm and in any tumor-normal pairing without applying any filter procedure. We considered only those mutations in target regions (exons). First, we tested the influence of the reference "same versus same" data on the calculation of the FDRs. Using the triplicate black6 and B16 sequencing runs, we created 18 triplets (combinations of "black6 versus black6" and "black6 versus B16") to use for calculating the FDR. When comparing the resulting FDR distributions for the sets of somatic mutations, the results are consistent when the reference data sets are exchanged (Figure 1c, Supplementary Figure S2 in Text S1). This suggests that the method is robust with regard to the choice of the reference "same versus same" dataset. Thus, a "same versus same" duplicate profiling needs to be done only once for a given lab platform, and the resultant FDR(Q) reference function can be re-used for future
profiling ., Using our definition of a false discovery rate , we have established a generic framework for evaluating the influence of numerous experimental and algorithmic parameters on the resulting set of somatic mutations ., We apply this framework to study the influence of software tools , coverage , paired end sequencing and the number of technical replicates on somatic mutation identification ., First , the choice of the software tool has a clear impact on the identified somatic mutations ( Figure 3 ) ., On the tested data , SAMtools produces the highest enrichment of true positive somatic mutations ( Figure 6a ) ., We note that each tool has different parameters and quality scores for mutation detection; we used the default settings as specified by the algorithm developers ., The impact of the coverage depth on whole genome SNV detection has been recently discussed 22 ., For the B16 sequencing experiment , we sequenced each sample in an individual flowcell lane and achieved a target region mean base coverage of 38 fold across target nucleotides ., In order to study the effect of the coverage on exon capture data , we down-sampled the number of aligned sequence reads for every 1×50 nt library to generate a mean coverage of 5 , 10 and 20 fold , respectively , and then reapplied the mutation identification algorithms ., As expected , a higher coverage results in a better ( i . e . 
fewer false positives ) somatic mutation set , although the improvement from the 20 fold coverage to 38 fold is marginal for the B16 cells ( Figure 6b ) ., It is straightforward to simulate and rank other experimental settings using the available data and framework ( Figures 6c and d ) ., As we profiled each sample in triplicate , including three separate exon captures , we wanted to identify the impact of these replicates ., Comparing duplicates to triplicates , triplicates do not offer a benefit compared to the duplicates ( Figure 6c ) , while duplicates offer a clear improvement compared to a study without any replicates ( indicated by the higher AUC ) ., In terms of the ratio of somatic mutations at a FDR of 0 . 05 or less , we see enrichment from 24% for a run without replicates to 71% for duplicates and 86% for triplicates ., These percentages correspond to 1441 , 1549 and 1524 mutations , respectively ., Using the intersection of triplicates removes more mutations with low FDRs than mutations with a high FDR , as indicated by the lower ROC AUC and the shift of the curve to the right ( Supplementary Figure S7 in Text S1 , Figure 6c ) : the specificity is slightly increased at the cost of a lower sensitivity , when assuming that removed low FDR mutations are true positives and the removed high FDR mutations are true negatives ., This assumption is supported by our validation experiments , as true negative mutations are likely to get a high FDR ( Figure 5a ) ., The 2×100 nt library was used to create 6 libraries: a 2×100 nt library; a 1×100 nt library; a 1×50 nt library using the 50 nucleotides at the 5′ end of the first read; a 1×50 nt library using the nucleotides 51 to 100 at the 3′ end of the first read; a 2×50 nt read using nucleotides 1 to 50 of both reads; and a 2×50 nt library using nucleotides 51 to 100 of both reads ., These libraries were compared using the calculated FDRs of predicted mutations ( Figure 6d ) ., The 1×50 3′ library performed worst , 
as expected due to the increasing error rate at the 3′ end of sequence reads ., Despite the much higher median coverage ( 63–65 vs . 32 ) , the somatic mutations found using the 2×50 5′ and 1×100 nt libraries have a smaller AUC than the 1×50 nt library ., This surprising effect is a result of high FDR mutations in regions with low coverage ( Supplementary Text S1 ) ., Indeed , the sets of low FDR mutations are highly similar ., Thus , while the different read lengths and types identify non-identical mutations , the assigned FDR is nevertheless able to segregate true and false positives ( Supplementary Figure S3 in Text S1 ) ., NGS is a revolutionary platform for detecting somatic mutations ., However , the error rates are not insignificant , with different detection algorithms identifying mutations with less than 50% congruence ., Other high throughput genomic profiling platforms have developed methods to assign confidence values to each call , such as p-values associated with differential expression calls from oligonucleotide microarray data ., Similarly , we developed here a method to assign a confidence value ( FDR ) to each identified mutation ., From the set of mutations identified by the different algorithms , the FDR accurately ranks mutations based on likelihood of being correct ., Indeed , we selected 50 high confidence mutations and all 50 validated; we selected 45 intermediate confidence mutations and 15 validated , 22 were not present and 8 inconclusive; we selected 44 low confidence mutations and none validated ., Again , all 139 mutations were identified by at least one of the detection algorithms ., Unlike a consensus or majority voting approach , the assigned FDR not only effectively segregates true and false positives but also provides both the likelihood that the mutation is true and a statistically ranking ., Also , our method allows the adjustment for a desired sensitivity or specificity which enables the detection of more true mutations than a 
consensus or majority vote , which report only 50 or 52 of all 65 validated mutations ., We applied the method to a set of B16 melanoma cell experiments ., However , the method is not restricted to these data ., The only requirement is the availability of a “same versus same” reference dataset , meaning at least a single replicate of a non-tumorous sample should be performed for each new protocol ., Our experiments indicate that the method is robust with regard to the choice of the replicate , so that a replicate is not necessarily required in every single experiment ., Once done , the derived FDR ( Q ) function can be reused when the Q scores are comparable ( i . e . when the same program for mutation discovery was used ) ., Here , we profiled all samples in triplicate; nevertheless , the method produces FDRs for each mutation from single-run tumor and normal profiles ( non-replicates ) using the FDR ( Q ) function ., We do show , however , that duplicates improve data quality ., Furthermore , the framework enables one to define best practice procedures for the discovery of somatic mutations ., For cell lines , at least 20-fold coverage and a replicate achieve close to the optimum results ., A 1×50 nt library resulting in approximately 100 million reads is a pragmatic choice to achieve this coverage ., The possibility of using a reference data set to rank the results of another experiment can also be exploited to e . g . 
score somatic mutations found in different normal tissues by similar methods ., Here , one would expect relatively few true mutations , so an independent set of reference data will improve the resolution of the FDR calculations ., While we define the optimum as the lowest number of false positive mutation calls , this definition might not suffice for other experiments , such as for genome wide association studies ., However , our method allows the evaluation of the sensitivity and specificity of a given mutation set and we show application of the framework to four specific questions ., The method is by no means limited to these parameters , but can be applied to study the influence of all experimental or algorithmic parameters , e . g . the influence of the alignment software , the choice of a mutation metric or the choice of vendor for exome selection ., In summary , we have pioneered a statistical framework for the assignment of a false-discovery-rate to the detection of somatic mutations ., This framework allows for a generic comparison of experimental and computational protocol steps on generated quasi ground truth data ., Furthermore , it is applicable for the diagnostic or therapeutic target selection as it is able to distinguish true mutations from false positives ., Next-generation sequencing , DNA sequencing: Exome capture for DNA resequencing was performed using the Agilent Sure-Select solution-based capture assay 23 , in this case designed to capture all known mouse exons ., 3 µg purified genomic DNA was fragmented to 150–200 nt using a Covaris S2 ultrasound device ., gDNA fragments were end repaired using T4 DNA polymerase , Klenow DNA polymerase and 5′ phosphorylated using T4 polynucleotide kinase ., Blunt ended gDNA fragments were 3′ adenylated using Klenow fragment ( 3′ to 5′ exo minus ) ., 3′ single T-overhang Illumina paired end adapters were ligated to the gDNA fragments using a 10∶1 molar ratio of adapter to genomic DNA insert using T4 DNA ligase 
., Adapter ligated gDNA fragments were enriched pre capture and flow cell specific sequences were added using Illumina PE PCR primers 1 . 0 and 2 . 0 and Herculase II polymerase ( Agilent ) using 4 PCR cycles ., 500 ng of adapter ligated , PCR enriched gDNA fragments were hybridized to Agilents SureSelect biotinylated mouse whole exome RNA library baits for 24 hrs at 65°C ., Hybridized gDNA/RNA bait complexes where removed using streptavidin coated magnetic beads ., gDNA/RNA bait complexes were washed and the RNA baits cleaved off during elution in SureSelect elution buffer leaving the captured adapter ligated , PCR enriched gDNA fragments ., gDNA fragments were PCR amplified post capture using Herculase II DNA polymerase ( Agilent ) and SureSelect GA PCR Primers for 10 cycles ., Cleanups were performed using 1 . 8× volume of AMPure XP magnetic beads ( Agencourt ) ., For quality controls we used Invitrogens Qubit HS assay and fragment size was determined using Agilents 2100 Bioanalyzer HS DNA assay ., Exome enriched gDNA libraries were clustered on the cBot using Truseq SR cluster kit v2 . 5 using 7 pM and sequenced on the Illumina HiSeq2000 using Truseq SBS kit ., Sequence reads were aligned using bwa ( version 0 . 5 . 8c ) 16 using default options to the reference mouse genome assembly mm9 24 ., Ambiguous reads – those reads mapping to multiple locations of the genome as provided by the bwa output - were removed ( see Supplementary Dataset S3 for the alignment statistics ) ., The remaining alignments were sorted , indexed and converted to a binary and compressed format ( BAM ) and the read quality scores converted from the Illumina standard phred+64 to standard Sanger quality scores using shell scripts ., For each sequencing lane , mutations were identified using three software programs: SAMtools pileup ( version 0 . 1 . 8 ) 18 , GATK ( version 1 . 0 . 
4418 ) 11 and SomaticSNiPer 19 ., For SAMtools , the author-recommend options and filter criteria were used ( http://sourceforge . net/apps/mediawiki/SAMtools/index . php ? title=SAM_FAQ; accessed September 2011 ) , including first round filtering , maximum coverage 200 ., For SAMtools second round filtering , the point mutation minimum quality was 30 ., For GATK mutation calling , we followed the author-designed best practice guidelines presented on the GATK user manual ( http://www . broadinstitute . org/gsa/wiki/index . php ? title=Best_Practice_Variant_Detection_with_the_GATK_v2&oldid=5207; accessed October 2010 ) ., For each sample a local realignment around indel sites followed by a base quality recalibration was performed ., The Unified Genotyper module was applied to the resultant alignment data files ., When needed , the known polymorphisms of the dbSNP 25 ( version 128 for mm9 ) were supplied to the individual steps ., The variant score recalibration step was omitted and replaced by the hard-filtering option ., For both SAMtools and GATK , potential indels were filtered out of the results before further processing and a mutation was accepted as somatic if it is present in the data for B16 but not in the black6 sample ., Additionally , as a post filter , for each potentially mutated locus we required non-zero coverage in the normal tissue ., This is intended to sort out mutations which only look to be somatic because of a not covered locus in the black6 samples ., For SomaticSNiPer mutation calling , the default options were used and only predicted mutations with a “somatic score” of 30 or more were considered further ( see Supplementary Text S1 for a description of the cutoff selection ) ., For all three programs , we removed all mutations located in repetitive sequences as defined by the RepeatMasker track of the UCSC Genome Browser 26 for the mouse genome assembly mm9 ., Barcoded mRNA-seq cDNA libraries were prepared from 5 ug of total RNA using a 
modified version of the Illumina mRNA-seq protocol ., mRNA was isolated using SeramagOligo ( dT ) magnetic beads ( Thermo Scientific ) ., Isolated mRNA was fragmented using divalent cations and heat resulting in fragments ranging from 160–200 bp ., Fragmented mRNA was converted to cDNA using random primers and SuperScriptII ( Invitrogen ) followed by second strand synthesis using DNA polymerase I and RNaseH ., cDNA was end repaired using T4 DNA polymerase , Klenow DNA polymerase and 5′ phosphorylated using T4 polynucleotide kinase ., Blunt ended cDNA fragments were 3′ adenylated using Klenow fragment ( 3′ to 5′ exo minus ) ., 3′ single T-overhang Illumina multiplex specific adapters were ligated on the cDNA fragments using T4 DNA ligase ., cDNA libraries were purified and size selected at 300 bp using the E-Gel 2% SizeSelect gel ( Invitrogen ) ., Enrichment , adding of Illumina six base index and flow cell specific sequences was done by PCR using Phusion DNA polymerase ( Finnzymes ) ., All cleanups were performed using 1 . 8× volume of Agencourt AMPure XP magnetic beads ., Barcoded RNA-seq libraries were clustered on the cBot using Truseq SR cluster kit v2 . 
5 using 7 pM and sequenced on the Illumina HiSeq2000 using Truseq SBS kit ., The raw output data of the HiSeq was processed according to the Illumina standard protocol , including removal of low quality reads and demultiplexing ., Sequence reads were then aligned to the reference genome sequence 24 using bowtie 17 ., The alignment coordinates were compared to the exon coordinates of the RefSeq transcripts 27 and for each transcript the counts of overlapping alignments were recorded ., Sequence reads not aligning to the genomic sequence were aligned to a database of all possible exon-exon junction sequences of the RefSeq transcripts 27 ., The alignment coordinates were compared to RefSeq exon and junction coordinates , reads counted and normalized to RPKM ( number of reads which map per nucleotide kilobase of transcript per million mapped reads 28 ) for each transcript ., We selected SNVs for validation by Sanger re-sequencing and RNA ., SNVs were identified which were predicted by all three programs , non-synonymous and found in transcripts having a minimum of 10 RPKM ., Of these , we selected the 50 with the highest SNP quality scores as provided by the programs ., As a negative control , 44 SNVs were selected which have a FDR of 0 . 
5 or more , are present in only one cell line sample and are predicted by only one mutation calling program ., 45 mutations with intermediate FDR levels were selected ., Using DNA , the selected variants were validated by PCR amplification of the regions using 50 ng of DNA ( see Supplementary Dataset S4 for the primer sequences and targeted loci ) , followed by Sanger sequencing ( Eurofins MWG Operon , Ebersberg , Germany ) ., The reactions were successful for 50 , 32 and 37 loci of positive , negative and intermediate controls , respectively ., Validation was also done by examination of the tumor RNA-Seq reads ., Random Forest Quality Score Computation: Commonly-used mutation calling algorithms ( 11 , 18 , 19 ) output multiple scores , which all are potentially influential for the quality of the mutation call ., These include - but are not limited to - the quality of the base of interest as assigned by the instrument , the alignment quality and number of reads covering this position or a score for the difference between the two genomes compared at this position ., For the computation of the false discovery rate we require an ordering of mutations , however this is not directly feasible for all mutations since we might have contradicting information from the various quality scores ., We use the following strategy to achieve a complete ordering ., In a first step , we apply a very rigorous definition of superiority by assuming that a mutation has better quality than another if and only if it is superior in all categories ., So a set of quality properties S\u200a= ( s1 , … , sn ) is preferable to T\u200a= ( t1 , … , tn ) , denoted by S>T , if si>ti for all i\u200a=\u200a1 , … , n ., We define an intermediate FDR ( IFDR ) as follows However , we regard the IFDR only as an intermediate step since in many closely related cases , no comparison is feasible and we are thus not benefitting from the vast amount of data available ., Thus , we take advantage of the good 
generalization property of random forest regression 20 and train a random forest as implemented in R ( 29 , 30 ) ., For m input mutations with n quality properties each , the value range for each property was determined and up to p values were sampled with uniform spacing out of this range; when the set of values for a quality property was smaller than p , this set was used instead of the sampled set ., Then each possible combination of sampled or selected quality values was created , which resulted in a maximum of pn data points in the n-dimensional quality space ., A random sample of 1% of these points and the corresponding IFDR values were used as predictor and response , respectively , for the random forest training ., The resulting regression score is our generalized quality score Q; it can be regarded as a locally weighted combination of the individual quality scores ., It allows direct , single value comparison of any two mutations and the computation of the actual false discovery rate: For the training of the random forest models used to create the results for this study , we calculate the sample IFDR on the somatic mutations of all samples before selecting the random 1% subset ., This ensures the mapping of the whole available quality space to FDR values ., We used the quality properties “SNP quality” , “coverage depth” , “consensus quality” and “RMS mapping quality” ( SAMtools , p\u200a=\u200a20 ) ; “SNP quality” , “coverage depth” , “Variant confidence/unfiltered depth” and “RMS mapping quality” ( GATK , p\u200a=\u200a20 ) ; or SNP quality” , “coverage depth” , “consensus quality” , “RMS mapping quality” and “somatic score” ( SomaticSNiPer , p\u200a=\u200a12 ) , respectively ., The different values of p ensure a set size of comparable magnitude ., To acquire the “same vs . same” and “same vs . 
different” data when calculating the FDRs for a given set of mutations , we use all variants generated by the different programs without any additional filtering ., Common coverage computation: The number of possible mutation calls can introduce a major bias in the definition of a false discovery rate ., Only if we have the same number of possible locations for mutations to occur for our tumor comparison and for our “same vs . same” comparison , the number of called mutations is comparable and can serve as a basis for a false discov | Introduction, Results, Discussion, Methods | Next generation sequencing ( NGS ) has enabled high throughput discovery of somatic mutations ., Detection depends on experimental design , lab platforms , parameters and analysis algorithms ., However , NGS-based somatic mutation detection is prone to erroneous calls , with reported validation rates near 54% and congruence between algorithms less than 50% ., Here , we developed an algorithm to assign a single statistic , a false discovery rate ( FDR ) , to each somatic mutation identified by NGS ., This FDR confidence value accurately discriminates true mutations from erroneous calls ., Using sequencing data generated from triplicate exome profiling of C57BL/6 mice and B16-F10 melanoma cells , we used the existing algorithms GATK , SAMtools and SomaticSNiPer to identify somatic mutations ., For each identified mutation , our algorithm assigned an FDR ., We selected 139 mutations for validation , including 50 somatic mutations assigned a low FDR ( high confidence ) and 44 mutations assigned a high FDR ( low confidence ) ., All of the high confidence somatic mutations validated ( 50 of 50 ) , none of the 44 low confidence somatic mutations validated , and 15 of 45 mutations with an intermediate FDR validated ., Furthermore , the assignment of a single FDR to individual mutations enables statistical comparisons of lab and computation methodologies , including ROC curves and AUC metrics ., 
Using the HiSeq 2000 , single end 50 nt reads from replicates generate the highest confidence somatic mutation call set . | Next generation sequencing ( NGS ) has enabled unbiased , high throughput discovery of genetic variations and somatic mutations ., However , the NGS platform is still prone to errors resulting in inaccurate mutation calls ., A statistical measure of the confidence of putative mutation calls would enable researchers to prioritize and select mutations in a robust manner ., Here we present our development of a confidence score for mutations calls and apply the method to the identification of somatic mutations in B16 melanoma ., We use NGS exome resequencing to profile triplicates of both the reference C57BL/6 mice and the B16-F10 melanoma cells ., These replicate data allow us to formulate the false discovery rate of somatic mutations as a statistical quantity ., Using this method , we show that 50 of 50 high confidence mutation calls are correct while 0 of 44 low confidence mutations are correct , demonstrating that the method is able to correctly rank mutation calls . | genome sequencing, genomics, genetic mutation, genetics, biology, computational biology, genetics and genomics | null |
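The row above describes a per-mutation FDR that, for each quality score, compares the number of "same versus same" calls and tumor-versus-normal calls at or above that score, each normalized by the commonly covered bases. A minimal sketch of that idea follows; the function name, signature, and exact normalization are illustrative assumptions, not the paper's implementation.

```python
from bisect import bisect_left

def empirical_fdr(q_tumor, q_same, cov_tumor=1.0, cov_same=1.0):
    """Assign each tumor-vs-normal call an FDR estimate by treating
    'same vs same' calls as false positives: compare the tail counts of
    calls scoring at least q, each normalized by the commonly covered
    bases (cov_tumor / cov_same). Illustrative sketch only."""
    same_sorted = sorted(q_same)
    tumor_sorted = sorted(q_tumor)
    fdrs = []
    for q in q_tumor:
        # number of calls with a quality score of at least q in each comparison
        fp = len(same_sorted) - bisect_left(same_sorted, q)
        tp = len(tumor_sorted) - bisect_left(tumor_sorted, q)
        rate = (fp / cov_same) / max(tp / cov_tumor, 1e-12)
        fdrs.append(min(1.0, rate))
    return fdrs

# A high-scoring call has few 'same vs same' calls above it, so its
# estimated FDR is low; low-scoring calls drift toward FDR = 1.
scores = empirical_fdr(q_tumor=[10, 20, 30], q_same=[5, 15])
```

Sorting mutations by such a score reproduces the kind of confidence ranking the abstract describes, with high-confidence calls at the low-FDR end of the list.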
2,130 | journal.pcbi.1002555 | 2,012 | Minimum Free Energy Path of Ligand-Induced Transition in Adenylate Kinase | Biological functions of proteins are mediated by dynamical processes occurring on complex energy landscapes [1]. These processes frequently involve large conformational transitions between two or more metastable states, induced by an external perturbation such as ligand binding [2]. Time scales of the conformational transition are frequently of order microseconds to seconds. To characterize such slow events in molecular dynamics (MD) trajectories, the free energy profile or the potential of mean force (PMF) along a reaction coordinate must be identified. In particular, the identification of the transition state ensemble (TSE) enables the barrier height to be evaluated, and the correct kinetics would be reproduced if there is only a single dominant barrier. However, for proteins with many degrees of freedom, finding an adequate reaction coordinate and identifying the TSE is a challenging task placing high demands on computational resources. The finite-temperature string method [3], [4] and the on-the-fly string method [5] find a minimum free energy path (MFEP) in a high-dimensional space. Given a set of collective variables describing a conformational change, the MFEP is defined as the maximum-likelihood path along the collective variables. The MFEP is expected to lie at the center of reactive trajectories and to contain only important transitional motions [4]. Furthermore, since the MFEP approximately orthogonally intersects the isocommittor surfaces (the surfaces of constant committor probability in the original space) [4], the TSE can be identified as the intersection with the isocommittor surface with probability of committing to the product (or the reactant) = 1/2. The methods and MFEP concepts have been applied to various molecular systems [4]–[6], including protein conformational changes [7]–[9]. With regard to high-dimensional systems like proteins, the quality of the MFEP (whether it satisfies the above-mentioned properties) is particularly sensitive to the choice of the collective variables. The collective variables should be selected such that their degrees of freedom are few enough to ensure a smooth free energy surface; at the same time, they should be sufficiently many to approximate the committor probability [4], [9]. To resolve these contrary requirements, effective dimensional reduction is required. Large conformational transitions of proteins, frequently dominated by their domain motions, can be well approximated by a small number of large-amplitude principal modes [2], [10]. This suggests that the use of the principal components may be the best choice for approximating the committor probability with the fewest number of variables for such large conformational transitions involving domain motions. A further advantage is the smoothness of the free energy landscape in the space of the large-amplitude principal components. If the curvature of the MFEP is large, the MFEP may provide a poor approximation to the isocommittor surface, since flux can occur between non-adjacent structures along the path [9]. The selection of the large-amplitude principal components as the collective variables would keep the curvature of the MFEP sufficiently small. Here, we conducted preliminary MD simulations around the two terminal structures of the transition and performed a principal component analysis to obtain the principal components (see Materials and Methods for details). Following selection of a suitable MFEP, determination of the PMF and characterization of the physical quantities along the MFEP are needed to understand the in-depth mechanism of the transition. Although the finite-temperature string method yields a rigorous estimate of the gradient of the PMF under a large coupling constant with the collective variables [3]–[5] (see Materials and Methods), errors in the estimates of the gradients and in the tangential directions of the pathway tend to accumulate during the integration process. To accurately quantify the PMF and the averages of various physical quantities in a multi-dimensional space, we utilized another statistical method, the multi-state Bennett acceptance ratio (MBAR) method [11], which provides optimal estimates of the free energy and other average quantities along the MFEP.

Here, we applied the proposed methods to the conformational change in Escherichia coli adenylate kinase (AK), the best studied of the enzymes exhibiting a large conformational transition [12]–[23]. AK is a ubiquitous monomeric enzyme that regulates cellular energy homeostasis by catalyzing the reversible phosphoryl transfer reaction ATP + AMP ↔ 2 ADP. According to the analysis of the crystal structures by the domain motion analysis program DynDom [24], AK is composed of three relatively rigid domains (Figure 1): the central domain (CORE: residues 1–29, 68–117, and 161–214), an AMP-binding domain (AMPbd: 30–67), and a lid-shaped ATP-binding domain (LID: 118–167). Inspection of the crystal structures suggests that, upon ligand binding, the enzyme undergoes a transition from the inactive open form to the catalytically competent closed structure [25] (Figure 1). This transition is mediated by large-scale closure motions of the LID and AMPbd domains insulating the substrates from the water environment, while occluding some catalytically relevant water molecules. The ATP phosphates are bound to the enzyme through the P-loop (residues 7–13), a widely distributed ATP-binding motif. The interplay between AK's dynamics and function has been the subject of several experimental studies. 15N NMR spin relaxation studies have revealed that the LID and AMPbd domains fluctuate on nanosecond timescales while the CORE domain undergoes picosecond fluctuations [12], [13]. The motions of these hinge regions are highly correlated with enzyme activity [14]. In particular, the opening of the LID domain, responsible for product release, is thought to be the rate-limiting step of the catalytic turnover [14]. Recent single-molecule Förster resonance energy transfer (FRET) experiments have revealed that the closed and open conformations of AK exist in dynamic equilibrium even with no ligand present [15], [16], and that the ligand's presence merely changes the populations of open and closed conformations. This behavior is reminiscent of the population-shift mechanism [26] rather than the induced-fit model [27], in which structural transitions occur only after ligand binding. The population-shift-like behavior of AK has also been supported by simulation studies [17]–[20]. Lou and Cukier [17], Kubitzki and de Groot [18], and Beckstein et al. [19] employed various enhanced sampling methods to simulate ligand-free AK transitions. Arora and Brooks [20] applied the nudged elastic band method in the pathway search for both ligand-free and ligand-bound forms. These studies showed that, while the ligand-free form samples conformations near the closed structure [17]–[20], ligand binding is required to stabilize the closed structure [20]. Despite the success of these studies based on all-atom level models, atomistic details of the transition pathways, including the structures around the TSE, have not yet been fully captured. In this study, we successfully evaluated the MFEP for both ligand-free and ligand-bound forms of AK using the on-the-fly string method, and calculated the PMF and the averages of various physical quantities using the MBAR method. Our analysis elucidates an in-depth mechanism of the conformational transition of AK.

The MFEPs for apo- and holo-AK, and their PMFs, were obtained from the string method and the MBAR method, respectively (see Videos S1 and S2). The MFEPs were calculated using the same 20 principal components selected for the collective variables. The holo-AK calculations were undertaken with the bisubstrate analog inhibitor (Ap5A) as the bound ligand, without imposing any restraint on the ligand. Figures 2A and 2B show the MBAR estimates of the PMFs along the images of the MFEP (the converged string at 12 ns in Figures 2A and 2B) for apo- and holo-AK, respectively. Here, the images on the string are numbered from the open (PDBid: 4ake [28]) to the closed conformation (PDBid: 1ake [29]). These terminal images were fixed during the simulations to enable sampling of the conformations around the crystal structures. In the figures, the convergence of the PMF in the string method process is clearly seen in both systems. Convergence was also confirmed by the error estimates (Figure S1) and by the root-mean-square displacement (RMSD) of the string from its initial path (Figure S2). The PMF along the MFEP reveals a broad potential well on the open-side conformations of apo-AK, suggesting that the open form of AK is highly flexible [20]. This broad well is divided into two regions, the fully open and partially closed states (encircled), by a small PMF barrier. In holo-AK (Figure 2B), the MFEP exhibits a single substantial free energy barrier between the open and closed states, which does not appear in the initial path. This barrier will be identified as the transition state below. The PMF along the MFEP shows that the closed form (tightly binding the ligand) is much more stable than the open form, which binds the ligand loosely (large fluctuations of the ligand will be shown later). To characterize the MFEP in terms of the domain motions, the MFEP was projected onto a space defined by two distances from the CORE domain, the distance to the LID domain and the distance to the AMPbd domain (the distance between the mass centers of atoms for the two domains; Figures 2C and 2D). The PMF was also projected onto this space. The comparison of the two figures shows that ligand binding changes the energy landscape of AK, suggesting that this is not a simple population-shift mechanism. In apo-AK, the motions of the LID and AMPbd domains are weakly correlated, reflecting the zipper-like interactions on the LID-AMPbd interface [19]. The MFEP clearly indicates that reaching the fully closed conformation involves the closure of the LID domain followed by the closure of the AMPbd domain. The higher flexibility of the LID domain has been reported in previous studies [17], [19], [20]. In holo-AK, the pathway can be described by two successive scenarios, that is, LID-first closing followed by AMPbd-first closing. In the open state, the MFEP is similar to that of apo-AK, revealing that LID closure occurs first. In the closed state, however, the AMPbd closure precedes the LID closure. This series of domain movements was also identified by the domain motion analysis program DynDom [24] (Figure S3). It is known for the string method that the convergence of the pathway depends on the initial path. In order to check whether the MFEP obtained here depends on the initial path, we conducted another set of calculations for apo-AK using a different initial path, which has an AMPbd-first-closing pathway, as opposed to the LID-first-closing pathway shown above. If the LID and AMPbd domains moved independently of each other, the LID-first-closing and AMPbd-first-closing pathways would be expected to be equally stable. Despite this initial setup, however, our calculation again showed convergence toward the LID-first-closing pathway (see Figure S4). As described above, this tendency of the pathways likely reflects the highly flexible nature of the LID domain. Furthermore, in order to check whether the samples around the MFEP are consistent with the experiments, we compared the PMF as a function of the distance between the Cα
atoms of Lys145 and Ile52 with the results of the single-molecule FRET experiment by Kern et al . 16 ( see Figure S5 ) ., The PMF was calculated by using the samples obtained by the umbrella samplings around the MFEP ., In the figure , the stable regions of the PMF for holo-AK are highly skewed toward the closed form , and some population toward the partially closed form was also observed even for apo-AK , which is consistent with the histogram of the FRET efficiency 16 ., To more clearly illustrate the energetics along the MFEP in terms of the domain motions , we separately plot the PMF as a function of the two inter-domain distances defined above ( Figures 3A and 3B ) ., We observe that the PMF of apo-AK has a double-well profile for the LID-CORE distance ( indicated by the blue line in Figure 3A ) , whereas the PMF in terms of the AMPbd-CORE distance is characterized by a single-well ( Figure 3B ) ., The single-molecule FRET experiments monitoring the distances between specific residue pairs involving the LID domain ( LID-CORE ( Ala127-Ala194 ) 15 and LID-AMPbd ( Lys145- Ile52 ) 16 ) revealed the presence of double-well profiles in the ligand-free form ., On the other hand , an electron transfer experiment probing the distance between the AMPbd and CORE domains ( Ala55-Val169 ) 30 showed only that the distance between the two domains decreased upon ligand binding ., Considering the PMF profiles in the context of these experimental results , we suggest that the partially closed state ( ) in apo-AK ( Figure 2A ) can be ascribed to the LID-CORE interactions but not to the AMPbd-CORE interactions ., To elucidate the origin of the stability of the partially closed state , we monitored the root mean square fluctuations ( RMSF ) of the atoms along the MFEP ( see Materials and Methods for details ) ., Figures 3C and 3D show the RMSF along the MFEP for apo and holo-AK , respectively ., In apo-AK ( Figure 3C ) , large fluctuations occur in the partially closed state ( ) 
around the LID-CORE hinge regions ( residue 110–120 , and 130–140 ) and the P-loop ( residue 10–20 ) ., It has been proposed , in the studies of AK using coarse-grained models , that “cracking” or local unfolding occurs due to localized strain energy , and that the strained regions reside in the LID-CORE hinge and in the P-loop 21 , 23 ., Our simulation using the all-atom model confirmed the existence of “cracking” in the partially closed state , and provided an atomically detailed picture of this phenomenon ., The average structures around the partially closed state revealed that , in the open state , a highly stable Asp118-Lys136 salt bridge is broken by the strain induced by closing motion around ( Figure S6A ) ., This salt bridge has been previously proposed to stabilize the open state while imparting a high enthalpic penalty to the closed state 18 ., Breakage of the salt bridge releases the local strain and the accompanying increases in fluctuation may provide compensatory entropy to stabilize the partially closed state ., A similar partially closed state of the LID domain was also found by the work of Lou and Cukier 31 in which they performed all-atom MD simulation of apo-AK at high temperature ( 500 K ) condition ., In holo-AK , both of the LID-CORE and AMPbd-CORE distances exhibit double-well profiles ( indicated by the red lines in Figures 3A and 3B ) , separating the closed from the open state ., The breakage of the 118–136 salt bridge at around is not accompanied by “cracking” of the hinge region ( Figure 3D ) ., Instead , the hinge region is stabilized by binding of ATP ribose to Arg119 and His134 ( Figure S6B ) , leading to a smooth closure of the LID domain ., This suggests that one role of the salt bridge breakage is rearrangement of the molecular interactions to accommodate ATP-binding 32 ., P-loop fluctuations are also suppressed in holo-AK ( Figure 3D ) ., Consistent with our findings , reduced backbone flexibilities in the presence of Ap5A were 
reported in the above-mentioned NMR study 13 ., The origin of the double-well profile in holo-AK was investigated via the ligand-protein interactions ., The motion of the ligand along the MFEP was firstly analyzed by focusing on the AMP adenine dynamics , since the release of the AMP moiety from the AMP-binding pocket was observed in the open state ., It is again emphasized that the ligand is completely free from any restraint during the simulations ., PCA was performed for the three-dimensional Cartesian coordinates of the center of mass of AMP adenine , and the coordinates were projected onto the resultant 1st PC in Figure 4A ., The AMP adenine is observed to move as much as 10 Å in the open state ( ; Figure 4B ) , while it is confined to a narrow region of width 1–2 Å ( the binding pocket ) in the closed state ( ) ., Such a reduction of the accessible space of the AMP adenine might generate a drastic decrease in entropy or an increase in the PMF barrier of the open-to-closed transition ., Furthermore , close inspection of the PMF surface reveals the existence of a misbinding event at ( Figure 4B ) , in which the AMP ribose misbinds to Asp84 in the CORE domain , and is prevented from entering the AMP-binding pocket ., This event further increases the barrier-height of the transition ., The MFEP revealed that AMP adenine enters the AMP-binding pocket around , as indicated by a rapid decrease in the accessible area ( Figure 4A ) ., This event is well correlated with the position of the PMF barrier along the MFEP ( Figure 2B ) ., This coincidence between the binding process and the domain closure suggests that the two processes are closely coupled ., Before analyzing the situation in detail , however , it is necessary to assess whether the observed PMF barrier around ( Figure 2B ) corresponds to a TSE , because the PMF barrier is not necessarily a signature of dynamical bottleneck in high-dimensional systems 33 ., TSE validation is usually performed with a committor 
test 4 , 7 , 9 , 33 ., In principle , the committor test launches unbiased MD simulations from structures chosen randomly from the barrier region , and tests whether the resultant trajectories reach the product state with probability 1/2 ., Here , since limited computational resources precluded execution of a full committor test , 40 unbiased MD simulations of 10 ns were initiated from each of , 33 or 34 , a total of 120 simulations or 1 . 2 , and the distributions of the final structures after 10 ns were monitored 9 ., Figure 5A shows the binned distributions of the final structures assigned by index of the nearest MFEP image ( the blue bars ) ., When the simulations were initiated from the image at ( ) , the distribution biases to the open form-side ( the closed form-side ) relative to the initial structures ., On the other hand , when starting from the image at , the distribution is roughly symmetric around the initial structures ., This result suggests that the TSE is located at ., In other words , it was validated that the TSE was successfully captured in the MFEP , and at the same time , the collective variables were good enough to describe the transition ., A close inspection of the structures around the PMF barrier supported our insufficient committor test and revealed the mechanism of the ligand-induced domain closure ., Figure 5B shows the hydrogen bond ( H-bond ) patterns between the ligand and the protein observed in the average structures at ( before the TSE ) and ( after the TSE ) ., At , Thr31:OG1 ( AMPbd ) forms an H-bond with N7 of AMP adenine , and Gly85:O ( CORE ) forms one with adenine N6 ., These two H-bonds mediate the hinge bending of the AMPbd-CORE domains ., In addition , the H-bond between Gly85:O and adenine N6 helps the enzyme to distinguish between AMP and GMP; GMP lacks an NH2 group in the corresponding position of AMP 34 ., This means that the specificity of AMP-binding operates at an early stage of the ligand binding process ., At ,
the AMPbd-CORE distance becomes smaller than that at , which allows the formation of 3 additional H-bonds with the ligand: Gln92:OE1 ( CORE ) and adenine N6 , Lys57:O ( AMPbd ) and the ribose O2 , and Arg88:NH1 ( CORE ) and O1 of AMP ., The resulting rapid enthalpy decrease stabilizes the closed conformation ., Gln92:OE1 is also important in establishing AMP specificity; GMP lacks the counterpart atom , adenine N6 ., The strictly conserved Arg88 residue is known to be crucial for positioning AMP so as to suitably receive a phosphate group from ATP 35 ., With regard to the AMPbd closure , our result suggests that Arg88 ( CORE ) , in conjunction with Lys57 ( AMPbd ) , works to block adenine release from the exit channel and to further compact the AMPbd-CORE domains ., A remaining question is how closure of the LID domain follows that of the AMPbd domain ., Unlike the AMP-binding pocket , the ATP-binding sites , including the P-loop , are surrounded by charged residues , which attract interfacial water molecules ., Upon LID closure , most of these water molecules will be dehydrated from the enzyme , but some may remain occluded ., To characterize the behaviors of these water molecules , the 3D distribution functions of their oxygen and hydrogen constituents were calculated along the MFEP using the MBAR method ( see Materials and Methods ) ., Figures 6A , 6B , and 6C display the isosurface representations of the 3D distribution functions around the P-loop at , 41 , and 42 , respectively ., The surfaces show the areas in which the atoms are distributed four times as probably as in the bulk phase ., At , the ATP phosphates are not yet bound to the P-loop because an occluded water molecule ( encircled ) is wedged between the phosphate and the P-loop , inhibiting binding of ATP and bending of the side-chain of "invariant lysine" ( Lys13 ) , a residue that plays a critical role in orienting the phosphates to the proper catalytic position 36 ., This occluded water
molecule may correspond to that found in the crystal structure of apo-AK ( PDBid: 4ake ) ( Figure 6D , encircled ) ., Figures 6B and 6C clearly demonstrate that , upon removal of this water molecule , the ATP phosphates begin binding to the P-loop ., These observations were confirmed by plots of the PMF surface mapped onto a space defined by the LID-CORE distance versus the index of image ( Figure 6F ) , which shows that the PMF decreases discontinuously upon dehydration followed by LID domain closure ., Interestingly , compared with the crystal structure ( PDBid: 1ake ) ( Figure 6E ) , the position of the ATP moiety is shifted to the AMP side by one monophosphate unit ., This may be a consequence of early binding of the AMP moiety ., At a later stage ( around ) , this mismatch was corrected to form the same binding mode as observed in the crystal structure ., This reformation of the binding mode may be induced by the tight binding of ATP adenine to the LID-CORE domains , and will not occur in the real enzymatic system containing ATP and AMP instead of the bisubstrate analog inhibitor , Ap5A ., In this study , we have applied the on-the-fly string method 5 and the MBAR method 11 to the conformational change of an enzyme , adenylate kinase , and successfully obtained the MFEP ( Figures 2A and 2B ) ., The MFEP yielded a coarse-grained description of the conformational transitions in the domain motion space ( Figures 2C and 2D ) ., At the same time , the atomistic-level characterization of the physical events along the MFEP provided a structural basis for the ligand-binding and the domain motions ( Figures 3–6 ) ., This kind of multiscale approach used here is expected to be useful generally for complex biomolecules since the full space sampling can be avoided in an efficient manner ., We have shown that in the TSE of holo-AK , the conformational transition is coupled to highly specific binding of the AMP moiety ., Our results have been validated by unbiased MD 
simulations ., The mechanism of the AMPbd domain closure is consistent with that proposed by the induced-fit model ( Figure S7A ) , and follows a process similar to that of protein kinase A , previously investigated by a coarse-grained model 32:, ( i ) the insertion of the ligand into the binding cleft initially compacts the system;, ( ii ) additional contacts between the ligand and non-hinge region further compact the system ., The closure of the LID domain is more complicated ( Figure S7B ) ., It was shown that apo-AK can exist in a partially closed state , stabilized by the “cracking” of the LID-CORE hinge and the P-loop , even with no ligand present ., The cracking of the hinge region enables rearrangement of molecular interactions for ATP-binding , which induces a smooth bending of the hinge ., Along with the LID closure , ATP is conveyed into the P-loop , with removal of an occluded water molecule ., The closure of the LID domain follows the “population-shift followed by induced-fit” scenario discussed in Ref ., 37 , in which a transient local minimum is shifted toward the closed conformation upon ligand binding ., This two-step process of the LID domain closure is similar to the two-step mechanism reported in recent simulation studies of the Lysine- , Arginine- , Ornithine-binding ( LAO ) protein 38 and the maltose binding protein 39 ., In holo-AK , AMPbd domain closure occurs early ( at ) , while the LID domain closes at later stages ( ) ., An interesting question is whether an alternative pathway is possible in the presence of the real ligands ( ATP and AMP ) instead of Ap5A ., Ap5A artificially restrains the distance between the ATP and AMP moieties ., During the process with real ligands , the dynamics of the LID and AMPbd domains is expected to be less correlated ., Nevertheless , for full closing of the LID domain , we conjecture that the AMPbd domain should be closed first , enabling the interactions on the LID-AMPbd interface to drive the dehydration 
around the P-loop ., This suggests that full recognition of ATP by the LID-CORE domains occurs at a later stage of the conformational transition ., This conjecture may be related to the lower specificity of E . coli AK for ATP compared with AMP 34 ., Nonspecific AMP-binding to the LID domain has previously been suggested to explain the observed AMP-mediated inhibition of E . coli AK at high AMP concentrations 40 ., A missing ingredient in the present study is the quantitative decomposition of the free energy in each event , such as the ligand binding and the interactions on the LID-AMPbd interface ., For enhanced understanding of the conformational change , our methods could be complemented by the alchemical approach 41 ., Varying the chemical compositions of the system during the conformational change would enable us to elucidate the effects of ligand binding , cracking , and dehydration in a more direct manner ., We prepared three systems from the following initial structures:, ( i ) "apo-open system" , X-ray crystal structure of the open-form without ligand ( PDBid: 4ake 28 ) ,, ( ii ) "holo-closed system" , crystal structure of closed-form with Ap5A ( PDBid: 1ake 29 ) ,, ( iii ) "apo-closed system" , structure created by removing Ap5A from the holo-closed system ., The protonation states of the titratable groups at pH 7 were assigned by PROPKA 42 , implemented in the PDB2PQR program package 43 , 44 ., The apo-open and apo-closed systems yielded identical assignments , which were used also for the holo-closed system ., These systems were solvated in a periodic boundary box of water molecules using the LEaP module of the AMBER Tools ( version 1 . 4 ) 45 ., A padding distance of 12 Å from the protein surface was used for the apo-open system ., For the apo-closed and holo-closed systems , a longer padding distance of 20 Å was used to avoid interactions with periodic images during the closed-to-open transition ., Two Na+ ions were added to neutralize the closed-apo and open-apo systems , while seven Na+ ions were required to neutralize the closed-holo system ., The systems were equilibrated under the NVT condition at 300 K by the following procedure: First , the positions of solvent molecules and hydrogen atoms of the protein ( and Ap5A ) were relaxed by 1 , 000 step minimization with restraint of non-hydrogen atoms ., Under the same restraints , the system was gradually heated up to 300 K over 200 ps , followed by 200 ps MD simulation under the NVT condition at 300 K while gradually decreasing the restraint forces to zero , but keeping the restraints on atoms needed in the string method ., The system was further equilibrated by 200 ps MD simulation under the NPT condition ( 1 atm and 300 K ) , adjusting the density of the water environment to an appropriate level ., The ensemble was finally switched back to NVT , and subjected to additional 200 ps simulation at 300 K , maintaining the restraints ., The equilibration process was conducted using the Sander module of Amber 10 45 , with the AMBER FF03 force field 46 for the protein , and TIP3P for water molecules 47 ., The parameters for Ap5A were generated by the Antechamber module of AMBER Tools ( version 1 . 4 ) 45 using the AM1-BCC charge model and the general AMBER Force Field ( GAFF ) 48 ., Covalent bonds involving hydrogen atoms were constrained by the SHAKE algorithm 49 with 2 fs integration time step ., Long-range electrostatic interactions were evaluated by the particle mesh Ewald method 50 with a real-space cutoff of 8 Å ., The Langevin thermostat ( collision frequency 1 ps−1 ) was used for the temperature control ., The production runs , including the targeted MD , the on-the-fly string method , the umbrella sampling , and the committor test , were performed with our class library code for multicopy and multiscale MD simulations ( which will soon be available ) T . Terada et al . , unpublished , using the same parameter set described above ( unless otherwise noted ) ., Protein structures and the isosurfaces of solvent density were drawn with PyMOL ( Version 1 . 3 , Schrödinger , LLC ) ., The calculations were performed using the RIKEN Integrated Cluster of Clusters ( RICC ) facility ., It has been shown that normal modes or principal modes provide a suitable basis set for representing domain motions of proteins 2 , 10 ., In particular , it has been argued that the conformational change in AK can be captured by a set of principal modes of apo-AK 31 , 51 ., In this study , we have defined the collective variables for the on-the-fly string method using the principal components of apo-AK ., The PCA was carried out in the following manner: After the equilibration process , 3 ns MD simulations were executed at 300 K without restraint for both apo-open and apo-closed systems ., The obtained MD snapshots from both systems were combined in a single PCA 52 , removing the external contributions by iteratively superimposing them onto the average coordinates 53 , 54 ., The PCA was then conducted for the Cartesian coordinates of the atoms ., It was found that the first principal mode representing the largest-amplitude merely represents the difference between the open and
closed conformations ., The fluctuations in the two structures were expressed in the principal modes of smaller amplitudes ., The cumulative contributions of these modes ( ignoring the first ) are shown in Figure S8 ., As expected , the principal modes represent the collective motions of the LID and AMPbd domains ( Figure S9 ) ., The first 20 principal components ( 82% cumulative contribution , ignoring that of the first ) were adopted as the collective variable of the string method ., These components were sufficient to describe the motions of three domains in AK for which at least degrees of freedom are required in the rigid-body approximation ., The additional eight degrees of freedom were included as a buffer for possible errors in the estimation of the principal modes ., The sum of the canonical correlation coefficients between the two sets of the 20 principal components , one calculated using the samples of the first half ( 0–1 . 5 ns ) snapshots and the other using the last half ( 1 . 5–3 ns ) snapshots , was 11 . 8 ( ∼12 ) , suggesting that the subspace of the domain motions was converged . | Introduction, Results, Discussion, Materials and Methods | Large-scale conformational changes in proteins involve barrier-crossing transitions on the complex free energy surfaces of high-dimensional space ., Such rare events cannot be efficiently captured by conventional molecular dynamics simulations ., Here we show that , by combining the on-the-fly string method and the multi-state Bennett acceptance ratio ( MBAR ) method , the free energy profile of a conformational transition pathway in Escherichia coli adenylate kinase can be characterized in a high-dimensional space ., The minimum free energy paths of the conformational transitions in adenylate kinase were explored by the on-the-fly string method in 20-dimensional space spanned by the 20 largest-amplitude principal modes , and the free energy and various kinds of average physical quantities along the pathways were successfully evaluated by the MBAR method ., The influence of ligand binding on the pathways was characterized in terms of rigid-body motions of the lid-shaped ATP-binding domain ( LID ) and the AMP-binding ( AMPbd ) domains ., It was found that the LID domain was able to partially close without the ligand , while the closure of the AMPbd domain required the ligand binding ., The transition state ensemble of the ligand bound form was identified as those structures characterized by highly specific binding of the ligand to the AMPbd domain , and was validated by unrestrained MD simulations ., It was also found that complete closure of the LID domain required the dehydration of solvents around the P-loop ., These findings suggest that the interplay of the two different types of domain motion is an essential feature in the conformational transition of the enzyme .
| Conformational transitions of proteins have been postulated to play a central role in various protein functions such as catalysis , allosteric regulation , and signal transduction ., Among these , the relation between enzymatic catalysis and dynamics has been particularly well-studied ., The target molecule in this study , adenylate kinase from Escherichia coli , exists in an open state which allows binding of its substrates ( ATP and AMP ) , and a closed state in which catalytic reaction occurs ., In this molecular simulation study , we have elucidated the atomic details of the conformational transition between the open and the closed states ., A combined use of the path search method and the free energy calculation method enabled the transition pathways to be traced in atomic detail on micro- to millisecond time scales ., Our simulations revealed that two ligand molecules , AMP and ATP , play a distinctive role in the transition scenario ., The specific binding of AMP into the hinge region occurs first and creates a bottleneck in the transition ., ATP-binding , which requires the dehydration of an occluded water molecule , is completed at a later stage of the transition . | computational chemistry, molecular dynamics, biophysical simulations, chemistry, biology, computational biology
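The combined-snapshot PCA used above to build the string-method collective variables (pool the apo-open and apo-closed trajectories, diagonalize the covariance of the Cartesian coordinates, and keep the top components while treating the first, open–closed difference mode separately) can be sketched in Python. This is a minimal sketch under stated assumptions: the function names, the SVD route to the covariance eigenvectors, and the `skip_first` handling of the first mode are illustrative, and frames are assumed to have already been superimposed onto the average coordinates so external motion is removed.

```python
import numpy as np

def pca_modes(snapshots, n_components=20, skip_first=1):
    """PCA of Cartesian MD snapshots (array of shape frames x 3N).

    Assumes external rotation/translation was removed by prior
    superposition.  Returns the selected principal modes (as rows) and
    their variance fractions computed while ignoring the first
    `skip_first` modes, mirroring the text's "cumulative contribution,
    ignoring that of the first" convention (an assumption here).
    """
    X = snapshots - snapshots.mean(axis=0)      # center the data
    # SVD of the centered data: rows of Vt are covariance eigenvectors
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(X) - 1)                 # variance of each mode
    sel = slice(skip_first, skip_first + n_components)
    return Vt[sel], var[sel] / var[skip_first:].sum()

def project(snapshots, modes):
    """Collective variables: projection of each frame onto the modes."""
    X = snapshots - snapshots.mean(axis=0)
    return X @ modes.T
```

With 20 retained modes, `project` yields the 20 collective variables per frame, and the cumulative sum of the returned variance fractions would correspond to the 82% figure quoted in the text.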
687 | journal.pcbi.1000177 | 2,008 | Top-Down Analysis of Temporal Hierarchy in Biochemical Reaction Networks | The network of interactions that occur between biological components on a range of various spatial and temporal scales confer hierarchical functionality in living cells ., In order to determine how molecular events organize themselves into coherent physiological functions , in silico approaches are needed to analyze how physiological functions emerge from the evolved temporal structure of networks ., Time scale decomposition is a well-established , classical approach to dissecting network dynamics and there is a notable history of analyzing the time scale hierarchy in metabolic networks and matching the events that unfold on each time scale with a physiological function 1–6 ., This approach enables the identification of the independent , characteristic time scales for a dynamic system ., In particular it has been possible to decompose a cell-scale kinetic model of the human red blood cell in time to show how its key metabolic demands are met through a dynamic structure-function relationship ., The underlying principle is one of aggregation of concentration variables into ‘pools’ of concentrations that move in tandem on slower time scales 5 , 7 ., The dynamics of biological networks characteristically span large time scales ( 8 to 10 orders of magnitude ) , which contributes to the challenge of analyzing and interpreting related models ., However , there is structure in this dynamic hierarchy of events , particularly in biochemical networks in which the fastest motions generally correspond to the chemical equilibria between metabolites , and the slower motions reflect more physiologically relevant transformations ., Appreciation of this observation can result in elucidating structure from the network and simplifying the interactions ., The reduction in dynamic dimensionality is based on such pooling and the analysis of pooling is focused in the underlying 
time scale hierarchy and its determinants ., Understanding the time scale hierarchy and pooling structure of these networks is critical to understanding network behavior and simplifying it down to the core interactions ., Top-down studies of dynamic characteristics of networks begin with fully developed kinetic models that are formal representations of large amounts of data about the chemistry and kinetics of component interactions ., Network properties can be studied by numerical simulations ( that are condition-specific ) or by analysis ( that often yields general model properties ) of the model equations ., Since comprehensive numerical simulation studies become intractable for larger networks and the identification of general model properties is needed for the judicious simplification of models , there is a need for analysis based methods in order to characterize properties of dynamic networks ., In this study we present an in silico analysis method to determine pooling of variables in complex dynamic models of biochemical reaction networks ., This method is used to study metabolic network models and allows us to identify and analyze pool formation resulting from the underlying stoichiometric , thermodynamic , and kinetic properties ., The models studied here exhibit a significant span of time scales ( Table 1 ) ., A hierarchy of pool formation on different time scales was found in all networks based on the calculation of all pairwise ϑij ( k ) in the models ( Figures 1C and 2 ) ., The results can be presented in a symmetric correlation tiled array , where each entry can be used to represent k for a pair of concentrations ., Figure 3 shows the result of such an array for the human red cell ., Since the array is symmetric we can display both k and the modal coefficient ratio in the pool ( xi/xj ) for each pair of concentrations; thus The time scale ( k ) for the formation of pools and the ratio between a pair of concentrations are functions of three factors: network
stoichiometry ( or topology ) , thermodynamics , and kinetic properties of the transformations in the network ., Viewing the dynamics of the network in terms of the modal matrix and the pair-wise concentration correlations on progressing time scales enables one to consider the questions of ( A ) the thermodynamic versus kinetic control of concentrations within the whole network and ( B ) the delineation of kinetic versus topological decoupling in networks ., The method described above was developed , tested , and implemented in Mathematica ( Wolfram Research , Chicago , IL ) version 5 . 2 ., The models analyzed herein: the model of human red cell metabolism 20–22 , human folate metabolism 23 , and yeast glycolysis 24 were implemented in Mathematica ., For each model , a stable steady state was identified by integrating the equations over time until the concentration variables no longer changed ( error <1×10−10 , see Table S1 ) ., The Jacobian was then calculated symbolically at that steady state condition ., Temporal decomposition was carried out as described in the Results/Discussion section ., Briefly for a general case , a similarity transformation 8 of a square matrix , A , is given by A = DΛD−1 in which D is invertible ( by definition ) and Λ is a diagonal matrix ., D is an orthogonal matrix composed of eigenvectors corresponding to the entries of Λ ( the eigenvalues ) ., When the Jacobian matrix for a first order differential equation with respect to time is decomposed in this manner , the negative reciprocals of the eigenvalues correspond to the characteristic time scales for the corresponding modes 8 ( this is immediately clear upon integration of Equation 4 ) ., All three of the models considered here exhibited at least one pair of complex conjugate eigenvalues at the steady states considered , hence the corresponding complex conjugate modes were combined in order to eliminate oscillating motions ., The calculations for the correlations across
progressive time scales were carried out as described in Results/Discussion ., Once the modal matrix , M−1 , was calculated , all pairwise angles between the metabolites ( columns of the modal matrix ) were calculated ( see Equation 5 ) ., The modal matrix is rank-ordered from the fastest ( k = 1 ) to the slowest ( k = n ) modes ., The angles between the columns of the modal matrix were recalculated n−1 more times , in which an additional row of the modal matrix is zeroed out at each iteration ., For example , at the third iteration ( k = 2 ) , the first two rows of the modal matrix have been zeroed out ., The spectrum of correlation cut-off values for pooling was considered from 10% to 99% ., Cut-off values in the range 85% to 95% resulted in pooling of variables most consistent with the known pooling structures of the human red cell 2 , 5 ., A value of 90% was used as the correlation cutoff for the red cell , folate , and yeast glycolysis models ., The angle between two zero vectors was classified as undefined and the angle between any zero vector and another vector with at least one non-zero element was defined as 90° ., Fragmentation of the pooling structure , in the strictest sense , was identified by any 0 entry ( or <∼10−13 ) in the final row of the metabolite modal matrix ., Values for the Gibbs standard free energies of formation for the metabolites in the human red cell model were used from 25 . | Introduction, Results/Discussion, Materials and Methods | The study of dynamic functions of large-scale biological networks has intensified in recent years ., A critical component in developing an understanding of such dynamics involves the study of their hierarchical organization ., We investigate the temporal hierarchy in biochemical reaction networks focusing on: ( 1 ) the elucidation of the existence of “pools” ( i . e .
, aggregate variables ) formed from component concentrations and ( 2 ) the determination of their composition and interactions over different time scales ., To date the identification of such pools without prior knowledge of their composition has been a challenge ., A new approach is developed for the algorithmic identification of pool formation using correlations between elements of the modal matrix that correspond to a pair of concentrations and how such correlations form over the hierarchy of time scales ., The analysis elucidates a temporal hierarchy of events that range from chemical equilibration events to the formation of physiologically meaningful pools , culminating in a network-scale ( dynamic ) structure– ( physiological ) function relationship ., This method is validated on a model of human red blood cell metabolism and further applied to kinetic models of yeast glycolysis and human folate metabolism , enabling the simplification of these models ., The understanding of temporal hierarchy and the formation of dynamic aggregates on different time scales is foundational to the study of network dynamics and has relevance in multiple areas ranging from bacterial strain design and metabolic engineering to the understanding of disease processes in humans . 
| Cellular metabolism describes the complex web of biochemical transformations that are necessary to build the structural components , to convert nutrients into “usable energy” by the cell , and to degrade or excrete the by-products ., A critical aspect toward understanding metabolism is the set of dynamic interactions between metabolites , some of which occur very quickly while others occur more slowly ., To develop a “systems” understanding of how networks operate dynamically we need to identify the different processes that occur on different time scales ., When one moves from very fast time scales to slower ones , certain components in the network move in concert and pool together ., We develop a method to elucidate the time scale hierarchy of a network and to simplify its structure by identifying these pools ., This is applied to dynamic models of metabolism for the human red blood cell , human folate metabolism , and yeast glycolysis ., It was possible to simplify the structure of these networks into biologically meaningful groups of variables ., Because dynamics play important roles in normal and abnormal function in biology , it is expected that this work will contribute to an area of great relevance for human disease and engineering applications . | mathematics, biochemistry/chemical biology of the cell, biochemistry/bioinformatics, computational biology/metabolic networks, biotechnology/bioengineering, biochemistry/theory and simulation | null |
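The temporal decomposition described in the first record above (eigendecomposing the steady-state Jacobian, taking the negative reciprocals of the eigenvalues as characteristic time scales, and measuring pairwise angles between modal-matrix columns while the fastest modes are progressively zeroed out) can be sketched with NumPy. This is an illustrative reconstruction, not the authors' Mathematica implementation: the function names and the toy two-species Jacobian are invented for the example, and the combination of complex conjugate modes is omitted by assuming real eigenvalues.

```python
import numpy as np

def temporal_decomposition(J):
    """Eigendecompose a Jacobian J = D @ diag(lam) @ inv(D) and return
    (time scales, modal matrix), with modes ordered fastest to slowest.
    The characteristic time scale of mode i is -1/lam[i]."""
    lam, D = np.linalg.eig(J)
    lam, D = lam.real, D.real              # assumes no complex-conjugate (oscillatory) modes
    order = np.argsort(-1.0 / lam)         # ascending time scale = fastest mode first
    return -1.0 / lam[order], np.linalg.inv(D)[order, :]

def pair_angle_deg(M_inv, i, j, k):
    """Angle between metabolite columns i and j of the modal matrix after
    zeroing the k fastest rows (modes); a small angle = correlated motion."""
    v, w = M_inv[k:, i], M_inv[k:, j]
    nv, nw = np.linalg.norm(v), np.linalg.norm(w)
    if nv < 1e-13 and nw < 1e-13:
        return float("nan")                # angle between two zero vectors: undefined
    if nv < 1e-13 or nw < 1e-13:
        return 90.0                        # zero vs non-zero vector: defined as 90 degrees
    c = np.clip(abs(np.dot(v, w)) / (nv * nw), 0.0, 1.0)
    return float(np.degrees(np.arccos(c)))

# Toy two-species system: fast interconversion x1 <-> x2 on top of slow decay.
J = np.array([[-2.0, 1.0],
              [1.0, -2.0]])               # eigenvalues -3 (fast) and -1 (slow)
tau, M_inv = temporal_decomposition(J)
print(tau)                                 # time scales, fastest first: [1/3, 1]
print(pair_angle_deg(M_inv, 0, 1, 0))      # 90 deg: uncorrelated on the fast time scale
print(pair_angle_deg(M_inv, 0, 1, 1))      # ~0 deg: x1 and x2 pool once the fast mode relaxes
```

On this toy network, a 90% correlation cutoff (|cos θ| ≥ 0.9, i.e. θ below about 26°) would report x1 and x2 as unpooled on the fast time scale and pooled into the aggregate x1 + x2 on the slow one, mirroring the cutoff used in the record for the red cell, folate, and glycolysis models.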
2,247 | journal.pcbi.1000397 | 2,009 | Integrating Statistical Predictions and Experimental Verifications for Enhancing Protein-Chemical Interaction Predictions in Virtual Screening | In the early stages of the drug discovery process , prediction of the binding of a chemical compound to a specific protein can be of great benefit in the identification of lead compounds ( candidates for a new drug ) ., Moreover , the effective screening of potential drug candidates at an early stage generates large cost savings at a later stage of the overall drug discovery process ., In the field of virtual screening for the drug discovery , docking analyses and molecular dynamics simulations have been the principal methods used for elucidating the interactions between proteins and small molecules 1–4 ., Fast and accurate statistical prediction methods for binding affinities of any pair of a protein and a ligand have also been proposed for the case where information regarding 3D structures , binding pockets and binding affinities ( e . g . 
pKi ) for a sufficient number of pairs of proteins and chemical compounds is available 5 ., However , the requirement of these programs for 3D structural information is a severe disadvantage , as the availability of these data is extremely limited ., Although the number of structures in PDB 6 is increasing ( from 23 , 642 structures in 2003 to 48 , 091 structures in 2007 ) , not all proteins which have been derived from many genome-sequencing projects are suitable for experimental structure determination ., Hence , the genome-wide application of these methods is in fact not feasible ., For example , among the GPCRs ( G-protein coupled receptors ) , whose modulation underlies the actions of 30% of the best-known commercial drugs 7 , the full structure of only a few mammalian members , including bovine rhodopsin 8 and human beta 2 adrenoreceptor 9 , is known ., To achieve more comprehensive and faster protein-chemical interaction predictions in the post-genome era , which is producing a vast number of protein sequences whose structural information is not available , it is essential to be able to utilize more readily available biological data and more generally applicable methods which do not require 3D structural data 10–12 ., In our previous study , we developed a comprehensively applicable statistical method for predicting the interactions between proteins and chemical compounds by exploiting very general biological data , including amino acid sequences , 2-dimensional chemical structures , and mass-spectrometry ( MS ) data 11 ., These statistical approaches provided a novel framework where the input space consists of pairs of proteins and chemical compounds ., These pairs are classified into binding and non-binding pairs , while most chemoinformatics approaches assess only chemical compounds and classify them according to their pharmacological effects ., Our previous study 11 demonstrated that screening target proteins for a chemical compound could be performed on a
genome-wide scale ., This is due to the fact that our method can be applied to all proteins whose amino acid sequences have been determined even though the 3D structural data is not yet available ., Genome-wide target protein predictions were conducted for MDMA , or ecstasy , which is one of the best known psychoactive drugs , from a pool of 13 , 487 human proteins , and known bindings of MDMA were correctly predicted 11 ., Although the method yielded a relatively high prediction performance ( more than 80% accuracy ) in cross-validation and usefulness in the comprehensive prediction of target proteins for a given chemical compound with tens of thousands of prediction targets 11 , it suffered from the problem of predicting many false positives when comprehensive predictions were conducted ., Although these false positives might include some unknown true positives , they were mainly due to the low quality of the negative data , which is one of the common problems in utilizing statistical classification methods such as Support Vector Machines ( SVMs ) and Artificial Neural Networks ( ANNs ) ., In this paper , we describe two strategies , namely two-layer SVM and reasonable negative data design , which are used for the purpose of reducing the number of false positives and improving the applicability of our method for comprehensive prediction ., In two-layer SVM , in which outputs produced by the first-layer SVM model are utilized as inputs to the second-layer SVM , in order to design negative data which produce fewer false positives , we iteratively constructed SVM models or classification boundaries and selected negative sample candidates according to pre-determined rules ., By using these two strategies , the number of predicted candidates was reduced to around 100 ( Table, 1 ) in experiments in which the potential ligands for some druggable proteins ( UniProt ID P10275 ( androgen receptor ) , P11229 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 
receptor ) ) are predicted on the basis of more than 100 , 000 compounds in the PubChem Compound database ( http://pubchem . ncbi . nlm . nih . gov/ ) ., With the aim of validating the usefulness of our method , our proposed prediction model with fewer false positives was applied to the PubChem Compound database in order to predict the potential ligands for the “androgen receptor” , which is one of the genes responsible for prostate cancer ., We verified some of these predictions by measuring the IC50 values in an in vitro assay ., Biological experiments , conducted to verify the computational predictions based on statistical methods , docking methods or molecular dynamics methods , typically involve success as well as failure ., In addition to fast calculation and wide applicability , one of the merits of using statistical methods that involve training with known data is that results obtained by verification experiments can be efficiently utilized as feedback to produce new and more reliable predictions ., Most previous work on virtual screening has focused on the computational prediction and listing of dozens or hundreds of candidates , followed by their experimental verification ., However , only on rare occasions have these experimental results been utilized for the further improvement of computational predictions and experiments ., Moreover , even without verification experiments , additional data acquired from , for example , relevant literature can be used for enhancing the prediction reliability ., Therefore , we propose a strategy based on the effective combination of computational prediction and experimental verification ., Our second computational prediction utilizing feedback from the first experimental verification successfully discovered novel ligands ( Figure 1 and, 2 ) for the androgen receptor ., Our approach suggests the significance of utilizing statistical learning methods and feedback from experimental results in drug lead discovery ., In the 
following section , we first describe the real application of our method involving the computational prediction , the experimental verification and the feedback , and then explain the computational experiments conducted to verify the usefulness of our computational prediction method in comprehensive prediction ., In bioinformatics , statistical approaches extract rules from numerical data corresponding to biological properties ., Here , it is not guaranteed that the extracted rules are biologically valid , and furthermore it is possible to utilize statistical methods to obtain general rules from any kind of numerical data which are meaningless and irrelevant to biological properties ., The biological relevance of our approach can be verified as follows on the basis of supporting evidence which indicates that our method can extract significant rules only if biologically valid and relevant data is given ., First , high prediction performances on diverse datasets might support the validity of our approach ., In several datasets consisting of known pairs of proteins ( including nuclear receptors , GPCRs , ion channels and enzymes ) and drugs , together with random protein-drug pairs , our statistical approach with SVM showed high prediction performances ( details are provided in Text S1 , Table S1 and Figure S2 ) ., The fact that more than 0.85 AUC and an accuracy of 80% were obtained for diverse datasets suggests that it is possible to extract some properties accountable for interactions between proteins and drugs by statistical approaches ., This possibility can be further supported by the fact that integrating several datasets whose target proteins were not relevant to each other improved the prediction performances with respect to pairs of proteins and chemical compounds which had a specific binding mode ( details are provided in Text S1 and Table S2 ) ., Second , we showed the biological relevance of these high prediction performances by calculating the prediction performances using biologically meaningless artificial datasets as positives ., Several datasets which contained fractions of valid samples found in the DrugBank dataset , and which comprised artificial pseudo-positive samples of protein-chemical pairs produced by shuffling with the same frequency of chemical compounds and proteins as that in the DrugBank dataset , were generated ., Our method was applied to these shuffled artificial datasets ( Figure 3 ) ., Here , if our approach did not depend on the biological properties of the given dataset but only succeeded in classifying given pairs comprising a protein and a chemical compound and random pairs derived from them , the prediction accuracy for each shuffled dataset was assumed not to fluctuate ., As shown in Figure 3 , the prediction accuracy was proportional to the content rate of the biologically valid samples ., Therefore , the classification of our approach was shown to function only when a certain amount of biologically valid pairs comprising a protein and a chemical compound is given ., This result suggests that our statistical approach succeeds in extracting the rules which are only relevant for the biological binding properties ., It is often observed that although statistical learning approaches achieve very high prediction performances in given datasets , statistical
prediction models suffer from the problem of generating vast prediction sets including many false positives when applied to a huge dataset , such as the PubChem database ., In our approach , SVM models based on feature vectors directly representing amino acid sequences , chemical structures , and random protein-compound pairs as negatives also produced many predictions and inevitably yielded many false positives ( Table 1A random ) ., Upon the introduction of the two-layer SVM and the negatives designed to overcome this drawback , the prediction precision , or the confidence of positive prediction , was significantly improved in computational experiments based on the DrugBank dataset ( Table 2 ) ., In Table 2 , the external dataset consisted of 170 positives and 2 , 450 negatives that were randomly chosen from 1 , 731 positives and 24 , 500 designed negatives with the mlt rule ( details are provided in Materials and Methods ) and that were excluded in constructing first-layer and second-layer SVM models ., The external dataset contained many more negatives than positives as it simulated the real application of virtual screening with vast databases where only a fraction of chemical compounds in the databases have the effect of interest ., Tables 2A and 2B showed the improvement of precision obtained by introducing the designed negatives and the two-layer SVM , respectively ., Table 2B also indicated that the application of SVM to outputs of the first-layer SVM models was superior to other statistical learning methods 15 and to naive combination of the first-layer SVM models , and that rational selection of the first-layer SVM models achieved significantly higher precision ( P-value = 0.0081 by t test ) than randomly selected models ( other comparisons are provided in Text S1 , Table S3 and Table S4 ) ., Particularly , the second-layer SVM utilizing the allpos first-layer SVM models achieved higher precision than use of higher thresholds in the other SVM models ( Table 2C ) ., The high precision contributes to the selection of more reliable predictions and thus to the reduction of the number of false positives ., Following these results on given datasets , our approaches were evaluated with respect to comprehensive binding ligand prediction ., For three proteins ( UniProt ID P10275 ( androgen receptor ) , P11229 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 receptor ) ) , their binding ligands were predicted from PubChem Compound 0000001–00125000 , which contains 109 , 841 compounds ( Table 1 ) ., Here , P35367 and P11229 are the two most frequently targeted proteins in the DrugBank dataset , and P10275 is a protein of average occurrence in the DrugBank dataset ., Among the 109 , 841 compounds , 47 , 45 , and 5 known ligands were included for P35367 , P11229 , and P10275 , respectively ., As shown in Tables 1A , 1B and 1C , the use of carefully selected negatives , the introduction of the two-layer SVM , and the integration of these two approaches efficiently reduced the number of predictions and thus the number of false positives ., For example , in comparison to Tables 1A and 1C , the number of candidates discovered by using the max dataset in the allpos two-layer SVM approach was about one fiftieth of the number of chemical compounds predicted by using the random negative dataset in the one-layer SVM ., Furthermore , in comparison to other approaches based solely on the use of chemical compounds ( Tables 1D and 1E ) , our approaches gave a reasonable number of predictions ( other comparisons are described in Text S1 and Tables S5 , S6 , S7 ) ., These results suggest that our prediction models select a reasonable number of
ligand candidates from all chemical compounds in large databases and encourage the comprehensive binding ligand prediction for the target protein ., The experimental verification of the computational predictions produces feedback data or samples which are not included in the given training datasets ., The efficient utilization of these data can contribute to the fast identification of compounds with the desired properties and can be of advantage to statistical learning approaches ., We compared several strategies for utilizing feedback data as follows ., For three proteins ( UniProt ID P10275 ( androgen receptor ) , P11229 ( muscarinic acetylcholine receptor M1 ) and P35367 ( histamine H1 receptor ) ) , ligand data which were not included in the DrugBank dataset were collected from relevant literature 16–18 and public databases , the PDSP Ki database 19 and GLIDA 20 , in February 2008 ., Overall , 35 androgen receptor-ligand pairs , 49 muscarinic acetylcholine receptor M1-ligand pairs , and 1 , 060 histamine H1 receptor-ligand pairs were supplemented ., Additional models were constructed by using these supplemental pairs as positives ( details are provided in Text S1 ) ., As shown in Figure 4 , the use of the additional model with a sufficient weighting factor controlled the increase of the predictions with a slight decrease of the recall rate ., The use of large weighting factors results in the relative decrease of the influence of other first-layer SVM models derived from the DrugBank dataset in classification ., However , the low performance of “only additional model:st2” , shown in Figure 4A , where only one first-layer SVM model derived from additional data was used to construct the second-layer SVM model , indicates the need for first-layer SVM models derived from the DrugBank dataset as well as combinations of these first-layer SVM models with an additional first-layer SVM model ., With this efficient strategy for utilizing feedback data , computational
prediction and experimental verification improve each other to enable faster search toward the identification of useful small molecules ., We proposed a comprehensively applicable computational method for predicting the interactions between proteins and chemical compounds , in which the number of false positives was reduced in comparison to other methods ., Furthermore , we proposed the strategy for the efficient utilization of experimental feedback and the integration of computational prediction and experimental verification ., The application of our method to the androgen receptor resulted in 67% ( 4/6 ) prediction precision according to in vitro experimental verification in the first computational prediction and 60% ( 3/5 ) in the second prediction , which included the feedback of the first experimental verification ., However , these relatively low precision values do not represent the true statistical significance of the method ., This 60–70% precision can also be evaluated by using the following P-value . 
P-value = Σk=p…t C ( M , k ) C ( N−M , t−k ) /C ( N , t ) , i . e . the probability of obtaining at least p true positives among t tested compounds by chance ., Here , N is the number of prediction targets , M is the number of ligands potentially binding to the target proteins , t is the number of tested compounds , and p is the number of true positives ., With N = 19171127 , which is the number of chemical compounds in the PubChem Compound database , and M = 19171127 × ( 456/3000 ) × ( 7/964 ) ≒21160 , which is based on the optimistic assumption that all compounds can be regarded as potential drugs for some target protein , on the estimate that 3 , 000 druggable proteins exist 21 , and on the distribution of target proteins and drugs in the DrugBank dataset , consisting of 456 target proteins and 964 drugs , including 7 known ligands for the human androgen receptor , P-values of 2.21×10−11 and 1.34×10−8 are obtained for the prediction precision of the first and the second computational prediction , respectively ., These extremely small P-values prove the significance of the virtual screening and its precision in the drug discovery process ., These prediction performances are as good as or better than several previous virtual screening studies based mainly on docking analyses 22–24 ., For example , at a threshold of 100 µM , 7% precision ( 3/39 ) for Mycobacterium tuberculosis adenosine 5′-phosphosulfate reductase 22 , 71% precision ( 22/31 ) for Staphylococcus aureus methionyl-tRNA synthetase 23 and 8% precision ( 16/192 ) for human DNA ligase I 24 were obtained , respectively ., In addition , 0.566 AUC was achieved in the docking analysis using AutoDock 3 ( Figure 5 ) for the 17 chemical compounds ( 12 chemical compounds verified in the first experimental verification , with the exception of 6 known drugs , and 5 chemical compounds verified in the second experimental verification ) ., In contrast , 0.681 AUC was obtained with our method ., Here , in the calculation of AUC , the threshold level of IC50 = 100 µM for experimental verification was used to define a label ( binding or non-binding ) for each chemical compound , and the docking energy or the predicted probability was regarded as a value for each molecule ., Note that the docking analysis with AutoDock was not applied to the 19 , 171 , 127 compounds in the PubChem Compound database for the screening purpose , but was applied only to 17 compounds , which were the results of virtual screening by our method ., In terms of computational time , for binding prediction of one pair of a protein and a chemical compound , using one Opteron 275 2.2 GHz CPU , AutoDock took approximately 100 minutes on average with 100 genetic algorithm ( GA ) runs , while our method required less than 0.3 seconds ., These computational time comparisons indicate that our method can perform a virtual screening of more than 19 million chemical compounds from the PubChem Compound database for any protein on a genome-wide scale , and this immense screening task would be infeasible to accomplish with any of the existing docking methods ., Therefore , our statistical approach can contribute as the first fast and rather accurate virtual screening tool for the drug discovery process ., It can be followed by the application of more time-consuming but more informative approaches , such as docking analysis and molecular dynamics analysis , which can provide information regarding the binding affinities and the molecular binding mechanisms to outputs of the first screening ., In another perspective , the re-evaluation of statistical prediction approaches by using 23 chemical compounds experimentally verified in this study showed that our proposed methods , which utilized information of both protein sequence and chemical structures , were superior to a conventional LBVS ( Ligand Based Virtual Screening ) method where only structures of specific chemical
compounds were considered ( Figure 6 ) ., As shown in Figure 6A , our proposed methods ( “one-layer SVM” , “two-layer SVM-subpos” and “two-layer SVM-allpos” ) achieved a higher recall rate at ranks higher than 500 compared to a conventional Ligand Based Virtual Screening method ( “only compound SVM” in Figure 6A ) ., The fact that experimentally verified chemical compounds were identified at higher ranks in the pool by our proposed prediction models suggests that our proposed models were highly efficient with respect to the screening method ., Figure 6B also shows that our proposed methods were more successful than the LBVS method at discriminating between 15 experimentally verified binding and 8 non-binding ligands ., These comparisons suggest that our proposed method utilizing information of protein sequences as well as chemical structures can be regarded as a more useful substitute for usual ligand-based virtual screening methods utilizing only chemical structures ., Furthermore , the fact that the second computational prediction , or the use of feedback data , contributed to the discovery of novel ligands ( Figure 2B–D ) supports the utilization of statistical learning methods in virtual screening ., Regarding the computational prediction method used in this paper , we made the method available to the public as a web-based service named COPICAT ( COmprehensive Predictor of Interactions between Chemical compounds And Target proteins; http://copicat.dna.bio.keio.ac.jp/ ) ., The DrugBank dataset was constructed from Approved DrugCards data , which were downloaded in February 2007 from the DrugBank database 25 ., These data consist of 964 approved drugs and their 456 associated target proteins , constituting 1 , 731 interacting pairs or positives ., Given Np positive and Nn negative samples in known data and Mp positives and Mn negatives in additional or feedback data , a straightforward strategy for the integration of additional data into statistical training , such as SVM , is to train a statistical model based on a dataset consisting of Np+Mp positives and Nn+Mn negatives ., When the two-layer SVM strategy is applied , another strategy of feedback and supplement involves the utilization of an additional model based on additional data ., In this strategy , the second-layer SVM is trained on the basis of Np+Mp positives and Nn+Mn negatives , and a sample si in the second layer is represented as si = ( f1 ( si ) , … , fm ( si ) , w·g ( si ) ) ., Here , g ( si ) is an output of the additional model trained on the basis of Mp positives and Mn negatives , fj ( si ) is an output of the first-layer SVM model j , and w is a weighting factor ., AutoDock 4 3 was applied to the human androgen receptor ligand-binding domain ( PDB code: 2AM9 31 ) and tested compounds whose 3D structure was generated by Obgen in the Open Babel package ver . 2.2.0 32 or CORINA 33 ., The conditions of AutoDock followed Jenwitheesuk and Samudrala , 2005 34 ., ARG752 of 2AM9 , which was considered important for the binding of androgens by the human androgen receptor 31 , was set to a flexible residue in AutoDock . 
| Introduction, Results, Discussion, Materials and Methods | Predictions of interactions between target proteins and potential leads are of great benefit in the drug discovery process ., We present a comprehensively applicable statistical prediction method for interactions between any proteins and chemical compounds , which requires only protein sequence data and chemical structure data and utilizes the statistical learning method of support vector machines ., In order to realize reasonable comprehensive predictions which can involve many false positives , we propose two approaches for reduction of false positives:, ( i ) efficient use of multiple statistical prediction models in the framework of two-layer SVM and, ( ii ) reasonable design of the negative data to construct statistical prediction models ., In two-layer SVM , outputs produced by the first-layer SVM models , which are constructed with different negative samples and reflect different aspects of classifications , are utilized as inputs to the second-layer SVM ., In order to design negative data which produce fewer false positive predictions , we iteratively construct SVM models or classification boundaries from positive and tentative negative samples and select additional negative sample candidates according to pre-determined rules ., Moreover , in order to fully utilize the advantages of statistical learning methods , we propose a strategy to effectively feedback experimental results to computational predictions with consideration of biological effects of interest ., We show the usefulness of our approach in predicting potential ligands binding to human androgen receptors from more than 19 million chemical compounds and verifying these predictions by in vitro binding ., Moreover , we utilize this experimental validation as feedback to enhance subsequent computational predictions , and experimentally validate these predictions again ., This efficient procedure of the iteration of the in silico 
prediction and in vitro or in vivo experimental verifications with the sufficient feedback enabled us to identify novel ligand candidates which were distant from known ligands in the chemical space . | This work describes a statistical method that identifies chemical compounds binding to a target protein given the sequence of the target or distinguishes proteins to which a small molecule binds given the chemical structure of the molecule ., As our method can be utilized for virtual screening that seeks for lead compounds in drug discovery , we showed the usefulness of our method in its application to the comprehensive prediction of ligands binding to human androgen receptors and in vitro experimental verification of its predictions ., In contrast to most previous virtual screening studies which predict chemical compounds of interest mainly with 3D structure-based methods and experimentally verify them , we proposed a strategy to effectively feedback experimental results for subsequent predictions and applied the strategy to the second predictions followed by the second experimental verification ., This feedback strategy makes full use of statistical learning methods and , in practical terms , gave a ligand candidate of interest that structurally differs from known drugs ., We hope that this paper will encourage reevaluation of statistical learning methods in virtual screening and that the utilization of statistical methods with efficient feedback strategies will contribute to the acceleration of drug discovery . | chemical biology, mathematics/statistics, pharmacology/drug development, computational biology | null |
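The statistical significance quoted in the second record above (P-values of about 2.21×10−11 and 1.34×10−8 for the 4/6 and 3/5 in vitro hit rates) can be reproduced as a hypergeometric tail probability using only the Python standard library. The tail formulation and the function name are my own reconstruction from the record's definitions of N, M, t and p, but it recovers the reported values from the paper's own N and M.

```python
from math import comb

def screening_p_value(N, M, t, p):
    """Probability of hitting at least p true ligands by chance when t
    compounds are tested out of N candidates, M of which are ligands
    (upper tail of the hypergeometric distribution)."""
    return sum(comb(M, k) * comb(N - M, t - k) for k in range(p, t + 1)) / comb(N, t)

N = 19_171_127                              # compounds in the PubChem Compound database
M = round(N * (456 / 3000) * (7 / 964))     # assumed true ligands, as estimated in the record
print(M)                                    # 21160
print(screening_p_value(N, M, t=6, p=4))    # first prediction round (4/6 hits), ~2.2e-11
print(screening_p_value(N, M, t=5, p=3))    # second round with feedback (3/5 hits), ~1.3e-8
```

Because M/N is so small, the binomial approximation C(t, p)·(M/N)^p is already accurate here; these tiny tail probabilities are what justify calling the 60-70% experimental precision highly significant relative to a random screen.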
1,741 | journal.pcbi.1004933 | 2016 | Structural Determinants of Misfolding in Multidomain Proteins | Protein misfolding and aggregation are well-known for their association with amyloidosis and other diseases 1 , 2 ., Proteins with two or more domains are abundant in higher organisms , accounting for up to 70% of all eukaryotic proteins , and domain-repeat proteins in particular occupy up to 20% of the proteomes in multicellular organisms 3 , 4 , therefore their folding is of considerable relevance 5 ., Since there is often some sequence similarity between domains with the same structure , it is easy to imagine that multidomain proteins containing repeats of domains with the same fold might be susceptible to misfolding ., Indeed , misfolding of multidomain proteins has been observed in many protein families 6 ., Single molecule techniques have been particularly powerful for studying folding/misfolding of such proteins , in particular Förster resonance energy transfer ( FRET ) and atomic force microscopy ( AFM ) ., For instance , recent studies using single-molecule FRET , in conjunction with coarse-grained simulations , have revealed the presence of domain-swapped misfolded states in tandem repeats of the immunoglobulin-like domain I27 from the muscle protein Titin 7 ( an example is shown in Fig 1e ) ., Domain-swapping 2 involves the exchange of secondary structure elements between two protein domains with the same structure ., Remarkably , these misfolded states are stable for days , much longer than the unfolding time of a single Titin domain ., The domain-swapped misfolds identified in the Titin I27 domains are also consistent with earlier observations of misfolding in the same protein by AFM , although not given a structural interpretation at the time 8 ., In addition , AFM experiments have revealed what appears to be a similar type of misfolding in polyproteins consisting of eight tandem repeats of the same fibronectin type III domain
from tenascin ( TNfn3 ) 9 , as well as in native constructs of tenascin 8 , and between the N-terminal domains of human γD-crystallin when linked in a synthetic oligomer 10 ., In addition to domain-swapped misfolding , an alternative type of misfolded state is conceivable for polyproteins in which the sequences of adjacent domains are similar , namely the formation of amyloid-like species with parallel β-sheets ., Theoretical work in fact made the prediction that such species would be formed in tandem repeats of titin domains 11 ., Recently , time-resolved single-molecule FRET experiments on tandem domains of I27 have revealed a surprising number of intermediates formed at short times , which include an unexpected species that appears to be consistent with the previously suggested amyloid-like state 12 ., However , since only the domain-swapped species persisted till long times , and therefore are the most likely to be problematic in cells , we focus on their formation in this work ., A simplified illustration of the mechanism for folding and misfolding , based on both coarse-grained simulations as well as single-molecule and ensemble kinetics 7 , 12 , is shown in Fig 1 , using the Titin I27 domain as an example ., Starting from the completely unfolded state in Fig 1a , correct folding would proceed via an intermediate in which either one of the domains is folded ( Fig 1b ) , and finally to the fully folded state , Fig 1c ., The domain-swapped misfolded state , an example of which is shown in Fig 1e , consists of two native-like folds which are in fact assembled by swapping of sequence elements from the N- and C-terminal portions of the protein ., The final structure in Fig 1e comprises what we shall refer to as a “central domain” formed by the central regions of the sequence ( on the left in Fig 1e ) and a “terminal domain” formed from the N- and C-termini ( on the right ) ., The intermediate structure in Fig 1d , suggested by coarse-grained simulations 7 , and 
supported by experiment 12 , has only the central domain folded ., This central domain can itself be viewed as a circular permutant 13 of the original native Titin I27 structure , as discussed further below ., While domain-swapped misfolding of tandem repeats has been identified in a number of proteins to date , there are several other proteins for which it does not occur to a detectable level ., For instance , extensive sampling of repeated unfolding and folding of a polyprotein of Protein G ( GB1 ) by AFM revealed no indication of misfolded states , in contrast to Titin 14 ., Similarly , early AFM studies on polyUbiquitin also did not suggest misfolded intermediates in constant force unfolding 15–20 , and lock-in AFM studies of refolding 21 were fully consistent with a two-state folding model , without misfolding ., More recent AFM 22 studies have suggested the formation of partially folded or misfolded species , which have been attributed to partial domain swapping in simulations 23 , but these are qualitatively different from the fully domain-swapped species considered here ., Therefore , it is interesting to ask the general questions: when included in tandem repeats , what types of protein structures are most likely to form domain-swapped misfolded states , and by what mechanism ?, In order to investigate the misfolding propensity of different types of domains , we have chosen seven domains , based on, ( i ) the superfamilies with the largest abundance of repeats in the human genome 24 ,, ( ii ) proteins for which some experimental evidence for misfolding ( or lack thereof ) is available and, ( iii ) proteins for which data on folding kinetics and stability is available for their circular permutants ( only some of the proteins meet criterion, ( iii ) ) ., The circular permutant data are relevant because the misfolding intermediates suggested by simulations and experiment 7 , 12 can be viewed as circular permutants of the original structure ( Fig 1d ) ., Each 
of the chosen proteins is illustrated in Fig 2 and described briefly in Materials and Methods ., We study the folding and misfolding of the seven protein domains , using the same structure-based model as that successfully employed to treat Titin I27 7 , 12 ., Molecular simulations are carried out to characterize the possible structural topologies of the misfolded intermediates and the mechanism of their formation ., Our model is consistent with available experimental information for the systems studied , in terms of which proteins misfold and what misfolded structures they tend to form ., We then investigated what factors influence the propensity of multidomain proteins to misfold ., The simplest rationalization of the propensity of a multidomain protein for domain-swapped misfolding would seem to be offered by parameterizing a kinetic model based on the scheme shown in Fig 1 , particularly for the steps Fig 1a–1b versus 1a–1d ., We hypothesized that the propensity to misfold might be characterized in terms of the folding kinetics of the isolated circular permutants representing the domain-swapped intermediates in Fig 1d ., However , contrary to this expectation , we found that the stability of such isolated domains , rather than their folding rate , is the main determinant of misfolding propensity ., Although superficially this appears to differ from previously suggested kinetic models 12 , it is completely consistent , with a specific interpretation of the rates ., Building on this understanding , we developed a very simplified model which can be used to predict which domains are likely to be susceptible to domain-swapped misfolding ., Finally , we have investigated the effect of the composition and length of the linker between the tandem repeats on the misfolding propensity ., Tandem Src homology 3 ( SH3 ) domains ( Fig 2a ) are widely found in signal transduction proteins and they share functions such as mediating protein-protein interactions and regulating 
ligand binding 25 ., Kinetic and thermodynamic properties of the native protein and all the possible circular permutations of the SH3 single domain have been well characterized 26 ., Two different circular permutant constructs of the sequence are known to fold to a circularly permuted native conformation ( PDB accession codes are 1TUC and 1TUD ) that is similar to the wild-type ( WT ) protein 26 ., With a similar function to the SH3 domains , Src homology 2 ( SH2 ) domains ( Fig 2b ) are also involved in the mediation of intra- and intermolecular interactions that are important in signal transduction 27 ., The SH2 domains are well-known from crystallographic analysis to form metastable domain-swapped dimers 28 , 29 ., Fibronectin type III ( fn3 ) domains ( Fig 2c ) are highly abundant in multidomain proteins , and often involved in cell adhesion ., We have chosen to study the third fn3 domain of human tenascin ( TNfn3 ) , which has been used as a model system to study the mechanical properties of this family ., Single-molecule AFM experiments revealed that a small fraction ( ∼ 4% ) of domains in native tenascin ( i . e .
the full tenascin protein containing both TNfn3 and other fn3 domains ) 8 misfold , with a similar signature to that observed for I27 ., Subsequently , misfolding events have been identified in a polyprotein consisting of repeats of TNfn3 only 9 ., Interestingly , a structure has been determined for a domain-swapped dimer of TNfn3 involving a small change of the loop between the second and third strand 30 ., PDZ domains ( Fig 2d ) are one of the most common modular protein-interaction domains 31 , recognizing specific-sequence motifs that occur at the C-terminus of target proteins or internal motifs that mimic the C-terminus structurally 32 ., Naturally occurring circularly permuted PDZ domains have been well studied 33–35 , and domain-swapped dimers of PDZ domains have been characterized by NMR spectroscopy 36 , 37 ., Titin ( Fig 2e ) is a giant protein spanning the entire muscle sarcomere 38 ., The majority of titin’s I-band region functions as a molecular spring which maintains the structural arrangement and extensibility of muscle filaments 39 ., The misfolding and aggregation properties of selected tandem Ig-like domains from the I-band of human Titin ( I27 , I28 and I32 ) have been extensively studied by FRET experiments 7 , 24 ., In the earlier work on tandem repeats of I27 domains , around 2% misfolding events were reported in repeated stretch-release cycles in AFM experiments 8 ., A slightly larger fraction ( ∼ 6% ) of misfolded species was identified in single-molecule FRET experiments and rationalized in terms of domain-swapped intermediates , captured by coarse-grained simulations 7 , 11 ., In contrast with the above misfolding-prone systems , certain polyprotein chains have been shown to be resistant to misfolding , according to pulling experiments ., For instance , little evidence for misfolding was identified in a polyprotein of GB1 14 ( Fig 2g ) , with more than 99 .
8% of the chains ( GB1 ) 8 folding correctly in repetitive stretching–relaxation cycles 14 ., Lastly , we consider polyUbiquitin ( Fig 2f ) , for which there is conflicting experimental evidence on misfolding ., Initial force microscopy studies showed only the formation of native folds 15 , with no misfolding ., Later work suggested the formation of collapsed intermediates 22 ., However , the signature change in molecular extension of these was different from that expected for fully domain-swapped misfolds ., A separate study using a lock-in AFM 21 found Ubiquitin to conform closely to expectations for a two-state folder , without evidence of misfolding ., For this protein , there is a strong imperative to avoid misfolding , since Ubiquitin is initially expressed as a tandem polyUbiquitin chain in which adjacent domains have 100% sequence identity , yet this molecule is critical for maintaining cellular homeostasis 40 ., A coarse-grained structure-based ( Gō-like ) model similar to that in the earlier work is employed for the study here 7 , 41 ., Each residue is represented by one bead , native interactions are attractive , and the relative contact energies are set according to the Miyazawa–Jernigan matrix ., The model is based on that described by Karanicolas and Brooks 41 , but with native-like interactions allowed to occur between domains as well as within the same domain , as described below 7 ., All the simulations are run under a modified version of GROMACS 42 ., For the seven species we studied in this work , the native structures of single domains that were used to construct the models for SH3 , SH2 , PDZ , TNfn3 , Titin I27 , GB1 and Ubiquitin correspond to PDB entries 1SHG 43 , 1TZE 44 , 2VWR , 1TEN 45 , 1TIT 46 , 1GB1 47 and 1UBQ 48 respectively ., For the single domains of SH3 ( 1SHG ) , TNfn3 ( 1TEN ) and GB1 ( 1GB1 ) , additional linker sequences of Asp-Glu-Thr-Gly , Gly-Leu and Arg-Ser , respectively , are added between the two domains to mimic the constructs
used in the corresponding experiments 9 , 14 , 26 ., Construction of the Titin I27 model was described in our previous work 7 ., In order to allow for domain-swapped misfolding , the native contact potentials within a single domain are also allowed to occur between corresponding residues in different domains , with equal strength ., Specifically , considering each single repeat of the dimeric tandem that has L amino acids , for any pair of residues ( with indices i and j ) that form a native interaction within a single domain , the interaction energy for the intradomain interaction ( $E_{i,j}(r)$ ) is the same as the interdomain interaction between the residue ( i or j ) and the corresponding residue ( j + L or i + L ) in the adjacent domain , i . e . $E_{i,j}(r) = E_{i+L,j}(r) = E_{i,j+L}(r) = E_{i+L,j+L}(r)$ ., To investigate the folding kinetics of the dimeric tandem , a total of 1024 independent simulations are performed on each system for a duration of 12 microseconds each ., Different misfolding propensities are observed at the end of the simulations ., With the exception of Ubiquitin and GB1 , the vast majority of the simulations reached stable native states with separately folded domains ., A small fraction of simulations form stable domain-swapped misfolded states ., All the simulations are started from a fully extended structure , and run using Langevin dynamics with a friction of 0 .
1 ps−1 and a time step of 10 fs ., We note that all the generated domain-swapped misfolded structures , containing the central and terminal domains , can be monitored by a reaction coordinate based on circularly permuted native-like contact sets ., Each circularly permuted misfold can be characterized according to the loop position K in sequence where the native domain would be cut to form the circular permutant ( K = 0 corresponds to the native fold ) ., If a native contact Cnative = ( i , j ) exists between residues i and j in the native fold , the corresponding native-like contacts for the central ( Cin ( K ) ) and terminal ( Cout ( K ) ) domains of the domain-swapped conformation are generated as

$$C_{in}(K) = \left( i + \Theta(K-i)L ,\ j + \Theta(K-j)L \right) , \qquad C_{out}(K) = \left( i + \Theta(i-K)L ,\ j + \Theta(j-K)L \right) ,$$

where $\Theta(x)$ is the Heaviside step function and L is the length of each single domain ( plus interdomain linker ) ., S_in,K is the set of native-like contacts Cin of the central domain , and S_out,K is the set of all the native-like contacts Cout of the terminal domain ., S_in,K and S_out,K can be used to define a contact-based reaction coordinate to analyze the kinetics of the dimeric tandem misfolding ., The corresponding fraction of contacts for the central domain can be calculated as

$$Q_K(\chi) = \frac{1}{N} \sum_{(i,j) \in S_{in,K}} \frac{1}{1 + e^{\beta \left( r_{ij}(\chi) - \lambda r_{ij}^{0} \right)}} , \qquad (1)$$

where N is the total number of domain-swapped contacts in S_K = S_in,K ∪ S_out,K ( equal to the total number of native contacts ) , $r_{ij}(\chi)$ is the distance between residues i and j in the protein configuration χ , $r_{ij}^{0}$ is the corresponding distance in the native structure for native-like contacts , β = 50 nm−1 and λ = 1 .
2 is used to account for fluctuations about the native contact distance ., The equilibrium properties of a single domain of each system are obtained from umbrella sampling along the native contacts Q as the reaction coordinate ., The obtained melting temperature of each system is listed in Table A in S1 Text ., A temperature at which the folding barrier ΔGf is approximately 2 . 5 kBT is chosen for the 2-domain tandem simulations , for reasons described below ., The stability ΔGs is calculated as

$$\Delta G_s = - k_B T \ln \left[ \int_{Q^{\ddagger}}^{1} e^{-F(Q)/k_B T} \, dQ \Big/ \int_{0}^{Q^{\ddagger}} e^{-F(Q)/k_B T} \, dQ \right] , \qquad (2)$$

where kB and T are the Boltzmann constant and temperature respectively ., Q‡ is the position of the barrier top in F ( Q ) , separating the folded and unfolded states , and F ( Q ) represents the free energy profile on Q . Barrier heights ΔGf were simply defined as ΔGf = F ( Q‡ ) − F ( Qu ) , where Qu is the position of the unfolded-state free energy minimum on Q . We calculated the relative contact order 49 , RCO_K , of the different circular permutants K via

$$\mathrm{RCO}_K = \frac{1}{L \cdot N} \sum_{(i,j) \in S_{in,K}} | i - j | , \qquad (3)$$

where L is the length of the single domain , and N is the total number of the native-like contacts ( the same for different K ) ., S_in,K is the contact set of the circular permutant corresponding to the “central domain” of the misfolded state ., Note that the contact order calculation here uses residue-based native contacts ( the same ones defined as attractive in the Gō model ) , instead of all-atom native contacts ., An Ising-like model was built based on the native contact map , in which each residue is considered either folded or unfolded , so that any individual configuration can be specified as a binary sequence , in a similar spirit to earlier work 50–52 ., Interactions between residues separated by more than two residues in the sequence are considered ., To simplify the analysis , we also consider that native structure grows only in a single stretch of contiguous
native residues ( native segment ) , which means that configurations such as …UFFFUUUUU… or …UUUUUFFFU… are allowed whereas …UFFFUUUFFFU… is not ( “single sequence approximation” ) 50 ., Each residue which becomes native incurs an entropy penalty Δs , while all possible native contacts involving residues within the native segment are considered to be formed , each with a favourable energy of contact formation ϵ ., The partition function for such a model can be enumerated as

$$Z = \sum_{\chi} \exp\left( - \frac{G(\chi)}{k_B T} \right) = \sum_{\chi} \exp\left( - \frac{n(\chi)\,\epsilon - N_f(\chi)\,T \Delta s}{k_B T} \right) ,$$

where kB and T are the Boltzmann constant and temperature ., G ( χ ) is the free energy determined by the number of native contacts n ( χ ) in the configuration χ , and the number of native residues , Nf ( χ ) ., The distribution of the microstates ( χ ) can be efficiently generated by the Metropolis–Hastings method with Monte Carlo simulation ., In each iteration , the state of one randomly chosen residue ( among the residues at the two ends of the native fragment and their two neighbouring residues ) is perturbed by a flip , from native to unfolded or from unfolded to native , taking the system from a microstate χ1 with energy E1 to a microstate χ2 with energy E2 ., The new microstate is subject to an accept/reject step with acceptance probability

$$P_{\mathrm{acc}} = \min\left[ 1 , \exp\left( - \frac{E_2 - E_1}{k_B T} \right) \right] . \qquad (4)$$

To mimic the folding stability difference between native and circular permutant folds , a penalty energy term Ep has been added whenever the native fragment crosses the midpoint of the sequence from either side ( the indicator function θ ( χ ) is 1 if this is true , otherwise zero ) ., That situation corresponds to formation of a domain-swapped structure , in which there is additional strain energy from linking the termini , represented by Ep ., We only use the Ising model here to investigate formation of the first domain ( either native or circular permutant ) , by rejecting any proposed Monte Carlo step that would
make the native segment longer than the length of single domain , L ., In order to characterize the potential misfolding properties of each type of domain , we have used a Gō-type energy function based on the native structure ., Such models have successfully captured many aspects of protein folding , including ϕ-values 53 , 54 , dimerization mechanism 55 , 56 , domain-swapping 57–60 , and the response of proteins to a pulling force 61 , 62 ., More specifically , a Gō type model was used in conjunction with single-molecule and ensemble FRET data to characterize the misfolded states and misfolding mechanism of engineered tandem repeats of Titin I27 7 , 12 ., We have therefore adopted the same model ., Although it is based on native-contacts , it can describe the type of misfolding we consider here , which is also based on native-like structure ., Note that this model effectively assumes 100% sequence identity between adjacent domains , the scenario that would most likely lead to domain-swap formation ., It is nonetheless a relevant limit for this study , as there are examples in our data set of adjacent domains having identical sequences which do misfold ( e . g . titin I27 ) and those which do not ( e . g . 
protein G ) ., For each of the folds shown in Fig 2 , we ran a large number of simulations , starting from a fully extended , unfolded chain , for sufficiently long ( 12 μs each ) such that the vast majority of them reached either the correctly folded tandem dimer , or a domain-swapped misfolded state similar to that shown in Fig 1e for titin ., In fact , for each protein , a number of different misfolded topologies are possible , illustrated for the Src SH3 domain in Fig 3 ., Each of these domains , shown in conventional three-dimensional cartoon representation in the right column of Fig 3 and in a simplified two-dimensional topology map in the left column , consists of two native-like folded ( or misfolded ) domains ., For convenience , we call the domain formed from the central portion of the sequence the “central domain” and that from the terminal portions the “terminal domain” ., We have chosen to characterize each topology in terms of the position , K , in sequence after which the central domain begins ., Thus , the native fold has K = 0 , and all the misfolded states have K > 0 . Typically , because of the nature of domain swapping , K must fall within a loop ., Of course , there is a range of residues within the loop in question that could be identified as K and we have merely chosen a single K close to the centre of the loop ., This position , and the central domain , are indicated for the Src SH3 misfolded structures in Fig 3 ., We note that each of these central domains can also be considered as a circular permutant of the native fold , in which the ends of the protein have been joined and the chain has been cut at position K . 
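The bookkeeping that defines each misfolded topology — frame-shifting native contacts across the two repeats with the Heaviside mapping given in Materials and Methods — can be sketched directly. A minimal sketch, assuming 1-based residue indices and hypothetical function names:

```python
def heaviside(x):
    # Theta(x): 1 for x > 0, else 0 (1-based indices make the x = 0 case moot here).
    return 1 if x > 0 else 0

def swapped_contact_sets(native_contacts, K, L):
    """Map each native contact (i, j) of a single domain onto the
    corresponding native-like contact sets S_in,K (central domain) and
    S_out,K (terminal domain) of a domain-swapped tandem dimer, for a
    swap at loop position K; L is the domain length including linker.
    K = 0 recovers the native fold."""
    S_in, S_out = set(), set()
    for i, j in native_contacts:
        # Residues before the cut (index < K) shift into the second repeat
        # for the central domain; residues after the cut shift for the
        # terminal domain.
        S_in.add((i + heaviside(K - i) * L, j + heaviside(K - j) * L))
        S_out.add((i + heaviside(i - K) * L, j + heaviside(j - K) * L))
    return S_in, S_out
```

For K = 0 the central-domain set reproduces the first repeat's native contacts and the terminal-domain set is the same map shifted by L, i.e. the second repeat — consistent with the native fold being the K = 0 member of the family.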
With this nomenclature in hand , we can more easily describe the outcome of the folding simulations for the seven domain types considered in terms of the fraction of the final frames that belonged to the native fold , versus each of the possible misfolded states ., These final populations are shown in Table 1 ., We see that for five of the domains ( SH3 , SH2 , PDZ , TNfn3 , Titin I27 ) , misfolded structures are observed , with total populations ranging from 5–10% ., For the remaining two domains , Ubiquitin ( UBQ ) and protein G ( GB1 ) , no misfolded population is observed ., The ability to capture domain-swapped misfolds with simple coarse-grained simulations potentially allows us to investigate the origin of the misfolding , and its relation , if any , to the topology of the domain in question ., However , we also need to benchmark the accuracy of the results against experiment as far as possible , in order to show that they are relevant ., There are two main sources of information to validate our results ., The first is the overall degree of domain-swapped misfolding for those proteins where it has been characterized , for example by single molecule AFM or FRET experiments ., Qualitatively we do observe good agreement , where data is available: in experiment , domains which have been shown to misfold are TNfn3 ( AFM ) and Titin I27 ( AFM , FRET ) , which are both found to misfold here , while there is no detectable misfolded population for protein G ( AFM ) , again consistent with our results ., We also do not observe any misfolding for Ubiquitin , consistent with the lack of experimental evidence for fully domain-swapped species for this protein 15–23 ., Quantitatively , the fractional misfolded population is also consistent with the available experimental data ., For instance , the frequency of misfolded domains in native tenascin is ∼ 4% as shown by previous AFM experiments 8 , the misfolded population of I27 dimers is ∼5% in single-molecule FRET 
experiments 7 , while the misfolded population of GB1 domains in polyproteins ( GB18 ) is extremely low ( < 0 . 2% ) 14 ., Even though the observed misfolded population of the tandem dimer is low , it is potentially a problem considering that many of the multidomain proteins in nature have a large number of tandem repeats , such as Titin , which contains twenty-two I27 repeats 63 ., Recent FRET experiments on I27 tandem repeats have shown that the fraction of misfolded proteins increases with the number of repeats ., For the 3- and 8-domain polyproteins , the fraction of misfolded domains increases by a factor of 1 . 3 and 1 . 8 , respectively , relative to a tandem dimer 12 ., The second type of evidence comes from experimental structures of domain-swapped dimers ., For several of the proteins , bimolecular domain-swapped structures have been determined experimentally ., While no such structures have yet been determined for single-chain tandem dimers , we can compare the misfolded states with the available experimental data ., For each experimental example , we are able to find a corresponding misfolded species in our simulations with a very similar structure ( related by joining the termini of the two chains in the experimental structures ) ., The domain-swapped dimer structures obtained from experiments ( Fig 4a , 4c , 4e and 4g ) are strikingly similar to the domain-swapped dimeric tandems from the simulations: the domain-swapped SH3 domains with K ( the sequence position after which the central domain begins ) = 37 ( Fig 4b ) , SH2 with K = 72 ( Fig 4d ) , TNfn3 with K = 28 ( Fig 4f ) and PDZ with K = 23 ( Fig 4h ) ., Most of these states have a relatively high population among all the possible misfolds observed in the simulations ( “Population” in Table 1 ) ., While the coverage of possible domain swaps is by no means exhaustive , the observed correspondence gives us confidence that the misfolded states in the simulations are physically plausible .,
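The contact-based coordinate QK used to identify these species is a smoothed fraction of formed contacts ( Eq 1 in Materials and Methods ). A minimal sketch, assuming distances in nm and hypothetical container types; note that the paper normalizes by the total number of native contacts, whereas here the normalization is over whichever contact set is passed in:

```python
import math

def fraction_of_contacts(coords, contact_set, r0, beta=50.0, lam=1.2):
    """Smoothed fraction of formed contacts (cf. Eq 1).
    coords: dict residue index -> (x, y, z) position in nm;
    contact_set: iterable of residue pairs (i, j), e.g. S_in,K;
    r0: dict pair -> native contact distance in nm;
    beta = 50 nm^-1 and lam = 1.2 follow the values quoted in the text."""
    contact_set = list(contact_set)
    total = 0.0
    for (i, j) in contact_set:
        rij = math.dist(coords[i], coords[j])  # Euclidean distance
        # Sigmoid switching function: ~1 when rij < lam * r0, ~0 when broken.
        total += 1.0 / (1.0 + math.exp(beta * (rij - lam * r0[(i, j)])))
    return total / len(contact_set)
```

Evaluating this on S_in,K versus S_out,K distinguishes configurations with only the central domain formed ( the misfolding intermediate of Fig 1d ) from fully misfolded ones.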
Having shown that the misfolding propensities we obtain are qualitatively consistent with experimental evidence ( and in the case of Titin I27 , in semi-quantitative agreement with single-molecule FRET ) , we set out to establish some general principles relating the properties of each domain to its propensity to misfold in this way ., We can start to formulate a hypothesis based on the alternative folding and misfolding pathways illustrated in Fig 1 ., Native folding has as an intermediate a state in which either the N- or the C-terminal domain is folded ., In contrast , on the misfolding pathway , the first step is formation of the central domain , followed by that of the terminal domain ., This parallel pathway scheme suggests that a descriptor of the overall misfolding propensity may be obtained from the rate of formation of a single correctly folded domain , relative to that of the central domain ( neglecting back reactions , because these are rarely seen in our simulations ) ., We can study the central domain formation in isolation , since these structures are just circular permutants of the native fold , i . e .
the two proteins have the same sequence as the native , but with the position of the protein termini moved to a different point in the sequence , as is also found in nature 35 ., These structures can be thought of as originating from the native by cutting a specific loop connecting secondary structure elements ( the free energy cost of splitting such an element being too high ) , and splicing together the N- and C-termini ., In the context of the tandem dimers , the position at which the loop is cut is the same K that defines the start of the central domain in sequence ., We investigate the role of the central domain by characterizing the free energy landscape of the single domain of each system , as well as all of its possible circular permutants , using umbrella sampling along the reaction coordinate QK ., QK is exactly analogous to the conventional fraction of native contacts coordinate Q 64 , but defined using the corresponding ( frame-shifted ) contacts in the circular permutant pseudo-native structure ., The index K indicates the position along the sequence of the WT where the cut is made in order to convert to the circular permutant ., The free energy surfaces F ( QK ) of two representative systems , SH3 and Ubiquitin , are shown in Fig 5 , with the data for the remaining proteins given in Fig A in S1 Text ., The free energy barrier height for folding ΔGf and the stability ΔGs are listed in Table 1 ., The free energy plots indicate that the single domains of Ubiquitin and GB1 are stable only for the native sequence order , and not for any of the circular permutants ., Based on the type of misfolding mechanism sketched in Fig 1 , one would expect that unstable circular permutants would result in an unstable central domain , and consequently that no stable domain-swapped misfolding would occur in the dimer folding simulations , as we indeed observe ., This is also consistent with previous studies of polyproteins of GB1 and Ubiquitin using AFM
experiments , which reveal high-fidelity folding and refolding 14 , 65 , 66 ., We note that only under very strongly stabilizing conditions is any misfolding observed for ubiquitin dimers: running simulations at a lower temperature ( 260 K ) , we observe a very small ( 1 . 3% ) population of misfolded states from 1024 trial folding simulations ., At a higher temperature of 295 K , once again no misfolding is observed ., In contrast to the situation for GB1 and Ubiquitin , all of the circular permutants of the SH3 domain in Fig 5 are in fact stable , although less so than the native fold ., The destabilization of circular permutants relative to native is in accord with the experimental results for the Src SH3 domain 26 ( the rank correlation coefficient of the stabilities is 0 . 80 ) ., The other domains considered also have stable circular permutant structures ., This is consistent with the fact that all of these domains do in fact form some fraction of domain-swapped misfolded states ., The simplest view of the misfolding mechanism would be as a kinetic competition between the correctly folded intermediates and the domain-swapped intermediates with a central domain folded ( i . e .
a “kinetic partitioning” mechanism 67 ) ., In this case one might naively expect that the propensity to misfold would be correlated with the relative folding rates of an isolated native domain and an isolated circular permutant structure ., However , the folding barriers ΔGf projected onto Q ( for native ) or QK ( for circular permutants ) show little correlation to the relative frequency of the corresponding folded or misfolded state , when considering all proteins ( Table 1 ) ., Since this barrier height may not reflect variations in the folding rate if some of the coordinates are poor ( yielding a low barrier ) or if there are large differences in kinetic prefactors , we have also directly computed the folding rate for the circular permutants of those proteins which misfold , and confirm that the rates of formation of the native fold and circular permutants are similar ., We indeed obtain a strong correlation between the folding rate o | Introduction, Materials and Methods, Results | Recent single molecule experiments , using either atomic force microscopy ( AFM ) or Förster resonance energy transfer ( FRET ) have shown that multidomain proteins containing tandem repeats may form stable misfolded structures ., Topology-based simulation models have been used successfully to generate models for these structures with domain-swapped features , fully consistent with the available data ., However , it is also known that some multidomain protein folds exhibit no evidence for misfolding , even when adjacent domains have identical sequences ., Here we pose the question: what factors influence the propensity of a given fold to undergo domain-swapped misfolding ?, Using a coarse-grained simulation model , we can reproduce the known propensities of multidomain proteins to form domain-swapped misfolds , where data is available ., Contrary to what might be naively expected based on the previously described misfolding mechanism , we find that the extent of misfolding is not 
determined by the relative folding rates or barrier heights for forming the domains present in the initial intermediates leading to folded or misfolded structures ., Instead , it appears that the propensity is more closely related to the relative stability of the domains present in folded and misfolded intermediates ., We show that these findings can be rationalized if the folded and misfolded domains are part of the same folding funnel , with commitment to one structure or the other occurring only at a relatively late stage of folding ., Nonetheless , the results are still fully consistent with the kinetic models previously proposed to explain misfolding , with a specific interpretation of the observed rate coefficients ., Finally , we investigate the relation between interdomain linker length and misfolding , and propose a simple alchemical model to predict the propensity for domain-swapped misfolding of multidomain proteins . | Multidomain proteins with tandem repeats are abundant in eukaryotic proteins ., Recent studies have shown that such domains may have a propensity for forming domain-swapped misfolded species which are stable for long periods , and therefore a potential hazard in the cell ., However , for some types of tandem domains , no detectable misfolding was observed ., In this work , we use coarse-grained structure-based folding models to address two central questions regarding misfolding of multidomain proteins ., First , what are the possible structural topologies of the misfolds for a given domain , and what determines their relative abundance ?, Second , what is the effect of the topology of the domains on their propensity for misfolding ?, We show how the propensity of a given domain to misfold can be correlated with the stability of domains present in the intermediates on the folding and misfolding pathways , consistent with the energy landscape view of protein folding ., Based on these observations , we propose a simplified model that can be 
used to predict misfolding propensity for other multidomain proteins . | simulation and modeling, fluorophotometry, protein structure, thermodynamics, research and analysis methods, fluorescence resonance energy transfer, proteins, structural proteins, repeated sequences, molecular biology, spectrophotometry, free energy, physics, biochemistry, biochemical simulations, tandem repeats, protein domains, genetics, biology and life sciences, physical sciences, genomics, computational biology, spectrum analysis techniques, macromolecular structure analysis | null |
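The naive "kinetic partitioning" picture discussed in the sections above, in which the misfolded yield is set by a competition between rate coefficients, can be made concrete with a toy calculation. All rate values below are hypothetical, and note that the study's actual finding is that relative stabilities of the intermediates, not these rates, track the misfolding propensity:

```python
def misfold_fraction(k_native, k_swap):
    """Kinetic partitioning between two competing routes: the fraction of
    chains committed to the domain-swapped intermediate is set by the ratio
    of the two (hypothetical) folding rate coefficients."""
    return k_swap / (k_native + k_swap)

# Hypothetical rate coefficients in arbitrary units:
print(misfold_fraction(k_native=99.0, k_swap=1.0))  # 0.01, i.e. 1% misfolded
```

Under this simple picture a 100-fold rate advantage for the native route would predict 1% misfolding; the simulations show that this expectation fails when the two routes share a common folding funnel until late in folding.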
1,198 | journal.pcbi.1006514 | 2,018 | RNA3DCNN: Local and global quality assessments of RNA 3D structures using 3D deep convolutional neural networks | RNA molecules consist of unbranched chains of ribonucleotides , which have various essential roles in coding , decoding , regulation , expression of genes , and cancer-related networks via the maintenance of stable and specific 3D structures 1–5 ., Therefore , their 3D structural information would help fully appreciate their functions ., In this context , experiments such as X-ray crystallography , nuclear magnetic resonance ( NMR ) spectroscopy , and cryoelectron microscopy are the most reliable methods of determining RNA 3D structures , but they are costly , time-consuming , or technically challenging due to the physical and chemical nature of RNAs ., As a result , many computational methods have been developed to predict RNA tertiary structures 6–32 ., These methods usually have a generator producing a large set of structural candidates and a discriminator evaluating these generated candidates ., A good generator should be able to produce structural candidates as close to native structures as possible , and a good discriminator should be able to recognize the best candidates ., Moreover , a discriminator can direct generator searching structural space in heuristic prediction methods ., For protein or RNA tertiary structure prediction , a discriminator generally refers to a free energy function , a knowledge-based statistical potential , or a scoring function ., Several statistical potentials have been developed to evaluate RNA 3D structures , such as RASP 33 , RNA KB potentials 34 , 3dRNAscore 35 and the Rosetta energy function 9 , 16 ., Generally , these potentials are proportional to the logarithm of the frequencies of occurrence of atom pairs , angles , or dihedral angles based on the inverse Boltzmann formula ., The all-atom version of RASP defines 23 atom types , uses distance-dependent geometrical 
descriptions for atom pairs with a bin width of 1 Å , and is derived from a non-redundant set of 85 RNA structures ., The all-atom version of RNA KB potential defines 85 atom types , also uses distance-dependent geometrical descriptions for atom pairs , and is derived from 77 selected representative RNA structures ., Moreover , RNA KB potentials are fully differentiable and are likely useful for structure refinement and molecular dynamics simulations ., 3dRNAscore also defines 85 atom types and uses distance-dependent geometrical descriptions for atom pairs with a bin width of 0 . 15 Å , and is derived from an elaborately compiled non-redundant dataset of 317 structures ., In addition to distance-dependent geometrical descriptions , 3dRNAscore uses seven RNA dihedral angles to construct the statistical potentials with a bin width of 4 . 5° , and the final output potentials are equal to the sum of the two energy terms with an optimized weight ., The Rosetta energy function has two versions: one for low resolution and the other for high resolution ., The low-resolution knowledge-based energy function explicitly describing the base-pairing and base-stacking geometries guides the Monte Carlo sampling process in Rosetta , while the more detailed and precise high-resolution all-atom energy function can refine the sampled models and yield more realistic structures with cleaner hydrogen bonds and fewer clashes ., As the paper on 3dRNAscore reported , 3dRNAscore is the best among these four scoring functions ., Overall , the choices of the geometrical descriptors and the reference states in the scoring functions can affect their performance significantly , and the optimization of the parameters also influences this ., Recently , we have witnessed astonishing advances in machine learning as a tool to detect , characterize , recognize , classify , or generate complex data and its rapid applications in a broad range of fields , from image classification , face detection , auto 
driving , financial analysis , disease diagnosis 36 , playing chess or games 37 , 38 , and solving biological problems 39–42 , to even quantum physics 43–45 ., Even this list is incomplete , and it has the potential to be extended further in the future ., Therefore , we expect that machine learning methods will be able to help evaluate the structural candidates generated in the process of RNA tertiary structure prediction ., Inspired by the successful application of 2D convolutional neural networks ( CNNs ) in image classification , we believe that 3D CNNs are a promising solution in that RNA molecules can be treated as a 3D image ., Compared with other machine learning methods employing conventional hand-engineered features as input , 3D CNNs can directly use a 3D grid representation of the structure as input without extracting features manually ., 3D CNNs have been applied to computational biology problems such as the scoring of protein–ligand poses 46 , 47 , prediction of ligand–binding protein pockets 48 , prediction of the effect of protein mutations 49 , quality assessment of protein folds 50 , and prediction of protein–ligand binding affinity 51 ., Here , we report our work on developing two new scoring functions for RNA 3D structures based on 3D deep CNNs , which we name RNA3DCNN_MD and RNA3DCNN_MDMC , respectively ., Our scoring functions enable both local and global quality assessments ., To our knowledge , this is the first paper to describe the use of 3D deep CNNs to assess the quality of RNA 3D structures ., We also tested the performance of our approaches and made comparisons with the four aforementioned energy functions ., The environment surrounding a nucleotide refers to its neighboring atoms ., To determine the neighboring atoms of a nucleotide , a local Cartesian coordinate system is specified first by its atoms C1’ , O5’ , C5’ , and N1 for pyrimidine or N9 for purine ., Specifically , the origin of the local coordinate system is located at the position
of atom C1’ ., The x- , y- , and z-axes of the local coordinate system , denoted as x , y , and z , respectively , are decided according to Eqs 1–6 , where rC1′ , rO5′ , rC5′ and rN stand for the vectors pointing from the origin in the global coordinate system to the atoms C1’ , O5’ , C5’ , and N1 or N9 , respectively ., x = rN − rC1′ ( 1 ) , x = x / ∥x∥ ( 2 ) , y = ( rO5′ + rC5′ ) / 2 − rC1′ ( 3 ) , z = x × y ( 4 ) , z = z / ∥z∥ ( 5 ) , y = z × x ( 6 ) ., The environment surrounding a nucleotide consists of the atoms whose absolute values of x , y , and z coordinates are less than a certain threshold ., Here , the threshold is set to 16 Å , which means that the environment surrounding a nucleotide contains the atoms within a cube of length 32 Å centered at this very nucleotide , as shown in Fig 1A ., For a colorful 2D image , the input of a 2D CNN is an array of pixels of RGB channels ., Similarly , in our work , the nucleotide and its surrounding environment are transformed into a 3D image consisting of an array of voxels ., As shown in Fig 1A , the box of size 32 × 32 × 32 Å is partitioned into 32 × 32 × 32 grid boxes ., Each grid box represents a voxel of three channels and its values are calculated by the accumulations of the occupation number , mass , or charge of the atoms in the grid box ., The mass and charge information of each type of atoms is listed in S1 Table ., After transformation , the input of the 3D CNN is a colorful 3D image of 32 × 32 × 32 voxels with three channels corresponding to RGB channels presented in Fig 1B ., Practically , each channel is normalized to 0 , 1 by min-max scaling ., The output of our CNN is the nucleotide unfitness score characterizing how poorly a nucleotide fits into its surroundings ., For a nucleotide , its unfitness score is equal to the RMSD of its surroundings plus the RMSD of itself after optimal superposition between its conformations in the native structure and the assessed structure ., The latter RMSD is
generally very small , but the former varies in a large range ., Nucleotides with smaller unfitness scores are in a conformation closer to the native conformation , and a score of 0 means that the nucleotide fits into its surrounding environment perfectly and is in its native conformation ., Practically , the nucleotide unfitness score is normalized to 0 , 1 by min-max scaling ., For the global quality assessment , the unfitness scores of all nucleotides are accumulated ., Fig 1C exhibits the architecture of our CNN , a small VGG-like network 52 containing a stack of convolutional layers , a maxpooling layer , a fully connected layer , and 4 , 282 , 801 parameters in total ., VGGNet is a famous image classification CNN ., It is a very deep network and uses 19 weight layers , consisting of 16 convolutional layers stacked on each other and three fully-connected layers ., The input image size 224 × 224 in VGGNet is much larger than our input size 32 × 32 × 32 in terms of the side length , and thus we used a smaller architecture ., There are only four 3D convolutional layers in our neural network ., The numbers of filters in each convolutional layer are 8 , 16 , 32 , and 64 , and the receptive fields of the filters in the first two convolutional layers and in the last two convolutional layers are 5 × 5 × 5 voxels and 3 × 3 × 3 voxels , respectively ., The convolution stride is set to one voxel ., No spatial padding is implemented in the convolutional layers ., Moreover , a max-pooling layer of stride 2 is placed following the first two consecutive convolutional layers ., Subsequently , one fully connected layer with 128 hidden units is stacked after the convolutional layers ., The final output layer is a single number , namely , the unfitness score ., All units in hidden layers are activated by the ReLU nonlinear function , while the output layer is linearly activated ., The neural network was trained to reduce the mean squared error ( MSE ) between the true and 
predicted unfitness scores ., A back-propagation-based mini-batch gradient descent optimization algorithm was used to optimize the parameters in the network ., Batch size was set to 128 ., The training was regularized by dropout regularization for the second and fourth convolutional layers and for the fully connected layer with a dropout ratio of 0 . 2 ., The Glorot uniform initializer was used to initialize the network weights ., The learning rate was initially set to 0 . 05 , and then decreased by half whenever the MSE of the validation dataset stopped improving for five epochs ., The training process stopped when the learning rate decreased to 0 . 0015625 ., Our 3D CNN was implemented using the python deep learning library Keras 53 , with the Theano library as the backend ., To construct the training dataset , first a list of 619 RNAs was downloaded with the search options “RNA Only” and “Non Redundant RNA Structures” from the NDB website http://ndbserver.rutgers.edu/ , which means that our training dataset includes RNA-only structures and the RNAs are non-redundant in both sequence and geometry ., Second , the RNAs with an X-ray resolution >3 . 5 Å were removed from the list above ., Finally , the RNAs in the test dataset were removed and the RNAs in the same equivalence classes as the test dataset were also removed ., “Structures that are provisionally redundant based on sequence similarity and also geometrical similarity are grouped into one equivalence class , ” as Leontis et al .
defined 54 ., Thus , 414 native RNAs were left to construct the training dataset ., According to their length , the 414 RNAs were randomly divided into two groups , namely , 332 RNAs for training and 82 RNAs for validation in the CNN training process ., Practically , the training samples were generated in two ways , namely , by MD and MC methods elaborated as follows ., To evaluate our CNN-based scoring function and make comparisons with the traditional statistical potentials , three test datasets were collected from different sources ., Test dataset I comes from the RASP paper 33 , which is generated by the MODELLER computer program from the native structures of 85 non-redundant RNAs given a set of Gaussian restraints for dihedral angles and atom distances , and contains 500 structural decoys for each of the 85 RNAs ., The RMSDs are in different ranges for these RNAs ., The narrowest are from 0 to 3 . 5 Å , the broadest are from 0 to 13 Å , and the RMSDs of most decoys are less than 10 Å ., This dataset can be downloaded from http://melolab.org/supmat/RNApot/Sup._Data.html ., Test dataset II comes from the KB paper 34 , which is generated by both position-restrained dynamics and REMD simulations for 5 RNAs and the normal-mode perturbation method for 15 RNAs ., For the MD dataset , there are 3 , 500 decoys for each of four RNAs whose RMSDs range from 0 to >10 Å , and 2 , 600 decoys for one RNA ( PDB ID: 1msy ) whose RMSDs range from 0 to 8 Å ., Meanwhile , for the normal-mode dataset , there are about 490 decoys for each of the 15 RNAs , whose RMSDs range only from 0 to 5 Å ., This dataset can be downloaded from http://csb.stanford.edu/rna ., One point that should be noted is that the downloaded pdb files name atom O2 in pyrimidine bases as “O” ., Test dataset III comes from RNA-Puzzles rounds I to III 55–57 , a collective and blind experiment in 3D RNA structure prediction ., Given the nucleotide sequences , interested groups submit their predicted structures to the RNA-Puzzles website before the experimentally determined crystallographic or NMR structures of these target sequences are published ., Therefore , the dataset is produced in a real RNA modeling scenario and can reveal the real performance of the existing scoring function ., Marcin Magnus compiled the submitted structures from rounds I to III , and now the predicted models of 18 target RNAs can be downloaded from https://github.com/RNA-Puzzles/RNA-Puzzles-Normalized-submissions ., There are only 12–70 predicted models for the 18 RNAs , some of whose RMSDs range from 2 to 4 Å , while some cover a wide range from 20 to 60 Å ., Two neural networks were trained based on two sets of training samples ., The first set included only MD training samples and the second set included both MD and MC training samples ., The two network models are named RNA3DCNN_MD and RNA3DCNN_MDMC , respectively ., We tested test datasets I and II using RNA3DCNN_MD , and tested test dataset III using RNA3DCNN_MDMC ., The reason why we trained two neural networks is that the three test datasets come from two kinds of methods ., Test datasets I and II were produced by MD and normal-mode methods initiated from native structures , while test dataset III was produced by MC structure prediction methods , covering a broad structural space ., After testing , for test datasets I and II , RNA3DCNN_MD performed better than RNA3DCNN_MDMC ., But for test dataset III , RNA3DCNN_MDMC was superior ., The results are reasonable ., RNA3DCNN_MD is more accurate in the region close to native structures in that most of the MD training samples are not very far away from native structures or native topologies ., However , when MC training samples were included , the neural network
RNA3DCNN_MDMC became less accurate than RNA3DCNN_MD for the structures around native ones and was biased toward the non-native ones ., On the contrary , RNA3DCNN_MD did not see the more random training structures far away from native states and thus it did not perform as well as RNA3DCNN_MDMC for test dataset III ., In general , a scoring function with good performance should be able to recognize the native structure from a pool of structural decoys and to rank near-native structures reasonably ., Consequently , two metrics were used for a quantitative comparison with other scoring functions ., One was the number of native RNAs with minimum scores in the test dataset , and the other was the Enrichment Score ( ES ) 34 , 35 , 58 , which characterizes the degree of overlap between the structures of the top 10% scores ( Etop10% ) and the best 10% RMSD values ( Rtop10% ) in the structural decoy dataset ., The ES is defined as ES = | Etop10% ∩ Rtop10% | / ( 0 . 1 × 0 . 1 × Ndecoys ) ( 7 ) , where | Etop10% ∩ Rtop10% | is the number of structures in both the lowest 10% score range and the lowest 10% RMSD range , and Ndecoys is the total number of structures in the decoy dataset ., If the score and RMSD are perfectly linearly correlated , ES is equal to 10 ., If they are completely unrelated , ES is equal to 1 ., If ES is less than 1 , the scoring function performs rather poorly with respect to that decoy dataset ., We compared our CNN-based scoring function with four traditional statistical potentials for RNA , namely , 3dRNAscore , KB , RASP , and Rosetta ., First , the number of native RNAs with minimum scores was counted as listed in Table 1 .
As the 3dRNAscore paper reported , 3dRNAscore identified 84 of 85 native structures , KB 80 of 85 , RASP 79 of 85 , and Rosetta 53 of 85 ., 3dRNAscore is thus clearly the best among the four statistical potentials ., Our RNA3DCNN identified 62 of 85 native structures , and the unidentified native structures generally had the second or third lowest scores , almost the same as the lowest scores ., Fig 2A shows an example in test dataset I in which the native structure was identified by our method , and Fig 2B shows an example in test dataset I in which the native structure had a slightly higher score calculated by our method than the structure of an RMSD of 0 . 9 Å ., The RMSD-score plots of all 85 examples are provided in S1 Fig . The result that our method identified fewer native structures is reasonable ., Specifically , the input and output of our neural network are geometry based , and thus similar structures have similar scores ., The structures in the 0–1 Å range generally resemble each other and thus , for our scoring function , all the non-native structures with minimum scores have an RMSD ∼1 Å ., Meanwhile , for the statistical potentials , atom steric clashes , angle , or dihedral angle deviations from the native form may quickly increase the potential values ., Second , the ES was calculated ., The mean ES values of the 85 RNAs calculated by 3dRNAscore , RASP , Rosetta , and our method RNA3DCNN were 8 . 69 , 8 . 69 , 6 . 7 , and 8 . 
61 , respectively ., The mean ES calculated by KB is not given in that we cannot open its original website and download its program , and the results of KB method shown in this paper come from the papers on KB and 3dRNAscore ., The ES values of 3dRNAscore and our method are almost the same ., The mean ES values of three methods are very large , suggesting that the RMSDs and scores calculated by the different methods are highly linearly correlated and that this test dataset is an easy benchmark to rank near-native decoys ., For the MD decoys in test dataset II , 3dRNAscore and KB identified 5 of 5 native structures , RASP 1 of 5 , Rosetta 2 of 5 , and our method 4 of 5 , as listed in Table 1 ., Our method gave the lowest score to the decoy of an RMSD of 0 . 97 Å for RNA 1f27 , as shown in Fig 3B ., The ES values of the MD decoys using different scoring functions are listed in Table 2 ., Fig 3A shows the relationship between RMSD and the score calculated by our method for the RNA 434d with the best ES ., The RMSD-score plots of all five examples are provided in S2 Fig . From the table , we can see that our method performed better than 3dRNAscore for 2 of 5 RNAs , slightly worse for 1 of 5 RNAs , and worse for 2 of 5 RNAs , especially for the RNA 1f27 , in that the native structure had a slightly higher score than the decoys of RMSD around 1 Å ., Moreover , our method performed better than KB , RASP , and Rosetta for 3 of 5 RNAs , comparably for 1 of 5 RNAs , and worse for the RNA 1f27 , as explained above ., For the normal-mode decoys in this dataset , 3dRNAscore identified 12 of 15 native structures , RASP 11 of 15 , Rosetta 10 of 15 , KB and our method 15 of 15 , as listed in Table 1 ., The ES values of the normal-mode decoys using different scoring functions are also listed in Table 2 .
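The other metric in these comparisons, whether the native structure receives the minimum score in its decoy set, can be sketched as follows; the decoy-set layout and all score values are hypothetical:

```python
def natives_identified(decoy_sets):
    """Count RNAs whose native structure has the lowest score in its decoy
    set. `decoy_sets` maps an RNA id to (native_score, decoy_scores)."""
    return sum(native < min(decoys) for native, decoys in decoy_sets.values())

# Hypothetical scores: the second RNA's native structure ranks first, while
# the first one is beaten by a near-native decoy (as happened for RNA 1f27).
example = {"1f27": (1.2, [0.9, 1.5]), "434d": (0.1, [0.4, 0.8])}
print(natives_identified(example))  # 1
```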
From the table , we can see that our method performed better than 3dRNAscore for 7 of 15 RNAs , equally for 4 of 15 RNAs , and worse for only 4 of 15 RNAs ., Moreover , our method performed better than KB , RASP , and Rosetta for 12 , 11 , and 13 of 15 RNAs ., The mean ES values of 3dRNAscore and our method were the same , and were greater than those of the other scoring functions ., The RMSD-score plots of all 15 examples are provided in S2 Fig ., The structures in test dataset III are derived from different groups by different RNA modeling methods ., There are only dozens of predicted models for each target RNA and the RMSDs are almost always greater than 10 Å , and often even greater than 20 or 30 Å ., Consequently , we did not calculate the ES for this dataset and gave only the RMSDs of models with minimum scores in Table 3 ., The results of method KB were not provided in that we could not open its website and get the program ., From the table , we can see that our RNA3DCNN identified 13 of 18 native RNAs , 3dRNAscore 5 of 18 , RASP 1 of 18 , and Rosetta 4 of 18 ., For puzzle 2 , though the native structures were not identified , our method gave the lowest RMSD among the four methods ., And for puzzle 3 , our method gave an RMSD as low as those of the other two methods ., Fig 4A shows an example in test dataset III in which the native structure was well identified by our method , and Fig 4B is the one not identified ., The RMSD-score plots of all 18 examples are provided in S3 Fig .
For test datasets I and II , all decoys are obtained from native structures , which means that they almost always stay around one local minimum in the energy landscape ., But for test dataset III , in the real modeling scenario , the structures are far from native topologies and are located at different local minima in the energy landscape ., For this reason , we trained two neural networks with two sets of training samples , that is , one set including only training samples from MD simulations initiated from native structures and another set including both MD training samples and MC training samples obtained in the broader and more complicated structural space ., Our scoring function can evaluate each nucleotide , reveal the regions in need of further structural optimization , and guide the sampling direction in RNA tertiary structure modeling ., Fig 5 portrays how our scoring function helps locate the unfit regions ., In this figure , a decoy of RMSD 3 . 0 Å from test dataset II MD decoys and the native RNA 1nuj are superimposed , and thicker tubes show larger deviations from the native structure ., The rainbow colors represent the calculated unfitness scores of each nucleotide , and the colors closer to red represent larger unfitness scores ., We can see that the tubes in nucleotides 1 , 7 , 8 , 9 , and 14 are much thicker , and the colors of those regions are much closer to red , which means that our scoring function can rank the nucleotide quality correctly ., Nucleotides 1 and 14 are the terminal nucleotides in two chains and are unpaired , so the deviations of these two are the largest ., Nucleotides 7–9 are in the internal loop , so the deviations are larger than those of the remaining helical regions ., The Pearson correlation coefficients between actual and predicted nucleotide unfitness scores were 0 . 69 and 0 . 34 for MD decoys and NM decoys in test dataset II , respectively , as shown in S4 Fig . 
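The per-nucleotide assessment described above, ranking nucleotides by their predicted unfitness and correlating the predicted scores against the actual (native-superposition) scores, can be sketched in plain Python; all score values below are hypothetical:

```python
def worst_nucleotides(predicted, top_k=3):
    """Indices of the top_k nucleotides with the highest predicted
    unfitness, i.e. the regions most in need of further optimization."""
    order = sorted(range(len(predicted)), key=predicted.__getitem__, reverse=True)
    return order[:top_k]

def pearson(a, b):
    """Pearson correlation coefficient between two score lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

predicted = [0.9, 0.1, 0.2, 0.8, 0.85]  # hypothetical per-nucleotide scores
actual = [1.0, 0.0, 0.1, 0.7, 0.9]      # hypothetical "true" unfitness
print(worst_nucleotides(predicted))      # [0, 4, 3]
```

Flagging the highest-unfitness nucleotides in this way is what the thick red tube regions in Fig 5 visualize.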
The structures in NM decoys are all near native structures with RMSD ranging from 0 to 5 Å , thus making the correlation not strong ., Saliency maps were used to visualize the trained network and help understand which input atoms are important in deciding the final output ., In paper 59 , an image-specific class saliency map was first introduced to rank the pixels of an input 2D image based on their influence on the class score by computing the gradient of output class score with respect to the input image ., The gradient can reveal how sensitive the class score is to a small change in input image pixels ., Larger positive gradients mean that a slight decrease in the corresponding pixels can cause the true class score to drop markedly , and thus the corresponding pixels are more important in determining the right output class ., Meanwhile , for our regression problem and a near-native conformation , the smaller output was better and the voxels of negative gradients were highlighted and important ., Moreover , we mapped the gradients of each voxel back to the corresponding atoms ., In Fig 6 , examples of saliency maps for the three input channels are presented ., A , B , and C correspond to atomic occupation number , mass , and charge channels , respectively ., The example is used to calculate the unfitness score of the 12th nucleotide in a helical region for the native RNA 1nuj ., The nucleotide under assessment is drawn as spheres and sticks , its surrounding environment is drawn as sticks , while the atoms beyond its surrounding environment are shown as a black cartoon ., The redder atoms represent smaller negative gradients , the bluer atoms represent larger positive gradients , and the nearly white atoms represent gradients close to 0 ., The red regions are highlighted and more important in deciding the final output ., In the atomic occupation number channel , atomic category differences disappear and only shapes count ., From Fig 6A , we can see that the atoms 
in the nucleobases of the 10th–13th and 15th–19th nucleotides are highlighted and atom N3 in the 16th nucleotide is the most important , in accordance with the base-pairing and base-stacking interactions ., In the atomic mass channel , the importance of atoms in the nucleobases described above declines somewhat , while atom P in the 12th nucleotide and atom N3 in the 16th nucleotide are the most important , in that atom P is much heavier than atoms C , N , and O and atom N3 is in A12’s paired base U16 ., In the atomic charge channel , the seven most important atoms are N1 , P , N3 , and O3’ in the 12th nucleotide , atoms C4 and C2 in the 16th nucleobase , and atom N2 in the 17th nucleobase ., Overall , from the analyses of the saliency maps , it was found that the neural networks can learn the knowledge , such as the relevance of base pairing and stacking interactions to the score , from the training data automatically without any a priori knowledge ., It would be very interesting to see if neural networks can dig new knowledge out of data in future work ., We tested the computational time on 100 decoys of 91 nucleotides ., The total time was 321 . 0 seconds ., For a comparison , the C++ version of the 3dRNAscore method took only 19 seconds ., However , it was found that 99 . 6% of our computational time ( 319 . 7 seconds ) was used to prepare the input to the CNN , and this time decreased to 2 seconds after we changed the code from Python to C++ ., Therefore , the CNN-based approach is very efficient in terms of speed , and it is estimated that the overall computational time of our method will be approximately 3 seconds if we rewrite the entire code in C++ ., However , the computational time of the Python version of our method is acceptable for now ., We postpone the code rewriting work to the future when necessary ., Moreover , our method can be downloaded from https://github.com/lijunRNA/RNA3DCNN ., Recently , we have witnessed the astonishing power of machine learning methods in characterizing , classifying , and generating complex data in various fields ., It is therefore interesting to explore the potential of machine learning in characterizing and classifying RNA structural data ., In this study , we developed two 3D CNN-based scoring models , named RNA3DCNN_MD and RNA3DCNN_MDMC , for assessing structural candidates built by two kinds of methods ., If the structural candidates are generated by MC methods such as fragment assembly , RNA3DCNN_MDMC is suggested ., If the structural candidates are not very far away from the native structures , such as from MD simulations , the RNA3DCNN_MD model is better ., We also compared our method with four other traditional scoring functions on three test datasets ., The current 3D CNN-based approaches performed comparably with or better than the best statistical potential 3dRNAscore on different test datasets ., For the first test dataset , the mean ES was almost the same as that of the best traditional scoring function , 3dRNAscore ., The reason why the number of native structures identified by our method was much smaller than that by other scoring functions is that our method is structure-based and the scores of native structures and decoys of RMSD less than 1 .
0 Å are almost the same ., This suggests that our method is robust if an RNA structure does not change much ., For the second test dataset , our method generally performed similarly to 3dRNAscore and outperformed the three other scoring functions ., For the MD decoys in the second test dataset , our method was slightly worse than 3dRNAscore ., For the normal-mode decoys in the second test dataset , our method identified all the native structures , while 3dRNAscore identified only 12 of 15 native RNAs , and our method outperformed 3dRNAscore for 7 of 15 RNAs and underperformed it for only 4 of 15 RNAs ., For the third test dataset from blind and real RNA modeling experiments , our method was far superior to the other scoring functions in identifying the native structures ., Our method has some novel features ., First , it is free of the choice of the reference state , which is a difficult problem in traditional statistical potentials ., Second , it treats a cube of atoms as a unit like a many-body potential , while traditional statistical potentials divide them into atom pairs ., Moreover , our method can evaluate each nucleotide , reveal the regions in need of further structural optimization , and guide the sampling direction in RNA tertiary structure prediction ., Our method demonstrates the power of CNNs in quality assessments of RNA 3D structures and shows the potential to far outperform traditional statistical potentials ., There remains great scope to improve the CNN models , such as by expanding them to include more input channels ( only three are considered currently ) , featuring more complex network architecture , and involving larger training datasets ., Moreover , more RNA-related problems can be dealt with by 3D CNNs , such as protein–RNA binding affinity prediction and RNA–ligand docking and virtual screening . 
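As a consistency check on the Methods, the 4 , 282 , 801-parameter count of the network can be reproduced from the stated layer sizes alone, assuming unpadded stride-1 convolutions and 2 × 2 × 2 max-pooling as described:

```python
def conv3d(side, c_in, c_out, k):
    """'Valid' 3D convolution, stride 1: new side length and weight count."""
    return side - k + 1, (k ** 3 * c_in + 1) * c_out

side, ch, total = 32, 3, 0                    # 32x32x32 input, 3 channels
for c_out, k in [(8, 5), (16, 5)]:            # first two conv layers, 5x5x5
    side, p = conv3d(side, ch, c_out, k)
    ch, total = c_out, total + p
side //= 2                                     # max-pooling layer, stride 2
for c_out, k in [(32, 3), (64, 3)]:           # last two conv layers, 3x3x3
    side, p = conv3d(side, ch, c_out, k)
    ch, total = c_out, total + p
flat = side ** 3 * ch                          # 8**3 * 64 = 32768 features
total += (flat + 1) * 128                      # fully connected, 128 units
total += 128 + 1                               # single linear output unit
print(total)                                   # 4282801
```

The audit confirms that the fully connected layer dominates the parameter budget, which is one reason the small VGG-like architecture stays compact despite the volumetric input.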
| Introduction, Materials and methods, Results and discussion | Quality assessment is essential for the computational prediction and design of RNA tertiary structures. To date, several knowledge-based statistical potentials have been proposed and proved to be effective in identifying native and near-native RNA structures. All these potentials are based on the inverse Boltzmann formula, while differing in the choice of the geometrical descriptor, reference state, and training dataset. Via an approach that diverges completely from the conventional statistical potentials, our work explored the power of a 3D convolutional neural network (CNN)-based approach as a quality evaluator for RNA 3D structures, which used a 3D grid representation of the structure as input without extracting features manually. The RNA structures were evaluated by examining each nucleotide, so our method can also provide local quality assessment. Two sets of training samples were built. The first one included 1 million samples generated by high-temperature molecular dynamics (MD) simulations and the second one included 1 million samples generated by Monte Carlo (MC) structure prediction. Both MD and MC procedures were performed for a non-redundant set of 414 RNAs. For two training datasets (one including only MD training samples and the other including both MD and MC training samples), we trained two neural networks, named RNA3DCNN_MD and RNA3DCNN_MDMC, respectively. The former is suitable for assessing near-native structures, while the latter is suitable for assessing structures covering large structural space. We tested the performance of our method and made comparisons with four other traditional scoring functions. On two of three test datasets, our method performed similarly to the state-of-the-art traditional scoring function, and on the third test dataset, our method was far superior to other scoring functions. Our method can be downloaded from https://github.com/lijunRNA/RNA3DCNN. | RNA is an important and versatile macromolecule participating in various biological processes. In addition to experimental approaches, the computational prediction of RNA 3D structures is an alternative and important source of obtaining structural information and insights into their functions. An important part of these computational prediction approaches is structural quality assessment. For this purpose, we developed a 3D CNN-based approach named RNA3DCNN. This approach uses raw atom distributions in 3D space as the input of neural networks, and the output is an RMSD-based nucleotide unfitness score for each nucleotide in an RNA molecule, thus making it possible to evaluate local structural quality. Here, we tested and made comparisons with four other traditional scoring functions on three test datasets from different sources. | molecular dynamics, neural networks, particle physics, statistics, rna structure prediction, neuroscience, nucleotides, atoms, mathematics, forecasting, composite particles, research and analysis methods, computer and information sciences, rna structure, mathematical and statistical techniques, chemistry, molecular biology, physics, biochemistry, rna, molecular structure, nucleic acids, biology and life sciences, physical sciences, computational chemistry, chemical physics, statistical methods, macromolecular structure analysis | null |
14 | journal.pcbi.1005103 | 2016 | Forecasting Human African Trypanosomiasis Prevalences from Population Screening Data Using Continuous Time Models | Human African trypanosomiasis (HAT), also known as sleeping sickness, is a parasitic disease caused by two sub-species of the protozoan Trypanosoma brucei: Trypanosoma brucei gambiense (gambiense HAT) and Trypanosoma brucei rhodesiense (rhodesiense HAT). The infection causing the disease is transmitted from person to person through the tsetse fly. It is estimated that there were 20000 cases in the year 2012 [1] and that 70 million people from 36 Sub-Saharan countries are at risk of HAT infection [2,3]. Our work focuses on gambiense HAT, which represents 98% of all HAT cases [3]. Gambiense HAT, which we will refer to as “HAT” from now on, is a slowly progressing disease and is fatal if left untreated. In the first stage of the disease, symptoms are usually absent or non-specific [4]. The median duration of this stage is about 1.5 years [5]. By the time patients arrive at a healthcare provider, the disease has often progressed to the neurological phase, which causes severe health problems. In addition, this treatment delay increases the rate of transmission, since an infected patient is a potential source of infection for the tsetse fly [4,6]. Therefore, active case finding and early treatment are key to the success of gambiense HAT control [7,8]. The current case finding strategy uses mobile teams that travel from village to village to conduct exhaustive population screening [4,8,9]. For example, 35 mobile teams are active in the Democratic Republic of the Congo (DRC). Because this strategy has considerably reduced disease prevalence in several African countries [6,10–12], the disease is no longer perceived as a major threat. Consequently, donors are now scaling down their financial commitments [8]. This, however, poses a serious risk to the control of HAT. The disease tends to re-emerge when screening activities are scaled down, bringing about the risk of a serious outbreak, as shown by an epidemic in the 1990s [4,11,13]. For example, the number of cases in 1998 is estimated to have exceeded 300000 [3]. In order to minimize the risk of re-emergence when resources are scaled down, and in order to eliminate and eradicate the disease, maximizing the effectiveness of the control programs is crucial. Mpanya et al.
[9] suggest that the effectiveness of population screening is determined by (among others) the management and planning of the mobile teams. Planning decisions—which determine which villages to screen, and at what time interval to screen them—have a direct impact on the risk and the magnitude of an outbreak. Existing literature does not address these issues, as highlighted by the WHO [1], and a wide variety of screening intervals have been applied in different control programs [12,14,15]. To optimize the planning decisions, it is of key importance to be able to predict the evolution of the HAT prevalence level in the villages at risk. This allows decision makers to assess the relative effectiveness of a screening round in these villages and to prioritize the screening rounds to be performed. However, practical tools for predicting HAT prevalence appear to be lacking. Existing models for HAT are mostly based on differential equations, describing the rate of change of the HAT prevalence level among humans and flies as a function of the prevalence levels among humans and flies (some models also include an animal reservoir) [16–22]. As the information needed to use such models—e.g., the number of tsetse flies in a village—is not available on the village level, using these models for prediction is impractical. This paper therefore sets out to develop practical models describing and predicting the expected evolution of the HAT prevalence level in a given village, based on historical information on HAT cases and screening rounds in that village. The main difference with the models mentioned in the previous paragraph is that our models make no assumptions about the causal factors underlying the observed prevalence levels: the “inflow” of newly infected persons and the “outflow” of infected persons by cure or death. Instead, we just consider data on the net effect of these two processes—the evolution of the prevalence level—and fit five different models to this. To analyze the predictive performance of these models, we make use of a dataset describing screening operations and HAT cases in the Kwamouth district in the DRC for the period 2004–2013. Furthermore, we use one of the models to analyze the fixed frequency screening policy, which assigns to each village a fixed time interval for consecutive screening rounds. Specifically, we investigate screening frequency requirements for reaching elimination and eradication. Here, eradication is defined as “letting the expected prevalence level go to zero in the long term”, and elimination is defined as “reaching an expected prevalence level of one case per 10000”. Our paper thereby contributes to the branch of research on control strategies for HAT. Next, we list several other papers that are highly related to our work. The effectiveness of active case finding operations is analyzed by Robays et al. [23], who define “effectiveness” as the expected fraction of cases in a village which will eventually get cured as a result of a screening round in that village. The papers by Stone & Chitnis [16], Chalvet-Monfray et al.
[20], and Artzrouni & Gouteux [24] introduce differential equation models to gain structural insights on the effectiveness of combinations of active case finding and vector control efforts and on the requirements for eradicating HAT. The effect of active case finding activities is modeled through a continuous “flow” of infected individuals into the susceptible compartment. Since we explicitly model the timing and the effects of a screening round, this is one of the main differences with our paper. Finally, Rock et al. [10] study the effectiveness of screening and treatment programs and the time to elimination using a multi-host simulation model. Their paper, however, considers the screening frequency as a given, whereas we consider the effects of changing this frequency. Furthermore, we propose models for predicting prevalence on a village level, whereas their model implicitly assumes all villages to be homogeneous. Our dataset consists of information on screening operations in the period 2004–2013 in the health zone Kwamouth in the province Bandundu. The raw data were cleaned up based on the rules described in S1 Text. The number of villages in the dataset equals 2324, and 143 of these villages were included in the data analysis based on three criteria: (1) the number of screening rounds recorded was at least two, (2) at least one case has been detected over the time horizon, and (3) at least one record of the number of people screened during the operation was available. The first condition is necessary to enable modeling the prevalence level observed in a given screening round as a function of past observed prevalence levels, and the third condition is necessary for estimating prevalence itself. We estimate the prevalence level in a village at the time of a screening round as the number of cases detected in that round over the number of people participating in that round. Furthermore, lacking population size data, we estimate the population of a village as the maximum number of people participating in a screening round reported for that village. Though our dataset also contains cases identified by the regular health system in between successive screening rounds, these do not yield (direct) estimates of prevalence levels in the corresponding villages, as required by the models proposed in the next section. We therefore focus on the active case finding data only. The total number of screening rounds reported for the 143 villages included equals 766 (on average 5.4 per village). Fig 1 shows cumulative distributions of the observed prevalence level in these screening rounds (mean 0.0055, median 0.0011, standard deviation 0.0121), the time interval between each pair of consecutive screening rounds (mean 1.28, median 1.00, standard deviation 1.03), the estimated population for each village (mean 1073, median 450, standard deviation 2046), and the participation level in the screening rounds (mean 0.69, median 0.72, standard deviation 0.27). Note that the relatively large number of observations with a participation level of 100% is due to the method used to estimate the population sizes. Before we propose our prediction methods, we introduce some notation. A table of the most important notations used in this article can be found in S1 Table. Let sv = {sv1, sv2, …} denote the vector of screening time intervals for village v, where sv1 denotes the time between the start of the time horizon and the first screening for this village, sv2 denotes the time between the first and the second screening, and so on. The time at which the nth screening is performed is given by Svn = ∑m≤n svm, and the participation fraction in this screening round is denoted by pvn. Parameter Nv represents the population size of village v.
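The notation above translates directly into code. A small sketch (function names are illustrative) computing the screening times S_vn from the interval vector s_v, and the time since the last screening round, δ_v^-(t):

```python
import numpy as np

def screening_times(s):
    """Cumulative screening times S_n = sum_{m<=n} s_m from the intervals s."""
    return np.cumsum(s)

def time_since_last_screening(t, S):
    """delta^-(t) = min_n { t - S_n : S_n <= t }; None if no screening has
    taken place by time t."""
    past = S[S <= t]
    return (t - past.max()) if past.size else None
```

For example, intervals s = (1.0, 0.5, 2.0) give screening times S = (1.0, 1.5, 3.5), so the time since the last screening at t = 2.0 is 0.5.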
Furthermore, let iv represent historical information on HAT cases in this village: the numbers of cases detected during past screening rounds. We model the expected prevalence level at time t in village v as a function fv(⋅) of sv, iv, and some parameters β: fv(t, sv, iv, β). Note that the expected prevalence level is a latent, i.e. unobserved, variable, and that the observed prevalence level, xv(t), generally deviates from the expected value. We measure the prevalence levels fv(⋅) and xv(⋅) as fractions and represent the difference between the expected and observed prevalence level in village v by the random variable εv:

$$x_v(t) = f_v(t, s_v, i_v, \beta) + \varepsilon_v \quad (1)$$

Time series models such as discrete time ARMA, ARIMA or ARIMAX models seem to be the most popular methods for predicting prevalence (or incidence) (see e.g. [25,26]). These models describe the prevalence level at time t as a linear function of the prevalence levels at times t − 1, t − 2, … and (optionally) some other variables. Their applicability in our context is, however, limited. Discrete time models require estimates of the prevalence level at each time unit (e.g., each month), whereas information to estimate the HAT prevalence level is available only at moments at which a screening round is performed. Namely, many HAT patients are not detected by the regular health system, particularly if they are in the first stage of the disease [8]. The class of continuous time models is much more suitable for analyzing data observed at irregularly spaced times. These models assume that the variable of interest, fv(t, sv, iv, β), follows a continuous process, defining its value at each t > 0. The next subsections propose five continuous time models for predicting HAT prevalence levels. We again note that models describing the causal processes determining the observed prevalence levels in detail (e.g., by explicitly modelling disease incidence, passive case finding, death and cure) may be most intuitive, but require data that are not available on a village level. Therefore, to safeguard their relevance for practical application, the variables we include are only those that are available on a large scale. This does not imply that our models neglect the causal processes. Instead, they are to some extent accounted for in an implicit way by fitting the models to the observed prevalence levels. Data that are typically available at village level are the numbers of HAT cases found during screening rounds and the times of these screening rounds. For a given village, the first yields estimates of past prevalence levels, and the latter yield the time intervals between past screening rounds. We hypothesize that the current expected prevalence level at time t is related to past prevalence levels, past screening intervals, and in particular the time since the last screening round, which we denote by $\delta_v^-(t) = \min_n \{ t - S_{vn} \mid S_{vn} \le t \}$. Hence, we include (functions of) these variables in our models. Linear regression models are very widely used in the world of forecasting (see e.g. [27]). Major advantages of these models are that they are easy to understand, to implement, to fit, and to analyze. Therefore, the first model we introduce is a linear model (model 1), which also serves as a benchmark for our more advanced models. This model describes the expected HAT prevalence in a given village as a function of the time since the last screening and past prevalence levels. Such a linear model is, however, very vulnerable to a typical structure present in active case finding datasets. High past prevalence levels tend to increase the priority of screening a village, causing the time intervals between screening rounds to decrease. As a result, $\delta_v^-(t)$ is a highly “endogenous” variable. More formally, external variables (past prevalence levels) are correlated with both the dependent variable (fv(t)) and the independent variable ($\delta_v^-(t)$), which makes it hard to quantify the (causal) relation between them. In response to this, we present four alternative models. Model 2 is a fixed effects model, which adds a dummy variable for each village to the initial model. Model 3 is a (non-linear) exponential growth and decay model inspired by the SIS epidemic model. This model is used extensively for modeling epidemics that are characterized by an initial phase in which the number of infected individuals grows exponentially, and a second phase in which this number levels off to a time-invariant carrying capacity. We refer to model 3 as the logistic model with a constant carrying capacity. Finally, model 4 is a less data-dependent version of model 3 and model 5 is a variant of model 3 in which the carrying capacity is allowed to vary over time. As HAT prevalence levels are very low, the variance of these levels is high, which increases the chance that there are significant outliers among the observations. For example, no cases were detected in three out of four screening rounds
performed in a village of 122 people, whereas five cases were detected in the 4th round. This implies two things. First, observed prevalence levels will generally deviate significantly from expected prevalence levels. Second, we need to choose a technique for estimating the model coefficients that is robust with respect to outliers. Instead of Least Squares (LS) regression, one of the most commonly applied model fitting methods, we therefore use Least Absolute Deviations (LAD) regression to fit the model parameters, which is known to be relatively insensitive to outlying observations [32]. An alternative technique would be to use a maximum likelihood estimation (MLE) approach based on a heavy-tailed probability distribution for the observed prevalence levels. In S2 Table, we show the results obtained when assuming a Poisson, Beta-Binomial, or Negative Binomial distribution. Each of the MLE approaches is, however, clearly outperformed by the LAD regression approach. The variance of the observed prevalence level strongly depends on the sample size. For example, under the assumption of an independent infection probability for each person, the variance is inversely proportional to the sample size. We therefore weight the fitting deviation $e_{vn} = f_v(S_{vn}) - x_v(S_{vn})$ for observation n for village v by weight $w_{vn} = N_v \cdot p_{vn}$, yielding the following weighted LAD regression problem:

$$\min_\beta \; S_{abs}(\beta) = \sum_{(v,n)} w_{vn} \, |e_{vn}| \quad (13)$$

To deal with the risk of overfitting, we select the variables to be included in the models by means of a backward elimination method. This method initially includes all variables in the model, then iteratively removes the least significant variable (if its p-value > 0.10) and estimates the model with the remaining variables. The algorithm stops as soon as all remaining variables are significant or only one variable is left. We enforce that αv and κ cannot be removed by the backward elimination method, so as to preserve essential elements of the corresponding models. Hence, only parameters β1–β8 in models 1 and 2, and parameters β1–β2 in models 3–5, could be removed. Finally, to test the predictive performance of the models, we split the data into an estimation sample (which we use for fitting the model) and a prediction sample. Specifically, for each of the 143 villages, we include the last screening round in the prediction sample and include the others in the estimation sample. Next, we measure performance based on the mean of the prediction errors, $ME = \frac{\sum_v e_{v\hat{n}}}{|V|}$, indicating whether the predictions obtained by the model are biased, and based on two indicators for the amount of explained variation in the prevalence levels: the mean absolute error, $MAE = \frac{\sum_v |e_{v\hat{n}}|}{|V|}$, and the mean relative error, $MRE = \frac{\sum_v |e_{v\hat{n}}|}{\sum_v x_v(S_{v\hat{n}})}$. Here, the index combination $v\hat{n}$ indicates the last screening round for village v. The intuition behind the measures of explained variation is that they equal 0 if the predicted prevalence levels are exactly equal to the observed prevalence levels (i.e., the model perfectly explains the variation in the observed prevalence levels) and that their value increases when the absolute difference between predicted and observed levels increases (i.e. when the model explains less variation in observed prevalence levels). We use Matlab R2015b for the implementation of our methods. Table 1 presents the coefficient estimates for the variables of the five presented models. The results for models 1 and 2 are very similar. Seven of the eight variables are identified as non-significant by the backward elimination algorithm: the interaction terms, the long term prevalence level, the time since the last screening round, and the square root of the time since the last screening round. The resulting model provides a clear prediction method: the expected prevalence equals 24.5% of the prevalence level observed at the previous screening round (note: if this level was 0.0%, the estimated expected prevalence remains 0.0%) according to model 1, and equals 14.7% of this prevalence level plus a constant fraction αv according to model 2. Hence, this model predicts that, in the absence of screening activities, the expected prevalence remains the same over time. The fitted models 3, 4, and 5 reveal a clear and intuitive relationship between screening frequency, prevalence, and carrying capacity: a larger historical prevalence indicates a higher carrying capacity, and facing an equal historical prevalence at a higher historical screening frequency indicates a higher carrying capacity. The constant term has been identified as non-significant for models 3 and 4 and as significant for model 5. To illustrate the typical output of models 3 and 4, Fig 2 shows the development of the expected prevalence levels for two villages over time (the lines), as well as the observed prevalence levels (stars and circles). Furthermore, Fig 3 depicts the carrying capacities for the 143 villages in Kwamouth, as estimated by the LMCCC model. Though data to validate these estimates are lacking, we note that they are in the same order of magnitude as prevalence levels found during screening rounds.
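The three performance indicators used for model evaluation (ME for bias, MAE and MRE for explained variation) are straightforward to compute over the held-out screening rounds. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def prediction_scores(predicted, observed):
    """Bias and explained-variation indicators over one held-out screening
    round per village.

    ME  : mean error, sum(f - x) / |V|  (0 means unbiased predictions)
    MAE : mean absolute error, sum|f - x| / |V|
    MRE : mean relative error, sum|f - x| / sum(x)
    """
    f = np.asarray(predicted, dtype=float)
    x = np.asarray(observed, dtype=float)
    e = f - x
    return {"ME": e.mean(),
            "MAE": np.abs(e).mean(),
            "MRE": np.abs(e).sum() / x.sum()}
```

Lower MAE and MRE indicate more explained variation; an ME near zero indicates little systematic over- or under-prediction.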
The latter are usually between 1% and 5% in high or very high transmission areas, and exceed 10% in some extreme cases [33,34]. As mentioned in Section Model Fitting, we measure the predictive performance of the different models in terms of the prediction bias and in terms of the amount of variation explained. Table 2 contains the values of the different indicators for each of the models, and Fig 4 and S1 Fig compare the prediction errors produced by the different models. These prompt several interesting observations. First, the prediction bias ranges from 0.47/1000 (rLMCCC model) to −1.86/1000 (LM model). Given that the average observed prevalence in the 766 screening rounds in our dataset equals 5.5/1000, we consider the biases of the LM model and the LMVCC model to be quite substantial. Yet, this may be very well explained by the highly variable character of the HAT epidemic. A small number of outbreaks may substantially shift the average observed prevalence level. For example, without the four most negative prediction errors, the prediction bias for the LM model would be only −0.69/1000. Second, the LM model performs relatively well in terms of explained variation. Yet, we see two vulnerabilities of this model: (1) as discussed before, this model is likely to be hampered by endogeneity, inducing a potential bias in the coefficient estimates, and (2) the variation in the screening intervals is relatively small for the villages with the highest endemicity levels in our sample, as these villages are screened almost every year. When there is little variation in $\delta_v^-(t)$, the true effects of variations might not become visible. These two fundamental vulnerabilities may very well explain why (a function of) $\delta_v^-(t)$ has not been identified as significant for the LM model. As a result, this model unrealistically predicts that the value of the expected prevalence level remains the same over time in the absence of screening activities, contrasting with vast historical evidence. The same vulnerabilities apply to the FEM model, which also provides a counter-intuitive relation between the expected prevalence and $\delta_v^-(t)$. On top of that, its predictive power is relatively low, which could be explained by the fact that, for many villages, there is insufficient data to estimate the fixed effect accurately. As variants of the logistic model already fix the structure of the relationship between $\delta_v^-(t)$ and $f_v(S_{vn} + \delta_v^-(t))$ based on epidemiological insights, these models do not suffer from the vulnerabilities mentioned above. We therefore consider these models to have the most potential for accurately predicting HAT prevalence levels in general (i.e.
, in any region and for any time horizon). Among the three logistic model variants, model 3 (LMCCC) performs reasonably well in terms of both criteria. Model 5 (LMVCC) has a substantial prediction bias, but performs best in terms of explained variation, as can be seen in Fig 4 (its performance is closest to the “perfect fit”). Though model 4 (rLMCCC) performs best in terms of prediction bias, it performs very weakly in terms of explained variation. Hence, among the logistic model variants, there is no clear winner when both criteria are assigned equal importance. For planning decisions, however, we consider model 5 to be most suitable, followed by model 3. The reason is that, in contrast with prediction bias, explained variation indicates the ability to identify differences in expected prevalence levels between villages, as required for effective planning decisions. Hence, identifying an effective prioritization of the different villages will be more important than obtaining unbiased estimates of the resulting prevalence levels. The sensitivity level s is known to differ between regions [35]. Furthermore, the population size of a village had to be estimated, which induces a potential bias in the participation level estimates. These issues call into question the robustness of our results on the logistic model variants (note: models 1 and 2 are not affected by this, as they do not use these parameters). S3 Table shows the results of a sensitivity analysis, which largely confirm our findings. In all scenarios analyzed, model 5 remains best in terms of explained variation, followed by model 3, and models 3 and 4 outperform model 5 in terms of prediction bias. Another assumption that questions the robustness of our results is the one about the expected prevalence level at the beginning of the time horizon (i.e., at 01-01-2004). S4 Table provides the results of a sensitivity analysis on this assumption. Again, our main findings remain the same. In the previous section we argue that, among the models analyzed in this paper, variants of the logistic model have the most potential for accurately predicting HAT prevalence levels in general. In this section we demonstrate the applicability of one of these model variants to analyze the effectiveness of screening operations. In particular, since information on the development of the carrying capacities is lacking, as required by the LMVCC model, and since we consider the predictive performance of model 3 superior to that of model 4, we choose to use the LMCCC model as the basis for this analysis. We do note that the theoretical results presented here also hold for model 4 and, if the carrying capacity remains constant, for model 5 as well. Our analysis concentrates on the fixed frequency screening policy. This policy assigns to each village a fixed time interval for consecutive screening rounds based on the village’s characteristics. As the policy is relatively easy to understand and implement, it has been the basis for guideline documents for HAT control. For example, the WHO recommends a screening interval of one year for villages reporting at least one case in the past three years, and an interval of 3 years for villages that did not report a case in the last three years but did report at least one case during the past five years [1]. In the first part of this section, we mathematically analyze the impact of a fixed screening policy for a given village and investigate the screening frequency required to eradicate HAT in that village. As mentioned in the introduction, we define that HAT is eradicated in the long term if the expected prevalence level goes to zero in the long term. A shorter term objective is to eliminate HAT, where elimination is defined as having at most one new case per 10000
persons per year [1,7]. For example, the WHO’s roadmap towards elimination of HAT states the aim to eliminate (gambiense) HAT as a public health problem by 2020—which is defined as having less than one new case per 10000 inhabitants in at least 90% of the disease foci [1]—and to reach worldwide elimination by 2030. The second part of this section presents analytical results about the time needed to reach elimination and about the screening frequency requirements for reaching elimination within a given time frame. As our models consider expected prevalence instead of incidence, we redefine elimination as “reaching an expected prevalence level of one case per 10000”. We argue that the times and efforts required to reach this elimination target are practically suitable lower bounds on the times and efforts needed to reach the WHO’s targets. First, incidence and prevalence levels are argued to be “comparable” for HAT if mobile units visit afflicted areas infrequently [16]. If mobile teams visit the areas more frequently, incidence will only become larger compared to prevalence, and the prevalence level target will be easier to achieve than the incidence level target (e.g., under the assumption that the fraction of flies infected is proportional to the fraction of humans infected, this follows directly from the epidemic model presented by Rogers [22]). Second, even if the expected prevalence level is below the defined threshold level, the intrinsic variability of the HAT epidemic may induce an actual prevalence level that exceeds this threshold. Throughout this section, we consider an imaginary village with a constant carrying capacity K. (For the sake of conciseness, we omit the subscript v in this section.) Furthermore, we assume a constant participation level pvn = p, 0 < p < 1, and a fixed screening interval τ. The expected prevalence level at the beginning of the time horizon is denoted by f(0), f(0) > 0. Finally, recall that s, 0 < s < 1, denotes the sensitivity level. This paper introduces and analyzes five models for predicting HAT prevalence in a given village based on past observed prevalence levels and past screening activities in that village. Based on the quality of prevalence level predictions in 143 villages in Kwamouth (DRC), and based on the theoretical foundation underlying the models, we conclude that variants of the logistic model—a model inspired by the SIS model—are most practically suitable for predicting HAT prevalence levels. Sensitivity analyses show that this conclusion is very robust even when assumptions about participation levels, the sensitivity of the diagnostic test, or the initialization value of the prevalence curves are violated. Second, we demonstrate the applicability of one variant of the logistic model to analyze the effectiveness of the fixed frequency screening policy, which assigns to each village a fixed time interval for consecutive screening rounds. Due to the intrinsic variability of the HAT epidemic, observed prevalence levels will generally deviate significantly from predicted prevalence levels. We strongly believe, however, that this does not render predictions worthless in the context of planning decisions. In contrast, a major contribution of our models is that they indicate the expected disease burden in different villages and can hence be applied to develop planning policies that aim to minimize the total expected disease burden for the villages considered. Our analysis of the fixed frequency screening policy reveals that eradication of HAT is to be expected in the long term when the screening
interval is smaller than a given threshold . This threshold strongly depends on the case detection fraction: the fraction of cases who participate in the screening rounds and are detected by the diagnostic tests . Under current conditions , we estimate the threshold to be approximately 15 months . This suggests that annual screening , as recommended by the WHO for endemic areas , will eventually lead to eradication . More specifically , our model predicts that annual screening will lead to eradication if the case detection fraction exceeds 55% . The logistic model also yields expressions for the time needed to reach the more short-term target of eliminating HAT and for the screening interval required to eliminate HAT within a given time frame . These suggest that it takes 10 years to eliminate HAT in a village or focus with a prevalence of 5/1000 ( under current conditions and annual screening ) . Furthermore , we estimate that it is only feasible to reach elimination within five years if the case detection fraction is very high—roughly above 75%—or if the current prevalence level is very low—roughly below 1/1000 . We argue that these figures are practically suitable lower bounds on the time or efforts needed to reach the WHO’s targets for elimination . Our results on requirements for eradication or elimination are based on a deterministic model , which calls into question their validity for reality , where events are stochastic . We note , however , that we model the expected behavior of a stochastic system , and hence that our results also hold in expectation for the stochastic system . On the other hand , we acknowledge that our models are not perfect . For example , we neglect interaction effects between neighboring villages . It would therefore be interesting and relevant to investigate whether our results can be reproduced by a validated simulation model . A necessary condition for the applicability of our prediction models is that data about
possibl | Introduction, Materials and Methods, Results, Discussion | To eliminate and eradicate gambiense human African trypanosomiasis ( HAT ) , maximizing the effectiveness of active case finding is of key importance ., The progression of the epidemic is largely influenced by the planning of these operations ., This paper introduces and analyzes five models for predicting HAT prevalence in a given village based on past observed prevalence levels and past screening activities in that village ., Based on the quality of prevalence level predictions in 143 villages in Kwamouth ( DRC ) , and based on the theoretical foundation underlying the models , we consider variants of the Logistic Model—a model inspired by the SIS epidemic model—to be most suitable for predicting HAT prevalence levels ., Furthermore , we demonstrate the applicability of this model to predict the effects of planning policies for screening operations ., Our analysis yields an analytical expression for the screening frequency required to reach eradication ( zero prevalence ) and a simple approach for determining the frequency required to reach elimination within a given time frame ( one case per 10000 ) ., Furthermore , the model predictions suggest that annual screening is only expected to lead to eradication if at least half of the cases are detected during the screening rounds ., This paper extends knowledge on control strategies for HAT and serves as a basis for further modeling and optimization studies . 
| The primary strategy to fight gambiense human African trypanosomiasis ( HAT ) is to perform extensive population screening operations among endemic villages ., Since the progression of the epidemic is largely influenced by the planning of these operations , it is crucial to develop adequate models on this relation and to employ these for the development of effective planning policies ., We introduce and test five models that describe the expected development of the HAT prevalence in a given village based on historical information ., Next , we demonstrate the applicability of one of these models to evaluate planning policies , presenting mathematical expressions for the relationship between participation in screening rounds , sensitivity of the diagnostic test , endemicity level in the village considered , and the screening frequency required to reach eradication ( zero prevalence ) or elimination ( one case per 10000 ) within a given time-frame ., Applying these expressions to the Kwamouth health zone ( DRC ) yields estimates of the maximum screening interval that leads to eradication , the expected time to elimination , and the case detection fraction needed to reach elimination within five years ., This paper serves as a basis for further modeling and optimization studies . | medicine and health sciences, infectious disease epidemiology, african trypanosomiasis, tropical diseases, parasitic diseases, health care, mathematics, forecasting, statistics (mathematics), screening guidelines, neglected tropical diseases, infectious disease control, research and analysis methods, public and occupational health, infectious diseases, zoonoses, epidemiology, mathematical and statistical techniques, protozoan infections, trypanosomiasis, differential equations, health care policy, physical sciences, statistical methods | null |
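The fixed-frequency screening analysis summarized in this row can be sketched numerically. A minimal toy sketch, assuming illustrative placeholder values for the logistic growth rate `r`, the endemic ceiling `pmax`, and the case detection fraction `detect_frac` (none of these are the paper's fitted estimates):

```python
import math

def simulate_prevalence(p0, r, pmax, tau, detect_frac, horizon):
    """Toy fixed-frequency screening policy: prevalence grows logistically
    between screening rounds; each round removes the fraction of cases that
    participate and are detected (the 'case detection fraction')."""
    p = p0
    for _ in range(int(horizon / tau)):
        # closed-form logistic growth over one screening interval tau
        p = pmax / (1.0 + (pmax / p - 1.0) * math.exp(-r * tau))
        p *= 1.0 - detect_frac  # screening round removes detected cases
    return p

# annual screening with a 60% case detection fraction drives prevalence down,
# while screening every three years lets the epidemic rebound between rounds
p_annual = simulate_prevalence(p0=0.005, r=0.5, pmax=0.05, tau=1.0,
                               detect_frac=0.6, horizon=10)
p_rare = simulate_prevalence(p0=0.005, r=0.5, pmax=0.05, tau=3.0,
                             detect_frac=0.6, horizon=10)
```

Whether the trajectory decays toward zero or settles at a nonzero fixed point depends on how the between-round growth compares with the removal factor `1 - detect_frac`, which is the mechanism behind the screening-interval threshold discussed above.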
1,569 | journal.pcbi.1005763 | 2,017 | Probabilistic models for neural populations that naturally capture global coupling and criticality | We represent the response of a neural population with a binary vector $\mathbf{s} = \{s_1 , s_2 , \ldots , s_N\} \in \{0 , 1\}^N$ identifying which of the N neurons elicited at least one action potential ( ‘1’ ) and which stayed silent ( ‘0’ ) during a short time window . Our goal is to build a model for the probability distribution of activity patterns , $p(\mathbf{s})$ , given a limited number M of samples , $D = \{\mathbf{s}^{(1)} , \ldots , \mathbf{s}^{(M)}\}$ , observed in a typical recording session . The regime we are mainly interested in is the one where the dimensionality of the problem is sufficiently high that the distribution p cannot be directly sampled from data , i . e . when $2^N \gg M$ . Note that we are looking to infer models for the unconditional distribution over neural activity patterns ( i . e . the population “vocabulary” ) , explored in a number of recent papers 8 , 9 , 11 , 13–18 , 24 , 34 , rather than to construct stimulus-conditional models ( i . e .
, the “encoding models” , which have a long tradition in computational neuroscience 1–3 ) . Previous approaches to modeling globally coupled populations focused on the total network activity , also known as synchrony , $K(\mathbf{s}) = \sum_{i=1}^{N} s_i$ . The importance of this quantity was first analyzed in the context of probabilistic models in Ref 11 , where the authors showed that a K-pairwise model , which generalizes a pairwise maximum entropy model by placing constraints on the statistics of $K(\mathbf{s})$ , is much better at explaining the observed population responses of 100+ salamander retinal ganglion cells than a pairwise model . Specifically , a pairwise model assumes that the covariance matrix between single neuron responses , $C_{ij} = \langle s_i s_j \rangle$ , which can be determined empirically from data D , is sufficient to estimate the probability of any population activity pattern . In the maximum entropy framework , this probability is given by the most unstructured ( or random ) distribution that reproduces exactly the measured $C_{ij}$ : $p(\mathbf{s} ; \mathbf{J}) = \frac{1}{Z(\mathbf{J})} \exp\left( \sum_{i , j=1}^{N} J_{ij} s_i s_j \right)$ ( 1 ) , where $Z(\mathbf{J})$ is a normalization constant , and $\mathbf{J}$ is a coupling matrix which is chosen so that samples from the model have the same covariance matrix as data . Note that because $s_i^2 = s_i$ , the diagonal terms $J_{ii}$ of the coupling matrix correspond to single neuron biases , i . e . firing probabilities in the absence of spikes from other neurons ( previous work 11 used a representation $s_i \in \{-1 , 1\}$ for which the single neuron biases need to be included as separate parameters and where $J_{ii}$ are all 0 ) . A K-pairwise model generalizes the pairwise model and has the form $p(\mathbf{s} ; \mathbf{J} , \boldsymbol{\phi}) = \frac{1}{Z(\mathbf{J} , \boldsymbol{\phi})} \exp\left( \sum_{i , j=1}^{N} J_{ij} s_i s_j + \sum_{k=0}^{N} \phi_k \, \delta_{k , K(\mathbf{s})} \right)$ ( 2 ) . The coupling matrix $\mathbf{J}$ has the same role as in a pairwise model , while the additional parameters $\boldsymbol{\phi}$ are chosen to match the probability distribution of $K(\mathbf{s})$ under the model to that estimated from data . The “potentials” $\phi_k$ introduced into the K-pairwise probabilistic model , Eq ( 2 ) , globally couple the population , and cannot be reduced to low-order interactions between , e . g . , pairs or triplets of neurons , except in very special cases . We will generically refer to probabilistic models that impose non-trivial constraints on population-level statistics ( of which the distribution of total network activity K is one particular example ) as “globally coupled” models . Here we introduce new semiparametric energy-based models that extend the notion of global coupling . These models are defined as follows: $p(\mathbf{s} ; \alpha , V) = \frac{e^{-V( E(\mathbf{s} ; \alpha) )}}{Z(\alpha , V)}$ ( 3 ) , where $E(\mathbf{s} ; \alpha)$ is some energy function parametrized by α , and V is an arbitrary increasing differentiable function which we will refer to simply as the “nonlinearity . ” The parametrization of the energy function should be chosen so as to reflect local interactions among neurons . Crucially , while it is necessary to choose a specific parametrization of the energy function , we do not make any assumptions on the shape of the nonlinearity—we let the shape be determined nonparametrically from data . Fig 1 schematically displays the relationship between the previously studied probabilistic models of population activity and two semiparametric energy-based models that we focus on in this paper , the semiparametric independent model ( which we also refer to as “V ( independent ) ” ) and the semiparametric pairwise model ( which we also refer to as “V ( pairwise ) ” ) . Our motivation for introducing the global coupling via the nonlinearity V traces back to the argument made in Ref 11 for choosing to constrain the statistics of synchrony , $K(\mathbf{s})$ ; in short , the key intuition in earlier work has been that $K(\mathbf{s})$ is a biologically relevant quantity which encodes information about the global state of a population . There are , however , many other quantities whose distributions could contain signatures of global coupling in a population . In particular , while most energy functions—e . g .
, the pairwise energy function $E(\mathbf{s} ; \mathbf{J}) = -\sum_{i , j} J_{ij} s_i s_j$—are defined solely in terms of local interactions between small groups of neurons , the statistics of these same energy functions ( for instance , their moments ) are strongly shaped by global effects . Specifically , we show in Methods that the role of the nonlinearity in Eq ( 3 ) is precisely to match the probability density of the energy under the model to that estimated from data . In other words , once any energy function for Eq ( 3 ) has been chosen , the nonlinearity V will ensure that the distributions of that particular energy in the model and over data samples agree . Constraining the statistics of the energy $E(\mathbf{s} ; \alpha)$ is different from constraining the statistics of $K(\mathbf{s})$ , used in previous work . First , the energy depends on a priori unknown parameters α which must be learned from data . Second , while $K(\mathbf{s})$ is always an integer between 0 and N , the energy can take up to $2^N$ distinct values; this allows for extra richness but also requires us to constrain the ( smoothed ) histogram of energy rather than the probability of every possible energy value , to prevent overfitting . As we discuss next , the statistics of the energy are also closely related to criticality , a formal , model-free property distinguishing large , globally coupled neural populations . The notion of criticality originates in thermodynamics , where it encompasses several different properties of systems undergoing a second-order phase transition 35 . Today , many other phenomena , such as power-law distributed sizes of “avalanches” in neural activity , have been termed critical 20 . Our definition , which we discuss below , is a restricted version of thermodynamic criticality . We consider a sequence of probability distributions $\{ p_N \}_{N=1}^{\infty}$ over the responses of neural populations of increasing sizes , N .
These probability distributions define the discrete random variable $\mathbf{s}$ ( the population response ) , but they can also be thought of simply as functions which map a population response to a number between 0 and 1 . Combining these two viewpoints , we can consider a real-valued random variable $p_N(\mathbf{s}) \in ( 0 , 1 )$ which is constructed by applying the function $p_N$ to the random variable $\mathbf{s}$ . The behavior of this random variable as $N \to \infty$ is often universal , meaning that some of its features are independent of the precise form of $p_N$ . As is conventional , we work with the logarithm of $p_N(\mathbf{s})$ instead of the actual distribution . We call a population “critical” if the standard deviation of the random variable $\log p_N(\mathbf{s})/N$ does not vanish as the population size becomes large , i . e . $\frac{1}{N} \sigma( \log p_N(\mathbf{s}) ) \nrightarrow 0$ as $N \to \infty$ ( 4 ) . ( For completeness , we further exclude some degenerate cases such as when the probability density of $\log p_N(\mathbf{s})/N$ converges to two equally sized delta functions . ) The above definition is related to criticality as studied in statistical physics . In thermodynamics , $\sigma( \log p_N(\mathbf{s}) )/N$ is proportional to the square root of the specific heat , which diverges in systems undergoing a second-order phase transition . While at a thermodynamical critical point $\sigma( \log p_N(\mathbf{s}) )/N$ scales as $N^{-\gamma}$ with $\gamma \in ( 0 , 1/2 )$ , here we are concerned with the extreme case of γ = 0 . Rather than being related to second-order phase transitions , this definition of criticality is related to the so-called Zipf law 23 . A pattern $\mathbf{s}$ can be assigned a rank by counting how many other patterns have a higher probability . In its original form , a probability distribution is said to satisfy Zipf law if the probability of a pattern is inversely proportional to its rank . No real probability distribution is actually expected to satisfy this definition precisely , but there is a weaker form of Zipf law which concerns very large populations , and
which is much less restrictive . This weaker form can be stated as a smoothed version of the original Zipf law . Consider patterns whose rank is in some small interval $ r , r + \Delta_N $ , and denote by $\bar{p}_N( r )$ the average probability of these patterns . We generalize the notion of Zipf law to mean that for very large populations $\bar{p}_N( r ) \propto r^{-1}$ ( $\Delta_N$ is assumed to go to zero sufficiently quickly with N ) . As shown in Ref 23 , a system is critical in the sense of Eq ( 4 ) precisely when it follows this generalized Zipf law . Practically speaking , no experimentally studied population ever has an infinite size , and a typical way to check for signs of criticality is to see if a log-log plot of a pattern probability versus its rank resembles a straight line with slope −1 . Most systems are not expected to be critical . The simplest example is a population of identical and independent neurons , $p_N(\mathbf{s}) = q^{\sum_{i=1}^{N} s_i} ( 1 - q )^{N - \sum_{i=1}^{N} s_i}$ ( 5 ) , where q is the probability of eliciting a spike . For such a population , $\frac{1}{N} \sigma( \log p_N(\mathbf{s}) ) = \frac{1}{\sqrt{N}} \sqrt{q ( 1 - q )} \left| \log \frac{q}{1 - q} \right|$ ( 6 ) , which vanishes for a very large number of neurons , and so the system is not critical . More generally , if $p_N(\mathbf{s})$ can be factorized into a product of probability distributions over smaller subpopulations which are independent of each other and whose number is proportional to N , then $\log p_N(\mathbf{s})/N$ turns into an empirical average whose standard deviation is expected to vanish in the large N limit , and the population is not critical . Reversing this argument , signatures of criticality can be interpreted as evidence that the population is globally coupled , i . e .
that it cannot be decomposed into independent parts . These preliminaries establish a direct link between criticality and the semiparametric energy models of Eq ( 3 ) . The nonlinearity in semiparametric energy models makes sure that the statistics of the energy $E(\mathbf{s} ; \alpha)$ , and , since V ( E ) is monotone , also the statistics of $\log p(\mathbf{s} ; \alpha , V)$ are modeled accurately ( see Methods ) . Because the behavior of the log probability is crucial for criticality , as argued above , semiparametric energy models can capture accurately and efficiently the relevant statistical structure of any system that exhibits signs of criticality and/or global coupling . To fully specify semiparametric energy models , we need a procedure for constructing the nonlinearity V ( E ) . We cannot let this function be arbitrary because then the model could learn to assign nonzero probabilities only to the samples in the dataset , and hence it would overfit . To avoid such scenarios , we will restrict ourselves to functions which are increasing . We also require V ( E ) to be differentiable so that we can utilize its derivatives when fitting the model to data . The class of increasing differentiable functions is very large . It includes functions as diverse as the sigmoid , $1/( 1 + \exp( -E ) )$ , and the square root , $\sqrt{E}$ ( for positive E ) , but we do not want to restrict ourselves to any such particular form—we want to estimate V ( E ) nonparametrically . Nonparametric estimation of monotone differentiable functions is a nontrivial yet very useful task ( for example , consider tracking the height of a child over time—the child is highly unlikely to shrink at any given time ) . We follow Ref 36 and restrict ourselves to the class of strictly monotone twice differentiable functions for which $V''/V'$ is square-integrable . Any such function can be represented in terms of a square-integrable function W and two constants $\gamma_1$ and $\gamma_2$ as $V(E) = \gamma_1 + \gamma_2 \int_{E_0}^{E} \exp\left( \int_{E_0}^{E'} W(E'') \, dE'' \right) dE'$ ( 7 ) , where $E_0$ is arbitrary and sets the constants to $\gamma_1 = V(E_0)$ , $\gamma_2 = V'(E_0)$ . The function is either everywhere increasing or everywhere decreasing ( depending on the sign of $\gamma_2$ ) because the exponential is always positive . Eq ( 7 ) is easier to understand by noting that V ( E ) is a solution to the differential equation $V'' = W V'$ . This means , for example , that on any interval on which W = 0 , the equation reduces to $V'' = 0$ , and so V ( E ) is a linear function on this interval . If V ( E ) is increasing ( $V' > 0$ ) , it also shows that the sign of W at a given point determines the sign of the second derivative of V at that point . An advantage of writing the nonlinearity in the form of Eq ( 7 ) is that we can parametrize it by expanding W in an arbitrary basis without imposing any constraints on the coefficients of the basis vectors , yet V ( E ) is still guaranteed to be monotone and smooth . In particular , we will use piecewise-constant functions for W . This allows us to use unconstrained optimization techniques for fitting our models to data . We start by considering one of the simplest models of the form Eq ( 3 ) , the semiparametric independent model: $p(\mathbf{s} ; \alpha , V) = \frac{e^{-V( -\sum_{i=1}^{N} \alpha_i s_i )}}{Z(\alpha , V)}$ ( 8 ) . If V were a linear function , the model would reduce to an independent model , i . e .
a population of independent neurons with diverse firing rates . In general , however , V introduces interactions between the neurons that may not have a straightforward low-order representation . When fitted to our data , the nonlinearity V turns out to be a concave function ( see later sections on more complex models for a detailed discussion of the shape of the nonlinearity ) . Note that if V had a simple functional form such as a low-order polynomial , then the model Eq ( 8 ) would be closely related to mean field models of ferromagnetism with heterogeneous local magnetic fields studied in physics . Our first goal is to use this simple model to verify our intuition that the nonlinearity helps to capture criticality . Many population patterns are observed several times during the course of the experiment , and so it is possible to estimate their probability simply by counting how often they occur in the data 19 . Given this empirical distribution , we construct a corresponding Zipf plot—a scatter plot of the frequency of a pattern vs its rank . For systems which are close to critical , this should yield a straight line with slope close to −1 on a log-log scale . We repeat the same procedure with samples generated from a semiparametric independent model as well as an independent model , which were both fitted to the responses of all 160 neurons . Fig 2 shows all three scatter plots . The independent model vastly deviates from the empirical Zipf plot; specifically , it greatly underestimates the probabilities of the most likely states . In contrast , the learned semiparametric independent model follows a similar trend to that observed in data . This does not mean that the semiparametric independent model itself is an excellent model for the detailed structure in the data , but it is one of the simplest possible extensions of the trivial independent model that qualitatively captures both global coupling and the signatures of criticality . Since the
semiparametric independent model is able to capture the criticality of the data distribution , we also expect it to accurately model other features of the data which are related to the globally coupled nature of the population ., To verify this , Fig 3A compares the empirical probability distribution of the total activity of the population K ( s ) = ∑i si to that predicted by the semiparametric independent model ., The match is very accurate , especially when compared to the same distribution predicted by the independent model ., This result goes hand in hand with the analysis in 39 which showed that interactions of all orders ( in our case mediated by the nonlinearity ) are necessary to model the wide-spread distribution of the total activity ., The independent model is a maximum entropy model which constrains the mean responses , 〈si〉 , of all neurons ., In other words , neurons sampled from the model would have the same firing rates as those in the data ( up to sampling noise ) ., Even though the semiparametric independent model is strictly more general , it does not retain this property when the parameters α and the nonlinearity V are learned by maximizing the likelihood of data ., Fig 3B demonstrates this point: although the predicted firing rates are approximately correct , there are slight deviations ., On the other hand , the nonlinearity induces pairwise correlations between neurons which is something the independent model by construction cannot do ., Fig 3C compares these predicted pairwise correlations to their data estimates ., While there is some correlation between the predicted and observed covariances , the semiparametric independent model often underestimates the magnitude of the covariances and does not capture the fine details of their structure ( e . g . 
the largest covariance predicted by the semiparametric independent model is about 5× smaller than the largest covariance observed in the data ) . This is because a combination of independent terms and a single nonlinearity does not have sufficient expressive power , motivating us to look for a richer model . One way to augment the power of the semiparametric independent model that permits a clear comparison to previous work is by means of the semiparametric pairwise model: $p(\mathbf{s} ; \mathbf{J} , V) = \frac{1}{Z(\mathbf{J} , V)} \exp\left( -V\left( -\sum_{i , j=1}^{N} J_{ij} s_i s_j \right) \right)$ ( 9 ) . We fit this model to the responses of the various subpopulations of the 160 neurons , and we compare the resulting goodness-of-fit to that of a pairwise ( Eq ( 1 ) ) , K-pairwise ( Eq ( 2 ) ) , and semiparametric independent model ( Eq ( 8 ) ) . We measure goodness-of-fit as the improvement of the log-likelihood of data per neuron under the model relative to the pairwise model , as shown in Fig 4A . This measure reflects differences among models rather than differences among various subpopulations . The semiparametric pairwise model consistently outperforms the other models , and this difference grows with the population size . To make sure that this improvement is not specific to this particular experiment , we also fitted the models to two additional recordings from the salamander retina which were also collected as part of the study 11 . One consists of 120 neurons responding to 69 repeats of a 30 second random checkerboard stimulus , and the other of 111 neurons responding to 98 repeats of a 10 second random full-field flicker stimulus . As shown in Fig 4B , the improvements of individual models on these datasets are consistent with the ones observed for the population stimulated with a natural movie . The advantage of using likelihood as a goodness-of-fit measure is its universal applicability which , however , comes hand-in-hand with the difficulty of interpreting the quantitative likelihood
differences between various models . An alternative comparison measure that has more direct relevance to neuroscience asks how well the activity of a single chosen neuron can be predicted from the activities of the other neurons in the population . Given any probabilistic model for the population response , we use Bayes’ rule to calculate the probability of the ith neuron spiking ( $s_i = 1$ ) or being silent ( $s_i = 0$ ) conditioned on the activity of the rest of the population ( $\mathbf{s}_{-i}$ ) as $p( s_i \mid \mathbf{s}_{-i} ; \alpha ) = \frac{p(\mathbf{s} ; \alpha)}{p( s_i = 1 , \mathbf{s}_{-i} ; \alpha ) + p( s_i = 0 , \mathbf{s}_{-i} ; \alpha )}$ ( 10 ) . We turn this probabilistic prediction into a nonrandom one by choosing whether the neuron is more likely to spike or be silent given the rest of the population , i . e . $s_i( \mathbf{s}_{-i} ; \alpha ) = \operatorname{argmax}_{s_i \in \{ 0 , 1 \}} \, p( s_i \mid \mathbf{s}_{-i} ; \alpha )$ ( 11 ) . In Fig 4C and 4D we compare such predictive single neuron models constructed from semiparametric pairwise , K-pairwise , pairwise , and semiparametric independent models learned from the data for populations of various sizes . Specifically , we ask how often these models would make a mistake in predicting whether a chosen single neuron has fired or not . Every population response in our dataset corresponds to 20 ms of an experiment , and so we can report this accuracy as a number of errors per unit of time . Predictions based on the semiparametric pairwise model are consistently the most accurate . Fig 5A shows the nonlinearities of the semiparametric pairwise models that we learned from data . In order to compare the nonlinearities inferred from populations of various sizes , we normalize the domain of the nonlinearity as well as its range by the number of neurons . Even though the nonlinearities could have turned out to have , e . g .
a sigmoidal shape , the general trend is that they are concave functions whose curvature—and thus departure from the linear V that signifies no global coupling—grows with the population size ., The shape of these nonlinearities is reproducible over different subnetworks of the same size with very little variability ., To further visualize the increasing curvature , we extrapolated what these nonlinearities might look like if the size of the population was very large ( the black curve in Fig 5A ) ., This extrapolation was done by subtracting an offset from each curve so that V ( 0 ) = 0 , and then fitting a straight line to a plot of 1/N vs . the value of V at points uniformly spaced in the function’s domain ., The plots of 1/N vs . V are only linear for N ≥ 80 , and so we only used these points for the extrapolation which is read out as the value of the fit when 1/N = 0 ., To quantify the increasing curvature , Fig 5B shows the average absolute value of the second derivative of V across the function’s domain ., The coupling matrix J of both the pairwise and the semiparametric pairwise models describes effective interactions between neurons , and so it is interesting to ask how the couplings predicted by these two models are related ., While Fig 5C shows a strong dependency between the couplings in a network of N = 160 neurons , the dependency is not deterministic and , moreover , negative couplings tend to be amplified in the semiparametric pairwise model as compared to the pairwise model ., Similarly to the semiparametric independent model , there is no guarantee that the semiparametric pairwise model will reproduce observed pairwise correlations among neurons exactly , even though pairwise model has this guarantee by virtue of being a maximum entropy model ., Fig 5D shows that despite the lack of such a guarantee , semiparametric pairwise model predicts a large majority of the correlations accurately , with the possible exceptions of several very strongly 
correlated pairs . This is simply because the semiparametric pairwise model is very accurate—the inset of Fig 5D shows that it can also reproduce third moments of the responses . A K-pairwise model also has this capability but , as shown in Ref 11 , a pairwise model systematically mispredicts moments higher than second order . Suppose we use the semiparametric pairwise model to analyze a very large population which is not globally coupled and can be divided into independent subpopulations . The only way the model in Eq ( 9 ) can be factorized into a product of probability distributions over the subpopulations is if the function V is linear . Therefore , the prior knowledge that the population is not globally coupled immediately implies the shape of the nonlinearity . Similarly , prior knowledge that the population is critical also carries a lot of information about the shape of the nonlinearity . We show in Methods that if the parameters α are known , then the optimal nonlinearity in Eq ( 3 ) can be explicitly written as $V(E) = \log \bar{\rho}( E ; \alpha ) - \log \hat{\bar{p}}( E ; \alpha )$ ( 12 ) , where $\bar{\rho}( E ; \alpha )$ is the density of states , which counts the number of patterns $\mathbf{s}$ whose energy is within some narrow range $ E , E + \Delta $ . The density of states is a central quantity in statistical physics that can be estimated also for neural activity patterns , either directly from data or from inferred models 19 . Similarly , $\hat{\bar{p}}( E ; \alpha )$ is the empirical probability density of the energy $E(\mathbf{s} ; \alpha)$ smoothed over the same scale Δ . Eq ( 12 ) follows from the relation $\hat{\bar{p}}( E ; \alpha ) \propto \bar{\rho}( E ; \alpha ) \exp( -V(E) )$ , i . e .
the probability of some energy level is just the number of states with this energy times the probability of each of these states ( see Methods ) . We would like to establish a prior expectation on what the large N limit of the nonlinearities in Fig 5A is . Adopting the same normalization as in the figure , we denote $\epsilon(\mathbf{s} ; \alpha) = E(\mathbf{s} ; \alpha)/N$ . Changing variables and rewriting Eq ( 12 ) in terms of the empirical probability density of the normalized energy , $\hat{\bar{p}}_\epsilon( \epsilon ) = N \hat{\bar{p}}( \epsilon N ; \alpha )$ , yields $V( \epsilon N ) = \log \bar{\rho}( \epsilon N ; \alpha ) - \log \hat{\bar{p}}_\epsilon( \epsilon ) + \log N$ ( 13 ) . For a system where $s_i$ can take on two states , the total number of possible activity patterns is $2^N$ , and so we expect the log of the density of states to be proportional to N . If the system is critical , then by virtue of Eq ( 4 ) $\sigma( \log p_N(\mathbf{s}) )$ is proportional to N , and similarly we also expect $\sigma( E(\mathbf{s} ; \alpha) ) \propto N$ . This means that $\sigma( \epsilon(\mathbf{s} ; \alpha) ) = \sigma( E(\mathbf{s} ; \alpha) )/N$ converges to some finite , nonzero number , and therefore $\log \hat{\bar{p}}_\epsilon( \epsilon )$ also stays finite no matter how large the population is . Taken together , for large critical populations , the first term on the right hand side of Eq ( 13 ) is the only one which scales linearly with the population size , and hence it dominates the other terms: $V(E) \approx \log \bar{\rho}( E ; \alpha )$ ( 14 ) . One of our important results is thus that for large critical populations , the nonlinearity should converge to the log density of states of the inferred energy model . In other words , for critical systems as defined in Eq ( 4 ) , there is a precise matching relation between the nonlinearity V ( E ) and the energy function $E(\mathbf{s} ; \alpha)$ ; in theory this is exact as $N \to \infty$ , but may hold approximately already at finite N .
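The identity behind Eq ( 12 )—within any narrow energy bin , $\log \bar{\rho} - \log \hat{\bar{p}}$ recovers V up to the constant $\log Z$—can be checked by brute-force enumeration on a toy model. A sketch assuming a small population with random illustrative couplings and a softplus nonlinearity (all placeholders, not the fitted quantities of Fig 5A):

```python
import itertools
import math
import random

random.seed(0)
N = 10
# illustrative random couplings for a toy pairwise energy (not fitted to data)
J = [[random.gauss(0.0, 0.3) for _ in range(N)] for _ in range(N)]

def energy(s):
    return -sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

def V(E):
    # an assumed smooth increasing nonlinearity (softplus), stable for large E
    return math.log1p(math.exp(E)) if E < 30 else E

# enumerate all 2^N states of the model p(s) ∝ exp(-V(E(s)))
states = list(itertools.product((0, 1), repeat=N))
energies = [energy(s) for s in states]
weights = [math.exp(-V(E)) for E in energies]
Z = sum(weights)

# bin energies: rho counts states per bin, p_hat sums model probability per bin
delta = 0.2
bins = {}
for E, w in zip(energies, weights):
    b = math.floor(E / delta)
    cnt, mass = bins.get(b, (0, 0.0))
    bins[b] = (cnt + 1, mass + w / Z)

# Eq (12): log rho - log p_hat should equal V plus the constant log Z
offsets = []
for b, (cnt, mass) in bins.items():
    E_mid = (b + 0.5) * delta
    recovered = math.log(cnt) - math.log(mass)
    offsets.append(recovered - V(E_mid))

spread = max(offsets) - min(offsets)  # small: all offsets are ≈ log Z
```

The recovered values differ from the true nonlinearity only by $\log Z$ (up to the bin width), which is exactly the content of Eq ( 12 ); the deterministic seed is only there to make the sketch reproducible.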
To verify that this is the case for our neural population that has previously been reported to be critical , we compare in Fig 6A the nonlinearity inferred with the semiparametric pairwise model ( Fig 5A ) to the density of states estimated using a Wang and Landau Monte Carlo algorithm 40 for a sequence of subpopulations of increasing size ., As the population size increases , the nonlinearity indeed approaches the regime in which our prediction in Eq ( 14 ) holds ., This convergence is further quantified in Fig 6B , which shows the average squared distance between the density of states and the nonlinearity ., The average is taken over the range of observed energies ., The nonlinearities are only specified up to an additive constant , which we chose so as to minimize the squared distance between the density of states and the nonlinearity ., The link between global coupling and criticality is related to recent theoretical suggestions 28 , 29 , where global coupling between the neurons in the population emerges as a result of shared latent ( fluctuating ) variables that simultaneously act on extensive subsets of neurons ., In particular , Ref 28 theoretically analyzed models with a multivariate continuous latent variable h distributed according to some probability density q ( h ) , whose influence on the population is described by the conditional probability distribution, p_N ( s | h ) = e^( −∑_j h_j O_j^(N) ( s ) ) / Z_N ( h ) , ( 15 ), where Z_N ( h ) is a normalization constant , and O_j^(N) ( s ) are global quantities which sum over the whole population ., The authors showed that under mild conditions on the probability density q ( h ) of h , and the scaling of O_j^(N) ( s ) with N , the sequence of models, p_N ( s ) = ∫ q ( h ) p_N ( s | h ) dh , ( 16 ), is critical in the sense of Eq ( 4 ) ., If the latent variable is one-dimensional , i . e . h = h , then the models in Eq ( 16 ) have exactly the form of models in Eq ( 3 ) with E ( s; α ) = O ( s ) , i . e .
given a probability density q ( h ) of the latent variable , we can always find a nonlinearity V ( E ) such that, ( 1 / Z ( α ) ) e^( −V ( E ( s; α ) ) ) = ∫_0^∞ q ( h ) e^( −h E ( s; α ) ) / Z ( h; α ) dh ., ( 17 ), The reverse problem of finding a latent variable for a given function V ( E ) such that this equation is satisfied does not always have a solution ., The condition for this mapping to exist is that the function exp ( −V ( E ) ) is totally monotone 41 , which , among other things , requires that it is convex ., While our models allow for more general nonlinearities , we showed in Fig 5A that the inferred functions V ( E ) are concave , and so we expect this mapping to be at least approximately possible ( see below ) ., The mapping in Eq ( 17 ) is based on a Laplace transformation , a technique commonly used for example in the study of differential equations ., Laplace transformations are also often used in statistical physics , where they relate the partition function of a system to its density of states ., While the mathematics of Laplace transformations yields conditions on the function V ( E ) so that it is possible to map it to a latent variable ( i . e .
, exp ( −V ( E ) ) must be totally monotone ) , analytically constructing this mapping is possible only in very special cases ., We can gain a limited amount of intuition for this mapping by considering the case when the latent variable h is a narrow Gaussian with mean h_0 and variance σ² ., For small σ² , one can show that, V ( E ) ≈ h_0 E − σ² ( E − E_0 )² , ( 18 ), where E_0 is the average energy if σ² = 0 , and the approximation holds only in a small neighborhood of E_0 ( |E − E_0| ≪ σ ) ., This approximation shows that the curvature of V ( E ) is proportional to the size of the fluctuations of the latent variable , which , in turn , is expected to correlate with the amount of global coupling among neurons ., This relationship to global coupling can be understood from the right hand side of Eq ( 17 ) ., When the energy function is , for example , a weighted sum of individual neurons as in the semiparametric independent model of Eq ( 8 ) , then we can think of Eq ( 17 ) as a latent variable h ( perhaps reflecting the stimulus ) coupled to every neuron , and hence inducing a coupling across the whole population ., A non-neuroscience example is that of a scene with s representing the luminance in each pixel , and the latent h representing the lighting conditions which influence all the pixels simultaneously ., We used the right hand side of Eq ( 17 ) ( see Methods ) to infer the shapes of the probability densities of the latent variables which correspond to the nonlinearities in the semiparametric pairwise models learned from data ., These probability densities are shown in Fig 6C ., A notable difference to the formulation in Eq ( 16 ) is that the inferred latent variables scale with the population size; in particular , the inset to Fig 6C shows that the entropy of the inferred latent variable increases with the population size ., Entropy is a more appropriate measure of the “broadness” of a probability density than standard deviation when the density is multimodal
., Taken together with the results in Fig 4A , this suggests that global coupling is especially important for larger populations ., However , it is also possi | Introduction, Results, Discussion, Methods | Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations ., Recent studies have shown that the summed activity of all neurons strongly shapes the population response ., A separate recent finding has been that neural populations also exhibit criticality , an anomalously large dynamic range for the probabilities of different population activity patterns ., Motivated by these two observations , we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical ., These models consist of an energy function which parametrizes interactions between small groups of neurons , and an arbitrary positive , strictly increasing , and twice differentiable function which maps the energy of a population pattern to its probability ., We show that:, 1 ) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons;, 2 ) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity;, 3 ) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data ., Our method is independent of the underlying system’s state space; hence , it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality . 
| Populations of sensory neurons represent information about the outside environment in a collective fashion ., A salient property of this distributed neural code is criticality ., Yet most models used to date to analyze recordings from large neural populations do not take this observation explicitly into account ., Here we aim to bridge this gap by designing probabilistic models whose structure reflects the expectation that the population is close to critical ., We show that such a principled approach improves previously considered models , and we demonstrate a connection between our models and the presence of continuous latent variables , which is a recently proposed mechanism underlying criticality in many natural systems . | linguistics, social sciences, random variables, neuroscience, covariance, probability distribution, mathematics, statistics (mathematics), thermodynamics, entropy, animal cells, probability density, probability theory, physics, statistical models, cellular neuroscience, cell biology, neurons, biology and life sciences, cellular types, physical sciences, computational linguistics | null |
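To make the record above concrete, here is a minimal, self-contained sketch (with made-up parameters, not the fitted ones from the paper) of the model family it describes: a pairwise Ising-style energy E(s; α) passed through an increasing nonlinearity V, giving p(s) ∝ exp(−V(E(s; α))), evaluated by exhaustive enumeration for a small population.

```python
# Sketch of a semiparametric pairwise model p(s) = exp(-V(E(s)))/Z for small N.
# All parameter values are hypothetical; a fitted model would learn h, J and V.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 10
h = rng.normal(0.5, 0.2, size=N)                    # single-neuron fields
J = np.triu(rng.normal(0.0, 0.1, size=(N, N)), 1)   # pairwise couplings (i < j)

def energy(s):
    return float(h @ s + s @ J @ s)                 # pairwise energy E(s; alpha)

def V(E, curvature):
    # increasing map; concave for 0 < curvature < 1 (illustrative choice only)
    return E - curvature * E * E / (1.0 + abs(E))

states = np.array(list(itertools.product([0, 1], repeat=N)))
E = np.array([energy(s) for s in states])

spread = {}
for c in (0.0, 0.2):                 # c = 0 recovers the plain pairwise model
    logw = -V(E, c)
    logZ = np.logaddexp.reduce(logw)                # stable log partition sum
    p = np.exp(logw - logZ)                         # normalized probabilities
    spread[c] = float(np.std(np.log(p)))            # dynamic range of log p
```

With curvature zero this is exactly a pairwise maximum-entropy model, so the spread of log-probabilities equals the spread of the energies; a nonzero curvature changes that dynamic range, which is the quantity the criticality argument is about.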
2,198 | journal.pcbi.1003011 | 2,013 | Data-Driven Modeling of Src Control on the Mitochondrial Pathway of Apoptosis: Implication for Anticancer Therapy Optimization | Protein tyrosine kinases of the Src family are involved in multiple facets of cell physiology including survival , proliferation , motility and adhesion 1 ., Their deregulation has been described in numerous malignancies such as colorectal , breast , melanoma , prostate , lung or pancreatic cancers and is known to favor tumorigenesis and tumor progression 2–4 ., Modulation of apoptosis sensitivity by Src deregulation is more controversial ., We recently described that Src activation promotes resistance to the mitochondrial pathway of apoptosis in mouse and human cancer cell lines 5 ., The molecular mechanism underlying such resistance involved the accelerated degradation of the proapoptotic BH3-only protein Bik ., Indeed , in Src-transformed NIH 3T3 mouse fibroblasts , Bik was found to be phosphorylated by activated Erk1/2 , which was followed by Bik subsequent polyubiquitylation and proteasomal degradation 5 ., Thus in Src-transformed cells , Bik downregulation compromised Bax activation and mitochondrial outer membrane ( MOM ) permeabilization upon an apoptotic stress 5 ., That observation might be of importance since MOM permeabilization is the key step that commits cells to apoptosis ., Indeed , MOM permeabilization leads to the irreversible release of cytochrome c and other cytotoxic molecules from the mitochondrial inter-membrane space into the cytosol 6 , 7 ., Once released , cytochrome c induces the formation of the apoptosome complex , which triggers caspase activation , these molecules being the main executioners of the apoptotic program ., MOM permeabilization is triggered by the insertion and oligomerization of the pro-apoptotic effector Bax into the membrane 8–11 ., Antiapoptotic proteins such as Bcl-2 or Bcl-xL prevent this process , whereas pro-apoptotic BH3-only proteins contribute to 
Bax activation 6 , 11–16 ., Using western blotting and specific shRNAs , the respective contribution of the different Bcl-2 family members to the cell response triggered by various death-inducing agents was assessed in parental and Src-transformed NIH-3T3 fibroblasts 5 ., Experimentally and mathematically investigating the cell response to death-inducing agents might be of interest since it has long been postulated that restoration of apoptosis might be an effective way to selectively kill cancer cells ., The rationale of this assumption is that cancer cells need to counteract the pro-apoptotic effect of oncogenes such as Myc or E2F-1 that stimulate cell proliferation as well 17 ., Moreover , Src deregulation has specifically been associated with resistance to treatment in a number of cancers 18 , 19 ., Therefore , a critical clinical concern lies in the design of therapeutic strategies that would circumvent resistance to apoptosis of cells with deregulated Src activity ., To this end , several classes of therapeutic agents might a priori be considered ., Inhibitors of Src tyrosine kinases , such as dasatinib , are currently widely used in the clinic 20–23 ., Other anticancer therapeutic strategies aim at restoring apoptosis in cancer cells 24 ., In particular , inhibitors of antiapoptotic proteins such as ABT-737 or the Oblimersen Bcl-2 antisense oligodeoxyribonucleotide are currently evaluated in clinical trials 25–28 ., Apoptosis may also be restored by increasing the expression of pro-apoptotic proteins such as Bax , Bik or p53 29–32 ., Here we propose a systems biology approach for optimizing potential anticancer therapeutic strategies using parental and Src-transformed NIH 3T3 fibroblasts as a biological model ., To this end , molecular mathematical models of Bik kinetics and of the mitochondrial pathway of apoptosis were built and fitted to available experimental data ., They guided further experimental investigation in parental and Src-transformed cells
which allowed their refinement ., Then , those models were used to generate predictions which were validated by subsequent specifically-designed experiments ., Finally , we theoretically explored different drug combinations involving the kinase inhibitor staurosporine , Src inhibitors , and activators or inhibitors of the Bcl-2 protein family , in order to design optimal anticancer strategies for this biological system ., Optimal strategies were defined as those which maximized the efficacy on Src-transformed cells considered as cancer cells under the constraint of toxicity remaining under a tolerable threshold in parental cells ., We recently provided evidence that Bik , a BH3-only protein , is a key regulator of apoptosis in the considered biological system 5 ., Therefore we first built a mathematical model to investigate Bik kinetics in non-apoptotic conditions ., Bik concentration temporal variations were assumed to result from two processes: protein formation and protein polyubiquitylation , which eventually leads to its degradation ., Let us denote and the intracellular concentration of Bik and polyubiquitylated Bik proteins respectively , expressed in nM ., Bik protein was assumed to be synthesized at a constant rate in both Src-transformed and parental cells as suggested by similar Bik mRNA level in both cell types 5 ., Concerning Bik ubiquitylation , we considered that it occurred either spontaneously at the rate , or after Bik phosphorylation by activated Erk1/2 downstream of SRC activation , as demonstrated in Src-transformed fibroblasts ( 5 , Figure 1 ) ., In those cells , this prior phosphorylation increased Bik ubiquitylation rate and further proteasomal degradation ., This Src-dependent pathway was modeled by Michaelis-Menten kinetics with parameters and ., In the model , we assumed that spontaneous and Src-mediated ubiquitylation could occur in both transformed and parental cells ., Ubiquitin molecules were assumed to be in large excess compared to 
Bik amount ., Therefore ubiquitin concentration was considered as constant and implicitly included in and ., Poly-ubiquitylated molecules were then assumed to be degraded by the proteasome at a constant rate in both cell types ., was arbitrarily set to 1 as it does not influence kinetics , and only acts on ., The model of Bik kinetics can be written as follows: ( 1 ) ( 2 ) Parameters were then estimated for parental and Src-transformed cells by fitting experimental results on Bik protein degradation in both cell types ( 5 , reprinted with permission in Figure 2B ) ., We assumed that the spontaneous phosphorylation occurred at the same rate in parental and Src-transformed fibroblasts and therefore looked for a unique ., Parameters of Src-dependent Bik degradation were denoted and in parental cells and and in Src-transformed 3T3 cells ., Inhibition of Src kinase activity by herbimycin was experimentally monitored in Src-transformed cells ( Figure 2A , reprinted with permission from 5 ) ., Herbimycin exposure achieved a decrease of 98% in phosphorylated Y416 amount ., Therefore , we modeled herbimycin exposure as a decrease of 98% in values ., See Text S1 for details on the parameter estimation procedure ., The best-fit parameter value for the spontaneous ubiquitylation was ., Src-dependent ubiquitylation was predicted to be inactive in normal fibroblasts as , which was in agreement with experimental results 5 ., On the contrary , the Src pathway was predominant in transformed cells as and , which leads to ., The dynamical system 1–2 admits a unique steady state , where ., For parental cells , in which is equal to zero , the steady state becomes ., The Bik steady-state concentration in parental cells was assumed to be equal to 50 nM , which is in the physiological range of BH3-only protein intracellular levels 33–37 ., This allowed us to deduce ., We then computed the Bik steady-state concentration in Src-transformed cells , which was equal to nM ., Thus , the simulated ratio of Bik
concentration in Src-transformed cells over that in parental cells was equal to 0.18 , which is similar to the experimentally-observed value quantified to 0.2 ( Figure 3 , Table 1 ) ., This constitutes a partial validation of the model since the data of Figure 3 was not used in Bik kinetics model design and calibration ., In the following , these steady-state concentrations were used as Bik initial condition since cells were assumed to be in non-apoptotic conditions prior to the death stimulus ., We then investigated Bik kinetics in parental and Src-transformed NIH-3T3 cells in response to an apoptotic stress that consisted of an 8-hour-long exposure to staurosporine ( 2 μM ) ., As demonstrated by knockdown experiments ( Figure 2b in 5 ) , Bik was required for apoptosis induction ., Bik was present in non-transformed cells with no sign of apoptosis in normal conditions , which suggested either that Bik concentration was not large enough to trigger apoptosis in these conditions , or that Bik was activated upon apoptotic stress ., The first assumption to be mathematically investigated was that Bik protein amount might increase upon staurosporine treatment as a result of the turning-off of the degradation processes , Bik synthesis rate remaining unchanged under staurosporine treatment ., Thus , if the Bik ubiquitylation process is turned off in the model , only the formation term remains in equation 1 , which is now the same for parental and transformed cells , with different initial conditions ., This equation can be solved analytically: ( 3 ) where stands for Bik initial concentration , taken equal to the Bik steady-state concentration in parental and transformed fibroblasts ., We did not observe any significant apoptosis either in normal or Src-transformed cells in the first six hours of staurosporine treatment ( data not shown ) ., In non-transformed cells , setting t = 360 min in equation 3 gave , which meant that Bik concentration would only double in six hours if
this hypothesis was right ., This was tested by measuring Bik protein level during staurosporine exposure in parental cells ., However , no significant increase in Bik levels upon a 6 hour-long staurosporine treatment was observed , which ruled out that the induction of apoptosis could depend on Bik accumulation ( Figure 4 A ) ., Therefore , we investigated a second hypothesis that consisted of an activation of Bik upon apoptosis induction ., Such a possibility might rely on a release of Bik from a protein complex upon apoptotic stress , as observed with other BH3-only proteins such as Bad , Bim or Bmf 38 ., To investigate the likelihood of this hypothesis , we performed the immunostaining of endogenous Bik in parental NIH-3T3 cells upon staurosporine exposure ., Our data was in agreement with a relocation of Bik from its known location at the ER to the mitochondria within 2 h of treatment ( 39 , 40 , Figure 4 B ) ., This relocation might correspond to Bik release from a binding protein at the ER as previously observed 41 ., We modeled this relocation by the equations 4 and 5 , in which stands for Bik protein that had been activated possibly through this relocation and represents inactive Bik molecules ., This translocation occurred at the rate ., Colocalization between Bik fluorescence and MitoTracker staining showed that 45 ± 13% of Bik molecules were located at the mitochondria within 2 h of treatment , which led to the estimated value ( Figure 4 B ) ., We then investigated the mitochondrial pathway of apoptosis in NIH-3T3 parental and Src-transformed cells ., We only considered the Bcl-2 members that were experimentally detected in this biological model 5 ., The only pro-apoptotic multidomain effector was Bax , whereas the multidomain antiapoptotic protein family was represented by Bcl-2 , Bcl-xL and Mcl-1 5 ., Five BH3-only proteins were present: three BH3-only activators ( i . e .
able to directly bind and activate Bax ) Puma , Bim and tBid and two BH3-only sensitizers ( i . e . able to bind Bcl-2 and related apoptosis inhibitors , but unable to bind and activate Bax ) Bad and Bik ., The respective role of present BH3-only proteins in apoptosis induction was assessed by a shRNA-mediated approach ., Bim , which was expressed at very low level , could be neglected in the onset of apoptosis , since its downregulation induced no significant increase in apoptosis resistance upon staurosporine , thapsigargin or etoposide ., In contrast , PUMA had a prominent role for apoptosis induced by genotoxic stresses ( UV or etoposide ) but displayed no significant role in staurosporine- and thapsigargin-induced apoptosis 5 ., As we focused here on staurosporine-induced apoptosis , the only BH3-only activator that we considered was tBid ., Concerning BH3-only sensitizers , Bad could be neglected as its silencing by shRNA did not significantly modify cell response to staurosporine ., Therefore the only sensitizer to be considered was Bik ., Bax , Bik and tBid were described to bind all the antiapoptotic proteins expressed in our biological model , namely Bcl2 , Bcl-xL and Mcl-1 ., Thus , for the sake of simplicity , we denoted by the cumulative concentration of those three antiapoptotic proteins ., We then modeled Bax activation ., In non-apoptotic conditions , Bax spontaneously adopts a closed 3D-conformation that does not bind antiapoptotic proteins 10 ., This conformation was denoted ., During apoptosis , Bax transforms into an opened 3D-conformation ( ) and inserts strongly into the MOM ., molecules can be inhibited by antiapoptotic proteins which trap them into dimers ., Moreover , they may spontaneously transform back into their closed conformation 42 ., If they are not inhibited , molecules may oligomerize into molecules and create pores in the MOM which correlates with the release into the cytosol of apoptogenic factors , including cytochrome C 8–10 
., We considered that was inefficient at binding oligomerized Bax 7 ., In the model , Bax oligomerization happens either by the oligomerization of two molecules or by a much faster autocatalytic process in which a molecule recruits a molecule to create two molecules ., Those two processes occurred at the respective rates and , which were chosen such that , to account for the preponderance of the autocatalytic pathway ., Bax activation from into isoforms was assumed to be catalyzed by the BH3-only activator ., We assumed that this reaction occurred in a “kiss and run” manner and therefore follows Michaelis-Menten kinetics ., resulted from activation by truncation , which occurred at the rate 43 ., The BH3-only activator can also be inhibited by , which traps it into complexes ., Those complexes may be dissociated by active Bik molecules , which bind to and release 13 ., Finally , we also considered that antiapoptotic proteins directly inhibit active Bik molecules and associate into complexes ., The above-mentioned chemical reactions that occur spontaneously were assumed to follow the law of mass action ., All protein concentrations are expressed in nM in the mathematical model ., This mathematical model is recapitulated in Figure 5 and Table 2 ., It can be written as equations ( 4 ) – ( 14 ) ., Bik total protein amount was assumed to be constant during apoptosis as experimentally demonstrated ( Figure 4 A ) ., We also assumed that , and total amounts remained constant following the death stimulus ., However , the apoptotic stress may induce Bax transcription and repress that of Bcl2 , in particular through the activation of p53 44 ., Four conservation laws hold: only seven of the eleven equations of the mathematical model 4–14 need to be solved , as the four remaining variables can be computed using those conservation laws ., We subsequently modeled the cell population behavior ., Let us denote by the percentage of surviving cells at
time t ., No cell division was assumed to occur in presence of staurosporine as the very first effect of most cytotoxic drugs consists in stopping the cell cycle 45 ., Natural cell death was neglected as almost no apoptosis was observed in either parental or Src-transformed cells in the absence of death stimuli 5 ., We considered that apoptosis is irreversibly activated when concentration reaches the threshold , which corresponds to the minimal amount of oligomerized Bax molecules required to trigger the cytochrome C release into the cytosol ., This assumption was modeled in equation 15 by an S-shape function which also ensures that the death rate does not grow to infinity ., Below is the equation for the percentage of surviving cells: ( 15 ) Parameters , a and were assumed to be the same for the two cell populations ., At the initial time just before the apoptotic stress , cells were assumed to be in steady-state conditions ., The initial percentage of surviving cells is ., Bik initial concentrations were set to steady-state values computed using equations 1–2 ., Moreover , we assumed that Bik was entirely under its inactive form so that: , , ., All Bax molecules are assumed to be inactive: , and ., All existing molecules are trapped in complexes with : and ., Initial protein concentrations of Bid and Bcl2 can be computed using the conservation laws: and ., For the sake of simplicity , we considered that no complexes were present at the initial time ( ) , as dimers do not play any part in the overall dynamics since we assumed that they do not dissociate ., As previously stated , the considered apoptotic stress consists of an 8-hour-long exposure to staurosporine ( 2 μM ) which starts at time t = 0 ., It triggers two molecular events: activation into and formation representing truncation into ., Mathematically , and are set to non-zero values at the initial time ., Parameters of this model of mitochondrial apoptosis were estimated by fitting experimental data in
parental and Src-transformed cells from 5 and integrating biological results from literature ., First , we assessed quantitative molar values of considered Bcl-2 family proteins in non-apoptotic conditions as follows ., We set Bax total concentration in Src-transformed cells to 100 nM according to 46 in which the authors stated that this was a physiological level in tumor cells ., This value is also in agreement with concentration ranges found in the literature 33 , 34 , 36 , 47 , 48 ., Then , in 46 , they found that anti-apoptotic total concentration had to be 6 times higher than that of Bax in order to prevent apoptosis ., Therefore , we set in Src-transformed cells ., Concerning Bid total concentration , we assumed which is in agreement with experimental results from the literature 33 , 34 , 36 , 47 ., Finally , tBid initial concentration was set to 1 nM since this band was hardly detectable by western-blot ( Figure 3 ) ., Moreover , this value was consistent with previous modeling results 47 ., We then computed protein ratios between parental and Src-transformed cells using immunoblotting data of Figure 3 ., We experimentally determined that there was a 9-fold higher amount of proteins in the cytosol fraction compared to the mitochondria compartment which allowed us to compute protein ratios of total intracellular quantities ( Table 1 ) ., As previously stated , Bik protein amount was reduced in Src-transformed cells by a factor of 0 . 2 compared to parental fibroblasts ( Figure 3 ) ., This dramatic decrease was the result of the Src-dependent activation of Erk1/2 kinases , leading to Bik phosphorylation , polyubiquitylation and subsequent degradation by the proteasome 5 ., Bax steady-state level in non-apoptotic conditions was increased by a factor of 2 . 1 in Src-transformed cells compared to normal ones and that of Bid was decreased by a factor of 0 . 
77 ., Concerning antiapoptotic molecules , the sum of Bcl2 , Bcl-xL and Mcl-1 quantities was slightly increased in Src-transformed cells by a factor of 1.1 compared to parental ones ., Those protein ratios were used to compute molar quantities of considered Bcl-2 family protein total concentrations ( Table 1 ) ., Then , we estimated the apoptotic threshold as follows ., Quantification of Figure S1d in 5 showed that 38% of BAX molecules at the mitochondria were activated during apoptosis ., Previously-described quantification of Figure 3 showed that 33% of BAX total amount were located at the mitochondria , the remaining part being in the cytosol ., Therefore , the percentage of activated BAX was set to 33% × 38% / 100 ≈ 13% ., This percentage is in agreement with previous experimental data which suggests that approximately 10–20% of Bax total amount is actually activated during apoptosis 46 ., The high intensity of the bands corresponding to Bcl-xL expression in Figure 3 suggested that it might be the predominant antiapoptotic protein in our biological model ., Dissociation constants between Bcl-xL and respectively Bik , Bid and Bax were experimentally found to be equal to nM , nM and nM 49 , 50 ., Therefore , we set and and only estimated ., At this point , 10 kinetic parameters still needed to be estimated ., In order to determine those 10 parameters , we fitted experimental data from Figure 6 under constraints inferred from experimental results ., We used the three experimental data points of Figure 6 corresponding to exposure to staurosporine as a single agent or combined with herbimycin ., We modeled the administration of staurosporine after an exposure to the Src tyrosine kinase inhibitor herbimycin as follows ., We assumed that herbimycin was administered before staurosporine exposure such that the system had time to reach steady state ., As previously described , herbimycin exposure was modeled by decreasing ( the maximal velocity of Src-induced Bik
ubiquitylation ) by 98% of its original value ., Then , we set constraints on state variables as follows ., We assumed that did not decrease below 20% ( i . e . the apoptotic threshold ) of its initial value within 6 h of staurosporine exposure , as approximately 20% of Bax total quantity is activated during apoptosis 9 , 46 ., Moreover , we ensured that reached the apoptotic threshold in parental cells after 6 to 8 h of exposure to staurosporine , as biological experiments showed ., Moreover , as Bax oligomerization was assumed to be an autocatalytic process , we expected to obtain ., Therefore , in the parameter estimation procedure , we set initial search values for and such that ., Finally , molecule association rates were searched between and , which is a realistic range with respect to the diffusion limit 48 ., Estimated parameter values are shown in Table 2 ., The data-fitted mathematical model allowed the investigation of the dynamical molecular response to staurosporine exposure ( Figure 7 ) ., As expected , the higher Bik concentration in normal fibroblasts led to a higher concentration of and of free compared to transformed cells ., molecules then activated into , which oligomerized until reaching the apoptotic threshold in parental cells ., On the contrary , could efficiently be sequestered in complexes with antiapoptotic proteins in Src-transformed cells as a result of the lower level of Bik protein ., This perfectly fit the described function of Bik as a sensitizer 51 ., Concerning co-administration of staurosporine and herbimycin , the model predicted that this drug combination circumvents the resistance of the cancer cell population , in which 99% of cells are apoptotic after 8 hours of exposure to staurosporine ( Figure 6 ) ., This model behavior was in agreement with experimental data which showed 98% of apoptotic cells in the Src-transformed population ., Moreover , the model predicted that an exposure to staurosporine as a single agent or combined with
herbimycin led to the same activity of 80% of apoptotic cells in the parental fibroblast population ., We intended to determine optimal therapeutic strategies for our particular biological system , in which parental and Src-transformed NIH-3T3 fibroblasts stand for healthy and cancer cells respectively ., In the following , both cell populations are exposed simultaneously to the same drugs , mimicking the in vivo situation in which healthy and tumor tissues are a priori exposed to the same blood concentrations of chemotherapy agents ., From a numerical point of view , identical parameter changes were applied to normal and cancer cells ., First , we investigated the combination of staurosporine with ABT-737 , a competitive inhibitor of Bcl-2 and Bcl-xL , which were the main antiapoptotic proteins in our cellular model ., ABT-737 inhibits free antiapoptotic proteins but also dissociates complexes of anti- and pro-apoptotic proteins ., As for herbimycin , we assumed that ABT-737 was administered before staurosporine such that the system had time to reach steady state ., ABT-737 pre-incubation was thus modeled by decreasing Bcl-2 total amount in proportion to ABT-737 concentration and by setting and at the initial time ., Interestingly , ABT-737 exposure in the absence of staurosporine ( i . e .
) did not result in cell death induction for any dose of ABT-737 in the mathematical model , as experimentally demonstrated 5 ., Indeed , in the absence of staurosporine , Bid was not activated into tBid and the low quantity of tBid present in the cells at steady state was not sufficient to trigger Bax oligomerization , even when ABT-737 inhibited all anti-apoptotic proteins ., This confirmed that the mathematical model described correctly this cell model that does not behave as a “primed for death model” in which inhibition of anti-death proteins results in death , even in the absence of apoptosis induction ., As a reminder , in the primed for death situation , incubation with ABT-737 led to cell death as a consequence of the release of the BH3-only pro-apoptotic proteins that were therefore able to activate Bax ., The main difference between the primed for death situation and our model is that apoptosis resistance in the primed for death model primary comes from the overexpression of anti-apoptotic proteins such as Bcl-xL or Bcl2 that are efficiently inhibited by ABT-737 whereas here it comes from the decrease of a pro-death protein in the Src-transformed model ., The combination of staurosporine and ABT-737 at any concentration , i . e . 
for any decrease in Bcl2 total protein amount , was predicted by the model to induce much more apoptosis in parental cells compared to Src-transformed cells and thus to fail in circumventing cancer cells resistance ( Figure 8 ) ., To experimentally confirm this model prediction , we pre-incubated parental and Src-transformed cells with ABT-737 prior to staurosporine exposure ., The resulting death-inducing effect on Src-transformed cells was significantly increased compared to staurosporine alone ( Figure 6 ) ., However , as anticipated by the model , this drug combination resulted in an extremely high toxicity of 99% of apoptotic cells in the parental fibroblasts population ( Figure 6 ) ., Those data points were reproduced by the calibrated mathematical model for a predictive decrease of 182 nM in Bcl2 total concentrations in both cell types ., After that , we looked for theoretically optimal therapeutic strategies by applying optimization procedures on the calibrated model of the mitochondrial apoptosis ., Optimal strategies were defined as those which maximized efficacy in cancer cells under the toxicity constraint that less than 1% of healthy cells die during drug exposure ., We investigated drug combinations that consisted of the exposure to staurosporine after treatment with Src inhibitors , or up- or down-regulators of BCL-2 family proteins ., Pre-incubation with inhibitors or activators aimed at modifying the equilibrium of the biological system before exposure to the cytotoxic drug ., Src inhibition was simulated by a decrease in value whereas up- or down-regulation of Bcl-2 family proteins were modeled by modifying the total concentration of the targeted proteins ., The theoretically-optimal drug combination would consist of administering staurosporine combined with inhibitors of Src , Bax and Bcl2 , together with a upregulator ., The concentration of Bax inhibitor should be set such that Bax total concentration decreases below the apoptotic threshold in 
healthy cells thus protecting them from apoptosis ., As Bax total amount was higher in cancer cells , it would remain high enough to allow these cells to undergo apoptosis ., Once healthy cells are sheltered from apoptosis , Bcl2 amount could be decreased , using for instance ABT-737 , and amount increased at the same time without risking any severe toxicity ., As expected , the optimal therapeutic strategy also included the suppression of the Src-dependent phosphorylation of Bik in cancer cells , using for instance herbimycin ., This drug combination led to 99% of apoptotic cells in the cancer cell population and less than 1% in the parental one where Bax was hardly present ( Figure 9 , Text S1 ) ., This theoretically optimal strategy involved the administration of a cytotoxic agent combined with four other chemicals , which may not be realistic in the perspective of clinical application ., Therefore we hierarchically ranked the considered therapeutic agents by searching for optimal strategies consisting in the combination of staurosporine with only one or two agents ., Strategies which satisfied the tolerability constraint ( i . e . 
less than 1% of apoptotic parental cells ) and reached an efficacy value of 99% of apoptotic cells all involved Bax downregulation in addition to a second agent among Bcl2 downregulator , upregulator and Src inhibitor ( See Text S1 for more details ) ., Of note , isolated decrease of Bax total amount fulfilled the tolerability constraint but resulted in less than 1% of apoptotic cancer cells ., Finally , we experimentally validated feasibility of this counterintuitive theoretical strategy ., We selected two siRNAs that fully downregulated Bax in parental cells but not in Src-transformed ones ( Figure 10 A ) ., Bax knockdown protected parental cells from treatment by staurosporine and ABT737 or staurosporine and herbimycin but not Src-transformed cells ( Figure 10 B ) ., Therefore by downregulating Bax in our biological model , we were capable of selectively killing Src-transformed cells ., A combined mathematical and experimental approach was undertaken to study the mitochondrial pathway of apoptosis in parental and Src-transformed NIH-3T3 cells ., First , a mathematical model for Bik kinetics in normal and apoptotic conditions was built ., It took into account Bik ubiquitylation and further proteasomal degradation that Src-dependent Bik phosphorylation stimulated in Src-transformed cells ., Then , we designed a mathematical model of the mitochondrial pathway of apoptosis which only involved the proteins that participated in apoptosis induction in the studied biological model ., Interestingly , this mathematical model was quite simple , with only one effector , Bax , two BH3-only proteins , Bik ( a sensitizer ) and tBid ( a direct Bax activator ) , and a pool of antiapoptotic proteins which were all described as behaving identically toward Bax , Bik and tBid 38 ., Several published works propose mathematical modeling of apoptosis ., Some of them model all pathways to apoptosis from the death stimulus to the actual cell death 52–54 , other focus on the caspase 
cascade leading to apoptosis 55. Molecular modeling of the mitochondrial pathway was achieved in several works 47, 48, 56–59. Those models being conceived to address other biological issues, we had to build a new mathematical model that was tailored to our particular problem and aimed at optimizing anticancer therapies in the specific case of Src transformation. Exploring Bik kinetics upon apoptosis induction led to the interesting prediction that the inhibition of Bik degradation might not allow its accumulation above a threshold that would induce apoptosis in the experimentally demonstrated time range. This was validated by immunoblotting, which established that Bik concentration was not changed upon apoptosis induction by staurosporine. Therefore, we looked for another explanation that might support these observations | Introduction, Results, Discussion, Materials and Methods | Src tyrosine kinases are deregulated in numerous cancers and may favor tumorigenesis and tumor progression. We previously described that Src activation in NIH-3T3 mouse fibroblasts promoted cell resistance to apoptosis. Indeed, Src was found to accelerate the degradation of the pro-apoptotic BH3-only protein Bik and compromised Bax activation as well as subsequent mitochondrial outer membrane permeabilization. The present study undertook a systems biomedicine approach to design optimal anticancer therapeutic strategies using Src-transformed and parental fibroblasts as a biological model. First, a mathematical model of Bik kinetics was designed and fitted to biological data. It guided further experimental investigation that showed that Bik total amount remained constant during staurosporine exposure, and suggested that Bik protein might undergo activation to induce apoptosis. Then, a mathematical model of the mitochondrial pathway of apoptosis was designed and fitted to experimental results. It showed that Src inhibitors could circumvent resistance to apoptosis in Src-transformed cells but gave no specific advantage to parental cells. In addition, it predicted that inhibitors of Bcl-2 antiapoptotic proteins such as ABT-737 should not be used in this biological system, in which apoptosis resistance relied on the deficiency of an apoptosis accelerator but not on the overexpression of an apoptosis inhibitor, which was experimentally verified. Finally, we designed theoretically optimal therapeutic strategies using the data-calibrated model. All of them relied on the observed Bax overexpression in Src-transformed cells compared to parental fibroblasts. Indeed, they all involved Bax downregulation such that Bax levels would still be high enough to induce apoptosis in Src-transformed cells but not in parental ones. Efficacy of this counterintuitive therapeutic strategy was further experimentally validated. Thus, the use of Bax inhibitors might be an unexpected way to specifically target cancer cells with deregulated Src tyrosine kinase activity.
| Personalizing medicine on a molecular basis has proven its clinical benefits. The molecular study of the patient's tumor and healthy tissues allowed the identification of determinant mutations and the subsequent optimization of healthy and cancer cells' specific response to treatments. Here, we propose a combined mathematical and experimental approach for the design of optimal therapeutic strategies tailored to the patient's molecular profile. As an in vitro proof of concept, we used parental and Src-transformed NIH-3T3 fibroblasts as a biological model. Experimental study at a molecular level of those two cell populations demonstrated differences in the gene expression of key controllers of the mitochondrial pathway of apoptosis, thus suggesting potential therapeutic targets. Molecular mathematical models were built and fitted to existing experimental data. They guided further experimental investigation of the kinetics of the mitochondrial pathway of apoptosis, which allowed their refinement. Finally, optimization procedures were applied to those data-calibrated models to determine theoretically optimal therapeutic strategies that would maximize the anticancer efficacy on Src-transformed cells under the constraint of a maximal allowed toxicity on parental cells. | oncology, systems biology, cell death, medicine, mathematics, theoretical biology, applied mathematics, cancer treatment, chemotherapy and drug treatment, signaling networks, biology, molecular cell biology, computational biology | null |
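The autocatalytic Bax-oligomerization assumption described in this row's sections text — oligomerization accelerating with the amount already oligomerized, until an apoptotic threshold of roughly 20% of total Bax is reached — can be illustrated with a minimal logistic sketch. All rate constants and initial values below are hypothetical placeholders, not the paper's estimated parameters (those are in its Table 2):

```python
def time_to_threshold(bax_total, oligomer0, k_auto, frac=0.2, dt=0.01, t_max=24.0):
    """Euler integration of the autocatalytic rate law dO/dt = k_auto * O * (bax_total - O).

    Returns the time (hours) at which oligomerized Bax first reaches
    frac * bax_total (the apoptotic threshold), or t_max if it is never reached.
    All parameter values are illustrative, not fitted.
    """
    o, t = oligomer0, 0.0
    while o < frac * bax_total and t < t_max:
        o += dt * k_auto * o * (bax_total - o)
        t += dt
    return t
```

With a larger effective autocatalytic rate — a stand-in for the higher free Bik and tBid levels of parental cells — the threshold is crossed earlier than with a damped rate mimicking Src-transformed cells, qualitatively matching the described difference in apoptosis onset.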
0 | journal.pcbi.1000510 | 2009 | Predicting the Evolution of Sex on Complex Fitness Landscapes | Sexual reproduction is widespread among multi-cellular organisms 1. However, the ubiquity of sex in the natural world is in stark contrast to its perceived costs, such as the recombination load 2 or the two-fold cost of producing males 3, 4. Given these disadvantages it is puzzling that sexual reproduction has evolved and is maintained so commonly in nature. The “paradox of sex” has been one of the central questions in evolutionary biology and a large number of theories have been proposed to explain the evolution and maintenance of sexual reproduction 5. Currently, the most prominent theories include (i) the Hill-Robertson effect 6–8, (ii) Muller's ratchet 9, (iii) the Red Queen hypothesis 10, 11, and (iv) the Mutational Deterministic hypothesis 12, 13. While originally described in various different ways, the underlying benefit of sex can always be related to the role of recombination in breaking up detrimental statistical associations between alleles at different loci in the genome. What fundamentally differentiates the theories is the proposed cause of these statistical associations, assigned to either the interactions between drift and selection (Fisher-Muller effect, Muller's ratchet, and Hill-Robertson effect) or gene interactions and epistatic effects (Red Queen hypothesis and Mutational Deterministic hypothesis). The present list of hypotheses is certainly not exhaustive, with new ones continuously being proposed, complementing or replacing the existing ones 14. However, it is not new hypotheses that are most needed, but the real-world evidence that allows us to distinguish between them. The major question that still remains is whether the assumptions and requirements of different theories are fulfilled in the natural world. Accordingly, there has been considerable effort to experimentally test these
assumptions, mainly for the epistasis-based theories (reviewed in 15–17). However, an even more basic and crucial problem underlies all work on the evolution of sex: how does one choose, measure, and interpret appropriate population properties that relate to different theories 17–19. The difficulty stems from the often large divide between theoretical and experimental research: theories are frequently formulated as mathematical models and rely on simplistic fitness landscapes or small genome size (e.g. two-locus, two-allele models) 13, 20–25. As a result, it may be unclear how a property established based on these simplified assumptions relates to actual properties of natural populations. In this study we attempt to bridge the gap between the theoretical and experimental work and to identify which population measures are predictive of the evolution of sexual reproduction by simulating the evolution of both sexual and asexual populations on fitness landscapes with different degrees of complexity and epistasis. The measures we use are the change of mean fitness, of additive genetic variance, or of variance in Hamming distance, as well as four epistasis-based measures: physiological, population, mean pairwise, and weighted mean pairwise epistasis. While this certainly is not an exhaustive list, we took care to include major quantities previously considered in the theoretical and experimental literature (e.g. 26–28). With some exceptions 29–32, earlier work generally focused on smooth, single-peaked landscapes, while here we also use random landscapes and NK landscapes (random landscapes with tunable ruggedness). Some studies of more complex rugged landscapes tested whether they would select for sex but have not found a simple and unique answer, even in models with only two-dimensional epistasis 33, 34. A recent paper, which uniquely combines experimental and theoretical approaches and simulates the evolution of sex on empirical landscapes, also finds that landscape properties greatly affect the outcome of evolution, sometimes selecting for but more often against sex 35. However, what specifically distinguishes our study is the goal of not only determining when sex evolves but also of quantifying our ability to detect and predict such an outcome in scenarios where we know how the evolution proceeds. Whether the more complex landscapes we are using here are indeed also more biologically realistic is open to debate, as currently little is known about the shape and properties of real fitness landscapes (for an exception see for example 35, 36). Our goal is to move the research focus away from the simple landscapes mostly investigated so far to landscapes with various higher degrees of complexity and epistasis, and to probe our general understanding of the evolution of sexual reproduction on more complex fitness landscapes. Notably, we find that some of the measures routinely used in the evolution of sex literature perform poorly at predicting whether sex evolves on complex landscapes. Moreover, we find that genetic neutrality lowers the predictive power of those measures that are typically robust across different landscape types, but not of those measures that perform well only on simple landscapes. The difficulty of predicting sex even under the ideal conditions of computer simulations, where in principle any detail of a population
can be measured with perfect accuracy, may be somewhat sobering for experimentalists working on the evolution of sex. We hope, however, that this study will evoke interest among theoreticians to tackle the challenge and develop more reliable predictors of sex that experimentalists can use to study the evolution of sex in natural populations. We investigated the evolution of sex in simulations on three types of fitness landscapes with varying complexity (smooth, random and NK landscapes) and used seven population genetic quantities (ΔVarHD, ΔVaradd, ΔMeanfit, Ephys, Epop, EMP, and EWP; Table 1) as predictors of change in frequency of the recombination allele (see Methods for more details). We calculated predictor accuracy (the sum of true positives and true negatives divided by the total number of tests) and used it to assess their quality on 110 smooth landscapes with varying selection coefficients and epistasis, 100 random landscapes, and 100 NK landscapes each for K = 0, …, 5. All landscapes are based on 6 biallelic loci and they were generated such that an equal number of landscapes of each type select for versus against sex in deterministic simulations with infinite population size. Hence, random prediction by coin flipping is expected to have an accuracy of 0.5. Figure 1 shows the accuracy of the predictors for the different landscape types. Increasing levels of blue indicate greater accuracy of prediction. For the simulations with infinite population size (deterministic simulations) we ran a single competition between sexual and asexual populations to assess whether sex was selected for. For simulations with finite population size (stochastic simulations), we ran 100 simulations of the competition phase and assessed whether the predictor accurately predicts the evolution of sex in the majority of these simulations. Focusing on the top left panel, we find that for deterministic simulations most predictors are only highly accurate in predicting evolutionary outcomes for the smooth landscapes. The exception is the poor performance of ΔMeanfit, which is not surprising, as theory has shown that for populations in mutation-selection balance ΔMeanfit is typically negative 2. According to our use of ΔMeanfit as a predictor, it always predicts no selection for sex when negative and thus is correct in 50% of cases, due to the way the landscapes were constructed. For the NK0 landscapes, all predictors perform poorly, because such NK landscapes have no epistasis by definition (see Methods). For infinite population size, theory has established that in absence of epistasis there is no selection for or against sex. Indeed, in our simulations the increase or decrease in the frequency of sexual individuals is generally so small (of order 10⁻¹⁵ and smaller) that any change in frequency can be attributed to issues of numerical precision. Generally, the accuracy of most predictors is much weaker for complex landscapes (NK and random landscapes) than for the simpler, smooth landscapes. The predictors that have highest accuracy across different landscape types are ΔVarHD and Epop. To test whether combinations of the predictors could increase the accuracy of prediction of the evolution of sex we
plot for each landscape the value of the predictors ΔVarHD , ΔVaradd and ΔMeanfit against each other and color code whether the number of sexual individuals increased ( red ) or decreased ( blue ) during deterministic competition phase ( see Figure 2 ) ., If the blue and red points are best separated by a vertical or a horizontal line , then we conclude that little can be gained by combining two predictors ., If , however , the points can be separated by a different linear ( or more complex ) function of the two predictors , then combining these predictors would indeed lead to an improved prediction ., Figure 2 shows the corresponding plots for the smooth , the random , and the NK2 landscapes ., For the smooth landscapes the criterion ΔVarHD>0 or ΔVaradd>0 are both equally good in separating cases where sex evolved from those where it did not ., As already shown in Figure 1 , ΔVarHD is generally a more reliable predictor of the evolution of sex than ΔVaradd in the more complex random or NK landscapes ., Epistasis-based theories suggest that the selection for sex is related to a detrimental short-term effect ( reduction in mean fitness ) and a possibly beneficial long-term effect ( increase in additive genetic variance ) 28 ., The plots of ΔVaradd against ΔMeanfit , however , do not indicate that combining them would allow a more reliable prediction of the evolution of sex ., Generally , the plots show that blue and red points either tend to overlap ( in the more complex landscapes ) or can be well separated using horizontal or vertical lines ( in the smooth landscapes ) such that combining predictors will not allow to substantially increase the accuracy of prediction ., This is also the case for all other landscapes and all other pairwise combinations of predictors ( data not shown ) ., It is possible that some of the effect described in 28 and expected here are too small to be detected with the level of replication in our study ., However , as the level of 
replication used in this computational study goes way beyond what can be realistically achieved in experimental settings we expect that these effects would also not be detected in experimental studies ., We also used a linear and quadratic discriminant analysis to construct functions to predict the outcome of competitions between the two modes of reproduction ., For these purposes , half of the data set was used for training and the other half for testing of the discriminant functions , and the procedure was repeated separately for each of the three population sizes ( 1 , 000 , 10 , 000 , and 100 , 000 ) and the deterministic case ., In no case did these methods improve the accuracy of predictions ( data not shown ) ., While there certainly are other , potentially more sophisticated techniques that could be used here , our analysis indicates that there may not be much additional information in our metrics that could be extracted and used to increase the accuracy of the predictions ., All predictors performed much worse for simulations with finite population size ( Figure 1 ) , most likely because the selection coefficient for sex is weak 19 , 20 ., To further examine the effect of finite population size on the evolution of sex on different landscape types we analyzed 100 independent simulations of the competition phase starting from the genotype frequencies obtained from the burn-in phase on each landscape ., Figure 3 shows the fraction of cases in which the frequency of sexual individuals increased for three population sizes ( 1 , 000 , 10 , 000 , and 100 , 000 ) , plotted separately for those landscapes in which frequency of the recombination modifier increased or decreased in deterministic simulations ., For almost all landscapes the fraction of cases in which sex evolves is close to 50% , indicating that selection for sexual reproduction is indeed extremely weak , and can thus easily be overwhelmed by stochastic effects ( in contrast to simulations with 
infinite populations where selection coefficients of any size will always produce a consistent observable effect ) ., As a consequence , even for relatively large population sizes the outcome of the competition between sexual and asexual populations is largely determined by drift ., Such weak selection may in part due to the small number of loci used for these simulations and stochastic simulations with larger genomes have indeed been shown to result in stronger selection for or against sex 37 , 38 ., However , accurate deterministic simulations are computationally not feasible for large genome sizes , because of the need to account for the frequency of all possible genotypes in deterministic simulations ( see Supporting Information ( Text S1 ) for more details ) ., According to the Hill-Robertson effect ( HRE ) 8 , 21 selection for recombination or sex may be stronger in populations of limited size , because in such populations the interplay between drift and selection can generate negative linkage disequilibria , which in turn select for increased sexual reproduction ., The strength of HRE vanishes for very small populations and for populations of infinite size 21 ., In an intermediate range of population sizes , the HRE increases with increasing number of loci ( as does the range of population sizes in which the effect can be observed ) 38 and for large genome size it can be strong enough to override the effect of weak epistasis 37 ., In our simulations , however , HRE is weak , as is evidenced by the fact that , in the NK0 landscape , which by definition have no epistasis , the fraction of runs in which sex evolves is only very marginally above 50% ( Figure 3 ) ., Our results indicate that for finite population size the predictors generally perform poorly ., Of course this does not imply that they could not be better than a simple coin toss ., However , the results suggest that these predictors will likely be of limited use , as any experiment will have 
difficulties to reach even the replicate number that we have used to generate Figure 1 ., We also examined additional fitness landscapes , characterized by increased neutrality ( for full details and figures see Text S1 ) ., We found that the allelic diversity at neutral loci both decreases the accuracy and generates a systematic bias in the previously best performing predictors , Epop and ΔVarHD ., In contrast , other predictors investigated here , ΔVaradd , ΔMeanfit , Ephys , EMP , and EWP are not affected by including neutral loci , but still have poor accuracy of prediction on more complex fitness landscapes ., The central message of our study is that the prediction of the evolution of sex is difficult for complex fitness landscapes , even in the idealized world of computer simulations where in principle one can measure any detail of a given population and fitness landscape ., Here we put the emphasis on predictors that are experimentally measurable and are based on conditions for the evolution of sex established in the population genetic literature using simple fitness landscapes ., We have however included EMP and EWP , two predictors which would be more difficult to measure experimentally , but are based on the most fundamental and general theoretical treatment of the evolution of sex 28 ., Of course , while our choice of predictors , landscapes and selection regimes is comprehensive , we are aware that it can never be exhaustive or complete – there will always be other options to try out and test ., Future work will have to focus on identifying more reliable predictors of the evolution of sex that can be used in conjunction with experimental data ., Additionally , a better characterization of properties of natural fitness landscapes is badly needed to improve our understanding of the forces selecting for the evolution of sex ., As it stands , ΔVarHD , our best candidate for a predictor of the evolution of sex , has nevertheless important shortcomings ., In 
particular, it never reaches high levels of accuracy on many of the landscapes. Still, ΔVarHD at least suggests a potential direction for future research: a focus on predictors that would take advantage of the rapidly increasing number of fully or partially sequenced genomes and allow us to determine the advantage of sex in large numbers of taxa, bringing us closer to fully understanding the evolution of sex. All simulations of the evolution of a haploid population on a given fitness landscape are divided into a “burn-in” and a “competition” phase. In the burn-in phase an asexually reproducing population is allowed to equilibrate on the landscape starting from random initial genotype frequencies. In the competition phase we determine whether the frequency of an allele coding for increased recombination increases in the population. The burn-in phase consists of repeated cycles of mutation and selection. Genotype frequencies after selection are given by the product of their frequency and relative fitness before selection. In all simulations mutations occur independently at each locus with a mutation rate μ = 0.01 per replication cycle. This high mutation rate was chosen in order to obtain sufficient levels of genetic diversity. However, we also tested mutation rates up to 10 times lower and found no qualitative differences in the results (data not shown). In the competition phase the population undergoes recombination in addition to mutation and selection in each reproduction cycle. To this end a recombination modifier locus is added to one end of the genome, with two alleles m and M, each present in exactly half of the population. Recombination between two genotypes depends on the modifier allele in both genotypes, with the corresponding recombination rates denoted by rmm, rmM, and rMM. For the simulations discussed in the main text we used rmm = rmM = 0 and rMM = 0.1. For this parameter choice individuals carrying distinct modifier alleles cannot exchange genetic material and thus any effect of increased recombination remains linked to the M allele. Sexual and asexual individuals compete directly with each other, and we refer to this scenario as the evolution of sex. In contrast, if rmm < rmM < rMM, then genetic material can be exchanged between all individuals. We refer to this scenario as the evolution of recombination. For the sake of simplicity, we primarily consider the evolution of sex in the main text, but analogous simulations of the evolution of recombination scenario led to qualitatively indistinguishable results (Text S1). Moreover, for the evolution of sex scenario we also tested values of rMM ranging from 0.01 to 0.3 (data not shown), which produced qualitatively indistinguishable results. All recombination values refer to the probability of a recombination event happening between neighboring loci, with one recombination event per genome. The position of the crossover point is chosen randomly. No mutations occur between the m and M alleles at the modifier locus. Recombination, mutation and selection as described above are deterministic and are calculated assuming infinite population size. To examine stochastic effects, we also considered populations with 1,000, 10,000, and 100,000 individuals. Those simulations included a step in which the frequencies of genotypes are sampled from a multinomial distribution according to their frequencies as calculated based on infinite population size. The burn-in phase always consists of 2500 generations of mutation and selection. We confirmed that 2500 generations were typically sufficient for the system to go into mutation-selection balance from random initial genotype frequencies (data not shown). The competition phase consists of 250 generations of recombination, mutation and selection. For infinite population size we ran a single
competition phase for each burn-in phase ., For finite-size populations , the outcome was estimated as the average of 100 simulations of the competition phase . | Introduction, Results, Discussion, Materials and Methods | Most population genetic theories on the evolution of sex or recombination are based on fairly restrictive assumptions about the nature of the underlying fitness landscapes ., Here we use computer simulations to study the evolution of sex on fitness landscapes with different degrees of complexity and epistasis ., We evaluate predictors of the evolution of sex , which are derived from the conditions established in the population genetic literature for the evolution of sex on simpler fitness landscapes ., These predictors are based on quantities such as the variance of Hamming distance , mean fitness , additive genetic variance , and epistasis ., We show that for complex fitness landscapes all the predictors generally perform poorly ., Interestingly , while the simplest predictor , ΔVarHD , also suffers from a lack of accuracy , it turns out to be the most robust across different types of fitness landscapes ., ΔVarHD is based on the change in Hamming distance variance induced by recombination and thus does not require individual fitness measurements ., The presence of loci that are not under selection can , however , severely diminish predictor accuracy ., Our study thus highlights the difficulty of establishing reliable criteria for the evolution of sex on complex fitness landscapes and illustrates the challenge for both theoretical and experimental research on the origin and maintenance of sexual reproduction . 
| One of the biggest open questions in evolutionary biology is why sexual reproduction is so common despite its manifold costs. Many hypotheses have been proposed that can potentially explain the emergence and maintenance of sexual reproduction in nature, and currently the biggest challenge in the field is assessing their plausibility. Theoretical work has identified the conditions under which sexual reproduction is expected. However, these conditions were typically derived by making strongly simplifying assumptions about the relationship between an organism's genotype and fitness, known as the fitness landscape. Building on previous theoretical work, we here propose different population properties that can be used to predict when sex will be beneficial. We then use simulations across a range of simple and complex fitness landscapes to test if such predictors generate accurate predictions of evolutionary outcomes. We find that one of the simplest predictors, related to variation of genetic distance between sequences, is also the most accurate one across our simulations. However, stochastic effects occurring in small populations compromise the accuracy of all predictors. Our study both illustrates the limitations of various predictors and suggests directions in which to search for new, experimentally attainable predictors. | computational biology/evolutionary modeling, evolutionary biology, genetics and genomics/population genetics | null |
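The finite-population step described in the methods above, sampling genotype frequencies from a multinomial distribution according to their deterministically computed values, can be sketched as follows. The population size and the genotype frequencies below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def resample_finite_population(freqs, pop_size, rng):
    """Draw genotype counts for a finite population from the
    deterministic next-generation frequencies, then renormalize."""
    counts = rng.multinomial(pop_size, freqs)
    return counts / pop_size

rng = np.random.default_rng(0)
freqs = np.array([0.5, 0.3, 0.2])   # deterministic genotype frequencies
sampled = resample_finite_population(freqs, 10_000, rng)
assert abs(sampled.sum() - 1.0) < 1e-9
```

Averaging many such draws, as in the 100 replicate competition phases, recovers the deterministic expectation while retaining drift in any single run.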
1,882 | journal.pcbi.1005534 | 2,017 | Locking of correlated neural activity to ongoing oscillations | To date it is unclear which channels the brain uses to represent and process information ., A rate-based view is argued for by the apparent stochasticity of firing 1 and by the high sensitivity of the network dynamics to single spikes 2 ., In an extreme view , correlated firing is a mere epiphenomenon of neurons being connected ., Indeed , a large body of literature has elucidated how correlations relate to the connectivity structure 3–14 ., But the matter is further complicated by the observation that firing rates and correlations tend to be co-modulated , as demonstrated experimentally and explained theoretically 4 , 5 ., If the brain employs correlated firing as a means to process or represent information , this requires in particular that the appearance of correlated events is modulated in a time-dependent manner ., Indeed , such modulations have been experimentally observed in relation to the expectation of the animal to receive task-relevant information 15 , 16 or in relation to attention 17 ., Oscillations are an extreme case of a time-dependent modulation of the firing rate of cells ., They are ubiquitously observed in diverse brain areas and typically involve the concerted activation of populations of neurons 18 ., They can therefore conveniently be studied in the local field potential ( LFP ) that represents a complementary window to the spiking activity of individual neurons or small groups thereof: It is composed of the superposition of the activity of hundreds of thousands to millions of neurons 19 , 20 and forward modeling studies have confirmed 21 that it is primarily driven by the synaptic inputs to the local network 22–24 ., As the LFP is a quantity that can be measured relatively easily , this mesoscopic signal is experimentally well documented ., Its interpretation is , however , still debated ., For example , changes in the amplitude of one of 
the components of the spectrum of the LFP have been attributed to changes in behavior ( cf . e . g . 25 ) ., A particular entanglement between rates and correlations is the correlated firing of spikes in pairs of neurons in relation to the phase of an ongoing oscillation ., With the above interpretation of the LFP primarily reflecting the input to the cells , it is not surprising that the mean firing rate of neurons may modulate in relation to this cycle ., The recurrent network model indeed confirms this expectation , as shown in Fig 1A ., It is , however , unclear if and by which mechanisms the covariance of firing follows the oscillatory cycle ., The simulation shown in Fig 1B indeed exhibits a modulation of the covariance between the activities of pairs of cells ., Such modulations have also been observed in experiments: Denker et al . 26 have shown that the synchronous activation of pairs of neurons within milliseconds preferentially appears at a certain phase of the oscillatory component of the LFP in the beta-range—in their words the spike-synchrony is “phase-locked” to the beta-range of the LFP ., They explain their data by a conceptual model , in which an increase in the local input , assumed to dominate the LFP , leads to the activation of cell assemblies ., The current work investigates an alternative hypothesis: We ask if a periodically-driven random network is sufficient to explain the time-dependent modulation of covariances between the activities of pairs of cells or whether additional structural features of the network are required to explain this experimental observation ., To investigate the mechanisms causing time-dependent covariances in an analytically tractable case , we here present the simplest model that we could come up with that captures the most important features: A local network receiving periodically changing external input ., The randomly connected neurons receive sinusoidally modulated input , interpreted as originating from other 
brain areas and mimicking the major source of the experimentally observed LFP ., While it is obvious that the mean activity in a network follows an imposed periodic stimulation , it is less so for covariances ., In the following we will address the question why they are modulated in time as well ., Extending the analysis of mean activities and covariances in the stationary state 13 , 27 , 28 , we here expose the fundamental mechanisms that shape covariances in periodically driven networks ., Our network model includes five fundamental properties of neuronal dynamics: First , we assume that the state of low and irregular activity in the network 1 is a consequence of its operation in the balanced state 29 , 30 , where negative feedback dynamically stabilizes the activity ., Second , we assume that each neuron receives a large number of synaptic inputs 31 , each of which only has a minor effect on the activation of the receiving cell , so that total synaptic input currents are close to Gaussian ., Third , we assume the neurons are activated in a threshold-like manner depending on their input ., Fourth , we assume a characteristic time scale τ that measures the duration of the influence a presynaptic neuron has on its postsynaptic targets ., Fifth , the output of the neuron is dichotomous or binary , spike or no spike , rather than continuous ., As a consequence , the variance of the single unit activity is a direct function of its mean ., We here show how each of the five above-mentioned fundamental properties of neuronal networks shape and give rise to the mechanisms that cause time-dependent covariances ., The presented analytical expressions for the linear response of covariances expose two different paths by which a time-dependence arises: By the modulation of single-unit variances and by the modulation of the linear gain resulting from the non-linearity of the neurons ., The interplay of negative recurrent feedback and direct external drive can cause resonant 
behavior of covariances even if mean activities are non-resonant. Qualitatively, these results explain the modulation of synchrony in relation to oscillatory cycles that is observed in experiments, but a tight locking of synchronous events to a particular phase of the cycle is beyond the mechanisms found in the models studied here. To address our central question, whether a periodically driven random network explains the experimental observations of time-modulated pairwise covariances, we consider a minimal model. It consists of one inhibitory (I) population and, in the latter part of the paper, additionally one excitatory population (E) of binary model neurons 6, 27, 29, 32. Neurons within these populations are recurrently and randomly connected. All neurons are driven by a global sinusoidal input mimicking the incoming oscillatory activity that is visible in the LFP, illustrated in Fig 2. The local network may in addition receive input from an external excitatory population (X), representing the surrounding of the local network. The fluctuations imprinted by the external population, providing shared inputs to pairs of cells, in addition drive the pairwise covariances within the network 13, cf. especially the discussion. Therefore we need the external population X to arrive at a realistic setting that includes all sources of covariances. In the following, we extend the analysis of cumulants in networks of binary neurons presented in 6, 13, 27, 28, 33 to the time-dependent setting. This formal analysis allows us to obtain analytical approximations for the experimentally observable quantities, such as pairwise covariances, that expose the mechanisms shaping correlated network activity. Binary model neurons at each point in time are either inactive, n_i = 0, or active, n_i = 1.
The time evolution of the network follows the Glauber dynamics 34; the neurons are updated asynchronously. In every infinitesimal time step dt, any neuron is chosen for update with probability dt/τ. After an update, neuron i is in the state 1 with probability F_i(n) and in the 0-state with probability 1 − F_i(n), where the activation function F is chosen to be

F_i(\mathbf{n}) = H(h_i - \theta_i), \qquad h_i = \sum_{k=1}^{N} J_{ik} n_k + h_{\mathrm{ext}} \sin(\omega t) + \xi_i, \qquad H(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases}. \tag{1}

We here introduced the connectivity matrix J with the synaptic weights J_{ij} ∈ ℝ describing the influence of neuron j on neuron i. The weight J_{ij} is negative for an inhibitory neuron j and positive for an excitatory neuron. Due to the synaptic coupling, the outcome of the update of neuron i potentially depends on the state n = (n_1, …, n_N) of all other neurons in the network. Compared to the equations in 13, page 4, we added an external sinusoidal input to the neurons, representing the influence of other cortical or subcortical areas, and Gaussian uncorrelated noise with vanishing mean ⟨ξ_i⟩ = 0 and covariance ⟨ξ_i ξ_j⟩ = δ_{ij} σ²_noise.
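As a rough illustration (not the NEST implementation used in the paper), the asynchronous Glauber update of Eq (1) can be sketched as follows; network size, coupling statistics, threshold and drive parameters are assumed toy values.

```python
import numpy as np

def glauber_step(n, J, theta, h_ext, omega, t, sigma_noise, rng):
    """One asynchronous update: pick a random neuron and set it to 1
    with probability H(h_i - theta_i), cf. Eq (1)."""
    i = rng.integers(len(n))
    h_i = J[i] @ n + h_ext * np.sin(omega * t) + rng.normal(0.0, sigma_noise)
    n[i] = 1 if h_i >= theta[i] else 0
    return n

rng = np.random.default_rng(1)
N, tau = 200, 10.0                       # illustrative values; tau in ms
J = -0.5 * rng.binomial(1, 0.1, (N, N))  # sparse inhibitory coupling
theta = np.full(N, -5.0)
n = rng.binomial(1, 0.2, N)
for step in range(10 * N):               # time advances by tau/N per update
    n = glauber_step(n, J, theta, h_ext=1.0, omega=2 * np.pi * 0.02,
                     t=step * tau / N, sigma_noise=1.0, rng=rng)
```

Each neuron is hence updated on average once per time constant τ, which is the asynchronous update scheme the master equation below formalizes.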
The threshold θ_i depends on the neuron type and will be chosen according to the desired mean activity. We employ the neural simulation package NEST 35, 36 for simulations. Analytical results are obtained by mean-field theory 6, 13, 27, 28, 37, 38 and are described for completeness and consistency of notation in the section "Methods". In the main text we only mention the main steps and assumptions entering the approximations. The basic idea is to describe the time evolution of the Markov system in terms of its probability distribution p(n, t). Using the master Eq (14) we obtain ordinary differential equations (ODEs) for the moments of p(n, t). In particular we are interested in the population-averaged mean activities m_α, variances a_α, and covariances c_αβ,

m_\alpha(t) := \frac{1}{N_\alpha} \sum_{i \in \alpha} \langle n_i(t) \rangle \tag{2}

a_\alpha(t) := \frac{1}{N_\alpha} \sum_{i \in \alpha} \left( \langle n_i(t) \rangle - \langle n_i(t) \rangle^2 \right) \tag{3}

c_{\alpha\beta}(t) := \frac{1}{N_\alpha N_\beta} \sum_{i \in \alpha,\, j \in \beta,\, i \neq j} \left( \langle n_i(t)\, n_j(t) \rangle - \langle n_i(t) \rangle \langle n_j(t) \rangle \right) \tag{4}

which are defined as expectation values ⟨⟩ over realizations of the network activity, where the stochastic update of the neurons and the external noisy input present the sources of randomness in the network. The dynamics couples moments of arbitrarily high order 33. To close this set of equations, we neglect cumulants of order higher than two, which also approximates the input by a Gaussian stochastic variable with cumulants that vanish for orders higher than two 39. This simplification can be justified by noticing that the number of neurons contributing to the input is large and their activity is weakly correlated, which makes the central limit theorem applicable. In a homogeneous random network, on expectation there are K_αβ = p_αβ N_β synapses from population β to a neuron in population α. Here p_αβ is the connection probability, i.e. the probability that there is a synapse from any neuron in population β to a particular neuron in population α, and N_α is the size of the population. Mean Eq (2) and covariance Eq (4)
then follow the coupled set of ordinary differential equations (ODEs, see section II A in S1 Text for derivation)

\tau \frac{d}{dt} m_\alpha(t) = -m_\alpha(t) + \varphi\big(\mu_\alpha(m(t), h_{\mathrm{ext}} \sin \omega t),\, \sigma_\alpha(m(t), c(t))\big) \tag{5}

\tau \frac{d}{dt} c_{\alpha\beta}(t) = \Big\{ -c_{\alpha\beta}(t) + \sum_\gamma S\big(\mu_\alpha(m(t), h_{\mathrm{ext}} \sin \omega t),\, \sigma_\alpha(m(t), c(t))\big)\, K_{\alpha\gamma} J_{\alpha\gamma} \Big( c_{\gamma\beta}(t) + \delta_{\gamma\beta} \frac{a_\beta(t)}{N_\beta} \Big) \Big\} + (\alpha \leftrightarrow \beta), \tag{6}

where (α ↔ β) indicates the transposed term. The Gaussian truncation employed here is parameterized by the mean μ_α and the variance σ_α² of the summed input to a neuron in population α. These, in turn, are functions of the mean activity and the covariance, given by Eqs (18) and (19), respectively. Here φ is the expectation value of the activation function, which is smooth, even though the activation function itself is a step function, therefore not even continuous. The function φ fulfills lim_{m→0} φ = 0 and lim_{m→1} φ = 1 and monotonically increases. Its derivative S with respect to μ has a single maximum and is largest for a mean input μ within a region of size σ around the threshold θ. S measures the strength of the response to a slow input and is therefore termed susceptibility. The definitions are given in "Methods" in Eqs (17) and (20). The stationary solution (indicated by a bar) of the ODEs Eqs (5) and (6) can be found by solving the equations

\bar{m} = \varphi(\bar{m}) \tag{7}

2\bar{c}_{\alpha\beta} = \sum_\gamma \bar{S}_\alpha K_{\alpha\gamma} J_{\alpha\gamma} \Big( \bar{c}_{\gamma\beta} + \delta_{\gamma\beta} \frac{\bar{a}_\beta}{N_\beta} \Big) + (\alpha \leftrightarrow \beta) \tag{8}

numerically and self-consistently, as it was done in 13, 27, 33. The full time-dependent solution of Eqs (5) and (6) can, of course, be determined numerically without any further assumptions. Besides the comparison with simulation results, this will give us a check for the subsequently applied linear perturbation theory. The resulting analytical results allow the identification of the major mechanisms shaping the time-dependence of the first two cumulants. To this end, we linearize the ODEs Eqs (5) and (6) around their stationary solutions. We only keep the
linear term of order h_ext of the deviation, justifying a Fourier ansatz for the solutions. For the mean activities this results in m_α(t) = m̄_α + δm_α(t) = m̄_α + M¹_α e^{iωt} with

M^{1}_\alpha = \sum_\beta U_{\alpha\beta}\, h_{\mathrm{ext}} \left[ U^{-1} S(\bar{\mu}, \bar{\sigma}) \right]_\beta \frac{-i\tau\omega + 1 - \lambda_\beta}{(\tau\omega)^2 + (1 - \lambda_\beta)^2}. \tag{9}

The time-dependence of σ was neglected here, which can be justified for large networks ("Methods", Eqs (22) and (30)). The matrix U represents the basis change that transforms W̄_{αβ} := S(μ̄_α, σ̄_α) K_{αβ} J_{αβ} into a diagonal matrix, with λ_α the corresponding eigenvalues. We see that, independent of the number of populations or the detailed form of the connectivity matrix, the amplitude of the time-dependent part of the mean activities has the shape of a low-pass-filtered signal to first order in h_ext. Therefore the phase of δm lags behind the external drive and its amplitude decreases asymptotically like 1/ω, as can be seen in Fig 3A and 3B. If we also separate the covariances into their stationary part and a small deviation that is linear in the external drive, c_{αβ}(t) = c̄_{αβ} + δc_{αβ}(t), expand S(μ_α(t), σ_α(t)) and a(t) around their stationary values, keeping only the terms of order h_ext, and neglect contributions from the time-dependent variation of the variance of the input σ² (see "Methods", especially Eq (30) for a discussion of this point), we get the ODE

\tau \frac{d}{dt} \delta c(t) + 2\, \delta c(t) - \bar{W}\, \delta c(t) - \big( \bar{W}\, \delta c(t) \big)^T = \Big\{ \underbrace{\bar{W}\, \mathrm{diag}\Big(\frac{1 - 2\bar{m}}{N}\Big)\, \mathrm{diag}\big(\delta m(t)\big)}_{\text{modulated-autocovariances-drive}} + \mathrm{diag}\Big( \underbrace{(K \circledast J)\, \delta m(t)}_{\text{recurrent drive}} + \underbrace{h_{\mathrm{ext}} \sin(\omega t)}_{\text{direct drive}} \Big)\, \mathrm{diag}\Big(\frac{\partial S}{\partial \mu}(t)\Big) (K \circledast J)\, \bar{c}^{\,\mathrm{total}} \Big\} + \{\dots\}^T, \tag{10}

where we introduced the point-wise (Hadamard) product ⊛ of two matrices A and B (see 40 for a consistent notation of matrix operations) as (A ⊛ B)_{ij} := A_{ij} B_{ij}, defined the matrix with the entries diag(x)_{ij} := δ_{ij} x_i for the vector x = (x_1, …, x_n), and set c̄^total := c̄ + diag(ā/N) to bring our main equation into a compact form. We can now answer the question posed in the beginning: Why does a global periodic drive influence the cross covariances in the network at all and does not just make the mean activities oscillate? First, the variances are modulated with time, simply because they are determined via Eq (3) by the modulated mean activities. A neuron i with modulated autocovariance a_i(t) projects via J_{ji} to another neuron j and therefore shapes the pairwise covariance c_{ji}(t) in a time-dependent way. We call this effect the "modulated-autocovariances-drive", indicated by the curly brace in the second line of Eq (10). Its form in index notation is [W̄ diag((1 − 2m̄)/N) diag(δm(t))]_{αβ} = W̄_{αβ} (1 − 2m̄_β)/N_β · δm_β(t). This is the low-pass-filtered input. The other contributions are a bit more subtle and less obvious, as they are absent in networks with a linear activation function. The derivative of the expectation value of the activation function, the susceptibility, contributes linearly to the ODE of the covariances. As the threshold-like activation function gives rise to a nonlinear dependence of φ on the mean input μ, the susceptibility S = φ′ is not constant, but depends on the instantaneous mean input. The latter changes as a function of time by the direct external drive and by the recurrent feedback of the oscillating mean activity, indicated by the terms denoted by the curly braces in the third line of Eq (10). Together, we call these two terms the "susceptibility terms". Both terms are of the same form

\Big[ \mathrm{diag}\big(\delta\mu(t)\big)\, \mathrm{diag}\Big(\frac{\partial S}{\partial \mu}(t)\Big) (K \circledast J)\, \bar{c}^{\,\mathrm{total}} \Big]_{\alpha\beta} = \delta\mu_\alpha(t)\, \frac{\partial S_\alpha}{\partial \mu_\alpha} \sum_\gamma K_{\alpha\gamma} J_{\alpha\gamma} \Big( \bar{c}_{\gamma\beta} + \delta_{\gamma\beta} \frac{\bar{a}_\beta}{N_\beta} \Big), \tag{11}

but with different δμ_α. This form shows how the time-dependent modulation of the mean input δμ_α influences, via the second derivative of the gain function ∂S_α/∂μ_α = φ″, the transmission of covariances. The sum following ∂S_α/∂μ_α is identical to the one in the static case Eq (8). For the "recurrent drive", the time-dependent input is given by δμ_α(t) = Σ_β K_αβ J_αβ δm_β(t), which is a superposition of the time-dependent activities that project to population α and is therefore low-pass-filtered, too. The term due to the "direct drive" is δμ_α(t) = h_ext sin(ωt). We solve Eq (10) by transforming into the eigensystem of W̄ and inserting a Fourier ansatz, δc_{αβ}(t) = C¹_{αβ} e^{iωt}. The solution consists of a low-pass-filtered part coming from the direct drive and two parts that are low-pass filtered twice, coming from the recurrent drive and the modulated-autocovariances-drive. For a detailed derivation, consult the section "Covariances: Stationary part and response to a perturbation in linear order". We have calculated higher Fourier modes of the simulated network activity and of the numerical solution of the mean-field equations to check if they are small enough to be neglected, so that the response is dominated by the linear part. Of course, it would be possible to derive analytical expressions for those as well. However, we will see that the linear order and the corresponding first harmonic qualitatively, and for remarkably large perturbations even quantitatively, gives the right predictions. The limits of this approximation are analyzed in Fig D in S1 Text. We will therefore constrain our analysis to controlling the higher harmonics through the numerical solution. In the following we will study three different models of balanced neuronal networks to expose the different mechanisms in their respective simplest setting.
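The structure of the linear response in Eq (9), a projection onto the eigenbasis of W̄ followed by a first-order low-pass filter per eigenmode, can be made concrete numerically. The effective connectivity, susceptibilities and time constant below are assumed toy values, not the paper's parameters.

```python
import numpy as np

def mean_response_amplitude(W_bar, S, h_ext, tau, omega):
    """First Fourier mode M^1 of the mean activity, Eq (9):
    project the drive onto the eigenbasis U of W_bar, apply the
    per-mode low-pass factor 1/(1 - lambda + i*tau*omega), map back."""
    lam, U = np.linalg.eig(W_bar)
    drive = np.linalg.inv(U) @ (h_ext * S)
    modes = drive / (1.0 - lam + 1j * tau * omega)
    return U @ modes

W_bar = np.array([[0.4, -0.9],
                  [0.3, -0.8]])          # assumed effective E-I coupling
S = np.array([0.1, 0.1])                 # assumed susceptibilities
amps = [np.abs(mean_response_amplitude(W_bar, S, 1.0, 10.0, w))
        for w in (0.01, 0.1, 1.0)]
# amplitude decays with frequency, consistent with the 1/omega asymptotics
assert np.all(amps[0] >= amps[2])
```

The factor 1/(1 − λ + iτω) is the same complex gain as the fraction in Eq (9), written with the conjugate absorbed.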
We have left out so far several steps in the derivation of the results that were not necessary for the presentation of the main ideas. In this section, we therefore give a self-contained derivation of our results, also necessitating paraphrases of some results known from earlier works. The starting point is the master equation for the probability density of the possible network states emerging from the Glauber dynamics 34 described in "Binary network model and its mean field equations" (see for the following also 13, 37)

\frac{\partial p}{\partial t}(\mathbf{n}, t) = \underbrace{\frac{1}{\tau}}_{\text{update rate}} \sum_{i=1}^{N} \underbrace{(2 n_i - 1)}_{\in \{-1, 1\},\ \text{direction of flux}} \underbrace{\phi_i(\mathbf{n} \setminus n_i, t)}_{\text{net flux due to neuron } i} \quad \forall\, \mathbf{n} \in \{0, 1\}^N, \tag{14}

where

\phi_i(\mathbf{n} \setminus n_i, t) = \underbrace{p(\mathbf{n}_i^-, t)\, F_i(\mathbf{n}_i^-)}_{\text{neuron } i \text{ transition up}} - \underbrace{p(\mathbf{n}_i^+, t) \big(1 - F_i(\mathbf{n}_i^+)\big)}_{\text{neuron } i \text{ transition down}} = -p(\mathbf{n}_i^+, t) + p(\mathbf{n}_i^-, t)\, F_i(\mathbf{n}_i^-) + p(\mathbf{n}_i^+, t)\, F_i(\mathbf{n}_i^+).

The activation function F_i(n) is given by Eq (1). Using the master equation (for details cf. section II A in S1 Text), one can derive a differential equation for the mean activity of neuron i, ⟨n_i⟩(t) = Σ_n p(n, t) n_i, and the raw covariance of the neurons i and j, ⟨n_i(t) n_j(t)⟩ = Σ_n p(n, t) n_i n_j 6, 13, 27, 34, 37. This yields

\tau \frac{d}{dt} \langle n_k \rangle(t) = -\langle n_k \rangle(t) + \langle F_k \rangle(t), \qquad \tau \frac{d}{dt} \langle n_k n_l \rangle(t) = -\langle n_k n_l \rangle(t) + \langle n_l F_k \rangle(t) + (k \leftrightarrow l). \tag{15}

As mentioned in "Binary network model and its mean field equations", we assume that the input h_i coming from the local and the external population is normally distributed, say with mean μ_i and standard deviation σ_i given by

\mu_i(t) := \langle h_i \rangle = (J \langle \mathbf{n} \rangle)_i + h_{\mathrm{ext}} \sin(\omega t), \qquad \sigma_i^2(t) := \langle h_i^2 \rangle - \langle h_i \rangle^2 = \sum_{k, k' = 1}^{N} J_{ik} J_{ik'} \big( \langle n_k n_{k'} \rangle - \langle n_k \rangle \langle n_{k'} \rangle \big) + \sigma_{i, \mathrm{noise}}^2 = (J c J^T)_{ii} + \big[(J \circledast J)\big( \langle \mathbf{n} \rangle \circledast (1 - \langle \mathbf{n} \rangle) \big)\big]_i + \sigma_{i, \mathrm{noise}}^2, \tag{16}

where the average ⟨⟩ is taken over realizations of the stochastic dynamics and we used the element-wise (Hadamard) product (see main text). The additional noise introduced in Eq (1) effectively leads to a smoothing of the neurons' activation threshold and broadens the width of the input distribution. It can be interpreted as additional variability coming from other brain areas. Furthermore, it is computationally convenient, because the theory assumes the input to follow a (continuous) Gaussian distribution, while in the simulation the input Σ_{k=1}^N J_{ik} n_k, being a sum of discrete binary variables, can only assume discrete values. The smoothing by the additive noise therefore improves the agreement of the continuous theory with the discrete simulation. Already weak external noise compared to the intrinsic noise is sufficient to obtain a quite smooth probability distribution of the input (Fig 8). The description in terms of a coupled set of moment equations instead of the ODE for the full probability distribution here serves to reduce the dimensionality: it is sufficient to describe the time evolution of the moments on the population level, rather than on the
level of individual units. To this end we need to assume that the synaptic weights J_{ij} only depend on the populations α, β ∈ {exc., inh., ext.} that i and j belong to, respectively, and thus (re)name them J_{αβ} (homogeneity). Furthermore, we assume that not all neurons are connected to each other, but that K_{αβ} is the number of incoming connections a neuron in population α receives from neurons in population β (fixed in-degree). The incoming connections to each neuron are chosen randomly, uniformly distributed over all possible sending neurons. This leads to expressions for the population-averaged input h_α, mean activity m_α and covariance c_{αβ}, formally nearly identical to those on the single-cell level and analogous to those in 13, sec. Mean-field solution. The present work offers an extension of the well-known binary neuronal network model beyond the stationary case 6, 13, 27, 28, 33. We here describe the influence of a sinusoidally modulated input on the mean activities and the covariances to study the statistics of recurrently generated network activity in an oscillatory regime, ubiquitously observed in cortical activity 18. Comparing with the results of the simulation of the binary network with NEST 35, 36 and the numerical solution of the full mean-field ODE, we are able to show that linear perturbation theory is sufficient to explain the most important effects occurring due to sinusoidal drive. This enables us to understand the mechanisms with the help of analytical expressions, and furthermore we can predict the network response to any time-dependent perturbation with an existing Fourier representation by decomposing the perturbing input into its Fourier components. We find that the amplitude of the modulation of the mean activity is of the order h_ext / ((1 − λ_α)² + (τω)²)^{1/2}, where λ_α, α ∈ {E, I}, are the eigenvalues of the effective connectivity matrix W, i.e.
the input is filtered by a first-order low-pass filter and the amplitude of the modulation decays like ω⁻¹ for large frequencies. This finding is in line with earlier work on the network susceptibility 27, esp. section V. The qualitatively new result here is the identification of two distinct mechanisms by which the covariances δc are modulated in time. First, covariances are driven by the direct modulation of the susceptibility S due to the time-dependent external input and by the recurrent input from the local network. Second, time-modulated variances, analogous to their role in the stationary setting 13, drive the pairwise covariances. Our setup is the minimal network model in which these effects can be observed, minimal in the sense that we would lose these properties if we further simplified the model: the presence of a nonlinearity in the neuronal dynamics, here assumed to be a threshold-like activation function, is required for the modulation of covariances by the time-dependent change of the effective gain. In a linear rate model 10, 46 this effect would be absent, because mean activities and covariances then become independent. The second mechanism relies on the binary nature of neuronal signal transmission: the variance a(t) of the binary neuronal signal is, at each point in time, completely determined by its mean m(t). This very dependence provides the second mechanism by which the temporally modulated mean activity causes time-dependent covariances, because all fluctuations and therefore all covariances are driven by the variance a(t). Rate models have successfully been used to explain the smallness of pairwise covariances 6 by negative feedback 10. A crucial difference is that their state is continuous, rather than binary. As a consequence, the above-mentioned fluctuations, present due to the discrete nature of the neuronal signal transmission, need to be added artificially: the pairwise statistics of
spiking or binary networks are equivalent to the statistics of rate models with additive white noise 46. To obtain qualitative or even quantitative agreement of time-dependent covariances between spiking or binary networks and rate models, the variance of this additive noise needs to be chosen as a function of the mean activity and its time derivative. The direct modulation of the susceptibility S due to the time-dependent external input leads to a contribution to the covariances with first-order low-pass-filter characteristics that dominates the modulated covariances at large frequencies. For small, and probably biologically realistic, frequencies (typically the LFP shows oscillations in the β-range around 20 Hz), however, the modulation of the susceptibility by the local input from the network leads to an equally important additional modulation of the susceptibility. The intrinsic fluctuations of the network activity are moreover driven by the time-dependent modulation of the variance, which is a function of the mean activity as well. Because the mean activity follows the external drive in a low-pass-filtered manner, the latter two contributions hence exhibit second-order low-pass-filter characteristics. These contributions are therefore important at the small frequencies we are interested in here. The two terms modulating the susceptibility, by the direct input and by the feedback of the mean activity through the network, have opposite signs in balanced networks. In addition they have different frequency dependencies. In networks in which the linearized connectivity has only real eigenvalues, these two properties together lead to their summed absolute value having a maximum. Whether or not the total modulation of the covariance shows resonant behavior, however, depends also on the third term that stems from the modulated variances. We find that in purely inhibitory networks, the resonance peak is
typically overshadowed by the latter term. This is because inhibitory feedback leads to negative average covariances 13, which we show here reduce the driving force for the two resonant contributions. In balanced E-I networks, the driving force is not reduced, so the resonant contribution can become dominant. For the biologically motivated parameters used in the last setting studied here, the effective coupling matrix W has complex eigenvalues, which cause resonant mean activities. If the inhomogeneity were independent of the driving frequency, δc would have resonant modes with frequencies f_res and 2f_res. Due to the mixing of the different modes and the frequency dependence of the inhomogeneity driving the modulation of covariances, these modes determine only the ballpark for the location of the resonance in the covariance. In particular, the resonances are not sharp enough for each of them to be visible in every combination of the modes. Different behavior is expected near the critical point where ℜ(λ) ≲ 1.
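The link between complex eigenvalues of the effective coupling matrix W and a resonant linear response can be illustrated with a small numerical check. The matrix below is an assumed toy example, not the biologically motivated parameter set of the paper.

```python
import numpy as np

# Assumed effective coupling matrix with a complex eigenvalue pair
# (illustrative values only).
W = np.array([[1.0, -1.6],
              [1.2, -1.0]])
lam = np.linalg.eigvals(W)
assert np.all(np.abs(lam.imag) > 0)  # complex conjugate pair

# Linear gain of the mean response for one eigenmode,
# |1 / (1 - lambda + i*tau*omega)|: a maximum at omega > 0 is a resonance.
tau = 10.0
lam_p = lam[np.argmax(lam.imag)]     # eigenvalue with positive imaginary part
omegas = np.linspace(0.0, 0.5, 501)
gain = np.abs(1.0 / (1.0 - lam_p + 1j * tau * omegas))
assert gain.argmax() > 0             # resonant peak away from omega = 0
```

For real eigenvalues the same gain is monotonically decreasing in ω, which is the non-resonant low-pass behavior discussed above.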
For predictions of experimental results, however, a more careful choice of biologically reasonable parameters would be necessary. In particular, the external drive should be gauged such that the modulations of the mean activities are in the experimentally observed range. Still, our setup shows that the theory presented here works in the biologically plausible parameter range. The goal of extracting fundamental mechanisms of time-dependent covariances guides the level of detail chosen for our model. Earlier works 6, 28, 29 showed that our setup without sinusoidal drive is sufficient to qualitatively reproduce and explain phenomena observed in vivo, like the high variability of neuronal activity and small covariances. The latter point can be explained in binary networks by the suppression of fluctuations by inhibitory feedback, which is a general mechanism also applicable to other neuron models 10 and even finds application outside neuroscience, for example in electrical engineering 47. The high variability observed in binary networks can be explained by the network being in the balanced state, which robustly emerges in the presence of negative feedback 29, 30. In this state, the mean excitatory and inhibitory synaptic inputs cancel to the extent that the summed input to a neuron fluctuates around its threshold. This explanation holds also for other types of model networks and for biological neural networks 48. We have seen here that the operation in the balanced state, at low frequencies, gives rise to a partial cancellation of the modulation of covariances. Our assumption of a network of homogeneously connected binary neurons implements the general feature of neuronal networks that every neuron receives input from a macroscopic number of other neurons, letting the impact of a single synaptic afferent on the activation of a cell be small and the summed input be distributed close to Gaussian: for uncorrelated incoming
activity , the ratio between the fluctuations caused by a single input and the fluctuations of the total input is N − 1 2 , independent of how synapses scale with N . However , the input to a neuron is actually not independent , but weakly correlated , with covariances decaying at least as fast as N−1 6 , 29 ., Therefore this additional contribution to the fluctuations also decays like N − 1 2 . The Gaussian approximation of the synaptic input relies crucially on these properties ., Dahmen et al . 39 investigated third order cumulants , the next order of non-Gaussian corrections to this approximation ., They found that the approximation has a small error even down to small networks of about 500 neurons and 50 synaptic inputs per neuron ., These estimates hold as long as all synaptic weights are of equal size ., For distributed synaptic amplitudes , in particular those following a wide or heavy-tailed distributions ( e . g . 49 , 50 , reviewed in 51 ) , we expect the simple mean-field approximation applied here to require corrections due to the strong effect of single synapses ., The generic feature of neuronal dynamics , the threshold-like nonlinearity that determines the activation of a neuron , is shared by the binary , the leaky integrate-and-fire and , approximately , also the Hodgkin-Huxley model neuron ., An important approximation entering our theory is the linearity of the dynamic response with respect to the perturbation ., We estimate the validity of our theory by comparison to direct simulations ., To estimate the breakdown of this approximation we compare the linear response to the first non-linear correcti | Introduction, Results, Methods, Discussion | Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity ., In these network states a global oscillatory cycle modulates the propensity of neurons to fire ., Synchronous activation of neurons has been hypothesized to be a separate channel of signal processing 
information in the brain. A salient question is therefore if and how oscillations interact with spike synchrony and in how far these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.
| In network theory, statistics are often considered to be stationary. While this assumption can be justified by experimental insights to some extent, it is often also made for reasons of simplicity. However, the time-dependence of statistical measures does matter in many cases. For example, time-dependent processes are examined for gene regulatory networks or networks of traders at stock markets. Periodically changing activity of remote brain areas is visible in the local field potential (LFP), and its influence on spiking activity is currently debated in neuroscience. In experimental studies, however, it is often difficult to determine time-dependent statistics due to a lack of sufficient data representing the system at a certain time point. Theoretical studies, in contrast, allow the assessment of the time-dependent statistics with arbitrary precision. We here extend the analysis of the correlation structure of a homogeneously connected E-I network consisting of binary model neurons to the case including a global sinusoidal input to the network. We show that the time-dependence of the covariances can, to first order, be explained analytically. We expose the mechanisms that modulate covariances in time and show how they are shaped by inhibitory recurrent network feedback and the low-pass characteristics of neurons. These generic properties carry over to more realistic neuron models. | resonance frequency, perturbation theory, neural networks, random variables, neuroscience, covariance, probability distribution, mathematics, algebra, network analysis, quantum mechanics, computer and information sciences, animal cells, resonance, probability theory, physics, cellular neuroscience, cell biology, linear algebra, neurons, biology and life sciences, cellular types, physical sciences, eigenvalues | null |
1,179 | journal.pcbi.1000826 | 2,010 | Mammalian Sleep Dynamics: How Diverse Features Arise from a Common Physiological Framework | The diversity of mammalian sleep poses a great challenge to those studying the nature and function of sleep. Typical daily sleep durations range from 3 h in horses to 19 h in bats [1, 2], which has led to recent speculation that sleep has no universal function beyond timing environmental interactions, with its character defined purely by ecological adaptations on a species-by-species basis [3]. Consolidated (monophasic) sleep has only been reported in primates [2], whereas the vast majority of mammals sleep polyphasically, with sleep fragmented into a series of daily episodes ranging in average length from just 6 min in rats to 2 h in elephants [1]. Some aquatic mammals (such as dolphins and seals) engage in unihemispheric sleep, whereby they sleep with only one brain hemisphere at a time [4–6]. This behavior appears to serve several functions, including improved environmental surveillance, sensory processing, and respiratory maintenance [7], although the physiological mechanism is unknown [8, 9]. Determining which aspects of mammalian sleep patterns can be explained within a single framework therefore has important implications in terms of both the evolution and function of sleep. As we show here, although mammalian sleep is remarkably diverse in expression, it is very likely universal in origin. Recent advances in neurophysiology have revealed the basic mechanisms that control the mammalian sleep cycle [10, 11]. Monoaminergic (MA) brainstem nuclei diffusely project to the cerebrum, promoting wake when they are active [12]. Mutually inhibitory connections between the MA and the sleep-active ventrolateral preoptic area of the hypothalamus (VLPO) result in each group reinforcing its own activity by inhibiting the other and thereby indirectly disinhibiting itself. This forms the basis of the sleep-wake switch, with active MA and suppressed VLPO in wake, and vice versa in sleep [10]. State transitions are effected by circadian and homeostatic drives, which are afferent to the VLPO [13]. The approximately 24 h periodic circadian drive is entrained by light, and projects from the suprachiasmatic nucleus (SCN) to the VLPO via the dorsomedial hypothalamus (DMH) [14]. The homeostatic drive is a drive to sleep that increases during wake due to accumulation of somnogens, accounting for the observed sleep rebound following sleep deprivation [15]. During sleep, somnogen clearance exceeds production and the homeostatic drive decreases. The exact physiological pathway has yet to be fully elaborated, but some important somnogenic factors have been identified, including adenosine (a metabolic by-product of ATP hydrolysis) [16] and immunomodulatory cytokines [17]. The present work uses a model that does not depend on the precise identity of the somnogen (or somnogens), but may help to elucidate its characteristics. Whether the above system can account for the wide variety of mammalian sleep patterns is unknown. Is the sleep-wake switch a universal physiological structure among mammals? Or are the qualitative differences in sleep-wake patterns between species such as rats and dolphins due to fundamentally different mechanisms? To answer these questions we apply a recent quantitative physiologically-based model [18, 19]; this approach allows the underlying physiological structure to be related to the observed dynamics. As shown in Fig. 1, the model includes the MA and VLPO groups, circadian and homeostatic drives to the VLPO, and cholinergic and orexinergic input to the MA (for mathematical details, see Methods). The model is based on physiological and behavioral studies of a small number of species, including rats, mice, cats, and humans, and has been calibrated previously to reproduce normal human sleep and recovery from sleep deprivation [18, 19]. But as we will show, the model is also capable of reproducing the typical sleeping patterns of a wide range of mammalian species, including both terrestrial and aquatic mammals. With nominal parameter values (given in Methods), the model has previously been shown to reproduce normal human sleep patterns, with approximately 8 h of consolidated sleep and relatively rapid (approximately 10 min) transitions between wake and sleep [18], as shown in Fig. 1. We found that by varying just two of the model parameters, the model could be made to reproduce the bihemispheric sleep patterns of a wide variety of mammals, including many in which the neuronal circuitry controlling sleep rhythms has not been examined. These parameters were: (i) the homeostatic time constant, determining the rate of somnogen accumulation and clearance, and (ii) the mean drive to the VLPO, provided by the SCN, DMH and other neuronal populations. The homeostatic time constant was found previously to be approximately 45 h for humans, based on the rate of recovery from total sleep deprivation [19], but we found here that reducing it below 16 h resulted in polyphasic sleep, as seen in most other mammals. This is because a shorter time constant causes somnogens to accumulate more quickly during wake and dissipate more quickly during sleep, resulting in more rapid cycling between wake and sleep. Increasing the mean inhibitory drive to the VLPO was found to decrease daily sleep duration with little effect on the other dynamics. Fitting the
model to experimental data for 17 species in which both average daily sleep duration and average sleep episode length have been reliably reported yielded the map in Fig. 2, showing which regions of parameter space correspond to the typical sleep patterns of each species. (Note that at least some quantitative sleep data is available for over 60 species, but these two measures have not both been reliably reported in most cases.) This map enables classification of mammals based on sleep patterns, and can be further populated in future when more data become available. The regions corresponding to the human, rhesus monkey, and slow loris lie in the monophasic zone, but with different mean VLPO drives. In each case, the lower bound for the homeostatic time constant was determined by the boundary of the monophasic zone. For humans, the upper bound of 72 h was previously determined using sleep deprivation experiments [19]. In the absence of experiments detailing recovery from total sleep deprivation in non-human primates, we used the same upper bound for both the rhesus monkey and the slow loris; more data are required to rigorously constrain the homeostatic time constant for these species. Animals that sleep relatively little, such as the elephant, were inferred to have high values of mean drive to the VLPO, while animals that sleep a lot, such as the opossum and armadillo, were inferred to have low values of mean drive to the VLPO. Those that cycle rapidly between wake and sleep, such as rodents, were inferred to have short homeostatic time constants (around 10 min to 1 h), while those with fewer sleep episodes per day, such as the jaguar and elephant, were inferred to have longer time constants (around 5 h to 10 h), thus lying closer to the boundary between polyphasic and monophasic sleep. The extreme cases of no wake and no sleep may correspond to brainstem lesions, such as those documented clinically [31], and possibly other states of reduced arousal (e.g., hibernation, torpor, coma), although we did not pursue them here. Using parameter values from the appropriate regions in Fig. 2, we generated sample time series for various species. Comparisons to experimental data for the human, elephant and opossum are shown in Fig. 3. In each case, the model reproduced the salient features of the sleep/wake pattern. For the opossum, the circadian signal was shifted in phase by 12 h to reproduce the nocturnal distribution. This is justified by physiological evidence suggesting that temporal niche is determined by how SCN output is modulated by the DMH relay system [11]. Plotting the homeostatic time constants inferred for each species versus body mass in Fig. 4 revealed a positive correlation. Fitting a power-law relationship yielded an exponent of 0.29±0.10 for non-primates. Additional data are required to accurately constrain homeostatic time constants in non-human primates, but using the human-derived upper bound of 72 h yielded an exponent of 0.01±0.26 for primates, and 0.28±0.12 for all species. Power-law relationships are ubiquitous in biology, although their quantification remains controversial. For mammals it has been found that both brain mass and metabolic power per unit volume of brain tissue scale approximately as powers of total body mass [33]. Without knowing the precise mechanism by which the homeostatic drive is regulated, we nonetheless tested general assumptions that are equally applicable to a wide range of candidate mechanisms. We assumed that somnogen production is proportional to the total power output of the brain (as would plausibly be the case for adenosine), meaning production per unit volume would also scale as a power of body mass, with different production rates in wake and sleep. Furthermore, we made the generic assumption that somnogen clearance rate is proportional to working surface area, where this surface area may be glial, vascular, or otherwise, depending on the exact physiological pathway. The total clearance rate then scaled as a power α of brain mass, where the exponent depends on the geometry: α = 2/3 corresponds to surface area scaling as the square of the brain's linear dimension (i.e., as for simple solids), and α = 1 to scaling as its cube (e.g., as for solids with highly convoluted or fractal surfaces). By assuming the clearance rate was also proportional to somnogen concentration, the homeostatic time constant was found to be proportional to brain mass to the power 1−α (see Methods for a full derivation). For α = 2/3, this yielded a power-law exponent of 0.23, consistent with that found for non-primates. The smaller exponent found for primates was consistent to within uncertainties with that found for non-primates; more primate data are required to determine whether α is closer to 1 in primates, or whether both groups follow the same scaling law but with different normalization constants. We next turned to modeling unihemispheric sleep by extending the above model to permit distinct dynamics for the two halves of the brain. As shown in Fig.
1, this was achieved by coupling together two identical versions of the original model, each representing one hemisphere. This division in the model was justified by the fact that all nuclei in the VLPO and MA groups are bilaterally paired [12, 34], with the exception of the dorsal raphé nucleus, which lies on the brainstem midline [12]. Separate homeostatic drives were included for each brain hemisphere, based on experimental evidence for localized homeostatic effects in humans, rats and dolphins [35–38]. Aquatic mammals that have been observed to sleep unihemispherically spend little or no time in bihemispheric sleep while in water [8] (although fur seals switch to exclusively bihemispheric sleep when on land [39]). Hence, we postulated the existence of a mutually inhibitory connection between the two VLPO groups in aquatic mammals to prevent both activating at once (just as the mutually inhibitory VLPO-MA connection prevents both of those groups activating simultaneously), thereby preventing bihemispheric sleep. This connection is presumably absent or very weak in other mammals. For VLPO-VLPO connection strengths weaker than a threshold value, sleep was purely bihemispheric; above this value at least some unihemispheric sleep episodes occurred. For connection strengths stronger than a higher threshold, the model exhibited purely unihemispheric sleep, typical of cetaceans. Differing homeostatic pressures between the two hemispheres drove alternating episodes of left and right unihemispheric sleep, with episode length controlled by the homeostatic time constant, in a way similar to polyphasic bihemispheric sleep as described above. In Fig. 5, increasing the VLPO-VLPO connection strength was shown to cause a transition from polyphasic bihemispheric sleep to unihemispheric sleep, as for fur seals moving from land to water [6, 39]. Since no other parameter changes were required, we hypothesized that fur seals achieve this readjustment by dynamically neuromodulating the VLPO-VLPO connection strength in response to environmental stimuli. The required strengthening, by a factor of somewhat more than 2.4, is reasonable given the magnitudes of typical neuromodulator effects. We have provided the first demonstration that the neuronal circuitry found in a small number of species in the laboratory, including rats, mice and cats, can account for the sleep patterns of a wide range of mammals. Furthermore, this was achieved by varying only two model parameters, with all others taking fixed values determined previously. The implications of this are far-reaching: universality of this fundamental physiological structure across diverse orders would suggest that its evolution predates mammals. This is consistent with findings showing that the monoaminergic system is phylogenetically pre-mammalian [40], and that simple organisms such as the zebrafish share homologous neuronal and genetic control of sleep and wake [41, 42]. Our results also demonstrate the inherent functional flexibility of the sleep-wake switch, which plausibly accounts for its evolutionary success in the face of diverse evolutionary pressures on the sleep-wake cycle. Physiological commonality is also of immense importance when using animals in pharmaceutical development, and for inferring the consequences for humans of animal sleep experiments and genetics. Our findings suggest that the rate of cycling between wake and sleep is largely determined by the homeostatic time constant, which is inferred to have a positive correlation with body mass. Deviations from this relationship are likely due to selective pressures such as
predation, food availability, and latitude. Consistent with this, a previous study found a scaling law of exponent 0.20±0.03 between the characteristic timescale of sleep episode durations (which followed an exponential distribution) and body mass [43]. Mean drive to the VLPO determined sleep duration, and no clear correlation was found between this parameter and body size. Experimental evidence suggests that sleep duration is dictated by an interplay between physiological and ecological pressures [44]. The primary advantage conferred by using a physiologically-based model to analyze and interpret data is the ability to relate such behavioral measures to physiology, giving new insights into how interspecies differences in sleep patterns arise. Due to the relative paucity of appropriate data, in this study we made use of all the data we could find. This meant combining results of behavioral studies with EEG studies, despite the fact that these methods likely produce slightly different estimates of sleep duration and sleep bout length. While this should not affect our main conclusions, it could fractionally shift the zones in Fig. 2. We thus emphasize the importance of experimentalists continuing to study a wide variety of mammalian species, and encourage them to report metrics such as sleep bout length, total daily sleep duration, and transition frequencies. While the exact physiological mechanism underlying the homeostatic sleep drive is unknown, some pieces of the puzzle have been identified. Growing evidence points to the role of adenosine accumulation at specific brain sites in promoting sleep. In the rat, basal forebrain adenosine concentration has been found to gradually rise and fall during wake and sleep, respectively, with heightened levels following sleep deprivation [16]. Artificial infusion of adenosine reduces vigilance [45], and the wake-promoting effects of caffeine (which is a competitive antagonist of adenosine) provide additional indirect evidence for adenosine's role in homeostatic sleep regulation. However, the pathway by which adenosine induces sleep is not altogether clear. Adenosine inhibits wake-promoting cholinergic neurons in the basal forebrain, and disinhibits the VLPO via another basal forebrain population [13, 46], yet adenosine agonists continue to promote sleep even after cholinergic neurons are lesioned [47]. Immune signaling molecules such as interleukin-1 (IL-1) and tumor necrosis factor (TNF) have also been linked to homeostatic sleep regulation [17]. Levels of TNF and IL-1 alternate with the sleep/wake cycle, and their exogenous administration induces sleepiness [48]. Furthermore, increased cytokine production during bacterial infection increases sleep duration [48], unless the IL-1 system is antagonized [49]. However, the pathway by which cytokines regulate sleep has yet to be fully elaborated. More critically, no physiological process has been demonstrated to account for the homeostatic drive's timescale, which can be up to a week in the case of chronic sleep deprivation in humans [50]. Adenosine's half-life in the blood is
only seconds [51], suggesting that clearance and production may be rate-limited further upstream. In this paper, we assumed that somnogen production and clearance rates are proportional to brain volume and surface area, respectively. The utility of this approach is that it does not require precise knowledge of the physiology underlying the homeostatic drive, because these assumptions are equally valid for a wide range of candidate mechanisms. Using them, we were able to relate scaling laws for metabolism and brain mass to the observed interspecies differences in sleep patterns. Additional data are required to ascertain whether primates follow a different scaling law from non-primates, and if so whether this is due to their greater cortical folding, cortical thickness, and neuronal density relative to most other mammals [52], which could feasibly account for geometrical differences in vascular surface area, for instance. Furthermore, additional data are required to determine whether the positive correlation between body mass and homeostatic time constant conforms to a power law. In a similar vein, a theoretical study by Savage and West [53] was able to predict an observed power-law relationship between body mass and the ratio of sleep to wake duration, based on the assumption that sleep's primary function is brain maintenance and repair, but the present derivation is the first from a dynamical sleep model. While sleep/wake patterns are controlled at a fundamental level by systems in the brainstem and hypothalamus, it is worth remembering that sleep is a multi-scale phenomenon, regulated at many levels. For example, synaptic homeostasis may contribute to the local regulation of slow-wave activity in the cortex during sleep, and could even play a role in generating the homeostatic drive to the sleep-wake switch [54, 55]. The proposed interhemispheric inhibitory connection in unihemispheric sleepers awaits experimental testing. To date, VLPO afferents have only been studied in animals that sleep bihemispherically, with the great majority of these being ipsilateral [34]. It remains to be seen whether aquatic mammals have a stronger contralateral connection. A question that naturally arises is whether an analogous connection might also be present to some degree in animals that sleep bihemispherically, and whether unihemispheric sleep could be induced by decoupling the hemispheres by other means. Acallosal humans have decreased EEG coherence between hemispheres during sleep, but do not display unihemispheric sleep [56], suggesting that hemispheric synchrony is achieved subcortically. Consistent with this, bisection of the brainstem in cats has been shown to result in all four behavioral states: bihemispheric wake, bihemispheric sleep, and unihemispheric sleep in each hemisphere [57]. This suggests that in bihemispheric sleepers, contralateral excitatory connections between wake-promoting brainstem nuclei and/or the VLPO nuclei may be important for maintaining synchrony. However, bisection of the brainstem in monkeys did not induce unihemispheric sleep [58]. The existence of several other commissures between the hemispheres, including the corpus callosum, may help to explain these results, with one able to compensate for the lack of another in some species. Animals that sleep unihemispherically appear to have evolved multiple physiological changes in parallel to enable this mode of sleep, including a narrow corpus callosum in dolphins and an absent one in birds, to reduce interhemispheric coupling [59]. In future, our model could be applied to the sleep of species from other classes, including unihemispheric sleep in reptiles and birds [8]. Furthermore, we could consider explicitly modeling the DMH pathway to explore how temporal niche (diurnal vs. nocturnal vs.
crepuscular) is determined. Extending the model to differentiate between REM and NREM sleep could provide additional insights. Using such approaches in parallel with physiological investigations could then help to elucidate the evolutionary development of the sleep-wake switch and its specializations. We begin by reviewing the sleep-wake switch model developed previously; for more details see references [18] and [19]. The model includes the MA and VLPO neuronal populations, and the parameters of the model have been rigorously calibrated by comparison to physiological and experimental data for normal human sleep and recovery from sleep deprivation [18, 19]. Nominal human parameter values are given in Table 1. Each neuronal population has a mean cell-body potential V_j relative to resting and a mean firing rate Q_j, where j = m, v for MA and VLPO, respectively, with Q_j = Q_max/{1 + exp[−(V_j − θ)/σ′]} (1), where Q_max is the maximum possible firing rate, θ is the mean firing threshold relative to resting, and σ′ is its standard deviation. Neuronal dynamics are represented by τ_v dV_v/dt = −V_v + ν_vm Q_m + D_v (2) and τ_m dV_m/dt = −V_m + ν_mv Q_v + A (3), where the weight ν_jk scales the input to population j from k, and τ_j is the decay time for the neuromodulator expressed by group j. The orexinergic/cholinergic input A to the MA group is held at a constant average level to smooth out ultradian REM/NREM dynamics [18]. The drive to the VLPO, D_v = ν_vh H + ν_vc C (4), includes homeostatic and circadian components, where ν_vh and ν_vc are constants determining the strengths of the homeostatic and circadian drives, respectively. The parameter ν_vh is positive, so that the homeostatic drive promotes sleep; this is consistent with disinhibition of the VLPO by basal forebrain adenosine [13]. The parameter ν_vc is negative, consistent with the fact that SCN activity promotes wake in diurnal animals [60]. Differences in temporal niche appear to be due in part to an inversion of this signal [60], but as noted in the Discussion, we do not attempt to model this here. The circadian drive is here assumed to be well entrained and so is approximated by a sinusoid with 24 h period, C = c_0 + cos(ωt + φ) (5), where ω = 2π/24 h⁻¹, c_0 sets the mean drive to the VLPO, and φ is the initial phase. The homeostatic sleep drive is represented by the somnogen concentration H, with its dynamics governed by χ dH/dt = −H + μQ_m (6), where χ is the homeostatic time constant and μ is a constant which determines the rate of homeostatic production. Previously, H has been considered a model for adenosine concentration in the basal forebrain [18], but this general form is equally applicable to many other candidate somnogens. As shown in earlier work [18], during normal functioning of the model, Q_m is high (∼5 s⁻¹) in wake while Q_v is low (∼0 s⁻¹) and H is increasing, whereas in sleep Q_m is low, Q_v is high, and H is decreasing. For the purposes of comparing to data, we define the model to be in wake if Q_m > 1 s⁻¹, based on comparison with experimental data for MA firing rates [61]. The model differentiates wake vs. sleep states, and we make no attempt to reproduce different sleep intensities or intra-sleep architectures between species. The mean drive to the VLPO and the homeostatic time constant χ are varied to reproduce mammalian sleep patterns, using total daily sleep duration and average sleep episode length as the metrics to calibrate against; for humans, χ has previously been estimated to be 45 h. These parameters were selected as best able to account for differences in both total daily sleep duration and sleep bout length, based on preliminary investigations and previous sensitivity analysis [18]. Data for calibration were derived from an extensive search of the literature to find studies that reported ranges for both metrics, yielding the 17 species used here. Parameter ranges that satisfied these metrics were plotted as the regions shown in Fig. 2.
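Equations (1)-(6) are compact enough to integrate directly. The sketch below uses forward-Euler integration with human-scale parameter values of the order published for this model family (the exact numbers, initial conditions, and 72 h run length are illustrative assumptions), and applies the wake criterion Q_m > 1 s⁻¹ from the text:

```python
import math
import numpy as np

# Illustrative human-scale parameters (assumed values of the order used in the
# sleep-wake switch literature; see Table 1 of the source for the actual set).
Qmax, theta, sigma = 100.0, 10.0, 3.0    # s^-1, mV, mV
nu_vm, nu_mv = -2.1, -1.8                # mV s, mutual VLPO-MA inhibition
nu_vh, nu_vc = 1.0, -2.9                 # mV nM^-1, mV
A = 1.3                                  # mV, constant cholinergic/orexinergic drive
chi = 45.0 * 3600.0                      # s, homeostatic time constant (45 h)
mu = 4.4                                 # nM s, somnogen production constant
tau = 10.0                               # s, neuromodulator decay time
c0 = 4.5                                 # mean level of the circadian sinusoid
omega = 2.0 * math.pi / (24.0 * 3600.0)  # rad/s, 24 h period

def Q(V):
    """Sigmoidal population firing rate, Eq. (1)."""
    return Qmax / (1.0 + math.exp(-(V - theta) / sigma))

def simulate(hours=72.0, dt=1.0, Vv=-10.0, Vm=0.0, H=12.0):
    """Forward-Euler integration of Eqs. (2)-(6); returns the MA rate trace."""
    n = int(hours * 3600.0 / dt)
    Qm_trace = np.empty(n)
    for i in range(n):
        C = c0 + math.cos(omega * i * dt)        # circadian drive, Eq. (5)
        D = nu_vh * H + nu_vc * C                # drive to the VLPO, Eq. (4)
        Qv, Qm = Q(Vv), Q(Vm)
        Vv += dt / tau * (-Vv + nu_vm * Qm + D)  # Eq. (2)
        Vm += dt / tau * (-Vm + nu_mv * Qv + A)  # Eq. (3)
        H += dt / chi * (-H + mu * Qm)           # Eq. (6)
        Qm_trace[i] = Qm
    return Qm_trace

qm = simulate()
awake = qm > 1.0  # wake whenever the MA firing rate exceeds 1 s^-1
```

With values of this kind the switch settles into a roughly 24 h wake-sleep alternation; shortening chi well below the monophasic boundary should instead fragment sleep into multiple daily episodes, mirroring the polyphasic regime described above.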
All of the available data were used, with one exception: additional data for non-human primates that sleep monophasically were omitted, since we are unable to derive an upper bound for the homeostatic time constant without data detailing the dynamics of recovery from total sleep deprivation in these species. Those included in the study (the slow loris and the rhesus monkey) are shown for illustrative purposes using the human-derived upper bound of 72 h. To produce Fig. 3, we add noise terms to the right-hand sides of Eqs. (2) and (3), respectively, so as to make the sleep patterns less regular. The noise is taken from a normal distribution of mean 0 and standard deviation 1, scaled by an amplitude with units of mV h^(1/2)/(ΔT)^(1/2), where ΔT is the size of the time step used in the numerical integration. Values of the parameters for the human, the elephant, and the opossum are taken from within the appropriate regions in Fig. 2. For modeling unihemispheric sleep, the above model, defined by Eqs. (1)–(6), is used identically to model the dynamics of each half of the brain, with the following modification to the VLPO differential equation: τ_v dV_v/dt = −V_v + ν_vm Q_m + ν_vv′ Q_v′ + D_v (7), where Q_v′ is the firing rate of the VLPO population in the other half of the brain, and ν_vv′ represents the strength of the contralateral inhibitory connection. Mammalian brain mass M_b has been found to follow an approximate scaling law M_b ∝ M^β (8), where M is body mass [33]. Furthermore, the power output of the brain P follows P ∝ M^γ (9). If the total rate of somnogen production in the brain is assumed to be proportional to the total power output of the brain, then the rate of somnogen production per unit volume, denoted by ψ, is ψ ∝ P/M_b ∝ M^(γ−β) (10). We assume that the total clearance rate is proportional to the working surface area, which may be glial, vascular, or otherwise. The working surface area will thus scale as M_b^α, where 2/3 ≤ α ≤ 1 depending on the brain's geometry. Therefore, the rate of somnogen clearance per unit volume, denoted by κ, is κ ∝ M_b^α/M_b = M_b^(α−1) (11). Now, if H is produced at a rate ψg, where g is a factor that depends on the state of arousal (i.e., production is expected to be higher in wake than in sleep), and is cleared at a rate kκH, where k is constant, then dH/dt = ψg − kκH (12), which can be rewritten as χ dH/dt = −H + μQ_m (13), where the homeostatic time constant is χ = 1/(kκ) ∝ M_b^(1−α) and μQ_m = ψg/(kκ). For realistic scaling exponents this yields a χ that grows with brain mass while μ varies comparatively weakly across species, justifying the approximation of holding μ constant while varying χ throughout this study. | Introduction, Results, Discussion, Methods | Mammalian sleep varies widely, ranging from frequent napping in rodents to consolidated blocks in primates and unihemispheric sleep in cetaceans. In humans, rats, mice and cats, sleep patterns are orchestrated by homeostatic and circadian drives to the sleep-wake switch, but it is not known whether this system is ubiquitous among mammals. Here, changes of just two parameters in a recent quantitative model of this switch are shown to reproduce typical sleep patterns for 17 species across 7 orders. Furthermore, the parameter variations are found to be consistent with the assumptions that homeostatic production and clearance scale as brain volume and surface area, respectively. Modeling an additional inhibitory connection between sleep-active neuronal populations on opposite sides of the brain generates unihemispheric sleep, providing a testable hypothetical mechanism for this poorly understood phenomenon. Neuromodulation of this connection alone is shown to account for the ability of fur seals to transition between bihemispheric sleep on land and unihemispheric sleep in water. Determining what aspects of mammalian sleep patterns can be explained within a single framework, and are thus universal, is essential to understanding the evolution and function of mammalian sleep. This is the first demonstration of a single model reproducing sleep patterns for multiple different species. These wide-ranging findings suggest that the core
physiological mechanisms controlling sleep are common to many mammalian orders , with slight evolutionary modifications accounting for interspecies differences . | The field of sleep physiology has made huge strides in recent years , uncovering the neurological structures which are critical to sleep regulation ., However , given the small number of species studied in such detail in the laboratory , it remains to be seen how universal these mechanisms are across the whole mammalian order ., Mammalian sleep is extremely diverse , and the unihemispheric sleep of dolphins is nothing like the rapidly cycling sleep of rodents , or the single daily block of humans ., Here , we use a mathematical model to demonstrate that the established sleep physiology can indeed account for the sleep of a wide range of mammals ., Furthermore , the model gives insight into why the sleep patterns of different species are so distinct: smaller animals burn energy more rapidly , resulting in more rapid sleep–wake cycling ., We also show that mammals that sleep unihemispherically may have a single additional neuronal pathway which prevents sleep-promoting neurons on opposite sides of the hypothalamus from activating simultaneously ., These findings suggest that the basic physiology controlling sleep evolved before mammals , and illustrate the functional flexibility of this simple system . | marine and aquatic sciences/evolutionary biology, biophysics/theory and simulation, computational biology/computational neuroscience, physiology, evolutionary biology, computational biology/systems biology | null |
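The Methods text in the row above describes adding white noise whose per-step standard deviation scales as 1/ ( ΔT ) ^(1/2) to the right-hand sides of the model equations. As a minimal illustrative sketch ( not the paper's actual Eqs ( 1 ) – ( 6 ) , whose symbols were lost in extraction ) , the snippet below integrates a generic relaxation variable with the Euler–Maruyama scheme and checks that this 1/sqrt(ΔT) scaling makes the stationary variance independent of the step size. All function names and parameter values are assumptions for illustration only.

```python
import math
import random

def simulate_ou(tau=1.0, sigma=1.0, dt=0.01, t_end=2000.0, seed=1):
    """Euler-Maruyama for dV/dt = -V/tau + noise, where the noise added
    to the right-hand side has std sigma/sqrt(dt) per step, mirroring
    the Methods description above (std scaling as 1/sqrt(delta T))."""
    rng = random.Random(seed)
    v, t = 0.0, 0.0
    samples = []
    while t < t_end:
        noise = sigma / math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v += dt * (-v / tau + noise)  # explicit Euler step with noisy RHS
        t += dt
        samples.append(v)
    return samples

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Analytically the stationary variance of this process is sigma^2*tau/2 = 0.5,
# independent of dt, precisely because the noise std carries the 1/sqrt(dt).
var_fine = variance(simulate_ou(dt=0.01)[10_000:])   # discard the transient
var_coarse = variance(simulate_ou(dt=0.05)[2_000:])
```

Because the noise term on the right-hand side carries a 1/sqrt(dt) factor , the increment it contributes after multiplying by dt has standard deviation sigma*sqrt(dt) , so refining the integration step changes neither the variance of the trajectory nor the irregularity of the simulated sleep patterns.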
921 | journal.pcbi.1000731 | 2010 | A Stevedore's Protein Knot | In the last decade , our knowledge about structure and characteristics of proteins has considerably expanded ., The ability of proteins of small and medium size to fold into native structures is attributed to a minimally frustrated free energy landscape , which allows for fast and robust folding 1 , 2 ., In recent years , however , a new class of proteins with knotted topologies emerged 3 , 4 , 5 , 6 , 7 that broadened the scope of possible folding landscapes ., Notwithstanding our daily experiences with shoelaces and cables , knots are mathematically only properly defined in closed loops , and not on open strings ., In proteins , however , this issue can be resolved by connecting the termini ( which are usually located on the surface ) by an external loop 3 , 4 , 7 ., This approach actually corresponds to a more practical definition of knottedness in which we demand that a knot remains on a string and tightens when we pull on both ends ., After such closure , mathematical algorithms like the Alexander polynomial 8 can be employed to determine the type of knot ( a topological invariant ) ., Knots are usually classified according to the minimum number of crossings in a projection onto a plane ., Most knotted proteins discovered to date are quite simple ., Out of the seven distinctly knotted folds discovered to date ( see Table 1 ) , four are simple trefoil knots ( 31 ) with 3 crossings , two are figure-eight knots ( 41 ) with 4 crossings , and only one fold is made up of five crossings ( 52 ) ., Most of the knots in protein structures , however , were initially undetected from their structures since finding them by visual inspection is fairly hard , requiring a computational approach ., Even though some pioneering experiments 9 , 10 , 11 , 12 , 13 have begun to shed some light on how these peculiar structures fold and unfold , still little is known about the exact mechanisms involved ., Recently ,
this subject was addressed with simulations of structure-based coarse-grained models 14 , 15 that suggested for the first time potential folding mechanisms and unfolding pathways 14 , 15 , 16 , 17 , 18 , 19 for knotted proteins ., It has been suggested that folding of knotted proteins may proceed through an unfolded but knotted intermediate by simulations which include non-native contacts 14 , or by formation of slipknot conformations 15 ( segments containing a knot which disappears when the protein as a whole is considered ) in conjunction with partial folding and refolding ( backtracking ) events 20 ., The slipknot conformations allow the protein to overcome topological barriers in the free energy landscape which might otherwise lead to kinetic traps 21 , 22 , 23 ., In a more general context , it is also intriguing to ask if the folding of complex knots can be reconciled with the folding funnel hypothesis 1 , 2 or nucleation mechanisms 24 ., In this paper we present the most complex , and also the smallest , knotted proteins known to date ., To shed some light on potential folding routes of the former , we undertook molecular dynamics simulations with a coarse-grained model which only includes native contacts ., Even though it is intrinsically difficult to fold such a large protein with a simple structure-based model , a small fraction of our trajectories ( 6 out of 1000 ) folded into the knotted native state ., Based on these simulations we propose a new mechanism by which this complex protein knot may fold in a single flipping movement ., The proposed mechanism differs from mechanisms suggested before as it involves the flipping of a large loop over a mostly folded structure rather than folding via mostly unstructured knotted intermediates 14 ., It is difficult to imagine how proteins can actually fold into topologically elaborate structures like the 61 knot displayed in fig .
1a ., Complex knots , however , are not necessarily difficult to tie ., There are actually quite a few rather complicated knots , including the Stevedore knot in DehI , which can be transformed into unknots by removing a single crossing ., Likewise , these knots can typically be formed in a single movement which simplifies the folding of these peculiar structures considerably ., Recently , Taylor 31 predicted that complex protein knots discovered in the future will most likely belong to this class which is corroborated by the discovery of the Stevedore knot in DehI ., As indicated in 31 , knots of arbitrary complexity can be obtained by twisting a loop in a string over and over again before threading one end through the loop ., Even though this way of creating knots may appear as an attractive protein folding scenario due to its simplicity , our results suggest a somewhat different potential mechanism , which is able to reduce topological constraints and fold DehI in a single movement ., Two loops are crucial for the formation of the 61 knot in DehI: a smaller loop which we call S-loop containing amino acids 64 to 135 and a slightly bigger loop termed B-loop ranging from amino acid 135 to 234 ., Note that the latter includes the proline rich unstructured segment mentioned earlier ., The analysis of the crystallographic B-factor ( see fig . 
S1 ) reveals that the center of the S-loop , the beginning and the end of the B-loop , as well as the unstructured proline rich segment , are particularly mobile ., In addition , a very mobile unstructured segment around amino acid 240 provides additional flexibility to the C-terminus ., Note that if the B-loop is flipped over to the other side of the protein , the Stevedore knot disentangles in a single step ., In an attempt to elucidate the folding route of DehI , we undertook molecular dynamics simulations with a coarse-grained structure based Go-model 1 , 32 , 33 of DehI which does not include non-native interactions ., With this model we were able to fold six trajectories ( out of 1000 ) into the 61 knot ( with more than 90% of native contacts ) ., We emphasize that this number should not be associated with experimental folding rates ., Folding large knotted proteins with a generic structure-based model without non-native interactions is extremely difficult as the protein has to undergo a series of twists and threading movements in correct order while collapsing ., As demonstrated in Ref ., 14 the addition of non-native interactions will increase the folding rate substantially , however , at the cost of introducing a bias ., There is also a strong dependence of successful folding events on protein size ., For example , in Ref ., 15 a rather simple and short trefoil knot in an RNA methyltransferase , folded successfully in only 2% of all cases with the same underlying model ., On the other hand we succeeded in folding 2efv with 100% success rate 34 ., For comparison the number of amino acids in 2efv is roughly two times smaller than the number of amino acids in the methyltransferase , which again is roughly two times smaller than the number of amino acids in the dehalogenase ., While acknowledging such limitations of coarse-grained models , we are still confident in deducing a potential folding pathway from the analysis of the successful trajectories , in 
particular because all six trajectories are very similar ., Fig . 2 shows an actual folding trajectory ., The S-loop is colored red , the B-loop green and the C-terminus blue ., Two very similar potential folding routes were observed in our simulations ., In both routes , the two loops form in the beginning by twists ( fig . 2a ) of the partially unfolded protein such that B- and S-loop are aligned ( fig . 2b ) ., In the first route , the C-terminus is threaded through the S-loop ( which needs to twist once again – fig . 2c ) before the B-loop flips over the S-loop ., In the second route the steps are interchanged: the B-loop flips over the S-loop and the C-terminus ( shaded in light blue in fig . 2c ) ., A figure-eight ( 41 ) knot forms as a result before the C-terminus manages to thread through the S-loop to reach the native state ., In both cases , the C-terminus moves through the S-loop via a slipknot conformation ( fig . 2c ) ., Note that loop flipping and threading are typically accomplished with backtracking events 20 for topologically frustrated proteins 21 ., Similar conformational changes during folding mechanisms have been observed in other topologically non-trivial structures ., The rotation of a proline rich loop was also observed in a big slipknotted protein , Thymidine Kinase 15 ., Slipknot intermediates appear in the folding mechanism for the trefoil knot in Methyltransferase 15 as well ., Unfortunately , the size and complexity of the protein does not allow us to study the full thermodynamic process and reconstruct the free energy profile along a reaction coordinate ., However , kinetic data allow us to distinguish some characteristic times from which we can deduce a likely folding mechanism ., In fig . 
3 we investigate the rate-limiting step in the folding of the Stevedore knot ., On the left panel , we plot the time it takes to thread the C-terminus through the S-loop ( tc ) against the time it takes to flip the B-loop over the S-loop ., Solid symbols are trajectories associated with route I ( 0→61 ) , and open symbols are trajectories associated with route II ( 0→41→61 ) ., In the first pathway , the flipping of the B-loop takes longer than the threading of the C-terminus in two out of three cases ., In the second pathway ( and the third trajectory associated with route I ) , the threading of the C-terminus through the S-loop occurs shortly after the flipping of the B-loop ., In both scenarios , the flipping of the B-loop over the S-loop is the rate-limiting step ., Once this is achieved , the protein is essentially folded ( fig . 3b ) ., The flipping of the B-loop can therefore be associated with an entropic barrier in the folding free energy ., From an analysis of the order at which contacts occur ( fig . S2 ) it is possible to deduce the occurrence of a first small barrier , which is associated with the formation and twisting of B- and S-loop before the B-loop flips ., Hence , we believe a three-state folding scenario is more likely than a two-state scenario ., In order to study the unfolding pathway , we raised the temperature above the folding temperature ., Even though some native contacts are lost at higher temperatures , the global mechanism is by and large reversed as compared to the folding routes ( see fig . S3 ) ., To check how topological complexity restricts the free energy landscape the protein topology was changed from 61 to 41 ( by eliminating a crossing , as previously performed with a different protein in Ref . 
35 ) ., This slight modification increases the folding ability of DehI substantially to 11% , suggesting that complexity of the knot is an important parameter in determining the foldability of a protein ., Our analysis of the Protein Data Bank revealed the most complex protein knot in α-haloacid Dehalogenase DehI and the shortest ( so far unclassified ) knotted protein known to date ., This discovery underscores that knots in the backbone of proteins are significant structural motifs that appear at different levels of protein complexity and might offer new insight in the understanding of protein folding mechanisms ., The finding of the smallest knotted protein ( which is almost half the size of all previously known protein knots ) may eventually enable us to study the folding of knotted proteins with more sophisticated all-atom simulations ., We investigated the folding route of the most topologically complex protein knot with molecular dynamics simulations of a structure-based model ., The analysis of successful folding trajectories suggests that the Stevedore ( 61 ) knot in DehI folds via a simple mechanism: a large twisted loop in the protein flips over another previously twisted loop , thus essentially creating the six-fold knot in a single movement ., Thus , the topological complexity of the Stevedore knot in DehI can be overcome and explained in the context of classical theories of protein folding 1 , 2 , 36 ., The flipping of a loop over a mostly folded structure constitutes a new scenario in the folding of knotted proteins which differs , e . g . 
, from the folding of knots via partially unstructured knotted intermediates 14 ., Our mechanism also includes previously observed elements like the threading of slipknot conformations through loops 15 ., These mechanisms can be essential for folding into topologically challenging structures and provide a general framework for the understanding of knotted proteins ., The programs used to detect knots are identical to those used in our previous work 7 ., To determine whether or not a structure is knotted , we reduce the protein to its backbone , and draw two lines outward starting at the termini in the direction of the connection line between the center of mass of the backbone and the respective ends ., The knot type is determined by computing the Alexander polynomial , which is also implemented on our protein knot detection server ( http://knots.mit.edu ) 37 ., For a detailed discussion of our methods , the reader is referred to Ref . 7 ., Note that this class of structure-based models was not created with protein knots in mind and is very prone to fold into topologically frustrated states ., Even though Go-models can be adapted to enhance the formation of knots 14 , we refrained from this approach because we did not want to impose any bias ., We applied a structure-based coarse-grained model with only native contacts 32 , 33 ., In total we folded 1000 trajectories of DehI at temperature T = 0 . 48 out of which 6 folded into a 61 knot ., Furthermore , we observed 737 unknotted conformations , 85 trefoil ( 31 ) , 167 figure-eight ( 41 ) and five 52 knots ., Higher and lower temperatures resulted in a lower rate of 61 formation ., After the structure was simplified to a figure-eight knot , 11% of all configurations folded into the native state ( with more than 95% native contacts .
) | Introduction, Results, Discussion, Methods | Protein knots , mostly regarded as intriguing oddities , are gradually being recognized as significant structural motifs ., Seven distinctly knotted folds have already been identified ., It is by and large unclear how these exceptional structures actually fold , and only recently , experiments and simulations have begun to shed some light on this issue ., In checking the new protein structures submitted to the Protein Data Bank , we encountered the most complex and the smallest knots to date: A recently uncovered α-haloacid dehalogenase structure contains a knot with six crossings , a so-called Stevedore knot , in a projection onto a plane ., The smallest protein knot is present in an as yet unclassified protein fragment that consists of only 92 amino acids ., The topological complexity of the Stevedore knot presents a puzzle as to how it could possibly fold ., To unravel this enigma , we performed folding simulations with a structure-based coarse-grained model and uncovered a possible mechanism by which the knot forms in a single loop flip . 
| Knots are ubiquitous in many aspects of our life , but remain elusive in proteins ., The multitude of protein structures archived in the Protein Data Bank can be grouped into several hundred patterns , but only a handful are folded into knots ., Combing through the recently added structures we found several novel knotted proteins ., A microbial enzyme that catalyzes the breakdown of pollutants is the most complex protein knot encountered so far ( similar to a knot used by stevedores for lifting cargo ) ., The smallest knotted protein on the other hand consists of only 92 amino acids ., The existence of these complex motifs demonstrates that the ability of self assembly goes far beyond normal expectations ., Aided by computer simulations we present evidence which suggests that the Stevedore protein knot , despite its topological complexity , may actually form in a single flipping movement . | computational biology/molecular dynamics, computational biology/macromolecular structure analysis | null |
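The Methods of row 921 close each open backbone by drawing lines outward from the termini along the direction from the backbone's center of mass , and only then classify the knot with the Alexander polynomial. Below is a minimal sketch of just this closure step , with hypothetical function names and an arbitrary extension distance; the Alexander polynomial computation ( implemented on the authors' server at http://knots.mit.edu ) is not reproduced here.

```python
def center_of_mass(points):
    # Unweighted centroid of a list of (x, y, z) tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def close_chain(backbone, reach=100.0):
    """Close an open backbone (list of (x, y, z) C-alpha coordinates) by
    extending each terminus away from the backbone's center of mass, as
    described in the Methods of row 921.  'reach' is an arbitrary distance
    chosen large enough to place the closure points outside the structure."""
    com = center_of_mass(backbone)
    closed = list(backbone)
    for idx, insert_front in ((0, True), (-1, False)):
        term = backbone[idx]
        d = tuple(term[i] - com[i] for i in range(3))
        norm = sum(c * c for c in d) ** 0.5
        ext = tuple(term[i] + reach * d[i] / norm for i in range(3))
        if insert_front:
            closed.insert(0, ext)
        else:
            closed.append(ext)
    # Joining the two distant extension points (e.g., by a large external
    # arc) yields a closed loop whose knot type can then be classified.
    return closed

loop = close_chain([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                    (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)])
```

With the closure points placed well outside the structure , the loop's topology is insensitive to exactly how they are joined , which is what makes this closure a practical definition of knottedness for open chains , as the Introduction of row 921 notes.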
1,441 | journal.pcbi.1000560 | 2,009 | Interactions between Connected Half-Sarcomeres Produce Emergent Mechanical Behavior in a Mathematical Model of Muscle | Many biological systems are irreducible meaning that they have more complicated properties than the structures of which they are composed ., Detailed understanding of a complete system therefore requires knowledge both about how its individual components function and about how those components interact ., A property of the complete system is described as emergent if it arises because of interactions between components and is not a property of a single component in isolation ., Studying the emergence of new properties is an important aspect of modern systems biology and the approach has produced important new insights into many living systems 1 ., Although systems-based models of muscle are now being developed 2 , alternative reductionist models have dominated quantitative muscle biophysics for the last 60 years ., The main strategy has been to try and explain the properties of an entire muscle fiber as the scaled behavior of a single half-sarcomere ., This technique was pioneered by A . F . 
Huxley in 1957 3 and it has been outstandingly successful ., For example , reductionist half-sarcomere theories can explain virtually all of the mechanical effects that occur immediately after a muscle fiber is subjected to a very rapid length or tension perturbation 4 , 5 ., Muscle fibers do however exhibit some mechanical properties that are not immediately consistent with the expected behavior of a single half-sarcomere ., The goal of the present work was to determine whether one specific experimental effect might be an emergent property of a group of half-sarcomeres as opposed to an inherent property of a single one ., The analysis focused on the tension responses produced by stretching a chemically permeabilized rabbit psoas muscle fiber ., If this type of preparation is stretched when it is inactive , the force response is relatively small and probably largely attributable to the elongation of titin molecules 6 ., When the preparation is activated and then lengthened , the stretch response contains an additional , larger , component reflecting the displacement of populations of attached cross-bridges away from the distributions that they adopted during the isometric phase of the contraction ., If the filaments keep moving at the same rate for a sufficiently long time , the standard mathematical theories ( for example , 3 ) predict that cross-bridge populations will reach new steady-state distributions dictated by the strain-dependence of the myosin rate transitions and the velocity of the imposed length change 7 ., If steady-state is indeed achieved , the cross-bridge population distributions will remain stable and the force due to attached cross-bridges will therefore remain constant ., This simple analysis implies that titin molecules are the only molecular structures inside the half-sarcomere that can produce a force that increases during the latter stages of an imposed stretch ., If titin behaves as an elastic spring that is independent of the level of 
Ca2+ activation , the rate at which force rises late in an imposed stretch should therefore be the same in maximally-activated fibers as it is in relaxed fibers ., In fact , force rises >3-fold faster in activated rabbit psoas fibers than it does in the same fibers when they are inactive 7 ., One possible explanation for this effect is that the properties of molecules within each half-sarcomere change when a muscle is stretched while it is activated ., For example , titin filaments could become stiffer , or the cross-bridge populations could fail to reach steady-state during a prolonged movement ., Both of these effects could potentially reflect force-dependent protein-protein interactions 8 ., A second possible explanation is that the half-sarcomeres continue to operate as they did before the stretch and that the measured experimental behavior is an emergent property of a collection of heterogeneous half-sarcomeres ., These explanations are not mutually exclusive so it is also possible that both effects contribute to the activation dependence of the latter stages of the stretch response ., An argument against variable titin properties being the sole explanation is that the magnitude of the Ca2+-dependent stiffening required to explain the behavior observed in psoas fibers ( ∼300% increase in titin stiffness ) is much larger than that ( ∼30% increase in stiffness ) observed in experiments that have specifically investigated titin's Ca2+-sensitivity 9 ., The idea that the activation-dependence of the latter stages of the stretch response could reflect emergent behavior of a collection of half-sarcomeres might be inferred from a number of previous reports 10–12 but it does not seem to have been explicitly stated or analyzed in quantitative detail before ., This paper presents a mathematical model that was developed to investigate the potential emergence of new mechanical behavior in a system composed of multiple half-sarcomeres ., Detailed computer simulations show that
the model can reproduce the activation dependence of the latter stages of the stretch response without requiring that titin filaments stiffen when the Ca2+ concentration rises ., The stretch response of a fast mammalian muscle fiber may therefore be an irreducible property of the complete cell ., Fig 1 shows experimental force records for a chemically permeabilized rabbit psoas fiber subjected to a ramp lengthening followed by a ramp shortening in four different pCa solutions ., The rate at which force rose during the latter stages of the stretch increased with the level of Ca2+ activation ., Data from 5 fibers showed that the slope ( estimated by linear regression ) of the tension response during the last one-third of the stretch was 3 . 26±0 . 87 ( SD ) times greater ( t-test for value greater than unity , p<0 . 001 ) in pCa ( = −log10Ca2+ ) 4 . 5 solution ( maximal Ca2+ activation ) than it was in pCa 9 . 0 solution ( minimal Ca2+ activation ) ., As discussed in the Introduction , the increased slope in the pCa 4 . 5 condition is not consistent with the expected behavior of a single population of cycling cross-bridges arranged in parallel with an elastic component that has properties that are independent of the level of activation ., Computer simulations were performed to test the hypothesis that the activation dependence of the latter stages of the force response may be an emergent property of a collection of half-sarcomeres ., The model is summarized in Fig 2 and explained in detail in Materials and Methods ., Parameters defining the passive mechanical properties of the half-sarcomeres ( Table 1 , Column 3 ) were determined by fitting Eq 8 to an experimental record measured in pCa 9 . 0 solution ., Multidimensional optimization procedures were then used to adjust the other parameters defining the model's behavior in an attempt to fit the simulated force response to the experimental record measured in pCa 4 .
5 solution ., The best-fitting force response obtained in this manner is shown in red in the top panel in Fig 3 ., The corresponding model parameters are listed in Table 2 ( Column 3 ) ., The blue lines in the top panel in Fig 3 show the force responses produced by a single half-sarcomere framework with the same model parameters ., The simulated force records for the single and multi-half-sarcomere frameworks are the same for the pCa 9 . 0 condition ( where there are no attached cross-bridges ) but different for the pCa 4 . 5 condition ., Note in particular that the multi-half-sarcomere framework predicts a smaller short-range force response and a tension that rises more steeply during the latter stages of the stretch ., This progressively increasing tension is not a property of a single activated half-sarcomere in these simulations and therefore reflects interactions that occur between half-sarcomeres; it is an emergent property of the multi-half-sarcomere framework ., The red lines in the bottom panel in Fig 3 show the length traces for the 300 half-sarcomeres in the larger framework superposed ., ( The traces are shown in more detail in Supporting Information Figure S1 . 
), Although individual half-sarcomeres followed length trajectories defined by Eq 2 , the behavior of the overall system is chaotic ., During the stretch , for example , some half-sarcomeres are lengthening , some are shortening , and some remain nearly isometric ., The behavior of each pair of half-sarcomeres on the other hand is more orderly ., Indeed , at any given time-point in the simulation , all the full sarcomeres had virtually the same length ., This is because the inter-myofibrillar links ( Fig 2 ) were sufficiently stiff to keep the Z-disks in register during the activation ., The effect is demonstrated in Fig 4B where the computer-rendered striation patterns show that the Z-disks ( drawn in magenta ) are always aligned whereas the M-lines ( drawn in yellow ) are frequently displaced from the middle of the sarcomere ., Z-disk alignment is no longer maintained in the simulations if the inter-myofibrillar links are ablated in silico by setting kim equal to zero ( Fig 4C ) ., In this situation , mean sarcomere length averaged perpendicular to the filaments for the different half-sarcomere pairs ( green lines in Fig 4A ) is no longer constant although mean sarcomere length averaged parallel to the filaments is always the same in the different myofibrils ., ( This has to be the case because all the myofibrils have the same length and contain the same number of sarcomeres . ), A movie showing how the computer-generated striation patterns change during the length perturbations is provided as Supporting Information Video S1 ., Interestingly , the predicted isometric force value is lower for the simulations with kim equal to zero ., The area under an xy-plot of force against length during the stretch ( not shown ) is also lower indicating that the framework simulated without inter-myofibrillar links would absorb less energy during an eccentric contraction ., This mimics experimental results obtained by Sam et al .
13 using muscles from desmin-null mice ., Fig 5 shows the effects of changing the size of the model framework and the numerical value of a key model parameter ., All simulations were performed with the parameters listed in the third columns of Tables 1 and 2 except for Fig 5C where α ( Eq 7 ) was varied as shown ., Increasing nhs ( the number of half-sarcomeres in each myofibril ) from 1 to 10 in a framework with 6 myofibrils markedly improved the fit to the experimental record ., The additional improvement gained by further increasing nhs to 50 was more modest ., When there were already 50 half-sarcomeres in each myofibril , increasing the number of myofibrils did not dramatically improve the fit during the stretch response ( Fig 5B ) but it did help to stabilize isometric force before the stretch ., This is at least partly because the presence of inter-myofibrillar links stabilized sarcomere ( but not half-sarcomere ) lengths ( Fig 4B and C ) ., The effects of varying α to alter the amount of half-sarcomere heterogeneity in the largest framework are summarized in Fig 5C ., Note that increasing α beyond 0 . 1 did not substantially change the fit to the experimental data and that the simulated response for the framework with 300 half-sarcomeres and α equal to zero was not different from that of the single half-sarcomere framework with the same model parameters ., This second point demonstrates that a fiber system does not exhibit emergent properties if the half-sarcomeres of which it is composed are all identical ., This informal sensitivity analysis suggests that the activation dependence of the latter stages of the stretch response is more likely to reflect inhomogeneity between half-sarcomeres along a myofibril than inhomogeneity between different myofibrils ., This prediction is based on the computed results shown in Fig 5A and B . 
Increasing the number of half-sarcomeres from 1 to 50 in a framework with 6 myofibrils markedly changed the slope of the force response during the second half of the stretch ( Fig 5A ) ., In contrast , increasing the number of myofibrils in a framework with 50 half-sarcomeres ( Fig 5B ) reduced the magnitude of oscillations in the computed force records but did not substantially alter the underlying trend of the responses ., The value of the parameters defining Fpas ( Table 1 , Column, 3 ) were determined by fitting Eq 8 to force records measured for a fiber in pCa 9 . 0 solution during small dynamic stretches ( 4% muscle length ) imposed from a starting sarcomere length of ∼2600 nm ., It is therefore possible that the calculated parameters overestimate the isometric passive tension that would have been measured if the half-sarcomeres were stretched more than 4% ., ( The passive length tension relationship was not measured in the original experiments 7 so the relevant experimental data were not available for comparison . ), To eliminate any possibility that the tension response during the latter stages of an imposed stretch is only activation-dependent in the current simulations because the titin filaments are unrealistically stiff at long lengths , additional calculations were performed with a linear passive component ., The parameters defining Fpas in this case ( Table 1 , Column, 4 ) were determined by fitting Eq 9 to the same pCa 9 . 0 force record ., Passive force calculated in this way did not reach the maximal Ca2+-activated value until the sarcomeres were stretched beyond 3500 nm ., The best-fitting force simulations deduced by multi-dimensional optimization with the linear titin component are shown in red in Fig 6A ., While the simulation of the active fiber does not match the experimental data as well as the simulations ( Fig, 3 ) performed with the non-linear titin component ( r2\u200a=\u200a0 . 93 as opposed to r2\u200a=\u200a0 . 
98 ) it does reproduce the activation-dependence of the slope of the force response during the latter stages of the stretch ., Rat soleus fibers exhibit a stretch response that is qualitatively different from that produced by rabbit psoas fibers 14 ., Instead of force rising during the latter stages of the movement , force tends to peak and then fall slightly to a plateau that is maintained as long as the stretch persists ., ( A similar plateau is observed when frog tibialis anterior fibers are stretched 15 ) ., Although the shape of the response seems to imply that passive titin properties are less important in rat soleus fibers than they are in rabbit psoas fibers , Campbell & Moss 14 showed that a single half-sarcomere model produced the best-fit to the real Ca2+-activated data when the cross-bridges were arranged in parallel with a titin spring that was ∼3 times stiffer than that measured experimentally in pCa 9 . 0 solution ., The behavior of the soleus fibers was thus very similar to that described here for psoas preparations ., This suggests that simulations performed with a multi-half-sarcomere framework might also produce a better fit to the mechanical data from soleus fibers than a model based on a single half-sarcomere ., Fig 6B shows the results of calculations performed to test this hypothesis ., Parameter values for the simulations are listed in Tables 1 and 2 ( Column 5 in both cases ) ., The predictions for the multi-half-sarcomere framework fit the experimental data well ( r2\u200a=\u200a0 . 
97 ) and , as in the case of the simulations of psoas fiber data , predict a lower isometric force and a less prominent short-range response than the simulations performed with a single half-sarcomere framework and otherwise identical model parameters ., This work provides important new insights and introduces novel simulation techniques but the idea that the mechanical properties of a muscle fiber might be influenced by individual half-sarcomeres behaving in different ways is not new 15 , 20–22 ., One of the controversies in the field is whether sarcomeres ‘pop’ , that is , extend rapidly to beyond filament overlap 12 ., This behavior can be predicted from an analysis of the steady-state active and passive length tension relationships but it has not been observed in some experiments that have specifically investigated the issue in small myofibrillar preparations 23 , 24 ., Other data 25 suggest that some sarcomeres in a sub-maximally activated myofibril ‘yield’ and others ‘resist’ during a stretch ., The present simulations suggest that there are at least two mechanisms that may reduce the likelihood of ( but perhaps not entirely eliminate ) popping under normal physiological conditions ., First , attached cross-bridges in half-sarcomeres that are starting to elongate will be stretched thereby producing increased force ., If the total length of the muscle fiber is fixed , other half-sarcomeres in the same myofibril will have to shorten and force will therefore drop in these structures ., The changes in the forces produced by cross-bridges in the half-sarcomeres that moved are transient because they will dissipate as the myosin heads progress through their normal cycle ., However , while they exist , they act in such a way as to reduce the development of additional heterogeneity ., In vivo , this effect could be enough to prevent the cell from being structurally damaged before it relaxes at the end of the contraction and passive mechanical properties are able to 
restore the fiber's prior arrangement ., Second , forces in molecules that link half-sarcomeres will help to preserve sarcomere length uniformity ., In the current simulations , some of these molecules are represented mathematically by linear springs that connect Z-disks in adjacent myofibrils ., It was particularly interesting to discover that the in silico ‘knock-out’ of inter-myofibrillar connections ( kim = 0 , Fig 4 ) reproduced the functional effects observed in muscles from desmin-null mice - lower isometric force and decreased energy absorption during imposed stretches 13 ., One of the many interesting features of the second phase of the stretch response of activated muscle fibers is that it can be quite variable ., Fig 6 , for example , shows that it is markedly different in fast and slow mammalian fibers under very similar experimental conditions ., Getz et al . 11 reported that differences can also be observed within fast fibers from rabbit psoas muscle ., Their manuscript notes that the “continued force rise after the critical stretch was sometimes but not always present in our data” ., ( It is important to note that the stretches used by Getz et al . were up to 25 times faster than the ones simulated in the present work . A slow rise in force during the latter stages of the stretch was always observed in the experiments with psoas fibers that are simulated here 7 . ), Getz et al .
suggested that the variable nature of their measured responses might reflect different amounts of half-sarcomere heterogeneity in their preparations ., Their conclusion is supported by the present simulations ., Half-sarcomere heterogeneity has also been suggested as a potential explanation for residual force enhancement - the augmented force that persists long after a stretch and hold imposed during a maximal contraction 26 ., The current simulations support this hypothesis too because Edman & Tsuchiya 10 showed that the size of the enhancement correlates with the magnitude of the second phase of the force response in the stretch that produces it ., However , half-sarcomere heterogeneity may not be the only mechanism responsible for residual force enhancement because Edman & Tsuchiya 10 also showed that there could be a small residual enhancement when the conditioning stretch didnt produce a measurable second phase force response ., Precise measurements of the mechanical properties of single muscle fibers are often performed using a technique known as sarcomere length control 27 , 28 ., This is an important experimental approach but it should be made clear that the technique does not eliminate the potential emergence of new properties due to the collective behavior of half-sarcomeres ., This is because sarcomere length control dictates the mean sarcomere length in a selected region of the muscle fiber rather than the lengths of the individual half-sarcomeres ., It is thus the in vitro equivalent of the computer simulations discussed in this work in which xT , the total length of a defined group of half-sarcomeres , is the controlled variable ., Many biologists probably regard it as axiomatic that the properties of a muscle fiber vary along its length ., After all , organelles , such as nuclei and mitochondria , are localized structures that are not uniformly ‘smeared’ throughout the cell ., There are , of course , other sorts of non-uniformity in muscle cells as 
well ., There is good evidence to suggest , for example , that eye muscle fibers express different myosin isoforms along their length 29 and that sarcomeres near the end of a fiber are shorter than those near the middle 30 ., Many quantitative models of muscle on the other hand overlook variability within muscle fibers and attribute the mechanical properties of an experimental preparation to the scaled behavior of a single population of cycling cross-bridge that is sometimes arranged in parallel with a passive mechanical component ., These reductionist theories have been outstandingly successful at explaining the behavior observed in some specific experiments 31 but the simulations presented in this work suggest that more realistic multi-scale modeling may be required to fully reproduce the behavior of whole muscle fibers ., Multi-scale modeling may be particularly helpful in studies of muscle disease ., It is well known , for example , that muscle function is compromised in muscular dystrophy where the primary defect occurs in a large structural protein 32 ., Defects in such proteins will affect the way that forces are transmitted between and around myofibrils which , as shown in Fig 4 , may significantly alter a muscles mechanical behavior ., This concept is also supported by experimental data ., Shimamoto et al . 25 recently showed , for example , that modifying Z-disk structure with antibodies can influence the emergent properties of a myofibrillar preparation by altering the way that half-sarcomeres interact ., Finally , the simulations shown in Fig 5C demonstrate that the relatively small amount of half-sarcomere heterogeneity produced by increasing α from 0 . 0 to 0 . 
1 dramatically alters the mechanical properties of the muscle framework ., Further increases in α produce more half-sarcomere heterogeneity but do not substantially alter the predicted force response ., This is a very interesting finding because it implies that the mechanical properties of a muscle that was originally perfectly uniform would change markedly if localized structural and/or proteomic abnormalities developed as a result of a disease process and/or unusual mechanical stress ., The mechanical properties of a muscle cell that was already slightly heterogeneous on the other hand would not be substantially altered by additional irregularities ., This could be a significant advantage for a living cell that is continually repairing itself and which is potentially subject to damaging stimuli and large external forces ., Muscle cells may have evolved to become fault-tolerant systems ., The mathematical modeling presented in this work suggests that muscle fibers may exhibit emergent mechanical properties that reflect interactions between half-sarcomeres ., If this is indeed the case , systems-level approaches will be required to explain how known proteomic and structural heterogeneities influence function in normal and diseased tissue ., Animal use was approved by the University of Wisconsin-Madison Institutional Animal Care and Use Committee ., All of the experimental records shown in this work were collected by the author in Dr .
Richard Mosss laboratory at the University of Wisconsin-Madison ., Full details of the experimental procedures and some of the records have already been published 7 , 14 ., Animal use was approved by the relevant Institutional Animal Care and Use Committee ., The structural framework studied in this work ( Fig, 2 ) consisted of nm parallel chains of myofibrils , each of which was itself composed of nhs half-sarcomeres arranged in series ., Every second Z-line was linked to the corresponding Z-line in each of the other myofibrils by a linear elastic spring of stiffness kim ., These connections simulated the mechanical effects of proteins such as desmin that connect myofibrils at Z-disks 33 ., The force within each half-sarcomere ( Fhs ) was the sum of Fpas , a ‘passive’ elastic force due to the mechanical elongation of structural molecules such as titin , and Fact , an ‘active’ force produced by ATP-dependent cross-bridge cycling 34 , 35 ., ( 1 ) Fpas was a single-valued function of the length ( xhs ) of each half-sarcomere ., Fact was more complicated and depended on the half-sarcomeres preceding motion ., Both force components are described in more detail below ., The mechanical behavior of the multi-half-sarcomere framework was simulated by assuming that ( 1 ) the force in a given myofibril was the same at every point along its length and ( 2 ) the sum of the lengths of the half-sarcomeres in each myofibril was equal to the total muscle length ., These assumptions lead to a set of functions ( 2 ) where Fhs , i , j and xhs , i , j respectively describe the force developed by and the length of half-sarcomere i in myofibril j , Fm , j is the force in myofibril j and xT is the total length of the framework ( Fig 2 ) ., These functions can be solved using a root-finding method ( see Numerical Methods section below ) to yield the lengths of each half-sarcomere and thus the mechanical state of the framework ., Fact values for each half-sarcomere in the framework were 
calculated using techniques previously described for a single half-sarcomere model by Campbell & Moss 7 ., Myosin heads were assumed to cycle through the 3-state kinetic scheme shown in Fig 7 ., The proportion p ( xhs ) of cross-bridges participating in the kinetic scheme in each half-sarcomere was set to zero for all xhs during simulations of passive muscle ( pCa 9 . 0 conditions ) ., In simulations of activate muscle ( pCa 4 . 5 conditions ) , p ( xhs ) was assumed to scale with the number of myosin heads overlapping the thin filament ( Fig 8A ) so that ( 3 ) where xoverlap is lthin+lthick−xhs , xmaxoverlap is lthick−lbare , and lthin , lthick , and lbare are the lengths of the thin filaments ( 1120 nm ) , thick filaments ( 815 nm ) and thick filament bare zone ( 80 nm ) respectively and λfalloff is a model parameter arbitrarily set to 0 . 005 nm−1 ., The rate constants defining the probability of a cross-bridge moving to a different biochemical state depended on the length x of the cross-bridge link and twelve model parameters ( Table, 2 ) that were determined by fitting the simulated force values to representative data records using multidimensional optimization techniques ( see below ) ., The spring constant kcb for an individual cross-bridge link was defined as 0 . 0016 N m−1 in close agreement with recent experimental estimates for this parameter 36 , 37 ., Energies for the cross-bridge states ( Fig 8B ) were defined as ( 4 ) where x is the length of the cross-bridge link , xps is the length of the force-generating power-stroke and A1 , base and A2 , base define the minimum energy of cross-bridge links bound in the A1 and A2 states respectively ., The energy difference between the ED and ED′ states ( Fig 8B ) was 25 kBT where kB is Boltzmanns constant ( 1 . 381×10−23 J K−1 ) and T was 288 K . 
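The overlap scaling in Eq 3 can be sketched from the filament lengths stated above. The exact published form of Eq 3 (including the λfalloff smoothing term) is not reproduced in this excerpt, so the snippet below shows only the plain piecewise-linear overlap fraction as an assumed simplification:

```python
L_THIN, L_THICK, L_BARE = 1120.0, 815.0, 80.0  # nm, values given in the text

def overlap_fraction(x_hs):
    """Fraction of myosin heads overlapping the thin filament for a
    half-sarcomere of length x_hs (nm). This is the piecewise-linear
    core of Eq 3; the published model additionally smooths the edges
    with the lambda_falloff parameter, omitted here."""
    x_overlap = L_THIN + L_THICK - x_hs          # overlapped thick-filament span
    x_max_overlap = L_THICK - L_BARE             # head-bearing thick-filament length
    return max(0.0, min(x_overlap, x_max_overlap)) / x_max_overlap
```

At x_hs = 1200 nm the head-bearing region is fully overlapped (fraction 1.0), and overlap is lost entirely once the half-sarcomere is stretched beyond 1935 nm.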
( The original experiments were performed at 15°C 7 , 37 ) ., Strain-dependent rate functions f12 ( x ) , f23 ( x ) and f31 ( x ) for the forward transitions ( Fig 7 ) were defined as ( 5 ) Reverse rate functions g21 ( x ) , g32 ( x ) and g13 ( x ) were defined in terms of the forward rate functions and the energy difference between the relevant states 38 as ( 6 ) Panels B , C and D in Fig 8 show the strain-dependence of the free energy diagram for the cross-bridge scheme , the forward rate functions and the reverse rate functions used in the simulations shown in Fig 3 ., The numerical values of the relevant parameters are listed in the third column in Table 2 ., The number of myosin heads per unit cross-sectional area in a single half-sarcomere framework was always N0 ( defined in this work as 1 . 15×1017 m−2 36 ) ., Half-sarcomere heterogeneity was incorporated into the simulations of multiple half-sarcomere frameworks by assuming that the number of myosin heads per half-sarcomere was a normally distributed variable ., Thus the actual number ( Ni ) of myosin heads participating in the cross-bridge cycle in half-sarcomere i at half-sarcomere length xhs was equal to ( 7 ) where Gi ( α ) was a variable randomly selected from a Gaussian distribution with mean of unity and a variance of α ., The passive force Fpas increased in a non-linear manner as ( 8 ) where σ , xoffset and L were determined by curve-fitting 7 , 14 , with the exception of one set of simulations ., Fig 6A shows force records simulated with a passive force that increased linearly with half-sarcomere length as ( 9 ) where kpas defines the stiffness of the passive elastic spring and xslack is the half-sarcomere length at which the spring falls slack ., Filament compliance effects 39 , 40 were incorporated by assuming that if a half-sarcomere changed length by Δxhs in a given time-step each cross-bridge link in the half-sarcomere changed length by ½Δxhs 11 ., This over-simplifies the realignment of 
actin binding sites and myosin heads that occurs in real muscle fibers but the finite availability of computing power means that it is not yet practical to implement more realistic simulations of filament compliance effects 41–44 with a framework containing 300 half-sarcomeres ., The mathematical model was implemented as a multi-threaded console application ( Visual Studio 2005 , Microsoft , Redmond , WA ) written in C++ ., Equation 2 was solved using the newt ( ) function described by Press et al . 45 , which invokes Newton's method to solve non-linear sets of functions ., Δx for cross-bridge populations 7 was set to 0 . 5 nm ., The time-step was set to 1 ms . Reducing these parameters by 50% did not materially change the results of the calculations ., Calculated rate constants ( Eqs 5 and 6 ) were constrained to a maximum value of 500 s−1 ., Rate constants were set to zero if the calculated value was less than 0 . 01 s−1 ., This simplified the numerical procedures used to solve the evolution of the cross-bridge populations ., Randomly-distributed double-precision numbers were generated using the Mersenne Twister Algorithm 46 ., Post-processing of simulation output files and subsequent figure development was performed using custom-written MATLAB ( The MathWorks , Natick , MA ) software ., Particle swarm optimization routines 47 were used to fit the force traces predicted by the simulations to selected experimental records ., This was done by searching for the lowest attainable value of an error function defined as ( 10 ) where Fexpt , i is the experimentally-recorded force value at time-point i and F ( Φ ) predict , i is the corresponding prediction for parameter set Φ ., Solving Eq 2 for a framework with nm = 6 and nhs = 50 took ∼0 . 25 s on a quad-core 2 .
5 GHz personal computer ., Each simulated force response ( of order 10^3 time-steps with 1 ms resolution ) therefore required ∼5 minutes to compute ., To reduce the wall-time required for the parameter estimation procedures , the calculations were performed using spare screen-saver processing time on ∼30 computers running DEngine ( for Distributed computing ENGINE ) software developed by the author ( http://www.dengine.org ) ., This arrangement allowed typical optimization tasks to be completed using a particle swarm algorithm 47 in ∼2 days ( ∼10 times faster | Introduction, Results, Discussion, Materials and Methods | Most reductionist theories of muscle attribute a fiber's mechanical properties to the scaled behavior of a single half-sarcomere ., Mathematical models of this type can explain many of the known mechanical properties of muscle but have to incorporate a passive mechanical component that becomes ∼300% stiffer in activating conditions to reproduce the force response elicited by stretching a fast mammalian muscle fiber ., The available experimental data suggests that titin filaments , which are the most likely source of the passive component , become at most ∼30% stiffer in saturating Ca2+ solutions ., The work described in this manuscript used computer modeling to test an alternative systems theory that attributes the stretch response of a mammalian fiber to the composite behavior of a collection of half-sarcomeres ., The principal finding was that the stretch response of a chemically permeabilized rabbit psoas fiber could be reproduced with a framework consisting of 300 half-sarcomeres arranged in 6 parallel myofibrils without requiring titin filaments to stiffen in activating solutions ., Ablation of inter-myofibrillar links in the computer simulations lowered isometric force values and lowered energy absorption during a stretch ., This computed behavior mimics effects previously observed in experiments using muscles from desmin-deficient mice in
which the connections between Z-disks in adjacent myofibrils are presumably compromised ., The current simulations suggest that muscle fibers exhibit emergent properties that reflect interactions between half-sarcomeres and are not properties of a single half-sarcomere in isolation ., It is therefore likely that full quantitative understanding of a fibers mechanical properties requires detailed analysis of a complete fiber system and cannot be achieved by focusing solely on the properties of a single half-sarcomere . | Quantitative muscle biophysics has been dominated for the last 60 years by reductionist theories that try to explain the mechanical properties of an entire muscle fiber as the scaled behavior of a single half-sarcomere ( typical muscle fibers contain ∼106 such structures ) ., This work tests the hypothesis that a fibers mechanical properties are irreducible , meaning that the fiber exhibits more complex behavior than the half-sarcomeres do ., The key finding is that a system composed of many interacting half-sarcomeres has mechanical properties that are very different from that of a single half-sarcomere ., This conclusion is based on the results of extensive computer modeling that reproduces the mechanical behavior of a fast mammalian muscle fiber during an imposed stretch without requiring that titin filaments become more than 3-fold stiffer in an activated muscle ., This work is significant because it shows that it is probably not sufficient to attribute functional properties of whole muscle fibers solely to the behavior of a single half-sarcomere ., Systems-level approaches are therefore likely to be required to explain how known structural and biochemical heterogeneities influence function in normal and diseased muscle tissue . | mathematics, physiology/muscle and connective tissue, biophysics/theory and simulation, physiology/motor systems, computational biology/systems biology | null |
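The core numerical task in the half-sarcomere framework described above — Eq 2's requirement that force be uniform along each myofibril while the half-sarcomere lengths sum to the imposed total length — can be illustrated with a toy solver. The force law below is a stand-in nonlinear spring (the real Fhs of Eq 1 adds the active cross-bridge term), and the nested-bisection scheme is an assumed simplification replacing the Newton solver used in the paper, restricted here to a single myofibril:

```python
def hs_force(k_lin, k_cub, x_rest):
    # Stand-in monotonic force-length law for one half-sarcomere.
    return lambda x: k_lin * (x - x_rest) + k_cub * (x - x_rest) ** 3

def invert(f, target, lo=500.0, hi=3000.0, tol=1e-9):
    # Bisection: length x at which the monotonic law f carries force `target`.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def solve_myofibril(forces, x_total, f_lo=-100.0, f_hi=100.0, tol=1e-9):
    # Eq 2 for a single myofibril: one common force F in the series chain,
    # with half-sarcomere lengths constrained to sum to x_total.
    def total_length(F):
        return sum(invert(f, F) for f in forces)
    while f_hi - f_lo > tol:
        F = 0.5 * (f_lo + f_hi)
        f_lo, f_hi = (F, f_hi) if total_length(F) < x_total else (f_lo, F)
    F = 0.5 * (f_lo + f_hi)
    return F, [invert(f, F) for f in forces]

# Three slightly heterogeneous half-sarcomeres stretched to 4050 nm total
chain = [hs_force(0.01, 1e-8, r) for r in (1250.0, 1300.0, 1350.0)]
F, lengths = solve_myofibril(chain, 4050.0)
```

A multi-myofibril framework with inter-myofibrillar links requires solving the full coupled set of Eq 2 (as with the paper's Newton solver); this sketch shows only the series force-balance constraint within one myofibril.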
525 | journal.pcbi.1003123 | 2,013 | Predicting Network Activity from High Throughput Metabolomics | Knowledge of many metabolic pathways has accumulated over the past century ., For instance , glycolysis , citric acid cycle and oxidative phosphorylation fuel cellular processes through the generation of adenosine triphosphate; glycans and cholesterols not only serve as structural blocks but also mediate intercellular communication ., In fact , metabolites pervade every aspect of life 1 , 2 ., Their roles are increasingly appreciated , as advancing tools allow deeper scientific investigations ., The most notable progresses in recent years come from metabolomics and genome-scale metabolic models ., Metabolomics is the emerging field of comprehensive profiling of metabolites ., As metabolites are the direct readout of functional activity , metabolomics fills in a critical gap in the realm of systems biology , complementing genomics and proteomics 3–6 ., The technical platforms of metabolomics are mainly based on mass spectromety and nuclear magnetic resonance 4 , 7 ., Among them , untargeted LC/MS ( liquid chromatography coupled mass spectrometry ) , especially on high resolution spectrometers , produces unparalleled throughput , measuring thousands of metabolite features simultaneously 5 , 8–10 ., On the other hand , genome-scale metabolic models have been largely driven by genomics , as the total list of metabolic enzymes of a species can be derived from its genome sequence 11 , 12 ., The reconstruction of microbial metabolic network models is an established process 13 , 14 ., Intense manual curation , however , was required in the building of two high-quality human models 15 , 16 , which were followed by a number of derivatives 17–20 ., The coverage of these metabolic models greatly exceeds the conventional pathways ., Even though they are a perfect match in theory , metabolomics and genome-scale metabolic models have had little overlap so far ., The use of 
metabolomics in building metabolic models is rare 21 , due to the scarcity of well annotated metabolomics data ., The application of genome-scale metabolic models to metabolomics data is not common either 22 ., The limited throughput of targeted metabolomics usually does not motivate large scale network analysis ., Untargeted metabolomics cannot move onto pathway and network analysis without knowing the identity of metabolites ., A typical work flow of untargeted metabolomics is illustrated in Figure 1A ., After ionized molecules are scanned in the spectrometer , the spectral peaks are extracted , quantified and aligned into a feature table ., At this point , each feature is identified by a mass-to-charge ratio ( m/z ) and retention time in chromatography , but its chemical identity is not known ., To assign a spectral feature to a bona fide metabolite , it usually involves tandem mass spectrometry to examine the fragmentation pattern of a specific feature , or coelution of isotopically labeled known references - both are inherently low throughput ., Considerable effort is needed to build a spectral library , which is often of limited size and interoperability ., Thus , metabolite identification forms the bottleneck of untargeted metabolomics 23 ., A number of informatics tools have been developed for LC/MS metabolomics , ranging from feature extraction 24–26 , pathway analysis and visualization 27–29 to work flow automation 30–32 ., Yet , whereas pathway and network analysis is concerned , the existing tools require identified metabolites to start with ., Computational prediction of metabolite identity , based on m/z alone , is deemed inadequate as a single m/z feature can match multiple metabolites even with high instrumental accuracy 33 , 34 , and multiple forms of the same metabolite often exist in the mass spectra 35 ., Although automated MS/MS ( tandem mass spectrometry ) search in databases is improving the efficiency of metabolite identification 36 , 37 , 
this requires additional targeted experiments and relies on extensive databases , where data from different platforms often do not match ., How to bring untargeted metabolomics data to biological interpretation remains a great challenge ., In this paper , we report a novel approach of predicting network activity from untargeted metabolomics without upfront identification of metabolites , thus greatly accelerating the work flow ., This is possible because the collective power in metabolic networks helps resolve the ambiguity in metabolite prediction ., We will describe the computational algorithms , and demonstrate their application to the activation of innate immune cells ., The genome-scale human metabolic network in mummichog is based on KEGG 38 , UCSD Recon1 15 and Edinburgh human metabolic network 16 ., The integration process was described previously 39 ., The organization of metabolic networks has been described as hierarchical and modular 40 ., When we perform a hierarchical clustering on the metabolic reactions in our network , its modular structure is clear ( Figure 2A ) ., This modular organization , as reported previously 41 , often but not always correlates with conventional pathways ( Figure 2B ) ., The module definition in this work is adopted from Newman and Girvan 42 , 43 , where a module is a subnetwork that shows more internal connections than expected randomly in the whole network ., Modules are less biased than pathways , which are defined by human knowledge and conventions , and outgrown by genome-scale metabolic networks ., Activity of modules may exist within and in between pathways ., Deo et al 22 convincingly demonstrated the advantage of unbiased module analysis over pathways ., On the other hand , pathways have built-in human knowledge , which may be more sensitive under certain scenarios ., Pathway analysis and module analysis are rather complementary , and both are included in mummichog ., The reference metabolic network model contains 
both metabolites and enzymes ., Since metabolomics only measures metabolites , the model is converted to a metabolite-centric network for statistical analysis ., Enzymes are only added later in the visualization step to aid user interpretation ., Within the predefined reference metabolic network model , mummichog searches for all modules that can be built on user input data , and computes their activity scores ., This process is repeated many times for the permutation data to estimate the background null distribution ., Finally , the statistical significance of modules based on user data is calculated on the null distribution ., The specific steps are as follows: The basic test for pathway enrichment here is Fisher's exact test ( FET ) , which is widely used in transcriptomic analysis ., The concept of FET is , when we select m features from a total of N features , and find k of the m features present on a pathway of size n , the chance of getting k can in theory be estimated by enumerating the combinations of m , n and N ., To apply FET to an enrichment test of metabolic features on pathways , we need to understand the additional layer of complexity ., Our metabolic features can be enumerated either in the m/z feature space or in the metabolite ( true compound ) space ., Since metabolic pathways are defined in the metabolite space , either way needs to factor in the many-to-many mapping between m/z features and metabolites ( Figure S1 ) ., This mapping is effectively covered in our permutation procedure , which starts from the m/z features and reruns the mapping every time ., The overall significance of a pathway enrichment is estimated based on a method by Berriz et al 44 , which ranks the p-value from real data among the p-values from permutation data to adjust for type I error ., Finally , a more conservative version of FET , EASE , is adopted to increase the robustness 45 ., The key idea of EASE is to take out one hit from each pathway , thus preferentially penalizing
pathways with fewer hits ., The specific steps are as follows: Both the module analysis and pathway analysis above serve as a framework to estimate the significance of functional activities ., In return , the predicted metabolites in significant activities are more likely to be real ., Mummichog collects these metabolites , and look up all their isotopic derivatives and adducts in ., A confidence rating system is applied to filter for qualified metabolites ., For instance , if both the single-charged form M+H1+ and the form M ( C13 ) +H1+ are present , this metabolite prediction carries a high confidence ., All the qualified metabolites carry over their connections in the reference metabolic network , and form the “activity network” for this specific experiment ( e . g . Figure 3 ) ., The activity network gears towards a quality and clear view of user data , as modules and pathways can be redundant and fragmented ., It also accentuates the activity in a specific experimental context , in contrast to the generic nature of the reference metabolic network ., We next illustrate the application of these algorithms to a novel set of immune cell activation data , and two published data sets on human urinary samples and yeast mutants ., The innate immunity plays a critical role in regulating the adaptive immunity , and the field was recognized by the 2011 Nobel Prize in Physiology or Medicine 46 ., According to the nature of stimuli , innate immune cells direct different downstream molecular programs , which are still under intense scientific investigation 47 , 48 ., In this study , we examine the metabolome of human monocyte-derived dendritic cells ( moDC ) under the stimulation of yellow fever virus ( YF17D , a vaccine strain ) ., We have shown previously that yellow fever virus activates multiple toll-like receptors , and induces cellular stress responses 49–51 ., This data set is , to our knowledge , the first high throughput metabolomics on any immune cells ( 
macrophages were previously studied by limited throughput ) ., The cell extracts from our activation experiment were analyzed by LC/MS metabolomics , and yielded 7 , 995 spectral features ( denoted as the reference list ) after quality control ., Among them , 601 features were significantly different between the infected samples and both the baseline and time-matched mock controls ( student t-test; denoted as the significant list ) ., Using the reference and significant lists , mummichog computes significant pathways and modules and the activity network ., Viral infection induced a massive shift of metabolic programs in moDCs ( pathways in Table S1 , modules in Figure S2 ) ., The predicted activity network is shown in Figure 3A , and we will focus our investigation on a small subnetwork ( Figure 3B ) , which includes the metabolisms of nucleotides , glutathione/glutathione disulfide and arginine/citrulline ., Nucleotides are required for viral replication , and the hijacking of host nucleotide metabolism by virus has been well described 52–54 ., Glutathione is best known as an intracellular antioxidant , where it is oxidized to glutathione disulfide ( GSSG ) ., However , our data show that both glutathione and GSSG are depleted in activated moDCs , departing from this conventional wisdom ., The across-the-board depletion is consistent with the down-regulation of genes for glutathione synthesis ( Figure 4B ) ., Our data support the notion that glutathione is released by dendritic cells and conditions the extracellular microenvironment during their interaction with T cells 55–57 ., Arginine is known to be an important regulator in innate immune response 58 , 59 ., Arginine metabolism can lead to two pathways: to ornithine ( catalyzed by arginase ) or to citrulline ( catalyzed by nitric oxide synthase ) ., The decrease of arginine and increase of citrulline suggests the latter pathway , which is the main reaction producing intracellular nitric oxide ., We indeed detected the inhibition of eNOS and iNOS expression later ( Figure 4C ) , a
well-documented feedback effect by nitric oxide 60 ., We also performed tandem mass spectrometry on the metabolites in Figure 3B , using authentic chemicals as references ., All the metabolites , except glutamylcysteine and thyroxine , were confirmed ( Figure 5 , Figure S3 ) ., The depletion of arginine and accumulation of citrulline in moDC was also triggered by dengue virus but not by lipopolysaccharide ( LPS , Figure S4 ) ., It is known that iNOS is induced in dendritic cells by LPS but not by virus 47 , 61 ., Our data suggest a different nitric oxide pathway in viral infection , driven by constitutive nitric oxide synthases ., The intracellular nitric oxide has a fast turnover and we did not detect its accumulation by fluorometric assays ( data not shown ) ., We previously demonstrated that the phosphorylation of EIF2A was induced by YF17D 50 ., An upstream mechanism is now suggested by this metabolomic experiment , as both the production of nitric oxide and depletion of arginine induce the activity of EIF2A kinases 62 ., The nature of metabolomics data often varies by platforms and sample types ., We thus extend our mummichog approach to two published data sets on human urinary samples 63 and on yeast cell extracts 64 ., Both data sets carry metabolite annotation by the original authors , which can be used to evaluate the prediction by mummichog ., The human urinary data contained both formal identification by matching to a local library of chemical references and putative identification by combining multiple public resources 63 ., We used mummichog to investigate the gender difference in this data set , and predicted an activity network of 45 metabolites ., Among them , 13 were not found in the original annotation ., For the remaining metabolites , 97% ( 31/32 ) agreed between mummichog and the original annotation ( Figure 6 ) ., There is an option in mummichog to enforce the presence of the M+H+ form ( for positive mode , M−H− for negative mode ) in
metabolite prediction ., With this option , 3 out of 44 metabolites were not in the original annotation , and the remaining 41 metabolites were in 100% agreement ., The mummichog algorithms are not tied to a specific metabolic model ., We adopted the yeast metabolic model from the BioCyc database 11 for the yeast data 64 , to predict an activity network contrasting mutant and wild-type strains ., This data set was only annotated for 101 metabolites through the authors' local library ., As a result , the majority of metabolites in the predicted network by mummichog were not found in the original annotation ., Out of the remaining 28 metabolites , 24 ( 86% ) agreed between mummichog and the original annotation ( Figure 6 ) ., Enforcing the presence of the primary ion M−H− ( data collected in negative ion mode ) had little effect on the result , since the original annotation was already biased toward metabolites that are ionized easily ., These results show that the prediction by mummichog is robust across platforms and sample types ., Critical to the success of mummichog is the integration of genome-scale metabolic models ., We have used in this study a recent human metabolic model ., An alternative human model from BioCyc 11 produced comparable results ( Figure S6 ) ., The coverage of the models in all three case studies is shown in Table 1 ., These genome-scale metabolic models are more extensive than conventional pathways , and were shown to capture activities in between pathways 22 ., The pathway organizations differ between the two human models , as the BioCyc model tends to use smaller pathways ., This creates some model dependency in the pathway analysis , but has little effect on the “activity network” , as mummichog is more network-centric ., The two test cases in Figure 6 also indicate that these models tend to capture more information than conventional annotations ., However , as mentioned earlier , the new data from metabolomic studies are yet to be integrated into
these genome-scale metabolic models ., For example , a number of metabolites in metabolomics databases 36 , 65 , 66 are not in any of these metabolic models ., In general , the features from a high resolution profiling experiment by far exceed the current annotations in metabolite databases ., This leads to a de facto filtering when data are run on mummichog ( a similar situation arises in database searches ) ., Meanwhile , the features that can be mapped to the current metabolic model are more likely to be biologically relevant ., This “filtering” is pertinent to the metabolic model , not to the mummichog algorithms - mummichog still has to choose the true metabolites from multiple possible candidates ( Figure S1B ) ., It will be an important future direction to advance metabolic modeling with the chemical data ., We also expect the metabolic models to improve on lipid annotation , physiological context and tissue specificity ., As lessons learned from transcriptomics , pathway and network analysis not only provides functional context , but also the robustness to counteract noise at the individual feature level , which is commonly seen in omics experiments ., Similarly , the prediction on activity by mummichog is tolerant to errors at the individual feature level ., In the moDC data , we chose the significant list by a p-value cutoff ., When we varied this cutoff over a range of values , the program returned networks of a stable set of metabolites ( Figure S7 ) ., The module-finding procedure in the program was designed to extensively sample subnetwork structures ., Among the modules will be many variations , but the subsequent “activity network” will collapse on stable results ., Indeed , we tested an alternative algorithm of modularization 67 , and it returned almost identical predicted networks , in spite of moderately different intermediate modules ( Figure S8 ) ., In theory , there are merits to incorporate a statistical matrix from the feature selection step prior to mummichog's analysis and mass flow balance of
metabolic reactions 22 , 68 ., While these are appealing directions for future research , the current version of mummichog confers some practical robustness , such as tolerance to technological noise and biological sampling limitations ., For example , mass balance is almost impossible within serum or urine samples , because the reactions producing these metabolites are likely to occur in other tissues ., The number of overlap metabolites is used in the enrichment calculation in both module analysis and pathway analysis ., Sometimes , a single m/z feature may match to several metabolites in the same module/pathway , inflating the overlap number ., Thus , mummichog always compares the number of overlap metabolites and the number of corresponding m/z features , and uses the smaller number for enrichment calculation , since the smaller number is more likely to be true ., The size of each metabolic pathway is defined by the number of metabolites in the pathway ., mummichog uses only the metabolites that can be matched in the input feature list to define a pathway size , because this reflects the analytical coverage of the experiment and is confined by the same coverage ., Overall , mummichog uses the whole feature list from an experiment for resampling , therefore the computation of statistical significances effectively circumvents analytical biases ., In spite of the fantastic progress in mass spectrometry , these are the early days of metabolomics ., Effective computational integration of resources , the combination of cheminformatics and bioinformatics , will greatly benefit the field 69 , 70 ., As data accumulate , further method refinement will become possible ., Mummichog presents a practical solution for one-step functional analysis , bypassing the bottleneck of upfront metabolite identification ., It trades off some sensitivity in the conventional approach for the much accelerated workflow of high throughput LC/MS metabolomics ., Mummichog is not designed to replace tandem mass
spectrometry in metabolite identification ., It is the biological activity not metabolites per se that mummichog predicts ., Even with some errors on individual metabolites , as long as the biology is pinpointed to a subnetwork structure , investigators can focus on a handful of validations , steering away from the lengthy conventional work flow ., In conclusion , we have demonstrated that mummichog can successfully predict functional activity directly from a spectral feature table ., This benefits from the convergence of genome-scale metabolic models and metabolomics ., Mummichog will continue to improve as the metabolic network models evolve ., We expect this method to greatly accelerate the application of high throughput metabolomics ., The mummichog software is available at http://atcg . googlecode . com ., Human peripheral blood mononuclear cells ( PBMCs ) were isolated from Buffy coats by separation over a Lymphoprep gradient ., CD14+ monocytes were isolated from the PBMCs with MACS beads ( Miltenyi Biotec , Auburn , CA ) and cultured for 7 days with 20 ng/ml GM-CSF and 40 ng/ml IL-4 ( Peprotech , Rocky Hill , NJ ) ., MoDCs were then harvested , washed twice and resuspended in serum-free medium ., MoDCs ( ) were stimulated in triplicate in 48-well plates in a 200 µL volume with Yellow Fever virus ( M . O . I . 
of 1 ) , or mock infected ., After 2 hrs , 800 µL of 10% FBS-RPMI was added to all wells ., MoDCs were harvested at 6 hr or 24 hr after infection and centrifuged ., Supernatants were aspirated , and dry cell pellets were frozen at −80°C ., Supernatants of moDC cultures were assayed for the concentration of IL-6 and TNF using ELISA kits ( BD , San Diego , CA ) ., Three biological replicates were used for LC/MS and QPCR ., Full scan LC/MS ( m/z range 85–2000 ) was performed essentially as previously described 8 ., Cell extracts or supernatants were treated with acetonitrile ( 2∶1 , v/v ) and centrifuged at 14 , 000× g for 5 min at 4°C to remove proteins ., Samples were maintained at 4°C in an autosampler until injection ., A Thermo Orbitrap-Velos mass spectrometer ( Thermo Fisher , San Diego , CA ) coupled with anion exchange chromatography was used for data collection , via positive-ion electrospray ionization ( ESI ) ., Metabolites of interest were identified by tandem mass spectrometry on an LTQ-FTMS , where the biological sample , biological sample spiked in with authentic chemical and authentic chemical reference were run sequentially ., The tandem MS experiments were done in the ion trap of the LTQ-FTMS , with an isolation width of 1 amu and a normalized collision energy of 35 eV ., The LC/MS data were processed with the apLCMS program 25 for feature extraction and quantification ., Significant features were also verified by inspecting the raw data ( Figure S5 ) ., Features were removed if their intensity was below 10 , 000 in every sample class ., Missing intensity values were imputed to 1000 ., The intensities were log2 transformed ., Low quality features were further filtered out if their averaged in-class coefficient of variation was greater than 0 . 2 ., Averaged ion intensity over three machine replicates was used for subsequent analysis ., These 7 , 995 features constituted the reference list ., No normalization was used because total ion counts in all samples were very similar ., Student's t-test was used to compare infected samples ( at 6 hr ) versus mock infections ( at 6 hr ) , and infected samples ( at 6 hr ) versus baseline controls ( at 0 hr ) ., Features significant in both tests were included in the significant list ., The feature table , the reference and significant lists , and predictions are given in Dataset S1 ., For gene expression quantification , mRNA was extracted by the RNeasy Mini Kit ( Qiagen ) according to the manufacturer's protocol , where the cell lysate was homogenized by QIAshredder spin columns ., Reverse transcription was performed with SuperScript III reverse transcriptase and oligo-dT ( Invitrogen ) according to the manufacturer's recommendation ., Real-time PCR was performed on a MyiQ Icycler ( BioRad ) , using SYBR Green SuperMix ( Quanta Biosciences ) ., The PCR protocol used 95°C 3 mins; 40 cycles of 95°C 30 seconds and 60°C for 60 seconds ., The amplicons were verified by melting curves ., Quantification was performed by the ΔΔCt method , normalized by microglobulin levels ., The primer sequences are given in Table S2 ., Data on human urinary samples 63 were retrieved from the MetaboLights server 71 ., The positive ion feature table for study “439020” contained 14 , 720 features ., A feature is only included if its ion intensity is above 100 , 000 in 5 or more samples ., This leaves 11 , 086 features , which constituted the reference list for this study ., Data were normalized by total ion counts ., We next compared the metabolite difference between females ( 8 samples of low testosterone glucuronide level ) and males ( 11 samples of high testosterone glucuronide level ) ., The significant list consisted of 524 features ( by Student's t-test ) ., The original authors annotated 3 , 689 metabolite features , and their annotation was used to compare with the prediction by mummichog
., The yeast data 64 were downloaded from the MAVEN website 32 in mzXML format ., Feature extraction was performed in MAVEN through two approaches: a targeted approach and an untargeted approach ., The targeted approach used a chemical library from the same lab and produced 177 annotated features , which corresponded to 101 metabolites ., The untargeted approach extracted 6318 features without annotation ., After the same processing procedure as in our moDC data , the reference list contained 5707 features ., We thus used mummichog to predict on the untargeted data , and compared the result to the MAVEN annotation ., The significant list consisted of 426 features that were significantly different between the prototrophic wild type and the auxotrophic mutant ( by Student's t-test ) ., The yeast metabolic model was compiled from BioCyc data 11 . | Introduction, Results, Discussion, Methods | The functional interpretation of high throughput metabolomics by mass spectrometry is hindered by the identification of metabolites , a tedious and challenging task ., We present a set of computational algorithms which , by leveraging the collective power of metabolic pathways and networks , predict functional activity directly from spectral feature tables without a priori identification of metabolites ., The algorithms were experimentally validated on the activation of innate immune cells . 
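The Fisher's exact test and EASE adjustment described in the sections above can be sketched with the hypergeometric distribution; the function names and the use of Python's `math.comb` are illustrative, not the paper's implementation:

```python
from math import comb

def hypergeom_pval(k, n, m, N):
    """Right-tail hypergeometric p-value: probability of observing at least k
    pathway members when n significant features are drawn from N total
    features and the pathway contains m of them."""
    if k <= 0:
        return 1.0
    total = comb(N, n)
    return sum(comb(m, i) * comb(N - m, n - i)
               for i in range(k, min(n, m) + 1)) / total

def ease_pval(k, n, m, N):
    """EASE-style conservative variant: remove one hit before testing,
    which preferentially penalizes pathways with few hits."""
    return hypergeom_pval(k - 1, n, m, N)
```

In mummichog the analogous statistic from real data is then ranked against the permutation results to control type I error.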
| Mass spectrometry based untargeted metabolomics can now profile several thousand metabolites simultaneously ., However , these metabolites have to be identified before any biological meaning can be drawn from the data ., Metabolite identification is a challenging and low-throughput process , and therefore becomes the bottleneck of the field ., We report here a novel approach to predict biological activity directly from mass spectrometry data without a priori identification of metabolites ., By unifying network analysis and metabolite prediction under the same computational framework , the organization of metabolic networks and pathways helps resolve the ambiguity in metabolite prediction to a large extent ., We validated our algorithms on a set of activation experiments of innate immune cells ., The predicted activities were confirmed by both gene expression and metabolite identification ., This method shall greatly accelerate the application of high throughput metabolomics , as the tedious task of identifying hundreds of metabolites upfront can be shifted to a handful of validation experiments after our computational prediction . | systems biology, metabolic networks, biology, computational biology | null |
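The many-to-many m/z-to-metabolite mapping discussed in the row above can be illustrated with a minimal adduct matcher; the candidate masses, the 10 ppm tolerance and the single M+H+ adduct are simplifying assumptions for the sketch:

```python
# Illustrative monoisotopic neutral masses (Da), from standard atomic masses.
CANDIDATES = {"glycine": 75.03203, "alanine": 89.04768, "serine": 105.04259}

PROTON = 1.007276  # proton mass in Da, used for the M+H+ adduct

def match_feature(mz, candidates=CANDIDATES, tol_ppm=10.0):
    """Return every metabolite whose M+H+ ion lies within tol_ppm of the
    measured m/z; one feature can hit several metabolites, which is the
    many-to-many ambiguity that the permutation scheme has to absorb."""
    hits = []
    for name, neutral_mass in candidates.items():
        expected = neutral_mass + PROTON
        if abs(mz - expected) / expected * 1e6 <= tol_ppm:
            hits.append(name)
    return hits
```

A real implementation would also enumerate other adducts and isotopic forms (M+Na+, M(C13)+H+, etc.) against the full reference metabolic model.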
814 | journal.pcbi.1003595 | 2,014 | Agent-Based Modeling of Oxygen-Responsive Transcription Factors in Escherichia coli | The bacterium Escherichia coli is a widely used model organism to study bacterial adaptation to environmental change ., As an enteric bacterium , E . coli has to cope with an O2-starved niche in the host and an O2-rich environment when excreted ., In order to exploit the energetic benefits that are conferred by aerobic respiration , E . coli has two major terminal oxidases: cytochrome bd-I ( Cyd ) and cytochrome bo′ ( Cyo ) that are encoded by the cydAB and cyoABCDE operons , respectively 1 , 2 ., Cyd has a high affinity for O2 and is induced at low O2 concentrations ( micro-aerobic conditions ) , whereas Cyo has a relatively low affinity for O2 and is predominant at high O2 concentrations ( aerobic conditions ) 3 ., These two terminal oxidases contribute differentially to energy conservation because Cyo is a proton pump , whereas Cyd is not 1 , 2; however , the very high affinity of Cyd for O2 allows the bacterium to maintain aerobic respiration at nanomolar concentrations of O2 , thereby maintaining aerobic respiratory activity rather than other , less favorable , metabolic modes 4–6 ., The transcription factors , ArcA and FNR , regulate cydAB and cyoABCDE expression in response to O2 supply 7 ., FNR is an iron-sulfur protein that senses O2 in the cytoplasm 8 , 9 ., In the absence of O2 the FNR iron-sulfur cluster is stable and the protein forms dimers that are competent for site-specific DNA-binding and regulation of gene expression 10 ., The FNR iron-sulfur cluster reacts with O2 in such a way that the DNA-binding dimeric form of FNR is converted into a non-DNA-binding monomeric species 10 ., Under anaerobic conditions , FNR acts as a global regulator in E . 
coli 11–13 , including the cydAB and cyoABCDE operons , which are repressed by FNR when the O2 supply is restricted 7 ., Under aerobic conditions , repression of cydAB and cyoABCDE is relieved and Cyd and Cyo proteins are synthesized 3 ., In contrast , ArcA responds to O2 availability indirectly via the membrane-bound sensor ArcB ., In the absence of O2 , ArcB responds to changes in the redox state of the electron transport chain and the presence of fermentation products by autophosphorylating 14–16 ., Phosphorylated ArcB is then able to transfer phosphate to the cytoplasmic ArcA regulator ( ArcA∼P ) , which then undergoes oligomerization to form a tetra-phosphorylated octamer that is capable of binding at multiple sites in the E . coli genome 17 , 18 , including those in the promoter regions of cydAB and cyoABCDE to enhance synthesis of Cyd and inhibit production of Cyo 7 , 17 ., Because the terminal oxidases ( Cyd and Cyo ) consume O2 at the cell membrane , a feedback loop is formed that links the activities of the oxidases to the regulatory activities of ArcA and FNR ( Figure 1 ) ., These features of the system - combining direct and indirect O2 sensing with ArcA∼P and FNR repression of cyoABCDE , and ArcA∼P activation and FNR repression of cydAB - result in maximal Cyd production when the O2 supply is limited ( micro-aerobic conditions ) and maximal Cyo content when O2 is abundant ( aerobic conditions ) 3 ., Although the cellular locations of the relevant genes ( cydAB and cyoABCDE ) , the regulators ( ArcBA and FNR ) and the oxidases ( Cyd and Cyo ) are likely to be fundamentally important in the regulation of this system , the potential significance of this spatial organization has not been investigated ., Therefore , a detailed agent-based model was developed to simulate the interaction between O2 molecules and the electron transport chain components , Cyd and Cyo , and the regulators , FNR and ArcBA , to shed new light on individual events within local
spatial regions that could prove to be important in regulating this core component of the E . coli respiratory process ., The dynamics of the system were investigated by running the simulation through two cycles of transitions from 0–217% AU ., Figure 3a shows a top view of a 3-D E . coli cell at 0% AU ( steady-state anaerobic conditions ) ., Under these conditions , the FNR molecules are present as dimers , all ArcB molecules are phosphorylated and the ArcA is octameric ., The DNA binding sites for ArcA ( 120 in the model ) and FNR ( 350 in the model ) in the nucleoid are fully occupied ., The number of ArcA sites was chosen from the data reported by Liu and De Wulf 18 ., The model must include a mechanism for ArcA∼P to leave regulated promoters ., Upon introduction of O2 into anaerobic steady-state chemostat cultures , ∼5 min was required to inactivate ArcA-mediated transcription 15 ., In the agent-based model presented here , each iteration represents 0 . 2 sec ., Therefore , assuming that ArcA∼P leaving the 120 DNA sites is a first-order process , then t½ is ∼45 sec , which is equivalent to ∼0 . 3% ArcA∼P leaving the DNA per iteration ( Table 3 ) ., The number of FNR binding sites was based on ChIP-seq and ChIP-Chip measurements , which detected ∼220 FNR sites , and a genome sequence analysis that predicted ∼450 FNR sites; thus a mid-range value of 350 was chosen 23–25 ., Interaction with O2 causes FNR to dissociate from the DNA ( Table 3 ) ., Under fully aerobic conditions ( 217% AU ) the FNR dimers are disassembled to monomers , and the different forms of ArcA coexist ( Figure 3b ) ., The ArcA- and FNR- DNA binding sites in the nucleoid are mostly unoccupied due to the lower concentrations of FNR dimers and ArcA octamers ., Examination of the system as it transits from 0% to 217% AU showed that the DNA-bound , transcriptionally active FNR was initially protected from inactivation by consumption of O2 at the cell membrane by the terminal oxidases and by reaction of O2 with the iron-sulfur clusters of FNR dimers in the bacterial cytoplasm - the progress of this simulation is shown in Video S1 ., This new insight into the buffering of the FNR response could serve a useful biological purpose by preventing premature switching off of anaerobic genes when the bacteria are exposed to low concentration O2 pulses in the environment ., In the various niches occupied by E . coli , the bacterium can experience the full range of O2 concentrations from zero , in the anaerobic regions of a host alimentary tract , to full O2 saturation ( ∼200 µM , equivalent to ∼120 , 000 O2 molecules per cell ) , but fully aerobic metabolism is supported when the O2 supply exceeds 1 , 000 O2 molecules per cell ., The profiles of five repetitive simulations for each agent in the model are presented in Figure 4 .
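The per-iteration release fraction quoted above follows from first-order kinetics; a short check (constants taken from the text, variable names illustrative):

```python
import math

HALF_LIFE_S = 45.0   # quoted t1/2 for ArcA~P leaving its 120 DNA sites
DT_S = 0.2           # duration of one model iteration, in seconds

# First-order kinetics: fraction of bound ArcA~P released per iteration.
rate = math.log(2) / HALF_LIFE_S          # rate constant, s^-1
frac_per_iter = 1.0 - math.exp(-rate * DT_S)
print(round(100 * frac_per_iter, 2))      # -> 0.31, i.e. the ~0.3% quoted above
```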
From iteration 1 to 5000 and iteration 15000 to 20000 , O2 was supplied at a constant value of ∼6 , 500 molecules per cell such that the total number of O2 molecules entering the cell increased linearly; when the O2 supply was stopped ( 5000 to 15000 and 20000 to 30000 iterations ) no more O2 entered the cell and thus the number of O2 molecules that had entered the cell remained unchanged during these periods ( Figure 4a ) ., When O2 became available to the cell ( from iteration 1 ) , the sensor ArcB was de-phosphorylated and started to de-phosphorylate ArcA ., Consequently , the number of ArcA octamers bound at their cognate sites in the nucleoid decreased rapidly ., The ArcA tetramers and dimers produced during de-phosphorylation of the ArcA octamer were transformed to inactive ( de-phosphorylated ) ArcA dimers ( Figure 4d–f ) ., Under aerobic conditions ( iteration 5000 ) all the ArcA was decomposed to inactive ArcA dimers ., When the O2 supply was stopped ( from iteration 5001 ) , the number of inactive ArcA dimers decreased rapidly as shown in Figure 4f , being transformed into phosphorylated ArcA dimers , tetramers and octamers ( Figure 4c–e ) ., Due to the phosphorylated ArcA dimers and tetramers combining to form ArcA octamers , their numbers dropped after initially increasing ., The rate at which the ArcA octamers accumulated ( ArcA activation ) after O2 withdrawal was slower than the rate of ArcA inactivation ( Figures 4b and c ) ., In this implementation of the modeled transition cycle , the numbers of ArcA octamers in the cytoplasm and bound to DNA did not reach those observed in the initial state before the second cycle of O2 supply began , indicating that a longer period is required to return to the fermentation state ., The numbers of FNR dimers bound to binding sites and of free FNR dimers ( cytoplasmic FNR dimers ) decreased when O2 was supplied to the system ( Figures 4g–h ) , but the rate was slower than that for ArcA inactivation , consistent with O2
consumption at the membrane , which can be sensed by ArcB to initiate inactivation of ArcA , but lowers the signal for inactivation of FNR ., When O2 was removed from the system ( from iteration 5001 ) , FNR was activated over a similar timeframe to ArcA ( Figures 4b and g ) , which was again consistent with previous observations 15 ., As with ArcA , free FNR dimers and FNR monomers did not fully return to their initial states after O2 supply was withdrawn in the model , indicating that further iterations are required to reach steady-state ( Figure 4h–i ) ., These results clearly indicate that the model is self-adaptive to the changes in O2 availability , and the reproducible responses prove the reliability and robustness of the model ., The ArcBA system simulated in this model is based on a preliminary biological assumption , and the agent-based model presented here should prove a reliable and flexible platform for exploring the key components of the system and testing new experimental findings ., In order to validate the model with biological measurements of FNR DNA-binding activity estimated using an FNR-dependent lacZ reporter , the ArcBA system agents were removed from the model by setting their agent numbers to zero ., The ArcBA system is an indirect O2 sensor and does not consume O2 , hence the FNR system was not affected by withdrawing ArcBA from the model , but this simplification increased simulation speed ., The O2 step length and other model parameters were estimated using the experimental data obtained at 31% AU ., Using the estimated O2 step length at 31% AU and defining the step length of the O2 molecule as 0 at 0% AU , a linear model , in which the step length is proportional to the O2 concentration , was constructed to predict the step lengths of O2 at other AU levels , where the proportionality constant k = 2 . 1 and the O2 concentration varies with the AU level ( Table 4 ) ., The O2 step lengths predicted by this model were used to validate the model at 85% , 115% and 217% AU , and the accuracy of the linear model was shown by the good correlation between the model and experimental data ., Profiles of five repetitive simulations in which the simplified model was used to predict the numbers of active FNR dimers in steady-state cultures of bacteria grown at different AU values are presented in Figure 5 . At 31% AU , the model implied that FNR-mediated gene expression is unaffected compared to an anaerobic culture ( 0% AU ) , i . e . the number of FNR binding sites occupied in the nucleoid remained unchanged ( Figures 5a and e ) ., Even at 85% AU , ∼80% of the FNR-binding sites remained occupied ( Figures 5b and f ) ., It was only when the O2 supply was equivalent to >115% AU that occupation of the FNR-binding sites in the nucleoid decreased ( Figures 5c , d , g and h ) ., These outputs matched the FNR activities calculated from the measurements of an FNR-dependent reporter ( Table 5 ) and thus demonstrate the ability of the model to simulate the general behavior of FNR dimers in steady-state cultures of E .
coli ., A second validation approach using two FNR variants that are compromised in their ability to undergo monomer-dimer transitions was adopted ., The FNR variant FNR I151A can acquire an iron-sulfur cluster in the absence of O2 , but subsequent dimerization is impaired 26 ., The FNR D154A variant can also acquire an iron-sulfur cluster under anaerobic conditions , but does not form monomers in the presence of O2 26 ., To mimic the behavior of these two FNR variants , the interaction radius for FNR dimer formation was changed in the model ., Thus , the interaction distance for wild-type FNR monomers , which was initially set at 6 nm ( r3 , Table 3 ) , was increased to 2000 nm for the FNR D154A variant , essentially fixing the protein as a dimer , or decreased to 2 . 5 nm for the FNR I151A variant , making this protein predominantly monomeric under anaerobic conditions ., The results of simulations run under aerobic ( 217% aerobiosis ) and anaerobic conditions ( 0% aerobiosis ) suggested that under aerobic conditions wild-type FNR and FNR I151A should be unable to inhibit transcription from an FNR-repressed promoter ( i . e . the output from the reporter system is 100% ) , whereas FNR D154A should retain ∼50% activity ( Table 6 ) ., Under anaerobic conditions , wild-type FNR was predicted to exhibit maximum repressive activity ( i . e . 0% reporter output ) , whereas FNR I151A and FNR D154A mediated slightly enhanced repression compared to the simulated aerobic conditions ( Table 6 ) ., To test the accuracy of these predictions , the ability of wild-type FNR , FNR I151A and FNR D154A to repress transcription of a synthetic FNR-regulated promoter ( FFgalΔ4 ) under aerobic and anaerobic conditions was tested 27 ., The choice of a synthetic FNR-repressed promoter was made to remove complications that might arise due to iron-sulfur cluster incorporation influencing the protein-protein interactions between FNR and RNA polymerase; in the reporter system chosen , FNR simply occludes the promoter of the reporter gene and as such DNA-binding by FNR controls promoter activity ., The experimental data obtained matched the general response of the FNR variants in the simulation , but not very precisely for FNR D154A , with the experimental data indicating more severe repression by FNR D154A under both aerobic and anaerobic conditions than predicted ( Table 6 ) ., This suggested that the interaction radius ( r2 = 5 nm; Table 3 ) , which controls the binding of FNR to its DNA target , required adjustment to enhance DNA-binding of the FNR D154A variant ., Therefore , the simulations were rerun after adjusting r2 to 7 nm for all the FNR proteins considered here ., The results of the simulations for both FNR variants now matched the experimental data well ( Table 6 ) ., However , it was essential to ensure that the adjustment to r2 did not significantly influence the model output for wild-type FNR ., Therefore , simulations of the behaviour of wild-type FNR at 31 , 85 , 115 and 217% aerobiosis were repeated using the adjusted r2 value of 7 nm ., The model output was very similar to that obtained when r2 was at the initial value of 5 nm ( Table 7 ) ., These analyses imply that for FNR D154A , which is essentially fixed in a dimeric state , the rate of binding to the target DNA governs transcriptional
repression , but for wild-type FNR the upstream monomer-dimer transition is the primary determinant controlling the output from the reporter ., The FNR switch has been the subject of several attempts to integrate extensive experimental data into coherent models that account for changes in FNR activity and target gene regulation in response to O2 availability 15 , 28–31 ., These models have provided estimates of active and inactive FNR in E . coli cells exposed to different O2 concentrations and the dynamic behavior of the FNR switch ., The ability of FNR to switch rapidly between active and inactive forms is essential for it to fulfill its physiological role as a global regulator and the models are able to capture this dynamic behavior ., Thus , it is thought that the ‘futile’ cycling of FNR between inactive and active forms under aerobic conditions has evolved to facilitate rapid activation of FNR upon withdrawal of O2 and hence the physiological imperative for rapid activation has determined the structure of the FNR regulatory cycle 30 , 31 ., However , it is less clear from these approaches how the system avoids undesirable switching between active and inactive states at low O2 availabilities ( micro-aerobic conditions , >0%–<100% AU ) ., To achieve rapid FNR response times it has been suggested that minimizing the range of O2 concentrations that constitute a micro-aerobic environment , from the viewpoint of FNR , is advantageous 31 ., Unlike previous models of the FNR switch , the agent-based model described here recognizes the importance of geometry and location in biology ., This new approach reveals that spatial effects play a role in controlling the inactivation of FNR in low O2 environments ., Consumption of O2 by terminal oxidases at the cytoplasmic membrane and reaction of O2 with the iron-sulfur clusters of FNR in the cytoplasm present two barriers to inactivation of FNR bound to DNA in the nucleoid , thereby minimizing exposure of FNR to micro-aerobic 
conditions by maintaining an essentially anaerobic cytoplasm for AU values up to ∼85% ., It is suggested that this buffering of FNR response makes the regulatory system more robust by preventing large amplitude fluctuations in FNR activity when the bacteria are exposed to micro-aerobic conditions or experience environments in which they encounter short pulses of low O2 concentrations ., Furthermore , investigation of FNR variants with altered oligomerization properties suggested that the monomer-dimer transition , mediated by iron-sulfur cluster acquisition , is the primary regulatory step in FNR-mediated repression of gene expression ., It is expected that the current model will act as a foundation for future investigations , e . g . predicting the effects of adding or removing a class of agent to identify the significant regulatory components of the system ., Knowledge of the rate of O2 supply to the E . coli cells was required in order to simulate the response of the regulators of cydAB and cyoABCDE to different O2 availabilities ., Therefore , un-inoculated chemostat vessels were used to measure dissolved O2 concentrations as a function of the percentage O2 in the input gas , Pi , in the absence of bacteria ., This allowed the rate at which O2 dissolves in the culture medium to be calculated , yielding a supply rate of 5 . 898 µmol/L/min ., The number of O2 molecules distributed to a single bacterial cell per iteration was then calculated by multiplying this supply rate by NA , Vcell and a constant n , where NA is the Avogadro constant ( 6 . 022×10^23 ) , Vcell is the volume of an E . coli cell ( 0 . 3925 µm^3 ) and the constant n ( 3 . 3×10^−9 ) includes the unit transformations , min to sec ( 60^−1 ) and µmol to mol ( 10^−6 ) , and the time unit represented by an iteration ( 0 .
2 sec ) ., In the model the individual agents ( Cyd , Cyo , ArcB , ArcA , FNR and O2 ) are able to move and interact within the confines of their respective locations in a 3-D cylinder representing the E . coli cell ., To control the velocity of agents , the maximal distances they can move in 3-D space during one iteration ( step length ) were pre-defined ( Table 4 ) ., Thus , a step length is pre-defined in the program header file ( . h ) and for each movement , this is multiplied by a randomly generated value within [ 0 , 1 ] to obtain a random moving distance , which in turn is directed along a 3-D direction ( movement vector ) that was also randomly generated within defined spatial regions ., An example is shown in Figure 6 to illustrate the movements of an O2 molecule when it enters the cell ., Interactions between agents depend upon the biological rules governing their properties and being in close enough proximity to react ., The interaction radius of an agent encapsulates the 3-D space within which reactions occur ., As the interaction radii cannot be measured , they were first estimated on the basis of known biological properties ., For the radii r1…4 , r12 and r13 ( Table 3 ) , arbitrary values were initially set at 31% AU , and the model was then trained to match the experimental result for the number of FNR dimers at 31% AU ( Table 5 ) ., The modeled output of FNR dimer number at steady-state was compared with the experimental data , and the difference suggested re-adjustment of interaction radii ., The adjusted radii were then tested against the FNR dimer numbers at 85% , 115% and 217% AU ( Table 5 ) during model validation , and the results indicate that the interaction radii values are capable of describing the behavior of the system ., The interaction radii of Cyd and Cyo with O2 reflect their relative affinities for O2 ( i . e .
Cyd has a high O2 affinity and thus reacts more readily , with a 7 nm interaction radius , than Cyo , which has a lower affinity for O2 and a 3 nm interaction radius ) ., As , thus far , no accurate biological data is available for the ArcBA system , the radii r5…11 were arbitrarily defined and were refined by training the model to match current biological expectations ., The rod-shaped E . coli cell was modeled as a cylinder ( 500 nm×2000 nm ) 32 with the nucleoid represented as a sphere with a diameter of 250 nm at the centre of the cell ., The experimentally-based parameters and locations of the agents in their initial state are listed in Table 2 ., As the number of ArcB molecules has not been determined experimentally , this value was arbitrarily assigned ( see above ) ., The interaction rules for the agents are shown in Table 3 ( additional descriptions of an exemplar agent ( O2 ) and the rules for ArcBA and FNR are provided in Table S1 and Text S1 ) ., These rules , combined with the interaction radii , determine the final status of the system ., The scale of the model is such that high performance computers are required to implement it , and the flexible agent-based supercomputing framework FLAME ( http://www.flame.ac.uk ) was used to enable the simulation 33 , 34 ., For more information on FLAME see Figure S2 and Text S2 ., Plasmids encoding the FNR variants were constructed by site-directed mutagenesis ( QuikChange , Agilent ) of pGS196 , which contains a 5 . 65 kb fragment of wild-type fnr ligated into pBR322 35 ., The three isogenic plasmids pGS196 ( FNR ) , pGS2483 ( FNR I151A ) and pGS2405 ( FNR D154A ) were used to transform E . coli JRG4642 ( an fnr lac mutant strain ) containing a pRW50-based reporter plasmid carrying the lac-operon under the control of the FFgalΔ4 promoter 27 ., β-Galactosidase assays were carried out as described previously on strains grown in LBK medium at pH 7 .
2 containing 20 mM glucose 36 , 37 ., Cultures were grown either aerobically ( 25 ml culture in a 250 ml flask at 250 rpm agitation with 1∶100 inoculation ) or anaerobically ( statically in a fully sealed 17 ml tube with 1∶50 inoculation ) ., Cultures ( three biological replicates ) were grown until mid-exponential phase ( OD600 = 0 . 35 ) before assaying for β-galactosidase activity . | Introduction, Results/Discussion, Methods | In the presence of oxygen ( O2 ) the model bacterium Escherichia coli is able to conserve energy by aerobic respiration ., Two major terminal oxidases are involved in this process: Cyo has a relatively low affinity for O2 but is able to pump protons and hence is energetically efficient; Cyd has a high affinity for O2 but does not pump protons ., When E . coli encounters environments with different O2 availabilities , the expression of the genes encoding the alternative terminal oxidases , the cydAB and cyoABCDE operons , is regulated by two O2-responsive transcription factors , ArcA ( an indirect O2 sensor ) and FNR ( a direct O2 sensor ) ., It has been suggested that O2 consumption by the terminal oxidases located at the cytoplasmic membrane significantly affects the activities of ArcA and FNR in the bacterial nucleoid ., In this study , an agent-based modeling approach has been taken to spatially simulate the uptake and consumption of O2 by E . coli and the consequent modulation of ArcA and FNR activities based on experimental data obtained from highly controlled chemostat cultures ., The molecules of O2 , transcription factors and terminal oxidases are treated as individual agents and their behaviors and interactions are imitated in a simulated 3-D E . coli cell ., The model implies that there are two barriers that dampen the response of FNR to O2 , i . e .
consumption of O2 at the membrane by the terminal oxidases and reaction of O2 with cytoplasmic FNR ., Analysis of FNR variants suggested that the monomer-dimer transition is the key step in FNR-mediated repression of gene expression . | The model bacterium Escherichia coli has a modular electron transport chain that allows it to successfully compete in environments with differing oxygen ( O2 ) availabilities ., It has two well-characterized terminal oxidases , Cyd and Cyo ., Cyd has a very high affinity for O2 , whereas Cyo has a lower affinity , but is energetically more efficient ., Expression of the genes encoding Cyd and Cyo is controlled by two O2-responsive regulators , ArcBA and FNR ., However , it is not clear how O2 molecules enter the E . coli cell and how the locations of the terminal oxidases and the regulators influence the system ., An agent-based model is presented that simulates the interactions of O2 with the regulators and the oxidases in an E . coli cell ., The model suggests that O2 consumption by the oxidases at the cytoplasmic membrane and by FNR in the cytoplasm protects FNR bound to DNA in the nucleoid from inactivation and that dimerization of FNR in response to O2 depletion is the key step in FNR-mediated repression ., Thus , the focus of the agent-based model on spatial events provides information and new insight , allowing the effects of dysregulation of system components to be explored by facile addition or removal of agents . | systems biology, computer and information sciences, network analysis, regulatory networks, biology and life sciences, computational biology | null |
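The O2-supply calculation in the Methods of the row above can be checked numerically: the measured dissolution rate ( 5.898 µmol/L/min ), the Avogadro constant, the cell volume ( 0.3925 µm³ ) and the 0.2 s iteration fix the number of O2 molecules delivered to one cell per iteration. The product form used below is a reconstruction ( the paper's equation image was lost in extraction ), and the µm³-to-litre conversion of Vcell is an assumption, since the stated constant n = 3.3×10⁻⁹ accounts only for the µmol-to-mol and minutes-per-iteration conversions:

```python
# Sketch of the per-cell O2 supply calculation (reconstructed, not the
# paper's verbatim equation). Assumption: Vcell is converted from um^3
# to litres before multiplying by the molar supply rate.

N_A = 6.022e23          # Avogadro constant, molecules/mol
S_O2 = 5.898e-6         # O2 dissolution rate, mol/L/min (5.898 umol/L/min)
V_CELL_L = 0.3925e-15   # cell volume: 0.3925 um^3 = 3.925e-16 L
DT_MIN = 0.2 / 60.0     # one 0.2 s iteration, expressed in minutes

def o2_per_cell_per_iteration():
    """O2 molecules distributed to a single cell in one iteration."""
    return S_O2 * DT_MIN * N_A * V_CELL_L

# The paper folds the umol->mol (1e-6) and per-iteration-time (0.2/60)
# factors into a single constant n; this reproduces its stated value.
n = 1e-6 * (0.2 / 60.0)   # ~3.3e-9, matching the text

print(round(o2_per_cell_per_iteration(), 2))  # prints 4.65
```

The result, a handful of molecules per 0.2 s iteration, is consistent with treating individual O2 molecules as discrete agents entering the simulated cell.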
1,853 | journal.pcbi.1006171 | 2,018 | Thalamocortical and intracortical laminar connectivity determines sleep spindle properties | Sleep marks a profound change of brain state as manifested by the spontaneous emergence of characteristic oscillatory activities ., In humans , sleep spindles consist of waxing-and-waning bursts of field potentials oscillating at 11–15 Hz lasting for 0 . 5–3 s and recurring every 5–15 s ., Experimental and computational studies have identified that both the thalamus and the cortex are involved in the generation and propagation of spindles ., Spindles are known to occur in isolated thalamus after decortication in vivo and in thalamic slice recordings in vitro 1 , 2 , demonstrating that the thalamus is sufficient for spindle generation ., In in-vivo conditions , the cortex has been shown to be actively involved in the initiation and termination of spindles 3 as well as the long-range synchronization of spindles 4 , 5 ., Multiple lines of evidence indicate that spindle oscillations are linked to memory consolidation during sleep ., Spindle density is known to increase following training in hippocampal-dependent 6 as well as procedural 7 memory tasks ., Spindle density also correlates with better memory retention following sleep in verbal tasks 8 , 9 ., More recently , it was shown that pharmacologically increasing spindle density leads to better post-sleep performance in hippocampal-dependent learning tasks 10 ., Furthermore , spindle activity metrics , including amplitude and duration , were predictive of learning performance 11–13 , suggesting that spindle event occurrence , amplitude , and duration influence memory consolidation ., In human recordings , spindle occurrence and synchronization vary based on the recording modality ., Spindles recorded with magnetoencephalography ( MEG ) are more frequent and less synchronized , as compared to those recorded with electroencephalography ( EEG ) 14 ., It has been proposed that the
contrast between MEG and EEG spindles reflects the differential involvement of the core and matrix thalamocortical systems , respectively 15 ., Core projections are focal to layer IV , whereas matrix projections are widespread in upper layers 16 ., This hypothesis is supported by human laminar microelectrode data which demonstrated two spindle generators , one associated with middle cortical layers and the other superficial 17 ., Taken together , these studies suggest that there could be two systems of spindle generation within the cortex and that these correspond to the core and matrix anatomical networks ., However , the network and cellular mechanisms whereby the core and matrix systems interact to generate both independent and co-occurring spindles across cortical layers are not understood ., In this study , we developed a computational model of thalamus and cortex that replicates known features of spindle occurrence in MEG and EEG recordings ., While our previous efforts have been focused on the neural mechanisms involved in the generation of isolated spindles 5 , in this study we identified the critical mechanisms underlying the spontaneous generation of spindles across different cortical layers and their interactions ., Histograms of EEG and MEG gradiometer inter-spindle intervals are shown in Fig 1C ., For neither channel type are ISIs distributed normally as determined by Lilliefors tests ( D2571 = 0 . 1062 , p = 1 . 0e-3 , D4802 = 0 . 1022 , p = 1 . 0e-3 ) , suggesting that traditional descriptive statistics are of limited utility ., However , the peak ISI of the respective distributions is longer for EEG than for MEG ., In addition , a two-sample Kolmogorov-Smirnov test confirms that EEG and MEG ISIs are not drawn from the same distribution ( D2571 , 4802 = 0 . 079 , p = 1 .
5e-9 ) ., While the data were not found to be drawn from any parametric distribution with 95% confidence , an exponential fit ( MEG ) and lognormal fit ( EEG ) are shown in red overlay for illustrative purposes ., These data are consistent with previous empirical recordings 18 and suggest that sleep spindles have different properties across superficial vs . deep cortical layers ., To investigate the mechanisms behind distinct spindle properties across cortical locations as observed in EEG and MEG signals , we constructed a model of thalamus and cortex that incorporated the two characteristic thalamocortical systems: core and matrix ., These systems contained distinct thalamic populations that projected to the superficial ( matrix ) and middle ( core ) cortical layers ., Four cell types were used to model distinct cell populations: thalamocortical relay ( TC ) and reticular ( RE ) neurons in the thalamus , and excitatory pyramidal ( PY ) and inhibitory ( IN ) neurons in each of three layers of the cortical network ., A schematic representation of the synaptic connections and cortical geometry of the network model is shown in Fig 2 ., In the matrix system , both thalamocortical ( from matrix TCs to the apical dendrites of layer 5 pyramidal neurons ( PYs ) located in layer 1 ) and corticothalamic synapses ( from layer 5 PYs back to the thalamus ) formed diffuse connections ., The core system had a focal connection pattern in both thalamocortical ( from core TCs to PYs in layer III/IV ) and corticothalamic ( from layer VI PYs to the thalamus ) projections ., Because spindles recorded in the EEG signal reflect the activity of superficial layers while MEG records spindles originating from deeper layers ( Fig 1 and 19 ) , we compared the activity of the model’s matrix system , which has projections to the superficial layers , to empirical EEG recordings and compared the activity in model layer 3/4 to empirical MEG recordings ., In agreement with our previous studies
3 , 5 , 20 , 21 , simulated stage 2 sleep consisted of multiple spindle events involving thalamic and cortical neuronal populations ( Fig 3 ) ., During one such typical spindle event ( highlighted by the box in Fig 3A and 3B ) , cortical and thalamic neurons in both the core and matrix system had elevated and synchronized firing ( Fig 3A bottom ) , consistent with previous in-vivo experimental recordings 22 ., In the model , spindles within each system were initiated from spontaneous activity within cortical layers and then spread to thalamic neurons , similar to our previous study 5 ., The spontaneous activity due to miniature EPSPs in glutamatergic cortical synapses led to fluctuations in membrane voltage and sparse firing ., At random times , the miniature EPSPs summed such that a small number of locally connected PY neurons spiked within a short window ( <100 ms ) , which then induced spiking in thalamic cells through corticothalamic connections ., This initiated spindle oscillations in the thalamic population mediated by TC-RE interactions as described before 20 , 23 , 24 ., Thalamic spindles in turn propagated to the neocortex leading to joint thalamocortical spindle events whose features were shaped by the properties of thalamocortical and corticothalamic connections ., In this study , we examined how the process of spindle generation occurs in a thalamocortical network with mutually interacting core and matrix systems , wherein the thalamic network of each system is capable of generating spindles independently ., Based on the anatomical data 16 , the main difference between the modeled core and matrix systems lay in the radii , or fanout , of connections in thalamocortical and corticothalamic projections ( in the baseline model , the fanout was 10 times wider for the matrix compared to the core system ) ., Furthermore , the strength of each synaptic connection was scaled by the number of input connections to each neuron 25 , 26 , leading to weaker individual
thalamocortical projections in the matrix as compared to the core ., These differences in the strength and fanout of thalamocortical connectivity resulted in distinctive core and matrix spindle properties ( see Fig 3A , right vs left ) ., First , both cortical and thalamic spindles were more spatially focal in the core system as only a small subset of neurons was involved in a typical spindle event at any given time ., In contrast , within the matrix system spindles were global ( involving the entire cell population ) and highly synchronous across all cell types ., These results are consistent with our previous studies 5 and suggest that the connectivity properties of thalamocortical projections determine the degree of synchronization in the cortical network ., Second , spindle density was higher in the core system compared to the matrix system ., At every spatial location in the cortical network of the core system , the characteristic time between spindles was shorter compared to that between spindles in the matrix system ( Fig 3A left vs right ) ., In order to quantify the spatial and temporal properties of spindles , we computed an estimated LFP as an average of the dendritic synaptic currents for every group of 100 contiguous cortical neurons ., LFPs of the core system were estimated from the currents generated in the dendrites of layer 3/4 neurons while the LFP of the matrix system was computed from the dendritic currents of layer 5 neurons , located in the superficial cortical layers ( Fig 2 ) ., After applying a bandpass filter ( 6–15 Hz ) , the spatial properties of estimated core and matrix LFP ( Fig 3C ) closely matched the MEG and EEG recordings , respectively ( Fig 1 ) ., In subsequent analyses , we used this estimated LFP to further examine the properties of the spindle oscillations in the core and matrix systems ., We identified spindles in the estimated LFP using an automated spindle detection algorithm similar to that used in experimental studies (
details are provided in the method section ) ., The spindle density , defined as the number of spindles occurring per minute of simulation time , was greater in the core compared to the matrix ( Fig 4A ) as confirmed by an independent-sample t-test ( t ( 18 ) = 7 . 06 , p<0 . 001 across estimated LFP channels and t ( 2060 ) = 19 . 2 , p<0 . 001 across all spindles ) ., The results of this analysis agree with the experimental observation that MEG spindles occur more frequently than EEG spindles ., While the average spindle density was significantly different between the core and matrix , in both systems the distribution of inter-spindle intervals peaks below 4 seconds and has a long tail ( Fig 4B ) ., A two-sample KS test comparing the distributions of inter-spindle intervals confirmed that the intervals were derived from different distributions ( D1128 , 932 = 0 . 427 , p<0 . 001 ) ., The peak ISI of the core was shorter than that of the matrix system , suggesting that the core network experiences shorter and more frequent quiescence periods than the matrix population ., Furthermore , maximum-likelihood fits of the probability distributions ( red line in Fig 4B ) confirmed that the intervals of spindle occurrence cannot be described by a normal distribution ., The long tails of the distributions suggest that a Poisson-like process , as opposed to a periodic process , is responsible for spindle generation ., This observation is consistent with previous experimental results 18 , 27 and suggests that our computational model replicates essential statistical properties of spindles observed in in vivo experiments ., Several other features of simulated core and matrix spindles were similar to those found in experimental recordings ., The average spindle duration was significantly higher in the core compared to the matrix system ( Fig 4C ) as confirmed by an independent-sample t-test ( t ( 2060 ) = 16 . 3 , p<0 .
001 ) ., To quantify the difference in the spatial synchrony of spindles between the core and matrix systems , we computed the spatial correlation 28 between LFP groups at different distances ( measured by the location of a neuron group in the network ) ., The correlation strength decreased with distance for both systems ( Fig 4D ) ., However , the spindles in the core system were less spatially correlated overall when compared to spindles in the matrix system ., Simultaneous EEG and MEG measurements have found that about 50% of MEG spindles co-occur with EEG spindles , while about 85% of EEG spindles co-occur with MEG spindles 29 ., Further , a spindle detected in the EEG signal is found to co-occur with about 66% more MEG channels than a spindle detected in MEG ., Our model generates spindling patterns consistent with these features ., The co-occurrence probability revealed that during periods of spindles in the matrix system , there was an approximately 80% probability that the core was also generating spindles ( Fig 4E ) ., In contrast , there was only a 40% probability of observing a matrix spindle during a core system spindle ., An independent-sample t-test confirmed this difference between the systems across estimated LFP channels ( t ( 14 ) = 31 . 4 , p<0 . 001 ) ., Furthermore , we observed that the number of LFP channels that were simultaneously activated during a spindle event in the core system was higher when a spindle co-occurred in the matrix versus times when the spindles only occurred in the core ( Fig 4F , t ( 14 ) = 67 . 2 , p<0 .
001 ) ., This suggests that the co-occurrences of spindles in both systems are rare events but lead to widespread activation in both the core and matrix when they take place ., Finally , we examined the delay between spindles in the core and matrix systems ( Fig 4G ) ., We observed that on average ( red line in Fig 4G ) , spindles originated in the core system and then spread to the matrix system with a mean delay of about 300 ms ( delay was measured as the difference in onset times between co-occurring spindles within a window of 2 , 500 ms; negative delay values indicate spindles in which the core preceded the matrix ) ., The peak at -750 ms corresponds to spindles originating from the core system , while the peak at +750 ms suggests that at some network sites , spindles originated in the matrix system and then spread to the core system ., While there were almost no events in which the matrix preceded the core by more than 1 sec ( right of Fig 4G ) , many events occurred in which the core preceded the matrix by more than 1 sec ( left of Fig 4G ) ., In sum , these results suggest that spindles were frequently initiated locally in the core system , then propagated to and spread throughout the matrix system ., This can trigger spindles at other locations of the core , so eventually , even regions in the core system that were not previously involved become recruited ., These findings explain the experimental result that spindles are observed in more MEG channels when they also co-occur in the EEG 29 ., We leveraged our model to examine factors that may influence spindle occurrence across cortical layers ., The main difference between the core and matrix systems in the model was the breadth or fanout of the thalamic projections to the cortical network ., Neuroanatomical studies suggest that the core system has focused projections while the matrix system projects widely 16 ., Here , we assessed the impacts of this characteristic by systematically varying the
connection footprint of the thalamic matrix to superficial cortical regions , while holding the fanout of the thalamic core to layer 3/4 projections constant ., We also modulated the corticothalamic projections in proportion to the thalamocortical projections ., Using the estimated LFP from the cortical layers corresponding to the core and matrix systems , respectively , we quantified various spindle properties as the fanout was modulated ., Spindle density ( the number of spindles per minute ) in both layers was sensitive to the matrix system’s fanout ., ANOVA confirmed significant effects of fanout and layer location , as well as an interaction between layer and fanout ( fanout: F ( 6 , 112 ) = 66 . 4; p<0 . 01 , Layer: F ( 1 , 112 ) = 65 . 18; p<0 . 01 and interaction F ( 6 , 112 ) = 22 . 8; p<0 . 01 ) ., When the matrix and core thalamus had similar fanouts ( ratio 1 and 2 . 5 in Fig 5B ) , we observed a slightly higher density of spindles in the matrix than in the core system ., This observation is consistent with the properties of these circuits ( see Fig 2 ) , wherein the matrix system contains direct reciprocal projections connecting cortical and thalamic subpopulations and the core system routes indirect projections from cortical ( layer III/IV ) neurons through layer VI to the thalamic nucleus ., When the thalamocortical fanout of the matrix system was increased to above ~5 times the size of the core system , the density of spindles in the matrix system was reduced to around 4 spindles per minute ., Interestingly , the density of spindles in the core system was also reduced when the thalamocortical fanout of the matrix system was further increased to above ~10 times that of the core system ( ratio above 10 in Fig 5B ) ., This suggests that spindle density in both systems is determined not only by the radius of thalamocortical vs .
corticothalamic projections , but also by interactions between the systems among the cortical layers ., We further expound on the role of these cortical connections in the next section ., We also examined the effect of thalamocortical fanout on the distribution of inter-spindle intervals ( Fig 5C ) ., Although the mean value was largely independent of the projection radius , a long-tailed distribution was observed for all values of fanout in the core ., In contrast , in the matrix system the mean and peak of the inter-spindle interval shifted to the right ( longer intervals ) with increased fanout ., With large fanouts , the majority of matrix system spindles had very long periods of silence ( 10–15 s ) between them ., This suggests that thalamocortical fanout determines the peak of the inter-spindle interval distribution , but does not alter the stochastic nature of spindle occurrence ., The degree of thalamocortical fanout also influenced the co-occurrence of spindles in the core and matrix systems ( Fig 5D ) ., Increasing the fanout of the matrix system reduced spindle co-occurrence between the two systems ., This reduction resulted mainly from lower spindle density in both layers ., However , the co-occurrence of core spindles during matrix spindles was higher for all values of fanout when matrix thalamocortical projections were at least 5 times broader than core projections ., This suggests that the difference in spindle co-occurrence between EEG and MEG as observed in experiments 14 depends mainly on the difference in the radius of thalamocortical projections between the core and matrix systems , while the overall level of co-occurrence is determined by the interaction between cortical layers ., We examined how spatial correlations during periods of spindles vary depending on the fanout of thalamocortical projections ., The spatial correlation quantifies the degree of synchronization in the estimated LFP signals of network locations as a function of the distance
between them ., As expected , increasing the distance reduced the spatial correlation ( Fig 4D ) ., We next measured the mean value of the spatial correlation for each fanout condition ., The mean correlation increased as a function of the fanout in the matrix system ( Fig 5A ) ., However , the spatial correlation within the core , and between the core and matrix systems , did not change with increases in the fanout , suggesting that the spatial synchronization of core spindles is largely influenced by thalamocortical fanout but not by interactions between the core and matrix systems as was observed for spindle density ., Does intra-cortical excitatory connectivity between layer 3/4 of the core system and layer 5 of the matrix system affect spindle occurrence ?, To answer this question , we first varied the strength of excitatory connections ( AMPA and NMDA ) from the core to matrix pyramidal neurons ( Fig 6A and 6B ) ., Here the reference point ( or 100% ) corresponds to the strength used in previous simulations , i . e . half the strength of a within-layer connection ., The spindle density varied with the strength of the interlaminar connections ( Fig 6A ) ., For low connectivity strengths ( below 100% ) , the spindle density of the matrix system was reduced significantly , while at high strengths ( above 140% ) the matrix system spindle density exceeded that of the control ( 100% ) ., There were significant effects of connection strength and layer on the spindle density , as well as an interaction between the two factors ( connection strength: F ( 5 , 96 ) = 24 . 7; p<0 . 01 , layer: F ( 5 , 96 ) = 386 . 6; p<0 . 01 and interaction F ( 5 , 96 ) = 36 . 9; p<0 . 
01 ) , suggesting a layer-specific effect of modulating excitatory interlaminar connection strength ., Similar to the spindle density , spindle co-occurrence between the core and matrix systems also increased as a function of interlaminar connection strength , reaching 80% for both the core and matrix at 150% connectivity ., In contrast , changing the strength of excitatory connections from layer 5 to layer 3/4 had little effect on the spindle density ( Fig 6C ) ., Taken together , these results suggest that the strength of the cortical core-to-matrix excitatory connections is one of the critical factors in determining spindle density and co-occurrence among spindles across both cortical laminae and the core/matrix systems ., Using computational modeling and data from EEG/MEG recordings in humans we found that the properties of sleep spindles vary across cortical layers and are influenced by thalamocortical , corticothalamic and cortico-laminar connections ., This study was motivated by empirical findings demonstrating that spindles measured in EEG have different synchronization properties from those measured in MEG 14 , 29 ., EEG spindles occur less frequently and more synchronously in comparison to MEG spindles ., Our new study confirms the speculation that anatomical differences between the matrix thalamocortical system , which has broader projections that target the cortex superficially , and the core system , which consists of focal projections which target the middle layers , can account for the differences between EEG and MEG signals ., Furthermore , we discovered that the strength of corticocortical feedforward excitatory connections from the core to matrix neurons determines the spindle density in the matrix system , which predicts a specific neural mechanism for the interactions observed between MEG and EEG spindles ., There were several novel findings in this study ., First , we developed a novel computational model of sleep spindling in which spindles
manifested as a rare but global synchronous occurrence in the matrix pathway and a frequent but local occurrence in the core pathway ., In other words , many spontaneous spindles occurred locally in the core system but only occasionally did this lead to globally organized spindles appearing in the matrix system ., As a result , only a fraction of spindles co-occurred between the pathways ( about 80% in matrix and 40% in core pathway ) ., This is consistent with data reported for EEG vs MEG in vivo ( Fig 1 ) ., In contrast , in our previous models 3 , 5 , spindles were induced by external stimulation and always occurred simultaneously in the core and matrix systems , but with different degrees of internal synchrony ., In addition , these studies did not examine how the core and matrix pathways interact during spontaneously occurring spindles ., Second , in this study we found that the distribution of the inter-spindle intervals between spontaneously occurring spindles in both the core and matrix pathways had long tails similar to a log-normal distribution ., This result is consistent with analyses of MEG and EEG data reported in this study and in our prior study 18 ., In our previous models 3 , 5 , spindles were induced by external stimulation and the statistics of spontaneously occurring spindles could not be explored ., Third , we demonstrated that the strength of thalamocortical and corticothalamic connections determined the density and occurrence of spontaneously generated spindles ., The spindle density was higher in the core pathway as compared to the matrix pathway with high co-occurrence of core spindles with matrix spindles ., These findings were corroborated with experimental evidence from EEG/MEG recordings ., Finally , we reported that laminar connections between the core and matrix could be a significant factor in determining spindle density , suggesting a possible mechanism of learning ., When the strength of these connections was increased in the 
model , there was a significant increase in spindle occurrence , similar to the experimentally observed increase in spindle density following recent learning 10 ., The origin of sleep spindle activity has been linked to thalamic oscillators based on a broad range of experimental studies 2 , 30 , 31 ., The excitatory and inhibitory connections between thalamic relay and reticular neurons are critical in generating spindles 20 , 23 , 32 , 33 ., However , in intact brain , the properties of sleep spindles are also shaped by cortical networks ., Indeed , the onset of a spindle oscillation and its termination are both dependent on cortical input to the thalamus 3 , 34 , 35 ., In model studies , spindle oscillations in the thalamus are initiated when sufficiently strong activity in the cortex activates the thalamic network , and spindle termination is partially mediated by the desynchronization of corticothalamic input towards the end of spindles 3 , 32 ., However , in simultaneous cortical and thalamic studies in humans , thalamic spindles were found to be tightly coupled to a preceding downstate , which in turn was triggered by converging cortical downstates 36 ., Further modeling is required to reconcile these experimental results ., In addition , thalamocortical interactions are known to be integral to the synchronization of spindles 5 , 33 ., In our new study , the core thalamocortical system revealed relatively high spindle density produced by focal and strong thalamocortical and corticothalamic projections ., Such a pattern of connectivity between core thalamus and middle cortical layers allowed input from a small region of the cortex to initiate and maintain focal spindles in the core system ., In contrast , the matrix system had relatively weak and broad thalamocortical connections requiring synchronized activity in broader cortical regions in order to initiate spindles in the thalamus ., We previously reported 5 that ( 1 ) within a single spindle event the 
synchrony of the neuronal firing is higher in the matrix than in the core system; ( 2 ) spindles are initiated in the core and with some delay in the matrix system ., The overall density of core and matrix spindle events was , however , the same in these earlier models ., In the new study we extended these previous results by explaining differences in the global spatio-temporal structure of spindle activity between the core and matrix systems ., Our new model predicts that the focal nature of the core thalamocortical connectivity can explain the more frequent occurrence of spindles in the core system as observed in vivo ., The strength of core-to-matrix intracortical connections determined the probability of core spindles to “propagate” to the matrix system ., In our new model core spindles remained localized and never involved the entire network , again in agreement with in vivo data ., We observed that the distribution of inter-spindle intervals reflects a non-periodic stochastic process such as a Poisson process , which is consistent with previous data 18 , 27 ., The state of the thalamocortical network , determined by the level of the intrinsic and synaptic conductances , contributed to the stochastic nature of spindle occurrence ., Building on our previous work 21 , we chose the intrinsic and synaptic properties in the model that match those in stage 2 sleep , a brain state when concentrations of acetylcholine and monoamines are reduced 37–39 ., As a consequence , the K-leak currents and excitatory intracortical connections were set higher than in an awake-like state due to the reduction of acetylcholine and norepinephrine 40 ., The high K-leak currents resulted in sparse spontaneous cortical firing during periods between spindles with occasional surges of local synchrony sustained by recurrent excitation within the cortex that could trigger spindle oscillations in the thalamus ., Note that this mechanism may be different from spindle initiation during
slow oscillation , when spindle activity appears to be initiated during Down state in thalamus 35 ., Furthermore , the release of miniature EPSPs and IPSPs in the cortex was implemented as a Poisson process that contributed to the stochastic nature of the baseline activity ., All these factors led to a variable inter-spindle interval with long periods of silence when activity in the cortex was not sufficient to induce spindles ., While it is known that an excitable medium with noise has a Poisson event distribution in reduced systems 41 , here we show that a detailed biophysical model of spindle generation may lead to a Poisson process due to specific intrinsic and network properties ., Layer IV excitatory neurons have a smaller dendritic structure compared to Layer V excitatory neurons 42 ., Direct recordings and detailed dendritic reconstructions have shown large post-synaptic potentials in layer IV due to core thalamic input 42 , 43 ., We examined the role of thalamocortical and corticothalamic connections in a thalamocortical network with only one cortical layer ( S1 Fig ) ., We found that increasing the synaptic strength of thalamocortical and corticothalamic connections both increased the density and duration of spindles; however , it did not influence their synchronization ( S1A Fig ) ., In contrast , changing fanout led to an increase in spindle density , duration , and synchronization ., Furthermore , we examined the impact of thalamocortical and corticothalamic connections individually without applying a synaptic normalization rule ( see Methods ) ., We observed that the thalamocortical connections had a higher impact on spindle properties than corticothalamic connections ( S1B Fig ) ., In our full model with multiple layers , which included a weight normalization rule and wider fanout of the matrix pathway ( based on experimental findings 16 ) , the synaptic strength of each thalamocortical synapse in the core pathway was higher than that in the matrix
pathway ., The exact value of the synaptic strength was chosen from the reduced model to match experimentally observed spindle durations , as observed in EEG/MEG and laminar recordings 17 ., The simultaneous EEG and MEG recordings reported here and in our previous publications 14 , 29 revealed that, ( a ) MEG spindles occur earlier compared to the EEG spindles and, ( b ) EEG spindles are seen in a higher number of the MEG sensors compared to the spindles occurring only in the MEG recordings ., This resembles our current findings , in which the number of regions that were spindling in the core system during a matrix spindle was higher than when there was no spindle in the matrix system ., Further , the distribution of spindle onset delays between the systems indicates that during matrix spindles some neurons of the core system fired early , and presumably contributed to the initiation of the matrix spindle , while others fired late and were recruited ., Taken together , all the evidence suggests a characteristic and complex spatiotemporal evolution of spindle activity during co-occurring spindles , where spindles in the core spread to the matrix and in turn activate wider regions in the core leading to synchronized activation across cortical layers that is reflected by strong activity in both EEG and MEG ., Thus , the model predicts that co-occurring spindles could lead to the recruitment of the large cortical areas , which indeed has been reported in previous studies 28 , 44 ., At the same time , local spindles occurring in the model within deep cortical l | Introduction, Results, Discussion, Materials and methods | Sleep spindles are brief oscillatory events during non-rapid eye movement ( NREM ) sleep ., Spindle density and synchronization properties are different in MEG versus EEG recordings in humans and also vary with learning performance , suggesting spindle involvement in memory consolidation ., Here , using computational models , we identified network 
mechanisms that may explain differences in spindle properties across cortical structures ., First , we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems ., The matrix system , projecting superficially , has wider thalamocortical fanout compared to the core system , which projects to middle layers , and requires the recruitment of a larger population of neurons to initiate a spindle ., This property was sufficient to explain lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers , as observed in the EEG signal ., In contrast , spindles in the core system occurred more frequently but less synchronously , as observed in the MEG recordings ., Furthermore , consistent with human recordings , in the model , spindles occurred independently in the core system but the matrix system spindles commonly co-occurred with core spindles ., We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system , leading to widespread spindle activity ., Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for increased spindle density as has been observed during learning . 
| The density of sleep spindles has been shown to correlate with memory consolidation ., Sleep spindles occur more often in human MEG than EEG recordings ., We developed a thalamocortical network model that is capable of spontaneous generation of spindles across cortical layers and that captures the essential statistical features of spindles observed empirically ., Our study predicts that differences in thalamocortical connectivity , known from anatomical studies , are sufficient to explain the differences in the spindle properties between EEG and MEG which are observed in human recordings ., Furthermore , our model predicts that intracortical connectivity between cortical layers , a property influenced by sleep preceding learning , increases spindle density ., Results from our study highlight the role of intracortical and thalamocortical projections on the occurrence and properties of spindles . | learning, medicine and health sciences, sleep, brain electrophysiology, brain, electrophysiology, social sciences, neuroscience, learning and memory, physiological processes, clinical medicine, cognitive psychology, brain mapping, network analysis, bioassays and physiological analysis, neuronal dendrites, neuroimaging, electroencephalography, research and analysis methods, computer and information sciences, imaging techniques, clinical neurophysiology, animal cells, electrophysiological techniques, thalamus, cellular neuroscience, psychology, cell biology, anatomy, physiology, neurons, biology and life sciences, cellular types, magnetoencephalography, cognitive science, neurophysiology | null |
430 | journal.pcbi.1000425 | 2,009 | Dynamic Modeling of Vaccinating Behavior as a Function of Individual Beliefs | In the UK , MMR vaccine uptake started to decline after a controversial study linking MMR vaccine to autism 6 ., In a decade , vaccine coverage went well below the target herd immunity level of 95% ., Despite the confidence of researchers and most health professionals on the vaccine safety , the confidence of the public was deeply affected ., In an attempt to find ways to restore this confidence , several studies were carried out to identify factors associated with parents unwillingness to vaccinate their children ., They found that ‘Not receiving unbiased and adequate information from health professionals about vaccine safety’ and ‘medias adverse publicity’ were the most common reasons influencing uptake 7 ., Other important factors were: ‘lack of belief in information from the government sources’; ‘fear of general practitioners promoting the vaccine for personal reasons’; and ‘media scare’ ., Note that during this period the risk of acquiring measles was very low due to previously high vaccination coverage ., Sylvatic yellow fever ( SYF ) is a zoonotic disease , endemic in the north and central regions of Brazil ., Approximately 10% of infections with this flavivirus are severe and result in hemorrhagic fever , with case fatality of 50% 8 ., Since the re-introduction of A . 
aegypti in Brazil ( the urban vector of dengue and yellow fever ) , the potential reemergence of urban yellow fever is of concern 9 ., In Brazil , it is estimated that approximately 95% of the population living in the yellow fever endemic regions have been vaccinated ., In this area , small outbreaks occur periodically , especially during the rainy season , and larger ones are observed every 7 to 10 years 10 , in response to increased viral activity within the environmental reservoir ., In 2007 , increased detection of dead monkeys in the endemic zone , led the government to implement vaccine campaigns targeting travellers to these areas and the small fraction of the resident population who were still not protected by the vaccine ., The goal was to vaccinate 10–15% of the local population ., Intense notification in the press regarding the death of monkeys near urban areas , and intense coverage of all subsequent suspected and confirmed human cases and death events led to an almost country-wide disease scare ( Figure 1 ) , incompatible with the real risks 5 , which caused serious economic and health management problems , including waste of doses with already immunized people ( 60% of the population was vaccinated when only 10–15% would be sufficient ) , adverse events from over vaccination ( individuals taking multiple doses to ‘guarantee’ protection ) , national vaccine shortage and international vaccine shortage , since Brazil stopped exporting YF vaccine to supply domestic vaccination rush ( www . who . 
int/csr/don/2008_02_07/en/ ) ., The importance of public perceptions and collective behavior for the outcome of immunization campaigns is starting to be acknowledged by theoreticians 9 , 11 , 12 ., These factors have been examined in a game theoretical framework , where the influence of certain types of vaccinating behaviour on the stability and equilibria of epidemic models is analyzed ., In the present work , we propose a model for individual immunization behavior as an inference problem: Instead of working with fixed behaviors , we develop a dynamic model of belief update , which in turn determines individual behavior ., An individuals willingness to vaccinate is derived from his perception of disease risk and vaccine safety , which is updated in a Bayesian framework , according to the epidemiological facts each individual is exposed to , in their daily life ., We also explore the global effects of individual decisions on vaccination adherence at the population level ., In summary , we propose a framework to integrate dynamic modeling of learning ( belief updating ) with decision and population dynamics ., We ran the model as described above for 100 days with parameters given by Table 1 , under various scenarios to reveal the interplay of belief and action under the proposed model ., Figures 2 and 3 show a summary output of the model dynamics under contrasting conditions ., In Figure 2 , we have VAE ( Vaccine adverse events ) preceding the occurrence of severe disease events ., As expected , VAE become the strongest influence on , keeping low with consequences to the attained vaccination coverage at the end of the simulation ., We characterize this behavior as a ‘vaccine scare’ behavior ., In a different scenario , Figure 3 , we observe the effect of severe disease events occurring at high frequency at the beginning of the epidemics ., In this case , disease scare pushes willingness to vaccinate ( ) to high levels ., This is very clear in Figure 3 where there is a
cluster of serious disease cases around the 30th day of simulation ., Right after the occurrence of this cluster , we see rise sharply above , meaning that willingness to vaccinate ( ) in this week was mainly driven by disease scare instead of considerations about vaccine safety ( ) ., A similar effect can be observed in Figure 2 , starting from day 45 or so ., Only here the impact of a cluster of serious disease cases is diminished by the effects of VAEs , and the fact that there are not many people left to make the decision of whether or not to vaccinate ., The impact of individual beliefs on vaccine coverage is highly dependent on the visibility of the rare VAE ., Figure 4 shows the impact of the media amplification factor on and vaccination coverage after ≈14 weeks , for an infectious disease with and ., If no media amplification occurs , willingness to vaccinate and vaccine coverage are high , as severe disease events are common and severe adverse events are relatively rare ., As vaccine adverse events are amplified by the media , individuals willingness to vaccinate at the end of the 14 weeks tends to decrease ., Such belief change , however , has a low impact on the vaccine coverage ., The explanation for this is that vaccine coverage is a cumulative measure and , when VAE appear , a relatively large fraction of the population had already been vaccinated ., These results suggest that VAE should not strongly impact the outcome of an ongoing mass vaccination campaign , although it could affect the success of future campaigns ., Fixing amplification at and , we investigated how ( at the end of the simulation ) and vaccine coverage would be affected by increasing the rate of vaccine adverse events , ( Figure 5 ) ., As increases above , willingness to vaccinate drops quickly , while vaccine coverage diminishes only slightly ., In the present world of mass media channels and rapid and inexpensive communications , the spread of information , independent of its quality , is
very effective , leading to considerable uncertainty and heterogeneity in public opinions ., The yellow fever scare in Brazil demonstrated clearly the impact of public opinion on the outcome of a vaccination campaign , and the difficulty in dealing with scare events ., For example , no official press release was taken at face value , as it was always colored by political issues 5 ., On multiple occasions , people reported to the press that they would do the exact opposite of what was being recommended by public health authorities due to their mistrust of such authorities ., This example shows us the complexity of modeling and predicting the success of disease containment strategies ., The goal of this work was to integrate into a unified dynamical modeling framework , the opinion and decision components that underlie the public response to mass vaccination campaigns , especially when vaccine or disease scares have a chance to occur ., The proposed analytical framework , although not intentionally parameterized to match any specific real scenario , qualitatively captured the temporal dynamics of vaccine uptake in Brasilia ( Figure 1 ) , a clear case of disease scare ( compare with simulation results , presented in Figure 2 ) ., After conducting large scale studies on the acceptance of the Influenza vaccine , Chapman et al .
13 conclude that perceived side-effects and effectiveness of vaccination are important factors in peoples decision to vaccinate ., Our model suggests that , if the perception of disease risk is high , it leads to a higher initial willingness to vaccinate , while adverse events of vaccination , even when widely publicized by the media , tend to have less impact on vaccination coverage ., VAE are more effective when happening at the beginning of vaccination campaigns , when they can sway the opinions of a larger audience ., Although disease scare can counteract , to a certain extent the undesired effects of VAE , public health officials must also be aware of the risks involved in overusing disease risk information , in vaccination campaign advertisements since this can lead to a rush towards immunization as seen in the 2008 Yellow Fever scare in Brazil ., Vaccinating behavior dynamics has been modelled in different ways in the recent literature , from behaviors that aim to maximize self-interest 12 to imitation behaviors 14 ., In this paper we modeled these perceptions dynamically , and showed its relevance to decision-making dynamics and the consequences to the underlying epidemiological system and efficacy of vaccination campaigns ., We highlight two aspects of our modeling approach that we think provide important contributions to the field ., First , the process through which people update beliefs which will direct their decisions , was modeled using a Bayesian framework ., We trust this approach to be the most natural one as the Bayesian definition of probability is based on the concept of belief and Bayesian inference methodology was developed as a representation human learning behavior 15 ., The learning process is achieved through an iterative incorporation of newly available information , which naturally fit into the standard Bayesian scheme ., Among the advantages of this approach is its ability to handle the entire probability distributions of the 
parameters of interest instead of operating on their expected values which would be the case in a classical frequentist framework ., This is especially important where highly asymmetrical distributions are expected ., The resulting set of probability distributions provides more complete model-based hypotheses to be tested against data ., The inferential framework has an added benefit of simplicity and computational efficiency due to the use of conjugate priors , which gives us a closed-form expression for the Bayesian posterior without the need of complex posterior sampling algorithms such as MCMC ., The second contribution is the articulation between the belief and decision models through logarithmic pooling ., Logarithmic pooling has been applied in many fields 16 , 17 to derive consensus from multiple expert opinions described as probability distributions ., Genest et al . 15 , argue that Logarithmic pooling is the best way to combine probability distributions due to its property of “external Bayesianity” ., This means that finding the consensus among distributions commutes with revising distributions using the Bayes formula , with the consequence that the results of this procedure can be interpreted as a single Bayesian probability update ., Here , we apply logarithmic pooling to integrate the multiple sources of information ( equation ( 1 ) ) which go into the decision of whether or not to vaccinate ., In this context , the property of external Bayesianity is important since it allows the operations of pooling and Bayesian update ( of , equation ( 2 ) ) to be combined in any order , depending only on the availability of data ., This framework can be easily used as a base to compose more complex models ., Extended models might include multiple beliefs as a joint probability distribution , more layers of decision or multiple , independently evolving belief systems ., The contact structure of the model was intentionally kept as simple as possible , since the
goal of the model was to focus on the belief dynamics ., Therefore , a reasonably simple epidemiological model , with a simple spatial structure ( local and global spaces ) was constructed to drive the belief dynamics without adding potentially confounding extra dynamics ., In this work we have played with various probability levels of VAEs and SDs in an attempt to cover the most common and likely more interesting portions of parameter space ., However , to model specific scenarios , data regarding the actual probabilities of VAEs and SDs are a pre-requisite ., Also important are data regarding the perception of vaccine safety and efficacy 18 , obtainable through opinion surveys which could also include questions about factors driving changes in vaccination behavior ., We therefore suggest that questions regarding these variables should be included in future surveys concerning vaccine-preventable diseases ., This would improve our ability to predict the outcome of vaccination campaigns ., The belief model describes the temporal evolution of each individuals willingness to vaccinate , , in response to his evaluation of vaccine safety and disease risk ., To account for the uncertainties regarding vaccinating behavior , is modeled as a random variable , whose distribution is updated weekly as the individual observes new events ., The update process is based on logarithmically pooling with other random variables as described below ., Logarithmic pooling is a standard way of combining probability distributions representing opinions , to form a consensus 15 ., The belief update model takes the form: ( 1 ) where must equal one as act as weights of the pooling operation ., We attributed equal weights to and ( ) , with remaining taking values according to the following conditions: where is the number of serious disease cases witnessed by the individual , and and are random variables describing individuals belief regarding vaccine safety and disease risk , respectively .,
The values for and are set to 1/2 since either or are to be pooled against the combination of and : ., This choice of weights corresponds to the most unassuming scenario regarding the relative importance of each information source; different weights may be chosen for different scenarios ., Every individual starts off with a very low expected value for the Beta-distributed ., The last term in ( 1 ) , , is a reduction force which causes to move towards the minimum value of ., This term is important since without it , the psychological effects of witnessing serious disease events would continue to influence the individuals decisions for an undetermined period of time ., Thus , allows us to include the memory of such events in the model ., By setting appropriately , we can model events that leave no memory as well as ones that are retained indefinitely ., We model disease spread in a hypothetical city represented by a multilevel metapopulation individual-based model where individuals belong to groups that in turn belong to groups of groups , and so on ( Figure 9 ) , forming a hierarchy of scales 20 ., In this hypothetical city , individuals live in households with exactly 4 members each; neighborhoods are composed of 100 households and sets of 10 neighborhoods form the citys zones ., During the simulation , individuals commute between home and a randomly chosen neighborhood anywhere in the population graph ., Each individual has a probability 0 .
25 of leaving home daily ., This same hierarchical structure is used to define local and global events ., Locally visible events can only be witnessed by people living in the same neighborhood while globally visible events are visible to the entire population regardless of place of residence ., The epidemiological model describes a population being invaded by a new pathogen ., This pathogen causes an acute infection , lasting 11 days ( incubation period of 6 days and an infectious period of 5 days ) ., Once in the infectious period , individuals have a fixed probability , of becoming seriously ill ., After recovery , individuals become fully immune ., The proportion of the population in each immunological state at time is labeled as and , which stands for susceptibles , exposed , infectious and recovered states ., At the same time the disease is introduced in the population , a vaccination campaign is started , making available doses per week to the entire population , meaning that individuals may have to compete for a dose if many decide to vaccinate at the same time ., Once an individual is vaccinated , if he/she has not been exposed yet , he/she moves directly to the recovered class , with full immunity ( thus , a perfect vaccine is assumed ) ., If the individual is in the incubation period of the disease , disease progression is unaffected by vaccination ., Vaccination carries with it a fixed chance of causing adverse effects ., Transmission dynamics is modelled as follows: at each discrete time step , , each individual contacts others in two groups: in his residence and in the public space ., The probability of getting infected at home is given by where is the probability of transmission per household contact and is the number of infected members in the house ., In the public space , that is , in the neighborhood chosen as destination for the daily commutations , each infected person contacts persons at random , and if the contact is with a susceptible , 
infection is transmitted with probability . | Introduction, Results, Discussion, Materials and Methods | Individual perception of vaccine safety is an important factor in determining a persons adherence to a vaccination program and its consequences for disease control ., This perception , or belief , about the safety of a given vaccine is not a static parameter but a variable subject to environmental influence ., To complicate matters , perception of risk ( or safety ) does not correspond to actual risk ., In this paper we propose a way to include the dynamics of such beliefs into a realistic epidemiological model , yielding a more complete depiction of the mechanisms underlying the unraveling of vaccination campaigns ., The methodology proposed is based on Bayesian inference and can be extended to model more complex belief systems associated with decision models ., We found the method is able to produce behaviors which approximate what has been observed in real vaccine and disease scare situations ., The framework presented comprises a set of useful tools for an adequate quantitative representation of a common yet complex public-health issue ., These tools include representation of beliefs as Bayesian probabilities , usage of logarithmic pooling to combine probability distributions representing opinions , and usage of natural conjugate priors to efficiently compute the Bayesian posterior ., This approach allowed a comprehensive treatment of the uncertainty regarding vaccination behavior in a realistic epidemiological model . 
| A frequently made assumption in population models is that individuals make decisions in a standard way , which tends to be fixed and set according to the modelers view on what is the most likely way individuals should behave ., In this paper we acknowledge the importance of modeling behavioral changes ( in the form of beliefs/opinions ) as a dynamic variable in the model ., We also propose a way of mathematically modeling dynamic belief updates which is based on the very well established concept of a belief as a probability distribution and its temporal evolution as a direct application of the Bayes theorem ., We also propose the use of logarithmic pooling as an optimal way of combining different opinions which must be considered when making a decision ., To argue for the relevance of this issue , we present a model of vaccinating behaviour with dynamic belief updates , modeled after real scenarios of vaccine and disease scare recorded in the recent literature . | mathematics/statistics, computational biology/ecosystem modeling, infectious diseases/epidemiology and control of infectious diseases | null |
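The household and public-space transmission rules described in this record can be sketched as a short simulation step ., This is a minimal illustration , not the paper's implementation: the closed-form household probability 1 − ( 1 − p_h )^I and the contact count k are assumptions , since the original symbols and formulas were lost in extraction .

```python
import random

def p_home_infection(p_h: float, n_infected_home: int) -> float:
    """Probability that a susceptible is infected at home. Assumes
    independent per-contact transmission with probability p_h; the
    closed form 1 - (1 - p_h)**I is an illustrative assumption, not
    the source's (unrecoverable) formula."""
    return 1.0 - (1.0 - p_h) ** n_infected_home

def public_space_step(p_c: float, k: int, prevalence: float,
                      rng: random.Random) -> bool:
    """One day of public-space mixing for a susceptible: k random
    contacts, each infectious with probability `prevalence`;
    transmission per infectious contact with probability p_c."""
    return any(rng.random() < prevalence and rng.random() < p_c
               for _ in range(k))

# Two infectious housemates, 30% per-contact transmission risk:
print(round(p_home_infection(0.3, 2), 2))  # 0.51
```

With this structure , competition for weekly vaccine doses and the SEIR state updates can be layered on top of the same per-step loop .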
425 | journal.pcbi.1006960 | 2,019 | Modeling the temporal dynamics of the gut microbial community in adults and infants | There is increasing recognition that the human gut microbiome is a contributor to many aspects of human physiology and health including obesity , non-alcoholic fatty liver disease , inflammatory diseases , cancer , metabolic diseases , aging , and neurodegenerative disorders 1–14 ., This suggests that the human gut microbiome may play important roles in the diagnosis , treatment , and ultimately prevention of human disease ., These applications require an understanding of the temporal variability of the microbiota over the lifespan of an individual particularly since we now recognize that our microbiota is highly dynamic , and that the mechanisms underlying these changes are linked to ecological resilience and host health 15–17 ., Due to the lack of data and insufficient methodology , we currently have major gaps in our understanding of fundamental mechanisms related to the temporal behavior of the microbiome ., Critically , we currently do not have a clear characterization of how and why our gut microbiome varies in time , and whether these dynamics are consistent across humans ., It is also unclear whether we can define ‘stable’ or ‘healthy’ dynamics as opposed to ‘abnormal’ or ‘unhealthy’ dynamics , which could potentially reflect an underlying health condition or an environmental factor affecting the individual , such as antibiotics exposure or diet ., Moreover , there is no consensus as to whether the gut microbial community structure varies continuously or jumps between discrete community states , and whether or not these states are shared across individuals 18 , 19 ., Notably , recent work 20 suggests that the human gut microbiome composition is dominated by environmental factors rather than by host genetics , emphasizing the dynamic nature of this ecosystem ., The need for understanding the temporal dynamics of the microbiome and its 
interaction with host attributes have led to a rise in longitudinal studies that record the temporal variation of microbial communities in a wide range of environments , including the human gut microbiome ., These time series studies are enabling increasingly comprehensive analyses of how the microbiome changes over time , which are in turn beginning to provide insights into fundamental questions about microbiome dynamics 16 , 17 , 21 ., One of the most fundamental questions that still remains unanswered is to what degree the microbial community in the gut is deterministically dependent on its initial composition ( e . g . , microbial composition at birth ) ., More generally , it is unknown to what degree the microbial composition of the gut at a given time determines the microbial composition at a later time ., Additionally , there is only preliminary evidence of the long-term effects of early life events on the gut microbial community composition , and it is currently unclear whether these long-term effects traverse through a predefined set of potential trajectories 21 , 22 ., To address these questions , it is important to quantify the dependency of the microbial community at a given time on past community composition 23 , 24 ., This task has been previously studied in theoretical settings ., Specifically , the generalized Lotka-Volterra family of models infer changes in community composition through defined species-species or species-resource interaction terms , and are popular for describing internal ecological dynamics ., Recently , a few methods that rely on deterministic regularized model fitting using generalized Lotka-Volterra equations have been proposed ( e . g . 
, 25–27 ) ., Nonetheless , the importance of pure autoregressive factors ( a stochastic process in which future values are a function of the weighted sum of past values ) in driving gut microbial dynamics is , as yet , unclear ., Other approaches that utilize the full potential of longitudinal data can often reveal insights about the autoregressive nature of the microbiome ., These include , for example , the sparse vector autoregression ( sVAR ) model ( Gibbons et al . 24 ) , which assumes linear dynamics and is built around an autoregressive type of model , ARIMA Poisson ( Ridenhour et al . 28 ) , which assumes log-linear dynamics and suggests modeling the read counts along time using Poisson regression , and TGP-CODA ( Aijo et al . 2018 29 ) , which uses a Bayesian probabilistic model that combines a multinomial distribution with Gaussian processes ., Particularly , Gibbons et al . 24 use the sparse vector autoregression ( sVAR ) model to show evidence that the human gut microbial community has two dynamic regimes: autoregressive and non-autoregressive ., The autoregressive regime includes taxa that are affected by the community composition at previous time points , while the non-autoregressive regime includes taxa whose appearance at a specific time is random and/or does not depend on the previous time points ., In this paper , we show that previous studies substantially underestimate the autoregressive component of the gut microbiome ., In order to quantify the dependency of taxa on the past composition of the microbial community , we introduce the Microbial community Temporal Variability Linear Mixed Model ( MTV-LMM ) , a ready-to-use scalable framework that can simultaneously identify and predict the dynamics of hundreds of time-dependent taxa across multiple hosts ., MTV-LMM is based on a linear mixed model , a heavily used tool in statistical genetics and other areas of genomics 30 , 31 ., Using MTV-LMM we introduce a novel concept we term
‘time-explainability’ , which corresponds to the fraction of temporal variance explained by the microbiome composition at previous time points ., Using time-explainability , researchers can select , in a rigorous manner , the microorganisms whose abundance can be explained by the community composition at previous time points ., MTV-LMM has a few notable advantages ., First , unlike the sVAR model and the Bayesian approach proposed by Aijo et al . 29 , MTV-LMM models all the individual hosts simultaneously , thus leveraging the information across an entire population while adjusting for the host’s effect ( e . g . , the host’s genetics or environment ) ., This provides MTV-LMM with increased power to detect temporal dependencies , as well as the ability to quantify the consistency of dynamics across individuals ., The Poisson regression method suggested by Ridenhour et al . 28 also utilizes the information from all individuals , but does not account for the individual effects , which may result in an inflated autoregressive component ., Second , MTV-LMM is computationally efficient , allowing it to model the dynamics of a complex ecosystem like the human gut microbiome by simultaneously evaluating the time-series of hundreds of taxa , across multiple hosts , in a timely manner ., Other methods ( e . g . , TGP-CODA 29 , MDSINE 26 , etc . ) can model only a small number of taxa ., Third , MTV-LMM can serve as a feature selection method , selecting only the taxa affected by the past composition of the microbiome ., The ability to identify these time-dependent taxa is crucial when fitting a time series model to study the microbial community temporal dynamics ., Finally , we demonstrate that MTV-LMM can serve as a standalone prediction model that outperforms commonly used models by an order of magnitude in predicting the taxa abundance ., We applied MTV-LMM to synthetic data , as suggested by Aijo et al .
2018 29 as well as to three real longitudinal studies of the gut microbiome ( David et al . 17 , Caporaso et al . 16 , and DIABIMMUNE 21 ) ., These datasets contain longitudinal abundance data using 16S rRNA gene sequencing ., Nonetheless , MTV-LMM is agnostic to the sequencing data type ( i . e . , 16s rRNA or shotgun sequencing ) ., Using MTV-LMM we find that in contrast to previous reports , a considerable portion of microbial taxa , in both infants and adults , display temporal structure that is predictable using the previous composition of the microbial community ., Moreover , we show that , on average , the time-explainability is an order of magnitude larger than previously estimated for these datasets ., We begin with an informal description of the main idea and utility of MTV-LMM ., A more comprehensive description can be found in the Methods ., MTV-LMM is motivated by our assumption that the temporal changes in the abundance of taxa are a time-homogeneous high-order Markov process ., MTV-LMM models the transitions of this Markov process by fitting a sequential linear mixed model ( LMM ) to predict the relative abundance of taxa at a given time point , given the microbial community composition at previous time points ., Intuitively , the linear mixed model correlates the similarity between the microbial community composition across different time points with the similarity of the taxa abundance at the next time points ., MTV-LMM is making use of two types of input data: ( 1 ) continuous relative abundance of focal taxa j at previous time points and ( 2 ) quantile-binned relative abundance of the rest of the microbial community at previous time points ., The output of MTV-LMM is prediction of continuous relative abundance , for each taxon , at future time points ., In order to apply linear mixed models , MTV-LMM generates a temporal kinship matrix , which represents the similarity between every pair of samples across time , where a sample is a normalization 
of taxa abundances at a given time point for a given individual ( see Methods ) ., When predicting the abundance of taxa j at time t , the model uses both the global state of the entire microbial community in the last q time points , as well as the abundance of taxa j in the previous p time points ., The parameters p and q are determined by the user , or can be determined using a cross-validation approach; a more formal description of their role is provided in the Methods ., MTV-LMM has the advantage of increased power due to a low number of parameters coupled with an inherent regularization mechanism , similar in essence to the widely used ridge regularization , which provides a natural interpretation of the model ., We evaluated MTV-LMM by testing its accuracy in predicting the abundance of taxa at a future time point using real time series data ., Such evaluation will mitigate overfitting , since the future data points are held out from the algorithm ., To measure accuracy on real data , we used the squared Pearson correlation coefficient between estimated and observed relative abundance along time , per taxon ., In addition we validated MTV-LMM using synthetic data , illustrating realistic dynamics and abundance distribution , as suggested by Aijo et al . 2018 29 ., Following 29 , we evaluate the performance of the model using the ‘estimation-error’ , defined to be the Euclidean distance between estimated and observed relative abundance , per time point ( see Supplementary Information S1 Note ) ., We used real time series data from three different datasets , each composed of longitudinal abundance data ., These three datasets are David et al . 17 ( 2 adult donors—DA , DB—average 250 time points per individual ) , Caporaso et al . 
16 ( 2 adult donors—M3 , F4—average 231 time points per individual ) , and the DIABIMMUNE dataset 21 ( 39 infant donors—average 28 time points per individual ) ., In these datasets , the temporal parameters p and q were estimated using a validation set , and ranged from 0 to 3 ., See Methods for further details ., We compared the results of MTV-LMM to common approaches that are widely used for temporal microbiome modeling , namely the AR ( 1 ) model ( see Methods ) , the sparse vector autoregression model sVAR 24 , the ARIMA Poisson regression 28 and TGP-CODA 29 ., Overall , MTV-LMM’s prediction accuracy is higher than AR’s ( Supplementary Information S1 Table ) and significantly outperforms both the sVAR method and the Poisson regression across all datasets , using real time-series data ( Fig 1 ) ., In addition , since TGP-CODA can not be fully applied to these real datasets ( due to scalability limitations ) , we used synthetic data , considering a scenario of 200 taxa and 70 time points with realistic dynamics and abundance distribution , as suggested by the authors of this method ., Similarly to the real data , MTV-LMM significantly outperforms all the compared methods ( Supplementary Information S1 Fig ) ., We applied MTV-LMM to the DIABIMMUNE infant dataset and estimated the species-species association matrix across all individuals , using 1440 taxa that passed a preliminary screening according to temporal presence-absence patterns ( see Methods ) ., We found that most of these effects are close to zero , implying a sparse association pattern ., Next , we applied a principal component analysis ( PCA ) to the estimated species-species associations and found a strong phylogenetic structure ( PerMANOVA P-value = 0 . 001 ) suggesting that closely related species have similar association patterns within the microbial community ( Fig 2 ) ., These findings are supported by Thompson et al . 
32 , who suggested that ecological interactions are phylogenetically conserved , where closely related species interact with similar partners ., Gomez et al . 33 tested these assumptions on a wide variety of hosts and found that generalized interactions can be evolutionary conserved ., We note that the association matrix estimated by MTV-LMM should be interpreted with caution since the number of possible associations is quadratic in the number of species , and it is , therefore , unfeasible to infer with high accuracy all the associations ., However , we can still aggregate information across species or higher taxonomic levels to uncover global patterns of the microbial composition dynamics ( e . g . , principal component analysis ) ., In order to address the fundamental question regarding the gut microbiota temporal variation , we quantify its autoregressive component ., Namely , we quantify to what degree the abundance of different taxa can be inferred based on the microbial community composition at previous time points ., In statistical genetics , the fraction of phenotypic variance explained by genetic factors is called heritability and is typically evaluated under an LMM framework 30 ., Intuitively , linear mixed models estimate heritability by measuring the correlation between the genetic similarity and the phenotypic similarity of pairs of individuals ., We used MTV-LMM to define an analogous concept that we term time-explainability , which corresponds to the fraction of temporal variance explained by the microbiome composition at previous time points ., In order to highlight the effect of the microbial community , we next estimated the time-explainability of taxa in each dataset , using the parameters q = 1 , p = 0 ., The resulting model corresponds to the formula: taxat = microbiome community ( t−1 ) + individual effect ( t−1 ) + unknown effects ., Of the taxa we examined , we identified a large portion of them to have a statistically significant 
time-explainability component across datasets ., Specifically , we found that over 85% of the taxa included in the temporal kinship matrix are significantly explained by the time-explainability component , with estimated time-explainability average levels of 23% in the DIABIMMUNE infant dataset ( sd = 15% ) , 21% in the Caporaso et al . ( 2011 ) dataset ( sd = 15% ) and 14% in the David el al . dataset ( sd = 10% ) ( Fig 3 , Supplementary Information S2 Fig ) ., Notably , we found that higher time explanability is associated with higher prediction accuracy ( Supplementary Information S3 Fig ) ., As a secondary analysis , we aggregated the time-explainability by taxonomic order , and found that in some orders ( non-autoregressive orders ) all taxa are non-autoregressive , while in others ( mixed orders ) we observed the presence of both autoregressive and non-autoregressive taxa ( Fig 4 , Supplementary Information S4 Fig ) , where an autoregressive taxa have a statistically significant time-explainability component ., Particularly , in the DIABIMMUNE infant data set , there are 7244 taxa , divided into 55 different orders ., However , the taxa recognized by MTV-LMM as autoregressive ( 1387 out of 7244 ) are represented in only 19 orders out of the 55 ., The remaining 36 orders do not include any autoregressive taxa ., Unlike the autoregressive organisms , these non-autoregressive organisms carry a strong phylogenetic structure ( t-test p-value < 10−16 ) , that may indicate a niche/habitat filtering ., This observation is consistent with the findings of Gibbons et al . 24 , who found a strong phylogenetic structure in the non-autoregressive organisms in the adult microbiome ., Notably , across all datasets , there is no significant correlation between the order dominance ( number of taxa in the order ) and the magnitude of its time-explainability component ( median Pearson r = 0 . 
12 ) ., For example , in the DIABIMMUNE data set , the proportion of autoregressive taxa within the 19 mixed orders varies between 2% and 75% , where the average is approximately 20% ., In the most dominant order , Clostridiales ( representing 68% of the taxa ) , approximately 20% of the taxa are autoregressive and the average time-explainability is 23% ., In the second most dominant order , Bacteroidales , approximately 35% of the taxa are autoregressive and the average time-explainability is 31% ., In the Bifidobacteriales order , approximately 75% of the taxa are autoregressive , and the average time-explainability is 19% ( Fig 4 ) ., We hypothesize that the large fraction of autoregressive taxa in the Bifidobacteriales order , specifically in the infants dataset , can be partially attributed to the finding made by 34 , according to which some sub-species in this order appear to be specialized in the fermentation of human milk oligosaccharides and thus can be detected in infants but not in adults ., This emphasizes the ability of MTV-LMM to identify taxa that have prominent temporal dynamics that are both habitat and host-specific ., As an example of MTV-LMM’s ability to differentiate autoregressive from non-autoregressive taxa within the same order , we examined Burkholderiales , a relatively rare order ( less than 2% of the taxa in the data ) with 76 taxa overall , where only 19 of which were recognized as autoregressive by MTV-LMM ., Indeed , by examining the temporal behavior of each non-autoregressive taxa in this order , we witnessed abrupt changes in abundance over time , where the maximal number of consecutive time points with abundance greater than 0 is very small ., On the other hand , in the autoregressive taxa , we witnessed a consistent temporal behavior , where the maximal number of consecutive time points with abundance greater than 0 is well over 10 ( Supplementary Information S5 Fig ) ., The colonization of the human gut begins at birth and is 
characterized by a succession of microbial consortia 35–38 , where the diversity and richness of the microbiome reach adult levels in early childhood ., A longitudinal study has recently been used to show that infant gut microbiome begins transitioning towards an adult-like community after weaning 39 ., This observation is validated using our infant longitudinal data set ( DIABIMMUNE ) by applying PCA to the temporal kinship matrix ( Fig 5 ) ., Our analysis reveals that the first principal component ( accounting for 26% of the overall variability ) is associated with time ., Specifically , there is a clear clustering of the time samples from the first nine months of an infant’s life and the rest of the time samples ( months 10 − 36 ) which may be correlated to weaning ., As expected , we find a strong autoregressive component in an infant microbiome , which is highly associated with temporal variation across individuals ., By applying PCA to the temporal kinship matrix , we demonstrate that there is high similarity in the microbial community composition of infants at least in the first 9 months ., This similarity increases the power of our algorithm and thus helps MTV-LMM to detect autoregressive taxa ., In contrast to the infant microbiome , the adult microbiome is considered relatively stable 16 , 40 , but with considerable variation in the constituents of the microbial community between individuals ., Specifically , it was previously suggested that each individual adult has a unique gut microbial signature 41–43 , which is affected , among others factors , by environmental factors 20 and host lifestyle ( i . e . , antibiotics consumption , high-fat diets 17 etc . ) ., In addition , 17 showed that over the course of one year , differences between individuals were much larger than variation within individuals ., This observation was validated in our adult datasets ( David et al . and Caporaso et al . 
) by applying PCA to the temporal kinship matrices ., In both David et al . and Caporaso et al . , the first principal component , which accounts for 61% and 43% of the overall variation respectively , is associated with the individual’s identity ( Fig 6 ) ., Using MTV-LMM we observed that despite the large similarity along time within adult individuals , there is also a non-negligible autoregressive component in the adult microbiome ., The fraction of variance explained by time across individuals can range from 6% up to 79% for different taxa ., These results shed more light on the temporal behavior of taxa in the adult microbiome , as opposed to that of infants , which are known to be highly affected by time 39 ., MTV-LMM uses a linear mixed model ( see 44 for a detailed review ) , a natural extension of standard linear regression , for the prediction of time series data ., We describe the technical details of the linear mixed model below ., We assume that the relative abundance levels of focal taxa j at time point t depend on a linear combination of the relative abundance levels of the microbial community at previous time points ., We further assume that temporal changes in relative abundance levels , in taxa j , are a time-homogeneous high-order Markov process ., We model the transitions of this Markov process using a linear mixed model , where we fit the p previous time points of taxa j as fixed effects and the q previous time points of the rest of the microbial community as random effects ., p and q are the temporal parameters of the model ., For simplicity of exposition , we present the generative linear mixed model that motivates the approach taken in MTV-LMM in two steps ., In the first step we model the microbial dynamics in one individual host ., In the second step we extend our model to N individuals , while accounting for the hosts’ effect ., We first describe the model assuming there is only one individual ., Consider a microbial community of m taxa 
measured at T equally spaced time points ., We get as input an m × T matrix M , where M_jt represents the relative-abundance levels of taxa j at time point t ., Let y_j = ( M_{j,p+1} , … , M_{j,T} )^T be a ( T − p ) × 1 vector of taxa j relative abundance , across T − p time points starting at time point p + 1 and ending at time point T . Let X_j be a ( T − p ) × ( p + 1 ) matrix of p + 1 covariates , comprised of an intercept vector as well as the first p time lags of taxa j ( i . e . , the relative abundance of taxa j in the p time points prior to the one predicted ) ., Formally , for k = 1 we have X^j_{tk} = 1 , and for 1 < k ≤ p + 1 we have X^j_{tk} = M_{j,t−k+1} for t ≥ k ., For simplicity of exposition and to minimize the notation complexity , we assume for now that p = 1 . Let W be a ( T − q ) × q⋅m normalized relative abundance matrix , representing the first q time lags of the microbial community ., For simplicity of exposition we describe the model in the case q = 1 , and then W_tj = M_jt ( in the more general case , we have W_tj = M_{⌈j/q⌉ , t−(j mod q)} , where p , q ≤ T − 1 ) ., With these notations , we assume the following linear model:, y_j = X_j β_j + W u_j + ε_j , ( 1 ), where u_j and ε_j are independent random variables distributed as u_j ~ N ( 0_m , σ²_{u_j} I_m ) and ε_j ~ N ( 0_{T−1} , σ²_{ε_j} I_{T−1} ) ., The parameters of the model are β_j ( fixed effects ) , σ²_{u_j} , and σ²_{ε_j} . We note that environmental factors known to be correlated with taxa abundance levels ( e . g . , diet , antibiotic usage 17 , 20 ) can be added to the model as fixed linear effects ( i . e .
, added to the matrix X_j ) ., Given the high variability in the relative abundance levels , along with our desire to efficiently capture the effects of multiple taxa in the microbial community on each focal taxa j , we represent the microbial community input data ( matrix M ) using its quantiles ., Intuitively , we would like to capture the information as to whether a taxa is present or absent , or potentially introduce a few levels ( i . e . , high , medium , and low abundance ) ., To this end , we use the quantiles of each taxa to transform the matrix M into a matrix M̃ , where M̃_jt ∈ { 0 , 1 , 2 } depending on whether the abundance level is low ( below the 25% quantile ) , medium , or high ( above the 75% quantile ) ., We also tried other normalization strategies , including quantile normalization , which is typically used in gene expression eQTL analysis 45 , 46 , and the results were qualitatively similar ( see Supplementary Information S6 Fig ) ., We subsequently replace the matrix W by a matrix W̃ , which is constructed analogously to W , but using M̃ instead of M . Notably , both the fixed effect ( the relative abundance of y_j at previous time points ) and the output of MTV-LMM are the continuous relative abundance ., The random effects are the quantile-binned relative abundance of the rest of the microbial community at previous time points ( matrix W̃ ) ., Thus , our model can now be described as, y_j = X_j β_j + W̃ u_j + ε_j , ( 2 ) ., So far , we described the model assuming we have time series data from one individual ., We next extend the model to the case where time series data is available from multiple individuals ., In this case , we assume that the relative abundance levels of m taxa , denoted as the microbial community , have been measured at T time points across N individuals ., We assume the input consists of N matrices , M^1 , … , M^N , where matrix M^i corresponds to individual i , and it is of size m × T .
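As a concrete illustration of the single-individual construction above ( with p = q = 1 ) , the sketch below builds y_j , X_j and the quantile-binned lag matrix W̃ from an m × T relative-abundance matrix ., It is a minimal reimplementation for exposition , not the authors' code; the 25%/75% cutoffs follow the binning described in the text .

```python
import numpy as np

def build_design(M: np.ndarray, j: int):
    """Design for y_j = X_j b_j + W~ u_j + e_j with p = q = 1.
    M is the m x T relative-abundance matrix of one individual;
    j indexes the focal taxon."""
    m, T = M.shape
    y = M[j, 1:]                                      # taxon j at times 2..T
    X = np.column_stack([np.ones(T - 1), M[j, :-1]])  # intercept + first lag
    # Quantile-bin each taxon across time into {0, 1, 2}:
    # 0 below the 25% quantile, 2 above the 75% quantile, 1 otherwise.
    lo = np.quantile(M, 0.25, axis=1, keepdims=True)
    hi = np.quantile(M, 0.75, axis=1, keepdims=True)
    M_binned = (M > lo).astype(float) + (M > hi).astype(float)
    W = M_binned[:, :-1].T                            # (T-1) x m community lags
    return y, X, W

rng = np.random.default_rng(0)
M = rng.dirichlet(np.ones(5), size=20).T  # 5 taxa, 20 time points
y, X, W = build_design(M, j=0)
print(y.shape, X.shape, W.shape)  # (19,) (19, 2) (19, 5)
```

Stacking one such block per host ( and adding the permuted matrix H ) yields the multi-individual design used in model ( 3 ) .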
Therefore , the outcome vector y_j is now an n × 1 vector , composed of N blocks , where n = ( T − 1 ) N , and block i corresponds to the time points of individual i . Formally , y^j_k = M^{⌈k/(T−1)⌉}_{j , ( k mod ( T−1 ) )} ., Similarly , we define X_j and W̃ as block matrices , with N different blocks , where block i corresponds to individual i . When applied to multiple individuals , Model ( 2 ) may overfit to the individual effects ( e . g . , due to the host genetics and or environment ) ., In other words , since our goal is to model the changes in time , we need to condition these changes in time on the individual effects , which are unwanted confounders for our purposes ., We therefore construct a matrix H by randomly permuting the rows of each block matrix i in W̃ , where the permutation is conducted only within the same individual ., Formally , we apply a permutation π_i ∈ S_{T−1} on the rows of each block matrix i , M^i , corresponding to individual i , where S_{T−1} is the set of all permutations of ( T − 1 ) elements ., In each π_i , we are simultaneously permuting the entire microbial community ., Hence , matrix H corresponds to the data of each one of the individuals , but with no information about the time ( since the data was shuffled across the different time points ) ., With this addition , our final model is given by, y_j = X_j β_j + W̃ u_j + H r + ε_j , ( 3 ), where u_j ~ N ( 0_m , σ²_{u_j} I_m ) , ε_j ~ N ( 0_n , σ²_{ε_j} I_n ) , and r ~ N ( 0_m , σ²_r I_m ) ., It is easy to verify that an equivalent mathematical representation of model ( 3 ) can be given by, y_j ~ N ( X_j β_j , σ²_{AR_j} K_1 + σ²_{ind} K_2 + σ²_{ε_j} I ) , ( 4 ), where σ²_{AR_j} = m σ²_{u_j} , K_1 = ( 1/m ) W̃ W̃^T , σ²_{ind} = m σ²_r , and K_2 = ( 1/m ) H H^T . We will refer to K_1 as the temporal kinship matrix , which represents the similarity between every pair of samples across time ( i . e .
, represents the cross-correlation structure of the data ) ., We note that for the simplicity of exposition , we assumed so far that each sample has the same number of time points T; however , in practice the number of samples may vary between the different individuals ., It is easy to extend the above model to the case where individual i has T_i time points , although the notations become cumbersome; the implementation of MTV-LMM does take into account a variable number of time points across the different individuals ., Once the distribution of y_j is specified , one can proceed to estimate the fixed effects β_j and the variance of the random effects using maximum likelihood approaches ., One common approach for estimating variance components is known as restricted maximum likelihood ( REML ) ., We followed the procedure described in the GCTA software package 47 , under ‘GREML analysis’ , originally developed for genotype data , and re-purposed it for longitudinal microbiome data ., GCTA implements the restricted maximum likelihood method via the average information ( AI ) algorithm ., Specifically , we performed a restricted maximum likelihood analysis using the function “--reml” followed by the option “--mgrm” ( reflects multiple variance components ) to estimate the variance explained by the microbial community at previous time points ., To predict the random effects by the BLUP ( best linear unbiased prediction ) method we use “--reml-pred-rand” ., This option is actually to predict the total temporal effect ( called “breeding value” in animal genetics ) of each time point attributed by the aggregated effect of the taxa used to estimate the temporal kinship matrix ., In both functions , to represent y_j ( the abundance of taxa j at the next time point ) , we use the option “--pheno” ., For a detailed description see Supplementary Information S3 Note ., We define the term time-explainability , denoted as χ , to be the temporal variance explained by the microbial
community in the previous time points ., Formally , for taxa j we define, χ_j = σ²_{AR_j} / ( σ²_{AR_j} + σ²_{ind} + σ²_{ε_j} ) ., The time-explainability was estimated with GCTA , using the temporal kinship matrix ., In order to measure the accuracy of the time-explainability estimation , the average confidence interval width was estimated by computing the confidence interval widths for all autoregressive taxa and averaging the results ., Additionally , we adjust the time-explainability P-values for multiple comparisons using the Benjamini-Hochberg method 48 ., We now turn to the task of predicting y^j_t using the taxa abundance in time t − 1 ( or more generally in the last few time points ) ., Using our model notation , we are given x_j and w̃ , the covariates associated with a newly observed time point t in taxa j , and we would like to predict y^j_t with the greatest possible accuracy ., For a simple linear regression model , the answer is simply taking the covariate vector x and multiplying it by the estimated coefficients β̂ : ŷ^j_t = x^T β̂ ., This practice yields unbiased estimates ., However , when attempting prediction in the linear mixed model case , things are not so simple ., One could adopt the same approach , but since the effects of the random components are not directly estimated , the vector of covariates w̃ will not contribute directly to the predicted value of y^j_t , and will only affect the variance of the prediction , resulting in an unbiased but inefficient estimate ., Instead , one can use the correlation between the realized values of W̃ u , to attempt a better guess at the realization of w̃ u for the new sample ., This is achieved by computing the distribution of the outcome of the new sample conditional on the full dataset , by using the following property of the multivariate
and predict time-dependent compositional patterns of microbes is crucial to our understanding of the structure and functions of this ecosystem ., One factor that could affect such time-dependent patterns is microbial interactions , wherein community composition at a given time point affects the microbial composition at a later time point ., However , the field has not yet settled on the degree of this effect ., Specifically , it has been recently suggested that only a minority of taxa depend on the microbial composition in earlier times ., To address the issue of identifying and predicting temporal microbial patterns we developed a new model , MTV-LMM ( Microbial Temporal Variability Linear Mixed Model ) , a linear mixed model for the prediction of microbial community temporal dynamics ., MTV-LMM can identify time-dependent microbes ( i . e . , microbes whose abundance can be predicted based on the previous microbial composition ) in longitudinal studies , which can then be used to analyze the trajectory of the microbiome over time ., We evaluated the performance of MTV-LMM on real and synthetic time series datasets , and found that MTV-LMM outperforms commonly used methods for microbiome time series modeling ., Particularly , we demonstrate that the effect of the microbial composition in previous time points on the abundance of taxa at later time points is underestimated by a factor of at least 10 when applying previous approaches ., Using MTV-LMM , we demonstrate that a considerable portion of the human gut microbiome , both in infants and adults , has a significant time-dependent component that can be predicted based on microbiome composition in earlier time points ., This suggests that microbiome composition at a given time point is a major factor in defining future microbiome composition and that this phenomenon is considerably more common than previously reported for the human gut microbiome . 
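The time-explainability ratio defined in the Methods above can be illustrated with a minimal sketch; the variance-component values below are hypothetical, not estimates from the actual GCTA analysis:

```python
# Sketch: time-explainability chi_j from (hypothetical) REML variance components.
# chi_j = var_AR / (var_AR + var_ind + var_eps), i.e., the fraction of temporal
# variance in taxon j explained by the previous microbial community.

def time_explainability(var_ar: float, var_ind: float, var_eps: float) -> float:
    """Fraction of variance attributed to the autoregressive (community) term."""
    total = var_ar + var_ind + var_eps
    return var_ar / total

# Hypothetical variance components for one taxon (illustrative numbers only):
chi = time_explainability(var_ar=2.0, var_ind=1.0, var_eps=1.0)
print(chi)  # -> 0.5
```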
| The ability to characterize and predict temporal trajectories of the microbial community in the human gut is crucial to our understanding of the structure and functions of this ecosystem ., In this study we develop MTV-LMM , a method for modeling time-series microbial community data ., Using MTV-LMM we find that in contrast to previous reports , a considerable portion of microbial taxa in both infants and adults display temporal structure that is predictable using the previous composition of the microbial community ., In reaching this conclusion we have adopted a number of concepts common in statistical genetics for use with longitudinal microbiome studies ., We introduce concepts such as time-explainability and the temporal kinship matrix , which we believe will be of use to other researchers studying microbial dynamics , through the framework of linear mixed models ., In particular we find that the association matrix estimated by MTV-LMM reveals known phylogenetic relationships and that the temporal kinship matrix uncovers known temporal structure in infant microbiome and inter-individual differences in adult microbiome ., Finally , we demonstrate that MTV-LMM significantly outperforms commonly used methods for temporal modeling of the microbiome , both in terms of its prediction accuracy as well as in its ability to identify time-dependent taxa . 
| taxonomy, children, ecology and environmental sciences, microbiome, community structure, statistics, microbiology, multivariate analysis, age groups, phylogenetics, data management, non-coding rna, mathematics, infants, cellular structures and organelles, microbial genomics, families, research and analysis methods, computer and information sciences, medical microbiology, mathematical and statistical techniques, principal component analysis, evolutionary systematics, ribosomes, people and places, community ecology, biochemistry, rna, ribosomal rna, cell biology, ecology, nucleic acids, genetics, biology and life sciences, population groupings, physical sciences, genomics, evolutionary biology, statistical methods | null |
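The conditional-prediction step described in the Methods above rests on the standard conditional-mean property of the multivariate normal distribution. A minimal NumPy sketch, with a hypothetical two-time-point covariance structure (not the temporal kinship matrix estimated in the paper):

```python
import numpy as np

# Sketch of LMM-style prediction via the conditional multivariate normal:
# if [y_obs, y_new] ~ N(mu, Sigma), then
# E[y_new | y_obs] = mu_new + S21 @ inv(S11) @ (y_obs - mu_obs),
# where S11 = Cov(y_obs) and S21 = Cov(y_new, y_obs).

def conditional_mean(mu_obs, mu_new, S11, S21, y_obs):
    return mu_new + S21 @ np.linalg.solve(S11, y_obs - mu_obs)

# Hypothetical example: zero mean, covariance [[2, 1], [1, 2]] between the
# observed and the new time point.
mu_obs = np.array([0.0]); mu_new = np.array([0.0])
S11 = np.array([[2.0]]); S21 = np.array([[1.0]])
y_obs = np.array([1.0])
pred = conditional_mean(mu_obs, mu_new, S11, S21, y_obs)
print(pred)  # -> [0.5]
```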
1,864 | journal.pcbi.1003985 | 2,014 | Segregating Complex Sound Sources through Temporal Coherence | Humans and animals can attend to a sound source and segregate it rapidly from a background of many other sources , with no learning or prior exposure to the specific sounds ., For humans , this is the essence of the well-known cocktail party problem in which a person can effortlessly conduct a conversation with a new acquaintance in a crowded and noisy environment 1 , 2 ., For frogs , songbirds , and penguins , this ability is vital for locating a mate or an offspring in the midst of a loud chorus 3 , 4 ., This capacity is matched by comparable object segregation feats in vision and other senses 5 , 6 , and hence understanding it will shed light on the neural mechanisms that are fundamental and ubiquitous across all sensory systems ., Computational models of auditory scene analysis have been proposed in the past to disentangle source mixtures and hence capture the functionality of this perceptual process ., The models differ substantially in flavor and complexity depending on their overall objectives ., For instance , some rely on prior information to segregate a specific target source or voice , and are usually able to reconstruct it with excellent quality 7 ., Another class of algorithms relies on the availability of multiple microphones and the statistical independence among the sources to separate them , using for example ICA approaches or beam-forming principles 8 ., Others are constrained by a single microphone and have instead opted to compute the spectrogram of the mixture , and then to decompose it into separate sources relying on heuristics , training , mild constraints on matrix factorizations 9–11 , spectrotemporal masks 12 , and gestalt rules 1 , 13 , 14 ., A different class of approaches emphasizes the biological mechanisms underlying this process , and assesses both their plausibility and ability to replicate faithfully the psychoacoustics of stream 
segregation ( with all their strengths and weaknesses ) ., Examples of the latter approaches include models of the auditory periphery that explain how simple tone sequences may stream 15–17 , how pitch modulations can be extracted and used to segregate sources of different pitch 18–20 , and models that handle more elaborate sound sequences and bistable perceptual phenomena 10 , 21–23 ., Finally , of particular relevance here are algorithms that rely on the notion that features extracted from a given sound source can be bound together by correlations of intrinsic coupled oscillators in neural networks that form their connectivity online 23 , 24 ., It is fair to say , however , that the diversity of approaches and the continued strong interest in this problem suggest that no algorithm has yet achieved sufficient success to render the “cocktail party problem” solved from a theoretical , physiological , or applications point of view ., While our approach echoes some of the implicit or explicit ideas in the above-mentioned algorithms , it differs fundamentally in its overall framework and implementation ., It is based on the notion that perceived sources ( sound streams or objects ) emit features that are modulated in strength in a largely temporally coherent manner and that evoke highly correlated response patterns in the brain ., By clustering ( or grouping ) these responses one can reconstruct their underlying source , and also segregate it from other simultaneously interfering signals that are uncorrelated with it ., This simple principle of temporal coherence has already been shown to account experimentally for the perception of sources ( or streams ) in complex backgrounds 25–28 ., However , this is the first detailed computational implementation of this idea that demonstrates how it works , and why it is so effective as a strategy to segregate spectrotemporally complex stimuli such as speech and music ., Furthermore , it should be emphasized that despite
apparent similarities , the idea of temporal coherence differs fundamentally from previous efforts that invoked correlations and synchronization in the following ways 29–33: ( 1 ) coincidence here refers to that among modulated feature channels due to slow stimulus power ( envelope ) fluctuations , and not to any intrinsic brain oscillations; ( 2 ) coincidences are strictly done at cortical time-scales of a few hertz , and not at the fast pitch or acoustic frequency rates often considered; ( 3 ) coincidences are measured among modulated cortical features and perceptual attributes that usually occupy well-separated channels , unlike the crowded frequency channels of the auditory spectrogram; ( 4 ) coincidence must be measured over multiple time-scales and not just over a single time-window that is bound to be too long or too short for a subset of modulations; and finally ( 5 ) the details we describe later for how the coincidence matrices are exploited to segregate the sources are new and are critical for the success of this effort ., For all these reasons , the simple principle of temporal coherence is not easily implementable ., Our goal here is to show how to do so using plausible cortical mechanisms able to segregate realistic mixtures of complex signals ., As we shall demonstrate , the proposed framework mimics human and animal strategies to segregate sources with no prior information or knowledge of their properties ., The model can also gracefully utilize available cognitive influences such as attention to , or memory of specific attributes of a source ( e . g . 
, its pitch or timbre ) to segregate it from its background ., We begin with a sketch of the model stages , with emphasis on the unique aspects critical for its function ., We then explore how separation of feature channel responses and their temporal continuity contribute to source segregation , and the potential helpful role of perceptual attributes like pitch and location in this process ., Finally , we extend the results to the segregation of complex natural signals such as speech mixtures , and speech in noise or music ., The critical information for identifying the perceived sources is contained in the instantaneous coincidence among the feature channel pairs as depicted in the C-matrices ( Fig . 1B ) ., At each modulation rate , the coincidence matrix at time t is computed by taking the outer product of all cortical frequency-scale outputs ., Such a computation effectively estimates simultaneously the average coincidence over the time window implicit in each rate , i . e . , at different temporal resolutions , thus retaining both short- and long-term coincidence measures crucial for segregation ., Intuitively , the idea is that responses from pairs of channels that are strongly positively correlated should belong to the same stream , while channels that are uncorrelated or anti-correlated should belong to different streams ., This decomposition need not be all-or-none , but rather responses of a given channel can be parceled to different streams in proportion to the degree of the average coincidence it exhibits with the two streams ., This intuitive reasoning is captured by a factorization of the coincidence matrix into two uncorrelated streams by determining the direction of maximal incoherence between the incoming stimulus patterns ., One such factorization algorithm is a nonlinear principal component analysis ( nPCA ) of the C-matrices 35 , where the principal eigenvectors correspond to masks that select the channels that are positively correlated
within a stream , and parcel out the others to a different stream ., This procedure is implemented by an auto-encoder network with two rectifying linear hidden units corresponding to foreground and background streams as shown in Fig . 1B ( right panel ) ., The weights computed in the output branches of each unit are associated with each of the two sources in the input mixture , and the number of hidden units can be automatically increased if more than two segregated streams are anticipated ., The nPCA is preferred over a linear PCA because the former assigns the channels of the two ( often anti-correlated ) sources to different eigenvectors , instead of combining them on opposite directions of a single eigenvector 36 ., Another key innovation in the model implementation is that the nPCA decomposition is performed not directly on the input data from the cortical model ( which are modulated at the various rates ) , but rather on the columns of the C-matrices , whose entries are either stationary or vary slowly regardless of the rates of the coincident channels ., These common and slow dynamics enable stacking all C-matrices into one large matrix decomposition ( Fig . 1B ) ., Specifically , the columns of the stacked matrices are applied ( as a batch ) to the auto-encoder network at each instant with the aim of computing weights that can reconstruct them while minimizing the mean-square reconstruction error ., Linking these matrices has two critical advantages: It ensures that the pair of eigenvectors from each matrix decomposition is consistently labeled across all matrices ( e . g .
, source 1 is associated with eigenvector 1 in all matrices ) ; it also couples the eigenvectors and balances their contributions to the minimization of the MSE in the auto-encoder ., The weight vectors thus computed are then applied as masks on the cortical outputs ., This procedure is repeated at each time step as the coincidence matrices evolve with the changing inputs ., The separation of feature responses on different channels and their temporal continuity are two important properties of the model that allow temporal coherence to segregate sources ., Several additional perceptual attributes can play a significant role including pitch , spatial location , and timbre ., Here we shall focus on pitch as an example of such attributes ., Speech mixtures share many of the same characteristics already seen in the examples of Fig . 2 and Fig . 3 ., For instance , they contain harmonic complexes with different pitches ( e . g . , males versus females ) that often have closely spaced or temporally overlapped components ., Speech also possesses other features such as broad bursts of noise immediately followed or preceded by voiced segments ( as in various consonant-vowel combinations ) , or even accompanied by voicing ( voiced consonants and fricatives ) ., In all these cases , the syllabic onsets of one speaker synchronize a host of channels driven by the harmonics of the voicing , channels that are desynchronized ( or uncorrelated ) with those driven by the other speaker ., Fig . 4A depicts the clean spectra of two speech utterances ( middle and right panels ) and their mixture ( left panel ) illustrating the harmonic spectra and the temporal fluctuations in the speech signal at 3–7 Hz that make speech resemble the earlier harmonic sequences ., The pitch tracks associated with each of these panels are shown below them ., Fig .
4B illustrates the segregation of the two speech streams from the mixture using all available coincidence among the spectral ( frequency-scale ) and pitch channels in the C-matrices ., The reconstructed spectrograms are not identical to the originals ( Fig . 4A ) , an inevitable consequence of the energetic masking among the crisscrossing components of the two speakers ., Nevertheless , with two speakers there are sufficient gaps between the syllables of each speaker to provide clean , unmasked views of the other speaker’s signal 40 ., If more speakers are added to the mix , such gaps become sparser and the amount of energetic masking increases , and that is why it is harder to segregate one speaker in a crowd if they are not distinguished by unique features or a louder signal ., An interesting aspect of speech is that the relative amplitudes of its harmonics vary widely over time reflecting the changing formants of different phonemes ., Consequently , the saliency of the harmonic components changes continually , with weaker ones dropping out of the mixture as they become completely masked by the stronger components ., Despite these changes , speech syllables of one speaker maintain a stable representation of a sufficient number of features from one time instant to the next , and thus can maintain the continuity of their stream ., This is especially true of the pitch ( which changes only slowly and relatively little during normal speech ) ., The same is true of the spectral region of maximum energy , which reflects the average formant locations of a given speaker and , in part , the timbre and length of their vocal tract ., Humans utilize either of these cues alone or in conjunction with additional cues to segregate mixtures ., For instance , to segregate speech with overlapping pitch ranges ( a mixture of male speakers ) , one may rely on the different spectral envelopes ( timbres ) , or on other potentially different features such as location or loudness ., 
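The anchor-feature strategy described above (correlating one attribute, such as a pitch track, with all other channels to form a mask) can be sketched with plain correlations; the envelopes and channel layout below are synthetic stand-ins, not the paper's cortical features:

```python
import numpy as np

# Sketch: build a segregation mask from the coincidence of an "anchor" channel
# (e.g., one speaker's pitch track) with every feature channel. Channels whose
# slow envelopes are positively correlated with the anchor are kept; the rest
# are parceled out to the background.

t = np.arange(0, 1, 1 / 1000.0)                 # 1 s at a 1 kHz frame rate
env_a = np.sin(2 * np.pi * 4 * t)               # source A envelope (4 Hz)
env_b = np.cos(2 * np.pi * 4 * t)               # source B envelope (uncorrelated)

# Six synthetic feature channels: the first three follow A, the last three B.
channels = np.vstack([1.0 * env_a, 0.5 * env_a, 2.0 * env_a,
                      1.0 * env_b, 0.7 * env_b, 1.5 * env_b])

def anchor_mask(anchor, channels):
    """Rectified normalized correlation of each channel with the anchor."""
    a = anchor - anchor.mean()
    x = channels - channels.mean(axis=1, keepdims=True)
    corr = (x @ a) / (np.linalg.norm(x, axis=1) * np.linalg.norm(a))
    return np.maximum(corr, 0.0)                # keep positively coherent channels

mask = anchor_mask(env_a, channels)
print(np.round(mask, 3))                        # A-channels near 1, B-channels near 0
```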
Humans can also exploit more complex factors such as higher-level linguistic knowledge and memory as we discuss later ., In the example of Fig . 4C , the two speakers of Fig . 4A are segregated based on the coincidence of only the spectral components conveyed by the frequency-scale channels ., The extracted speech streams of the two speakers resemble the original unmixed signals , and their reconstructions exhibit significantly less mutual interference than the mixture , as quantified later ., Finally , as we discuss in more detail below , it is possible to segregate the speech mixture based on the pattern of correlations computed with one “anchor” feature such as the pitch channels of the female , i . e . , using only the columns of the C-matrix near the female pitch channels as illustrated in Fig . 4D ., Exactly the same logic can be applied to any auxiliary function that is co-modulated in the same manner as the rest of the speech signal ., For instance , one may “look” at the lip movements of a speaker , which open and close in a manner that closely reflects the instantaneous power in the signal ( or its envelope ) as demonstrated in 41 ., These two functions ( inter-lip distance and the acoustic envelope ) can then be exploited to segregate the target speech much as with the pitch channels earlier ., Thus , by simply computing the correlation between the lip function ( Fig . 5B ) or the acoustic envelope ( Fig . 5C ) with all the remaining channels , an effective mask can be readily computed to extract the target female speech ( and the background male speech too ) ., This example thus illustrates how in general any other co-modulated features of the speech signal ( e . g . , location , loudness , timbre , and visual signals such as lip movements ) can contribute to segregation of complex mixtures ., The performance of the model is quantified with a database of 100 mixtures formed from pairs of male-female speech randomly sampled from the TIMIT database ( Fig .
6 ) where the spectra of the clean speech are compared to those of the corresponding segregated versions ., The signal-to-noise ratio is computed as in Eqs . ( 1 ) and ( 2 ) , comparing the cortical representations of the segregated sentences with the cortical representations of the original sentences and the cortical representation of the mixture ., Average SNR improvement was 6 dB for mixture waveforms mixed at 0 dB ., Another way to demonstrate the effectiveness of the segregation is to compare the match between the segregated samples and their corresponding originals ., This is evidenced by the minimal overlap in Fig . 6B ( middle panel ) across the distributions of the coincidences computed between each segregated sentence and its original version versus the interfering speech ., To compare directly these coincidences for each pair of mixed sentences , the differences between coincidences in each mixture are scatter-plotted in the bottom panel ., Effective pairwise segregation ( e . g . , not extracting only one of the mixed sentences ) places the scatter points along the diagonal ., Examples of segregated and reconstructed audio files can be found in S1 Dataset ., So far , attention and memory have played no direct role in the segregation , but adding them is relatively straightforward ., From a computational point of view , attention can be interpreted as a focus directed to one or a few features or feature subspaces of the cortical model which enhances their amplitudes relative to other unattended features ., For instance , in segregating speech mixtures , one might choose to attend specifically to the high female pitch in a group of male speakers ( Fig . 4D ) , or to attend to the location cues or the lip movements ( Fig . 5C ) and rely on them to segregate the speakers ., In these cases , only the appropriate subset of columns of the C-matrices is needed to compute the nPCA decomposition ( Fig .
1B ) ., This is in fact also the interpretation of the simulations discussed in Fig . 3 for harmonic complexes ., In all these cases , the segregation exploited only the C-matrix columns marking coincidences of the attended anchor channels ( pitch , lip , loudness ) with the remaining channels ., Memory can also be strongly implicated in stream segregation in that it constitutes priors about the sources which can be effectively utilized to process the C-matrices and perform the segregation ., For example , in extracting the melody of the violins in a large orchestra , it is necessary to know first what the timbre of a violin is before one can turn the attentional focus to its unique spectral shape features and pitch range ., One conceptually simple way ( among many ) of exploiting such information is to use as ‘template’ the average auto-encoder weights ( masks ) computed from iterating on clean patterns of a particular voice or instrument , and then to perform an initial segregation of the desired source by applying the stored mask directly to the mixture ., A biologically plausible model of auditory cortical processing can be used to implement the perceptual organization of auditory scenes into distinct auditory objects ( streams ) ., Two key ingredients are essential: ( 1 ) a multidimensional cortical representation of sound that explicitly encodes various acoustic features along which streaming can be induced; ( 2 ) clustering of the temporally coherent features into different streams ., Temporal coherence is quantified by the coincidence between all pairs of cortical channels , slowly integrated at cortical time-scales as described in Fig .
1 ., An auto-encoder network mimicking Hebbian synaptic rules implements the clustering through nonlinear PCA to segregate the sound mixture into a foreground and a background ., The temporal coherence model segregates novel sounds based exclusively on the ongoing temporal coherence of their perceptual attributes ., Previous efforts at exploiting explicitly or implicitly the correlations among stimulus features differed fundamentally in the details of their implementation ., For example , some algorithms attempted to decompose directly the channels of the spectrogram representations 42 rather than the more distributed multi-scale cortical representations ., They either used the fast phase-locked responses available in the early auditory system 43 , or relied exclusively on the pitch-rate responses induced by interactions among the unresolved harmonics of a voiced sound 44 ., Both these temporal cues , however , are much faster than cortical dynamics ( >100 Hz ) and are highly sensitive to the phase-shifts induced in different spectral regions by mildly reverberant environments ., The cortical model instead naturally exploits multi-scale dynamics and spectral analyses to define the structure of all these computations as well as their parameters ., For instance , the product of the wavelet coefficients ( entries of the C-matrices ) naturally computes the running-coincidence between the channel pairs , integrated over a time-interval determined by the time-constants of the cortical rate-filters ( Fig .
1 and Methods ) ., This ensures that all coincidences are integrated over time intervals that are commensurate with the dynamics of the underlying signals and that a balanced range of these windows is included to process slowly varying ( 2 Hz ) up to rapidly changing ( 16 Hz ) features ., The biological plausibility of this model rests on physiological and anatomical support for the two postulates of the model: a cortical multidimensional representation of sound and coherence-dependent computations ., The cortical representation is the end-result of a sequence of transformations in the early and central auditory system with experimental support discussed in detail in 34 ., The version used here incorporates only a frequency ( tonotopic ) axis , spectrotemporal analysis ( scales and rates ) , and pitch analysis 37 ., However , other features that are pre-cortically extracted can be readily added as inputs to the model such as spatial location ( from interaural differences and elevation cues ) and pitch of unresolved harmonics 45 ., The second postulate concerns the crucial role of temporal coherence in streaming ., It is a relatively recent hypothesis and hence direct tests remain scant ., Nevertheless , targeted psychoacoustic studies have already provided perceptual support of the idea that coherence of stimulus-features is necessary for perception of streams 27 , 28 , 46 , 47 ., Parallel physiological experiments have also demonstrated that coherence is a critical ingredient in streaming and have provided indirect evidence of its mechanisms through rapidly adapting cooperative and competitive interactions between coherent and incoherent responses 26 , 48 ., Nevertheless , much more remains uncertain ., For instance , where are these computations performed ?, How exactly are the ( auto-encoder ) clustering analyses implemented ?, And what exactly is the role of attentive listening ( versus pre-attentive processing ) in facilitating the various computations ?, All
these uncertainties , however , invoke coincidence-based computations and adaptive mechanisms that have been widely studied or postulated such as coincidence detection and Hebbian associations 49 , 50 ., Dimensionality-reduction of the coincidence matrix ( through nonlinear PCA ) allows us effectively to cluster all correlated channels apart from others , thus grouping and designating them as belonging to distinct sources ., This view bears a close relationship to the predictive clustering-based algorithm by 51 in which input feature vectors are gradually clustered ( or routed ) into distinct streams ., In both the coherence and clustering algorithms , cortical dynamics play a crucial role in integrating incoming data into the appropriate streams , and therefore are expected to exhibit for the most part similar results ., In some sense , the distinction between the two approaches is one of implementation rather than fundamental concepts ., Clustering patterns and reducing their features are often ( but not always ) two sides of the same coin , and can be shown under certain conditions to be largely equivalent and yield similar clusters 52 ., Nevertheless , from a biological perspective , it is important to adopt the correlation view as it suggests concrete mechanisms to explore ., Our emphasis thus far has been on demonstrating the ability of the model to perform unsupervised ( automatic ) source segregation , much like a listener that has no specific objectives ., In reality , of course , humans and animals utilize intentions and attention to selectively segregate one source as the foreground against the remaining background ., This operational mode would similarly apply in applications in which the user of a technology identifies a target voice to enhance and isolate from among several based on the pitch , timbre , location , or other attributes ., The temporal coherence algorithm can be readily and gracefully adapted to incorporate such information and task 
objectives , as when specific subsets of the C-matrix columns are used to segregate a targeted stream ( e . g . , Fig . 3 and Fig . 4 ) ., In fact , our experience with the model suggests that segregation is usually of better quality and faster to compute with attentional priors ., In summary , we have described a model for segregating complex sound mixtures based on the temporal coherence principle ., The model computes the coincidence of multi-scale cortical features and clusters the coherent responses as emanating from one source ., It requires no prior information , statistics , or knowledge of source properties , but can gracefully incorporate them along with cognitive influences such as attention to , or memory of specific attributes of a target source to segregate it from its background ., The model provides a testable framework of the physiological bases and psychophysical manifestations of this remarkable ability ., Finally , the relevance of these ideas transcends the auditory modality to elucidate the robust visual perception of cluttered scenes 53 , 54 ., Sound is first transformed into its auditory spectrogram , followed by a cortical spectrotemporal analysis of the modulations of the spectrogram ( Fig . 
1A ) 34 ., Pitch is an additional perceptual attribute that is derived from the resolved ( low-order ) harmonics and used in the model 37 ., It is represented as a ‘pitch-gram’ of additional channels that are simply appended to the cortical spectral channels prior to subsequent rate analysis ( see below ) ., Other perceptual attributes such as location and unresolved harmonic pitch can also be computed and represented by an array of channels analogously to the pitch estimates ., The auditory spectrogram is generated by a model of early auditory processing 55 , which begins with an affine wavelet transform of the acoustic signal , followed by nonlinear rectification and compression , and lateral inhibition to sharpen features ., This results in F = 128 frequency channels that are equally spaced on a logarithmic frequency axis over 5.2 octaves ., Cortical spectro-temporal analysis of the spectrogram is effectively performed in two steps 34: a spectral wavelet decomposition followed by a temporal wavelet decomposition , as depicted in Fig . 1A ., The first analysis provides multi-scale ( multi-bandwidth ) views of each spectral slice , resulting in a 2D frequency-scale representation ., It is implemented by convolving the spectral slice with complex-valued spectral receptive fields similar to Gabor functions , parametrized by their spectral tuning ( bandwidth ) ., The outcome of this step is an array of F × S frequency-scale channels indexed by frequency and local spectral bandwidth at each time instant t ., We typically used S = 2 to 5 scales in our simulations ( e . g . , in the range 1–8 cyc/oct ) , producing copies of the spectrogram channels with different degrees of spectral smoothing ., In addition , the pitch of each spectrogram frame is also computed ( if desired ) using a harmonic template-matching algorithm 37 ., Pitch values and saliency were then expressed as a pitch-gram of P channels that are appended to the frequency-scale channels ( Fig .
1B ) ., The cortical rate-analysis is then applied to the modulus of each of the channel outputs in the freq-scale-pitch array by passing them through an array of modulation-selective filters , each indexed by its center rate , which ranges over 2–32 Hz in octave steps ( Fig . 1B ) ., This temporal wavelet analysis of the response of each channel is described in detail in 34 ., Therefore , the final representation of the cortical outputs ( features ) is along four axes: time , frequency , scale , and rate ., It consists of one coincidence matrix per rate at each time frame , each of size ( F·S + P ) × ( F·S + P ) ( Fig . 1B ) ., The exact choice of all above parameters is not critical for the model in that the performance changes very gradually when the parameters or number of feature channels are altered ., All parameter values in the model were chosen based on previous simulations with the various components of the model ., For example , the choice of rates ( 2–32 Hz ) and scales ( 1–8 cyc/oct ) reflected their utility in the representation of speech and other complex sounds in numerous previous applications of the cortical model 34 ., Thus , the parameters chosen were known to reflect speech and music , but of course could have been chosen differently if the stimuli were drastically different ., The least committal choice is to include the largest range of scales and rates that is computationally feasible ., In our implementations , the algorithm became noticeably slow when the numbers of frequency , scale , rate , and pitch channels were increased much beyond these values ., The decomposition of the C-matrices is carried out as described earlier in Fig . 1B ., The iterative procedure to learn the auto-encoder weights employs the Limited-memory Broyden–Fletcher–Goldfarb–Shanno ( L-BFGS ) method as implemented in 56 ., The output weight vectors ( Fig .
1B ) thus computed are subsequently applied as masks on the input channels ., This procedure is repeated every time step , using the weights learned in the previous time step as initial conditions to ensure that the assignment of the learned eigenvectors remains consistent over time ., Note that the C matrices do not change rapidly , but rather slowly , as fast as the time-constants of their corresponding rate analyses allow ( ) ., For example , for the Hz filters , the cortical outputs change slowly reflecting a time-constant of approximately 250 ms . More often , however , the C-matrix entries change much more slowly , reflecting the sustained coincidence patterns between different channels ., For example , in the simple case of two alternating tones ( Fig . 2A ) , the C-matrix entries reach a steady state after a fraction of a second , and then remain constant reflecting the unchanging coincidence pattern between the two tones ., Similarly , if the pitch of a speaker remains relatively constant , then the correlation between the harmonic channels remains approximately constant since the partials are modulated similarly in time ., This aspect of the model explains the source of the continuity in the streams ., The final step in the model is to invert the masked cortical outputs back to the sound 34 .
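The coincidence-and-masking computation described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the three toy channel envelopes, the integration time constant, and the use of the leading eigenvector of the coincidence matrix as a mask are all simplifying assumptions made for this sketch.

```python
import numpy as np

# Toy "cortical" channel responses: two coherently modulated channels (one
# source) plus one independently modulated channel, 2 s sampled at 100 Hz.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.01)
src = 0.5 * (1.0 + np.sin(2 * np.pi * 4.0 * t))         # 4 Hz envelope
dist = 0.5 * (1.0 + np.sin(2 * np.pi * 7.0 * t + 1.0))  # 7 Hz envelope
R = np.vstack([src, 0.8 * src, dist]) + 0.05 * rng.standard_normal((3, t.size))
Rc = R - R.mean(axis=1, keepdims=True)  # coincidence of modulations, not of DC

# Running coincidence matrix C, updated by a slow leaky integrator so that its
# entries evolve no faster than a chosen (assumed) time constant allows.
tau = 1.0                # integration time constant in seconds (assumed)
alpha = 0.01 / tau       # per-sample update weight at 100 Hz
C = np.zeros((3, 3))
for k in range(t.size):
    C = (1.0 - alpha) * C + alpha * np.outer(Rc[:, k], Rc[:, k])

# The leading eigenvector of C groups the coherently modulated channels; it is
# applied as a mask (per-channel weights) on the input channels.
w = np.linalg.eigh(C)[1][:, -1]   # eigh sorts eigenvalues ascending
w = np.abs(w) / np.abs(w).max()
masked = w[:, None] * R           # channels of the dominant stream are retained
```

Here the two coherent channels receive large weights and the independent channel is suppressed; in the model the same operation runs over the full frequency-scale-pitch channel array and the masked output is inverted back to sound.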
| Introduction, Results, Discussion, Methods | A new approach for the segregation of monaural sound mixtures is presented based on the principle of temporal coherence and using auditory cortical representations ., Temporal coherence is the notion that perceived sources emit coherently modulated features that evoke highly-coincident neural response patterns ., By clustering the feature channels with coincident responses and reconstructing their input , one may segregate the underlying source from the simultaneously interfering signals that are uncorrelated with it ., The proposed algorithm requires no prior information or training on the sources ., It can , however , gracefully incorporate cognitive functions and influences such as memories of a target source or attention to a specific set of its attributes so as to segregate it from its background ., Aside from its unusual structure and computational innovations , the proposed model provides testable hypotheses of the physiological mechanisms of this ubiquitous and remarkable perceptual ability , and of its psychophysical manifestations in navigating complex sensory environments . 
| Humans and many animals can effortlessly navigate complex sensory environments , segregating and attending to one desired target source while suppressing distracting and interfering others ., In this paper , we present an algorithmic model that can accomplish this task with no prior information or training on complex signals such as speech mixtures , and speech in noise and music ., The model accounts for this ability relying solely on the temporal coherence principle , the notion that perceived sources emit coherently modulated features that evoke coincident cortical response patterns ., It further demonstrates how basic cortical mechanisms common to all sensory systems can implement the necessary representations , as well as the adaptive computations necessary to maintain continuity by tracking slowly changing characteristics of different sources in a scene . | auditory cortex, machine learning algorithms, neural networks, engineering and technology, noise control, audio signal processing, signal processing, brain, neuroscience, hearing, noise reduction, artificial neural networks, artificial intelligence, computational neuroscience, acoustical engineering, computer and information sciences, auditory system, speech signal processing, anatomy, biology and life sciences, sensory systems, sensory perception, computational biology, cognitive science, machine learning | null |
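The octave-spaced modulation (rate) analysis described in this entry's methods can be sketched with simple temporal Gabor filters. The kernels, rate set, and test envelope below are illustrative assumptions standing in for the model's actual wavelet filters:

```python
import numpy as np

def rate_filterbank(env, fs, rates=(2, 4, 8, 16, 32)):
    """Filter a channel envelope with octave-spaced modulation-selective
    filters; each filter is a complex temporal Gabor (illustrative kernel)."""
    outputs = {}
    tk = np.arange(-1.0, 1.0, 1.0 / fs)          # 2 s kernel support
    for r in rates:
        sigma = 1.0 / r                          # bandwidth scales with rate
        kern = np.exp(-0.5 * (tk / sigma) ** 2) * np.exp(2j * np.pi * r * tk)
        kern /= np.abs(kern).sum()
        outputs[r] = np.abs(np.convolve(env, kern, mode="same"))
    return outputs

fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)
env = 1.0 + np.sin(2 * np.pi * 8.0 * t)          # an 8 Hz modulated envelope
resp = rate_filterbank(env, fs)
best = max(resp, key=lambda r: resp[r].mean())   # filter tuned near 8 Hz wins
```

Each channel's modulus response, indexed by center rate, plays the role of the rate axis in the four-dimensional cortical feature representation.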
1,422 | journal.pcbi.1004014 | 2,014 | Bilinearity in Spatiotemporal Integration of Synaptic Inputs | For information processing , a neuron receives and integrates thousands of synaptic inputs from its dendrites and then induces the change of its membrane potential at the soma ., This process is usually known as dendritic integration 1–3 ., The dendritic integration of synaptic inputs is crucial for neuronal computation 2–4 ., For example , the integration of excitatory and inhibitory inputs has been found to enhance motion detection 5 , regularize spiking patterns 6 , and achieve optimal information coding 7 in many sensory systems ., They have also been suggested to be able to fine tune information processing within the brain , such as the modulation of frequency 8 and the improvement of the robustness 9 of gamma oscillations ., In order to understand how information is processed in neuronal networks in the brain , it is important to understand the computational rules that govern the dendritic integration of synaptic inputs ., Dendritic integration has been brought into focus with active experimental investigations ( see reviews 1 , 10 and references therein ) ., There have also been many theoretical developments based on physiologically realistic neuron models 11 , 12 ., Among those works , only a few investigate quantitative dendritic integration rules for a pair of excitatory and inhibitory inputs 3 , 13 and there has yet to be an extensive investigation of the integration of a pair of excitatory inputs or a pair of inhibitory inputs ., In this work , we propose a precise quantitative rule to characterize the dendritic integration for all types of synaptic inputs and validate this rule via realistic neuron modeling and electrophysiological experiments ., We first develop a theoretical approach to quantitatively characterize the spatiotemporal dendritic integration ., Initially , we introduce an idealized two-compartment passive cable model to understand the 
mathematical structure of the dendritic integration rule ., We then verify the rule by taking into account the complicated dendritic geometry and active ion channels ., For time-dependent synaptic conductance inputs , we develop an asymptotic approach to analytically solve the cable model ., In this approach , the membrane potential is represented by an asymptotic expansion with respect to the input strengths ., Consequently , a hierarchy of cable-type equations with different orders can be derived from the cable model ., These equations can be analytically solved order by order using the Greens function method ., The asymptotic solution to the second order approximation is shown to be in excellent agreement with the numerical solutions of the original cable model with physiologically realistic parameters ., Based on our asymptotic approach , we obtain a new theoretical result , namely , a nonlinear spatiotemporal dendritic integration rule for a pair of synaptic inputs: the summed somatic potential ( SSP ) can be well approximated by the summation of the two postsynaptic potentials and elicited separately , plus an additional third nonlinear term proportional to their product , i . e . 
, ( 1 ) The proportionality coefficient encodes the spatiotemporal information of the input signals , including the input locations and the input arrival times ., In addition , we demonstrate that the coefficient is nearly independent of the input strengths ., Because the correction term to the linear summation of and takes a bilinear form , we will refer to the rule ( 1 ) as the bilinear spatiotemporal dendritic integration rule ., In the remainder of the article , unless otherwise specified , all the membrane potentials will be referred to those measured at the soma ., We note that our bilinear integration rule is consistent with recent experimental observations 3 ., In the experiments 3 , the rule was examined at the time when the excitatory postsynaptic potential ( EPSP ) measured at the soma reaches its peak for a pair of excitatory and inhibitory inputs elicited concurrently ., We demonstrate that our bilinear integration rule is more general than that in Ref ., 3:, ( i ) our rule holds for a pair of excitatory and inhibitory inputs that can arrive at different times;, ( ii ) our rule is also valid at any time and is not limited to the peak time of the EPSP;, ( iii ) our rule is general for all types of paired synaptic input integration , including excitatory-inhibitory , excitatory-excitatory and inhibitory-inhibitory inputs ., Our bilinear integration rule is derived from the two-compartment passive cable model ., We then validate the rule in a biologically realistic pyramidal neuron model with active ion channels embedded ., The simulation results from the realistic model are consistent with the rule derived from the passive cable model ., We further validate the rule in electrophysiological experiments in rat hippocampal CA1 pyramidal neurons ., All of our results suggest that the form of the bilinear integration rule is preserved in the presence of active dendrites ., As mentioned previously , there are thousands of synaptic inputs received by a neuron 
in the brain ., We therefore further apply our analysis to describe the dendritic integration of multiple synaptic inputs ., We demonstrate that the spatiotemporal dendritic integration of all synaptic inputs can be decomposed into the sum of all possible pairwise dendritic integration , and each pair obeys the bilinear integration rule ( 1 ) , i . e . , ( 2 ) where denotes the SSP , denotes the individual EPSP , denotes the individual inhibitory postsynaptic potential ( IPSP ) , , , and are the corresponding proportionality coefficients with superscripts denoting the index of the synaptic inputs ., We then confirm the bilinear integration rule ( 2 ) numerically using realistic neuron modeling ., The decomposition of multiple inputs integration in rule ( 2 ) leads to a graph representation of the dendritic integration ., Each node in the graph corresponds to a synaptic input location , and each edge connecting two nodes represents the bilinear term for a pair of synaptic inputs given at the corresponding locations ., This graph evolves with time , and is all-to-all connected when stimuli are given at all synaptic sites simultaneously ., However , based on simulation results and experimental observations , we can estimate that there are only a small number of activated synaptic integration , or edges in the graph , within a short time interval ., Therefore , the graph representing the dendritic integration can indeed be functionally sparse ., Finally , we comment that , in general , it is theoretically challenging to analytically describe the dynamical response of a neuron with dendritic structures under time-dependent synaptic conductance inputs ., One simple approach to circumvent this difficulty is to analyze the steady state of neuronal input-output relationships by assuming that both the synaptic conductance and the membrane potential are constant 3 , 12 ., Such analyses can be applied to study dendritic integration , but they usually oversimplify the 
description of the spatial integration , and fail to describe the temporal integration ., Another approach to circumvent the difficulty is to study the cable model 14 , 15 analytically or numerically ., For the subthreshold regime , in which voltage-gated channels are weakly activated , the dendrites can be considered as a passive cable ., Along the cable , the membrane potential is linearly dependent on injected current input ., This linearity enables one to use the Greens function method to analytically obtain the membrane potential with externally injected current ., In contrast , the membrane potential depends nonlinearly on the synaptic conductance input 12 ., This nonlinearity greatly complicates mathematical analyses ., Therefore , in order to solve the cable model analytically , one usually makes the approximation of constant synaptic conductance 16 , 17 ., The approximation can help investigate some aspects of dendritic integration , however , the approximation in such a case is not sufficiently realistic because the synaptic conductances in vivo are generally time-dependent ., On the other hand , one can study the dendritic integration in the cable model numerically ., The compartmental modeling approach 14 enables one to solve the cable model with time-dependent synaptic inputs ., This approach has been used to investigate many aspects of dendritic integration ., For instance , it was discovered computationally that dendritic integration of excitatory inputs obeys a certain qualitative rule , i . e . 
, EPSPs are first integrated nonlinearly at individual branches before summed linearly at the soma 18 , 19 , which was verified later in experiments 20 , 21 ., Clearly , the computational approach can help gain insights into various phenomena of spatiotemporal dynamics observed at the dendrites , however , a deep , comprehensive understanding often requires analytical approaches ., Note that this point has also been emphasized in Ref ., 22 ., Here , our analytical asymptotic method can solve the cable model with time-dependent synaptic inputs analytically and reveal a precise quantitative spatiotemporal dendritic integration rule , as will be further illustrated below ., We begin to study the spatiotemporal dendritic integration of a pair of excitatory and inhibitory inputs ., An analytical derivation of the bilinear integration rule is described in the section of Derivation of the Rule ., The details of the cable model used in the derivation can be found in the section of Materials and Methods ., The validation of the bilinear integration rule using the realistic neuron modeling and electrophysiological experiments is described in the section of Validation of the Rule ., The spatial dependence of the coefficient in the rule is described in the section of Spatial Dependence of ., So far we have addressed the dendritic integration for a pair of excitatory and inhibitory inputs ., A natural question arises: how does a neuron integrate a pair of time-dependent synaptic conductance inputs with identical type ?, The dendritic integration of excitatory inputs has been extensively investigated in experiments ( reviewed in Ref . 
1 ) , yet a precise quantitative characterization is still lacking ., According to our idealized cable model , given a pair of excitatory inputs with input strengths and at locations and and at times and , the dynamics of the membrane potential on the dendrite is governed by the following equation: ( 22 ) with the initial and boundary conditions the same as given in Equations ( 4 ) – ( 6 ) ., Similarly , we can represent its solution as an asymptotic series and solve it order by order to obtain the following bilinear integration rule: ( 23 ) where and are EPSPs induced by two individual excitatory inputs , and is the SSP when the two excitatory inputs are present ., Similar to the case of a pair of excitatory and inhibitory inputs , the shunting coefficient only depends on the excitatory input locations and the input time difference ., It does not depend on the EPSPs amplitudes ., Here will still be referred to as a shunting coefficient because the origin of the nonlinear integration for the paired excitatory inputs is exactly the same as that for the paired excitatory and inhibitory inputs from the passive cable model ., The bilinear integration rule ( 23 ) is found to be consistent with the numerical results obtained using the same realistic pyramidal neuron model as the one used in the section of Bilinear Rule for E–I Integration ., For a pair of excitatory inputs with their locations fixed on the dendritic trunk , the rule holds when the amplitude of each EPSP is less than ., For the case of concurrent inputs , at the time when one of the EPSPs reaches its peak value , the shunting component is found to be linearly dependent on , as shown in Fig . 6A ., In addition , as shown in Fig .
6B , the bilinear integration rule is numerically verified in the time interval , for , within which the amplitude of EPSPs are relatively large ., For the case of nonconcurrent inputs , the bilinear integration rule is also numerically verified in the same way , as shown in Fig . 6C–D ., In addition , we find that when the input strengths become sufficiently strong so as to make the depolarized membrane potential too large , i . e . , there is a deviation from the bilinear integration rule ( 23 ) ., This deviation can be ascribed to the voltage-gated ionic channel activities in our realistic pyramidal neuron model ., After blocking the active channels , the rule becomes valid with a different value of for large EPSPs amplitudes , as shown in Fig . 7 ., However , we note that , regardless of input strengths , the amplitude of SC is always two orders of magnitude smaller than the amplitude of SSP ., Therefore , the integration of two excitatory inputs can be naturally approximated by the linear summation of two individual EPSPs , i . e . ., We then perform electrophysiological experiments with a pair of excitatory synaptic inputs to confirm the linear summation ., As expected , this linear summation is also observed in our experiments for both concurrent and nonconcurrent input cases , as shown in Fig . 
6E and 6F , respectively ., Note that , the linear summation is also consistent with experimental observations as reported in Ref ., 24 ., Similarly , for a pair of inhibitory inputs , we can arrive at the following bilinear integration rule from the cable model: ( 24 ) where and are IPSPs induced by two individual inhibitory inputs , and is the SSP when the two inhibitory inputs are present ., Here , is the shunting coefficient that is independent of the IPSPs amplitudes but is dependent on the input time difference and input locations ., The above bilinear integration rule ( 24 ) is consistent with our numerical results using the realistic pyramidal neuron model , as shown in Fig . 8A–D ., Our electrophysiological experimental observations further confirm this rule , as shown in Fig . 8E–H ., In the previous sections , we have discussed the integration of a pair of synaptic inputs ., In vivo , a neuron receives thousands of excitatory and inhibitory inputs from dendrites 2 ., Therefore , we now address the question of whether the integration rule derived for a pair of synaptic inputs can be generalized to the case of multiple inputs ., Our theoretical analysis shows that , for multiple inputs , the SSP can be approximated by the linear sum of all individual EPSPs and IPSPs , plus the bilinear interactions between all the paired inputs with shunting coefficients , , and respectively ( the superscript labels the synaptic inputs ) , i . e . , ( 25 ) We next validate the rule ( 25 ) using the realistic pyramidal neuron model ., It has been reported that , for a CA1 neuron , inhibitory inputs are locally concentrated on the proximal dendrites while excitatory inputs are broadly distributed on the entire dendrites 25 ., Based on this observation , we randomly choose 15 excitatory input locations and 5 inhibitory input locations on the model neurons dendrites ( Fig . 
9A ) ., In the simulation , all inputs are elicited starting randomly from to ., In order to compare Equation ( 25 ) with the SSP simulated in the realistic neuron model , we first measure , , and pair by pair for all possible pairs ., We then record all membrane potential traces and induced by the corresponding individual synaptic inputs ., Our results show that the SSP measured from our simulation is indeed given by the bilinear integration rule ( 25 ) , as shown in Fig . 9B and 9C ., In contrast , the SSP in our numerical simulation deviates significantly from the linear summation of all individual EPSPs and IPSPs ., According to our bilinear integration rule ( 25 ) , the dendritic integration of multiple synaptic inputs can be decomposed into the summation of all possible pairwise dendritic integration ., Therefore , we can map dendritic computation in a dendritic tree onto a graph ., Each dendritic site corresponds to a node in the graph and the corresponding shunting component is mapped to the weight of the edge connecting the two nodes ., We refer to such a graph as a dendritic graph ., The dendritic graph is an all-to-all connected graph if all stimuli are given concurrently ( Fig . 10A ) ., However , the dendritic integration for all possible pairs of synaptic inputs is usually not activated concurrently in realistic situations ., For instance , if the arrival time difference between two inputs is sufficiently large , there is no interaction between them ., The activated level of the nonlinear dendritic integration for a pair of synaptic inputs can be quantified by the SC amplitude—the weight of the edge in the graph ., The simulation result shows that the number of activated edges at any time is relatively small on the dendritic graph ( Fig . 10B–D ) , compared with the total number of edges on the all-to-all connected graph ( Fig . 
10A ) ., Therefore , for the case of a hippocampal pyramidal neuron , the dendritic graph could be functionally sparse in time ., The functional sparsity of a dendritic graph may also exist in neocortical pyramidal neurons ., In vivo , a cortical pyramidal neuron receives about synaptic inputs 26 ., Most of them are from other cortical neurons 27 , 28 , which typically fire about 10 spikes per second in awake animals 29 , 30 ., Thus , the neuron can be expected to receive synaptic inputs per second ., The average number of synaptic inputs within ( membrane potential time constants in vivo ) is ., The number of activated dendritic integration pairs within the interval is , which is relatively small compared with the total possible synaptic integration pairs ., Therefore , the activated integrations or edges in the dendritic graph within a short time window can be indeed functionally sparse ( ) ., In general , the neuronal firing rates vary across different cell types , cortical regions , brain states and so on ., Therefore , based on the above estimate , in an average sense , the graph of dendritic integration is functionally sparse ., Our bilinear dendritic integration rule ( 21 ) is consistent with the rule previously reported 3 , but is more general in the following aspects:, ( i ) Our dendritic integration rule holds at any time and is not limited to the time when the EPSP reaches its peak value ., ( ii ) The rule holds when the two inputs are even nonconcurrent ., This situation often occurs because the excitatory and inhibitory inputs may not always arrive at precisely the same time ., ( iii ) The form of the rule can be extended to describe the integration between a pair of excitatory inputs , a pair of inhibitory inputs , and even multiple inputs of mixed-types ., The spatiotemporal information of synaptic inputs interaction is coded in the shunting coefficient , which is a function of the input locations and input arrival time difference ., Our bilinear 
integration rule holds in the subthreshold regime for a large range of membrane potential ., When we derive the bilinear rule from the passive cable model , we assume that the input strengths or the amplitudes of membrane potentials are required to be small ., This assumption forms the basis of the asymptotic analysis , because the second order asymptotic solutions of EPSP , IPSP and SSP converge to their exact solutions as the asymptotic parameters and ( denoting the excitatory and inhibitory input strengths ) approach zero ., In general , in the passive cable model , the bilinear rule will be more accurate for small amplitudes of EPSPs and IPSPs than large amplitudes ., Importantly , the assumption holds naturally in the physiological regime: when the EPSP amplitude is less than 6mV and the IPSP amplitude is less than -3mV , and are small ., However , even for EPSP amplitude close to the threshold , i . e . , 10mV , which is unusually large physiologically , we can show that the second order asymptotic solution can still well approximate the EPSP with a relative error less than 5% ., Thus the bilinear rule is still valid for large depolarizations near the threshold ., The validity of the bilinear rule for large membrane potentials is also confirmed in both simulations and experiments ., In particular , in the analysis of our experimental data , to validate the bilinear rule , we have already included all the data when the EPSP amplitude is below and close to the threshold because we have only excluded those data corresponding to the case when a neuron fires ., Our bilinear dendritic integration rule ( 21 ) is derived from the passive cable model ., However , the simulation results and the experimental observations demonstrate that the form of dendritic integration is preserved for active dendrites ., Additional simulation results show that for the same input locations , the shunting coefficients are generally larger on the active dendrites than those on the passive
dendrites with all active channels blocked ., We also note that the value of in simulation is different from the value measured in experiments ., This difference may arise from the fact that some parameters of the passive membrane properties , such as the membrane leak conductance , may not be exactly the same as those in the biological neuron , and we have only used a limited set of ion channels in simulation compared with those in the biological neuron ., In addition , the input locations in the simulation and the experiments are different , which may also contribute to this derivation ., However , the bilinear form is a universal feature in both simulation and experiment ., By fixing excitatory input location while varying inhibitory input location , our model exhibits that there exists a region in the distal dendritic trunk within which the shunting inhibition can be more powerful , i . e , a larger , than in proximal dendrites ., This result is consistent with what is reported in Ref ., 31 ., Compared with Ref ., 31 , our work provides a different perspective of dendritic computation ., In their work , the multiple inhibitory inputs can induce a global shunting effect on the dendrites ., However , if we focus on the shunting effect only at the soma instead of the dendrites , our theory shows that all the interactions among multiple inputs can then be decomposed into pairwise interactions , as described by the bilinear integration rule ( 25 ) ., In addition , in this work , we focus on the somatic membrane potential that is directly related to the generation of an action potential ., However , it is also important to investigate the local integration of membrane potentials measured at a dendritic site instead of that measured at the soma ., Asymptotic analysis of the cable model can show that our bilinear integration rule is still valid for the description of the integration on the dendrites ., On the dendrites , the broadly distributed dendritic spines with 
high neck resistances 32 , 33 will filter a postsynaptic potential to a few millivolts on a branch 34 , 35 ., Within this regime our bilinear integration rule is valid ., Note that our rule may fail to capture the supralinear integration of synaptic inputs measured on the dendrites during the generation of a dendritic spike 36 ., However , if the integration is measured at the soma , our rule remains valid even when there is a dendritic spike induced by a strong excitatory input and an inhibitory synaptic input on different branches 3 ., The bilinear integration rule ( 25 ) can help improve the computational efficiency in a simulation of neuronal network with dendritic structures ., By our results , once the shunting coefficients for all pairs of input locations are measured , we can predict the neuronal response at the soma by the bilinear integration rule ( 25 ) ., By taking advantage of this , one can establish library-based algorithms to simulate the membrane potential dynamics of a biologically realistic neuron ., An example of a library-based algorithm can be found in Ref ., 37 ., To be specific , based on the full simulation of a realistic neuron model , we can measure the time-dependent shunting coefficient as a function of the arrival time difference and input locations for all possible pairs of synaptic inputs and record them in a library in advance ., For a particular simulation task , given the specific synaptic inputs on the dendrites , we can then search the library for the corresponding shunting coefficients to compute the neuronal response according to the bilinear integration rule ( 25 ) directly ., In such a computational framework , one can avoid directly solving partial differential equations that govern the spatiotemporal dynamics of dendrites and greatly reduces the computational cost for large-scale simulations of networks of neurons incorporating dendritic integration ., The animal-use protocol was approved by the Animal Management Committee 
of the State Key Laboratory of Cognitive Neuroscience & Learning , Beijing Normal University ( Reference NO . : IACUC-NKLCNL2013-10 ) ., We consider an idealized passive neuron whose isotropic spherical soma is attached to an unbranched cylindric dendrite with finite length and diameter ., Each small segment in the neuron can be viewed as an RC circuit with a constant capacitance and leak conductance density 11 , 38 ., The current conservation within a segment on the dendrite leads to ( 26 ) where is the membrane potential with respect to the resting potential on the dendrite , is the membrane capacitance per unit area , and is the leak conductance per unit area ., Here , is the synaptic current given by: ( 27 ) where and are excitatory and inhibitory synaptic conductance per unit area and and are their reversal potentials , respectively ., When excitatory inputs are elicited at dendritic sites and inhibitory inputs are elicited at dendritic sites , we have ( 28 ) where ., For a synaptic input of type , is the input strength of the input at the location , is the arrival time of the input at the location , is the input location ., The unitary conductance is often modeled as ( 29 ) with the peak value normalized to unity by the normalization factor , and with and as rise and decay time constants , respectively 38 ., Here is a Heaviside function ., The axial current can be derived based on the Ohms law , ( 30 ) where is the axial resistivity ., Taking the limit , Equation ( 26 ) becomes our unbranched dendritic cable model , ( 31 ) In particular , for a pair of excitatory and inhibitory inputs with strength and received at and , and at time and , respectively , we have ( 32 ) Similarly , for a pair of excitatory or inhibitory inputs with strengths and received at and , and at time and ( ) , respectively , we have ( 33 ) For the boundary condition of the cable model Equation ( 31 ) , we assume one end of the dendrite is sealed: ( 34 ) For the other end connecting to 
the soma , which can also be modeled as an RC circuit , by the law of current conservation , we have ( 35 ) where is the somatic membrane area , and is the somatic membrane potential ., The dendritic current flowing to the soma , , takes the form of Equation ( 30 ) at ., Because the membrane potential is continuous at the connection point ( 36 ) we arrive at the other boundary condition at : ( 37 ) For a resting neuron , the initial condition is simply set as ( 38 ) In the absence of synaptic inputs , Equation ( 31 ) is a linear system ., Using a impulse input , its Greens function can be obtained from ( 39 ) with the following boundary conditions and initial condition , For simplicity , letting , , , the solution of Equation ( 39 ) can be obtained from the following system , ( 40 ) with rescaled boundary and initial conditions , where ., Taking the Laplace transform of Equation ( 40 ) , we obtain ( 41 ) Combining the two boundary conditions ( is thus eliminated ) , we have ( 42 ) where ( 43 ) whose denominator is denoted as for later discussions ., For the inverse Laplace transform , we need to deal with singular points that are given by the roots of ., It can be easily verified that these singularities are simple poles and is analytic at infinity ., Then can be written as ( 44 ) where is a constant coefficient in the complex domain , and are the singular points ., Then taking the inverse Laplace transform of Equation ( 44 ) , we obtain ( 45 ) Now we only need to solve and in Equation ( 45 ) to obtain the Greens function of Equation ( 40 ) ., We solve the singular points first ., Defining yields ( 46 ) whose roots can be determined numerically ., There are solutions for with for and Next , to determine the factors we use the residue theorem for integrals ., For a contour that winds in the counter-clockwise direction around the pole and that does not include any other singular points , the integral of on this contour is given by ( 47 ) Using Equations ( 42–44 ) and 
( 47 ) , we obtain ( 48 ) where ( 49 ) for ., The solution of the original Green's function for Equation ( 39 ) can now be expressed as ( 50 ) We first consider the case when a pair of excitatory and inhibitory inputs are received by a neuron ., Similar results can be obtained for a pair of excitatory inputs and a pair of inhibitory inputs ., For the physiological regime ( the amplitude of an EPSP being less than and the amplitude of an IPSP being less than ) , the corresponding required input strengths and are relatively small ., Therefore , given an excitatory input at location and time , and an inhibitory input at location and time , we represent as an asymptotic series in the powers of and , ( 51 ) Substituting Equation ( 51 ) into the cable equation ( 31 ) , order by order , we obtain a set of differential equations ., For the zeroth-order , we have ( 52 ) Using the boundary and initial conditions Equations ( 34 ) , ( 37 ) , and ( 38 ) , the solution is simply ( 53 ) For the first order of excitation , we have ( 54 ) With the help of the Green's function , the solution can be expressed as ( 55 ) here ‘’ denotes convolution in time ., For the second order of excitation , we have ( 56 ) Because is given by Equation ( 55 ) , the solution of Equation ( 56 ) is ( 57 ) Similarly , we can have the first and second order inhibitory solutions , ( 58 ) ( 59 ) For the order of , we have ( 60 ) whose solution is obtained as follows , ( 61 ) For the numerical simulation of the two-compartment passive cable model Equation ( 3 ) , the Crank-Nicolson method 39 was used with time step and space step ., Parameters in our simulation are within the physiological regime 3 , 12 with , , , , , , , ., , , , and ., The time constants here were chosen to be consistent with the conductance inputs in the experiment 3 ., The realistic pyramidal model is the same as that in Ref ., 3 ., The morphology of the reconstructed pyramidal neuron includes 200 compartments and was obtained from the
Duke-Southampton Archive of neuronal morphology 40 ., The passive cable properties and the density and distribution of active conductances in the model neuron were based on published experimental data obtained from hippocampal and cortical pyramidal neurons 18 , 19 , 34 , 41–50 ., We used the NEURON software Version 7 . 3 51 to simulate the model with time step ., The experimental measurements of summation of EPSPs or IPSPs in single hippocampal CA1 pyramidal cells in the acute brain slice followed a method described in Ref ., 3 , with some modifications ., A brief description of the modified experimental procedure is as follows ., Acute hippocampal slices ( thick ) were prepared from Sprague Dawley rats ( postnatal day 14–16 ) , using a vibratome ( VT1200 , Leica ) ., The slices were incubated at 34°C for 30 min before transferring to a recording chamber perfused with the aCSF solution ( 2ml/min; 30–32°C ) ., The aCSF contained ( in mM ) 125 NaCl , 3 KCl , 2 CaCl2 , 2 MgSO4 , 1 . 25 NaH2PO4 , 1 . 3 sodium ascorbate , 0 .
6 sodium pyruvate , 26 NaHCO3 | Introduction, Results, Discussion, Materials and Methods | Neurons process information via integration of synaptic inputs from dendrites ., Many experimental results demonstrate dendritic integration could be highly nonlinear , yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically ., Based on asymptotic analysis of a two-compartment passive cable model , given a pair of time-dependent synaptic conductance inputs , we derive a bilinear spatiotemporal dendritic integration rule ., The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately , plus a third additional bilinear term proportional to their product with a proportionality coefficient ., The rule is valid for a pair of synaptic inputs of all types , including excitation-inhibition , excitation-excitation , and inhibition-inhibition ., In addition , the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations ., The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations ., This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons ., The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs ., The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration , where each paired integration obeys the bilinear rule ., This decomposition leads to a graph representation of dendritic integration , which can be viewed as functionally sparse . 
| A neuron , as a fundamental unit of brain computation , exhibits extraordinary computational power in processing input signals from neighboring neurons ., It usually integrates thousands of synaptic inputs from its dendrites to achieve information processing ., This process is known as dendritic integration ., To elucidate information coding , it is important to investigate quantitative spatiotemporal dendritic integration rules ., However , there has yet to be extensive experimental investigations to quantitatively describe dendritic integration ., Meanwhile , most theoretical neuron models considering time-dependent synaptic inputs are difficult to solve analytically , thus impossible to be used to quantify dendritic integration ., In this work , we develop a mathematical method to analytically solve a two-compartment neuron model with time-dependent synaptic inputs ., Using these solutions , we derive a quantitative rule to capture the dendritic integration of all types , including excitation-inhibition , excitation-excitation , inhibition-inhibition , and multiple excitatory and inhibitory inputs ., We then validate our dendritic integration rule through both realistic neuron modeling and electrophysiological experiments ., We conclude that the general spatiotemporal dendritic integration structure can be well characterized by our dendritic integration rule ., We finally demonstrate that the rule leads to a graph representation of dendritic integration that exhibits functionally sparse properties . | computational neuroscience, neuroscience, biology and life sciences, computational biology | null |
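The Methods text above cites the Crank-Nicolson method for simulating the two-compartment passive cable model, but the discretization symbols were lost in extraction. The following is a minimal sketch of a Crank-Nicolson scheme for a dimensionless passive cable, dV/dt = d²V/dx² − V with sealed (zero-flux) ends; the grid sizes, Gaussian initial condition, and boundary treatment are illustrative assumptions, not the paper's actual parameters or code.

```python
import math

# Crank-Nicolson sketch for the dimensionless passive cable
#   dV/dt = d2V/dx2 - V,  sealed (zero-flux) ends on x in [0, 1].
# Grid and initial condition are illustrative assumptions only.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(V, dt, dx):
    """Advance the cable one time step with Crank-Nicolson."""
    n = len(V)
    r = dt / (2.0 * dx * dx)
    a = [-r] * n
    b = [1.0 + 2.0 * r + dt / 2.0] * n
    c = [-r] * n
    d = [0.0] * n
    for i in range(n):
        lo = V[i - 1] if i > 0 else V[1]      # reflect ghost node at sealed end
        hi = V[i + 1] if i < n - 1 else V[n - 2]
        d[i] = V[i] * (1.0 - 2.0 * r - dt / 2.0) + r * (lo + hi)
    a[0], c[0] = 0.0, -2.0 * r                # zero-flux boundary rows
    a[-1], c[-1] = -2.0 * r, 0.0
    return thomas(a, b, c, d)

nx, dt = 51, 0.001
dx = 1.0 / (nx - 1)
V0 = [math.exp(-((i * dx - 0.5) ** 2) / 0.01) for i in range(nx)]
V = V0[:]
for _ in range(200):                          # integrate to t = 0.2
    V = cn_step(V, dt, dx)
peak0, peak = max(V0), max(V)
ratio = sum(V) / sum(V0)                      # leak term makes total decay ~ e^{-t}
```

The scheme is unconditionally stable, so the time step is limited only by accuracy; the leak term (−V) makes both the peak and the spatial integral of V decay over time.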
811 | journal.pcbi.1005088 | 2016 | Exome Sequencing and Prediction of Long-Term Kidney Allograft Function | Survival of patients afflicted with End Stage Renal Disease ( ESRD ) is superior following kidney transplantation compared to dialysis therapy ., The short-term outcomes of kidney grafts have steadily improved since the early transplants with refinements in immunosuppressive regimens , use of DNA-based human leukocyte antigen ( HLA ) typing , and better infection prophylaxis 1–3 ., Despite these advances , data collected across the USA and Europe show that 40–50% of kidney allografts fail within ten years of transplantation 4 ., This observation strongly suggests that as yet uncharacterized factors , including genomic loci , may adversely impact long-term post-transplantation outcomes ., The HLA is a cluster of genes on the short arm of chromosome 6 and constitutes the major histocompatibility complex ( MHC ) responsible for self/non-self discrimination in humans ., Multiple clinical studies have demonstrated the importance of HLA-matching to improve kidney graft outcome ., Therefore , in many countries , including the USA , donor kidney allocation algorithms include consideration of HLA matching of the kidney recipient and donor ., With widespread incorporation of HLA matching in kidney organ allocation decisions , it has become clearer that HLA mismatching represents an important risk factor for kidney allograft failure but fails to fully account for the invariable decline in graft function and failure in a large number of recipients over time ., Indeed , only a 15% survival difference exists at 10 years post transplantation between the fully matched kidneys and the kidneys mismatched for both alleles at the HLA-A , B and DR loci 5 ., Findings from large cohorts of kidney graft recipients have also been studied to separate the immunological effect mediated by HLA and the non-HLA effects 6 ., Overall , prior observations suggest that mismatches at non-HLA
loci in the genome could influence long-term graft outcomes ., Also , antibodies directed at HLA as well as non-HLA ( e . g . , MHC class I polypeptide-related sequence MICA ) have been associated with allograft rejection and reduced graft survival rates ., Indeed , it has been reported that the presence of anti-MICA antibodies in the pre-transplant sera is associated with graft failure despite HLA matching of the kidney recipient with the organ donor ., Here , we used exome sequencing to determine the sequences of the HLA as well as non-HLA peptides encoded by the donor organ and displayed on its cell surface , as well as bioinformatics analyses to determine donor sequences not present in the recipient ., The allogenomics approach integrates the unique features of transplantation , such as the existence of two genomes in a single individual , and the recipient’s immune system mounting an immune response directed at either HLA or non-HLA antigens displayed by the donor kidney ., In this report , we show that this new concept helps predict long-term kidney transplant function from the genomic information available prior to transplantation ., We found that a statistical model that incorporates time as covariate , HLA , donor age and the AMS ( allogenomics mismatch score , introduced in this study ) , predicts graft function through time better than a model that includes the other factors and covariates , but not the AMS ., The allogenomics concept is the hypothesis that interrogation of the coding regions of the entire genome for both the organ recipient and organ donor DNA can identify the number of incompatible amino-acids ( recognized as non-self by the recipient ) that inversely correlates with long-term function of the kidney allograft ., Fig 1A is a schematic illustration of the allogenomics concept ., Because human autosomes have two copies of each gene , we consider two possible alleles in each genome of a transplant pair ., To this end , we estimate 
allogenomics score contributions between zero and two , depending on the number of different amino acids that the donor genome encodes for at a given protein position ., Fig 1B shows the possible allogenomics score contributions when the amino acids in question are either an alanine , or a phenylalanine or an aspartate amino acid ., The allogenomics mismatch score ( AMS ) is a sum of amino acid mismatch contributions ., Each contribution represents an allele coding for a protein epitope that the donor organ may express and that the recipient immune system could recognize as non-self ( see Equation 1 and 2 in Fig 1C and Materials and Methods and full description in S1 File ) ., We have developed and implemented a computational approach to estimate the AMS from genotypes derived for pairs of recipient and donor genomes ., ( See Materials and Methods for a detailed description of this approach and its software implementation , the allogenomics scoring tool , available at http://allogenomics . campagnelab . org . 
), Our approach was designed to consider the entire set of protein positions measured by a genotyping assay , or restrict the analysis to a subset of positions P in the genome ., In this study , we focused on the subset of genomic sites P that encode for amino acids in trans-membrane proteins ., It is possible that some secreted or intra-cellular proteins can contribute to the allogenomics response , but the set of trans-membrane proteins was considered in this study in order to enrich contributions for epitopes likely to be displayed at the surface of donor kidney cells ., While proteins expressed in kidney could appear to be a better choice , the technical challenge of defining a list of proteins expressed by kidney alone , and perhaps only transiently in some kidney cell type exposed to the surface of the kidney , argues against relying on a kidney expression filter ., Similarly , we did not consider other sets of proteins , and make no claim that the set of transmembrane proteins is an optimal choice ., Because the AMS sums contributions from thousands of genomic sites across the genome , it is an example of a burden test , albeit summed across an entire exome ., The procedure is akin to averaging and the resulting score is much less sensitive to errors introduced by the genotyping assays or analysis approach than previous association studies which considered genotypes individually ., The AMS approach yields a single score per transplant ., This eliminates the need to correct for tens of thousands of statistical tests , which are common in classical association studies ., The allogenomics approach therefore also decreases the number of samples needed to reach statistical power ., In order to test the allogenomics hypothesis , we isolated DNA from kidney graft recipients and their living donors ., We assembled three cohorts: Discovery Cohort ( 10 transplant pairs ) where the allogenomics observation was first made ( these patients were a subset of patients 
enrolled in a multicenter Clinical Trial in Organ Transplantation-04 study of urinary cell mRNA profiling , from whom tissue/cells were collected for future mechanistic studies 7 , 10 transplant pairs ) , and two validation cohorts: one from recipients transplanted at the New York Presbyterian Weill Cornell Medical Center ( Cornell Validation Cohort , 24 pairs ) , and a second validation cohort from recipients transplanted in Paris hospitals ( French Validation Cohort , 19 pairs ) ., Table 1 provides demographic and clinical information about the patients included in our study ., Exome data were obtained for each cohort ., For the Discovery cohort , we used the Illumina TrueSeq exome enrichment kit v3 , covering 62Mb of the human genome ., For the two validation cohorts , DNA sequencing was performed using the Agilent Haloplex assay covering 37Mb of the coding sequence of the human genome ., Primary sequence data analyses were conducted with GobyWeb 8 ( data and analysis management ) , Last 9 ( alignment to the genome ) and Goby 10 ( genotype calls ) ., Table A in S1 File provides statistics of coverage for the exome assays ., Kidney graft function is a continuous phenotype and is clinically evaluated by measuring serum creatinine levels or using estimated glomerular filtration rate ( eGFR ) 11 ., In this study , kidney graft function was evaluated at several time points for each recipient , with the precise time points varying by cohort ., In the discovery cohort , kidney allograft function was measured at 12 , 24 , 36 and 48 months following transplantation using serum creatinine levels and eGFR , calculated using the 2011 MDRD 11 formula ., We examined whether the allogenomics mismatch score is associated with post-transplantation allograft function ., In Fig 2 , we illustrate the association observed between AMS and creatinine levels or eGFR in the Discovery Cohort ., We found positive linear associations between the allogenomics mismatch score and serum 
creatinine level at 36 months post transplantation ( r2 adj . = 0 . 78 , P = 0 . 002 , n = 10 ) but not at 12 or 24 months following kidney transplantation ( Fig 2A , 2B and 2C ) ., We also found a negative linear relationship between the score and eGFR at 36 months post transplantation ( r2 adj . = 0 . 57 , P = 0 . 02 ) but not at 12 or 24 months following kidney transplantation ( Fig 2D , 2E and 2F ) ., These findings suggest that in the Discovery cohort the AMS is predictive of long-term graft function ., It is also possible that the AMS score would predict short-term graft function , but that more data is needed to detect smaller changes in eGFR at early time points , whereas cumulative effects on graft function become detectable at later time points ., Similar observations were made in the two validation cohorts ( see Figures A and B in S1 File ) and discussed in detail in an earlier preprint 12 ., In the models presented so far , we have considered the prediction of graft function separately at different time points ., An alternative analysis would consider time since transplantation , as well as other established predictors of graft function as covariates in the model ., This is particularly useful when studying cohorts where graft function was assessed at several distinct time points ( e . g . , in the French cohort , clinical data describes graft function from 1 to 96 months post transplantation , but few time points have observations for all recipients ) ., To implement this alternative analysis , we fit a mixed linear model of the form: eGFR ~ donor age at time of transplant + AMS + T + ( 1|P ) ( Equation 3 ) , where T is the time post-transplantation , measured in months , and ( 1|P ) a random effect which models separate model intercepts for each donor/recipient pairs ., To determine the effect of AMS on eGFR , we compared the fit of models that did or did not include the AMS ., We found that the effect of AMS is significant ( P = 0 . 0042 , χ2 = 8 . 
1919 , d . f . = 1 ) ., A similar result was obtained if HLA was also used as a covariate in the model ( i . e . , eGFR ~ donor age at time of transplant + AMS + T + HLA + ( 1|P ) ( Equation 4 ) , comparing model with AMS or without , P = 0 . 038 , χ2 = 4 . 284 , d . f . = 1 ) ., In contrast , models that included AMS , but did or did not include the number of ABDR HLA mismatches fit the data equally well ( testing the effect of HLA , P = 0 . 60 , χ2 = 0 . 2737 , d . f . = 1 ) , confirming that the effect of AMS was independent of the number of HLA mismatches ., The models of equations 3 and 4 include a random effect for the transplant pair ( 1|P ) term ., This term models the differences among pairs , such as level of graft function in the days post-transplantation , as well as correlations between repeated measurements for the same recipient ., See Fig C in S1 File for a more direct comparison between AMS and HLA ABDR mismatches ., This comparison indicates that there is a moderate correlation between AMS and the number of HLA ABDR mismatches ., Taken together , these results indicate that the predictive ability of the AMS effect is mostly independent of the number of ABDR mismatches at the HLA loci ., In order to determine if the AMS effect is robust , we fit the model from equation 3 in each cohort independently ., The estimates for the AMS effect are shown in Table 2 ., Despite a limited amount of data to fit the model in each cohort , the estimates are very similar , strongly suggesting that the AMS effect is robust and can be observed even in small cohorts ( 10 , 19 and 24 transplant pairs ) ., In Fig D in S1 File we plot the minor allele frequencies ( MAF ) of the variations that contribute to the AMS in the Discovery and Validation cohorts ., We find that many polymorphisms that contribute to the AMS have low MAF , indicating that they are rare in human populations ., This point needs to be considered for replication studies ., For instance , GWAS 
genotyping platforms may require adequate imputation to infer polymorphisms with low MAF ., Table 3 presents confidence intervals for the parameters of the full model ( equation 4 , including HLA term ) , fit across 53 transplant pairs , as well as the effective range of each of the model predictors ., The table shows the expected impact of each predictor on eGFR when this predictor is varied over its range , assuming all other predictors are kept constant ., For instance , assume that donor age at time of transplant varies from 20 years old to 80 years old ( range: 60 ) ., Across this range , eGFR will decrease by an estimated 28 units as the donor gets older ., The AMS effect has an effective range of 1 , 700 and the corresponding eGFR decrease is 19 units ., This comparison indicates that the strength of the AMS effect is similar to that of donor age and more than five times larger than the effect of HLA- ABDR mismatches ., While HLA-matching is a necessary requirement for successful hematopoietic cell transplants , full HLA compatibility is not an absolute prerequisite for all types of transplantations as indicated by the thousands of solid organ transplants performed yearly despite lack of full matching between the donor and recipient at the HLA-A , B and DR loci ., In view of better patient survival following transplantation compared to dialysis , kidney transplants have become the standard of care for patients with end stage kidney disease and transplants are routinely performed with varying degrees of HLA-class I and II mismatches ., Although , graft outcomes improve with better HLA-matching 13 , excellent long-term graft outcomes with stable graft function have been observed in patients with full HLA -ABDR mismatches ., The success of these transplants clearly suggests that factors other than HLA compatibility may influence the long-term clinical outcome of kidney allografts ., Furthermore , grafts do fail even with the best HLA match 13 , suggesting that 
antigens other than HLA are targets of alloimmune response ., Indeed , several non-HLA antibodies have been identified for renal and cardiac allograft recipients and found detrimental to long-term outcome 14 , 15 ., These antibodies were found to target antigens expressed on endothelial and epithelial cells but also on a variety of parenchymal and immune cells and can be measured prior to transplantation ., These prior studies support the notion that non-HLA antibodies can influence long-term outcome in transplantation ., Recipients of a kidney transplant have two genomes in their body: their germline DNA , and the DNA of the donor ., It is clear that a Mendelian genetic transmission mechanism is not at play in transplantation , yet , this assumption has been made in most of the transplantation genomic studies published to date 16 , 17 ., While several case-control studies have been conducted with large organ transplant cohorts , the identification of genotype/phenotype associations has been limited to the discoveries of polymorphisms with small effect , that have been reviewed in 18 , and have often not been replicated 19–21 ., Rather than focusing on specific genomic sites , the allogenomics concept sums contributions of many mismatches that can impact protein sequence and structure and could engender an immune response in the graft recipient ., These allogenomics mismatches , captured in our study , represent the sequences of non-HLA trans-membrane proteins , some of which may help initiate cellular and humoral immunity directed at the allograft ., This study used eGFR as a surrogate marker for long-term graft survival ., The advantage of focusing on eGFR is that it is measured as part of clinical care on a yearly basis for each recipient , and eGFR has been associated with long-term outcome in multiple studies ., Since acute rejection has also been associated with a decrease in long-term graft survival , it may also serve as a surrogate marker for long-term 
kidney allograft survival ., Acute rejection however is a rare event with current immunosuppressive regimens and given the relatively small size of our study cohort , we would not have had sufficient cases to examine the association between acute rejection and the allogenomics score ., Another consideration for not using acute rejection is that acute rejection only represents a fraction of the mechanisms that lead to graft loss 22 ., The allogenomics concept that we present in this manuscript postulates a mechanism for the development of the immune response in the transplant recipient: immunological and biophysical principles strongly suggest that alleles present in the donor genome , but not in the recipient genome , will have the potential to produce epitopes that the recipient immune system will recognize as non-self ., This reasoning explains why the allogenomics score is not equivalent to the genetic measures of allele sharing distance that have been used to perform genetic clustering of individuals 23 ., This manuscript also suggests that allogenomic mismatches in proteins expressed at the surface of donor cells could explain why some recipients’ immune systems mount an attack against the donor organ , while other patients tolerate the transplant for many years , when given similar immunosuppressive regimens ., If the results of this study are confirmed in additional independent transplant cohorts ( renal transplants , solid or hematopoeitic cell transplants ) , they may prompt the design of prospective clinical trials to evaluate whether allocating organs to recipients with a combination of low allogenomics mismatch scores and different HLA mismatch scores improves long term graft outcome ., A positive answer to this question could profoundly impact the current clinical and regulatory framework for assigning organs to ESRD patients ., In this study , we introduced the allogenomics concept to quantitatively estimate the histoincompatibility between living 
donor and recipient outside of the HLA loci ., We tested the simplest model derived from this concept to calculate an allogenomics mismatch score ( AMS ) reflecting the possible donor specific epitopes displayed on the cell surface ., We demonstrated that the AMS , which can be estimated before transplantation , helps predict post-transplantation kidney graft function more accurately than HLA-mismatches alone ., Interestingly , the strength of the correlation increases with the time post transplantation , an intriguing finding observed in both the discovery cohort and the validation cohorts ., We chose the simplest model to test the allogenomics concept and did not restrict the score to contributions from the peptides that can fit in the HLA groove despite their computational predictability 24 ., It is possible that such restriction would increase the score’s ability to predict renal function post transplantation ., However , such a filter assumes that HLA and associated peptides are the only stimuli for the anti-allograft response and does not take into consideration allorecognition involving innate effectors ( NK cells or NKT cells for example , the Killer-cell Immunoglobulin-like Receptor KIR genes , iTCR , the invariant T Cell Receptor , and TLR , Toll Like Receptor , among others ) 25 ., The allogenomics concept incorporating amino acid mismatches capable of triggering adaptive as well as innate immunity could be considered an important strength of the approach ., Recent evidence indicates that mutations in splice sites , although rare , are responsible for a large proportion of disease risk 26 ., The allogenomics approach presented in this manuscript does not incorporate knowledge of how polymorphisms in splice sites affect protein sequences ., We anticipate that future developments would consider longer splice forms in the donor as allogenomics ., Such an approach could score additional donor protein residues as allogenomics mismatches when the sequence is 
not present in the predicted proteome of the recipient ., We chose to focus this study on living , ABO compatible ( either related or non-related ) donors because kidney transplantation can be planned in advance and because differences in cold ischemia times and other covariates common in deceased donor transplants are negligible when focusing on living donors , especially in small cohorts ., The selection criteria for deceased donors include consideration of HLA matching , calculated panel reactive antibody and the age of the recipient ., Compared to live donors we expect that the range of the AMS in deceased donors will be comparable to that in our discovery cohort composed primarily of unrelated donors ., Since many additional factors can independently influence graft function after transplantation from a deceased donor ( e . g . cold ischemia time ) , potentially much larger cohorts may be required in such settings to achieve sufficient power to adequately control for the covariates relevant to deceased donors and to detect the allogenomics effect ., While we have not attempted to optimize the set of sites considered to estimate the allogenomics mismatch score , it is possible that a reduced and more focused subsets of amino acid mismatches could increase the predictive ability of the score ., For instance , the AMS could be applied to look for genes with a high allogenomic mismatch burden ., Such studies would require larger cohorts and may enable the discovery of loci enriched in allogenomics mismatches responsible for a part of the recipient alloresponse against yet unsuspected donor antigens ., Their discovery might foster the development of new immunosuppressive agents targeting the expression of these immuno-dominant epitopes ., However , our study also raises a novel mechanistic hypothesis: the total burden of allogenomics mismatches might be more predictive of graft function , than mismatches at specific loci , as was previously widely expected 17 ., 
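The zero-to-two per-position contributions described above ( Fig 1B ) can be illustrated with a toy calculation: a donor allele contributes when the amino acid it encodes at a protein position is absent from the recipient's two alleles, and the AMS sums these contributions over a chosen set of positions. The genotypes and positions below are invented for illustration only; the study's actual computation was performed by the allogenomics scoring tool.

```python
# Toy illustration of an allogenomics-style mismatch score (hypothetical
# data; not the paper's pipeline). Each position holds a 2-tuple of amino
# acids, one per allele.

def site_contribution(recipient_aa, donor_aa):
    """Count donor amino acids absent from the recipient's pair (0, 1, or 2)."""
    recipient_set = set(recipient_aa)
    return sum(1 for aa in donor_aa if aa not in recipient_set)

def allogenomics_mismatch_score(recipient, donor, positions):
    """Sum per-position contributions over a chosen set of positions,
    e.g. sites encoding amino acids of trans-membrane proteins."""
    return sum(site_contribution(recipient[p], donor[p]) for p in positions)

# hypothetical genotypes at three positions (A = alanine, F = phenylalanine,
# D = aspartate, the example amino acids of Fig 1B)
recipient = {1: ("A", "A"), 2: ("F", "D"), 3: ("A", "F")}
donor     = {1: ("A", "F"), 2: ("D", "D"), 3: ("D", "D")}
ams = allogenomics_mismatch_score(recipient, donor, [1, 2, 3])  # 1 + 0 + 2 = 3
```

Because the score is a sum over thousands of such sites, single genotyping errors perturb it only slightly, which is the burden-test property noted in the Results.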
The study was reviewed and approved by the Weill Cornell Medical College Institutional Review Board ( protocol #1407015307 “Predicting Long-Term Function of Kidney Allograft by Allogenomics Score” , approved 09/09/2014 ) ., The second study involving the French cohort was approved by the Comité de Protection des Personnes ( CPP ) , Ile de France 5 , ( 02/09/2014 ) ., Codes were used to ensure donor and recipient anonymity ., All subjects gave written informed consent ., Living donor ABO compatible kidney transplantations were performed according to common immunological rules for kidney transplantation with a mandatory negative IgG T-cell complement-dependent cytotoxicity cross-match ., Briefly , genotypes of donors and recipients were assayed by exome sequencing ( Illumina TruSeq enrichment kit for the Discovery Cohort and Agilent Haloplex kit for the Cornell Validation Cohort and the French Validation Cohort ) ., Reads were aligned to the human genome with the Last 9 aligner integrated as a plugin in GobyWeb 8 ., Genotype calls were made with Goby 10 and GobyWeb 8 ., Prediction of polymorphism impact on the protein sequence were performed with the Variant Effect Predictor 27 ., Genes that contain at least one transmembrane segment were identified using Ensembl Biomart 28 ., We selected 10 kidney transplant recipients from those who had consented to participate in the Clinical Trials in Organ Transplantation-04 ( CTOT-04 ) , a multicenter observational study of noninvasive diagnosis of renal allograft rejection by urinary cell mRNA profiling ., We included only the recipients who had a living donor kidney transplant and along with their donors , had provided informed consent for the use of their stored biological specimens for future research ., Pairs were limited to those where enough DNA could be extracted to perform the exome assay for both donor and recipient ., Subjects were not selected on the basis of eGFR , whose values were collected after obtaining 
sequence data ., The demographic and clinical information of the Discovery cohort is shown in Table 1 ., DNA was extracted from stored peripheral blood using the EZ1 DNA blood kit ( Qiagen ) based on the manufacturer’s recommendation ., DNA was enriched for exome regions with the TruSeq exome enrichment kit v3 ., Sequencing libraries were constructed using the Illumina TruSeq kit DNA sample preparation kit ., Briefly , 1 . 8 μg of genomic DNA was sheared to average fragment size of 200 bp using the Covaris E220 ( Covaris , Woburn , MA , USA ) ., Fragments were purified using AmpPureXP beads ( Beckman Coulter , Brae , CA , USA ) to remove small products ( <100 bp ) , yielding 1 μg of material that was end-polished , A-tailed and adapter ligated according to the manufacturer’s protocol ., Libraries were subjected to minimal PCR cycling and quantified using the Agilent High Sensitivity DNA assay ( Agilent , Santa Clara , CA , USA ) ., Libraries were combined into pools of six for solution phase hybridization using the Illumina ( Illumina , San Diego , CA , USA ) TruSeq Exome Enrichment Kit ., Captured libraries were assessed for both quality and yield using the Agilent High Sensitivity DNA assay Library Quantification Kit ., Sequencing was performed with six samples per lane using the Illumina HiSeq 2000 sequencer and version 2 of the sequencing-by-synthesis reagents to generate 100 bp single-end reads ( 1×100SE ) ., We studied 24 kidney transplant recipients who had a living donor transplant at the NewYork-Presbyterian Weill Cornell Medical Center ., This was an independent cohort and none of the recipients had participated in the CTOT-04 trial ., Recipients were selected randomly based on the availability of archived paired recipient-donor DNA specimens obtained at the time of transplantation at our Immunogenetics and Transplantation Laboratory ., DNA extraction from peripheral blood was done using the EZ1 DNA blood kit ( Qiagen ) based on the manufacturer’s 
recommendation ., We studied 19 kidney transplant recipients who had a living donor transplant at Tenon Hospital ., This represented a third independent cohort ., Recipients were selected randomly based on the availability of archived paired recipient-donor DNA specimens obtained either at the Laboratoire d’histocompatibilité , Hôpital Saint Louis APHP , Paris or during patient’s follow-up between October 2014 and January 2015 ., DNA extraction from peripheral blood was done using the Nucleospin blood L kit ( Macherey-Nagel ) based on the manufacturer’s recommendation ., The Cornell and French Validation cohorts were both assayed with the Agilent Haloplex exome sequencing assay ., The Haloplex assay enriches 37 Mb of coding sequence in the human genome and was selected for the validation cohorts because it provides a strong and consistent exome enrichment efficiency for the regions of the genome most likely to contribute to allogenomics contributions in protein sequences ., In contrast , the TruSeq assay ( used for the Discovery Cohort ) enriches 63 Mb of sequence and includes untranslated regions ( 5’ and 3’ UTRs ) , which do not contribute to allogenomics scores and therefore do not need to be sequenced to estimate the score ., Libraries were prepared as per the Agilent recommended protocol ., Sequencing was performed on an Illumina 2500 sequencer with the 100 bp paired-end protocol recommended by Agilent for the Haloplex assay ., Libraries were multiplexed 6 per lane to yield approximately 30 million paired-end reads per sample ., We determined the minor allele frequency of sites used in the calculation of the allogenomics mismatch score using data from the Exome Aggregation Consortium ( ExAC ) ., This resource made it possible to estimate MAF for most of the variations that are observed in the subjects included in our discovery and validation cohorts ., Data was downloaded and analyzed with R and MetaR scripts ( see analysis scripts provided at
https://bitbucket . org/campagnelaboratory/allogenomicsanalyses ) ., We used the NHLBI Exome Sequencing Project ( ESP ) release ESP6500SI-V2 30 ., The ESP measured genotypes in a population of 6 , 503 individuals across the EA and AA populations using an exome-sequencing assay 30 ., Of 12 , 657 sites measured in the validation cohort with an allogenomics contribution strictly larger than zero ( 48 exomes , sites with contributions across 24 clinical pairs of transplants ) , 9 , 765 ( 78% ) have also been reported in ESP ( 6 , 503 exomes ) ., Illumina sequence base calling was performed at the Weill Cornell Genomics Core Facility ., Sequence data in FASTQ format were converted to the compact-reads format using the Goby framework 14 ., Compact-reads were uploaded to the GobyWeb 8 system and aligned to the 1000 Genomes reference build for the human genome ( corresponding to hg19 , released in February 2009 ) using the Last 9 , 31 aligner ( parallelized in a GobyWeb 8 plugin ) ., Single nucleotide polymorphisms ( SNPs ) and small indel genotypes were called using GobyWeb with the Goby 32 discover-sequence-variants mode ( parameters: minimum variation support = 3 , minimum number of distinct read indices = 3 ) and annotated using the Variant Effect Predictor 27 ( VEP version 75–75 . 7 ) from Ensembl ., The data were downloaded as a Variant Call Format 33 ( VCF ) file from GobyWeb 8 and further processed with the allogenomics scoring tool ( see http://allogenomics . campagnelab . org ) ., The allogenomics mismatch score Δ ( r , d ) is estimated for a recipient r and donor d as the sum of score mismatch contributions ( see Fig 1C and supplementary methods in S1 File ) ., Analyses were conducted with either JMP Pro version 11 ( SAS Inc . ) or metaR ( http://metaR . campagnelab .
org ) ., Fig 2 as well as the Figures in S1 File were constructed with metaR analysis scripts and edited with Illustrator CS6 to increase some font sizes or adjust the text of some axis labels ., The model that includes the time post-transplantation as a covariate was constructed in metaR and JMP ., The R implementation of the train linear model statement uses the lm R function ., This model was executed using the R language 3 . 1 . 3 ( 2015-03-09 ) packaged in the docker image fac2003/rocker-metar:1 . 4 . 0 ( https://hub . docker . com/r/fac2003/rocker-metar/ ) ., Models with random effects were estimated with metaR 1 . 5 . 1 and R ( train mixed model and compare mixed models statements , which use the lme4 R package 34 ) ., Comparison of fit for models with random effects was obtained by training each model alternative with REML = FALSE and performing an ANOVA test , as described in 35 ., We distribute the code necessary to reproduce most of the analysis presented in this manuscript at https://bitbucket . org/campagnelaboratory/allogenomicsanalyses .
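As a concrete illustration, the allogenomics mismatch score reduces to counting, at each protein-altering site in a transmembrane protein, the donor amino acids that are absent from the recipient, and summing these contributions over all sites. The encoding below ( sets of residues per site ) is a hypothetical simplification for illustration, not the published allogenomics scoring tool:

```python
def allogenomics_mismatch_score(donor, recipient):
    """Hypothetical sketch of the AMS computation: sum, over protein-altering
    sites in transmembrane proteins, of the donor amino acids absent from the
    recipient. Genotypes are encoded as {site_id: set_of_residues}."""
    score = 0
    for site, donor_residues in donor.items():
        recipient_residues = recipient.get(site, set())
        # each donor residue the recipient lacks is a potential alloantigen
        score += len(donor_residues - recipient_residues)
    return score

# Toy example: heterozygous donor (A/V) vs. homozygous recipient (A/A) at one site
donor = {"TM_site_1": {"A", "V"}, "TM_site_2": {"G"}}
recipient = {"TM_site_1": {"A"}, "TM_site_2": {"G"}}
```

With these toy genotypes the score is 1, contributed by the donor-only V residue at the first site; identical genotypes give a score of 0.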
| Introduction, Results, Discussion, Materials and Methods | Current strategies to improve graft outcome following kidney transplantation consider information at the human leukocyte antigen ( HLA ) loci ., Cell surface antigens , in addition to HLA , may serve as the stimuli as well as the targets for the anti-allograft immune response and influence long-term graft outcomes ., We therefore performed exome sequencing of DNA from kidney graft recipients and their living donors and estimated all possible cell surface antigens mismatches for a given donor/recipient pair by computing the number of amino acid mismatches in trans-membrane proteins ., We designated this tally as the allogenomics mismatch score ( AMS ) ., We examined the association between the AMS and post-transplant estimated glomerular filtration rate ( eGFR ) using mixed models , considering transplants from three independent cohorts ( a total of 53 donor-recipient pairs , 106 exomes , and 239 eGFR measurements ) ., We found that the AMS has a significant effect on eGFR ( mixed model , effect size across the entire range of the score: -19 . 4 [ -37 . 7 , -1 . 1 ] , P = 0 . 0042 , χ2 = 8 . 1919 , d . f . = 1 ) that is independent of the HLA-A , B , DR matching , donor age , and time post-transplantation ., The AMS effect is consistent across the three independent cohorts studied and similar to the strong effect size of donor age ., Taken together , these results show that the AMS , a novel tool to quantify amino acid mismatches in trans-membrane proteins in individual donor/recipient pair , is a strong , robust predictor of long-term graft function in kidney transplant recipients . | The article describes a new concept to help match donor organs to recipients for kidney transplantation ., The concept relies on the ability to measure the individual DNA of potential donors and recipients ., When the data about genomes ( i . e .
, DNA ) of possible donors and recipients are available , the article describes how data can be computationally compared to identify differences in these genomes and quantify the possible future impact of these differences on the functioning of the graft ., The concept presented in the article determines a score for each pair of possible donor and recipient ., This score is called the allogenomics mismatch score ., The study tested the ability of this score to predict graft function ( the ability of the graft to filter blood ) in the recipient several years after transplantation surgery ., The study found that , in three small sets of patients tested , the score is a strong predictor of graft function ., Prior studies often assumed that only a small number of locations in the genome were most likely to have an impact on graft function , while this study found initial evidence that differences across DNA that code for a large number of proteins can have a combined impact on graft function . | urinary system procedures, medicine and health sciences, organ transplantation, immunology, biomarkers, human genomics, surgical and invasive medical procedures, clinical medicine, renal transplantation, genome analysis, kidneys, transplantation, immune system proteins, proteins, creatinine, biochemistry, anatomy, clinical immunology, transplantation immunology, genetics, biology and life sciences, renal system, genomics, computational biology, genomic medicine | null |
2,265 | journal.pcbi.1002267 | 2,011 | Dynamical and Structural Analysis of a T Cell Survival Network Identifies Novel Candidate Therapeutic Targets for Large Granular Lymphocyte Leukemia | Living cells perceive and respond to environmental perturbations in order to maintain their functional capabilities , such as growth , survival , and apoptosis ., This process is carried out through a cascade of interactions forming complex signaling networks ., Dysregulation ( abnormal expression or activity ) of some components in these signaling networks affects the efficacy of signal transduction and may eventually trigger a transition from the normal physiological state to a dysfunctional system 1 manifested as diseases such as diabetes 2 , 3 , developmental disorders 4 , autoimmunity 5 and cancer 4 , 6 ., For example , the blood cancer T-cell large granular lymphocyte ( T-LGL ) leukemia exhibits an abnormal proliferation of mature cytotoxic T lymphocytes ( CTLs ) ., Normal CTLs are generated to eliminate cells infected by a virus , but unlike normal CTLs which undergo activation-induced cell death after they successfully fight the virus , leukemic T-LGL cells remain long-term competent 7 ., The cause of this abnormal behavior has been identified as dysregulation of a few components of the signal transduction network responsible for activation-induced cell death in T cells 8 ., Network representation , wherein the systems components are denoted as nodes and their interactions as edges , provides a powerful tool for analyzing many complex systems 9 , 10 , 11 ., In particular , network modeling has recently found ever-increasing applications in understanding the dynamic behavior of intracellular biological systems in response to environmental stimuli and internal perturbations 12 , 13 , 14 ., The paucity of knowledge on the biochemical kinetic parameters required for continuous models has called for alternative dynamic approaches ., Among the most successful approaches are 
discrete dynamic models in which each component is assumed to have a finite number of qualitative states , and the regulatory interactions are described by logical functions 15 ., The simplest discrete dynamic models are the so-called Boolean models that assume only two states ( ON or OFF ) for each component ., These models were originally introduced by S . Kauffman and R . Thomas to provide a coarse-grained description of gene regulatory networks 16 , 17 ., A Boolean network model of T cell survival signaling in the context of T-LGL leukemia was previously constructed by Zhang et al 18 through performing an extensive literature search ., This network consists of 60 components , including proteins , mRNAs , and small molecules ( see Figure 1 ) ., The main input to the network is “Stimuli” , which represents virus or antigen stimulation , and the main output node is “Apoptosis” , which denotes programmed cell death ., Based on a random order asynchronous Boolean dynamic model of the assembled network , Zhang et al identified a minimal number of dysregulations that can cause the T-LGL survival state , namely overabundance or overactivity of the proteins platelet-derived growth factor ( PDGF ) and interleukin 15 ( IL15 ) ., Zhang et al carried out a preliminary analysis of the networks dynamics by performing numerical simulations starting from one specific initial condition ( corresponding to resting T cells receiving antigen stimulation and over-abundance of the two proteins PDGF and IL15 ) ., Once the known deregulations in T-LGL leukemia were reproduced , each of these deregulations was interrupted individually , by setting the nodes status to the opposite state , to predict key mediators of the disease ., Yet , a complete dynamic analysis of the system , including identification of the attractors ( e . g . 
steady states ) of the system and their corresponding basin of attraction ( precursor states ) , as well as a thorough perturbation analysis of the system considering all possible initial states , is lacking ., Performing this analysis can provide deeper insights into unknown aspects of T-LGL leukemia ., Stuck-at-ON/OFF fault is a very common dysregulation of biomolecules in various cancer diseases 19 ., For example , stuck-at-ON ( constitutive activation ) of the RAS protein in the mitogen-activated protein kinase pathways leads to aberrant cell proliferation and cancer 19 , 20 ., Thus identifying components whose stuck-at values result in the clearance , or alternatively , the persistence of a disease is extremely beneficial for the design of intervention strategies ., As there is no known curative therapy for T-LGL leukemia , identification of potential therapeutic targets is of utmost importance 21 ., In this paper , we carry out a detailed analysis of the T-LGL signaling network by considering all possible initial states to probe the long-term behavior of the underlying disease ., We employ an asynchronous Boolean dynamic framework and a network reduction method , which we previously proposed 22 , to identify the attractors of the system and analyze their basins of attraction ., This analysis allows us to confirm or predict the T-LGL states of 54 components of the network ., The predicted state of one of the components ( SMAD ) is validated by new wet-bench experiments ., We then perform node perturbation analysis using the dynamic approach and a structural method proposed in 23 to study to what extent does each component contribute to T-LGL leukemia ., Both methods give consistent results and together identify 19 key components whose disruption can reverse the abnormal state of the signaling network , thereby uncovering potential therapeutic targets for this disease , some of which are also corroborated by experimental evidence ., Boolean models belong to the 
class of discrete dynamic models in which each node of the network is characterized by an ON ( 1 ) or OFF ( 0 ) state and usually the time variable t is also considered to be discrete , i . e . it takes nonnegative integer values 24 , 25 ., The future state of each node vi is determined by the current states of the nodes regulating it according to a Boolean transfer function vi ( t + 1 ) = fi ( vi1 ( t ) , vi2 ( t ) , … , viki ( t ) ) , where ki is the number of regulators of vi ., Each Boolean function ( rule ) represents the regulatory relationships between the components and is usually expressed via the logical operators AND , OR and NOT ., The state of the system at each time step is denoted by a vector whose ith component represents the state of node vi at that time step ., The discrete state space of a system can be represented by a state transition graph whose nodes are states of the system and edges are allowed transitions among the states ., By updating the nodes states at each time step , the state of the system evolves over time and following a trajectory of states it eventually settles down into an attractor ., An attractor can be in the form of either a fixed point , in which the state of the system does not change , or a complex attractor , where the system oscillates ( regularly or irregularly ) among a set of states ., The set of states leading to a specific attractor is called the basin of attraction of that attractor ., In order to evaluate the state of each node at a given time instant , synchronous as well as asynchronous updating strategies have been proposed 24 , 25 ., In the synchronous method all nodes of the network are updated simultaneously at multiples of a common time step ., The underlying assumption of this update method is that the timescales of all the processes occurring in a system are similar ., This is a quite strong and potentially unrealistic assumption , which in particular may not be suited for intracellular biological processes due to the variety of timescales associated with
transcription , translation and post-translational mechanisms 26 ., To overcome this limitation , various asynchronous methods have been proposed wherein the nodes are updated based on individual timescales 25 , 27 , 28 , 29 , 30 , including deterministic methods with fixed node timescales and stochastic methods such as random order asynchronous method 27 wherein the nodes are updated in random permutations ., In a previous work 22 , we carried out a comparative study of three different asynchronous methods applied to the same biological system ., That study suggested that the general asynchronous ( GA ) method , wherein a randomly selected node is updated at each time step , is the most efficient and informative asynchronous updating strategy ., This is because deterministic asynchronous 22 or autonomous 30 Boolean models require kinetic or timing knowledge , which is usually missing , and random order asynchronous models 27 are not computationally efficient compared to the GA models ., In addition , the superiority of the GA approach has been corroborated by other researchers 29 and the method has been used in other studies as well 31 , 32 ., We thus chose to employ the GA method in this work , and we implemented it using the open-source software library BooleanNet 33 ., It is important to note that the stochasticity inherent to this method may cause each state to have multiple successors , and thus the basins of attraction of different attractors may overlap ., For systems with multiple fixed-point attractors , the absorption probabilities to each fixed point can be computed through the analysis of the Markov chain and transition matrix associated with the state transition graph of the system 34 ., Given a fixed point , node perturbations can be performed by reversing the state of the nodes i . e . 
by knocking out the nodes that stabilize in an ON state in the fixed point or over-expressing the ones that stabilize in an OFF state ., A Boolean network with n nodes has a total of 2^n states ., This exponential dependence makes it computationally intractable to map the state transition graphs of even relatively small networks ., This calls for developing efficient network reduction approaches ., Recent efforts towards addressing this challenge consist of iteratively removing single nodes that do not regulate their own function and simplifying the redundant transfer functions using Boolean algebra 35 , 36 ., Naldi et al 35 proved that this approach preserves the fixed points of the system and that for each ( irregular ) complex attractor in the original asynchronous model there is at least one complex attractor in the reduced model ( i . e . network reduction may create spurious oscillations ) ., Boolean networks often contain nodes whose states stabilize in an attracting state after a transient period , regardless of updating strategy or initial conditions ., The attracting states of these nodes can be readily identified by inspection of their Boolean functions ., In a previous work 22 we proposed a method of network simplification by ( i ) pinpointing and eliminating these stabilized nodes and ( ii ) iteratively removing a simple mediator node ( e . g .
a node that has one incoming edge and one outgoing edge ) and connecting its input ( s ) to its target ( s ) ., Our simplification method shares similarities with the method proposed in 35 , 36 , with the difference that we only remove stabilized nodes ( which have the same state on every attractor ) and simple mediator nodes rather than eliminating each node without a self loop ., Thus their proof regarding the preservation of the steady states by the reduction method holds true in our case ., We employed this simplification method for the analysis of a signal transduction network in plants and verified by using numerical simulations that it preserves the attractors of that system ., In this work , we employ this reduction method to simplify the T-LGL leukemia signal transduction network synthesized by Zhang et al 18 , thereby facilitating its dynamical analysis ., We also note that the first step of our simplification method is similar to the logical steady state analysis implemented in the software tool CellNetAnalyzer 37 , 38 ., We thus refer to this step as logical steady state analysis throughout the paper ., It should be noted that the fixed points of a Boolean network are the same for both synchronous and asynchronous methods ., In order to obtain the fixed points of a system one can solve the set of Boolean equations independent of time ., To this end , we first fix the state of the source nodes ., We then determine the nodes whose rules depend on the source nodes and will either stabilize in an attracting state after a time delay or otherwise their rules can be simplified significantly by plugging in the state of the source nodes ., Iteratively inserting the states of stabilized nodes in the rules ( i . e . 
employing logical steady state analysis ) will result in either the fixed point ( s ) of the system , or the partial fixed point ( s ) and a remaining set of equations to be solved ., In the latter case , if the remaining set of equations is too large to obtain its fixed point ( s ) analytically , we take advantage of the second step of our reduction method 22 to simplify the resulting network and to determine a simpler set of Boolean rules ., By solving this simpler set of equations ( or performing numerical simulations , if necessary ) and plugging the solutions into the original rules , we can then find the states of the removed nodes and determine the attractors of the whole system accordingly ., For the analysis of basins of attraction of the attractors , we perform numerical simulations using the GA update method ., The topology ( structure ) and the function of biological networks are closely related ., Therefore , structural analysis of biological networks provides an alternative way to understand their function 39 , 40 ., We have recently proposed an integrative method to identify the essential components of any given signal transduction network 23 ., The starting point of the method is to represent the combinatorial relationship of multiple regulatory interactions converging on a node v by a Boolean rule v* = fv ( u1 , u2 , … , uk ) , where u1 , … , uk are the regulators of node v .
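Boolean rules of this form can be encoded directly as functions over node states , and , because the fixed points of a Boolean network are the same under synchronous and asynchronous updating , they can be found by solving f ( s ) = s , which for small sub-networks is feasible by exhaustive enumeration . The sketch below uses toy three-node rules ( mutual inhibition plus a downstream target ) , not rules from the T-LGL model:

```python
from itertools import product

# Toy rules (NOT from the T-LGL model): mutual inhibition between A and B,
# with C activated by A. Each rule gives the next state of one node.
rules = {
    "A": lambda s: not s["B"],   # A* = NOT B
    "B": lambda s: not s["A"],   # B* = NOT A
    "C": lambda s: s["A"],       # C* = A
}

def fixed_points(rules):
    """Enumerate all states s with f(s) = s; such states are fixed points
    under any updating scheme (synchronous or asynchronous)."""
    nodes = sorted(rules)
    fps = []
    for values in product([False, True], repeat=len(nodes)):
        s = dict(zip(nodes, values))
        if all(rules[n](s) == s[n] for n in nodes):
            fps.append(s)
    return fps
```

For these toy rules the enumeration returns exactly two fixed points , ( A ON , B OFF , C ON ) and ( A OFF , B ON , C OFF ) , mirroring how mutual inhibition yields bistability; the exhaustive search is only practical once reduction has brought the network down to a handful of nodes .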
The method consists of two main steps ., The first step is the expansion of a signaling network to a new representation by incorporating the sign of the interactions as well as the combinatorial nature of multiple converging interactions ., This is achieved by introducing a complementary node for each component that plays a role in negative regulations ( NOT operation ) as well as introducing a composite node to denote conditionality among two or more edges ( AND operation ) ., This step eliminates the distinction of the edge signs; that is , all directed edges in the expanded network denote activation ., In addition , the AND and OR operators can be readily distinguished in the expanded network , i . e . , multiple edges ending at composite nodes are added by the AND operator , while multiple edges ending at original or complementary nodes are cumulated by the OR operator ., The second step is to model the cascading effects following the loss of a node by an iterative process that identifies and removes nodes that have lost their indispensable regulators ., These two steps allow ranking of the nodes by the effects of their loss on the connectivity between the networks input ( s ) and output ( s ) ., We proposed two connectivity measures in 23 , namely the simple path ( SP ) measure , which counts the number of all simple paths from inputs to outputs , and a graph measure based on elementary signaling modes ( ESMs ) , defined as a minimal set of components that can perform signal transduction from initial signals to cellular responses ., We found that the combinatorial aspects of ESMs pose a substantial obstacle to counting them in large networks and that the SP measure has a similar performance as the ESM measure since both measures incorporate the cascading effects of a nodes removal arising from the synergistic relations between multiple interactions ., Therefore , we employ the SP measure and define the importance value of a component v as IV ( v ) = [ NSP ( Gexp ) − NSP ( GΔv ) ] / NSP ( Gexp ) , where NSP ( Gexp )
and NSP ( GΔv ) denote the total number of simple paths from the input ( s ) to the output ( s ) in the original expanded network Gexp and the damaged network GΔv upon disruption of node v , respectively ., This essentiality measure takes values in the interval [ 0 , 1 ] , with 1 indicating a node whose loss causes the disruption of all paths between the input and output node ( s ) ., In this paper , we also make use of this structural method to identify essential components of the T-LGL leukemia signaling network ., We then relate the importance value of nodes to the effects of their knockout ( sustained OFF state ) in the dynamic model and the importance value of complementary nodes to the effects of their original nodes’ constitutive activation ( sustained ON state ) in the dynamic model ., The T-LGL signaling network reconstructed by Zhang et al 18 contains 60 nodes and 142 regulatory edges ., Zhang et al used a two-step process: they first synthesized a network containing 128 nodes and 287 edges by extensive literature search , then simplified it with the software NET-SYNTHESIS 42 , which constructs the sparsest network that maintains all of the causal ( upstream-downstream ) effects incorporated in a redundant starting network ., In this study , we work with the 60-node T-LGL signaling network reported in 18 , which is redrawn in Figure 1 ., The Boolean rules for the components of the network were constructed in 18 by synthesizing experimental observations and for convenience are given in Table S1 as well ., The descriptions of the node names and abbreviations are provided in Table S2 ., To reduce the computational burden associated with the large state space ( more than 10^18 states for 60 nodes ) , we simplified the T-LGL network using the reduction method proposed in 22 ( see Materials and Methods ) ., We fixed the six source nodes in the states given in 18 , i . e .
Stimuli , IL15 , and PDGF were fixed at ON and Stimuli2 , CD45 , and TAX were fixed at OFF ., We used the Boolean rules constructed in 18 , with one notable difference ., The Boolean rules for all the nodes in 18 , except Apoptosis , contain the expression “AND NOT Apoptosis” , meaning that if Apoptosis is ON , the cell dies and correspondingly all other nodes are turned OFF ., To focus on the trajectory leading to the initial turning on of the Apoptosis node , we removed the “AND NOT Apoptosis” from all the logical rules ., This allows us to determine the stationary states of the nodes in a live cell ., We determined which nodes states stabilize using the first step of our simplification method , i . e . logical steady state analysis ( see Materials and Methods ) ., Our analysis revealed that 36 nodes of the network stabilize in either an ON or OFF state ., In particular , Proliferation and Cytoskeleton signaling , two output nodes of the network , stabilize in the OFF and ON state , respectively ., Low proliferation in leukemic LGL has been observed experimentally 43 , which supports our finding of a long-term OFF state for this output node ., The ON state of Cytoskeleton signaling may not be biologically relevant as this node represents the ability of T cells to attach and move which is expected to be reduced in leukemic T-LGL compared to normal T cells ., The nodes whose stabilized states cannot be readily obtained by inspection of their Boolean rules form the sub-network represented in Figure 2A ., The Boolean rules of these nodes are listed in Table S3 wherein we put back the “AND NOT Apoptosis” expression into the rules ., Next , we identified the attractors ( long-term behavior ) of the sub-network represented in Figure 2A ( see Materials and Methods ) ., We found that upon activation of Apoptosis all other nodes stabilize at OFF , forming the normal fixed point of the system , which represents the normal behavior of programmed cell death ., When Apoptosis 
is stabilized at OFF , the two nodes in the top sub-graph oscillate while all the nodes in the bottom sub-graph are stabilized at either ON or OFF ., As shown in Figure 3 , the state space of the two oscillatory nodes , TCR and CTLA4 , forms a complex attractor in which the average fraction of ON states for either node is 0 . 5 ., Given that these two nodes have no effect on any other node under the conditions studied here ( i . e . stable states of the source nodes ) , their behavior can be separated from the rest of the network ., The bottom sub-graph exhibits the normal fixed point , as well as two T-LGL ( disease ) fixed points in which Apoptosis is OFF ., The only difference between the two T-LGL fixed points is that the node P2 is ON in one fixed point and OFF in the other , which was expected due to the presence of a self-loop on P2 in Figure 2A ., P2 is a virtual node introduced to mediate the inhibition of interferon-γ translation in the case of sustained activity of the interferon-γ protein ( IFNG in Figure 2A ) ., The node IFNG is also inhibited by the node SMAD which stabilizes in the ON state in both T-LGL fixed points ., Therefore IFNG stabilizes at OFF , irrespective of the state of P2 , as supported by experimental evidence 44 ., Thus the biological difference between the two fixed points is essentially a memory effect , i . e . the ON state of P2 indicates that IFNG was transiently ON before stabilizing in the OFF state ., In the two T-LGL fixed points for the bottom sub-graph of Figure 2A , the nodes sFas , GPCR , S1P , SMAD , MCL1 , FLIP , and IAP are ON and the other nodes are OFF ., We found by numerical simulations using the GA method ( see Materials and Methods ) that out of 65 , 536 total states in the state transition graph , 53% are in the exclusive basin of attraction of the normal fixed point , 0 . 24% are in the exclusive basin of attraction of the T-LGL fixed point wherein P2 is ON and 0 . 
03% are in the exclusive basin of attraction of the T-LGL fixed point wherein P2 is OFF ., Interestingly , there is a significant overlap among the basins of attraction of all the three fixed points ., The large basin of attraction of the normal fixed point is partly due to the fact that all the states having Apoptosis in the ON state ( that is , half of the total number of states ) belong to the exclusive basin of the normal fixed point ., These states are not biologically relevant initial conditions but they represent potential intermediary states toward programmed cell death and as such they need to be included in the state transition graph ., Since the state transition graph of the bottom sub-graph given in Figure 2A is too large to represent and to further analyze ( e . g . to obtain the probabilities of reaching each of the fixed points ) , we applied the second step of the network reduction method proposed in 22 ., This step preserves the fixed points of the system ( see Materials and Methods ) , and since the only attractors of this sub-graph are fixed points , the state space of the reduced network is expected to reflect the properties of the full state space ., Correspondingly , the nodes having in-degree and out-degree of one ( or less ) in the sub-graph on Figure 2A , such as sFas , MCL1 , IAP , GPCR , SMAD , and CREB , can be safely removed without losing any significant information as such nodes at most introduce a delay in the signal propagation ., In addition , we note that although the node P2 has a self-loop and generates a new T-LGL fixed point as described before , it can also be removed from the network since the two fixed points differ only in the state of P2 and thus correspond to biologically equivalent disease states ., We revisit this node when enumerating the attractors of the original network ., In the resulting simplified network , the nodes BID , Caspase , and IFNG would also have in-degree and out-degree of one ( or less ) and thus 
can be safely removed as well ., This reduction procedure results in a simple sub-network represented in Figure 2B with the Boolean rules given in Table 1 ., Our attractor analysis revealed that this sub-network has two fixed points , namely 000001 and 110000 ( the digits from left to right represent the state of the nodes in the order as listed from top to bottom in Table 1 ) ., The first fixed point represents the normal state , that is , the apoptosis of CTL cells ., Note that the OFF state of other nodes in this fixed point was expected because of the presence of “AND NOT Apoptosis” in all the Boolean rules ., The second fixed point is the T-LGL ( disease ) one as Apoptosis is stabilized in the OFF state ., We note that the sub-network depicted in Figure 2B contains a backbone of activations from Fas to Apoptosis and two nodes ( S1P and FLIP ) which both have a mutual inhibitory relationship with the backbone ., If activation reaches Apoptosis , the system converges to the normal fixed point ., In the T-LGL fixed point , on the other hand , the backbone is inactive while S1P and FLIP are active ., We found by simulations that for the simplified network of Figure 2B , 56% of the states of the state transition graph ( represented in Figure 4 ) are in the exclusive basin of attraction of the normal fixed point while 5% of the states form the exclusive basin of attraction of the T-LGL fixed point ., Again , the half of state space that has the ON state of Apoptosis belongs to the exclusive basin of attraction of the normal fixed point ., Notably , there is a significant overlap between the basins of attraction of the two fixed points , which is illustrated by a gray color in Figure, 4 . The probabilities of reaching each of the two fixed points starting from these gray-colored states , found by analysis of the corresponding Markov chain ( see Materials and Methods ) , are given in Figure, 5 . 
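The fixed-point computation described above can be sketched in a few lines. The Boolean rules below are a hypothetical three-node stand-in for the reduced sub-network of Figure 2B (a Fas-to-Apoptosis backbone, S1P mutually inhibiting it, and the "AND NOT Apoptosis" term in every non-Apoptosis rule); they are not the published Table 1 logic. Exhaustive enumeration of the synchronous state transition graph then recovers one normal fixed point (Apoptosis ON) and one disease fixed point (S1P ON, Apoptosis OFF), mirroring the two fixed points found for the real sub-network:

```python
from itertools import product

# Hypothetical 3-node stand-in for the reduced sub-network (illustration only):
# a Fas -> Apoptosis backbone, S1P mutually inhibiting the backbone, and the
# "AND NOT Apoptosis" term carried by every non-Apoptosis rule.
nodes = ["S1P", "Fas", "Apoptosis"]
rules = {
    "S1P":       lambda s: (not s["Fas"]) and not s["Apoptosis"],
    "Fas":       lambda s: (not s["S1P"]) and not s["Apoptosis"],
    "Apoptosis": lambda s: s["Fas"] or s["Apoptosis"],  # self-sustaining once ON
}

def step(state):
    """Synchronously update every node given a tuple of Boolean states."""
    named = dict(zip(nodes, state))
    return tuple(rules[n](named) for n in nodes)

# Enumerate all 2^N states; fixed points are the states mapped onto themselves.
fixed_points = [s for s in product((False, True), repeat=len(nodes)) if step(s) == s]
for fp in fixed_points:
    print(dict(zip(nodes, fp)))  # one normal and one disease fixed point
```

In this toy motif the overlap of basins seen in Figure 4 also appears: states in which the backbone and its inhibitor are both transiently active can reach either fixed point depending on update order.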
As this figure represents , for the majority of cases the probability of reaching the normal fixed point is higher than that of the T-LGL fixed point ., The three states whose probabilities to reach the T-LGL fixed point are greater than or equal to 0 . 7 are one step away either from the T-LGL fixed point or from the states in its exclusive basin of attraction ., In two of them , the backbone of the network in Figure 2B is inactive , and in the third one the backbone is partially inactive and most likely will remain inactive due to the ON state of S1P ( one of the two nodes having mutual inhibition with the backbone ) ., Based on the sub-network analysis and considering the states of the nodes that stabilized at the beginning based on the logical steady state analysis , we conclude that the whole T-LGL network has three attractors , namely the normal fixed point wherein Apoptosis is ON and all other nodes are OFF , representing the normal physiological state , and two T-LGL attractors in which all nodes except two , i . e . 
TCR and CTLA4 , are in a steady state , representing the disease state ., These T-LGL attractors are given in the second column of Table 2 , which presents the predicted T-LGL states of 54 components of the network ( all but the six source nodes whose state is indicated at the beginning of the Results section ) ., We note that the two T-LGL attractors essentially represent the same disease state since they only differ in the state of the virtual node P2 ., Moreover , this disease state can be considered as a fixed point since only two nodes oscillate in the T-LGL attractors ., For this reason we will refer to this state as the T-LGL fixed point ., It is expected that the basins of attraction of the fixed points have similar features as those of the simplified networks ., Experimental evidence exists for the deregulated states of 36 ( 67% ) components out of the 54 predicted T-LGL states as summarized in the third column of Table 2 ., For example , the stable ON state of MEK , ERK , JAK , and STAT3 indicates that the MAPK and JAK-STAT pathways are activated ., The OFF state of BID is corroborated by recent evidence that it is down-regulated both in natural killer ( NK ) and in T cell LGL leukemia 45 ., In addition , the node RAS was found to be constitutively active in NK-LGL leukemia 41 , which indirectly supports our result on the predicted ON state of this node ., For three other components , namely , GPCR , DISC , and IFNG , which were classified as being deregulated without clear evidence of either up-regulation or down-regulation in 18 , we found that they eventually stabilize at ON , OFF , and OFF , respectively ., The OFF state of IFNG and DISC is indeed supported by experimental evidence 44 , 46 ., In the second column of Table 2 , we indicated with an asterisk the stabilized state of 17 components that were experimentally undocumented before and thus are predictions of our steady state analysis ( P2 was not included as it is a virtual node ) ., We note 
that ten of these cases were also predicted in 18 by simulations ., The predicted T-LGL states of these 17 components can guide targeted experimental follow-up studies ., As an example of this approach , we tested the predicted over-activity of the node SMAD ( see Materials and Methods ) ., As described in 18 the SMAD node represents a merger of SMAD family members Smad 2 , 3 , and, 4 . Smad 2 and 3 are receptor-regulated signaling proteins which are phosphorylated and activated by type I receptor kinases while Smad4 is an unregulated co-mediator 47 ., Phosphorylated Smad2 and/or Smad3 form heterotrimeric complexes with Smad4 and these complexes translocate to the nucleus and regulate gene expression ., Thus an ON state of SMAD in the model is a representation of the predominance of phosphorylated Smad2 and/or phosphorylated Smad3 in T-LGL cells ., In relative terms as compared to normal ( resting or activated ) T cells , the predicted ON state implies a higher level of phosphorylated Smad2/3 in T-LGL cells as compared to normal T cells ., Indeed , as shown in Figure 6 , T cells of T-LGL patients tend to have high levels of phosphorylated Smad2/3 , while normal activated T cells have essentially no phosphorylated Smad2/3 ., Thus our experiments validate the theoretical prediction ., A question of immense biological importance is which manipulations of the T-LGL network can result in consistent activation-induced cell death and the elimination of the dysregulated ( diseased ) behavior ., We can rephrase and specify this question as which node perturbations ( knockouts or constitutive activations ) lead to a system that has only the normal fixed point ., These perturbations can serve as candidates for potential therapeutic interventions ., To this end , we performed node perturbation analysis using both structural and dynamic methods ., In this paper we presented a comprehensive analysis of the T-LGL survival signaling network to unravel the unknown facets of this 
disease ., By using a reduction technique , we first identified the fixed points of the system , namely the normal and T-LGL fixed points , which represent the healthy and disease states , respectively ., This analysis identified the T-LGL states of 54 components of the network , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions ., These new predictions include RAS , PLCG1 , IAP , TNF , NFAT , GRB2 , FYN , SMAD , P27 , and Cytoskeleton signaling , which are predicted to stabilize at ON in T-LGL leukemia and GAP , SOCS , TRADD , ZAP70 , and CREB which are predicted to stabilize at OFF ., In addition , we found that the node P2 can stabilize in either the ON or OFF state , whereas two nodes , TCR and CTLA4 , oscillate ., We have experimentally validated the prediction that the node SMAD is over-active in leukemic T-LGL by demonstrating the predominant phosphorylation of the SMAD family members Smad2 and Smad3 ., The predicted T-LGL states of other nodes provide valuable guidance for targeted experimental follow-up studies of T-LGL leukemia ., Among the predicted states , the ON state of Cytoskeleton signaling may not be biologically relevant as this node represents the ability of T cells to attach and move which is expected to be reduced in leukemic T-LGL compared to normal T cells ., This discrepancy may be due to the fact that the network contains insufficient det | Introduction, Materials and Methods, Results, Discussion | The blood cancer T cell large granular lymphocyte ( T-LGL ) leukemia is a chronic disease characterized by a clonal proliferation of cytotoxic T cells ., As no curative therapy is yet known for this disease , identification of potential therapeutic targets is of immense importance ., In this paper , we perform a comprehensive dynamical and structural analysis of a network model of this disease ., By employing a network reduction technique , we identify the stationary states ( fixed 
points ) of the system , representing normal and diseased ( T-LGL ) behavior , and analyze their precursor states ( basins of attraction ) using an asynchronous Boolean dynamic framework ., This analysis identifies the T-LGL states of 54 components of the network , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions ., We further test and validate one of these newly identified states experimentally ., Specifically , we verify the prediction that the node SMAD is over-active in leukemic T-LGL by demonstrating the predominant phosphorylation of the SMAD family members Smad2 and Smad3 ., Our systematic perturbation analysis using dynamical and structural methods leads to the identification of 19 potential therapeutic targets , 68% of which are corroborated by experimental evidence ., The novel therapeutic targets provide valuable guidance for wet-bench experiments ., In addition , we successfully identify two new candidates for engineering long-lived T cells necessary for the delivery of virus and cancer vaccines ., Overall , this study provides a birds-eye-view of the avenues available for identification of therapeutic targets for similar diseases through perturbation of the underlying signal transduction network . 
| T-LGL leukemia is a blood cancer characterized by an abnormal increase in the abundance of a type of white blood cell called T cell ., Since there is no known curative therapy for this disease , identification of potential therapeutic targets is of utmost importance ., Experimental identification of manipulations capable of reversing the disease condition is usually a long , arduous process ., Mathematical modeling can aid this process by identifying potential therapeutic interventions ., In this work , we carry out a systematic analysis of a network model of T cell survival in T-LGL leukemia to get a deeper insight into the unknown facets of the disease ., We identify the T-LGL status of 54 components of the system , out of which 36 ( 67% ) are corroborated by previous experimental evidence and the rest are novel predictions , one of which we validate by follow-up experiments ., By deciphering the structure and dynamics of the underlying network , we identify component perturbations that lead to programmed cell death , thereby suggesting several novel candidate therapeutic targets for future experiments . | biology, computational biology, signaling networks | null |
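The node perturbation analysis referred to above, searching for knockouts or constitutive activations after which apoptosis is the only possible outcome, can be sketched as follows. The rules are the same illustrative stand-in motif (a backbone activating Apoptosis plus a mutual inhibitor), not the published T-LGL network, and the resulting candidate list is purely for demonstration:

```python
from itertools import product

# Illustrative stand-in rules (not the published T-LGL network): a backbone
# node Fas activating Apoptosis, with S1P mutually inhibiting the backbone.
nodes = ["S1P", "Fas", "Apoptosis"]
base_rules = {
    "S1P":       lambda s: (not s["Fas"]) and not s["Apoptosis"],
    "Fas":       lambda s: (not s["S1P"]) and not s["Apoptosis"],
    "Apoptosis": lambda s: s["Fas"] or s["Apoptosis"],
}

def fixed_points(rules):
    """All states of the synchronous state transition graph mapped onto themselves."""
    def step(state):
        named = dict(zip(nodes, state))
        return tuple(rules[n](named) for n in nodes)
    return {s for s in product((False, True), repeat=len(nodes)) if step(s) == s}

def perturb(node, value):
    """Clamp one node: knockout (value=False) or constitutive activation (value=True)."""
    rules = dict(base_rules)
    rules[node] = lambda s, v=value: v
    return rules

# A perturbation is a candidate intervention if every remaining fixed point
# has Apoptosis ON, i.e. the disease fixed point has been eliminated.
apo = nodes.index("Apoptosis")
candidates = [(node, value)
              for node in ("S1P", "Fas") for value in (False, True)
              if all(fp[apo] for fp in fixed_points(perturb(node, value)))]
print(candidates)  # in this toy motif, clamping the inhibitor OFF works
```

The same clamp-and-recompute loop over all nodes and both clamp values is how a systematic dynamic perturbation screen is organized; structural methods prune the candidate set before the (exponential) enumeration.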
1,739 | journal.pntd.0005239 | 2,017 | Risk mapping of clonorchiasis in the People’s Republic of China: A systematic review and Bayesian geostatistical analysis | Clonorchiasis is an important food-borne trematodiases in Asia , caused by chronic infection with Clonorchis sinensis 1 , 2 ., Symptoms of clonorchiasis are related to worm burden; ranging from no or mild non-specific symptoms to liver and biliary disorders 3 , 4 ., C . sinensis is classified as a carcinogen 5 , as infection increases the risk of cholangiocarcinoma 6 ., Conservative estimates suggest that around 15 million people were infected with C . sinensis in 2004 , over 85% of whom were concentrated in the People’s Republic of China ( P . R . China ) 6–8 ., It has also been estimated that , in 2005 , clonorchiasis caused a disease burden of 275 , 000 disability-adjusted life years ( DALYs ) , though light and moderate infections were excluded from the calculation 9 ., Therefore , two national surveys have been conducted for clonorchiasis in P . R . China; the first national survey was done in 1988–1992 and the second national survey in 2001–2004 ., Of note , the two surveys used an insensitive diagnostic approach with only one stool sample subjected to a single Kato-Katz thick smear ., The first survey covered 30 provinces/autonomous regions/municipalities ( P/A/M ) with around 1 . 5 million people screened , and found an overall prevalence of 0 . 37% 10 ., Data from the second survey , which took place in 31 P/A/M and screened around 350 , 000 people , showed an overall prevalence of 0 . 58% 7 ., Another dataset in the second national survey is a survey pertaining to clonorchiasis conducted in 27 endemic P/A/M using triplicate Kato-Katz thick smears from single stool samples ., The overall prevalence was 2 . 4% , corresponding to 12 . 
5 million infected people 8 ., Two main endemic settings were identified; the provinces of Guangdong and Guangxi in the south and the provinces of Heilongjiang and Jilin in the north-east 1 , 2 , 6 ., In the latter setting , the prevalence was especially high in Korean ( minority ) communities ., In general , males showed higher infection prevalence than females and the prevalence increased with age 6 , 8 ., The life cycle of C . sinensis involves specific snails as first intermediate hosts , freshwater fish or shrimp as the second intermediate host , and humans or other piscivorous mammals as definitive hosts , who become infected through consumption of raw or insufficiently cooked infected fish 1 , 2 , 11 , 12 ., Behavioral , environmental , and socioeconomic factors that influence the transmission of C . sinensis or the distribution of the intermediate hosts affect the endemicity of clonorchiasis ., For example , temperature , rainfall , land cover/usage , and climate change that affect the activities and survival of intermediate hosts , are considered as potential risk factors 13 , 14 ., Socioeconomic factors and consumption of raw freshwater fish are particularly important in understanding the epidemiology of clonorchiasis 15 ., Consumption of raw fish dishes is a deeply rooted cultural practice in some areas of P . R . 
China , while in other areas it has become popular in recent years , partially explained by perceptions that these dishes are delicious or highly nutritious 1 , 2 , 16 , 17 ., Treatment with praziquantel is one of the most important measures for the management of clonorchiasis , provided to infected individuals or entire at-risk groups through preventive chemotherapy 18 , 19 ., Furthermore , information , education , and communication ( IEC ) , combined with preventive chemotherapy , is suggested for maintaining control sustainability 20 ., Elimination of raw or insufficiently cooked fish or shrimp is an effective way for prevention of infection , but this strategy is difficult to implement due to deeply rooted traditions and perceptions 1 ., Environmental modification is an additional way of controlling clonorchiasis , such as by removing unimproved lavatories built adjacent to fish ponds in endemic areas , thus preventing water contamination by feces 1 , 21 ., Maps displaying where a specific disease occurs are useful to guide prevention and control interventions ., To our knowledge , only a province-level prevalence map of C . sinensis infection is available for P . R . China , while high-resolution , model-based risk estimates based on up-to-date survey data are currently lacking 1 ., Bayesian geostatistical modeling is a rigorous inferential approach to produce risk maps ., The utility of this method has been demonstrated for a host of neglected tropical diseases , such as leishmaniasis , lymphatic filariasis , schistosomiasis , soil-transmitted helminthiasis , and trachoma 22–28 ., The approach relies on the quantification of the association between disease risk at observed locations and potential risk factors ( e . g .
, environmental and socioeconomic factors ) , thus predicting infection risk in areas without observed data 28 ., Random effects are usually introduced to the regression equation to capture the spatial correlation between locations via a spatially structured Gaussian process 26 ., Here , we compiled available survey data on clonorchiasis in P . R . China , identified important climatic , environmental , and socioeconomic determinants , and developed Bayesian geostatistical models to estimate the risk of C . sinensis infection at high spatial resolution throughout the country ., This work is based on clonorchiasis survey data extracted from the peer-reviewed literature and national surveys in P . R . China ., All data were aggregated and do not contain any information at individual or household levels ., Hence , there are no specific ethical issues that warranted attention ., A systematic review was undertaken in PubMed , ISI Web of Science , China National Knowledge Internet ( CNKI ) , and Wanfang Data from January 1 , 2000 until January 10 , 2016 to identify studies reporting community , village , town , and county-level prevalence data of clonorchiasis in P . R . China ., The search terms were “clonorchi*” ( OR “liver fluke*” ) AND “China” for Pubmed and ISI Web of Science , and “huazhigaoxichong” ( OR “ganxichong” ) for CNKI and Wanfang ., Government reports and other grey literature ( e . g . , MSc and PhD theses , working reports from research groups ) were also considered ., There were no restrictions on language or study design ., County-level data on clonorchiasis collected in 27 endemic P/A/M in the second national survey were provided by the National Institute of Parasitic Diseases , Chinese Center for Disease Control and Prevention ( NIPD , China CDC; Shanghai , P . R . 
China ) ., Titles and abstracts of articles were screened to identify potentially relevant publications ., Full text articles were obtained from seemingly relevant pieces that were screened for C . sinensis infection prevalence data ., Data were excluded if they stemmed from school-based surveys , hospital-based surveys , case-control studies , clinical trials , drug efficacy studies , or intervention studies ( except for baseline or control group data ) ., Studies on clearly defined populations ( e . g . , travellers , military personnel , expatriates , nomads , or displaced or migrating populations ) that were not representative of the general population were also excluded ., We further excluded data based on direct smear or serum diagnostics due to the known low sensitivity or the inability to differentiate between past and active infection , respectively ., All included data were georeferenced and entered into the open-access Global Neglected Tropical Diseases ( GNTDs ) database 29 ., Environmental , socioeconomic , and demographic data were obtained from different accessible data sources ( Table 1 ) ., The data were extracted at the survey locations and at the centroids of a prediction grid with grid cells of 5×5 km spatial resolution ., Land cover data were re-grouped to the following five categories:, ( i ) forests ,, ( ii ) scrublands and grass ,, ( iii ) croplands ,, ( iv ) urban , and, ( v ) wet areas ., They were summarized at each location ( of the survey or grid cell ) by the most frequent category over the period 2001–2004 for each pixel of the prediction grid ., Land surface temperature ( LST ) and normalized difference vegetation index ( NDVI ) were averaged annually ., We used human influence index ( HII ) , urban extents , and gross domestic product ( GDP ) per capita as socioeconomic proxies ., The latter was obtained from the P . R . 
China yearbook full-text database at county-level for the year 2008 and georeferenced for the purpose of our study ., Details about data processing are provided in Lai et al . 26 ., We georeferenced surveys reporting aggregated data at county level by the county centroid and linked them to the average values of our covariates within the specific county ., The mean size of the corresponding counties was around 2 , 000 km2 ., We grouped survey years into two categories ( before 2005 and from 2005 onwards ) ., We selected 2005 as the cutoff year because after the second national survey on important parasitic diseases in 2001–2004 , the Chinese government set specific disease control targets and launched a series of control strategies 7 , 30 ., We standardized continuous variables to mean zero and standard deviation one ( SD = 1 ) ., We calculated Pearson’s correlation between continuous variables and dropped one variable among pairs with correlation coefficient greater than 0 . 8 to avoid collinearity , which can lead to wrong parameter estimation 31 ., Researchers have suggested different correlation thresholds of collinearity ranging from 0 . 4 to 0 . 85 31 ., To test the sensitivity of our threshold , we also considered two other thresholds , i . e . , 0 . 5 and 0 . 
7 ., Three sets of variables were obtained corresponding to the three thresholds and were used separately in the variable selection procedure ., Furthermore , continuous variables were converted to two- or three-level categorical ones according to preliminary , exploratory , graphical analysis ., We carried out Bayesian variable selection to identify the most important predictors of the disease risk ., In particular , we assumed that the number of positive individuals Yi arises from a binomial distribution Yi∼Bn ( pi , ni ) , where ni and pi are the number of individuals examined and the probability of infection at location i ( i = 1 , 2 , … , L ) , respectively ., We modeled the covariates on the logit scale , that is logit ( pi ) =β0+∑k=1βk×Xi ( k ) , where βk is the regression coefficient of the kth covariate X ( k ) ., For a covariate in categorical form , βk is a vector of coefficients {βkl} , l = 1 , … , Mk , where Mk is the number of categories , otherwise it has a single element βk0 ., We followed a stochastic search variable selection approach 32 , and for each predictor X ( k ) we introduced a categorical indicator parameter Ik which takes values j , j = 0 , 1 , 2 with probabilities πj such that π0 + π1 + π2 = 1 ., Ik = 0 indicates exclusion of the predictor from the model , Ik = 1 indicates inclusion of X ( k ) in linear form and Ik = 2 suggests inclusion in categorical form ., We adopted a mixture of Normal prior distributions for the parameters βk0 , known as a spike and slab prior , proposing a non-informative prior βk0∼N ( 0 , σB2 ) with probability π1 in case X ( k ) is included in the model ( i . e . , Ik = 1 ) in linear form ( slab ) and an informative prior βk0∼N ( 0 , ϑ0σB2 ) with probability ( 1 − π1 ) , shrinking βk0 to zero ( spike ) if the linear form is excluded from the model ., ϑ0 is a constant , fixed to a small value i . e . , ϑ0 = 0 .
00025 forcing the variance to be close to zero ., In a formal way the above prior is written βk0∼δ1 ( Ik ) N ( 0 , σB2 ) + ( 1−δ1 ( Ik ) ) N ( 0 , ϑ0σB2 ) where δj ( Ik ) is the Dirac function taking the value 1 if Ik = j and zero otherwise ., Similarly , for the coefficients {βkl} , l = 1 , … , Mk corresponding to the categorical form of X ( k ) with Mk categories , we assume that βkl∼δ2 ( Ik ) N ( 0 , σBl2 ) + ( 1−δ2 ( Ik ) ) N ( 0 , ϑ0σBl2 ) ., For the inclusion/exclusion probabilities πj , we adopt a non-informative Dirichlet prior distribution , i . e . ( π0 , π1 , π2 ) T∼Dirichlet ( 3 , a ) , a = ( 1 , 1 , 1 ) T ., We also used non-informative inverse gamma prior distributions , IG ( 2 . 01 , 1 . 01 ) for the variance hyperparameters σB2 and σBl2 , l=1 , … , Mk ., We considered as important , those predictors with posterior inclusion probabilities of πj greater than 50% ., The above procedure fits all models generated by all combinations of our potential predictors and selects as important those predictors which are included in more than 50% of the models ., Bayesian geostatistical logistic regression models were fitted on C . 
sinensis survey data to obtain spatially explicit estimates of the infection risk ., The predictors selected from the variable selection procedure were included in the model ., The model extended the previous formulation by including location random effects on the logit scale , that is logit ( pi ) =β0+∑k=1βk×Xi ( k ) +εi , where the covariates X ( k ) are the predictors ( with functional forms ) that have been identified as important in the variable selection procedure ., We assumed that location-specific random effects ε = ( ε1 , … , εL ) T followed a multivariate normal prior distribution ε∼MVN ( 0 , Σ ) , with exponential correlation function Σij=σsp2exp ( −ρdij ) , where dij is the Euclidean distance between locations , and ρ is the parameter corresponding to the correlation decay ., We also considered non-informative normal prior distributions for the regression coefficient βkl , l=0 , 1 , … , Mk , that is βkl∼N ( 0 , 102 ) , an inverse gamma prior distribution for the spatial variance σsp2∼IG ( 2 . 01 , 1 . 01 ) , and a gamma prior for the correlation decay ρ∼G ( 0 . 01 , 0 . 01 ) ., We estimated the spatial range as the minimum distance with spatial correlation less than 0 . 1 , equal to −log ( 0 . 1 ) /ρ ., We formulated the model in a Bayesian framework and applied Markov chain Monte Carlo ( MCMC ) simulation to estimate the model parameters in WinBUGS version 1 . 4 ( Imperial College London and Medical Research Council; London , United Kingdom ) 33 ., We assessed convergence of sampling chains using the Brooks-Gelman-Rubin diagnostic 34 ., We fitted the model on a random subset of 80% of survey locations and used the remaining 20% for model validation ., Mean error and the percentage of observations covered by 95% Bayesian credible intervals ( BCIs ) of posterior predicted prevalence were calculated to assess the model performance ., Bayesian kriging was employed to predict the C . sinensis infection risk at the centroids of a prediction grid over P . R .
China with grid cells of 5 × 5 km spatial resolution 35 ., This spatial resolution is often used for estimation of disease risk across large regions as it is a good trade-off between disease control needs and computational burden ., Furthermore , predictions become unreliable when the grid cells have higher resolution than that of the predictors used in the model ., Population-adjusted prevalence ( median and 95% BCI ) for each province was calculated using samples of size 500 from the predictive posterior distribution estimated over the gridded surface ., These samples available for each grid cell were converted to samples from the predictive distribution of the population-adjusted prevalence for each province by multiplying them with the gridded population data , summing them over the grid cells within each province and dividing them by the province population ., The samples from the population-adjusted prevalence for each province were summarized by their median and 95% BCI ., Our disease data consist of point-referenced ( village- or town-level ) and areal ( county-level ) data ., Analyses ignoring the areal data may lose valuable information , especially in regions where point-referenced data are sparse ., Here , we assumed a uniform distribution of infection risk within each survey county and treated the areal data as point-referenced data by setting the survey locations as the centroids of the corresponding counties ., To assess the effect of this assumption on our estimates , we simulated data over a number of hypothetical survey locations within the counties and compared predictions based on approaches using the county aggregated data together with the data at individual georeferenced survey locations and using the data at individual georeferenced survey locations only ( excluding the county aggregated data ) ., The former approach gave substantially better disease risk prediction compared to the latter one ., The methodology for the simulation study and its
results are presented in Supplementary Information S1 Text and S1 Fig , respectively ., A data selection flow chart for the systematic review is presented in Fig, 1 . We identified 7 , 575 records through the literature search and obtained one additional report provided by NIPD , China CDC ( Shanghai , P . R . China ) ., According to our inclusion and exclusion criteria , we obtained 143 records for the final analysis , resulting in 691 surveys for C . sinensis at 633 unique locations published from 2000 onwards ., A summary of our survey data , stratified by province , is provided in Table, 2 . The geographic distribution of locations and observed C . sinensis prevalence are shown in Fig 2B ., We obtained data from all provinces except Inner Mongolia , Ningxia , Qinghai , and Tibet ., We collected more than 50 surveys in Guangdong , Guangxi , Hunan , and Jiangsu provinces ., Over 45% of surveys were conducted from 2005 onwards ., Around 90% of surveys used the Kato-Katz technique for diagnosis , while 0 . 14% of surveys had no information on the diagnostic technique employed ., The overall raw prevalence , calculated as the total number of people infected divided by the total number of people examined from all observed surveys , was 9 . 7% ., We considered a total of 12 variables ( i . e . , land cover , urban extents , precipitation , GDP per capita , HII , soil moisture , elevation , LST in the daytime , LST at night , NDVI , distance to the nearest open water bodies , and pH in water ) for Bayesian variable selection ., Elevation , NDVI , distance to the nearest open water bodies , and land cover were selected for the final geostatistical logistic regression model ., The variables that were selected via the Bayesian variable selection method are listed in Supporting Information S1 Table ., The list was not affected by the collinearity threshold ( i . e . , 0 . 5 , 0 . 7 , and 0 . 
8 ) we have considered ., The parameter estimates arising from the geostatistical model fit are shown in Table, 3 . The infection risk of C . sinensis was higher from 2005 onwards than that before 2005 ., Elevation had a negative effect on infection risk ., People living at distance between 2 . 5 and 7 . 0 km from the nearest open water bodies had a lower risk compared to those living in close proximity ( <2 . 5 km ) ., The risk of C . sinensis infection was lower in areas covered by forest , shrub , and grass compared to crop ., Furthermore , NDVI was positively correlated with the risk of C . sinensis infection ., Model validation indicated that the Bayesian geostatistical logistic regression models were able to correctly estimate ( within a 95% BCI ) 71 . 7% of locations for C . sinensis ., The mean error was -0 . 07% , suggesting that our model may slightly over-estimate the infection risk of C . sinensis ., Fig 2A shows the model-based predicted risk map of C . sinensis for P . R . China ., High prevalence ( ≥20% ) was estimated in some areas of southern and northeastern parts of Guangdong province , southwestern and northern parts of Guangxi province , southwestern part of Hunan province , the western part of bordering region of Heilongjiang and Jilin provinces , and the eastern part of Heilongjiang province ., Most regions of northwestern P . R . China and eastern coastal areas had zero to very low prevalence ( <0 . 01% ) ., The prediction uncertainty is shown in Fig 2C ., Table 4 reports the population-adjusted predicted prevalence and the number of individuals infected with C . sinensis in P . R . China , stratified by province , based on gridded population of 2010 ., The overall population-adjusted predicted prevalence of clonorchiasis was 1 . 18% ( 95% BCI: 1 . 10–1 . 25% ) in 2010 , corresponding to 14 . 8 million ( 95% BCI: 13 . 8–15 . 8 million ) infected individuals ., The three provinces with the highest infection risk were Heilongjiang ( 7 . 
21% , 95% BCI: 5 . 95–8 . 84% ) , Guangdong ( 6 . 96% , 95% BCI: 6 . 62–7 . 27% ) , and Guangxi ( 5 . 52% , 95% BCI: 4 . 97–6 . 06% ) ., Provinces with very low risk estimates ( median predicted prevalence < 0 . 01% ) were Gansu , Ningxia , Qinghai , Shanghai , Shanxi , Tibet , and Yunnan ., Guangdong , Heilongjiang , and Guangxi were the top three provinces with the highest number of people infected: 6 . 34 million ( 95% BCI: 6 . 03–6 . 62 million ) , 3 . 05 million ( 2 . 52–3 . 74 million ) , and 2 . 08 million ( 1 . 87–2 . 28 million ) , respectively ., To our knowledge , we present the first model-based , high-resolution estimates of C . sinensis infection risk in P . R . China ., Risk maps were produced through Bayesian geostatistical modeling of clonorchiasis survey data from 2000 onwards , readily adjusting for environmental/climatic predictors ., Our methodology is based on a rigorous approach for spatially explicit estimation of neglected tropical disease risk 27 ., Surveys pertaining to prevalence of C . sinensis in P . R . China were obtained through a systematic review in both Chinese and worldwide scientific databases to obtain published work from 2000 onwards ., Additional data were provided by the NIPD , China CDC ., We estimated that 14 . 8 million ( 95% BCI: 13 . 8–15 . 8 million; 1 . 18% ) people in P . R . China were infected with C . sinensis in 2010 , which is almost 20% higher than the previous estimates of 12 . 5 million people for the year 2004 , based on empirical analysis of data from a large survey of clonorchiasis conducted from 2002–2004 in 27 endemic P/A/M ., The mean error for the model validation was slightly smaller than zero , suggesting that our model might somewhat over-estimate the true prevalence of clonorchiasis ., The overall raw prevalence of the observed data was 9 . 
7%. This may over-estimate the overall prevalence, as many surveys were likely conducted in places with relatively high infection risk (preferential sampling). Our population-adjusted, model-based estimate was much lower (1.18%, 95% BCI: 1.10–1.25%) and should better reflect the actual situation, because it takes into account the distribution of the population and of the disease risk across the country. Indeed, geostatistical models derive their predictive strength from regions with large amounts of data, which allow more accurate estimation of the relation between the disease risk and its predictors; they are therefore powerful statistical tools for predicting the disease risk in areas with sparse data. Still, the estimates in regions with scarce data should be interpreted with caution. However, even though our data did not include surveys from four provinces (Inner Mongolia, Ningxia, Qinghai, and Tibet), our model obtained low or zero prevalence estimates, which are consistent with data summaries of the second national survey aggregated at provincial level for these four provinces 7. On the other hand, our model may overestimate the overall infection risk for Heilongjiang province, as the high-risk areas in the southeastern and southwestern parts of the province may influence the prediction in the northern part, where no observed data were available. We found an increase in the infection risk of C. sinensis for the period from 2005 onwards, which may be due to several reasons, including higher consumption of raw fish, lack of self-protection awareness regarding food hygiene, limited health education, and the rapid growth of aquaculture 13, 36. Consumption of raw freshwater fish is related to C. sinensis infection risk 15, 37; however, such information is unavailable for P. R.
China ., Elevation was one of the most important predictors in our model ., Different elevation levels correspond to different environmental/climatic conditions that can influence the distribution of intermediate host snails ., Our results show a positive association of NDVI and the prevalence of C . sinensis ., We found that distance to the nearest water bodies was significantly related to infection risk ., Traditionally , areas adjacent to water bodies were reported to have a higher prevalence of C . sinensis , however , due to improvement of trade and transportation channels , this situation may be changing , which may explain our result showing a non-linear relationship between distance to nearest water bodies and infection risk 2 , 13 ., Furthermore , our analysis supports earlier observations , suggesting an association between land cover type and infection risk 13 , 14 ., Interestingly , the risk of infection with other neglected tropical diseases , such as soil-transmitted helminthiasis and schistosomiasis , has declined in P . R . China over the past 10–15 years due to socioeconomic development and large-scale interventions 38 ., However , clonorchiasis , the major food-borne trematodiases in P . R . China , shows an increased risk in recent years , which indicates the Chinese government needs to pay more attention to this disease ., Several areas with high infection risk in P . R . 
China are indicated ( Supporting Information S2 Fig ) , where control strategies should be focused ., The recommended treatment guidelines for clonorchiasis of the WHO advocate praziquantel administration for all residents every year in high endemic areas ( prevalence ≥20% ) and for all residents every two years or individuals regularly eating raw fish every year in moderate endemic areas ( prevalence <20% ) 19 ., As re-infection or super-infection is common in heavy endemic areas , repeated preventive chemotherapy is necessary to interrupt transmission 18 ., On the other hand , to maintain control sustainability , a comprehensive control strategy must be implemented , including IEC , preventive chemotherapy , and improvement of sanitation 20 , 21 ., Through IEC , residents may conscientiously reduce or stop consumption of raw fish ., Furthermore , by removing unimproved latrines around fish ponds , the likelihood of fish becoming infected with cercariae declines 39 ., A successful example of comprehensive control strategies is Shangdong province , where clonorchiasis was endemic , but after rigorous implementation of comprehensive control programs for more than 10 years , the disease has been well controlled 40 ., The Chinese Ministry of Health set a goal to halve the prevalence of clonorchiasis ( compared to that observed in the second national survey in 2001–2004 ) in highly endemic areas by 2015 using integrated control measures 30 ., In practice , control measures are carried out in endemic villages or counties with available survey data ., However , large-scale control activities are lacking in most endemic provinces , as control plans are difficult to make when the epidemiology is only known at provincial level 41 ., Our high-resolution infection risk estimates provide important information for targeted control ., Our analysis is based on historical survey data compiled from studies that may differ in study design , diagnostic methods and distribution of age 
groups ., As more than 90% of surveys applied Kato-Katz as diagnostic method , we assumed similar diagnostic sensitivity across all surveys ., However , the sensitivity may vary in space as a function of infection intensity ., Most of the survey data are aggregated over age groups , thus we could not obtain age-specific risk estimates ., Moreover , bias might occur when age distribution in survey population differ across locations as different age group may have different infection risk ., In conclusion , we present the first model-based , high-resolution risk estimates of C . sinensis infection in P . R . China , and identified areas of high priority for control ., Our findings show an increased risk from 2005 onwards , suggesting that the government should put more efforts on control activities of clonorchiasis in P . R . China . | Introduction, Methods, Results, Discussion | Clonorchiasis , one of the most important food-borne trematodiases , affects more than 12 million people in the People’s Republic of China ( P . R . China ) ., Spatially explicit risk estimates of Clonorchis sinensis infection are needed in order to target control interventions ., Georeferenced survey data pertaining to infection prevalence of C . sinensis in P . R . China from 2000 onwards were obtained via a systematic review in PubMed , ISI Web of Science , Chinese National Knowledge Internet , and Wanfang Data from January 1 , 2000 until January 10 , 2016 , with no restriction of language or study design ., Additional disease data were provided by the National Institute of Parasitic Diseases , Chinese Center for Diseases Control and Prevention in Shanghai ., Environmental and socioeconomic proxies were extracted from remote-sensing and other data sources ., Bayesian variable selection was carried out to identify the most important predictors of C . 
sinensis risk ., Geostatistical models were applied to quantify the association between infection risk and the predictors of the disease , and to predict the risk of infection across P . R . China at high spatial resolution ( over a grid with grid cell size of 5×5 km ) ., We obtained clonorchiasis survey data at 633 unique locations in P . R . China ., We observed that the risk of C . sinensis infection increased over time , particularly from 2005 onwards ., We estimate that around 14 . 8 million ( 95% Bayesian credible interval 13 . 8–15 . 8 million ) people in P . R . China were infected with C . sinensis in 2010 ., Highly endemic areas ( ≥ 20% ) were concentrated in southern and northeastern parts of the country ., The provinces with the highest risk of infection and the largest number of infected people were Guangdong , Guangxi , and Heilongjiang ., Our results provide spatially relevant information for guiding clonorchiasis control interventions in P . R . China ., The trend toward higher risk of C . sinensis infection in the recent past urges the Chinese government to pay more attention to the public health importance of clonorchiasis and to target interventions to high-risk areas . | Clonorchiasis is an important food-borne trematodiases and it has been estimated that more than 12 million people in China are affected ., Precise information on where the disease occurs can help to identify priority areas for where control interventions should be implemented ., We collected data from recent surveys on clonorchiasis and applied Bayesian geostatistical models to produce model-based , high-resolution risk maps for clonorchiasis in China ., We found an increasing trend of infection risk from 2005 onwards ., We estimated that approximately 14 . 8 million people in China were infected with Clonorchis sinensis in 2010 ., Areas where the high prevalence of C . 
sinensis was predicted were concentrated in the provinces of Guangdong , Guangxi , and Heilongjiang ., Our results suggest that the Chinese government should pay more attention on the public health importance of clonorchiasis and that specific control efforts should be implemented in high-risk areas . | invertebrates, medicine and health sciences, helminths, china, tropical diseases, geographical locations, vertebrates, parasitic diseases, animals, simulation and modeling, trematodes, freshwater fish, clonorchis sinensis, foodborne trematodiases, probability distribution, mathematics, neglected tropical diseases, infectious disease control, research and analysis methods, infectious diseases, fishes, flatworms, clonorchiasis, research assessment, probability theory, people and places, helminth infections, asia, clonorchis, systematic reviews, biology and life sciences, physical sciences, organisms | null |
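The population-adjusted estimates reported above (Table 4) combine pixel-level predicted prevalence with gridded population counts. A minimal sketch of that aggregation step, using hypothetical pixel values rather than the study's data:

```python
def population_adjusted(prevalence, population):
    """Population-weighted mean prevalence and the implied number infected."""
    if len(prevalence) != len(population):
        raise ValueError("prevalence and population must align pixel-wise")
    infected = sum(p * n for p, n in zip(prevalence, population))
    total = sum(population)
    return infected / total, infected

# Three hypothetical 5x5 km pixels belonging to one province (illustrative only)
prev = [0.21, 0.05, 0.001]       # predicted prevalence per pixel
pop = [12_000, 55_000, 90_000]   # gridded population per pixel

rate, n_infected = population_adjusted(prev, pop)
```

For credible intervals, the same aggregation would be applied to each posterior sample of the predicted prevalence surface rather than to a single point estimate.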
2,433 | journal.pcbi.1005331 | 2,017 | Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks | In the life sciences , the abundance of experimental data is rapidly increasing due to the advent of novel measurement devices ., Genome and transcriptome sequencing , proteomics and metabolomics provide large datasets 1 at a steadily decreasing cost ., While these genome-scale datasets allow for a variety of novel insights 2 , 3 , a mechanistic understanding on the genome scale is limited by the scalability of currently available computational methods ., For small- and medium-scale biochemical reaction networks mechanistic modeling contributed greatly to the comprehension of biological systems 4 ., Ordinary differential equation ( ODE ) models are nowadays widely used and a variety of software tools are available for model development , simulation and statistical inference 5–7 ., Despite great advances during the last decade , mechanistic modeling of biological systems using ODEs is still limited to processes with a few dozens biochemical species and a few hundred parameters ., For larger models rigorous parameter inference is intractable ., Hence , new algorithms are required for massive and complex genomic datasets and the corresponding genome-scale models ., Mechanistic modeling of a genome-scale biochemical reaction network requires the formulation of a mathematical model and the inference of its parameters , e . g . 
reaction rates , from experimental data ., The construction of genome-scale models is mostly based on prior knowledge collected in databases such as KEGG 8 , REACTOME 9 and STRING 10 ., Based on these databases a series of semi-automatic methods have been developed for the assembly of the reaction graph 11–13 and the derivation of rate laws 14 , 15 ., As model construction is challenging and as the information available in databases is limited , in general , a collection of candidate models can be constructed to compensate flaws in individual models 16 ., For all these model candidates the parameters have to be estimated from experimental data , a challenging and usually ill-posed problem 17 ., To determine maximum likelihood ( ML ) and maximum a posteriori ( MAP ) estimates for model parameters , high-dimensional nonlinear and non-convex optimization problems have to be solved ., The non-convexity of the optimization problem poses challenges , such as local minima , which have to be addressed by the selection of optimization methods ., Commonly used global optimization methods are multi-start local optimization 18 , evolutionary and genetic algorithms 19 , particle swarm optimizers 20 , simulated annealing 21 and hybrid optimizers 22 , 23 ( see 18 , 24–26 for a comprehensive survey ) ., For ODE models with a few hundred parameters and state variables multi-start local optimization methods 18 and related hybrid methods 27 have proven to be successful ., These optimization methods use the gradient of the objective function to establish fast local convergence ., While the convergence of gradient based optimizers can be significantly improved by providing exact gradients ( see e . g . 
18 , 28 , 29 ) , the gradient calculation is often the computationally most demanding step ., The gradient of the objective function is usually approximated by finite differences ., As this method is neither numerically robust nor computationally efficient , several parameter estimation toolboxes employ forward sensitivity analysis ., This decreases the numerical error and computation time 18 ., However , the dimension of the forward sensitivity equations increases linearly with both the number of state variables and parameters , rendering its application for genome-scale models problematic ., In other research fields such as mathematics and engineering , adjoint sensitivity analysis is used for parameter estimation in ordinary and partial differential equation models ., Adjoint sensitivity analysis is known to be superior to the forward sensitivity analysis when the number of parameters is large 30 ., Adjoint sensitivity analysis has been used for inference of biochemical reaction networks 31–33 ., However , the methods were never picked up by the systems and computational biology community , supposedly due to the theoretical complexity of adjoint methods , a missing evaluation on a set of benchmark models , and an absence of an easy-to-use toolbox ., In this manuscript , we provide an intuitive description of adjoint sensitivity analysis for parameter estimation in genome-scale biochemical reaction networks ., We describe the end value problem for the adjoint state in the case of discrete-time measurement and provide an user-friendly implementation to compute it numerically ., The method is evaluated on seven medium- to large-scale models ., By using adjoint sensitivity analysis , the computation time for calculating the objective function gradient becomes effectively independent of the number of parameters with respect to which the gradient is evaluated ., Furthermore , for large-scale models adjoint sensitivity analysis can be multiple orders of magnitude 
faster than other gradient calculation methods used in systems biology. The reduction of the time for gradient evaluation is reflected in the computation time of the optimization. This renders parameter estimation for large-scale models feasible on standard computers, as we illustrate for a comprehensive kinetic model of ErbB signaling. We consider ODE models for biochemical reaction networks,

$$\dot{x} = f(x,\theta), \qquad x(t_0) = x_0(\theta), \qquad (1)$$

in which $x(t,\theta) \in \mathbb{R}^{n_x}$ is the concentration vector at time $t$ and $\theta \in \mathbb{R}^{n_\theta}$ denotes the parameter vector. Parameters are usually kinetic constants, such as binding affinities as well as synthesis, degradation and dimerization rates. The vector field $f: \mathbb{R}^{n_x} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_x}$ describes the temporal evolution of the concentrations of the biochemical species. The mapping $x_0: \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_x}$ provides the parameter-dependent initial condition at time $t_0$. As available experimental techniques usually do not provide measurements of the concentrations of all biochemical species, we consider the output map $h: \mathbb{R}^{n_x} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}^{n_y}$. This map models the measurement process, i.e. the dependence of the output (or observables) $y(t,\theta) \in \mathbb{R}^{n_y}$ at time point $t$ on the state variables and the parameters,

$$y(t,\theta) = h(x(t,\theta),\theta). \qquad (2)$$

The $i$-th observable $y_i$ can be the concentration of a particular biochemical species (e.g. $y_i = x_l$) as well as a function of several concentrations and parameters (e.g. $y_i = \theta_m (x_{l_1} + x_{l_2})$). We consider discrete-time, noise-corrupted measurements,

$$\bar{y}_{ij} = y_i(t_j,\theta) + \epsilon_{ij}, \qquad \epsilon_{ij} \sim \mathcal{N}(0, \sigma_{ij}^2), \qquad (3)$$

yielding the experimental data $\mathcal{D} = \{((\bar{y}_{ij})_{i=1}^{n_y}, t_j)\}_{j=1}^{N}$. The number of time points at which measurements have been collected is denoted by $N$.
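The model class of Eqs (1)-(3) can be made concrete with a minimal sketch. Here a hypothetical one-state decay model serves as illustration; the model, step size and noise level are assumptions for demonstration, not part of the paper:

```python
import math
import random

def rk4_step(f, x, h, theta):
    """One classical Runge-Kutta step for the scalar ODE x' = f(x, theta)."""
    k1 = f(x, theta)
    k2 = f(x + 0.5 * h * k1, theta)
    k3 = f(x + 0.5 * h * k2, theta)
    k4 = f(x + h * k3, theta)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, x0, theta, times, h=1e-3):
    """Integrate Eq (1) and return the state at the requested time points."""
    x, t, out = x0, times[0], []
    for t_next in times:
        while t < t_next - 1e-12:
            step = min(h, t_next - t)
            x = rk4_step(f, x, step, theta)
            t += step
        out.append(x)
    return out

# Illustrative one-state model: exponential decay x' = -theta*x, observable y = x
f = lambda x, theta: -theta * x          # vector field, Eq (1)
h_out = lambda x, theta: x               # output map, Eq (2)

theta_true, x0 = 0.7, 2.0
times = [0.0, 0.5, 1.0, 1.5, 2.0]        # measurement time points t_j
sigma = 0.05                             # known noise standard deviation
rng = random.Random(0)                   # seeded for reproducibility

traj = simulate(f, x0, theta_true, times)
data = [h_out(x, theta_true) + rng.gauss(0.0, sigma) for x in traj]  # Eq (3)
```

In the paper's setting the solver is a stiff integrator from SUNDIALS rather than fixed-step RK4; the structure of model, output map and noisy discrete-time data is the same.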
Remark: For simplicity of notation we assume throughout the manuscript that the noise variances, $\sigma_{ij}^2$, are known and that there are no missing values. However, the methods we will present in the following, as well as the respective implementations, also work when this is not the case. For details we refer to the S1 Supporting Information. We estimate the unknown parameter $\theta$ from the experimental data $\mathcal{D}$ using ML estimation. Parameters are estimated by minimizing the negative log-likelihood, an objective function indicating the difference between experiment and simulation. In the case of independent, normally distributed measurement noise with known variances the objective function is given by

$$J(\theta) = \frac{1}{2} \sum_{i=1}^{n_y} \sum_{j=1}^{N} \left( \frac{\bar{y}_{ij} - y_i(t_j,\theta)}{\sigma_{ij}} \right)^2, \qquad (4)$$

where $y_i(t_j,\theta)$ is the value of the output computed from Eqs (1) and (2) for parameter value $\theta$. The minimization,

$$\theta^* = \arg\min_{\theta \in \Theta} J(\theta), \qquad (5)$$

of this weighted least squares objective $J$ yields the ML estimate of the parameters. The optimization problem Eq (5) is in general nonlinear and non-convex. Thus, the objective function can possess multiple local minima and global optimization strategies need to be used. For ODE models multi-start local optimization has been shown to perform well 18. In multi-start local optimization, independent local optimization runs are initialized at randomly sampled initial points in parameter space. The individual local optimizations are run until the stopping criteria are met and the results are collected. The collected results are visualized by sorting them according to the final objective function value. This visualization reveals local optima and the size of their basins of attraction. For details we refer to the survey by Raue et al.
18. In this study, initial points are generated using Latin hypercube sampling and local optimization is performed using the interior-point and the trust-region-reflective algorithms implemented in the MATLAB function fmincon.m. Gradients are computed using finite differences, forward sensitivity analysis or adjoint sensitivity analysis. A naïve approximation to the gradient of the objective function with respect to $\theta_k$ is obtained by finite differences,

$$\frac{\partial J}{\partial \theta_k} \approx \frac{J(\theta + a e_k) - J(\theta - b e_k)}{a + b}, \qquad (6)$$

with $a, b \geq 0$ and the $k$-th unit vector $e_k$. In practice, forward differences ($a = \epsilon$, $b = 0$), backward differences ($a = 0$, $b = \epsilon$) and central differences ($a = \epsilon$, $b = \epsilon$) are widely used. For the computation of forward finite differences, this yields a procedure with three steps: (1) simulation of the model for the nominal parameter $\theta$; (2) simulation of the model for each perturbed parameter $\theta + \epsilon e_k$, $k = 1, \ldots, n_\theta$; and (3) evaluation of the difference quotients. In theory, forward and backward differences provide approximations of order $\epsilon$ while central differences provide more accurate approximations of order $\epsilon^2$, provided that $J$ is sufficiently smooth. In practice the optimal choice of $a$ and $b$ depends on the accuracy of the numerical integration 18. If the integration accuracy is high, an accurate approximation of the gradient can be achieved using $a, b \ll 1$. For lower integration accuracies, larger values of $a$ and $b$ usually yield better approximations. A good choice of $a$ and $b$ is typically not clear a priori (cf.
34 and the references therein). The computational complexity of evaluating gradients using finite differences is affine linear in the number of parameters. Forward and backward differences require in total $n_\theta + 1$ function evaluations. Central differences require in total $2 n_\theta$ function evaluations. As already a single simulation of a large-scale model is time-consuming, gradient calculation using finite differences can become limiting. State-of-the-art systems biology toolboxes, such as the MATLAB toolbox Data2Dynamics 7, use forward sensitivity analysis for gradient evaluation. The gradient of the objective function is

$$\frac{\partial J}{\partial \theta_k} = -\sum_{i=1}^{n_y} \sum_{j=1}^{N} \frac{\bar{y}_{ij} - y_i(t_j,\theta)}{\sigma_{ij}^2} \, s^y_{i,k}(t_j), \qquad (7)$$

with $s^y_{i,k}(t): [t_0, t_N] \mapsto \mathbb{R}$ denoting the sensitivity of output $y_i$ at time point $t$ with respect to parameter $\theta_k$. Governing equations for the sensitivities are obtained by differentiating Eqs (1) and (2) with respect to $\theta_k$ and reordering the derivatives. This yields

$$\dot{s}^x_k = \frac{\partial f}{\partial x} s^x_k + \frac{\partial f}{\partial \theta_k}, \qquad s^x_k(t_0) = \frac{\partial x_0}{\partial \theta_k}, \qquad s^y_{i,k} = \frac{\partial h_i}{\partial x} s^x_k + \frac{\partial h_i}{\partial \theta_k}, \qquad (8)$$

with $s^x_k(t): [t_0, t_N] \mapsto \mathbb{R}^{n_x}$ denoting the sensitivity of the state $x$ with respect to $\theta_k$. Note that here and in the following, the dependencies of $f$, $h$, $x_0$ and their (partial) derivatives on $t$, $x$ and $\theta$ are not stated explicitly but have to be assumed. For a more detailed presentation we refer to the S1 Supporting Information Section 1.
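A minimal numerical sketch of Eqs (7) and (8) for a hypothetical scalar model $x' = -\theta x$, $y = x$: here $\partial f/\partial x = -\theta$ and $\partial f/\partial \theta_k = -x$, so the sensitivity obeys $\dot{s} = -\theta s - x$ with $s(t_0) = 0$. Data, step sizes and the model itself are illustrative assumptions; the forward-sensitivity gradient is cross-checked against central finite differences, Eq (6):

```python
import math

def rk4(f, z, h, n):
    """n classical Runge-Kutta steps for the vector ODE z' = f(z)."""
    for _ in range(n):
        k1 = f(z)
        k2 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k1)])
        k3 = f([zi + 0.5 * h * ki for zi, ki in zip(z, k2)])
        k4 = f([zi + h * ki for zi, ki in zip(z, k3)])
        z = [zi + h / 6.0 * (a + 2 * b + 2 * c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
    return z

def outputs(theta, x0=2.0, n_obs=4, dt=0.5, h=1e-3):
    """States x(t_j) and sensitivities s(t_j) from the augmented system, Eq (8)."""
    aug = lambda z: [-theta * z[0], -theta * z[1] - z[0]]  # (x', s')
    z, xs, ss = [x0, 0.0], [], []
    for _ in range(n_obs):
        z = rk4(aug, z, h, int(round(dt / h)))
        xs.append(z[0])
        ss.append(z[1])
    return xs, ss

theta, sigma = 0.7, 0.05
data = [1.45, 1.00, 0.75, 0.50]  # hypothetical measurements at t = 0.5, ..., 2.0

def objective(th):
    xs, _ = outputs(th)
    return 0.5 * sum(((yb - x) / sigma) ** 2 for yb, x in zip(data, xs))  # Eq (4)

xs, ss = outputs(theta)
grad_fsa = -sum((yb - x) / sigma ** 2 * s for yb, x, s in zip(data, xs, ss))  # Eq (7)

eps = 1e-5                                                                    # Eq (6)
grad_fd = (objective(theta + eps) - objective(theta - eps)) / (2 * eps)
```

The augmented system doubles the state dimension here; for $n_\theta$ parameters it grows to $n_x(1 + n_\theta)$, which is the scaling issue discussed in the text.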
Forward sensitivity analysis consists of three steps: (1) computation of the state trajectory by solving Eq (1); (2) computation of the state and output sensitivities by solving Eq (8); and (3) evaluation of the objective function gradient using Eq (7). Steps 1 and 2 are often combined, which enables simultaneous error control and the reuse of the Jacobian 30. The simultaneous error control allows for the calculation of accurate and reliable gradients. The reuse of the Jacobian improves the computational efficiency. The number of state and output sensitivities increases linearly with the number of parameters. While this is unproblematic for small- and medium-sized models, solving forward sensitivity equations for systems with several thousand state variables bears technical challenges. Code compilation can take multiple hours and require more memory than what is available on standard machines. Furthermore, while forward sensitivity analysis is usually faster than finite differences, in practice the complexity still increases roughly linearly with the number of parameters. In the numerics community, adjoint sensitivity analysis is frequently used to compute the gradient of a functional with respect to the parameters if the functional depends on the solution of a differential equation 35. In contrast to forward sensitivity analysis, adjoint sensitivity analysis does not rely on the state sensitivities $s^x_k(t)$ but on the adjoint state $p(t)$. The calculation of the objective function gradient using adjoint sensitivity analysis consists of three steps: (1) computation of the state and output trajectories; (2) computation of the adjoint state trajectory; and (3) evaluation of the gradient via Eq (11). Steps 1 and 2, which are usually the computationally intensive steps, are independent of the parameter dimension. The complexity of Step 3 increases linearly with the number of parameters, yet the computation time required for this step is typically negligible. The calculation of the state and output trajectories (Step 1) is standard and does not require special methods. The non-trivial element in adjoint sensitivity analysis is the calculation of the adjoint state $p(t) \in \mathbb{R}^{n_x}$ (Step 2). For discrete-time measurements—the usual case in systems and computational biology—the
adjoint state is piecewise continuous in time and defined by a sequence of backward differential equations. For $t > t_N$, the adjoint state is zero, $p(t) = 0$. Starting from this end value, the trajectory of the adjoint state is calculated backwards in time, from the last measurement $t = t_N$ to the initial time $t = t_0$. At the time points at which measurements have been collected, $t_N, \ldots, t_1$, the adjoint state is reinitialised as

$$p(t_j) = \lim_{t \to t_j^+} p(t) + \sum_{i=1}^{n_y} \frac{\partial h_i}{\partial x}^T \frac{\bar{y}_{ij} - y_i(t_j)}{\sigma_{ij}^2}, \qquad (9)$$

which usually results in a discontinuity of $p(t)$ at $t_j$. Starting from the end value $p(t_j)$ as defined in Eq (9), the adjoint state evolves backwards in time until the next measurement point $t_{j-1}$ or the initial time $t_0$ is reached. This evolution is governed by the time-dependent linear ODE

$$\dot{p} = -\frac{\partial f}{\partial x}^T p. \qquad (10)$$

The repeated evaluation of Eqs (9) and (10) until $t = t_0$ yields the trajectory of the adjoint state. Given this trajectory, the gradient of the objective function with respect to the individual parameters is

$$\frac{\partial J}{\partial \theta_k} = -\int_{t_0}^{t_N} p^T \frac{\partial f}{\partial \theta_k} \, \mathrm{d}t - \sum_{i,j} \frac{\partial h_i}{\partial \theta_k} \frac{\bar{y}_{ij} - y_i(t_j)}{\sigma_{ij}^2} - p(t_0)^T \frac{\partial x_0}{\partial \theta_k}. \qquad (11)$$

Accordingly, the availability of the adjoint state simplifies the calculation of the objective function gradient to $n_\theta$ one-dimensional integration problems over short time intervals whose union is the total time interval $[t_0, t_N]$.

Algorithm 1: Gradient evaluation using adjoint sensitivity analysis
  % State and output
  Step 1: Compute state and output trajectories using Eqs (1) and (2).
  % Adjoint state
  Step 2.1: Set end value for adjoint state, ∀t > t_N: p(t) = 0.
  for j = N to 1 do
    Step 2.2: Compute end value for adjoint state according to the j-th measurement using Eq (9).
    Step 2.
3: Compute trajectory of adjoint state on the time interval $t \in (t_{j-1}, t_j]$ by solving Eq (10).
  end
  % Objective function gradient
  for k = 1 to n_θ do
    Step 3: Evaluate the sensitivity ∂J/∂θ_k using Eq (11).
  end

Pseudo-code for the calculation of the adjoint state and the objective function gradient is provided in Algorithm 1. We note that, in order to use standard ODE solvers, the end value problem Eq (10) can be transformed into an initial value problem by applying the time transformation $\tau = t_N - t$. The derivation of the adjoint sensitivities for discrete-time measurements is provided in the S1 Supporting Information Section 1. The key difference of adjoint compared to forward sensitivity analysis is that the derivatives of the state and output trajectories with respect to the parameters are not explicitly calculated. Instead, the sensitivity of the objective function is computed directly. In practice this results in a computation time for the gradient which is almost independent of the number of parameters. A visual summary of the different sensitivity analysis methods is provided in Fig 1.
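To make Algorithm 1 concrete, the following is a self-contained sketch for the hypothetical scalar model $x' = -\theta x$, $y = x$ (an illustrative assumption, not one of the paper's benchmark models). For this model $\partial f/\partial x = -\theta$, $\partial f/\partial \theta = -x$, $\partial h/\partial x = 1$ and $\partial h/\partial \theta = \partial x_0/\partial \theta = 0$, so Eq (11) reduces to $\partial J/\partial \theta = \int p\,x\,\mathrm{d}t$; the result is cross-checked against central finite differences:

```python
import math

theta, sigma, x0 = 0.7, 0.05, 2.0
t_obs = [0.5, 1.0, 1.5, 2.0]
data = [1.45, 1.00, 0.75, 0.50]   # hypothetical measurements
h = 1e-3                          # grid width; t_obs must lie on the grid
n = int(round(t_obs[-1] / h))
obs_idx = {int(round(t / h)): j for j, t in enumerate(t_obs)}

def forward(th):
    """Step 1: RK4 forward solve of x' = -th*x, storing x on the fine grid."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        k1 = -th * x
        k2 = -th * (x + 0.5 * h * k1)
        k3 = -th * (x + 0.5 * h * k2)
        k4 = -th * (x + h * k3)
        xs.append(x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return xs

def objective(th):
    xs = forward(th)
    return 0.5 * sum(((data[j] - xs[i]) / sigma) ** 2 for i, j in obs_idx.items())

# Step 2: backward solve of p' = -(df/dx)^T p = theta*p, with jumps at t_j (Eqs (9), (10))
# Step 3: dJ/dtheta = -int p*(df/dtheta) dt = +int p*x dt, since df/dtheta = -x (Eq (11))
xs = forward(theta)
p, grad_adj = 0.0, 0.0
decay = math.exp(-theta * h)      # exact backward step of p' = theta*p over one grid cell
for i in range(n, 0, -1):
    if i in obs_idx:
        p += (data[obs_idx[i]] - xs[i]) / sigma ** 2   # reinitialisation, Eq (9)
    p_prev = p * decay
    grad_adj += 0.5 * h * (p * xs[i] + p_prev * xs[i - 1])  # trapezoid on p*x
    p = p_prev

eps = 1e-5
grad_fd = (objective(theta + eps) - objective(theta - eps)) / (2 * eps)
```

Only one backward solve is needed regardless of how many parameters the gradient is taken with respect to; for additional parameters only the cheap quadratures of Step 3 would be repeated, which is the scaling advantage reported in the text.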
Besides the procedures also the computational complexity is indicated ., The implementation of adjoint sensitivity analysis is non-trivial and error-prone ., To render this method available to the systems and computational biology community , we implemented the Advanced Matlab Interface for CVODES and IDAS ( AMICI ) ., This toolbox allows for a simple symbolic definition of ODE models ( 1 ) and ( 2 ) as well as the automatic generation of native C code for efficient numerical simulation ., The compiled binaries can be executed from MATLAB for the numerical evaluation of the model and the objective function gradient ., Internally , the SUNDIALS solvers suite is employed 30 , which offers a broad spectrum of state-of-the-art numerical integration of differential equations ., In addition to the standard functionality of SUNDIALS , our implementation allows for parameter and state dependent discontinuities ., The toolbox and a detailed documentation can be downloaded from http://ICB-DCM . github . io/AMICI/ ., For the comparison of different gradient calculation methods , we consider a set of standard models from the Biomodels Database 37 and the BioPreDyn benchmark suite 27 ., From the biomodels database we considered models for the regulation of insulin signaling by oxidative stress ( BM1 ) 38 , the sea urchin endomesoderm network ( BM2 ) 39 , and the ErbB sigaling pathway ( BM3 ) 40 ., From BioPreDyn benchmark suite we considered models for central carbon metabolism in E . coli ( B2 ) 41 , enzymatic and transcriptional regulation of carbon metabolism in E . coli ( B3 ) 42 , metabolism of CHO cells ( B4 ) 43 , and signaling downstream of EGF and TNF ( B5 ) 44 ., Genome-wide kinetic metabolic models of S . cerevisiae and E . 
coli ( B1 ) 45 contained in the BioPreDyn benchmark suite and the Biomodels Database 15 , 45 were disregarded due to previously reported numerical problems 27 , 45 ., The considered models possess 18-500 state variable and 86-1801 parameters ., A comprehensive summary regarding the investigated models is provided in Table 1 ., To obtain realistic simulation times for adjoint sensitivities realistic experimental data is necessary ( see S1 Supporting Information Section 3 ) ., For the BioPreDyn models we used the data provided in the suite , for the ErbB signaling pathway we used the experimental data provided in the original publication and for the remaining models we generated synthetic data using the nominal parameter provided in the SBML definition ., In the following , we will compare the performance of forward and adjoint sensitivities for these models ., As the model of ErbB signaling has the largest number of state variables and is of high practical interest in the context of cancer research , we will analyze the scalability of finite differences and forward and adjoint sensitivity analysis for this model in greater detail ., Moreover , we will compare the computational efficiency of forward and adjoint sensitivity analysis for parameter estimation for the model of ErbB signaling ., The evaluation of the objective function gradient is the computationally demanding step in deterministic local optimization ., For this reason , we compared the computation time for finite differences , forward sensitivity analysis and adjoint sensitivity analysis and studied the scalability of these approaches at the nominal parameter θ0 which was provided in the SBML definitions of the investigated models ., For the comprehensive model of ErbB signaling we found that the computation times for finite differences and forward sensitivity analysis behave similarly ( Fig 2a ) ., As predicted by the theory , for both methods the computation time increased linearly with the number of 
parameters ., Still , forward sensitivities are computationally more efficient than finite differences , as reported in previous studies 18 ., Adjoint sensitivity analysis requires the solution to the adjoint problem , independent of the number of parameters ., For the considered model , solving the adjoint problem a single time takes roughly 2-3 times longer than solving the forward problem ., Accordingly , adjoint sensitivity analysis with respect to a small number of parameters is disadvantageous ., However , adjoint sensitivity analysis scales better than forward sensitivity analysis and finite differences ., Indeed , the computation time for adjoint sensitivity analysis is almost independent of the number of parameters ., While computing the sensitivity with respect to a single parameter takes on average 10 . 09 seconds , computing the sensitivity with respect to all 219 parameters takes merely 14 . 32 seconds ., We observe an average increase of 1 . 9 ⋅ 10−2 seconds per additional parameter for adjoint sensitivity analysis , which is significantly lower than the expected 3 . 24 seconds for forward sensitivity analysis and 4 .
72 seconds for finite differences ., If the sensitivities with respect to more than 4 parameters are required , adjoint sensitivity analysis outperforms both forward sensitivity analysis and finite differences ., For 219 parameters , adjoint sensitivity analysis is 48-times faster than forward sensitivities and 72-times faster than finite differences ., To ensure that the observed speedup is not unique to the model of ErbB signaling ( BM3 ) , we also evaluated the speedup of adjoint sensitivity analysis over forward sensitivity analysis on models B2-5 and BM1-2 ., The results are presented in Fig 2b and 2c ., We find that for all models but model B3 , gradient calculation using adjoint sensitivities is computationally more efficient than gradient calculation using forward sensitivities ( speedup > 1 ) ., For model B3 the backwards integration required a much higher number of integration steps ( 4 ⋅ 106 ) than the forward integration ( 6 ⋅ 103 ) , which results in a poor performance of the adjoint method ., One reason for this poor performance could be that , in contrast to other models , the right hand side of the differential equation of model B3 consists almost exclusively of non-linear , non-mass-action terms ., Excluding model B3 , we find a polynomial increase in the speedup with respect to the number of parameters nθ ( Fig 2b ) , as predicted by theory ., Moreover , we find that the product nθ ⋅ nx , which corresponds to the size of the system of forward sensitivity equations , is an even better predictor ( R2 = 0 .
83 ) ., This suggests that adjoint sensitivity analysis is not only beneficial for systems with a large number of parameters , but can also be beneficial for systems with a large number of state variables ., As we are not aware of any similar observations in the mathematics or engineering community , this could be due to the structure of biological reaction networks ., Our results suggest that adjoint sensitivity analysis is an excellent candidate for parameter estimation in large-scale models as it provides good scaling with respect to both the number of parameters and the number of state variables ., Efficient local optimization requires accurate and robust gradient evaluation 18 ., To assess the accuracy of the gradient computed using adjoint sensitivity analysis , we compared this gradient to the gradients computed via finite differences and forward sensitivity analysis ., Fig 3 visualizes the results for the model of ErbB signaling ( BM3 ) at the nominal parameter θ0 which was provided in the SBML definition ., The results are similar for other starting points ., The comparison of the gradients obtained using finite differences and adjoint sensitivity analysis revealed small discrepancies ( Fig 3a ) ., The median relative difference ( as defined in S1 Supporting Information Section 2 ) between finite differences and adjoint sensitivity analysis is 1 . 5 ⋅ 10−3 ., For parameters θk to which the objective function J was relatively insensitive , ∂J/∂θk < 10−2 , there are much higher discrepancies , up to a relative error of 2 .
9 ⋅ 103 ., Forward and adjoint sensitivity analysis yielded almost identical gradient elements over several orders of magnitude ( Fig 3b ) ., This was expected as both forward and adjoint sensitivity analysis exploit error-controlled numerical integration for the sensitivities ., To assess the numerical robustness of adjoint sensitivity analysis , we also compared the results obtained for high and low integration accuracies ( Fig 3c ) ., For both comparisons we found similar median and maximum relative errors , namely 2 . 6 ⋅ 10−6 and 9 . 3 ⋅ 10−4 ., This underlines the robustness of the sensitivity based methods and ensures that the differences observed in Fig 3a indeed originate from the inaccuracy of finite differences ., Our results demonstrate that adjoint sensitivity analysis provides objective function gradients which are as accurate and robust as those obtained using forward sensitivity analysis ., As adjoint sensitivity analysis provides accurate gradients for a significantly reduced computational cost , this can boost the performance of a variety of optimization methods ., Yet , in contrast to forward sensitivity analysis , adjoint sensitivities do not yield sensitivities of observables and it is thus not possible to approximate the Hessian of the objective function via the Fisher Information Matrix 46 ., This prohibits the use of possibly more efficient Newton-type algorithms which exploit second order information ., Therefore , adjoint sensitivities are limited to quasi-Newton type optimization algorithms , e . g .
the Broyden-Fletcher-Goldfarb-Shanno ( BFGS ) algorithm 47 , 48 , for which the Hessian is iteratively approximated from the gradient during optimization ., In principle , the exact calculation of the Hessian and Hessian-Vector products is possible via second order forward and adjoint sensitivity analysis 49 , 50 , which possess similar scaling properties as the first order methods ., However , both forward and adjoint approaches come at an additional cost and are thus not considered in this study ., To assess whether the use of adjoint sensitivities for optimization is still viable , we compared the performance of the interior point algorithm using adjoint sensitivity analysis with the BFGS approximation of the Hessian to the performance of the trust-region reflective algorithm using forward sensitivity analysis with Fisher Information Matrix as approximation of the Hessian ., For both algorithms we used the MATLAB implementation in fmincon . m ., The employed setup of the trust-region algorithm is equivalent to the use of lsqnonlin . 
m which is the default optimization algorithm in the MATLAB toolbox Data2Dynamics 7 , which was employed to win several DREAM challenges ., For the considered model the computation time of forward sensitivities is comparable in Data2Dynamics and AMICI ., Therefore , we expect that Data2Dynamics would perform similarly to the trust-region reflective algorithm coupled to forward sensitivity analysis ., We evaluated the performance for the model of ErbB signaling based on 100 multi-starts which were initialized at the same initial points for both optimization methods ., For 41 out of 100 initial points the gradient could not be evaluated due to numerical problems ., These optimization runs are omitted in all further analysis ., To limit the expected computation time to a bearable amount , we allowed a maximum of 10 iterations for the forward sensitivity approach and 500 iterations for the adjoint sensitivity approach ., As the previously observed speedup in gradient computation was roughly 48 fold , we expected this setup to yield similar computation times for both approaches ., We found that for the considered number of iterations , both approaches perform similarly in terms of objective function value compared across iterations ( Fig 4a and 4b ) ., However , the computational cost of one iteration was much lower for the optimizer using adjoint sensitivity analysis ., Accordingly , given a fixed computation time , the interior-point method using adjoint sensitivities outperforms the trust-region method employing forward sensitivities and the FIM ( Fig 4c and 4d ) ., In the allowed computation time , the interior point algorithm using adjoint sensitivities could reduce the objective function by up to two orders of magnitude ( Fig 4c ) ., This was possible although many model parameters seem to be non-identifiable ( see S1 Supporting Information Section 4 ) , which can cause problems ., To quantify the speedup of the optimization using adjoint sensitivity analysis over the
optimization using forward sensitivity analysis , we performed a pairwise comparison of the minimal time required by the adjoint sensitivity approach to reach the final objective function value of the forward sensitivity approach for the individual points ( Fig 4e ) ., The median speedup achieved across all multi-starts was 54 ( Fig 4f ) , which was similar to the 48 fold speedup achieved in the gradient computation ., The availability of the Fisher Information Matrix for forward sensitivities did not compensate for the significantly reduced computation time achieved using adjoint sensitivity analysis ., This could be due to the fact that the adjoint sensitivity based approach , being able to carry out many iterations in a short time-frame , can build a reasonable approximation of the Hessian relatively fast ., In summary , this application demonstrates the applicability of adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks ., While possessing similar accuracy to forward sensitivities , adjoint sensitivities scale better , which results in increased optimizer efficiency ., For the model of ErbB signaling , optimization using adjoint sensitivity analysis outperformed optimization using forward sensitivity analysis ., Mechanistic mathematical modeling at the genome scale is an important step towards a holistic understanding of biological processes ., To enable modeling at this scale , scalable computational methods are required which are applicable to networks with thousands of compounds ., In this manuscript , we present a gradient computation method which meets this requirement and which renders parameter estimation for large-scale models significantly more efficient ., Adjoint sensitivity analysis , which is extensively used in other research fields , is a powerful tool for estimating parameters of large-scale ODE models of biochemical reaction networks ., Our study of several benchmark models with up to 500 state
variables and up to 1801 parameters demonstrated that adjoint sensitivity analysis provides accurate gradients in a computation time which is much lower than for established methods and effectively independent of the number of parameters ., To achieve this , the adjoint state is computed using a piece-wise continuous backward differential equation ., This backward differential equation has the same dimension as the original model , yet the computation time required to solve it usually is slightly larger ., As a result , finite differences and forward sensitivity analysis might be more efficient if the sensitivities with respect to a few parameters are required ., The same holds for alternatives like complex-step derivative approximation techniques 51 and forward-mode automatic differentiation 28 , 52 ., For systems with many parameters , adjoint sensitivity analysis is advantageous ., A scalable alternative might be reverse-mode automatic differentiation 28 , 53 , which remains to be evaluated for the considered class of problems ., | Introduction, Methods, Results, Discussion | Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation ( ODE ) models has improved our understanding of small- and medium-scale biological processes ., While the same should in principle hold for large- and genome-scale processes , the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far ., While individual simulations are feasible , the inference of the model parameters from experimental data is computationally too intensive ., In this manuscript , we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks ., We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology ., Our comparison reveals a significantly improved 
computational efficiency and a superior scalability of adjoint sensitivity analysis ., The computational complexity is effectively independent of the number of parameters , enabling the analysis of large- and genome-scale models ., Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods ., The proposed method will facilitate mechanistic modeling of genome-scale cellular processes , as required in the age of omics . | In this manuscript , we introduce a scalable method for parameter estimation for genome-scale biochemical reaction networks ., Mechanistic models for genome-scale biochemical reaction networks describe the behavior of thousands of chemical species using thousands of parameters ., Standard methods for parameter estimation are usually computationally intractable at these scales ., Adjoint sensitivity based approaches have been suggested to have superior scalability but any rigorous evaluation is lacking ., We implement a toolbox for adjoint sensitivity analysis for biochemical reaction network which also supports the import of SBML models ., We show by means of a set of benchmark models that adjoint sensitivity based approaches unequivocally outperform standard approaches for large-scale models and that the achieved speedup increases with respect to both the number of parameters and the number of chemical species in the model ., This demonstrates the applicability of adjoint sensitivity based approaches to parameter estimation for genome-scale mechanistic model ., The MATLAB toolbox implementing the developed methods is available from http://ICB-DCM . github . io/AMICI/ . 
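The per-parameter timings quoted above for the ErbB model can be turned into a rough cost model that reproduces the reported crossover and speedups . This is an illustrative sketch only: the linear cost functions below are assumptions fitted to the quoted numbers ( 10 . 09 s plus 1 . 9 ⋅ 10−2 s per parameter for adjoint analysis , 3 . 24 s and 4 . 72 s per parameter for forward sensitivities and finite differences ) , not AMICI output .

```python
# Back-of-the-envelope gradient-cost model, using only the timings quoted in
# the Results for the ErbB model (all in seconds). The linear form of each
# cost function is an assumption fitted to those quoted numbers.

def t_adjoint(n_par: int) -> float:
    # one forward + one backward solve (~10.09 s), plus a small per-parameter cost
    return 10.09 + 1.9e-2 * n_par

def t_forward(n_par: int) -> float:
    # one forward sensitivity system per parameter
    return 3.24 * n_par

def t_finite_diff(n_par: int) -> float:
    # one perturbed forward solve per parameter
    return 4.72 * n_par

# Smallest parameter count for which the adjoint method is cheaper than forward:
crossover = next(n for n in range(1, 10_000) if t_adjoint(n) < t_forward(n))

# Speedups for all 219 parameters of the ErbB model (reported: ~48x and ~72x):
speedup_fwd = t_forward(219) / t_adjoint(219)
speedup_fd = t_finite_diff(219) / t_adjoint(219)

print(crossover, round(speedup_fwd), round(speedup_fd))
```

The near-flat adjoint cost function is exactly the "effectively independent of the number of parameters" behaviour claimed above: the backward solve dominates , and adding parameters only adds cheap quadratures .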
| applied mathematics, simulation and modeling, algorithms, optimization, genomic databases, mathematics, genome analysis, research and analysis methods, genome complexity, biological databases, differential equations, biochemistry, biochemical simulations, database and informatics methods, genetics, biology and life sciences, physical sciences, genomics, computational biology | null |
2,222 | journal.pcbi.1005747 | 2,017 | Exploiting ecology in drug pulse sequences in favour of population reduction | To quantify pulse efficiency , we primarily study the minimal population size nmin of our two-species system as a proxy for the extinction probability of the population ., Antibiotic stewardship programmes suggest that for some diseases , such as pneumonia , the immune system can clear the residual infection once the bacterial population size is sufficiently reduced 19 , 20 ., Thus , the minimal population size may be a more relevant parameter than the exact extinction probability itself ., Additionally , the general behaviour of the deterministic system and its observable nmin is more robust than the extinction probability in the stochastic model , as in the latter , the precise form of stochastic noise , or the system size , would be important ., The total population minimum nmin can still serve to gauge the latter , which scales as exp ( nmin ) 21 ., Before introducing our model in detail , we jump ahead and summarise the essential result of our work in Fig 1 , which we discuss later in more detail ., Fig 1 shows the value of the minimal total population size in the configuration space of drug concentration profiles , spanned by the width of the pulse ( or duration of its high stress environment ) on the x-axis and the form of the pulse on the y-axis , which will be explained later ., Practically , these two properties of a pulse—its high stress duration and its form—are likely constrained: a very long duration of the high stress environment or stronger drug might be detrimental for patients due to , for example , a destructive impact on the gut microbiome 22 , 23 ., Similarly , some pulse forms , such as those where the highest possible drug concentration suddenly drops to zero at the pathogen location at the end of the pulse ( here denoted by temporal skewness s = 1 ) , may not be realistic for clinical treatments ., However , since we do not 
want to make any assumptions on which parts of the configuration space should be accessible , we examine our system for all possible combinations of pulse form and durations of the high stress environments ., The colour code ( symbols ) signifies which of the four possible pulse sequences sketched on the right of Fig 1 most effectively reduces the population size ., Fig 1 clearly shows that in our simple model setup , different pulse sequences are favourable in different regions of configuration space ., The aim of this work is to outline phenomenologically which pulse sequence yields the lowest minimal population for which part of configuration space in Fig 1 , and might therefore be most likely to drive the species to extinction ., The best pulse sequence at any one point of Fig 1 tends to be the one that maximally exploits the competition between the more resistant and wild-type species , represented by logistic growth in our model ., The simplicity of our approach makes explicit why some references might argue for more moderate treatments involving e ., g ., shorter or lower drug concentrations , but also what the limitation of models and observables are and hence why such moderate treatments may not work in real setups ., We also examine how the population composition ( a measure of how strongly the more resistant species dominates the population ) evolves , should such a pulse sequence not lead to achieving extinction ., Finally , we highlight the need for microbial experiments in such temporally varying drug gradients , in order to evaluate the applicability of simple models to real systems ., The simplest model that can be used to study the effect of the temporal concentration profile on a heterogeneous population ( n ) consists of two phenotypically different species , a susceptible “wild type” species ( w ) , and a more tolerant or resistant species ( r ) ., Its increased resistance comes at the cost of a reduced fitness in the drug-free environment , 
which is reflected in a smaller growth rate ., As in previous works 24 , 25 , we assume that the drug is bacteriostatic , that is , it only affects growth , such that growth of each species ceases as soon as its minimum inhibitory concentration ( MIC ) is exceeded ., Thus , in this deterministic population dynamics model for the birth-death process , sketched in the inset of Fig 2 , the growth rate of each species η ∈ {r , w} is given by ϕη ( t , n ( t ) ) = Θ ( MICη − c ( t ) ) λη ( 1 − n ( t ) ) , where n ( t ) = w ( t ) + r ( t ) is the total number of species at time t expressed in terms of a carrying capacity , which does not require specification as it serves merely as a unit for the population size ., The Heaviside-Theta step function Θ implies that the growth rate is only non-zero when the drug concentration is lower than the MIC of the corresponding species ., The index η ∈ {r , w} refers to the type of species ( resistant or wild-type ) , and λη is its growth rate ., The more resistant species has a lower basal growth rate in the drug-free environment , i . e .
, λr = λw − k ≔ λ − k , where k > 0 can be interpreted as a cost that the resistant species incurs for being more resistant ., The logistic growth assumed in this model introduces competition between the wild-type and the resistant species for limited space and/or resources , and places an upper bound on the population size ., We also include a constant death rate δ for both species , meaning that a species decays at rate δ when c ( t ) > MICη: For these higher concentrations , growth of species η is inhibited and , since switching is negligible , the species can only die ., All rates and times in this work are given in units of λ ., The time evolution of the population can be studied in terms of the differential equations w˙ ( t ) = ( ϕw ( t , n ( t ) ) − δ − μw ) w ( t ) + μr r ( t ) , r˙ ( t ) = μw w ( t ) + ( ϕr ( t , n ( t ) ) − δ − μr ) r ( t ) , ( 1 ) since for sufficiently large populations stochastic fluctuations can be neglected ., The two species are coupled via the competition from logistic growth , as well as via the switching rates μw and μr ., Phenotypically more resistant states can be characterised by a reduced growth rate , or complete growth arrest , often known as tolerance or persistence 26–28 ( for a recent review , see Ref . 29 ) ., Provided that μw , r ≪ δ , which is the case for both mutation and phenotypic switching , our choice of μw = 10−6 λ and μr = 0 does not qualitatively affect the results ., For this entire work , we used exemplary values of δ = 0 . 1λ and k = 0 . 1λ , where λ ≡ 1 , i . e . we used λ as the basic unit of time ., We investigate several other combinations of costs and death rates , in particular combinations with the same death rate , but a smaller and larger cost , in S1 Text ., There , we show that our results and general statements are still valid for these cases ., We chose the values of δ = 0 . 1 and k = 0 .
1 , since this combination allowed us to show the complete and most general picture of possible best pulse shapes in Fig 1 ., A smaller ( yet also biologically possible ) fitness cost would not have produced all of the different scenarios ., We ask the reader to refer to S1 Text for more details ., Since in our model the only relevant information about the antibiotic concentration is whether it is above or below the MIC of the corresponding species , any pulse sequence is fully determined by the temporal arrangement of low-stress ( low ) and high-stress ( high ) environments ., In these ( low ) and ( high ) environments , the antibiotic concentration is low , MICw < c ( t ) < MICr , or high , c ( t ) > MICr , respectively ( sketched for a single pulse in the top panel of Fig 2 ) ., Before the pulse sequence , the system is in the drug-free environment ( free ) , where the concentration of the antibiotic c ( t ) is less than either MIC , c ( t ) < MICw , r ., We assume that the ( free ) environment appears only before , but not during , a pulse sequence ., Thus , the ( free ) environment determines the initial condition of the population , which we take to be at its fixed point , ( w ( t = 0 ) , r ( t = 0 ) ) = ( w ( free ) * , r ( free ) * ) , shown as the purple dot near the w-axis of the phase space panel ( free ) of Fig 2 ., The change in population size and composition in each of these environments is characterised by the flow field in phase space ( w , r ) , shown in the three lower panels of Fig 2 .
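As an illustration , Eq ( 1 ) can be integrated numerically through a single pulse . The following is a minimal Euler-stepping sketch , not the integrator used for the paper's figures; the parameter values follow the text ( λ = 1 , k = 0 . 1 , δ = 0 . 1 , μw = 10−6 , μr = 0 ) , while the initial condition , time step and segment durations are illustrative choices .

```python
# Minimal Euler sketch of the two-species model of Eq (1). Parameters follow
# the text (lambda = 1, cost k = 0.1, death rate delta = 0.1, mu_w = 1e-6,
# mu_r = 0); initial condition and time step are illustrative assumptions.

LAM, K, DELTA, MU_W, MU_R = 1.0, 0.1, 0.1, 1e-6, 0.0

def derivatives(w, r, env):
    """Right-hand side of Eq (1); env is 'free', 'low' or 'high'."""
    n = w + r
    phi_w = LAM * (1.0 - n) if env == "free" else 0.0        # w grows only below MIC_w
    phi_r = (LAM - K) * (1.0 - n) if env != "high" else 0.0  # r grows below MIC_r
    dw = (phi_w - DELTA - MU_W) * w + MU_R * r
    dr = MU_W * w + (phi_r - DELTA - MU_R) * r
    return dw, dr

def simulate(schedule, w0=0.9, r0=1e-4, dt=1e-3):
    """Integrate through (environment, duration) segments; track the minimum of n."""
    w, r = w0, r0
    n_min = w + r
    for env, duration in schedule:
        for _ in range(round(duration / dt)):
            dw, dr = derivatives(w, r, env)
            w, r = max(w + dt * dw, 0.0), max(r + dt * dr, 0.0)
            n_min = min(n_min, w + r)
    return w, r, n_min

# A single symmetric pulse (s = 0) with total time tau = 60 and t_r = 10:
w, r, n_min = simulate([("low", 25.0), ("high", 10.0), ("low", 25.0)])
```

During the ( low ) segments only the more resistant species can grow , so the ratio r/w increases across the pulse , while n_min records the minimal total population used as the proxy observable above .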
In the ( low ) environment , the population flows towards the more resistant species ( high r and low w ) , while in the ( high ) environment , it flows towards the origin , meaning that both species die out exponentially ., Thus , the effect of a single pulse on the population crucially depends on the times spent by the system in the ( low ) and ( high ) environments ., A single pulse involves a single ( connected ) environment of ( high ) antibiotic exposure , with a ( low ) environment potentially preceding or succeeding this ( high ) environment ., In reality , the duration of these ( low ) environments will depend on the experimental setup or host ., A pulse sequence is composed of a succession of identical single pulses ., We refer to the total time of the pulse as τ , and the time during which the system is in the ( high ) environment as tr ., The time periods during which the system is in the ( low ) environment ( initially ) before tr and ( finally ) after tr are denoted by tw ( i ) and tw ( f ) , respectively ., As this would overparameterise the pulse , we combine the latter two time scales into a skewness parameter s = ( tw ( i ) − tw ( f ) ) / ( τ − tr ) , signifying how tr is positioned within τ ., Skewness s = − 1 ( s = 1 ) thus denotes a pulse which starts ( ends ) with the ( high ) environment , while skewness s = 0 denotes a symmetric pulse ., We compared pulse sequences of up to N = 4 pulses ( same tr and s ) for constant treatment time τ for all possible skewnesses s and durations tr ., Thus , a single pulse with τ = 60 and given tr and s is compared with a sequence of N identical pulses , each defined by τ ( N ) = 60/N , tr ( N ) = tr/N and s ., ( Further values of τ are discussed in S1 Text ) ., The retention of the same skewness within a sequence is motivated by the fact that we assume that the rate of increase or decrease in concentration is primarily determined by the host system of the bacteria ., In this comparison , the ‘best’
pulse sequence for given ( tr , s ) is defined as the one that yields the lowest population minimum nmin and so has the highest likelihood of eliminating the pathogen ., In situations where the entire configuration space is accessible , the maximal tr yields the overall lowest population minimum , independent of skewness s ., Since practically the maximal duration tr acceptable for treatments may be limited , it is important to know which pulse sequence is best for each ( tr , s ) , such that we can provide intuition on any situation and parameter choice that may arise ., The colour ( and corresponding symbols ) in Fig 1 show the best pulse sequences ( i . e . the best N ) , and the shade indicates the value of nmin ( dark denotes high values ) ., We found that a single pulse is most effective over a large range of parameters ( blue in Fig 1 ) ., In particular , for each duration in the ( high ) environment tr , the lowest minimum across all skewnesses is obtained by a single pulse ( blue line ) ., This means that in practical situations which allow all different pulse skewnesses , a single pulse with a skewness on the blue line would give the lowest minimum ., If , however , the possible pulse skewness is limited due to the host setup , a single pulse may not be the best choice ., For ease of comparison , Fig 3a shows nmin for just a single pulse of constant treatment time τ = 60 , with the white line marking the lowest minimum ( the blue line in Fig 1 ) ., In the next paragraph we focus on a single pulse in order to understand which pulse parameters ( s , tr ) yield this lowest minimum ., In the previous section , we learnt which pulse sequences yield the maximal relative reduction in population size for which regions in ( tr , s ) -space ., This minimal population nmin served as a proxy for gauging when extinction would most likely occur in a setting where an immune response can destroy the population when it is already small ., Now , we would like to
address a complementary question: in the event that extinction does not occur , whether because nmin was too high or the population was small for too short a time , what is the effect of such a ‘failed’ pulse on the bacterial population ?, We already saw that the composition of the population shifts more towards r with each pulse ., In terms of real treatments , it might often be better not to pursue treatments which , if unsuccessful , entail a high risk of creating a fully resistant population ., In order to evaluate the pulsed treatments associated with the most effective population reductions based on Fig 1 , we now focus on the population composition , quantified by the ratio of resistant to wild-type species , r/w , at the end of the best pulse within the best sequence ., Evaluating r/w at the end of the pulse that yields the global minimum is motivated by the fact that the treatment can be stopped after , but not during , an individual pulse ., Fig 5b shows the dependence of r/w on the pulse configuration , which can be best understood by first considering how the population evolves in ( w , r ) phase space during the different pulse sequences ., In Fig 5a , we show trajectories for three pulse sequences , consisting of a single , two , or three pulses respectively , with τ = 60 and tr = 10 ., The qualitative behaviour of the phase space trajectory is independent of skewness ( in Fig 5a , s = 0 . 9 ; the corresponding trajectories for s = −0 . 5 , s = 0 . 2 and s = 0 .
5 can be found in Fig D in S1 Text ) ., The colour of the trajectory darkens progressively with every pulse in the sequence ., The trajectory starts at the ( free ) fixed point close to the w-axis beyond the limits of Fig 5a , and evolves towards the r-axis ., Within each sequence , r/w steadily increases from pulse to pulse , as r progressively takes over the population during the ( low ) regimes ., Thus , in the top left corner of Fig 5b , where the first pulse of the sequence yields the lowest minimum , r/w is comparatively smaller ( lighter shading ) ., Indeed , the higher the N of the best sequence , the lower the ratio in Fig 5b , provided the global minimum is reached in the first pulse ( such as in the red region in Fig 1 ) ., The region marked with the white line in Fig 1 , where intermediate pulses ( and not the first pulse ) in the sequence yielded the lowest global minimum , also shows up clearly as darker in Fig 5b ., Here , r has grown more than for a single pulse , as more pulses were applied before the population minimum was reached ., Thus , in our model , when both population reduction and composition are considered , pulse sequences where the minimum is attained in the first pulse are generally more effective than a single long pulse: maintaining the ( low ) regime in the first pulse for around to keeps r/w as well as nmin small ., This argument suggests that treating with this first pulse only achieves the best result , and additionally comes with a shorter total treatment duration τ and a shorter tr ., We would like to note that even if the population does not die out during this short treatment , multiple pulses of this form could be added in order to give the immune system more opportunities to eliminate the infection ., These additional pulses would not drastically change r/w compared to the composition obtained after the single long pulse of τ = 60 ., This can be seen also in Fig 5a , where for all pulse sequences the population composition 
is similar at the end of the entire sequence ., Experiments with microbes can help investigate minimal antibiotic dosages and treatment times in a well-controlled test tube setup , where the impact of certain treatments on the microbial species itself can be studied without interfering effects , for example from the immune system ., Such microbial experiments have , for example , helped suggest drug combinations or treatment regimens which could retard the development of antibiotic resistance 30–33 ., Increasingly , these experiments try to incorporate practically important aspects of heterogeneities in the environment 34 , such as drug concentration gradients ., These gradients can enhance the development of bacterial resistance relative to spatially homogeneous systems 24 , 25 , 35 , 36 , as the more resistant species can successfully compete with a faster growing , but more susceptible wild-type species ., Avoiding such an enhancement of the selective advantage of the more resistant species under temporal heterogeneities in drug levels , including the duration , frequency and even the concentration profile during a single antibiotic pulse ( as also studied in this work ) , is likewise important in real treatments 7 , 37 , and is within the limits of current experiments ., Our model makes two drastic simplifications compared to real microbial species ., First , we study only two species , instead of a series of possible phenotypically or genotypically different species ., Typically , the evolutionary pathway that leads to a fully resistant species involves a variety of intermediate mutants , even when the mutational paths are constrained 38 ., Since the fitness benefit diminishes with each successive mutation in a series 39 , 40 , we assumed that the strongest effect is conferred by the first mutation , and neglected all higher order mutants ., For phenotypic switches , it is reasonable to consider only two species , corresponding to , for example , the expression or
repression of a protein 41, 42. Thus, our model should be applicable to experimental systems, while in real patients different types of tolerant or persister cells might be involved 43, or even interact 44. The second simplification concerns how these two species are affected by the antibiotic. In our model, we assume that the antibiotic is bacteriostatic, i.e., it only affects the growth of the species 24, 25. We also assume that the growth rate of each species falls abruptly to zero when the antibiotic concentration is higher than its respective minimum inhibitory concentration (MIC) (see e.g. 45). The experimental situation is more complex: cessation of growth is not instantaneous, the space occupied by a dead cell may not immediately become available 42, and the general use of the MIC as an indicator for slow growth is questionable 46. However, an abrupt change in growth rate at the MIC has been verified experimentally for E. coli and chloramphenicol 25. Additionally, our analysis is based on large numbers rather than extinction events, which would be model specific; thus, small changes in the model (such as a reduced but non-zero growth rate) should still give qualitatively similar results. Evaluating the effect of different pulse sequences should be possible within a microfluidics setup, where, for example, periodically fluctuating environments have already been investigated for E. coli and tetracycline 42. We expect that one should be able to observe that the (low) drug-concentration environment can be exploited in order to increase the extinction probability, with the (high) environment present for as short a time as possible, the total treatment time being constant. How long the duration of the (low) environment should be for best exploitation would be sensitive to the growth rate of the more resistant species, which for tetracycline could be controlled using a specific promoter, namely the agn43 promoter 42, 47, 48. Just as shown in Fig 1, we expect higher-N pulse sequences to do better when this duration is optimal for them, but not for the longer pulse. Further study of E. coli in combination with other antibiotics and more resistant strains should also show this, in addition to being more realistic than our simple model.
is short, which implies that a single short pulse can be more effective than a more protracted regime. Our results suggest that when the duration of the high-stress environment is restricted, a treatment with one or multiple shorter pulses can produce better outcomes than a single long treatment. If ecological competition is to be exploited for treatments, it is crucial to determine these timescales, and to estimate the minimal population threshold that suffices for extinction. These parameters can be quantified by experiment. | The possibilities of lower antibiotic dosages and treatment times, as demanded by antibiotic stewardship programmes, have been investigated with complex mathematical models that account, for example, for the presence of an immune host. At the same time, microbial experiments are getting better at mimicking real setups, such as those where the drug gradually permeates in and out of the region containing the infectious population. Our work systematically discusses an extremely simple and thus conceptually easy model for an infectious two-species system (one wild-type and one more resistant population), interacting via logistic growth and subject to low- and high-stress environments. In this model, well-defined timescales exist during which the low-stress environment is as efficient in reducing the population as the high-stress environment. We explain which temporal patterns of low and high stress, corresponding to sequences of drug treatments, lead to the best population reduction for a variety of durations of high stress within a constant long low-stress environment. The complexity of the spectrum of best treatments merits further experimental investigation, which could help clarify the relevant timescales. This could then give useful feedback towards the more complex models of the medical community.
| antimicrobials, medicine and health sciences, ecology and environmental sciences, drugs, immunology, microbiology, antibiotic resistance, probability distribution, mathematics, pharmaceutics, antibiotics, pharmacology, population biology, skewness, research and analysis methods, conservation biology, sequence analysis, antimicrobial resistance, bioinformatics, probability theory, immune system, conservation science, population metrics, species extinction, population size, database and informatics methods, microbial control, biology and life sciences, physical sciences, drug therapy, evolutionary biology, evolutionary processes | null |
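The two-species model summarized above lends itself to a compact simulation. The sketch below is a minimal illustration under stated assumptions, not the paper's code: all rates, MIC values, the pulse shape and every function name are invented for the example. It implements logistic competition between a wild-type and a more resistant species with a bacteriostatic drug whose effect is an abrupt halt of growth above each species' MIC, as described in the discussion.

```python
def simulate(conc, dt=0.01, T=100.0,
             g_wt=1.0, g_res=0.7,       # growth rates (resistant species grows slower)
             mic_wt=1.0, mic_res=10.0,  # MICs (resistant species tolerates more drug)
             d=0.1, K=1.0):             # death rate and carrying capacity (illustrative values)
    """Two-species logistic birth-death model with a bacteriostatic drug:
    growth drops abruptly to zero above each species' MIC."""
    n_wt, n_res = 0.99 * K, 0.01 * K    # start near capacity, mostly wild type
    totals = []
    for step in range(int(T / dt)):
        c = conc(step * dt)                       # drug concentration at time t
        crowd = 1.0 - (n_wt + n_res) / K          # shared logistic competition term
        b_wt = g_wt * crowd if c < mic_wt else 0.0
        b_res = g_res * crowd if c < mic_res else 0.0
        n_wt += dt * n_wt * (b_wt - d)            # forward-Euler birth-death update
        n_res += dt * n_res * (b_res - d)
        totals.append(n_wt + n_res)
    return totals, n_wt, n_res

# a single long high-stress pulse followed by the low-stress environment
pulse = lambda t: 5.0 if t < 50.0 else 0.1
totals, n_wt, n_res = simulate(pulse)
print(min(totals), n_res > n_wt)  # minimal population size; selection for resistance
```

With these illustrative parameters the wild type is inhibited during the pulse while the resistant species keeps growing, so the minimal population size is set by the competition between the two, and the resistant species dominates afterwards.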
1,021 | journal.pbio.2004015 | 2018 | Manipulating the revision of reward value during the intertrial interval increases sign tracking and dopamine release | Lesaint and colleagues 1 recently proposed a new computational model, the "STGT model" (for sign tracking and goal tracking), which accounts for a large set of behavioral, physiological, and pharmacological data obtained from studies investigating individual variation in Pavlovian conditioned approach behavior 2–8. Most notably, the model can account for recent work by Flagel and colleagues (2011) that has shown that phasic dopamine (DA) release does not always correspond to a reward prediction error (RPE) signal arising from a classical model-free (MF) system 9. In their experiments, Flagel and colleagues trained rats on a classical autoshaping procedure, in which the presentation of a retractable-lever conditioned stimulus (CS; 8 s) was followed immediately by delivery of a food pellet (unconditioned stimulus, US) into an adjacent food cup. In procedures like this, some rats, known as sign trackers (STs), learn to rapidly approach and engage the CS lever, whereas other rats, known as goal trackers (GTs), learn to approach and enter the food cup upon presentation of the CS lever. Although both sign and goal trackers learn the CS-US relationship equally well, it was elegantly shown that phasic DA release in the nucleus accumbens core (NAc) matched RPE signals only in STs 4. Specifically, during learning in ST rats, DA release to reward decreased, while DA release to the CS increased. In contrast, even though GTs acquired a Pavlovian conditioned approach response, DA release to reward did not decline, and CS-evoked DA was weaker. Furthermore, administration of a DA antagonist blocked acquisition of the ST conditioned response but did not impact the GT conditioned response 4, 10. Several computational propositions have argued that these data could be
interpreted in terms of different contributions of model-based (MB) reinforcement learning (RL), with an explicit internal model of the consequences of actions in the task, and MF RL, without any internal model, in GTs and STs during conditioning 1, 11. Nevertheless, only the STGT model predicted that manipulating the intertrial interval (ITI) should change DA signaling in these animals: the model suggests that GTs revise the food cup value multiple times during and between trials across the 90-s ITI. During the trial, the food cup gains value because reward is delivered there; visits to the food cup during the ITI, however, produce no reward, reducing the value assigned to the food cup. This mechanism prevents the progressive transfer of the reward value signal in the model from US time to CS time and hence explains the absence of a DA RPE pattern in goal trackers. This aspect of the model predicts that decreasing the ITI should reduce the amplitude of the US DA burst (i.e., less time to negatively revise the value of the food cup, and thus a smaller RPE) and that the resulting higher food cup value should increase the tendency to goal track in the overall population. In contrast, increasing the ITI should have the opposite effect. That is, lengthening the ITI, and thereby increasing the number of nonrewarded food cup entries, should increase the amplitude of the US DA burst (i.e., more time to negatively revise the value of the food cup during the ITI, and thus a larger RPE) and lower the value of the food cup, leading to a decreased tendency to goal track and an increased tendency to sign track.
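The revision mechanism described here can be caricatured with a simple delta rule. This is an illustrative sketch, not the STGT implementation: the learning rate, visit counts and function name are invented for the example. It only shows that more unrewarded food-cup updates during the ITI leave a lower cup value, and hence a larger surprise (RPE) when reward is finally delivered.

```python
def cup_value_after_training(iti_visits, n_trials=25, alpha=0.1, reward=1.0):
    """Delta-rule caricature of the food-cup revision: one rewarded update per
    trial at US time, plus `iti_visits` unrewarded updates during each ITI."""
    v = 0.0
    for _ in range(n_trials):
        v += alpha * (reward - v)      # rewarded food-cup visit at US time
        for _ in range(iti_visits):
            v += alpha * (0.0 - v)     # unrewarded ITI visits devalue the cup
    return v

# short ITI -> few unrewarded visits; long ITI -> many (visit counts are made up)
v_short, v_long = cup_value_after_training(2), cup_value_after_training(6)
rpe_short, rpe_long = 1.0 - v_short, 1.0 - v_long  # surprise at reward delivery
print(v_short > v_long, rpe_long > rpe_short)      # prints: True True
```

Under these assumptions the long-ITI condition ends training with a lower food-cup value and a larger positive RPE at US time, matching the prediction of a stronger US DA burst with longer ITIs.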
The latter would be accompanied by a large phasic DA response to the highly salient lever CS, as previously observed in STs 4. Here, we tested these predictions by recording DA release in the NAc using fast-scan cyclic voltammetry (FSCV) during 10 d of Pavlovian conditioning in rats that had either a short ITI of 60 s or a long ITI of 120 s. DA release was recorded from the NAc (S1B–S1E Fig) during a standard Pavlovian conditioned approach task (S1A Fig) for 10 d. Each trial began with the presentation of a lever (CS) located to the left or right side of a food cup (counterbalanced) for 8 s. Upon the lever's retraction, a 45-mg sucrose pellet was delivered into the food cup, independent of any interaction with the lever. Each behavioral session consisted of 25 trials presented at a random time interval of either 60 s (n = 7 rats) or 120 s (n = 12 rats). To quantify the degree to which rats engaged in sign- versus goal-tracking behavior, we used the Pavlovian Conditioned Approach (PCA) index 12, which is the average of three ratios: (1) the response bias, (Lever Presses − Food Cup Entries) / (Lever Presses + Food Cup Entries); (2) the probability difference, (P_lever − P_receptacle); and (3) the latency index, (mean Cup Entry Latency − mean Lever Press Latency) / 8. All of these ratios range from −1.0 to +1.0 (as does the PCA index) and are more positive for animals that sign track and more negative for animals that goal track. All behavioral indices were derived from sessions during which DA was recorded. For the initial analysis described in this section, behavior and DA were examined across all sessions; the development of behavior and DA over training is examined in later sections. The distributions of behavioral session scores are shown in Fig 1A–1D for each group. As predicted, rats with the 120-s ITI tended to sign track more, whereas rats with the 60-s ITI tended to goal track more. Across all behavioral indices (i.e., response bias, probability, latency, PCA), the mean distributions were positive (biased toward sign tracking) and significantly shifted from zero across sessions for rats in the 120-s ITI group (Fig 1A–1D, left; Wilcoxon; μ's > 0.17, p < 0.05). Opposite trends were observed in the 60-s ITI group, in that all distributions were negatively shifted from zero (Fig 1A–1D, right; Wilcoxon; response bias: μ = −0.06, p = 0.06; lever probability: μ = −0.03, p = 0.58; PCA index: μ = −0.11, p = 0.
01 ) ., Thus , we conclude that lengthening the ITI increased sign-tracking behavior , as predicted by the STGT model 1 , 13 ., Notably , the degree of sign/goal tracking within the 60-s ITI group was highly dependent on when behavior was examined during the 8-s CS period ., This is illustrated in Fig 1G and Fig 1H , which show percent beam breaks in the food cup ( solid lines ) and lever pressing ( dashed lines ) over the time of the trial ., Consistent with the ratio analysis described above ( Fig 1A–1D ) , rats in the 120-s ITI group ( red ) showed sustained pressing ( red dashed ) that started shortly after lever extension and persisted throughout the 8-s CS period , while showing no increase in food cup entries ( red solid ) after CS presentation ( Fig 1G , red solid versus dashed ) ., Although it is clear that rats in the 120-s ITI group sign track more than goal track during the CS period , the relationship between lever pressing and food cup entry was far more dynamic during sessions with 60-s ITIs ( Fig 1G; blue ) ., During 60-s ITI sessions , rats would briefly enter the food cup for approximately 2 s immediately upon CS presentation ( Fig 1G , solid blue ) , before engaging with the lever ( Fig 1G , dashed blue ) ., As a result , lever pressing was delayed in the 60-s ITI group relative to the 120-s ITI group ( Fig 1G and 1H; blue versus red dashed ) ., This suggests that the goal-tracking tendencies described above during the entire 8-s CS period were largely due to the distribution of behaviors observed early in the CS period ., To quantify this observation , we recomputed the PCA index using either the first or the last 4 s of the 8-s CS period ., For the 120-s ITI group , the PCA index was significantly shifted in the positive direction during both the first and last 4 s of the cue period ( i . e . , more sign tracking; Fig 1E and 1F , left; Wilcoxon; μ’s > 0 . 16; p < 0 . 
05 ) ., For the 60-s ITI group , the PCA index was significantly shifted in the negative direction during the first 4 s ( i . e . , more goal tracking; Fig 1E , right; Wilcoxon; μ = −0 . 16; p < 0 . 05 ) but not significantly shifted during the last 4 s ( Fig 1F , Wilcoxon; μ = 0 . 01; p = 0 . 81 ) ., Interestingly , this part of the results goes beyond the STGT model , which simplifies time by considering a single behavior/action during that period ., To further demonstrate sign- and goal-tracking tendencies over the 8-s cue period and the differences between groups , we simply subtracted 60-s ITI lever pressing and food cup entries from 120-s ITI lever pressing ( Fig 1I; orange ) and food cup entries ( Fig 1I; green ) , respectively ., Shortly after cue onset , the green line representing the difference between 120-s and 60-s ITI food cup entries dropped significantly below zero ., Throughout the cue period ( 8 s ) , there were more contacts with the food cup in sessions with a 60-s ITI compared with the 120-s ITI group ( green tick marks represent differences between 120-s and 60-s ITI across sliding 100-ms bins; t test; p < 0 . 05 ) ., For lever pressing ( orange ) , values were constantly higher shortly after the cue for the first half of the cue period ( orange tick marks represent differences between 120-s and 60-s ITI across sliding 100-ms bins; t test; p < 0 . 
05 ) , indicating that there were more contacts with the lever in sessions with a 120-s ITI compared with the 60-s ITI group early in the cue period ., The behavioral data described above globally support model predictions that increasing and decreasing the ITI would produce more and less sign tracking , respectively ., Nevertheless , they also pave the way for improvements of the model by showing a rich temporal dynamic of behavior during the trial , rather than the single behavioral response per trial simulated in the model ., By plotting lever presses and food cup entries over time , we see that sometimes rats initially go to the lever and then go to the food cup , or vice versa ., In contrast , the model was designed to account only for the initial action performed by rats ., This was sufficient to account for the main results of the present study ., Nevertheless , it would be interesting to extend the model to enable it to account for different decisions made sequentially by the same animal during a given trial ., Next , we tested the prediction that longer ITIs would elevate DA release to the US , while shorter ITIs would reduce DA release to the US ., The average DA release over all sessions for the 60-s and 120-s groups is shown in Fig 2A ., Rats in the 120-s ITI group exhibited significantly higher DA release to the CS and the US relative to rats in the 60-s ITI group ( CS t test: t = 2 . 99 , df = 178 , p < 0 . 05; US t test: t = 3 . 07 , df = 178 , p < 0 . 05 ) ., In the 120-s ITI group , DA release to both the CS and the US was significantly higher than baseline ( CS t test: t = 14 . 77 , df = 119 , p < 0 . 05; US t test: t = 4 . 79 , df = 119 , p < 0 . 05 ) ; however , in the 60-s ITI group , this was only true during CS presentation ( t test: t = 7 . 34 , df = 59 , p < 0 . 05 ) ; DA release at the US was not different than baseline ( t test: t = 0 . 99 , df = 59 , p = 0 . 
33 ) ., Similar results were obtained when averaging across sessions within each rat and then averaging across rats ( Fig 2B ) ; DA release was higher during the CS and US for rats in the 120-s ITI group ( CS t test: t = 1 . 87 , df = 17 , p < 0 . 05; US t test: t = 1 . 83 , df = 17 , p < 0 . 05 ) and was higher than baseline for both periods ( CS t test: t = 6 . 15 , df = 11 , p < 0 . 05; US t test: t = 2 . 16 , df = 11 , p < 0 . 05 ) , whereas DA release was only significantly higher during the CS period for rats in the 60-s ITI group ( CS t test: t = 6 . 68 , df = 6 , p < 0 . 05; US t test: t = 0 . 70 , df = 6 , p = 0 . 26 ) ., These results are in line with the STGT model , which predicted that reducing ITI duration would prevent the downward revision of the food cup value and hence would permit the high predictive value associated with the food cup to produce a DA response at CS but not US , consistent with the DA RPE hypothesis 9 ., Conversely and also consistent with model predictions , DA release during sessions with the longer ITI was significantly higher during US delivery because there were more positive RPEs , which may result from the positive surprise associated with being rewarded in a food cup whose value has been more strongly decreased during multiple visits to the food cup during long ITIs ., Nevertheless , at the CS time , the increased DA burst at CS indicates an even more complex process that goes beyond model predictions ., All of this suggests that DA release should be positively correlated with the time spent breaking the beam in the food cup during the ITI ., To test this hypothesis , we computed how much time was spent in the food cup during the ITI for each session ., This was done by determining the total number of beam breaks within each ITI ( 10-ms resolution ) and then averaging over trials to determine each session mean ., Importantly , the ITI time did not vary across sessions within each group , and the analysis was performed 
separately for the two groups ( 60-s group and 120-s group ) ., Thus , any correlation between DA and food cup interaction time during the ITI cannot reflect a correlation between DA and ITI time ., As expected , rats in the 120-s ITI group spent significantly more time in the food cup than did rats in the 60-s ITI group ( 120-s ITI group = 15 . 1 s; 60-s ITI group = 6 . 8 s; t test: t = 4 . 91 , df = 178 , p < 0 . 05 ) ., For both groups , there was a significant positive correlation between average time spent in the food cup during the ITI and DA release during the reward period ( Fig 2C , 120-s ITI: r2 = 0 . 12 , p < 0 . 05; Fig 2D , 60-s ITI: r2 = 0 . 08 , p < 0 . 05 ) ., During the cue period for the 120-s ITI group , but not the 60-s ITI group , there was also positive correlation ( Fig 2E , 120-s ITI: r2 = 0 . 04 , p < 0 . 05; Fig 2F , 60-s ITI: r2 = 0 . 01 , p = 0 . 36 ) ., Finally , when examining with data collapsed across both groups , there was a significant positive correlation during both cue and reward epochs ( Cue: r2 = 0 . 05 , p < 0 . 05; Reward: r2 = 0 . 14 , p < 0 . 05 ) ., Thus , we conclude that DA release to the CS and US tended to be higher the longer rats visited the food cup during the ITI ., In the analysis above , we averaged DA release and behavior from all recording sessions ., Next , we asked how behavior and DA release patterns evolved with training ., As a first step to addressing this issue , we recomputed the PCA analysis for the first and last 5 d of training ., For the 60-s ITI group , the PCA index distribution was significantly shifted in the negative direction ( i . e . , goal tracking ) during the first five sessions ( Wilcoxon; μ = −0 . 38 , p < 0 . 05 ) but not in the last five sessions ( Wilcoxon; μ = 0 . 15 , p = 0 . 
07 ) ., Thus , early in training , rats with the 60-s ITI exhibited goal tracking more than sign tracking but did not fully transition to sign tracking , at least when we averaged over the last five sessions ., For the 120-s ITI group , the PCA index was significantly shifted in the positive direction ( i . e . , sign tracking ) during the last five sessions ( Wilcoxon; μ = 0 . 28 , p < 0 . 05 ) but was not during the first five sessions ( Wilcoxon; μ = 0 . 10 , p = 0 . 11 ) ., Thus , when the ITI was long ( 120 s ) , rats sign and goal tracked in roughly equal proportions during the first five sessions but tended to sign track significantly more during later sessions ., To more accurately pinpoint when during training rats in the 120-s group shift toward sign tracking , we examined the four distributions individually for each session ., Sign tracking became apparent during session 4 , when the latency and lever probability distributions first became significant ( Wilcoxon; latency: μ = 0 . 28 , p < 0 . 05; lever probability: μ = 0 . 40 , p < 0 . 05 ) ., To visualize changes in behavior and DA release that occurred before and after session 4 , we plotted food cup beam breaks , lever pressing , and DA release averaged across the first 3 d of training and across days 4–10 ( Fig 3; for visualization of behavior during each of the 10 sessions , please see S4 Fig ) ., Consistent with the distributions of behavioral indices described above , the 120-s ITI group showed roughly equal food cup entries and lever pressing during the CS period in the first 3 d of training ( Fig 3A , thin pink solid versus thin pink dashed ) , whereas later in training ( days 4–10; red ) , there was a strong preference for the lever ( Fig 3A; thick red dashed versus thick red solid ) ., Indeed , the distribution of PCA indices averaged during days 4–10 were significantly shifted in the positive direction ( Wilcoxon; μ = 0 . 27 , p < 0 . 
05 ) ., These results suggest that in sessions in which the ITI was set at 120 s , sign-tracking tendencies developed relatively quickly during the first several recording sessions ( Fig 3A and 3C ) ., This is consistent with the STGT model , which predicted that increasing the ITI duration would increase the global tendency to sign track within the population and would thus speed up the acquisition of lever pressing behavior 1 , 13 ., In contrast , the model also predicted that reducing the ITI duration would increase the global tendency to goal track and would thus slow down the acquisition of lever pressing behavior ., Interestingly , the behavior of the 60-s ITI group was far more complicated than behavior of the 120-s group , with changes in goal and sign tracking occurring over training and CS presentation time ., Early in training , rats in the 60-s ITI group clearly visited the food cup ( Fig 3B , solid turquoise ) more than they pressed the lever ( Fig 3B , dashed turquoise ) ; food cup entries increased shortly after presentation of the CS and continued throughout the CS period ( Fig 3B , solid turquoise ) ., During later sessions ( i . e . , 4–10 ) , rats in the 60-s ITI group still entered the food cup upon CS presentation—which corresponds to the goal-tracking behavior predicted by the model in this case—but this only lasted about 2 s , at which point they transitioned to the lever ( Fig 3B and 3D ) ., In sessions 4–10 , none of the distributions of behavioral indices were significantly shifted from zero when examining the CS period as a whole ( Wilcoxons; Response bias: μ = 0 . 27 , p = 0 . 83; Latency: μ = −0 . 05 , p = 0 . 13; Probability: μ = 0 . 08 , p = 0 . 16; PCA: μ = 0 . 02 , p = 0 . 82 ) or during the first half of the CS period ( Response bias: μ = −0 . 11 , p = 0 . 027; Probability: μ = −0 . 04 , p = 0 . 18; PCA: μ = −0 . 07 , p = 0 . 
25 ) ; however , when examining the last 4 s of the CS period , distributions were significantly shifted in the positive direction ( Wilcoxons; Response bias: μ = 0 . 32 , p < 0 . 05; Probability: μ = 0 . 28 , p < 0 . 05; PCA: μ = 0 . 24 , p < 0 . 05 ) ., Together , this suggests that rats in the 60-s groups were largely goal tracking early in training and that over the course of training , goal-tracking tendencies did not disappear but became focused to early portions of the CS period , while sign-tracking behavior developed toward the end of the CS period , later in training ( Fig 3B and 3D; S4 Fig ) ., Interestingly , these results go again beyond the computational model and suggest that it should be extended to account for within-trial behavioral variations ., Behavioral analyses clearly demonstrate that manipulation of the ITI impacts sign- and goal-tracking behavior and that both groups learned that the CS predicted reward ( Fig 3; S4 Fig ) ., Next , we determined how DA patterns changed during training ., Fig 3E and 3F illustrate DA release averaged across the first 3 d and days 4–10 of sessions with 120-s and 60-s ITIs , respectively , and DA release for each session is plotted in Fig 3G and 3H ., As shown previously , both groups started with modest DA release to both the CS and US during the first session ( Fig 3G and 3H; trial 1 ) ., For the 120-s ITI group , DA release was significantly higher to CS presentation later ( red ) compared to earlier ( pink ) in learning ( Fig 3E; t test: t = 2 . 51 , df = 119 , p < 0 . 05 ) ., DA release during US delivery did not significantly differ between early and late phases of training ( t test: t = 1 . 27 , df = 119 , p = 0 . 
21). Hence, similarly to the sign trackers in the original study of Flagel and colleagues (2011), the increase of the DA response to the CS is consistent with the RPE hypothesis. The difference is that here, the increase in the time available to down-regulate the value associated with the food cup during the ITI may have resulted in a remaining positive surprise at the time of reward delivery, preventing the progressive decrease of the response to the US across training, in accordance with the model predictions. In the 60-s ITI group (Fig 3F and 3H), DA release to the US was initially high during the first 3 d (turquoise) but declined during days 4–10 (blue). Directly comparing DA release during the first 3 d with the remaining days revealed significant differences during the US period (t test: t = 1.14, df = 59, p < 0.05) but not the CS period (t test: t = 0.08, df = 59, p = 0.93). As a consequence, their post-training DA pattern, with a high response to the CS but no response to the US (Fig 3F, blue), now resembles the traditional RPE pattern (i.e., high CS DA and low US DA after learning). This is a clear demonstration that the DA RPE signal can be observed in goal trackers with a manipulation of the ITI, as predicted by the STGT model. In a final analysis, we examined DA patterns during pure sign and goal tracking within each ITI group. For this analysis, we examined only sessions during which either the lever was pressed or the food cup was entered during the cue period. As shown previously 4, phasic DA responses were apparent during both the CS and US during sessions with goal tracking (Fig 3I and 3J, GT = orange). In addition to replicating previous results, the figure also illustrates modulation of the DA pattern in line with model predictions. Specifically, it shows that the DA response to the US was higher in the 120-s group than in the 60-s group during both sign- and goal-tracking sessions (sign-tracking: t test, t = 3.66, df = 25, p < 0.05; goal-tracking: t test, t = 1.44, df = 29, p = 0.16) and that the DA response to the US was significantly lower than the DA response to the CS in GTs of the 60-s group (t = 3.87, df = 17, p < 0.05), suggesting that even though there is still a DA response to the US, shortening the ITI reduced the US-evoked DA response compared with what has been previously reported 4. The results reported here support the STGT model's predictions that manipulating the ITI would impact the proportions of sign-tracking (ST) and goal-tracking (GT) behaviors as well as DA release. The model predicted that shortening the ITI would result in fewer negative revisions of the food cup value and reduce the US DA burst. It also predicted that the resulting higher food cup value would lead to an increase in the tendency to goal track across sessions 1, 13, which it did. The model also predicted that lengthening the ITI would have the opposite effect. We found that there were significantly more food cup entries during the ITI for the 120-s ITI group and that these rats showed an increased tendency to sign track. Furthermore, we show that the time spent in the food cup during the ITI was positively correlated with the amplitude of the CS and US DA bursts for the 120-s ITI group, which is consistent with the hypothesis that lengthening the ITI, allowing more time to decrease the value of the food cup, results in stronger positive RPEs during the trial. Consistent with the model, we claim that increased sign tracking and DA release result from the additional time spent in the food cup during the ITI; indeed, these were positively correlated. Importantly, this impact of ITI manipulations had not been predicted by other computational models of sign trackers and goal trackers 11, 14. However, several alternative explanations should be considered, which may have also contributed to the observed changes in behavior and DA release. For example, it has been shown that rewards delivered after longer delays yield higher DA responses to the US 15–17 and that uncertain reward increases sign tracking 18. Although the reward was highly predictable in our study (i.e.
, always delivered 8 s after cue onset), it is possible that uncertainty associated with US delivery impacted behavior and DA release. Notably, it is likely that these factors are intertwined, in that manipulating delays and certainty affects the number of visits to the food cup that are not rewarded, thus leading to a negative revision of the food cup value, as predicted by the model. Future work that modifies food cup entries without manipulating ITI length and reward uncertainty is necessary to determine the unique contributions that these factors play in goal-/sign-tracking behavior and associated DA release. Another explanation for increased sign tracking and DA release in the rats of the 120-s ITI group is the possibility that they learned faster than rats in the 60-s ITI group because of the differing ratio between the interval between US presentations and the interval between the CS and US, in that the shorter the CS-US interval relative to the ITI, the faster the learning 19. In the context of our study it is difficult to determine which group learned faster. Although rats in the 120-s ITI group did lever press more often early in training, rats in the 60-s ITI group made more anticipatory food cup entries during the cue period prior to reward delivery. Furthermore, both food cup entries and lever pressing were present in the first behavioral session (S4 Fig). Thus, both groups appear to learn the CS-US relationship at similar speeds; it is simply the behavioral readout of learning that differs across groups, making it difficult to determine which group learned the association faster. In our opinion, our results suggest that rats in both groups learned at similar rates, much like sign and goal trackers do; however, future experiments and iterations of the model are necessary to determine what role the US-US/CS-US ratio plays in sign/goal tracking and corresponding DA release. Standard RL 20 is a widely used normative framework for modelling learning experiments 21, 22. To account for a variety of observations suggesting that multiple valuation processes coexist within the brain, two main classes of models have been proposed: MB and MF models 23, 24. MB systems employ an explicit, although approximate, internal model of the consequences of actions, which makes it possible to evaluate situations by forward inference. Such systems best explain goal-directed behaviors and rapid adaptation to novel or changing environments 25–28. In contrast, MF systems do not rely on internal models but directly associate stored (cached) values with actions or states based on experience, such that higher-valued situations are favored. Such systems best explain habits and persistent behaviors 28–30. Learning in MF systems relies on a computed reinforcement signal, the RPE (actual minus predicted reward). This signal has been shown to correlate with the phasic response of midbrain DA neurons, which increase and decrease firing to unexpected appetitive and aversive events, respectively 9, 31. Recent work by Flagel and colleagues 4 has questioned the validity of classical MF RL methods in Pavlovian conditioning experiments. The autoshaping procedure reported in that article was nearly identical to the one presented here, in that a retractable-lever CS was presented for 8 s, followed immediately by delivery of a food pellet into an adjacent food cup. The only major difference was that the length of the ITI in their study was 90 s. In their study, they showed that in STs, phasic DA release in the NAc matched RPE signaling. That is, the DA burst to reward that was present early in learning transferred to the cue after learning. They also showed that DA transmission was necessary for the acquisition of sign tracking. In contrast, despite the fact that GTs acquired a Pavlovian conditioned approach response, this was not accompanied by the expected RPE-like DA signal, nor was the acquisition
of the goal-tracking conditioned response blocked by administration of a DA antagonist ( see also Danna and Elmer 10 ) ., To account for these and other results , Khamassi and colleagues 1 proposed a new computational model—the STGT model—that explains a large set of behavioral , physiological , and pharmacological data obtained from studies on individual variation in Pavlovian conditioned approach 2–8 ., Importantly , the model can reproduce previous experimental data by postulating that both MF and MB learning mechanisms occur during behavior , with simulated interindividual variability resulting from a different weight associated with the contribution of each system ., The model accounts for the full spectrum of observed behaviors ranging from one extreme—from sign tracking associated with a small contribution of the MB system in the model—to the other—goal tracking associated with a high contribution of the MB system in the model 12 ., Above all , by allowing the MF system to learn different values associated with different stimuli , depending on the level of interaction with those stimuli , the model potentially explains why the lever CS and the food cup might acquire different motivational values in different individuals , even when they undergo the same training in the same task 26 ., The STGT model explains why the RPE-like dopaminergic response was observed in STs but not GTs—the proposition being that GTs would focus on the reward-predictive value of the food cup , which would have been down-regulated during the ITI ., Furthermore , the STGT explains why inactivating DA in the core of the nucleus accumbens or in the entire brain results in blocking specific components and not others ., Here , the model proposes that learning in GTs relies more heavily on the DA-independent MB system , and thus DA blockade would not impair learning in these individuals 4 , 8 ., More importantly , the model has led to a series of new experimentally testable predictions that 
assess and strengthen the proposed computational theory and allow for a better understanding of the DA-dependent and DA-independent mechanisms underlying interindividual differences in learning 1 , 13 ., The key computational mechanism in the model is that both the approach and the consumption-like engagement observed in sign trackers ( STs ) on the lever and in goal trackers ( GTs ) on the food cup result from the acquisition of incentive salience by these reward-predicting stimuli ., Acquired incentive salience is stimulus specific: stimuli most predictive of reward will be the most “wanted” by the animal ., The MF system attributes accumulating salience to the lever or the food cup as a function of the simulated DA phasic signals ., In the model simulations , because the food cup is accessible but not rewarding during the ITI , a simulated negative DA error signal occurs each time the animal visits the food cup and does not find a reward ., The food cup therefore acquires less incentive salience compared with the lever , which is only presented prior to reward delivery ., In simulated STs , behavior is highly subject to incentive salience because of a higher weight attributed to the MF system than to the MB | Introduction, Results, Discussion, Materials and methods | Recent computational models of sign tracking ( ST ) and goal tracking ( GT ) have accounted for observations that dopamine ( DA ) is not necessary for all forms of learning and have provided a set of predictions to further their validity ., Among these , a central prediction is that manipulating the intertrial interval ( ITI ) during autoshaping should change the relative ST-GT proportion as well as DA phasic responses ., Here , we tested these predictions and found that lengthening the ITI increased ST , i . e . 
, behavioral engagement with conditioned stimuli ( CS ) and cue-induced phasic DA release ., Importantly , DA release was also present at the time of reward delivery , even after learning , and DA release was correlated with time spent in the food cup during the ITI ., During conditioning with shorter ITIs , GT was prominent ( i . e . , engagement with food cup ) , and DA release responded to the CS while being absent at the time of reward delivery after learning ., Hence , shorter ITIs restored the classical DA reward prediction error ( RPE ) pattern ., These results validate the computational hypotheses , opening new perspectives on the understanding of individual differences in Pavlovian conditioning and DA signaling . | In classical or Pavlovian conditioning , subjects learn to associate a previously neutral stimulus ( called “conditioned” stimulus; for example , a bell ) with a biologically potent stimulus ( called “unconditioned” stimulus; for example , a food reward ) ., In some animals , the incentive salience of the conditioned stimuli is so strong that the conditioned response is to engage the conditioned stimuli instead of immediately approaching the food cup , where the predicted food will be delivered ., These animals are referred to as “sign trackers . 
”, Other animals , referred to as “goal trackers , ” proceed directly to the food cup upon presentation of the conditioned stimulus to obtain reward ., Understanding the mechanisms by which these divergent behaviors develop under identical environmental conditions will provide powerful insight into the neurobiological substrates underlying learning ., Here , we test predictions made by a recent computational model that accounts for a large set of studies examining goal-/sign-tracking behavior and the role that dopamine plays in learning ., We show that increasing the duration of the time between trials promotes the development of a sign-tracking response and the release of dopamine in the nucleus accumbens ., During conditioning with shorter intertrial intervals , goal tracking was more prominent , and dopamine was released upon presentation of the conditioned stimulus but not during the time of reward delivery after training ., Thus , shorter intertrial intervals restored the classical dopamine reward prediction error pattern ., Our results validate the computational hypothesis and open the door for understanding individual differences in classical conditioning .
| learning, medicine and health sciences, neurochemistry, chemical compounds, classical conditioning, vertebrates, social sciences, conditioned response, neuroscience, animals, mammals, learning and memory, organic compounds, hormones, animal models, surgical and invasive medical procedures, model organisms, cognitive psychology, mathematics, functional electrical stimulation, probability distribution, membrane electrophysiology, experimental organism systems, amines, neurotransmitters, bioassays and physiological analysis, catecholamines, dopamine, research and analysis methods, animal studies, behavior, chemistry, electrophysiological techniques, short reports, probability theory, biochemistry, behavioral conditioning, rodents, psychology, eukaryota, electrode recording, organic chemistry, biogenic amines, biology and life sciences, physical sciences, cognitive science, amniotes, organisms, rats | null |
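The model-free RPE mechanism summarized in the row above (the RPE is actual minus predicted reward, and unrewarded food-cup visits during the ITI produce negative revisions of the food cup's cached value) can be sketched as a minimal Rescorla-Wagner-style update. This is an illustrative sketch only: the function name `rpe_update`, the learning rate, and the reward magnitudes are assumptions, not the STGT model's actual implementation.

```python
def rpe_update(value, reward, alpha=0.1):
    """One model-free update: the RPE (delta) is actual minus predicted
    reward, and the cached value moves toward the outcome by rate alpha."""
    delta = reward - value  # positive RPE if the outcome beats the prediction
    return value + alpha * delta, delta

# Illustrative simulation (alpha and reward magnitudes are assumed values):
# the lever CS is only ever followed by reward, while the food cup is also
# visited unrewarded during the ITI, so it accrues negative revisions.
lever_value, cup_value = 0.0, 0.0
for _ in range(200):
    lever_value, _ = rpe_update(lever_value, reward=1.0)  # lever CS -> food
    cup_value, _ = rpe_update(cup_value, reward=1.0)      # rewarded US visit
    cup_value, _ = rpe_update(cup_value, reward=0.0)      # unrewarded ITI entry

print(round(lever_value, 2), round(cup_value, 2))
```

Under these assumptions the lever converges to a higher cached value than the food cup, mirroring the asymmetric acquisition of incentive salience the model describes; lengthening the ITI (more unrewarded cup entries per trial) would push the cup value down further.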
1,893 | journal.pgen.1007400 | 2,018 | Spontaneous gain of susceptibility suggests a novel mechanism of resistance to hybrid dysgenesis in Drosophila virilis | Transposable elements are selfish elements that have the capacity to proliferate in genomes even if they are harmful 1 ., In response to this threat , mechanisms of small-RNA based silencing have evolved to limit TE proliferation ., In the germline of animals , Piwi-interacting RNAs ( piRNAs ) function to maintain TE repression through both transcriptional and post-transcriptional silencing 2 ., Critically , the epigenetic and transgenerational nature of piRNA-mediated TE control has been revealed by syndromes of hybrid dysgenesis ( HD ) 3 , 4 ., HD is a syndrome of TE-mediated sterility that occurs when males carrying active copies of TEs are crossed with females where such copies are rare or absent 5–7 ., The hybrid dysgenesis syndrome ( HD ) is defined as a combination of various genetic disorders such as genic mutations and chromosomal aberrations that lead to sterility in the progeny of intraspecific crosses 5–7 ., Sterility during HD is mediated by mobilization of certain TE families carried by the paternal genome and absent in the maternal genome 6 , 7 ., To date , there are several independent HD systems in Drosophila melanogaster ., The most well described are the I-R and P-M systems , controlled by the I-element ( a non-LTR ( long terminal repeat ) retrotransposon ) and the P-element ( a DNA transposon ) , respectively 6–8 ., Activation of paternally inherited TEs is explained by the fact that only the female maintains transgenerational TE repression via piRNAs transmitted through maternal deposition ., When the female genome lacks certain TE families , female gametes also lack piRNAs that target these families ., Thus , TE families solely transmitted through the male germline become de-repressed in the absence of repressive piRNAs inherited from the mother 2–4 , 9 ., HD in D . 
virilis was initially observed when males of laboratory strain 160 and females of wild-type strain 9 were crossed ., The F1 progeny exhibited up to 60% sterility , while sterility in the progeny of reciprocal crosses did not exceed 5–7% 10 ., Similar to the D . melanogaster P-M system , the sterility of hybrids from dysgenic crosses is apparently the result of abnormal development ( atrophy ) of male and female gonads 10–12 ., By analogy with the P-M system , strain 160 and strain 9 were called “P-like” ( P ) and “M-like” ( M ) , respectively ., In contrast to I-R and P-M systems , the study of HD in D . virilis has demonstrated that multiple unrelated TEs belonging to different families are mobilized in dysgenic progeny 13–16 ., The TEs presumably causal of dysgenesis and absent in M-like strain 9 include Penelope ( a representative of the Penelope-like element ( PLE ) superfamily ) , Paris and Polyphemus ( DNA transposons ) , as well as a non-LTR retrotransposon Helena 13–16 ., A typical M-like strain 9 contains only diverged inactive remnants of these TEs ., Additionally , piRNAs targeting Penelope , Paris , Polyphemus and Helena are highly abundant in the germline of strain 160 and are practically absent in strain 9 17 , 18 ., Thus , it has been suggested that the combined activity of these four asymmetric TEs , present only in strain 160 , underlies gonadal atrophy and other manifestations of HD in D . virilis ., This large asymmetry in TE abundance between strains suggests that HD in D . virilis may be considered a model for understanding the consequences of intermediate divergence in TE profiles within a species ., Nonetheless , recent studies have called into question whether the standard model of HD–described in D . melanogaster where sterility is caused by the absence of maternal piRNAs that target specific inducing TE families—applies in D . 
virilis 3 , 4 , 18 , 19 ., This is because several “neutral” ( N ) strains exhibit “immunity” to HD in dysgenic crosses but lack maternal piRNA corresponding to Penelope elements , the presumptive primary driver of dysgenesis 19 ., If Penelope is a key driver of dysgenesis , how do neutral strains exhibit immunity in the absence of maternally transmitted Penelope piRNA ?, Two fundamental issues arise ., First , as observed in D . melanogaster , is there a single major element that serves as a key driver of HD in D . virilis ?, Second , do N-strains confer their resistance to HD solely through maternally provisioned piRNA or through alternate mechanisms ?, Despite significant progress in understanding the morphogenetic events occurring during gametogenesis and embryogenesis in the progeny of D . virilis dysgenic crosses , these questions still need to be answered 11 , 18 ., To answer these questions , by using small RNA deep-sequencing and qPCR , we decided to perform a comparative survey of maternal piRNA profiles across several “neutral” strains of different origin that did not quite fit the HD paradigm developed in the previous studies of this phenomenon 3 , 4 , 9 , 19 ., Additionally , we developed transgenic strains containing a presumptive causative TE and did not detect a cytotype change after its propagation in the genome ., The accumulated data failed to pinpoint a single TE or specific set of TEs responsible for their “immunity” and support a model in which resistance to TE-mediated sterility during dysgenesis may be achieved by a mechanism that varies across strains ., We thus propose an alternate model to explain resistance to TE mediated sterility in D . 
virilis ., Instead of solely being explained by maternal piRNAs that target inducing TE families , the chromatin profile of repeats in the maternal genome may confer general immunity to the harmful effects of TE mobilization ., To characterize the piRNA profiles across diverse strains that vary in resistance to HD , we performed small RNA sequencing on six D . virilis strains obtained from various sources ( see Materials and Methods ) and maintained in our laboratory for more than 20 years ., These strains exhibit different levels of gonadal atrophy when crossed with males of P-like strain 160 ., Two of them ( 9 and 13 ) represent strong M-strains ( they exhibit up to 65% of gonadal atrophy in the F1 progeny of the dysgenic cross ) and four ( 140 , Argentina , Magarach and 101 ) behave as “neutral” or N-strains when crossed with strain 160 males and , hence , did not exhibit gonadal atrophy ( less than 10% atrophied gonads ) in such crosses ( Fig 1 ) ., Previous studies suggest Penelope element as a key driver of HD in D . virilis 15 , 20 , 21 ., However , while N-strains 140 and Argentina both carry Penelope elements , two other N-strains–Magarach and 101 contain neither functional Penelope copies nor Penelope-derived small RNAs 19 ., This observation questions the key role of Penelope as a factor determining HD in D . virilis and suggests that piRNAs targeting other asymmetric TEs , e . g . Polyphemus , Helena and possibly Paris , may provide immunity to HD 14 , 15 , 17 , 21 , 22 ., To explore this possibility we performed a comparative analysis of both classes of small RNAs ( piRNAs and siRNAs ) in the ovaries of all selected M- and N-strains using the extended list of TEs and other repeats recently defined in D . 
virilis genome 18 ., This analysis indicates that the total repertoire of targets for small RNA silencing in strain 160 ( P ) is significantly larger than in all other studied strains ( Figs 2A , 2B , S1A and S1B ) ., Surprisingly , the global piRNA profile for known D . virilis TEs and other repeats is more similar between strain 160 ( P ) and M-strains ( R ( 160:9 ) = 0 . 83; R ( 160:13 ) = 0 . 74 , Spearman’s correlation coefficient ) than between strain 160 ( P ) and several N-strains ( R ( 160:140 ) = 0 . 71; R ( 160:101 ) = 0 . 7 ) ( Fig 2A and 2B ) ., This suggests the possibility that protection is not mediated by a general maternal piRNA profile , but rather by piRNAs targeting certain specific TEs yet to be identified ., To identify such candidates , we compared sets of piRNA targets distinguishing strain 160 ( P ) from both typical M-strains , 9 and 13 , and obtained a list of ten TEs in common across comparisons ( Fig 2C ) ., These are TEs for which piRNAs are more abundant in strain 160 ( P ) when compared to both M-strains: Polyphemus , Penelope , Paris , Helena , Uvir , Skippy , 190 , 463 , 608 , and 1012 ., However , comparing 160 ( P ) and N-strains , we find that piRNAs from Helena and Skippy are uniquely found at high levels in strain 160 ( P ) ., Thus , if neutrality is conferred by piRNAs that uniformly target the same TE family or families , Helena and Skippy piRNAs are not likely to be required to prevent HD ., However , among the eight remaining candidates , there is no shared family among the neutral strains ( N-strains and 160 ( P ) ) that has a piRNA profile similar across strains ., For example , in contrast to 160 ( P ) , Penelope-derived piRNAs are expressed at lower levels in strain Magarach ( N ) , Polyphemus-targeted piRNAs are expressed at lower levels in strain 101 ( N ) and , finally , Paris-related piRNAs are expressed at low levels in strain Argentina ( N ) and in strain 101 ( N ) ( Fig 2D ) ., Thus , we failed to detect one candidate causative TE or
combinations of certain TEs present in all neutral strains whose piRNAs guarantee immunity to HD ( Fig 2D ) ., This suggests the possibility that maternal protection in crosses with strain 160 ( P ) males may be conferred by different mechanisms across the strains ., A similar comparative analysis of siRNA expression between strain 160 ( P ) and M-strains demonstrated that only siRNAs complementary to Penelope and Helena elements are absent in the ovaries of strains 9 ( M ) and 13 ( M ) ( S1A and S1B Fig ) ., However , we detected Penelope-homologous siRNAs only in half of the studied neutral strains , i . e . strains Argentina and 140 ( S1C Fig ) ., In the context of immunity to HD syndrome manifestations , probably the most important condition is to constantly maintain effective piRNA production in the germline ., It is well known that ovarian piRNA pools consist of molecules generated by primary and secondary processing mechanisms ., Due to germline expression of Ago3 and Aub proteins necessary for secondary processing ( “ping-pong” amplification ) , the germline-specific piRNA pool can be assessed quantitatively by counting “ping-pong” pairs 2 , 23 ., We analyzed the “ping-pong” signature of piRNAs targeting the selected TEs and showed that these piRNA species contain ping-pong pairs to varying degrees ( S2 Fig ) ., Importantly , all of them exhibit a signature of secondary piRNA processing indicating that production of these piRNAs takes place in the germline , but each element lacks such a ping-pong signature in at least one or more of the neutral strains ., In addition , Penelope expression was previously shown to be germline-specific by whole-mount RNA in situ hybridization 24 ., In the present study , using the same technique with the ovaries of P-strain 160 , we confirmed that Paris , Polyphemus and Helena elements exhibit a germline-specific expression pattern as well ( S3 Fig ) ., We further examined the pattern of divergence among piRNAs that map to the consensus
TEs since piRNAs derived from divergent sequences are likely derived from degraded TE insertions ., Among the selected HD-implicated TEs , the ovarian piRNA pool contains a very small amount of Paris-targeting piRNAs that were detected only in two studied N-strains—140 and Magarach ., Interestingly , only 10% of both sense and antisense-oriented piRNAs apparently originate from modern active copies of Paris elements while the rest of the Paris-complementary piRNAs were produced from ancestral highly diverged ones ( S4 Fig ) ., The same applies to the Penelope-derived piRNAs in strain 101 ( N ) ., All other piRNA species to HD-implicated TEs , especially in the antisense-orientation , in all studied neutral strains were practically identical to the consensus and , hence , apparently originated from active copies of these elements ( S4 Fig ) ., This analysis further indicates that there is no active candidate inducer family , represented by sequence similar piRNAs , shared across all six neutral strains ., Overall , these data indicate that , in terms of piRNA-mediated protection to HD in D . 
virilis neutral strains , there is no general rule in the context of ovarian piRNAs complementary to particular TEs implicated in HD ., In other words , in neutral strains the maternally transmitted piRNA pool may include different amounts of piRNAs corresponding to various TEs , and the repertoire of these TEs often radically differs between strains with the same cytotype ., Syndromes of HD are explained by maternal protection against paternal induction , and Penelope has long been considered the primary driver of paternal induction 18 , 20 , 22 ., In the previous section we demonstrated that maternal piRNAs that target Penelope are not necessary to confer neutrality but , as neutrality may arise through different mechanisms , we sought to determine whether Penelope was either sufficient for induction or Penelope piRNA sufficient for protection ., We thus characterized a simulation of natural invasion through the analysis of two transgenic strains of D . virilis containing full-size Penelope copies introduced into a typical D .
virilis M-like strain 9 ( the stock is assigned as w3 ) originally devoid of functional copies of this TE ., Our previous experiments demonstrated that the introduced Penelope underwent active amplification and occupied more than ten sites in the chromosomes of the transgenic strains 19 ., However , at that time ( in 2012 ) we did not detect any Penelope-derived small RNA species in these transgenic strains ., Subsequent to the early analysis performed in 2011–2012 , we have now found that Penelope is actively transcribed in these two strains and exhibits steady-state RNA levels equal to or even higher than in strain 160 ( Fig 3A ) ., We further observed piRNAs in both transgenic strains , indicating that some of the Penelope copies acquired the properties of a piRNA-generating locus ( Fig 3B ) ., Thus , in strain Tf2 the level of piRNAs homologous to Penelope is only about half of that observed in P-like strain 160 ., The analysis of Penelope-derived piRNAs indicates a distribution of piRNAs along the entire Penelope body and a clear-cut ping-pong signature ( Fig 3B ) ., Similar to strain 160 , more than half of the Penelope-derived piRNAs in both strains originate from active and highly similar Penelope copies with few mismatches to the canonical sequence ( Fig 3C ) ., In contrast , Penelope piRNAs identified in the untransformed M-like strain 9 ( w3 ) are highly divergent and likely derive from inactivated Penelope copies ( termed “Omegas” ) located in heterochromatic regions of the genome 25 , 26 ., Interestingly , the pool of Penelope-derived small RNAs in the transgenic strains consists primarily of piRNAs ., This is in contrast to inducer strain 160 and D .
melanogaster strains transformed with Penelope 19 , where Penelope-derived siRNAs are the major class ( S5 Fig ) ., Surprisingly , both transgenic strains containing multiple Penelope copies and abundant piRNAs behave exactly like the original M-like strain 9 in dysgenic crosses ( Fig 4 ) ., They have the capacity neither to induce HD paternally nor to protect against HD maternally ., Therefore , the introduction of full-size Penelope into an M-like strain accompanied by its propagation , active transcription and piRNA production was not sufficient to modify the cytotype ., These results also indicate that the presence of piRNA complementary to Penelope in the oocyte is not the only prerequisite to prevent gonadal sterility when crossed with males of P-like strain 160 ., Along these lines , it has been shown recently that the number of P-element and hobo copies per se has very little influence on gonadal sterility , suggesting that HD is not determined solely by the dosage of HD-causative elements 27 ., The above results demonstrate that the maternal piRNAs that target all , or even most , asymmetric TEs that likely cause dysgenesis are not necessary to confer neutral strain status ( Fig 2 ) ., Furthermore , Penelope piRNAs are not sufficient for maternal protection and the presence of active Penelope copies is not sufficient for paternal induction ( Figs 3 and 4 ) ., This begs the question: What are the necessary and sufficient factors of HD in D . virilis ?, Among the analyzed strains , neutral strain 101 represents a special case ., This is due to the fact that the genome of this strain produces no piRNAs against most of the described HD-implicated TEs ( e . g . Paris , Helena and Polyphemus ) and only a very small amount of divergent Penelope-homologous piRNAs ( Figs 2 and S4 ) ., In the course of our long-term monitoring of the gonadal atrophy observed in the progeny of dysgenic crosses involving the P-like strain and various laboratory and geographical strains of D .
virilis , we often observed significant variation in the level of sterility in the progeny of the same crosses occurring with time ., Strikingly , among these strains , we have identified a spontaneous change from the neutral cytotype to the M-like one ., Thus , while an old laboratory strain 101 kept in the Stock Center of the Koltzov Institute of Developmental Biology RAS maintained a neutral cytotype for the whole period of observation ( 2011–2017 ) , the same strain kept in our laboratory gradually became an M-like strain ( Fig 5 ) ., We considered the possibility that this shift in cytotype could be explained by changes in the TE profile between the strains ., Surprisingly , Southern blot and PCR analyses demonstrate that 101 N- and M- substrains have identical TE profiles for Penelope , Paris , Polyphemus and Helena ( Figs 6A and S6 ) ., Additionally , qPCR analysis failed to detect any significant changes in the expression levels of the major asymmetric TEs as well as other described TEs in the compared variants ( neutral vs M-like ) of this strain ( Fig 6B ) ., These data rule out the possibility of strain contamination with a lab M-strain ., To understand the observed differences in the cytotype of the strain 101 variants , we performed additional small-RNA sequencing ., Indeed , the piRNA profile of strain 101 ( N ) has significantly higher piRNA levels ( compared to 101 ( M ) ) for five previously undescribed repeats ( 315 , 635 , 850 , 904 and 931 ) ( Fig 7A ) , indicating that differences in cytotype could be attributed to these repeats ., Among these piRNA species , only piRNAs targeting 315 and 635 elements comprise many ping-pong pairs and , hence , are generated predominantly by the germline-specific secondary processing mechanism ( Fig 7B ) ., Based on sequence similarity to the TE consensus , at least 25% of antisense-oriented piRNA molecules apparently originated from modern active elements , with the exception of piRNAs targeting the 904-element ( S7A Fig ) ., Focusing
on the three elements ( 315 , 635 , 850 ) with maximal piRNA expression levels , we compared both variants of strain 101 in more detail to determine if differences in repeat profile could explain differences in cytotype ., Element 315 encodes three open reading frames ( ORFs ) ., According to the protein-domain structure , two ORFs appear to encode gag and pol genes ., The third ORF has no homology to the described TEs and possibly encodes an env gene ., Thus , element 315 probably represents a retroelement ., Since we failed to find any homology of the 315 element to the described families of TEs in the Sophophora subgenus , we propose that this element is an exclusive resident of the Drosophila subgenus ., Element 635 has some homology to the Invader element of D . melanogaster , which belongs to the Gypsy family of LTR-containing retrotransposons ., However , it has no long terminal repeats ( LTRs ) in its sequence ., Finally , the short 850 element ( 749 nt ) does not encode any ORF and seems to be non-autonomous ., Importantly , based on Southern blot and PCR analysis , these particular repeats did not undergo amplification in the neutral variant of strain 101 and both compared substrains exhibit identical restriction patterns of these elements , similar to that of P-like strain 160 ( S7B and S7C Fig ) ., Hence , the observed cytotype shift as well as the differences in the piRNA pool for these elements apparently do not stem from differences in copy number among the 101 substrains ., Interestingly , we observed a significant increase of expression levels of 315 and 635 elements ( p < 0 .
05; t-test ) , but not 850 , in the ovarian mRNA pool of M-like substrain 101 compared to the neutral substrain ( Fig 7C ) ., Overall , these results demonstrate that the capacity for these repeats to produce piRNAs is lower in the 101 ( M ) strain , even in the absence of movement ., What could lead to differences in the piRNA profile for these repeats between the 101 ( N ) and 101 ( M ) strains in the absence of movement ?, Studies of piRNA-generating loci in Drosophila revealed that the H3K9me3 mark , which serves as a binding site to recruit HP1a and its germline homolog Rhino , is required for transcription of dual-strand piRNA-clusters and transposon silencing in ovaries 2 , 28 , 29 ., We hypothesized that a shift of the chromatin state in strain 101 modified the ability of particular genomic loci , carrying the 315 , 635 and 850 elements , to produce piRNA species ., These changes in piRNA profile may be an indication of a chromatin-based modification that may confer resistance to HD sterility in the neutral 101 substrain ., To test this hypothesis , we estimated the levels of H3K9me3 and HP1a marks by ChIP combined with qPCR analysis in the ovaries of the two cytotype variants of strain 101 ., The analysis showed a significant increase of H3K9me3 levels on genomic regions containing 315 , 635 and 850 elements ( enrichment > 2 . 5 , p < 0 . 05 ) as well as a slight increase of HP1a enrichment in the neutral variant of strain 101 compared to the M-like substrain ( Figs 7D and S8 ) ., In turn , Ulysses-carrying regions used as a control demonstrated equal levels of the H3K9me3 mark , consistent with Ulysses-targeting piRNA levels being almost equal in the strain 101 variants ( Fig 7D ) ., This indicates that certain repeats have experienced a shift in their chromatin profile , but that this shift is not global ., A similar phenomenon has been recently described in the I-R HD system in D .
melanogaster 30 ., In that comparative analysis of two reactive strains ( weak and strong ) , it was shown that despite having a similar number of copies of the I-element , these strains significantly differ in the enrichment of Rhino at the 42AB piRNA-cluster containing I-element remnants ., Furthermore , a lower level of I-element-targeted piRNA species was observed in the strong-reactive strain as a result 30 ., Given these differences , it is possible that these elements are the primary drivers of dysgenesis in D . virilis ., To further test the hypothesis that activation of these elements could contribute to HD , we first compared piRNA levels of all these elements in the ovaries of the F1 progeny from dysgenic-like and reciprocal crosses using variants of strain 101 and P-like strain 160 ., These experiments demonstrate that piRNAs targeting 315 , 635 , and 931 elements showed similar levels in the ovaries of F1 hybrids from dysgenic crosses ( 101 ( N ) x 160 ) and parental neutral strain 101 , but lower levels in the progeny of reciprocal crosses where such piRNAs would not be maternal ( 160 x 101 ( N ) ) ( Fig 7E ) ., Thus , the maternally provisioned piRNAs complementary to 315 , 635 and 931 elements are required to stimulate the generation of the corresponding piRNAs in the progeny , as shown in other systems of HD 3 , 4 ., However , in the analysis of steady-state mRNA levels of these TEs in the ovaries of dysgenic and reciprocal progeny of crosses between 101 substrains and P-like strain 160 , we failed to obtain any induction of 315 , 635 and 850 elements exceeding their levels in the parental strains ( Fig 7F and 7G ) ., On the contrary , the ovaries of F1 hybrids from the reciprocal ( non-dysgenic ) crosses involving strain 101 ( N ) males and 160 ( P ) females showed even significantly higher expression levels of these elements in comparison to dysgenic ones ., Moreover , the dysgenic and reciprocal hybrids of M-like substrain 101 and strain 160 ( P ) showed
no differences in the mRNA levels of the studied elements ( Fig 7F and 7G ) ., These results indicate that activation of these elements per se is unlikely to be causative to HD because 101 ( N ) and 101 ( M ) have identical TE profiles ., We therefore considered the possibility that what distinguishes strain 101 ( N ) from 101 ( M ) may have an epigenetic basis or , alternately , an unknown genetic change that alters repeat chromatin ., If so , then a lack of piRNAs to these elements in 101 ( M ) could explain the M-cytotype ., To test this , we compared piRNA levels and family-level abundance with inducer strain 160 ( P ) ., Critically , none of these elements show increased piRNA levels in strain 160 ( P ) compared to strain 9 ( M ) ( Fig 7H ) ., Thus , asymmetry in the piRNA pool for these particular elements is not a necessary condition for dysgenesis ., According to recent studies , differences in parental expression levels of genic piRNAs may contribute to the dysgenic manifestations in the progeny 18 , 30 ., With this in mind , we compared the expression of genic piRNAs in the ovaries of both 101 substrains and did not observe significant differences in their levels ( S9A Fig ) ., Ping-pong signatures of genic piRNA profiles also exhibit high similarity between these strains ( S9B Fig ) ., Based on these data , we concluded that differences in genic piRNAs are unlikely to have an impact on the observed cytotype shift ., Overall , we have shown that the enrichment of heterochromatic marks ( H3K9me3 and HP1a ) in the genomic regions containing 315 , 635 and 850 elements is significantly lower in the M-like variant of strain 101 compared to the neutral one ., Together , these data provide further evidence that the mechanism of maternal repression may significantly vary among strains ., However , additional experiments involving Rhino ChIP and genome sequencing of strain 101 are needed to confirm this assumption and identify the loci responsible for the enhanced piRNA production
in one of the two 101 substrains ., One of the main consequences of activation of a particular asymmetric TE in the progeny of dysgenic crosses is its excess expression compared to both parental strains and reciprocal hybrids 3 , 15 , 18 , 31 ., Studies of the I-R syndrome of HD in D . melanogaster demonstrate higher expression of the I-element in the F1 progeny from dysgenic crosses compared to reciprocal ones 3 , 30 , 31 ., This is due to the maternal deposition of piRNAs targeting the I-element and its effective silencing in only one direction of the cross ., Additionally , various studies of HD systems , including the D . virilis syndrome , demonstrated that transgenerational inheritance of piRNAs is able to trigger piRNA expression in the next generation by changing the chromatin of piRNA-clusters due to paramutation 3 , 4 , 32–34 ., However , a pattern of higher TE expression in the absence of complementary maternal piRNA is less apparent in D . virilis ., Despite strain asymmetry in genomic content and piRNA abundance of Penelope and several other TEs , germline piRNA pools do not differ drastically between reciprocal F1 progeny , with the exception of the Helena element 18 ., We therefore sought to determine whether this atypical pattern was also observed in crosses with other strains , focusing on asymmetric Penelope , Paris , Polyphemus and Helena as well as Ulysses , which is present in all strains ., As expected , ovarian mRNA levels revealed a complete correspondence with the piRNA expression levels among strains ( Figs 8A , 2A and 2B ) ., For example , we detected both Penelope mRNA and piRNA expression in 140 ( N ) and Argentina ( N ) , but neither was evident in Magarach ( N ) and 101 ( N ) ., However , in all cases when females from M-like strains are crossed with strain 160 males , ovarian levels of expression are uniformly significantly higher for only one asymmetric TE–Polyphemus ( fold changes 3 , 5 , and 3 . 5 , p < 0 .
05 , t-test , in dysgenic hybrids with strains 9 ( M ) , 13 ( M ) and 101 ( M ) , respectively ) ( Fig 8B and 8C ) ., In most cases the observed differences in expression for Penelope and Paris elements in the ovaries of dysgenic and reciprocal hybrids were not dramatic and , when present , rarely exceeded 1 . 5–2-fold ., Moreover , in the crosses involving neutral strains and strain 160 , we failed to detect any characteristic differences in TE expression between reciprocal hybrids ( Fig 8B and 8C ) ., Thus , independent of maternal piRNA profile , all reciprocal crosses with neutral strains show similar levels of expression ., However , the two variants of strain 101 give different results when crossed with P-like strain 160 ., In spite of the fact that 101 substrains contain equal levels of piRNAs complementary to the HD-implicated TEs , in the case of the M-like variant we observed higher levels of expression in the dysgenic hybrids for Penelope ( fold change 3; p < 0 . 05 , t-test ) and Polyphemus ( fold change 3 . 5 , p < 0 . 05 , t-test ) ., Moreover , increased expression of the Ulysses element ( found in all D . virilis strains ) ( fold change 3 , p < 0 . 05 , t-test ) was demonstrated in the dysgenic ovaries of 13 ( M ) and 160 ( P ) hybrids ( S10A Fig ) ., These results demonstrate that factors other than maternal piRNA abundance lead to variation in resident TE expression in crosses between strain 160 and 101 substrains ., For the neutral 101 strain , we failed to detect significant differences in the hybrids from both directions of crosses for any of the TEs tested ( Fig 8B ) ., With the exception of a few TEs and repeats , piRNA abundance in the ovaries from dysgenic and reciprocal progeny exhibited no drastic differences , including piRNAs complementary to asymmetric TEs ( Figs 8B and S11 ) ., Surprisingly , Helena , which maintains a high level of asymmetry in the maternal pool of piRNAs in the progeny , exhibits very similar levels of corresponding mRNA expression in the hybrids obtained in both directions of crosses ( Fig 8B ) ., In spite of this overall similarity , piRNA pools in the ovaries of F1 progeny can comprise significantly different numbers of ping-pong pairs for all of the transposons studied ( S10B Fig ) ., For example , in the ovaries from dysgenic progeny ( strain 160 males ) with strains 9 ( M ) and Argentina ( N ) females , the number of ping-pong pairs to Penelope , Paris and Polyphemus was 2-3-fold lower than in the ovaries from reciprocal hybrids ( S10B Fig ) ., We have also found that enrichment of the H3K9me3 mark on Penelope , Paris , Polyphemus and Helena sequences does not differ significantly in the F1 progeny of dysgenic and reciprocal crosses ( S10C Fig ) ., Thus , we propose that piRNA-mediated transcriptional gene silencing of these HD-implicated TEs is similar in both directions of crosses and maternally provisioned piRNAs to these TEs are not necessary to stimulate the production of corresponding piRNA species in the progeny ., These results are in agreement with recently published data 18 ., In summary , it should be emphasized that in contrast to the I-R system in D .
melanogaster , where maternal deposition of I-element piRNAs results in a dramatic increase of piRNA expression targeting the I-element in the progeny and efficient suppression of I-element activity , in D . virilis maternally provisioned piRNAs do not always guarantee efficient generation of the corresponding piRNAs in the progeny to maintain silencing of complementary TEs and provide adaptive genome defense ., We conclude that in D . virilis the determination of asymmetric TE expression levels in the ovaries of the progeny from dysgenic and reciprocal crosses does not allow one to unambiguously assign causality for HD to specific TE families ., This fact points to an alternate mode of HD in D . virilis ., The standard explanation for the phenomenon of hybrid d | Introduction, Results and discussion, Conclusions, Materials and methods | Syndromes of hybrid dysgenesis ( HD ) have been critical for our understanding of the transgenerational maintenance of genome stability by piRNA ., HD in D . virilis represents a special case of HD since it includes simultaneous mobilization of a set of TEs that belong to different classes ., The standard explanation for HD is that eggs of the responder strains lack an abundant pool of piRNAs corresponding to the asymmetric TE families transmitted solely by sperm ., However , there are several strains of D .
virilis that lack asymmetric TEs , but exhibit a “neutral” cytotype that confers resistance to HD ., To characterize the mechanism of resistance to HD , we performed a comparative analysis of the landscape of ovarian small RNAs in strains that vary in their resistance to HD mediated sterility ., We demonstrate that resistance to HD cannot be solely explained by a maternal piRNA pool that matches the assemblage of TEs that likely cause HD ., In support of this , we have witnessed a cytotype shift from neutral ( N ) to susceptible ( M ) in a strain devoid of all major TEs implicated in HD ., This shift occurred in the absence of significant change in TE copy number and expression of piRNAs homologous to asymmetric TEs ., Instead , this shift is associated with a change in the chromatin profile of repeat sequences unlikely to be causative of paternal induction ., Overall , our data suggest that resistance to TE-mediated sterility during HD may be achieved by mechanisms that are distinct from the canonical syndromes of HD . | Transposable elements ( TE ) can proliferate in genomes even if harmful ., In response , mechanisms of small-RNA silencing have evolved to repress germline TE activity ., Syndromes of hybrid dysgenesis in Drosophila—where unregulated TE activity in the germline causes sterility—have also revealed that maternal piRNAs play a critical role in maintaining TE control across generations ., However , a syndrome of hybrid dysgenesis in D . virilis has identified additional complexity in the causes of hybrid dysgenesis ., By surveying factors that modulate hybrid dysgenesis in D . virilis , we show that protection against sterility cannot be entirely explained by piRNAs that control known inducer TEs ., Instead , spontaneous changes in the chromatin state of repeat sequences of the mother may also contribute to protection against sterility . 
| sequencing techniques, invertebrates, medicine and health sciences, reproductive system, gene regulation, invertebrate genomics, animals, animal models, drosophila melanogaster, model organisms, experimental organism systems, molecular biology techniques, epigenetics, rna sequencing, drosophila, chromatin, research and analysis methods, small interfering rnas, genomics, artificial gene amplification and extension, chromosome biology, gene expression, comparative genomics, molecular biology, animal genomics, insects, arthropoda, ovaries, biochemistry, rna, eukaryota, anatomy, nucleic acids, cell biology, polymerase chain reaction, genetics, biology and life sciences, computational biology, non-coding rna, organisms | null |
1,166 | journal.pcbi.1003071 | 2,013 | Constraint and Contingency in Multifunctional Gene Regulatory Circuits | Gene regulatory circuits are at the heart of many fundamental biological processes , ranging from developmental patterning in multicellular organisms 1 to chemotaxis in bacteria 2 ., Regulatory circuits are usually multifunctional ., This means that they can form different metastable gene expression states under different physiological conditions , in different tissues , or in different stages of embryonic development ., The segment polarity network of Drosophila melanogaster offers an example , where the same regulatory circuit affects several developmental processes , including embryonic segmentation and the development of the fly's wing 3 ., Similarly , in the vertebrate neural tube , a single circuit is responsible for interpreting a morphogen gradient to produce three spatially distinct ventral progenitor domains 4 ., Other notable examples include the bistable competence control circuit of Bacillus subtilis 5 and the lysis-lysogeny switch of bacteriophage lambda 6 ., Multifunctional regulatory circuits are also relevant to synthetic biology , where artificial oscillators 7 , toggle switches 8 , and logic gates 9 are engineered to control biological processes ., The functions of gene regulatory circuits are embodied in their gene expression patterns ., An important property of natural circuits , and a design goal of synthetic circuits , is that these patterns should be robust to perturbations ., Such perturbations include nongenetic perturbations , such as stochastic fluctuations in protein concentrations and environmental change ., Much attention has focused on understanding 1 , 2 , 4 , 10 , 11 and engineering 12–14 circuits that are robust to nongenetic perturbations ., Equally important is the robustness of circuit functions to genetic perturbations , such as those caused by point mutation or recombination ., Multiple studies have asked what
renders biological circuitry robust to such genetic changes 15–20 ., With few exceptions 21 , 22 , these studies have focused on circuits with one function , embodied in their gene expression pattern ., Such monofunctional circuits tend to have several properties ., First , many circuits exist that have the same gene expression pattern 17–19 , 23–28 ., Second , these circuits can vary greatly in their robustness 16 , 18 , 29 ., And third , they can often be reached from one another via a series of function-preserving mutational events 18 , 19 , 30 ., Taken together , these observations suggest that the robustness of the many circuits with a given regulatory function can be tuned via incremental mutational change ., Most circuits have multiple functions , but how these observations translate to such multifunctional circuits is largely unknown ., In a given space of possible circuits , how many circuits exist that have a given number k of specific functions ( expression patterns ) ?, What is the relationship between this number of functions and the robustness of each function ?, Do circuits with any combination of functions exist , or are some combinations “prohibited ? ”, Pertinent earlier work showed that there are indeed fewer multifunctional circuits than monofunctional circuits 21 , but this investigation had two main limitations ., First , it considered circuits so large that the space of circuits and their functions could not be exhaustively explored , and restricted itself to mostly bifunctional circuits ., Second , it included only topological circuit variants ( i . e .
, who interacts with whom ) , and ignored variations in the signal-integration logic of cis-regulatory regions ., These regions encode regulatory programs , which specify the input-output mapping of regulatory signals ( input ) to gene expression pattern ( output ) 31–33 ., Variations in cis-regulatory regions 34 , such as mutations that change the spacing between transcription factor binding sites 35 , are known to impact circuit function 36 , 37 , and their inclusion in a computational model of regulatory circuits is thus important ., Here , we overcome these limitations by focusing on regulatory circuits that are sufficiently small that an entire space of circuits can be exhaustively explored ., Specifically , we focus on circuits that comprise only three genes and all possible regulatory interactions between them ., Small circuits like this play an important role in some biological processes ., Examples include the kaiABC gene cluster in Cyanobacteria , which is responsible for circadian oscillations 38 , the gap gene system in Drosophila , which is responsible for the interpretation of morphogen gradients during embryogenesis 19 , and the krox-otx-gatae feedback loop in starfish , which is necessary for endoderm specification 39 ., Additionally , theoretical studies of small regulatory circuits have provided several general insights into the features of circuit design and function ., Examples include biochemical adaptation in feedback loops 40 and response delays in feed-forward loops 41 , among others 16 , 19 , 23 , 42–45 ., Lastly , there is a substantial body of evidence suggesting that small regulatory circuits form the building blocks of larger regulatory networks 34 , 46–48 , further warranting their study ., For two reasons , we chose Boolean logic circuits 49 as our modeling framework ., First , they allow us not only to vary circuit topology 45 , but also a circuit's all-important signal-integration logic 44 ., Second , Boolean circuits have been
successful in explaining properties of biological circuits ., For example , they have been used to explain the dynamics of gene expression in the segment polarity genes of Drosophila melanogaster 50 , the development of primordial floral organ cells of Arabidopsis thaliana 51 , gene expression cascades after gene knockout in Saccharomyces cerevisiae 52 , and the temporal and spatial expression dynamics of the genes responsible for endomesoderm specification in the sea urchin embryo 53 ., We consider a specific gene expression pattern as the function of a circuit like this , because it is this pattern that ultimately drives embryonic pattern formation and physiological processes ., Multifunctional circuits are circuits with multiple gene expression patterns , and here we study the constraints that multifunctionality imposes on the robustness and other properties of regulatory circuits ., The questions we ask include the following:, ( i ) How many circuits have a given number k of functions ?, ( ii ) What is the relationship between multifunctionality and robustness to genetic perturbation ?, ( iii ) Are some multifunctional circuits more robust than others ?, ( iv ) Is it possible to change one multifunctional circuit into another through a series of small genetic changes that do not jeopardize circuit function ?, We consider circuits of genes ( Fig . 1A ) ., We choose a compact representation of a circuit's genotype G that allows us to represent both a circuit's signal-integration logic and its architecture by a single binary vector of length ( Fig .
1B ) ., Changes to this vector can be caused by mutations in the cis-regulatory regions of DNA ., Such mutations may alter the binding affinity of a transcription factor to its binding site , thereby creating or removing a regulatory interaction 34 ., Alternatively , they may affect the distance of a transcription factor binding site from the transcription start site , changing its rotational position on the DNA helix ., In turn , this may alter the regulatory effect of the transcription factor 54 , and change the downstream gene's signal-integration logic ., Lastly , such mutations may change the distance between adjacent transcription factor binding sites , enabling or disabling a functional interaction between proximally bound transcription factors 35 ., We note that mutations in G could also be conceptualized as changes in the DNA binding domain of a transcription factor ., However , evolutionary evidence from microbes suggests that alterations in the structure and logic of regulatory circuits occur preferentially via changes in cis-regulatory regions , rather than via changes in the transcription factors that bind these regions 55 ., The dynamics of the expression states of a circuit's N genes begin with a prespecified initial state , which represents regulatory influences outside or upstream of the circuit , such as transcription factors that are not part of the circuit but can influence its expression state ., The initial state reflects the fact that small circuits are typically embedded in larger regulatory networks 34 , 46–48 , which provide the circuit with different regulatory inputs under different environmental or tissue-specific conditions ., Through the regulatory interactions specified in the circuit's genotype , the circuit's gene expression state changes from this initial state , until it may reach a stable ( i . e . , fixed-point ) equilibrium state ., We consider a circuit's function to be a mapping from an initial expression state to an equilibrium expression state ( Fig . 1C ) ., In the main text , we consider only circuit functions that involve fixed-point equilibria , but we consider periodic equilibrium states in the Supporting Online Material ., A circuit could in principle have as many as functions , as long as the initial expression states are all different from one another , and the equilibrium expression states are all different from one another ( Material and Methods ) ., The circuits we study may map multiple initial states to the same equilibrium state , but our definition of function ignores all but one of these initial states ., While a definition of function that includes many-to-one mappings between initial and equilibrium states can be biologically sensible , our intent is to investigate specific pairs of inputs ( i . e . , ) and outputs ( i . e . , ) , as is typical for circuits in development and physiology 56–58 ., We emphasize that a circuit can express its k functions individually , or in various combinations , such that the same circuit could be said to have between one and k functions ., For brevity , we refer to a specific set of k functions as a multifunction or a k-function and to circuits that have at least one function as viable ., The space of circuits we explore here contains possible genotypes ., We exhaustively determine the equilibrium expression states of each genotype for all initial states , thereby providing a complete genotype-to-phenotype ( function ) map ., We use this map to partition the space of genotypes into genotype networks 17–19 , 21 ., A genotype network consists of a single connected set of genotypes ( circuits ) that have identical functions , and where two circuits are connected neighbors if their corresponding genotypes differ by a single element ( Fig .
1D ) ., Note that such single mutations may correspond to larger mutational changes in the cis-regulatory regions of DNA ., For example , mutations that change the distance between binding sites , or between a binding site and a transcription start site , may involve the addition or deletion of large segments of DNA 26 , 59–62 ., We first asked how the number of genotypes that have k functions depends on k ., Fig . 2 shows that this number decreases exponentially , implying that multifunctionality constrains the number of viable genotypes severely ., For instance , increasing k from 1 to 2 decreases the number of viable genotypes by 34%; further increasing k from 2 to 3 leads to an additional 39% decrease ., However , there is always at least one genotype with a given number k of functions , for any ., In other words , even in these small circuits , multiple genotypes exist that have many functions ., Thus far , we have determined the number of genotypes with a given number k of functions , but we did not distinguish between the actual functions that these genotypes can have ., For example , there are 64 variants of function , since there are potential initial states and potential equilibrium states ( ) ., Analogously , simple combinatorics ( Text S1 ) shows that there are 1204 variants of functions , and the number of variants increases dramatically with greater k , up to a maximum of variants of functions ., This is possible because individual functions can occur in different possible combinations in multifunctional circuits ( Material and Methods ) ., The solid line in the inset of Fig . 2 indicates how this number of possible different functions scales with k ., We next asked whether there exist circuits ( genotypes ) for each of these possible combinations of functions , or whether some multifunctions are prohibited ., The open circles in the inset of Fig . 
2 show the answer: These circles lie exactly on the solid line that indicates the number of possible combinations of functions for each value of k ( Text S1 ) ., This means that no multifunction is prohibited ., In other words , even though multifunctionality constrains the number of viable genotypes , there is always at least one genotype with k functions , and in any possible combination ., As gene regulatory circuits are often involved in crucial biological processes , their functions should be robust to perturbation ., We therefore asked whether the constraints imposed by multifunctionality also impact the robustness of circuits and their functions ., In studying robustness , we differentiate between the robustness of a genotype ( circuit ) and the robustness of a k-function ., We assess the robustness of a genotype as the proportion of all possible single-mutants that have the same k-function , and the robustness of a k-function as the average robustness of all genotypes with that k-function 17 , 18 , 51 , 63 ( Material and Methods ) ., We refer to the collection of genotypes with a given k-function as a genotype set , which may comprise one or more genotype networks ., We emphasize that a genotype may be part of several different genotype sets , because genotypes typically have more than one k-function ., Fig . 3A shows that the robustness of a k-function decreases approximately linearly as k increases , indicating a trade-off between multifunctionality and robustness ., However , some degree of robustness is maintained so long as ., For larger k , some functions exist that have zero robustness ( Text S1 ) , that is , none of the circuits with these functions can tolerate a change in their regulatory genotype ., The inset of Fig . 
3A reveals a similar inverse relationship between the size of a genotype set and the number of functions k , implying that multifunctions become increasingly less “designable” 64 — fewer circuits have them — as k increases ( Text S1 ) ., For example , for as few as functions , the genotype set may comprise a single genotype , reducing the corresponding robustness of the k-function to zero ., For each value of k , the maximum proportion of genotypes with a given k-function is equal to the square of the maximum proportion of genotypes with a function , explaining the triangular shape of the data in the inset ., This triangular shape indicates that the genotype set of a given k-function is always smaller than the union of the k constituent genotype sets ., Additionally , we find that the robustness of a k-function and the size of its genotype set are strongly correlated ( Fig . S1 ) , indicating that the genotypes of larger genotype sets are , on average , more robust than those of smaller genotype sets ., This result is not trivial because the structure of a genotype set may change with its size ., For example , large genotype sets may comprise many isolated genotypes , or their genotype networks might be structured as long linear chains ., In either case , the robustness of a k-function would decrease as the size of its genotype set increased ., We have so far focused on the properties of the genotype sets of k-functions , but have not considered the properties of the genotype networks that make up these sets ., Therefore , we next asked how genotypic robustness varies across the genotype networks of k-functions ., In Figs . 3B–D , we show the distributions of genotypic robustness for representative genotype networks with functions ., These distributions highlight the inherent variability in genotypic robustness that is present in the genotype networks of multifunctions , indicating that genotypic robustness is an evolvable property of multifunctional circuits .,
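As an illustration, the genotypic robustness calculation described above can be sketched for a toy version of the model. The code below is our own minimal reconstruction, not the authors' implementation: we assume N = 3 genes, synchronous updates, and a genotype encoded as a 24-bit vector in which each gene's 8-entry truth table gives that gene's next value for every current expression state; all function names are ours.

```python
from itertools import product

N = 3
STATES = list(product((0, 1), repeat=N))  # the 2**N = 8 expression states


def step(genotype, state):
    """Synchronous update: bit (g * 2**N + j) of the genotype is gene g's
    next value when the circuit occupies the j-th expression state."""
    j = STATES.index(state)
    return tuple(genotype[g * 2 ** N + j] for g in range(N))


def equilibrium(genotype, init):
    """Iterate the dynamics from an initial state; return the fixed point,
    or None if the trajectory enters a cycle instead."""
    state, seen = init, set()
    while state not in seen:
        seen.add(state)
        nxt = step(genotype, state)
        if nxt == state:
            return state
        state = nxt
    return None


def has_k_function(genotype, functions):
    """functions: list of (initial state, required equilibrium) pairs."""
    return all(equilibrium(genotype, init) == eq for init, eq in functions)


def genotype_robustness(genotype, functions):
    """Fraction of the one-element mutants that retain every function."""
    mutants = [genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
               for i in range(len(genotype))]
    return sum(has_k_function(m, functions) for m in mutants) / len(mutants)
```

Under this encoding, the robustness of a k-function would then be the mean of `genotype_robustness` over all genotypes in its genotype set.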
Indeed , in Fig . S2 , we show the results of random walks on these genotype networks , which confirm that it is almost always possible to increase genotypic robustness through a series of mutational steps that preserve the k-function ., In Fig . S3 , we show in which dynamic regimes ( Material and Methods ) the circuits in these same genotype networks lie ., We have shown that the genotype set of any k-function is non-empty ( Fig . 2 ) , meaning that there are no “prohibited” k-functions ., We now ask how the genotypes with a given k-function are organized in genotype space ., More specifically , is it possible to connect any two circuits with the same k-function through a sequence of small genotypic changes where each change in the sequence preserves this k-function ?, In other words , are all genotypes with a given k-function part of the same genotype network , or do such genotypes occur on multiple disconnected genotype networks ?, Fig . 4 shows the relationship between the number of genotype networks in a genotype set and the number of circuit functions k ., For monofunctional circuits ( ) , the genotype set always consists of a single , connected genotype network ., This implies that any genotype in the genotype set can be reached from any other via a series of function-preserving mutational events ., In contrast , for circuits with functions , the genotype set often fragments into several isolated genotype networks , indicating that some regions of the genotype set cannot be reached from some others without jeopardizing circuit function ., The most extreme fragmentation occurs for functions , where some genotype sets break up into more than 20 isolated genotype networks ., Fig . S4 provides a schematic illustration of how fragmentation can occur in a k-function's genotype set , despite the fact that the genotype sets of the k constituent monofunctions consist of genotype networks that are themselves connected ., Fig . S5 provides a concrete example of fragmentation , depicting one genotype from each of the several genotype networks of a bifunction's genotype set ., The proportion of k-functions with genotype sets that comprise a single genotype network is shown in the inset of Fig . 4 ., Figs . 4B–D show that the distributions of the number of genotype networks per genotype set are typically left-skewed ., This implies that when fragmentation occurs , the genotype set usually fragments into only a few genotype networks ., However , the distribution of genotype network sizes across all genotype sets is heavy-tailed and often spans several orders of magnitude ( Fig . S6 ) ., This means that the number of genotypes per genotype network is highly variable ., We next ask whether the number of genotypes in the genotype set of a k-function can be predicted from the number of genotypes in the genotype sets of the k constituent monofunctions ., To address this question , we define the fractional size of a genotype set as the number of genotypes in the set , divided by the number of genotypes in genotype space ., We first observe that the maximum fractional size of a genotype set of a k-function is equal to ( Fig . S6 ) , which is the maximum fractional size of a genotype set for monofunctional circuits 44 raised to the kth power ., In general , we find that the fractional size of a genotype set of a k-function can be approximated with reasonable accuracy by the product of the fractional sizes of the genotype sets of the k constituent monofunctions , but that the accuracy of this approximation decreases as k increases ( Fig .
S7 ) ., While these fractional genotype set sizes may be quite small , we note that their absolute sizes are still fairly large , even in the tiny circuits considered here ., For example , for functions the maximum genotype set size is 262 , 144 ., For functions , the maximum is 32 , 768 ., In evolution , a circuit may acquire a new regulatory function while preserving its pre-existing functions ., An example is the highly-conserved hedgehog regulatory circuit , which patterns the insect wing blade ., In butterflies , this regulatory circuit has acquired a new function ., It helps form the wing's eyespots , an antipredatory adaptation that arose after the insect body plan 65 ., This example illustrates that a regulatory circuit may acquire additional functions incrementally via gradual genetic change ., The order in which the mutations leading to a new function arise and go to fixation can have a profound impact upon the evolution of such phenotypes 66 ., In particular , early mutations have the potential to influence the phenotypic effects of later mutations , which can lead to a phenomenon known as historical contingency ., We next ask whether it is possible for a circuit to incrementally evolve regulatory functions in any order , or whether this evolutionary process is susceptible to historical contingency ., In other words , is it possible that some sequence of genetic changes that lead a circuit to have k functions also preclude it from gaining an additional function ?, The genotype space framework allows us to address this question in a systematic way , because it permits us to see contingency as a result of genotype set fragmentation ., Specifically , contingency means that , as a result of fragmentation , the genotype network of a new function may become inaccessible from at least one of the genotype networks of a k-function's genotype set ., To ask whether this occurs in our model regulatory circuits , we considered all permutations of every k-function .,
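The partitioning of a genotype set into genotype networks, which underlies the fragmentation analysis above, amounts to finding connected components of the set under single-element mutation. A minimal sketch (our own naming; genotypes are assumed to be equal-length binary tuples):

```python
from collections import deque


def one_mutant_neighbors(g):
    """All genotypes differing from g in exactly one element."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]


def genotype_networks(genotype_set):
    """Partition a genotype set into connected components, where edges
    join genotypes at Hamming distance 1 (breadth-first search)."""
    remaining = set(genotype_set)
    components = []
    while remaining:
        seed = remaining.pop()
        component, frontier = {seed}, deque([seed])
        while frontier:
            g = frontier.popleft()
            for m in one_mutant_neighbors(g):
                if m in remaining:
                    remaining.remove(m)
                    component.add(m)
                    frontier.append(m)
        components.append(component)
    return components
```

A genotype set that forms a single genotype network yields exactly one component; a fragmented set yields several.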
These permutations reflect every possible order in which a circuit may acquire a specific combination of k functions through a sequence of genetic changes. To determine the frequency with which historical contingency occurs, we calculate the number of genotype networks per genotype set as the k functions are incrementally added. This procedure is outlined in Fig. S4 and detailed in the Material and Methods section. We note that historical contingency is not possible when k = 1, because all monofunctions comprise genotype sets with a single connected genotype network. Historical contingency is also not possible for the maximum number of functions, because there is only one genotype that yields this combination (Fig. 2). In Fig. 5, we show the relationship between the proportion of k-functions that exhibit historical contingency and the number of functions k. Even for small numbers of functions, 43% of all k-functions exhibit historical contingency. The percentage peaks at 94% of combinations being contingent. The inset of Fig. 5 shows the proportion of the permutations of a k-function in which genotype set fragmentation may preclude the evolution of the k-function. Again, this proportion peaks at an intermediate number of functions. These results highlight an additional constraint of multifunctionality. Not only does the number of genotypes with k functions decrease as k increases, but the dependence upon the temporal order in which these functions evolve tends to increase. In the Supporting Online Material, we repeat the above calculations to show how our results scale to equilibrium expression states with longer periods (for the sake of computational tractability, we restrict our attention to the case where all equilibrium expression states have the same period P). We show that the exponential decrease in the number of circuits with k functions also holds for periodic equilibrium expression states, but that the maximum number of functions per circuit decreases with increasing P (Fig. S8). So long as P is not too large, it is possible for a circuit to have more than one function. In this case, the inverse relationship between robustness to genetic perturbation and the number of functions k also holds (Fig. S9). Similarly, the results pertaining to genotype set fragmentation hold so long as P is not too large (Fig. S10). Lastly, the results pertaining to historical contingency only hold for the smallest periods, because a circuit with a longer-period equilibrium expression pattern cannot have enough functions, which is a prerequisite for historical contingency (Material and Methods). Taken together, these additional observations show that the results obtained for fixed-point equilibrium expression states can also apply to periodic equilibrium expression states, so long as P is not too large. We have used a Boolean model of gene regulatory circuits to exhaustively characterize the functions of all possible combinations of circuit topologies and signal-integration functions in three-gene circuits. The most basic question we have addressed is whether multifunctionality is easy or difficult to attain in regulatory circuits. Our results show that while the number of circuits with k functions decreases sharply as k increases, there are generally thousands of circuits with k functions, so long as k is not exceedingly large. Thus, multifunctionality is relatively easy to attain, even in the tiny circuits examined here. It is worth considering how this result might translate to larger circuits. In a related model of gene regulatory circuits with a larger number of genes, the genotype sets of bifunctions comprised an average number of circuits [21] that is over an order of magnitude more circuits per bifunction than observed here (Fig.
3, inset). For a greater number of functions k, we expect the number of circuits per k-function to increase as the number of genes N in the regulatory circuit increases. This is because the maximum number of circuits with a given k-function is the total number of circuits with N genes multiplied by the maximum proportion of circuits per multifunction. For a given number of functions k, this quantity will increase hyper-exponentially as N increases, indicating a dramatic increase in the maximum number of circuits per k-function. More generally, because the fractional size of a k-function's genotype set can be approximated as the product of the fractional sizes of the genotype sets of its k constituent monofunctions (Fig. S7), and because the total number of circuits increases exponentially with N, our observation that there are many circuits with k functions is expected to scale to larger circuits. The next question we asked is whether there is a tradeoff between the robustness of a k-function and the number of functions k. We found that the robustness of a k-function decreases as k increases. However, some degree of robustness is generally maintained, so long as k is not too large. These observations suggest that the number of circuit functions generally does not impose severe constraints on the evolution of circuit genotypes, unless the number of functions is very large. Our current knowledge of biological circuits is too limited to allow us to count the number of functions per circuit. However, we can ask whether the functional "burden" on biological circuits is very high. If so, we would expect that the genes that form these circuits and their regulatory regions cannot tolerate genetic perturbations, and that they have thus accumulated few or no genetic changes in their evolutionary history. However, this is not the case. The biochemical activities and regulatory regions of circuit genes can diverge extensively without affecting circuit function [55], [59], [61], [67], and the very different circuit architectures of distantly related species can have identical function [24], [28]. Further, circuits are highly robust to the experimental perturbation of their architecture, such as the rewiring of regulatory interactions [20]. More indirect evidence comes from the study of genes with multiple functions, identified through gene ontology annotations. The rate of evolution of these genes is significantly but only weakly correlated with the number of known functions [68]. Thus, the functional burden on biological genes and circuits is not sufficiently high to preclude evolutionary change. Previous studies of monofunctional regulatory circuits have revealed broad distributions of circuit robustness to genetic perturbation [16], [18], [29]. We therefore asked if this is also the case for multifunctional circuits. We found that circuit robustness was indeed variable, but that the mean and variance of the distributions of circuit robustness decreased as the number of functions k increased. Thus, variation in circuit robustness persists in multifunctional circuits, so long as k is not too large. This provides further evidence that robustness to mutational change may be considered the rule, rather than the exception, in biological networks [1], [18], [20], [29]. However, to make the claim that robustness to genetic perturbation is an evolvable property in multifunctional regulatory circuits requires not only variability in circuit robustness, but also the ability to change one circuit into another via a series of mutations that do not affect any of the circuit's functions. We therefore asked whether it is possible to interconvert any two circuits with the same function via a series of function-preserving mutational changes. We showed that this is always possible for monofunctions, but not necessarily for multifunctions, because these often comprise
fragmented genotype sets. Genotype set fragmentation has also been observed at lower levels of biological organization, such as the mapping from RNA sequence to secondary structure [69]. Such fragmentation has two evolutionary implications, as has recently been discussed for RNA phenotypes [70]. First, the mutational robustness of a phenotype (function) depends upon which genotype network its sequences inhabit, as we have also shown for regulatory circuits (Fig. S11). Second, it can lead to historical contingency, where the phenotypic effects of future mutations depend upon the current genetic background. Such contingency indeed occurs in our circuits, because the specific genotype network that a circuit (genotype) occupies may be influenced by the temporal order in which a circuit's functions (phenotypes) have evolved. This order in turn may affect a circuit's ability to evolve new functions. These observations hinge on the assumption that the space between two (disconnected) parts of a fragmented genotype set is not easily traversed. For example, in RNA it is well known that pairs of so-called compensatory mutations can allow transitions between genotype networks [71], thus alleviating the historical contingency caused by fragmentation. To assess whether an analogous phenomenon might exist for regulatory circuits, we calculated the average distance between all pairs of genotypes on distinct genotype networks for circuits with the same k-function. We found that this distance decreases as the number of functions k increases, indicating an increased proximity between genotype networks (Fig. S12). However, those pairs of genotypes in any two different genotype networks that had the minimal distance of two mutations never exceeded 1% of all pairs of genotypes on these networks, and this fraction was as low as 0.03% for some numbers of functions (Fig. S12A, inset). This means that transitions between genotype networks through few mutations are not usually possible in these model regulatory circuits. Thus, the multiple genotype networks of a genotype set can indeed | Introduction, Results, Discussion, Materials and Methods | Gene regulatory circuits drive the development, physiology, and behavior of organisms from bacteria to humans. The phenotypes or functions of such circuits are embodied in the gene expression patterns they form. Regulatory circuits are typically multifunctional, forming distinct gene expression patterns in different embryonic stages, tissues, or physiological states. Any one circuit with a single function can be realized by many different regulatory genotypes. Multifunctionality presumably constrains this number, but we do not know to what extent. We here exhaustively characterize a genotype space harboring millions of model regulatory circuits and all their possible functions. As a circuit's number of functions increases, the number of genotypes with a given number of functions decreases exponentially but can remain very large for a modest number of functions. However, the sets of circuits that can form any one set of functions become increasingly fragmented. As a result, historical contingency becomes widespread in circuits with many functions. Whether a circuit can acquire an additional function in the course of its evolution becomes increasingly dependent on the function it already has. Circuits with many functions also become increasingly brittle and sensitive to mutation. These observations are generic properties of a broad class of circuits and independent of any one circuit genotype or phenotype.
| Many essential biological processes, ranging from embryonic patterning to circadian rhythms, are driven by gene regulatory circuits, which comprise small sets of genes that turn each other on or off to form a distinct pattern of gene expression. Gene regulatory circuits often have multiple functions. This means that they can form different gene expression patterns at different times or in different tissues. We know little about multifunctional gene regulatory circuits. For example, we do not know how multifunctionality constrains the evolution of such circuits, how many circuits exist that have a given number of functions, and whether tradeoffs exist between multifunctionality and the robustness of a circuit to mutation. Because it is not currently possible to answer these questions experimentally, we use a computational model to exhaustively enumerate millions of regulatory circuits and all their possible functions, thereby providing the first comprehensive study of multifunctionality in model regulatory circuits. Our results highlight limits of circuit designability that are relevant to both systems biologists and synthetic biologists. | systems biology, computer science, genetics, biology, computational biology, computerized simulations, gene networks | null |
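The historical-contingency criterion described in the record above — some acquisition order strands a lineage on a genotype network that shares no genotype with the next function — can be sketched for toy genotype sets. The bit-string genotype encoding, the single-bit mutation model, and the example sets are illustrative assumptions, not the paper's actual circuit model.

```python
from collections import deque
from itertools import permutations

def components(gset, n_bits):
    """Split a genotype set into its genotype networks (connected
    components under single-bit mutations)."""
    unseen, comps = set(gset), []
    while unseen:
        comp, queue = set(), deque([unseen.pop()])
        while queue:
            g = queue.popleft()
            comp.add(g)
            for i in range(n_bits):
                nb = g ^ (1 << i)
                if nb in unseen:
                    unseen.remove(nb)
                    queue.append(nb)
        comps.append(comp)
    return comps

def is_contingent(function_sets, n_bits):
    """True if, for some order of acquiring the functions, a lineage can
    end up on a genotype network that contains no genotype with the next
    function -- making that function unreachable by point mutation."""
    for order in permutations([frozenset(s) for s in function_sets]):
        gset = set(order[0])
        for nxt in order[1:]:
            if any(not (comp & nxt) for comp in components(gset, n_bits)):
                return True
            gset &= nxt  # genotypes having all functions acquired so far
    return False
```

For instance, with function A's genotype set {0000, 0001, 1111} and function B's set {0000, 0001}, acquiring A first can strand a lineage on the isolated network {1111}, from which B is unreachable, so the combination is contingent.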
1,313 | journal.pcbi.1000445 | 2009 | Topography of Extracellular Matrix Mediates Vascular Morphogenesis and Migration Speeds in Angiogenesis | How the physical properties of the ECM, such as density, heterogeneity, and stiffness, affect cell behavior is also an area of current investigation. Matrigel, a popular gelatinous protein substrate for in vitro experiments of angiogenesis, is largely composed of collagen and laminin and contains growth factors, all of which provide an environment conducive to cell survival. In experiments of endothelial cells on Matrigel, increasing the stiffness of the gel or disrupting the organization of the cellular cytoskeleton inhibits the formation of vascular cell networks [10], [11]. Cells respond to alterations in the mechanical properties of the ECM, for example, by upregulating their focal adhesions on stiffer substrates [12]. For anchorage-dependent cells, including endothelial cells, increasing the stiffness of the ECM therefore results in increased cell traction and slower migration speeds [12]. Measurements of Matrigel stiffness as a function of density show a positive relationship between these two mechanical properties [13]. That is, as density increases, so does matrix stiffness. In light of these two findings, it is not surprising that this experimental study also shows slower cell migration speeds as matrix density increases [13]. Moreover, matrices with higher fiber density transfer less strain to the cell [14], and experiments of endothelial cells cultured on collagen gels demonstrate that directional sprouting, called branching, is induced by collagen matrix tension [15]. Thus, via integrin receptors, the mechanical properties of the ECM influence cell-matrix interactions and modulate cell shape, cell migration speed, and the formation of vascular networks. Understanding how individual cells interpret biochemical and mechanical signals from the ECM is only a part of the whole picture.
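Matrix density recurs throughout this record; in the model described below it is quantified as the fraction of tissue sites occupied by matrix fibers. A minimal sketch of that bookkeeping on an occupancy grid follows; the grid dimensions, the fiber-laying scheme, and the fiber length are illustrative assumptions, not the authors' exact construction.

```python
import math
import random

def make_matrix(rows, cols, target_density, fiber_len=20, seed=0):
    """Scatter straight fibers at random positions and random integer
    angles in [-90, 90] degrees until the occupied fraction of the grid
    reaches target_density. Returns the set of occupied (row, col) sites."""
    rng = random.Random(seed)
    occupied = set()
    while len(occupied) / (rows * cols) < target_density:
        r0, c0 = rng.randrange(rows), rng.randrange(cols)
        theta = math.radians(rng.randint(-90, 90))
        for t in range(fiber_len):  # rasterize one fiber
            r = int(r0 + t * math.sin(theta))
            c = int(c0 + t * math.cos(theta))
            if 0 <= r < rows and 0 <= c < cols:
                occupied.add((r, c))
    return occupied

def density(occupied, rows, cols):
    """Matrix fiber density: occupied sites / total tissue sites."""
    return len(occupied) / (rows * cols)
```

Because the loop stops at the first fiber that reaches the target, the final density only slightly overshoots the requested fraction (by at most one fiber's worth of sites).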
Morphogenic processes also require multicellular coordination. In addition to the guidance cues cells receive from the ECM, they also receive signals from each other. During new vessel growth, cells adhere to each other through cell-cell junctions, called cadherins, and in order to migrate, cells must coordinate integrin-mediated focal adhesions with these cell-cell bonds. This process is referred to as collective or cluster migration [16]. During collective migration, cell clusters often organize as two-dimensional sheets [16]. Cells also have the ability to condition the ECM for invasion by producing proteolytic enzymes that degrade specific ECM proteins [17]. In addition, cells can synthesize ECM components, such as collagen and fibronectin [11], [18], and can further reorganize the ECM by the forces they exert on it during migration [10], [11], [14]. Collagen fibrils align in response to mechanical loading, and cells reorient in the direction of the applied load [14]. Tractional forces exerted by vascular endothelial cells on Matrigel cause cords or tracks of aligned fibers to form, promoting cell elongation and motility [11]. As more experimental data are amassed, the ECM is emerging as the vital component of morphogenic processes. In this work, we extend our cellular model of angiogenesis [19] and validate it against empirical measurements of sprout extension speeds. We then use our model to investigate the effect of ECM topography on vascular morphogenesis and focus on mechanisms controlling cell shape and orientation, sprout extension speeds, and sprout morphology. We show the dependence of sprout extension speed and morphology on matrix density, fiber network connectedness, and fiber orientation. Notably, we observe that varying matrix fiber density affects the likelihood of capillary sprout branching. The model predicts an optimal density for capillary network formation and suggests matrix heterogeneity as a mechanism for sprout branching. We also identify unique ranges of matrix density that promote sprout extension or that interrupt normal angiogenesis, and show that maximal sprout extension speeds are achieved within a density range similar to the density of collagen found in the cornea. Finally, we quantify the effects of proteolytic matrix degradation by the tip cell on sprout velocity and demonstrate that degradation promotes sprout growth at high densities but has an inhibitory effect at lower densities. Based on these findings, we suggest and discuss several ECM-targeted pro- and anti-angiogenesis therapies that can be tested empirically. We previously published a cell-based model of tumor-induced angiogenesis that captures endothelial cell migration, growth, and division at the level of individual cells [19]. That model also describes key cell-cell and cell-matrix interactions, including intercellular adhesion, cellular adhesion to matrix components, and chemotaxis, to simulate the early events in new capillary sprout formation. In the present study, we extend that model to incorporate additional mechanisms for cellular motility and sprout extension, and use vascular morphogenesis as a framework to study how ECM topography influences intercellular and cell-matrix interactions. The model is two-dimensional. It uses a lattice-based cellular Potts model describing individual cellular interactions, coupled with a partial differential equation describing the spatio-temporal dynamics of vascular endothelial growth factor. At every time step, the discrete and continuous models feed back on each other and describe the time evolution of the extravascular tissue space and the developing sprout. The cellular Potts model evolves by the Metropolis algorithm: lattice updates are accepted probabilistically to reduce the total energy of the system in time. The probability of accepting a lattice update is given by P(accept) = min(1, e^(−ΔH/kT)), where ΔH is the change in total energy of the system as
a result of the update, k is the Boltzmann constant, and T is the effective temperature corresponding to the amplitude of cell membrane fluctuations. A higher temperature corresponds to larger cell membrane fluctuation amplitudes. The total energy, H, includes a term describing cell-cell and cell-matrix adhesion, a constraint controlling cellular growth, an effective chemotaxis potential, and a continuity constraint. Mathematically, total energy is given by:
H = Σ J_{τ,τ′}(1 − δ_{σ,σ′}) + λ Σ_σ (v_σ − V_T)² − Σ_σ μ_σ ∇C + λ_c Σ_σ (v_σ − b_σ)   (1)
In the first term of Eq. 1, J_{τ,τ′} represents the binding energy between model constituents. For example, J_{cell,cell} describes the relative strength of cell-cell adhesion that occurs via transmembrane cadherin proteins. Similarly, J_{cell,fiber} is a measure of the binding affinity between an endothelial cell and a matrix fiber through cell surface integrin receptors. Each endothelial cell is associated with a unique identifying number, σ. δ_{σ,σ′} is the Kronecker delta function and ensures that adhesive energy only accrues at cell surfaces. The second term in Eq. 1 describes the energy expenditure required for cell growth and deformation. Membrane elasticity is described by λ, v_σ denotes the cell's current volume, and V_T is a specified target volume. For proliferating cells, the target volume is double the initial volume. This growth constraint delivers a penalty to total energy for any deviation from the target volume. In the third term, the parameter μ_σ is the effective chemical potential and influences the strength of chemotaxis relative to other parameters in the model. This chemotaxis potential varies depending on cell phenotype (discussed below) and is proportional to the local VEGF gradient, ∇C, where C denotes the concentration of VEGF. Cells must simultaneously integrate multiple external stimuli, namely intercellular adhesion, chemotactic incentives, and adherence to extracellular matrix fibers. To do so, endothelial cells deform their shape and dynamically regulate adhesive bonds. In the model, however, it is possible that collectively these external stimuli may cause a cell to be pulled or split in two. To prevent non-biological fragmentation of cells, we introduce a continuity constraint that preserves the physical integrity of each individual cell. This constraint expresses that it is energetically expensive to compromise the physical integrity of a cell and is incorporated into the equation for total energy (Eq. 1) in the last term, where λ_c is a continuity-constraint coefficient that represents the effects of the cytoskeletal matrix of a cell, v_σ is the current size of the endothelial cell with identifying number σ, and b_σ is a breadth-first-search count of the number of continuous lattice sites occupied by that endothelial cell. Thus, v_σ − b_σ > 0 signals that the physical integrity of the cell has been compromised, and a penalty to total energy is incurred. Cooperatively, the continuity constraint and the volume constraint implicitly describe the interactions holding the cell together. The amount of VEGF available at the right-hand boundary of the domain is estimated by assuming that, in response to a hypoxic environment, quiescent tumor cells secrete a constant amount of VEGF and that VEGF decays at a constant rate. It is reasonable to assume that the concentration of VEGF within the tumor has reached a steady state and therefore that a constant amount of VEGF is available at the boundary of the tumor. We use constant boundary conditions for the left and right boundaries and periodic boundary conditions in the y-direction. A gradient of VEGF is established as VEGF diffuses through the stroma with a constant diffusion coefficient, decays at a constant rate, and is bound by endothelial cells. A complete description of the biochemical derivation of the function for endothelial cell binding and uptake of VEGF has been previously published [19]. For more direct comparison to other mathematical models of angiogenesis and to isolate the effects
of ECM topology on vessel morphology, we assume that the diffusion coefficient for VEGF in tissue is constant. This is a simplification, however, because the ECM is not homogeneous and VEGF can be bound to and stored in the ECM. Realistically, the diffusion coefficient for VEGF in the ECM depends on both space and time. We address the implications of this assumption in the Discussion. Under these assumptions, the concentration profile of VEGF satisfies a partial differential equation of the form:
∂C/∂t = D∇²C − νC − U(C)   (2)
where D is the constant diffusion coefficient, ν is the constant decay rate, and U(C) is the rate of binding and uptake by endothelial cells. The inset in Figure 1A provides an illustration of the 166 µm × 106 µm domain geometry. We initialize the simulation by establishing the steady-state solution to Eq. 2. The activation and aggregation of endothelial cells, and subsequent breakdown of basement membrane in response to VEGF [20], is a pre-condition (boundary condition) to the simulation. The breakdown of basement membrane allows endothelial cells to enter the extravascular space through a new vessel opening. Our simulation starts with a single activated endothelial cell, ∼10 µm in diameter, that has budded from the parent vessel located adjacent to the left-hand boundary [20]. We use 10 µm only as an initial estimate of endothelial cell size [21], [22]. Once the simulation begins, the cells immediately deform in shape and elongate. During the simulation, the VEGF field is updated iteratively with cell uptake information, for example as shown in Figure 1B, C. VEGF data is processed by the cells at the cell membrane and incorporated into the model through the chemotaxis term in Eq. 1. From the parent blood vessel, endothelial cells (red) migrate into the domain in response to VEGF that is supplied from a tumor located adjacent to the right-hand boundary. The space between represents the stroma and is composed of extracellular matrix fibers (green) and interstitial fluid (blue). The physical meanings of all symbols and their parameter values are summarized in
Table 1. To more accurately capture the cell-cell and cell-matrix interactions that occur during morphogenesis, we implement several additional features in this model. One improvement is the implementation of stalk cell chemotaxis. Stalk cells are not inert, but actively respond to chemotactic signals [23]. As a consequence, cells now migrate as a collective body, a phenomenon called collective or cohort migration [24]. This modification, however, also makes it possible for individual cells, as well as the entire sprout body, to migrate away from the parent vessel, making it necessary to consider cell recruitment from the parent vessel. Cell recruitment is another added feature. During the early stages of angiogenesis, cells are recruited from the parent vessel to facilitate sprout extension [20], [25]. Kearney et al. [26] measured the number and location of cell divisions that occur over 3.6 hours in in vitro vessels 8 days old (a detailed description of these experiments is provided in our discussion of model validation). In these experiments, the sprout field is defined as the area of the parent vessel wall that ultimately gives rise to the new sprout, plus the sprout itself. The sprout field is further broken down into regions based on distance from the parent vessel, and these regions are classified as distal, proximal, and nascent. The authors report that 90% of all cell divisions occur in the parent vessel and the remaining 10% occur at or near the base of the sprout in the nascent area of the sprout field. On average, total proliferation accounts for approximately 5 new cells in 3.6 hours, or 20 cells in 14 hours. This data suggests that there is significant and sufficient proliferation in the primary vessel to account for and facilitate initial sprout extension. This data does not suggest that proliferation in other areas of the sprout field does not occur at other times. In fact, it has been established that a new sprout can migrate only a finite distance into the stroma without proliferation and that proliferation is necessary for continued sprout extension [25]. We model sprout extension through a cell-cell adhesion dependent recruitment of additional endothelial cells from the parent vessel. As an endothelial cell at the base of the sprout moves into the stroma, cell-cell adhesion pulls a cell from the parent vessel along with it. In practice, a new cell is added to the base of the sprout when and where the previous cell detaches from the parent vessel wall (the left boundary of the simulation domain). We assume, based on the data presented in [26], that there is sufficient proliferation in the parent vessel to provide the additional cells required for initial sprout extension while maintaining the physical integrity of the parent vessel. As in our previous model, once a cell senses a threshold concentration of VEGF, it becomes activated. We recognize that cells have distinct phenotypes that dictate their predominant behavior. Thus, we distinguish between tip cells, cells that are proliferating, and non-proliferating but migrating stalk cells. Tip cells are functionally specialized cells that concentrate their internal cellular machinery to promote motility [23]. Tip cells are highly migratory pathfinding cells and do not proliferate [25], [26]. To model the highly motile nature of the tip cell, we assign it the highest chemotactic coefficient. The remainder of the cells are designated as stalk cells and use adhesive binding to and release from the matrix fibers for support and to facilitate
cohort migration. Stalk cells also sense chemical gradients but are not highly motile phenotypes. Thus, the stalk cells in the model are assigned a lower, that is weaker, chemotactic coefficient than the specialized tip cell. Proliferating cells are located behind the sprout tip [23], [26] and increase in size as they move through an 18-hour cell cycle clock in preparation for cell division [27]. Cells that are proliferating can still migrate [26]; it is only during the final stage of the cell cycle that endothelial cells stop moving and round up for mitosis (personal communication with C. Little). As we assume that the presence of VEGF increases cell survivability, we do not model endothelial cell apoptosis. As described in our previous work [19], we model the mesh-like anisotropic structure of the extracellular matrix by randomly distributing 1.1 µm thick bundles of individual collagen fibrils at random discrete orientations between −90 and 90 degrees. Unless otherwise stated, model matrix fibers comprise approximately 40% of the total stroma, and the distribution of the ECM is heterogeneous, with regions of varying densities, as can be seen in Figure 1A and Figure 7D. The cells move on top of the 2D ECM model and interact with the matrix fibers at the cell membrane through the adhesion term in Eq. 1. To relate the density of this model fibrillar matrix to physiological values, we measure matrix fiber density as the ratio of the interstitium occupied by matrix molecules to total tissue space, and compare it to measured values of the volume fraction of collagen fibers in healthy tissues [28]. In order to isolate and control the effects of the matrix topology on cellular behavior and sprout morphology, we look at a static ECM; that is, we do not model ECM rearrangement or dynamic matrix fiber cross-linking and stiffness. We do, however, consider endothelial cell matrix degradation in a series of studies presented in Results. No single model has been proposed that incorporates every aspect of all processes involved in sprouting angiogenesis, nor is this level of complexity necessary for a model to be useful or predictive. It is not our intention to include every biochemical or mechanical dynamic at play during angiogenesis. We develop this two-dimensional cell-based model as a step towards elucidating cellular-level dynamics fundamental to angiogenesis, including cell growth and migration, and cell-cell and cell-matrix interactions. Consequently, we do not incorporate processes or dynamics at the intracellular level. For example, we describe endothelial cell binding of VEGF to determine cell activation and to capture local variations in VEGF gradients, but neglect intracellular molecular pathways signaled downstream of the receptor-ligand complex. Moreover, our focus is on early angiogenic events, and therefore we also do not consider the effects of blood flow on remodeling of mature vascular beds. Numerical studies of flow-induced vascular remodeling have been given attention in McDougall et al.
29 , and Pries and Secomb 30 , 31 ., As is the case in many other simulations of biological systems , when we do not have direct experimental measurements for all of the parameters , choosing these parameter values is not trivial ., A list of values and references for our model parameters is provided in Table 1 ., A parameter is derived from experimental data whenever possible; otherwise it is estimated and denoted ‘est’ ., Fortunately , a sensitivity analysis ( discussed later ) shows that the dynamics of our model are quite robust to substantial variations in some parameters and tells us exactly which parameters are most critical ., We can then choose from a range of parameter values that exhibits the general class of behavior consistent with experimental observations ., See Table 1 for these parameter ranges and Table 3 for the effect of parameter perturbations , as well as supplemental Figures S1 and S2 for examples of cellular behavior under different parameter sets ., In the cellular Potts model , the relative value , not the absolute value , of the parameters corresponds to available physiological measurements and gives rise to the cell behavior observed experimentally ., For example , the Young’s modulus for human vascular endothelial cells is estimated at 2 . 01×10^5 Pa 32 ., The Young’s modulus of a collagen fiber in aqueous conditions is between 0 . 2–0 . 8 GPa 33 ., However , the modulus of a collagen gel network is much lower and is measured at 7 . 5 Pa 34 ., Although the compressibility modulus of interstitial fluid ( water ) is estimated to be 2 . 2 GPa 35 , indicating it is hard to compress under uniform pressure , it deforms easily; that is , its shear modulus is low and is measured at 10^-6 Pa 36 ., The qualitative model parameters are chosen to correspond to these quantitative measurements ., Thus , the ordering of elastic moduli , endothelial cells > matrix fibers > interstitial fluid ( 0 . 2 MPa > 7 . 5 Pa > 10^-6 Pa ) , is reflected in the relative values of the corresponding parameters ., In a similar manner , the coupling parameters describe the relative adhesion strengths among endothelial cells , matrix fibers , and interstitial fluid ., For instance , the couplings are chosen to reflect the fact that endothelial cells have a higher binding affinity to each other , via cadherin receptors and gap junctions for example , than they do to matrix fibers 37 , 38 ., The chemotactic potential is chosen so that its contribution to the change in total energy is of the same order of magnitude as the contribution to total energy from adhesion or growth ., The difference between the concentration of VEGF at two adjacent lattice sites is on the order of 10^-4 ., Therefore , to balance adhesion and growth , the chemotactic potential must be on the order of 10^6 ., We calibrate this parameter to maximize sprout extension speeds ., Similarly , the parameter for continuity is chosen so that cells will not dissociate ., This is achieved by setting it greater than the collective contribution to total energy from the other terms ., By equating the time it takes an endothelial cell to divide during the simulation with the endothelial cell cycle duration of 18 hours , we convert Monte Carlo steps to real time units ., In the simulations reported in this paper , 1 Monte Carlo step is equivalent to 1 minute ., Since this model has several enhancements over the previous model 19 , it has a different number of parameters , which necessitates recalibration of all the parameters ., Therefore , some parameters take on different values ., The canonical benchmark for validating models of tumor-induced angiogenesis is the rabbit cornea assay 39 , 40 ., In this in vivo experimental model , tumor implants are placed in a corneal pocket approximately 1–2 mm from the limbus ., New vessel growth is measured with an ocular micrometer at 10× , which has a measurement error of ±0 .
1 mm or 100 µm ., Initially , growth is linear and sprout extension speeds are estimated at a rate of 0 . 5 mm/day , or 20 . 8±4 . 2 µm/hr ., Sprouts then progress at average speeds estimated to be between 0 . 25–0 . 50 mm/day , or 10 . 4–20 . 8±4 . 2 µm/hr ., More recent measurements of sprout extension speeds during angiogenesis are reported in Kearney et al . 26 ., In this study , embryonic stem cells containing an enhanced green fluorescent protein are differentiated in vitro to form primitive vessels ., Day 8 cell cultures are imaged within an ∼160 µm2 area at 1 minute intervals for 10 hours and show sprouting angiogenesis over this period ., The average extension speed for newly formed sprouts is 14 µm/hr and ranges from 5 to 27 µm/hr ., For cell survival , growth factor is present and is qualitatively characterized as providing a diffuse , or shallow , gradient ., No quantitative data pertaining to growth factor gradients or the effect of chemotaxis during vessel growth are reported 26 ., We use the above experimental models and reported extension speeds as a close approximation to our model of in vivo angiogenesis for quantitative comparison and validation ., We simulate new sprout formation originating from a parent vessel in the presence of a diffusible VEGF field , which creates a shallow VEGF gradient ., We measure average extension speeds over a 14 hour period in a domain 100 µm by 160 µm ., As was done in Kearney et al . 
26 , we calculate average sprout velocities as total sprout tip displacement in time and measure this displacement as the distance from the base of the new sprout to the sprout tip ., Figure 1A shows average sprout extension speed over time for our simulated sprouts ., Reported speeds are an average of at least 10 independent simulations using the same initial VEGF profile and parameter set as given in Table 1 ., Error bars represent the standard error from the mean ., The average extension speeds of our simulated sprouts are within the ranges of average sprout speeds measured by both Kearney et al . 26 and Gimbrone et al . 39 ., Table 2 summarizes various morphological measurements for the simulated sprouts ., It shows that the average velocity , thickness , and cell size of the simulated sprouts compare favorably to relevant experimental measurements ., Sprout velocity is given at 10 hours for direct comparison to 26 and averaged over 14 hours ., Sprout thicknesses and cell size are within normal physiological ranges ., There are many different cell shapes and sizes and vessel morphologies , however , that can be obtained in vivo and in vitro given different environmental factors ( VEGF profile , ECM topology and stiffness , inhibitory factors , other cell types , etc . 
) ., In this manuscript , we investigate several of these dependencies and as we discuss below specific model parameters can be tuned to reproduce different cellular interactions and environments ., Figure 1A indicates that average sprout extension speed changes as a function of time ., Within the first two hours , speeds average ∼30 µm/hr and the new sprout consists of only 1–2 endothelial cells ., At two hours , sprouts contain an average of 3 cells , and at 4 hours , there are a total of 5–6 cells ., Over time , as more cells are added to the developing sprout , cell-cell adhesion and cellular adhesion to the extracelluar matrix slow the sprout extension speed ., The inset in Figure 1A shows the geometry of the computational domain and simulated sprout development at 7 . 8 hours ., As shown , simulated sprouts are approximately one cell diameter wide , which compares quantitatively well to reported VEGF induced vessel diameters 41 , 42 ., Here and in all simulation snapshots , tip cells are identified with a ‘T’ ., In moving multicellular clusters , rear retraction is a collective process that involves many cells simultaneously 16 ., A natural result of the cell-based model is that cells exhibit rear retraction , which refers to the ability of a cell to release its trailing adhesive bonds with the extracellular matrix during migration ., Collective migration , another characteristic dynamic observed during sprout growth , is also evident during the simulations ( see videos ) ., The VEGF concentration profile in picograms ( pg ) at 7 . 8 hours is given in Figure 1B ., Higher concentrations of VEGF are encountered as the cells approach the tumor ., However , because cell uptake of VEGF is small compared to the amount of available VEGF , it is difficult to discern the heterogeneities in the VEGF profile from this figure ., Figure 1C is the VEGF gradient profile ( pg ) at 7 . 
8 hours and is a better indicator of the changes in local VEGF concentration ., This image shows larger gradients in the proximity of the tip cell and along the leading edges of the new sprout ., On average , simulated sprouts migrate 160 µm and reach the domain boundary in approximately 15 . 6 hours , before any cells in the sprout complete their cell cycle and proliferate ., We do not expect to see proliferation in the new sprout because the simulation duration is less than the 18 hour cell cycle and the cell cycle clock is set to zero for newly recruited cells to simulate the very onset of angiogenesis ., In our simulations , sprout extension is facilitated by cell recruitment from the parent vessel ., Between 15 and 20 cells are typically recruited , which agrees with the number of cells we estimate would be available for recruitment based on parent vessel cell proliferation reported by Kearney et al . 26 ., In those experiments 26 , proliferation in the parent vessel was measured for day 8 sprouts , which likely have cells at various stages in their cell cycles ., Proliferation in the new sprout is another mechanism for sprout extension ., Thus , we consider the possibility that cells recruited from the parent vessel may be in different stages of their cell cycles by initializing the cell cycle clock of each recruited cell at randomly generated times ., We observe no differences in extension speeds , sprout morphology , or the number of cells recruited as a result of the assumption we make for cell cycle initialization ( zero or random ) ., This suggests that , in the model , stalk cell proliferation and cell recruitment from the parent vessel are complementary mechanisms for sprout extension ., By adjusting key model parameters , we are able to simulate various morphogenic phenomena ., For example , by increasing the chemotactic sensitivity of cells in the sprout stalk and decreasing the parameter controlling cellular adhesion to the matrix , we are able to
capture stalk cell migration and translocation along the side of a developing sprout ( Video S1 ) ., This phenomenon , where stalk cells weaken their adhesive bonds to the extracellular matrix and instead use cell-cell adhesion to facilitate rapid migration , frequently occurs in embryogenesis ( personal communication with C . Little ) and is described as preferential migration to stretched cells 43 ., Compare Video S1 with Figure 1 ( f ) in Szabo et al . 2007 43 ., Figure S1 shows the morphology for one particular set of parameter values corresponding to weaker cell-cell and cell-matrix adhesion and stronger chemotaxis ., In this simulation , cells elongate to approximately 40 µm in length , fewer cells are recruited from the parent vessel , and the average extension speed at 14 hours slows to 6 . 8 µm/hr ., The length scale is consistent with experimental measurements of endothelial cell elongation 23 , 44 ., Figure 5 from Oakley et al . 1997 shows images from experiments using human fibroblasts stained for actin ( e ) and tubulin ( f ) on micro-machined grooved substratum 45 ., These experiments demonstrate that cells alter their shape , orientation , and polarity to align with the direction of the grooves ( double-headed arrow ) , exhibiting topographic , or contact , guidance ., Figure S2 is a simulation designed to mimic these experiments by isolating the cellular response to topographical guidance on similarly patterned substratum ., In this simulation , there is no chemotaxis and no cell-cell contact; cells respond only to topographical cues in the extracellular matrix ., Simulated cells alter their shape and orient in the direction of the matrix fibers ., Figure S2 bears a striking resemblance to the cell shapes observed in 45 ., We are also able to simulate interstitial invasion/migration by a single cell by turning off proliferation and cell recruitment but leaving all other parameters unchanged ( Video S2 ) ., This simulation is especially relevant in
the context of fibroblast recruitment during wound healing and tumor cell invasion ( e . g . , glioblastoma , the most malignant form of brain cancer 46 ) , where cell-matrix interactions and directed motility are critical mechanisms for highly motile or invasive cell phenotypes ., We design a set of numerical experiments allowing us to observe the onset of angiogenesis in extravascular environments of varying matrix fiber density ., We consider matrix fiber densities given as a fraction of the total interstitial area ., As a measure of matrix orientation equivalency , the total fiber orientation in both the x and the y direction is calculated as we increase the matrix density ., The total x and total y fiber orientation do not vary with changes in total matrix density ., Besides varying the matrix density , all other parameters are held fixed ., All simulations last the same duration corresponding to approximately 14 hours ., The average rate at which the sprout grows and migrates , or its average extension speed | Introduction, Methods, Results, Discussion | The extracellular matrix plays a critical role in orchestrating the events necessary for wound healing , muscle repair , morphogenesis , new blood vessel growth , and cancer invasion ., In this study , we investigate the influence of extracellular matrix topography on the coordination of multi-cellular interactions in the context of angiogenesis ., To do this , we validate our spatio-temporal mathematical model of angiogenesis against empirical data , and within this framework , we vary the density of the matrix fibers to simulate different tissue environments and to explore the possibility of manipulating the extracellular matrix to achieve pro- and anti-angiogenic effects ., The model predicts specific ranges of matrix fiber densities that maximize sprout extension speed , induce branching , or interrupt normal angiogenesis , which are independently confirmed by experiment ., We then explore
matrix fiber alignment as a key factor contributing to peak sprout velocities and mediating cell shape and orientation ., We also quantify the effects of proteolytic matrix degradation by the tip cell on sprout velocity and demonstrate that degradation promotes sprout growth at high matrix densities , but has an inhibitory effect at lower densities ., Our results are discussed in the context of ECM targeted pro- and anti-angiogenic therapies that can be tested empirically . | A cell migrating in the extracellular matrix environment has to pull on the matrix fibers to move ., When the matrix is too dense , the cell secretes enzymes to degrade the matrix proteins in order to get through ., And when the matrix is too sparse , the cell produces matrix proteins to locally increase the “foothold” ., How cells interact with the extracellular matrix is important in many processes from wound healing to cancer invasion ., We use a computational model to investigate the influence of matrix topography on cell migration and coordination in the context of tumor-induced new blood vessel growth ., The model shows that the density of the matrix fibers can have a strong effect on the extension speed and the morphology of a new blood vessel ., Further results show that matrix degradation by the cells can enhance vessel sprout extension at high matrix density , but impede sprout extension at low matrix density ., These results can potentially point to new targets for pro- and anti-angiogenesis therapies . | cardiovascular disorders/vascular biology, developmental biology/morphogenesis and cell biology, cell biology/morphogenesis and cell biology, cell biology/cell growth and division, mathematics, cell biology/cell adhesion, computational biology/systems biology | null |
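As a concrete picture of the density sweep described in this record's Results — matrix fiber densities prescribed as a fraction of the total interstitial area , with no orientation bias — the following is a minimal sketch in plain NumPy ., The lattice size , density values , and function name are our own illustrative choices , not the paper's actual cellular Potts implementation:

```python
import numpy as np

def random_fiber_matrix(shape, density, rng):
    """Binary fiber lattice: 1 = matrix fiber, 0 = interstitial fluid.

    `density` is the prescribed fraction of lattice sites occupied by
    fibers; uniform random placement introduces no orientation bias,
    so total x and y fiber orientation stay balanced across densities.
    """
    n_sites = shape[0] * shape[1]
    n_fibers = int(round(density * n_sites))
    lattice = np.zeros(n_sites, dtype=np.uint8)
    lattice[:n_fibers] = 1
    rng.shuffle(lattice)               # scatter the fibers uniformly
    return lattice.reshape(shape)

rng = np.random.default_rng(0)
densities = (0.1, 0.3, 0.5)            # illustrative sweep values
realized = []
for rho in densities:
    m = random_fiber_matrix((100, 100), rho, rng)
    realized.append(float(m.mean()))   # realized fiber fraction
```

By construction the realized fiber fraction matches the prescribed density exactly , so a sweep over densities changes only the occupancy of the lattice while leaving its statistical orientation unchanged .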
674 | journal.pcbi.1004970 | 2,016 | Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D Space | The animal brain is the most complex information processing system in living organisms ., To elucidate how real nervous systems perform computations is one of the fundamental goals of neuroscience and systems biology ., The wiring information for neural circuits and visualization of their activity at cellular resolution are required for achieving this goal ., Advances in microscopy techniques in recent years have enabled whole-brain activity imaging of small animals at cellular resolution 1–4 ., The wiring information of all the neurons in the mouse brain can be obtained using recently developed brain-transparentization techniques 5–9 ., Detection of neurons from microscopy images is necessary for optical measurements of neuronal activity or for obtaining wiring information ., Because there are many neurons in the images , methods of automatic neuron detection , rather than manual selection of ROIs ( regions of interest ) , are required and several such methods have been proposed 10 , 11 ., Detection of cells that are distributed in three-dimensional ( 3D ) space is also important in other fields of biology such as embryonic development studies 12–17 ., In these methods , cell nuclei are often labeled by fluorescent probes and used as a marker of a cell ., To identify nuclei in such images , the basic method is blob detection , which for example consists of local peak detection followed by watershed segmentation ., If the cells are sparsely distributed , blob detection methods are powerful techniques for nucleus detection ., However , if two or more cells are close to each other , the blobs are fused , and some cells will be overlooked ., These false negatives may be trivial for the statistics of the cells but may strongly affect individual measurements such as those of neuronal activity ., Overlooking some nuclei should be avoided when subsequent 
analyses assume that all the cells were detected , for example , when making a wiring diagram of neurons or establishing a cell lineage in embryonic development ., Therefore , correct detection of all nuclei from images without false negatives is a fundamental problem in the field of bio-image informatics ., Although many efforts have been made to develop methods that avoid such false negatives , these methods seem insufficient to overcome the problem ., In the head region of Caenorhabditis elegans , for example , the neuronal nuclei are densely packed and existing methods produce many false negatives , as shown below ., Actually , in the studies of whole-brain activity imaging of C . elegans reported so far , a local peak detection method that can overlook many nuclei was employed 3 , 18 , or the nuclei were manually detected 19 , 20 ., Highly accurate automatic nucleus detection methods should be developed in order to improve the efficiency and accuracy of such image analysis ., Here we propose a highly accurate automatic nucleus detection method for densely distributed cell nuclei in 3D space ., The proposed method is based on a newly developed clump splitting method suitable for 3D images and improves the detection of all nuclei in 3D images of neurons of nematodes ., A combination of this approach with a Gaussian mixture fitting algorithm yields highly accurate locations of densely packed nuclei and enables automatic tracking and measuring of these nuclei ., The performance of the proposed method is demonstrated using various images of densely-packed head neurons of nematodes , which were obtained with various types of microscopes ., In this study , we focused on the head neurons of the soil nematode C .
elegans , which constitute the major neuronal ensemble of this animal 21 ., All the neuronal nuclei in a worm of strain JN2100 were visualized by the red fluorescent protein mCherry ., The head region of the worm was imaged by a confocal microscope , and we obtained 3D images of 12 animals ( Data 1 , Fig 1A ) ., The shape of the nuclei was roughly ellipsoidal ( Fig 1B ) ., The fluorescence intensity increased toward the centers of the nuclei ( Fig 1D ) ., The typical half-radius of the nuclei was about 1 . 10 μm ( S1 Fig ) ., The distance to the nearest neighboring nucleus was 4 . 30 ± 2 . 13 μm ( mean and standard deviation , S1 Fig ) , suggesting that the neurons are densely distributed in 3D space ., The mean fluorescence intensities differed among neurons by one order of magnitude ( S1 Fig ) , making it difficult to detect a darker nucleus near a bright nucleus ., We first applied conventional blob detection techniques to the 3D image ( Fig 1C–1E ) ., Salt-and-pepper noise and background intensities were removed from the image ., The image was smoothed to avoid over-segmentation ( Fig 1C and 1D ) ., Local intensity peaks in the preprocessed image were detected and used as seeds for 3D seeded grayscale watershed segmentation ., Each segmented region was regarded as a nucleus ( Fig 1E ) ., We found that dark nuclei in high-density regions often escaped detection ., If the dark nucleus was adjacent to a bright nucleus , the fluorescence of the bright nucleus overlapped that of the dark one , and the local intensity peak in the dark nucleus was masked ( Fig 1D ) ., As a result , the seed for the dark nucleus was lost , and the dark nucleus fused with the bright nucleus ( Fig 1E ) ., The rate of false-negative nuclei was 18 . 
9% ., In contrast , our proposed method successfully detected and segmented the dark nuclei ( Fig 1F ) ., The shapes of the nuclei are roughly ellipsoidal , and the fluorescence intensity increases toward the centers of the nuclei , suggesting that the intensity of the nuclei can be approximated by a mixture of trivariate Gaussian distributions ., The intensity f_k of the k-th Gaussian distribution g_k at voxel position x ∈ R^3 can be written as f_k ( x ) = π_k g_k ( x | μ_k , Σ_k ) = π_k exp ( − ( 1/2 ) ( x − μ_k ) ^T Σ_k^−1 ( x − μ_k ) ) , where μ_k and Σ_k are the mean vector and covariance matrix of g_k , respectively , and π_k is an intensity scaling factor ., To explain the effect on the curvature , typical bright and dark nuclei were approximated by the Gaussian distribution and are shown in Fig 2 as iso-intensity contour lines ( Fig 2A , 2C and 2E ) and plots of the intensity along the cross section ( Fig 2B , 2D and 2F ) ., When a bright nucleus was near a dark nucleus , the peak intensity of the dark nucleus merged with the tail of the fluorescence intensity distribution of the bright nucleus and no longer formed a peak ., These false negatives can be avoided by using methods for dividing a close pair of objects , or clump splitting ., Such methods have been developed for correct detection of objects in two-dimensional ( 2D ) images 22–26 ., These methods focus on the concavity of the outline of a blob ., The concavity was calculated based on one or a combination of various measurements such as angle 25 , area 27 , curvature 26 , and distance measurements 24 of the outline ., In these methods , after binarization of the image , concavity was obtained for each point on the outline ., Then the concave points were determined as the local peaks of the concavity ., After determination of concave points , a line connecting a pair of concave points is regarded as the boundary between the objects ., When we regard the outermost contour line in Fig 2E as the outline of the fused blob ( Fig
2G ) , the conventional 2D clump splitting method can be easily applied and two concave points are detected from the fused blob ( Fig 2G , red circles ) ., The blob was divided into two parts by a border line connecting the two points , and the dark nucleus was detected ., In the ideal case in Fig 2E , we obtained the necessary and sufficient number of concave points ., In real images , however , we might obtain too many concave points because outlines often contain noise and are not smooth ., Moreover , the number of concave points to choose is unknown because it is hard to know how many nuclei are included in a blob in a real image ., Further , it is not obvious how to find the correct combinations of concave points to be linked if a blob contains three or more objects ., In addition , for 3D images , the concept of border lines that connect two concave points cannot be naturally expanded to three dimensions , because extra processes , such as connecting groups of concave points to form border surfaces , would be needed ., Even if we regard a 3D image as a stack of 2D images , it is hard to split objects fused in the z direction ( direction of the stacks ) 11 , 27 ., Here we introduce a concept of areas of concavity instead of concave points ( i . e . local peaks of concavity ) ., Hereafter we use curvature as a measure of concavity and focus on areas of negative curvature for simplicity and clarity , but other measures such as angle , area , and distance from the convex hull may be applicable ., Furthermore , we use the iso-intensity contour lines inside the object in addition to the outline of the object ., Near the concave points in Fig 2E , the iso-intensity contour lines have negative curvature; i . e .
, they curve in the direction of low intensities ., Negative curvature may be a landmark of the border line because a single Gaussian distribution has positive curvature everywhere ., Actually , the voxels at which an iso-intensity contour line has negative curvature were between two Gaussian distributions ( Fig 2E , area between the broken lines ) ., Once these voxels are removed from the blob , detection of two nuclei should be straightforward ., This approach differs from the classic clump splitting methods in two respects: it focuses on areas rather than local peaks of concavity ( concave points ) , and it uses iso-intensity contour lines in addition to the outline ., These differences eliminate the need for determining how many concave points should be chosen and for obtaining correct combinations of the concave points , because the area of negative curvature will cover the border lines ., Therefore we can use the approach even if a blob contains three or more objects ., In addition , this approach is robust to noise because it does not depend on a single contour line ., Furthermore , this approach can be expanded to 3D images naturally because the 3D area ( i . e .
voxels ) of negative curvatures will cover the border surfaces of the 3D objects ., Iso-intensity contour lines in 2D images are parts of iso-intensity contour surfaces in three dimensions ., A point on an iso-intensity surface has two principal curvatures , which can be calculated from the intensities of surrounding voxels ( S2 Text ) 28 ., The smaller of the two principal curvatures is positive at any point in a single Gaussian distribution but is negative around the border of two Gaussian distributions ., Therefore , once voxels that have negative curvature are removed from the blob , two or more nuclei should be detected easily in 3D images ., Thus our approach solves the above problems of the classic clump splitting methods ., We applied the above approach to real 3D images ( Fig 3 ) ., The original images were processed by denoising , background removal , and smoothing to obtain the preprocessed images ., The peak detection algorithm could find only a peak from the bright nucleus , and the blob obtained by watershed segmentation contained both nuclei ., The principal curvatures of the iso-intensity surface were calculated from the preprocessed image ., There were voxels of negative curvature in the area between two nuclei , but the area did not divide the two nuclei completely ., The voxels of negative curvature were removed from the blob , and the blob was distance-transformed; these procedures were followed by 3D watershed segmentation ., Thus , the two nuclei were separated , and the dark nucleus was successfully detected ., After voxels of negative curvature were removed from the blobs , the size of blobs obtained by the second watershed segmentation tended to be smaller than real nuclei , and the distances between the blobs tended to be larger ., To obtain the precise positions and sizes of the nuclei , least squares fitting with a Gaussian mixture was applied to the entire 3D image using a newly developed method ( see Methods ) ., The number of Gaussian 
distributions and the initial values of the centers of the distributions were derived from the above results ., Repeated application of watershed segmentation may increase over-segmentation ., If the distance between two fitted Gaussian distributions is too small , the two distributions may represent the same nuclei ., In this case , one of the two distributions was removed to avoid over-segmentation , and the fitting procedure was repeated with a single Gaussian distribution ., The proposed method detected 194 out of 198 nuclei in the 3D image ( Fig 4 ) ., Among the four overlooked nuclei , the intensities of two of them were too low to be detected ., The other two had moderate intensities but were adjacent to brighter nuclei ., In these cases , curvature-based clump splitting successfully split the two nuclei ., However , deviations of the brighter nuclei from Gaussian distributions disrupted the fitting of the Gaussian distributions and resulted in misplacement of the Gaussian distributions for the darker nuclei , which were instead fitted to the brighter nuclei ., On the other hand , the proposed method returned 11 false positives ., Two of them resulted from the misplacement of the Gaussian distribution for the darker nuclei described above ., Four of them were not neuronal nuclei but were fluorescence foci intrinsic to the gut ., Three of them were the result of over-segmentation of complex-shaped non-neuronal nuclei ., One of them was mislocalized fluorescence in the cytosol ., The last one was the result of over-segmentation of a large nucleus that was fitted with two Gaussian distributions separated by a distance larger than the cutoff distance ., We compared the performance of the proposed method with five previously published methods for nucleus segmentation ( Fig 5 and Table 1 ) ., Ilastik 29 is based on machine learning techniques and uses image features such as Laplacian of Gaussian ., FARSight 30 is based on graph cut techniques ., RPHC 1 was 
designed for multi-object tracking problems such as whole-brain activity imaging of C . elegans and uses a numerical optimization-based peak detection technique for object detection ., The 3D watershed plugin in ImageJ 31 consists of local peak detection and seeded watershed ., This method is almost the same as the conventional blob detection method used in our proposed method ., CellSegmentation3D 32 uses gradient flow tracking techniques and was developed for clump splitting ., This method has been used in a study of automated nucleus detection and annotation in 3D images of adult C . elegans 33 ., We applied these six methods to 12 animals in Data 1 ( Fig 5 ) and obtained the performance indices ( Table 1 , see Methods ) ., The parameters of each method were optimized for the dataset ., The 3D images in the dataset contain 190 . 92 nuclei on average , based on manual counting ., The proposed method found 96 . 9% of the nuclei and the false negative rate was 3 . 1% , whereas the false negative rate of the other methods was 11 . 2% or more ., The false positive rate of the proposed method was 4 . 9% and that of the other methods ranged from 2 . 1% to 21 .
2% ., The proposed method shows the best performance with both of the well-established indices , F-measure 12 and Accuracy 34 , because of its very low false negative rate and modest false positive rate ., It should be noted that all of the compared methods overlooked more than 10% of the nuclei in our dataset ., The reason for this was suggested by the segmentation results , in which almost all of these methods failed to detect the dark nuclei near the bright nuclei and fused them ( Fig 5 , right column ) ., These results suggest that all the compared methods have difficulty in handling 3D images with either large variance of object intensity or dense packing of objects , or both ( S1 Fig ) ., These results clearly indicate that our proposed method detects densely distributed cell nuclei in 3D space with the highest accuracy ., The very low false negative rate is the most significant improvement of the proposed method over the other methods , suggesting that the proposed method will drastically improve the efficiency and accuracy of image analysis ., Because none of the computational image analysis methods is perfect , experimenters should be able to correct any errors they find ., Therefore , a user-friendly graphical user interface ( GUI ) for visualization and correction of the results is required ., We developed a GUI called RoiEdit3D for visualizing the result of the proposed method and correcting it manually ( S2 Fig ) ., Because RoiEdit3D is based on ImageJ/Fiji 35 , 36 in MATLAB through Miji 37 , experimenters can use the familiar interface and tools of ImageJ directly ., Developers can extend the functionality using a favorite framework chosen from various options such as ImageJ macros , Java , MATLAB scripts , and C++ ., Interfacing with downstream analyses should be straightforward because the corrected results are saved in the standard MATLAB data format and can be exported to Microsoft Excel ., Three-dimensional images are shown as trihedral figures
using the customized Orthogonal View plugin in ImageJ ( S2 Fig ) ., Fitted Gaussian distributions are shown as ellipsoidal regions of interest ( ROIs ) in each view ., The parameters of the Gaussian distributions are shown in the Customized ROI Manager window in tabular form ., The Customized ROI Manager and trihedral figures are linked , and selected ROIs are highlighted in both windows ., When the parameters of the distributions or the names of nuclei are changed in the Customized ROI Manager window , the corresponding ROIs in the trihedral figures are updated immediately ., Least squares fitting with a Gaussian mixture can be applied after ROIs are manually removed or added ., RoiEdit3D can be used for multi-object tracking ., The fitted Gaussian mixture at a time point is used as an initial value for the mixture at the next time point , and a fitting procedure is executed ( Fig 6A ) ., Additionally , the intensities of nuclei can be obtained as parameters of the fitted Gaussian distributions ., We tried to track and measure the fluorescence intensity of nuclei in real time-lapse 3D images ( Data2 ) ., The animal in the image expressed a calcium indicator , so neural activity during stimulation with the sensory stimulus , sodium chloride , could be measured as changes in the fluorescent intensity ., The proposed nucleus detection method was applied to the first time point in the image and found 194 nuclei out of 198 nuclei ., Seventeen false positives and four false negatives were corrected manually using RoiEdit3D ., Then the nuclei in the time-lapse 3D image were tracked by the proposed method ., Most of the nuclei were successfully tracked ., One or more tracking errors occurred in 27 nuclei during 591 frames , and the success rate was 86 . 4% , which is comparable to that in the previous work 1 ., The tracking process takes 19 . 83 sec per frame ( total 3 . 
25 hr ) ., The ASER gustatory neuron was successfully identified and tracked in the time-lapse 3D image by the proposed method ( Fig 6B ) ., The ASER neuron reportedly responds to changes in the sodium chloride concentration 38 , 39 ., We identified a similar response of the ASER neuron using the proposed method ( Fig 6C ) ., This result indicates that the proposed method can be used for multi-object tracking and measuring , which is an essential function for whole-brain activity imaging ., Furthermore , the proposed method was utilized to measure the fluorescence intensity of nuclei in time-lapse 2D images ( Data 3 ) ., The proposed nucleus detection method was applied to the image for the first time point ( S3 Fig ) ., Data 3 does not contain images of a highly-localized nuclear marker , and therefore the images of the calcium indicator , which was weakly localized to the nuclei , were used instead ., The proposed method found 7 nuclei out of 9 nuclei ., Six false positives and two false negatives were corrected manually using RoiEdit3D ., Then the nuclei were tracked by the proposed method ., All of the nuclei were successfully tracked during 241 time frames ., The ASER neuron was successfully identified and tracked in the 2D images ., The response of the ASER neuron in the 2D images ( S3 Fig ) is similar to that in the 3D images ., This result indicates that the proposed method can be used for multi-object tracking and measuring of 2D images as well as 3D images ., In this article , we proposed a method that accurately detects neuronal nuclei densely distributed in 3D space ., Our GUI enables visualization and manual correction of the results of automatic detection of nuclei from 3D images as well as 2D images ., Additionally , our GUI successfully tracked and measured multiple objects in time-lapse 2D and 3D images ., Thus , the proposed method can be used as a comprehensive tool for analysis of neuronal activity , including whole-brain activity imaging ., Although the
microscopy methods for whole-brain activity imaging of C . elegans have been intensively developed in recent years 3 , 18–20 , computational image analysis methods have remained underdeveloped ., In these works , the neuronal nuclei in the whole-brain activity imaging data were detected either manually or automatically by peak detection ., Manual detection is the most reliable but time- and labor-consuming , whereas the accuracy of the automatic peak detection is relatively low because dark nuclei near bright nuclei are overlooked ., Our proposed method will reduce the difficulty and improve the accuracy ., Furthermore , the numbers of the neuronal nuclei found or tracked in these four works were fewer than the real number of neuronal nuclei 3 , 18–21 ., The scarcity may be due not only to experimental limitations such as fluctuation of fluorescent protein expression or low image resolution , but also to the limitations of the image analysis methods that may overlook nuclei ., The proposed method can detect almost all the nuclei in our whole-brain activity imaging data ( Fig 6 ) , suggesting that the proposed method can avoid errors that may be caused by overlooking nuclei , such as erroneous measurements of neural activities and misidentifications of neuron classes ., Thus , our method will be highly useful for this purpose ., Peng and colleagues have intensively developed the computational methods for automatic annotation of cell nuclei in C .
elegans 33 , 40 , 41 ., Although their methods successfully annotate cells in many tissues such as body wall muscles and intestine , the methods do not seem applicable to annotation of head neurons in adult worms , which is highly desired in the field of whole-brain activity imaging 20 ., They pointed out that the positions of neuronal nuclei in adult worms are highly variable 33 , and this may be one of the reasons for the difficulty ., The accuracy of detection and segmentation of neuronal nuclei may be another reason , because CellSegmentation3D , which was incorporated in their latest annotation framework 33 , shows compromised performance in our dataset ( Table 1 , Fig 5 ) ., Our proposed method improves the accuracy of neuronal nucleus detection and will promote development of automatic annotation methods for these neurons ., It is noteworthy that the method of simultaneous detection and annotation of cells 41 is unique and useful in the studies of C . elegans ., Because the method assigns the reference positions to the sample image directly and avoids the detection step , it finds cells without overlooking them under some conditions , but would not work correctly under large variation in the number or relative positions of the nuclei , both of which are observed in our dataset ., The optimal method for accurate detection of nuclei will vary depending on the characteristics of the nuclei ., Many conditions such as the visualization method , shape , and distribution of nuclei will affect these characteristics ., In our case , the distributions of the fluorescence intensity of nuclei were similar to Gaussian distributions; thus , we developed an optimal method for such cases ., Even if an original image does not have these characteristics , some preprocessing steps such as applying a Gaussian smoothing filter may enable application of our method to the image ., Although choosing the optimal method and tuning its parameters might be more work than manual
identification , the automatic detection method would improve objectivity and efficiency ., In the field of biology , it is often the case that hundreds or thousands of animals should be analyzed equally well ., In such cases , manual detection would be time-consuming and the automatic detection method would be required ., For tracking the nuclei in time-lapse images , we can apply the detection method to each time frame separately and then link the detected nuclei between frames ., In this case , some false negatives and false positives would be separately produced for each frame , and they might disrupt the link step , resulting in an increase in tracking errors ., On the other hand , in the proposed method , the result of the automatic detection could be corrected manually , resulting in a decrease in tracking errors ., The proposed tracking method is a simple approach ., Combining it with existing , more sophisticated tracking methods will likely improve its tracking performance ., Cell division and cell death did not occur in our data , but they are fundamental problems in the analysis of embryonic development ., It may be important to improve our method if it is to be applied to these problems so that the method handles such phenomena appropriately ., C . elegans strains JN2100 and JN2101 were used in this study ., Animals were raised on nematode growth medium at 20°C ., E . coli strain OP50 was used as a food source ., We used three datasets in this study ., Data1 and 2 contain ~200 neuronal nuclei , and Data3 contains 9 nuclei ., The positions of the centers of the nuclei were manually corrected by experimental specialists using the proposed GUI ., The blobs of the nuclei were detected by the conventional method ( Steps 1 & 2 ) ., Under-segmented blobs were detected and split in Step 3 ., The precise positions and sizes of the nuclei were obtained in Step 4 .
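Step 4 above refines nucleus positions by least squares fitting of Gaussian distributions. The core idea, that a Gaussian-shaped intensity profile yields a sub-voxel center estimate, can be illustrated with a minimal 1D sketch; this is an illustration only, not the authors' 3D Gaussian mixture procedure, and `gaussian` and `subvoxel_center` are hypothetical helpers:

```python
import math

def gaussian(x, amplitude, center, sigma):
    """Sampled 1D Gaussian intensity profile."""
    return amplitude * math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def subvoxel_center(profile):
    """Refine the peak position of a Gaussian-like intensity profile
    to sub-voxel precision by fitting a parabola to the log-intensities
    around the brightest (interior) sample."""
    i = max(range(len(profile)), key=lambda k: profile[k])
    lm = math.log(profile[i - 1])
    l0 = math.log(profile[i])
    lp = math.log(profile[i + 1])
    return i + 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Synthetic 1D profile of one nucleus with true center at 12.3 voxels.
profile = [gaussian(x, 100.0, 12.3, 2.0) for x in range(25)]
center = subvoxel_center(profile)   # ~12.3
```

For exact Gaussian samples the log-intensities lie on a parabola, so the three-point vertex formula recovers the center exactly; the authors' method generalizes this idea to full least squares fitting of a 3D Gaussian mixture.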
The names and parameter values of the filters used in the proposed method are shown in S1 Table ., The performance of the proposed method for cell detection was compared with five state-of-the-art methods: Ilastik , FARSight , RPHC , the 3D watershed plugin in ImageJ , and CellSegmentation3D ., Ilastik is a machine learning-based method and requires training data , which was created manually ., The parameters of RPHC were the same as in the literature 1 ., The parameters of the other methods were optimized based on F-measure and accuracy ., The parallel displacements of the raw 3D images of 12 animals in Data 1 were corrected , and the methods were applied to the images ., Because FARSight crashed during processing , its command line version ( segment_nuclei . exe ) was used 50 ., The input images for FARSight and CellSegmentation3D were converted to 8-bit images because they could not operate with 32-bit grayscale images ., For CellSegmentation3D , because it could not operate with our whole 3D image , the input images were divided and processed separately ., The comparison was performed and the processing time was measured on the same PC as that used for the proposed method ., All the methods other than CellSegmentation3D might be able to utilize multi-threading ., The centroids of the segmented regions obtained by each program were used as the representative points of the objects ., For the proposed method , the means of the fitted Gaussian distributions ( μk ) were used as the representative points ., The Euclidean distances between the representative points and the manually annotated Ground Truth points were obtained ., If a representative point was the nearest neighbor of a Ground Truth point and vice versa , the object was regarded as a True Positive ., If only the former condition was met , the Ground Truth was regarded as a False Negative ., If only the latter condition was met , the object was regarded as a False Positive ., We obtained the performance indices 12 , 34 , 50 as follows:
True positive rate = TP / GT , False positive rate = FP / GT , False negative rate = FN / GT , F-measure = 2 × TP / ( 2 × TP + FN + FP ) , Accuracy = TP / ( TP + FN + FP ) , where GT = TP + FN ., GT , TP , FP and FN mean Ground Truth , True Positive , False Positive and False Negative , respectively . | Introduction, Results, Discussion, Methods | To measure the activity of neurons using whole-brain activity imaging , precise detection of each neuron or its nucleus is required ., In the head region of the nematode C . elegans , the neuronal cell bodies are distributed densely in three-dimensional ( 3D ) space ., However , no existing computational methods of image analysis can separate them with sufficient accuracy ., Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces ., To obtain accurate positions of nuclei , we also developed a new procedure for least squares fitting with a Gaussian mixture model ., Combining these methods enables accurate detection of densely distributed cell nuclei in a 3D space ., The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection ., Additionally , the proposed method was applied to time-lapse 3D calcium imaging data , and most of the nuclei in the images were successfully tracked and measured . | To reach the ultimate goal of neuroscience , understanding how each neuron functions in the brain , whole-brain activity imaging techniques with single-cell resolution have been intensively developed ., There are many neurons in the whole-brain images and manual detection of the neurons is very time-consuming ., However , the neurons are often packed densely in the 3D space and existing automatic methods fail to correctly split the clumps ., In fact , in previous reports of whole-brain activity imaging of C .
elegans , the number of detected neurons was less than expected ., Such scarcity may be a cause of measurement errors and misidentification of neuron classes ., Here we developed a highly accurate automatic cell detection method for densely-packed cells ., The proposed method successfully detected almost all neurons in whole-brain images of the nematode ., Our method can be used to track multiple objects and enables automatic measurements of the neuronal activities from whole-brain activity imaging data ., We also developed a visualization and correction tool that is helpful for experimenters ., Additionally , the proposed method can be a fundamental technique for other applications such as making wiring diagrams of neurons or establishing a cell lineage in embryonic development ., Thus , our framework supports effective and accurate bio-image analyses . | invertebrates, fluorescence imaging, engineering and technology, caenorhabditis, neuroscience, animals, animal models, caenorhabditis elegans, human factors engineering, model organisms, neuroimaging, research and analysis methods, computer and information sciences, imaging techniques, animal cells, man-computer interface, calcium imaging, graphical user interface, cellular neuroscience, cell biology, computer architecture, neurons, nematoda, biology and life sciences, cellular types, image analysis, organisms, user interfaces | null |
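The evaluation protocol described above (mutual nearest-neighbor matching of representative points against Ground Truth, followed by the F-measure and Accuracy indices) can be sketched as follows; this is a minimal illustration with synthetic 3D points, and `match_detections` and `indices` are hypothetical helpers rather than the code used in the study:

```python
import math

def match_detections(detected, truth):
    """Mutual-nearest-neighbor matching between detected points and
    ground-truth points (lists of (x, y, z) tuples): a detection counts
    as a True Positive only when it and a Ground Truth point are each
    other's nearest neighbors. Remaining detections are False Positives
    and remaining Ground Truth points are False Negatives."""
    def nearest(p, points):
        return min(range(len(points)), key=lambda i: math.dist(p, points[i]))
    tp = sum(1 for i, d in enumerate(detected)
             if nearest(truth[nearest(d, truth)], detected) == i)
    return tp, len(detected) - tp, len(truth) - tp

def indices(tp, fp, fn):
    """Performance indices as defined in the text (GT = TP + FN)."""
    f_measure = 2 * tp / (2 * tp + fn + fp)
    accuracy = tp / (tp + fn + fp)
    return f_measure, accuracy

detected = [(0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (9.0, 0.0, 0.0)]
truth = [(0.0, 0.0, 0.0), (5.0, 5.1, 5.0), (20.0, 20.0, 20.0)]
tp, fp, fn = match_detections(detected, truth)   # (2, 1, 1)
```

The mutual condition prevents a single bright detection from claiming several nearby Ground Truth points at once, which matters for densely packed nuclei.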
1,750 | journal.ppat.0030137 | 2,007 | A Virtual Look at Epstein–Barr Virus Infection: Biological Interpretations | Computer simulation and mathematical modeling are receiving increased attention as alternative approaches for providing insight into biological and other complex systems 1 ., An important potential area of application is microbial pathogenesis , particularly in cases of human diseases for which applicable animal models are lacking ., To date , most simulations of viral pathogenesis have tended to focus on HIV 2–7 , and employ mathematical models based on differential equations ., None have addressed the issue of acute infection by the pathogenic human herpes virus Epstein–Barr virus ( EBV ) and its resolution into lifetime persistence ., With the ever-increasing power of computers to simulate larger and more complex systems , the possibility arises of creating an in silico virtual environment in which to study infection ., We have used EBV to investigate the utility of this approach ., EBV is a human pathogen , associated with neoplastic disease , that is a paradigm for understanding persistent infection in vivo and for which a readily applicable animal model is lacking ( reviewed in 8 , 9 ) ., Equally important is that EBV infection occurs in the lymphoid system , which makes it relatively tractable for experimental analysis and has allowed the construction of a biological model of viral persistence that accounts for most of the unique and peculiar properties of the virus 10 , 11 ., We are therefore in a position to map this biological model onto a computer simulation and then ask how accurately it represents EBV infection ( i . e . 
, use our knowledge of EBV to test the validity of the simulation ) and whether the matching of biological observation and simulation output provides novel insights into the mechanism of EBV infection ., Specifically , we can ask if it is possible to identify critical switch points in the course of the disease where small changes in behavior have dramatic effects on the outcome ., Examples of this would be the switch from clinically silent to clinically apparent infection and from benign persistence to fatal infection ( as occurs in fatal acute EBV infection and the disease X-linked lymphoproliferative 12 , for example ) , or to clearance of the virus ., Indeed , is clearance ever possible , or do all infections lead inevitably to either persistence or death ?, Such an analysis would be invaluable ., Not only would it provide insight into the host–virus balance that allows persistent infection , but it would also reveal the feasibility and best approaches for developing therapeutic interventions to diminish the clinical symptoms of acute infection , prevent fatal infection , and/or clear the virus ., A diagrammatic version of the biological model is presented in Figure 1 ., EBV enters through the mucosal surface of the Waldeyer ring , which consists of the nasopharyngeal tonsil ( adenoids ) , the paired tubal tonsils , the paired palatine tonsils , and the lingual tonsil arranged in a circular orientation around the walls of the throat ., Here EBV infects and is amplified in epithelium ., It then infects naïve B cells in the underlying lymphoid tissue ., The components of the ring are all equally infected by the virus 13 ., EBV uses a series of distinct latent gene transcription programs , which mimic a normal B cell response to antigen , to drive the differentiation of the newly infected B cells ., During this stage , the infected cells are vulnerable to attack by cytotoxic T cells ( CTLs ) 14 ., Eventually , the latently infected B cells enter the peripheral 
circulation , the site of viral persistence , as resting memory cells that express no viral proteins 15 and so are invisible to the immune response ., The latently infected memory cells circulate between the periphery and the lymphoid tissue 13 ., When they return to the Waldeyer ring they are occasionally triggered to terminally differentiate into plasma cells ., This is the signal for the virus to begin replication 16 , making the cells vulnerable to CTL attack again 14 ., Newly released virions may infect new B cells or be shed into saliva to infect new hosts , but are also the target of neutralizing antibody ., Primary EBV infection in adults and adolescents is usually symptomatic and referred to as acute infectious mononucleosis ( AIM ) ., It is associated with an initial acute phase in which a large fraction ( up to 50% ) of circulating memory B cells may be latently infected 17 ., This induces the broad T lymphocyte immune response characteristic of acute EBV infection ., Curiously , primary infection early in life is usually asymptomatic ., In immunocompetent hosts , infection resolves over a period of months into a lifelong persistent phase in which ∼1 in 10^5 B cells carry the virus 18 ., Exactly how persistent infection is sustained is unclear ., For example , once persistence is established , it is unknown if the pool of latently infected memory B cells is self-perpetuating or if a low level of new infection is necessary to maintain it ., Indeed , we do not know for sure that the pool of latently infected B cells in the peripheral memory compartment is essential for lifetime persistence ., It is even unclear whether the virus actually establishes a steady state during persistence or continues to decay , albeit at an ever slower rate 17 ., In the current study we describe the creation and testing of a computer simulation ( PathSim ) that recapitulates essential features of EBV infection ., The simulation has predictive power and has utility for experiment
design and understanding EBV infection ., One practical limitation of available simulation and modeling approaches has been their inaccessibility to the working biologist ., This is often due to the use of relatively unfamiliar computer interfaces and output formats ., To address these issues , we have presented the simulation via a user-friendly visual interface on a standard computer monitor ., This allows the simulation to be launched and output to be accessed and analyzed in a visual way that is simple and easily comprehensible to the non-specialist ., The computer model ( PathSim ) is a representation of the biological model described in the Introduction ., A schematic version of both is shown in Figure 1 ., To simulate EBV infection , we created a virtual environment consisting of a grid that describes a biologically meaningful topography , in this case the Waldeyer ring ( five tonsils and adenoids ) and the peripheral circulation , which are the main sites of EBV infection and persistence ., The tonsils and adenoids were composed of solid hexagonal base units representing surface epithelium , lymphoid tissue , and a single germinal center/follicle ( Figure 2A–2C; Video S1 ) ., Each hexagonal unit had one high endothelial venule ( HEV ) entry point from the peripheral blood and one exit point into the lymphatic system ( Figure 2A ) ., Discrete agents ( cells or viruses ) reside at the nodes ( red boxes ) of the 3-D grid ( white lines ) ., There they can interact with other agents and move to neighboring nodes ., Agents are assessed at regular , specified time intervals as they move and interact upon the grid ., Virtual cells were allowed to leave the Waldeyer ring via draining lymphatics and return via the peripheral blood and HEVs ( Figure 2A and 2B; Video S1 ) as in normal mucosal lymphoid tissue 19 ., A brief summary list of the agents employed in our simulation , and their properties and interactions , is given in Table 1 ., In this report we refer to 
actual B cells as , for example , “B cells” , “latently infected B cells” , or “lytically infected B cells” , and their virtual representations as “virtual B cells” , “BLats” , or “BLyts” ., Similarly , we refer to actual virus as virions and their virtual counterparts as virtual virus or Vir ., A full description of the simulation , including a complete list of agents , rules , the default parameters that produce the output described below , and a preliminary survey of the extended parameter space is presented in M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ., Here , we will first present a description of how the virtual environment was visualized and then focus on a comparison of simulation output with the known biological behavior of the virus ., Simulation runs were accessed through an information-rich virtual environment ( IRVE ) ( Figures 2 and 3; Videos S1 and S2 ) , which was invoked through a Web interface ., This provided a visually familiar , straightforward context for immediate comprehension of the spatial behavior of the system 20 ., It also allowed specification of parameters , run management , and ready access to data output and analysis ., Figure 3 demonstrates how the time course of infection may be visualized ., Usually the simulation was initialized by a uniform distribution of Vir over the entire surface of the Waldeyer ring , thereby seeding infection uniformly ., However , in the simulation shown in Figure 3A , virtual EBV was uniformly deposited only on the lingual tonsil ., Figure 3B–3D shows the gradual spread of virtual infection ( intensity of red color indicating the level of free Vir ) to the adjacent tonsils ., It can be seen in this case that the infection spreads uniformly to all the tonsils at once , implying that it was spreading via BLats returning from the blood compartment and reactivating to 
become BLyts , rather than spreading within the ring ., Examples of infectious spread between and within the tonsils can also be seen in Video S2 ., In this paper we present a comprehensive model of EBV infection that effectively simulates the overall dynamics of acute and persistent infection ., The fact that this simulation can be tuned to produce the course of EBV infection suggests that it models the basic processes of this disease ., To achieve this , we have created a readily accessible , virtual environment that appears to capture most of the salient features of the lymphoid system necessary to model EBV infection ., Achieving infection dynamics that reflect an acute infection followed by recovery to long-term low-level persistent infection seems to require access of the virus to a blood compartment where it is shielded from immunosurveillance ., Because we cannot perform a comprehensive parameter search ( due to the very large parameter space involved ) , we cannot unequivocally state that the blood compartment is essential ., What is clear though , is that persistence is a very robust feature in the presence of a blood compartment , and that we could not achieve an infection process that even remotely resembles typical persistent EBV infection in its absence ., The areas in which the simulation most closely follows known biology are summarized in Table 2 and include the peak time of infection , 33–38 d , compared to the incubation time for AIM of 35–50 d 21 ., This predicts that patients become sick and enter the clinic at or shortly after peak infection in the peripheral blood , a prediction confirmed by our patient studies , where the numbers of infected B cells in the periphery always decline after the first visit 17 ., An important feature of a simulation is its predictive power ., Our analysis predicted that access to the peripheral memory compartment is essential for long-term persistence ., This is consistent with recent studies on patients with 
hyper-IgM syndrome 31 ., Although these individuals lack classical memory cells , they can be infected by EBV; however , they cannot sustain persistent infection and the virus becomes undetectable ., Unfortunately , those studies did not include a sufficiently detailed time course to see if time to virus loss coincided with the simulation prediction of 1–2 mo ., Another area where the simulation demonstrated its predictive power was in the dynamics of viral replication ., In the simulation it was unexpectedly observed that the level of Vir production plateaued long before BLats , predicting that the levels of virus shedding , unlike latently infected cells , will have leveled off by the time AIM patients arrive in the clinic ., This prediction , which contradicted the common wisdom that virus shedding should be high and decline rapidly in AIM patients , was subsequently confirmed experimentally ( V ., Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , D . Thorley-Lawson ( 2007 ) On the dynamics of acute EBV infection and the pathogenesis of infectious mononucleosis ( unpublished data ) and see also 22 , 23 ) ., The simulation also quite accurately reproduces the relatively large variation in virus production over time , compared to the stability of B latent ., This difference is likely a consequence of stochasticity ( random variation ) having a relatively larger impact on virus production ., This is because the number of B cells replicating the virus at any given time is very small , both in reality and the simulation , compared to the number of infected B cells , but the number of virions they release when they do burst is very large ., This difference may reflect on the biological requirements for persistence of the virus since a transient loss in virus production due to stochasticity can readily be overcome through recruitment from the pool of B latents ., However , a transient loss of B latents would mean clearance of the virus ., Hence , 
close regulation of B latent but not virion levels is necessary to ensure persistent infection ., Although there is now a growing consensus that EBV infects normal epithelial cells in vivo 27–29 , the biological significance of this infection remains unclear ., The available evidence suggests that epithelial cell infection may not be required for long-term persistence 25 , 26 , and this is also seen in the simulation ., The alternate proposal is that epithelial infection might play an important role in amplifying the virus , during ingress and/or egress , as an intermediary step between B cells and saliva ., This is based on the observation that the virus can replicate aggressively in primary epithelial cells in vivo 30 ., In the simulation , epithelial amplification had no significant effect on the ability of Vir to establish persistence ., This predicts that epithelial amplification does not play a critical role in entry of the virus , but leaves open the possibility that it may be important for increasing the infectious dose present in saliva for more efficient infection of new hosts ., The simulation is less accurate in the precise quantitation of the dynamics ., Virtual acute infection resolves significantly more slowly and persistence is at a higher level than in a real infection ., In addition , virtual persistent infection demonstrates clear evidence of oscillations in the levels of infected cells that have not been detected in a real infection ., The most likely explanation for these discrepancies is that we have not yet implemented T cell memory ., Thus , as the levels of virtual infected cells drops , the immune response weakens , allowing Vir to rebound while a new supply of virtual CTLs is generated ., Immunological memory would allow a more sustained T cell response that would produce a more rapid decline of infected cells , lower levels of sustained persistence , and tend to flatten out oscillatory behavior , thus making the simulation more 
quantitatively accurate ., This is one of the features that will be incorporated into the next version of our simulation ., It remains to be determined what additional features need to be implemented to sharpen the model and also whether and to what extent the level of representation we have chosen is necessary for faithful representation of EBV infection ., Our simulation of the Waldeyer ring and the peripheral circulation was constructed with the intent of modeling EBV infection ., Conversely , our analysis can be thought of as the use of EBV to validate the accuracy of our Waldeyer ring/peripheral circulation simulation and to evaluate whether it can be applied to other pathogens ., Of particular interest is the mouse gamma herpesvirus MHV68 32 , 33 ., The applicability of MHV68 as a model for EBV is controversial ., Although it also persists in memory B cells 34 , it appears to lack the sophisticated and complicated latency states that EBV uses to access this compartment ., However , one of the simplifications in our simulation is that the details of these different latency states and their transitions are all encompassed within a single concept , the BLat ., We have also assumed a time line whereby a newly infected BLat becomes activated and CTL sensitive , migrates to the follicle , and exits into the circulation , where it is no longer seen by our virtual CTLs ., In essence , we have generalized the process by which the virus proceeds from free virion to the site of persistence in such a way that it may be applicable to both EBV and MHV68 ., Thus , we might expect that the overall dynamics of infection may be similar even though detailed biology may vary ., As a first step to test if this concept had value , we performed an analysis based on studies with MHV68 where it was observed that the levels of infected B cells at persistence were unaffected by the absolute amount of input virus at the time of infection 35 ., When this parameter was varied in the 
simulation , we saw the same outcome ., This preliminary attempt raises the possibility that the mouse virus may be useful for examining quantitative aspects of EBV infection dynamics ., The last area we wished to investigate was whether we could identify biologically meaningful “switch” points , i . e . , places in time and space where relatively small changes in critical parameters dramatically affect outcome , for example , switching from persistence to clearance to death ., We have observed one such switch point—reactivation of BLats upon return to the Waldeyer ring—that rapidly switches the infection process from persistence to death ., How this might relate to fatal EBV infection , X-linked lymphoproliferative disease , is uncertain ., However , viral production is a function both of how many B cells initiate reactivation and how efficiently they complete the process ., We believe that most such cells are killed by the immune response before they release virus 16 , so defects in the immune response could allow more cells to complete the viral replication process and give the same fatal outcome ., The ability to find such conditions for switch points could be very useful in the long term for identifying places in the infection process where the virus might be optimally vulnerable to drug intervention ., The easiest place to target EBV is during viral replication; however , it is currently unclear whether viral replication and infection are required for persistence ., It may be that simply turning off viral replication after persistence is established fails to eliminate the virus because the absence of new cells entering the pool through infection is counterbalanced by the failure of infected cells to disappear through reactivation of the virus ., If , however , a drug allowed abortive reactivation , then cells would die without producing infectious virus and new infection would be prevented ., This models the situation that would arise with a highly effective 
drug or viral mutant that blocked a critical stage in virion production ( e . g . , viral DNA synthesis or packaging ) , so that reactivation caused cell death without release of infectious virus ., A similar effect could be expected with a drug or vaccine that effectively blocked all new infection ., This is another case in which studies with the mouse virus , where non-replicative mutants can be produced and tested , may be informative as to whether and to what extent infection is required to sustain the pool of latently infected B cells and persistence ., The simulation could then be used to predict how effective an anti-viral that blocked replication , or a vaccine that induced neutralizing antibodies , would need to be at reducing new infection in order to cause EBV to be lost from the memory pool ( for a more detailed discussion of this issue see M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ) ., Most modeling of virus infection to date has tended to focus on HIV and use differential equations 2–7 ., One such study involved EBV infection 36 , but to our knowledge none outside of our group has addressed the issue studied here of acute EBV infection and how it resolves into lifetime persistence ., In preliminary studies of our own , modeling EBV infection with differential equations that incorporate features common to the HIV models , with parameters physiologically reasonable for EBV did not produce credible dynamics of infection ( K . 
Duca , unpublished observations ) ., Although we do not exclude the possibility that such models may be useful for simulating EBV , we took an agent-based approach because it is intuitively more attractive to biologists ., Such models are increasingly being recognized as an effective alternative way to simulate biological processes 37–39 and have several advantages ., The main advantage is that the “agent” paradigm complies by definition with the discrete and finite character of biological structures and entities such as organs , cells , and pathogens ., This makes it more accurate , from the point of view of scientific modeling ., It is also less abstract since the simulated objects , processes , and interactions usually have a straightforward biological interpretation and the spatial structure of the anatomy can be modeled meticulously ., The stochasticity inherent to chemical and biological processes can be incorporated in a natural way ., Lastly , it is generally much easier to incorporate qualitative or semi-quantitative information into rule sets for discrete models than it is for such data to be converted to accurate rate equations ., The major drawback to agent-based models is that there is currently no mathematical theory that allows for rigorous analysis of their dynamics ., Currently , one simply runs such simulations many times and performs statistical analyses to assess their likely behaviors ., Developing such a mathematical theory remains an important goal in the field ., In summary , we have described a new computer simulation of EBV infection that captures many of the salient features of acute and persistent infection ., We believe that this approach , combined with mouse modeling ( MHV68 ) and EBV studies in patients and healthy carriers , will allow us to develop a more profound understanding of the mechanism of viral persistence and how such infections might be treated and ultimately cleared ., Details of the AIM patient populations tested have
been published previously 17 ., Adolescents ( ages 17–24 ) presenting to the clinic at the University of Massachusetts at Amherst Student Health Service ( Amherst , Massachusetts , United States ) with clinical symptoms consistent with acute infectious mononucleosis were recruited for this study ., Following informed consent , blood and saliva samples were collected at presentation and periodically thereafter ., Diagnosis at the time of presentation to the clinic required a positive monospot test and the presence of atypical lymphocytes 21 ., Confirmation of primary Epstein–Barr infection required the detection of IgM antibodies to the EBV viral capsid antigen in patient sera 40 ., These studies were approved by the Human Studies Committee at the University of Massachusetts Medical School ( Worcester , Massachusetts , United States ) and by the Tufts New England Medical Center and Tufts University Health Sciences Institutional Review Board ., All blood samples were diluted 1:1 in 1x PBS ., The technique for estimating the absolute number of latently infected B cells in the peripheral blood of patients and healthy carriers of the virus is a real-time PCR–based variation of our previously published technique 17 , the details of which will be published elsewhere ( V . Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , et al . ( 2007 ) On the dynamics of acute EBV infection and the origins of infectious mononucleosis ( unpublished data ) ) ., To measure the absolute levels of virus shedding in saliva , individuals were asked to rinse and gargle for a few minutes with 5 ml of water and the resultant wash processed for EBV-specific DNA PCR using the same real-time–based PCR technique ., We have performed extensive studies to standardize this procedure that will be detailed elsewhere ( V . Hadinoto , M . Shapiro , T . Greenough , J . Sullivan , K . Luzuriaga , et al .
( 2007 ) On the dynamics of acute EBV infection and the origins of infectious mononucleosis ( unpublished data ) ) ., In the simulation , B cells are either uninfected ( BNaïve ) , latently infected ( BLat ) , or replicating virtual virus ( BLyt ) ; we do not distinguish blast and memory B cells ., In the biological model , newly infected B cells in the lymphoepithelium of the Waldeyer ring pass through different latency states , which are vulnerable to attack by cytotoxic T cells ( CTL latent ) ., Subsequently , they become memory B cells that enter the peripheral circulation and become invisible to the immune response by turning off viral protein expression ., In the simulation , all these latency states are captured in the form of a single entity , the BLat ., In addition , the blood circulation and lymphatic system are both represented as abstract entities that only allow for transport of BNaïves and BLats around the body ., Virtual T cells are restricted to the Waldeyer ring ., This simplification is based on the assumption that , in the biological model , EBV-infected cells entering the peripheral circulation are normal and invisible to CTLs , because the virus is inactive , and therefore the peripheral circulation simply acts as an independent pool of and a conduit for B latent ., Operationally , therefore , BLats escape TLats in the simulation simply by entering the peripheral circulation ., Consequently , unlike the biological model , BLats are vulnerable to TLats whenever they reenter the lymph node ., Each agent ( e . g . 
, Vir or a BNaïve ) has a defined life span , instructions for movement , and functions that depend on which other agents they encounter ( for example , if a Vir encounters a BNaïve , it infects it with some defined probability ) ., The agents , rules , and parameters used are based on known biology wherever possible with simplifications ( see above ) where deemed appropriate ., A brief description and discussion of the agents and their rules is given in Table 1 ., A detailed listing is provided in M . Shapiro , K . Duca , E . Delgado-Eckert , V . Hadinoto , A . Jarrah , et al . ( 2007 ) A virtual look at Epstein-Barr virus infection: simulation mechanism ( unpublished data ) ., At each time point ( 6 min of real time ) , every agent is evaluated and appropriate actions are initiated ., The simulation is invoked through a Web interface ( IRVE; see movies linked to Figure 2 , and 20 ) that allows a straightforward visual , familiar , and scalable context for access to parameter specification , run management , data output , and analysis ., This has the additional advantage that it readily allows comprehension of the spatial behavior of the system ( e . g . , “how does the infection spread ? 
” ) ., The simulation may also be invoked from the command line ., Through the Web , users can process simulation data for output and analysis by a number of common applications such as Microsoft’s Excel , University of Maryland’s TimeSearcher 41 , and MATLAB ., We have developed display components that encapsulate multiple-view capabilities and improved multi-scale interface mappings ., The IRVE is realized in the international standard VRML97 language ., The simulation can be rerun and reanalyzed using a normal VCR-type control tool , which allows the operator , for example , to fast forward , pause , rewind , or drag to a different time point , and to play back runs or analyze simulation output dynamically ., In the IRVE , any spatial object ( including the global system ) can be annotated with absolute population numbers ( as a time plot and/or numeric table ) or proportional population numbers ( as a bar graph ) for any or all of the agents ., Spatial objects themselves can be animated by heat-map color scales ., The intensity of the color associated with each agent is a measure of the absolute level of the agent; so , for example , as the level of free Vir increases , so will the level of intensity of the associated color ( in this case red ) both within the single units and in the entire organ ., In our simulation we manage multiple views of the dynamic population values through a higher order annotation called a PopView ( population view ) ., A PopView is an interactive annotation that provides three complementary representations of the agent population ., The representations can be switched through in series by simple selection ., The default view is a color-coded bar graph where users can get a quick , qualitative understanding of the agent populations in a certain location at that time step ., The second is a field-value pair text panel , which provides numeric readouts of population levels at that time step ., The third is a line graph where the
population values for that region are plotted over time ., Because of the large number of time points and the large number of grid locations , the IRVE manages an integrated information environment across two orders of magnitude: “Macro” and “Micro” scales ., Through the standard VRML application the user has a number of options including free-navigational modes such as: fly , pan , turn , and examine ., This allows users to explore the system , zooming in and out of anatomical structures as desired ., In addition , the resulting visualization space is navigable by predefined viewpoints , which can be visited sequentially or randomly through menu activation ., This guarantees that all content is accessible and users can recover from any disorientation ., The Visualizer manages Macro and Micro scale result visualizations using proximity-based filtering and scripting of scene logic ., As users approach a given anatomical structure , the micro-scale meshes and results are loaded and synchronized to the time on the user’s VCR controller .
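The per-tick agent rules described above ( for example , a Vir infecting a BNaïve it encounters with some defined probability , with each tick representing 6 minutes of real time ) can be illustrated with a minimal toy sketch ., The agent names follow the text , but the update scheme , probabilities , and burst size below are invented for illustration and do not reproduce the actual PathSim rule set or parameters:

```python
import random

# Toy version of one simulation tick (6 minutes of real time): counts of free
# virions (Vir) and B cells that are uninfected (BNaive), latently infected
# (BLat), or replicating virus (BLyt). All parameters are illustrative
# assumptions, not the published simulation values.
P_INFECT = 0.3   # chance a virion that meets a BNaive infects it
P_REACT = 0.02   # chance a BLat begins reactivation per tick
P_KILL = 0.5     # chance a reactivating cell is killed before releasing virus
BURST = 10       # virions released by a cell completing lytic replication

def step(state, rng):
    """Advance (Vir, BNaive, BLat, BLyt) counts by one tick."""
    vir, bnaive, blat, blyt = state
    # virion/naive-cell encounters: some BNaives become latently infected
    new_lat = sum(1 for _ in range(min(vir, bnaive)) if rng.random() < P_INFECT)
    # latent cells stochastically begin reactivation
    react = sum(1 for _ in range(blat) if rng.random() < P_REACT)
    # most reactivating cells are killed by CTLs before virus release
    survivors = sum(1 for _ in range(blyt) if rng.random() >= P_KILL)
    return (vir - new_lat + survivors * BURST,  # virions consumed and released
            bnaive - new_lat,
            blat + new_lat - react,
            react)                              # lytic pool turns over each tick
```

Iterating `step` from an initial seeding of virions yields a crude infection curve; the real simulation additionally moves agents over the anatomical grid of the Waldeyer ring and through the peripheral circulation .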
| Introduction, Results, Discussion, Methods | The possibility of using computer simulation and mathematical modeling to gain insight into biological and other complex systems is receiving increased attention ., However , it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is ., Epstein–Barr virus ( EBV ) provides a good candidate to address these issues ., It persistently infects most humans and is associated with several important diseases ., In addition , a detailed biological model has been developed that provides an intricate understanding of EBV infection in the naturally infected human host and accounts for most of the virus diverse and peculiar properties ., We have developed an agent-based computer model/simulation ( PathSim , Pathogen Simulation ) of this biological model ., The simulation is performed on a virtual grid that represents the anatomy of the tonsils of the nasopharyngeal cavity ( Waldeyer ring ) and the peripheral circulation—the sites of EBV infection and persistence ., The simulation is presented via a user friendly visual interface and reproduces quantitative and qualitative aspects of acute and persistent EBV infection ., The simulation also had predictive power in validation experiments involving certain aspects of viral infection dynamics ., Moreover , it allows us to identify switch points in the infection process that direct the disease course towards the end points of persistence , clearance , or death ., Lastly , we were able to identify parameter sets that reproduced aspects of EBV-associated diseases ., These investigations indicate that such simulations , combined with laboratory and clinical studies and animal models , will provide a powerful approach to investigating and controlling EBV infection , including the design of targeted anti-viral therapies . 
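When the outcome of a simulation run depends monotonically on a single critical parameter , a switch point of the kind described above can be located by bisection on that parameter ., The toy outcome rule below ( a threshold on a hypothetical effective reproduction number ) is invented for illustration and is not the simulation’s actual outcome function:

```python
def outcome(p_clear):
    """Toy rule: infection persists iff each infected cell yields > 1 successor.

    p_clear is a hypothetical per-cell clearance parameter, invented here.
    """
    reproduction = 2.0 * (1.0 - p_clear)
    return "persistence" if reproduction > 1.0 else "clearance"

def find_switch_point(lo, hi, tol=1e-6):
    """Bisect for the parameter value at which the outcome flips."""
    assert outcome(lo) != outcome(hi), "bracket must straddle the switch point"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if outcome(mid) == outcome(lo):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For this toy rule the boundary sits at p_clear = 0.5
switch = find_switch_point(0.0, 1.0)  # ≈ 0.5
```

For a stochastic agent-based model , `outcome` would instead run the simulation repeatedly at each candidate parameter value and report the majority end point ( persistence , clearance , or death ) .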
| The possibility of using computer simulation and mathematical modeling to gain insight into biological systems is receiving increased attention ., However , it is as yet unclear to what extent these techniques will provide useful biological insights or even what the best approach is ., Epstein–Barr virus ( EBV ) provides a good candidate to address these issues ., It persistently infects most humans and is associated with several important diseases , including cancer ., We have developed an agent-based computer model/simulation ( PathSim , Pathogen Simulation ) of EBV infection ., The simulation is performed on a virtual grid that represents the anatomy where EBV infects and persists ., The simulation is presented on a computer screen in a form that resembles a computer game ., This makes it readily accessible to investigators who are not well versed in computer technology ., The simulation allows us to identify switch points in the infection process that direct the disease course towards the end points of persistence , clearance , or death , and identify conditions that reproduce aspects of EBV-associated diseases ., Such simulations , combined with laboratory and clinical studies and animal models , provide a powerful approach to investigating and controlling EBV infection , including the design of targeted anti-viral therapies . | agent based model, infectious diseases, epstein-barr virus, computer simulation, pathology, virology, immunology, dynamics of infection, computational biology | null |
2,310 | journal.pcbi.1006716 | 2,019 | State-aware detection of sensory stimuli in the cortex of the awake mouse | The large majority of what we know about sensory cortex has been learned by averaging the response of individual neurons or groups of neurons across repeated presentations of sensory stimuli ., However , multiple studies in the last three decades have clearly demonstrated that sensory-evoked activity in primary cortical areas varies across repeated presentations of a stimulus , particularly when the sensory stimulus is weak or near the threshold for sensory perception 1–3 , and have suggested that this is an equally important aspect of sensory coding as the average response 4–6 ., Variability is thought to arise from a complex network-level interaction between sensory-driven synaptic inputs and ongoing cortical activity , and single-trial response variability is partially predictable from the ongoing activity at the time of stimulation ., A large body of work has focused on characterizing this relationship between notions of cortical “state” and sensory-evoked responses 7–13 , establishing some simple models of local cortical dynamics 14 ., Less is known about the impact of this relationship for downstream circuits ( though see 15 , 16 ) ., As an example , consider the detection of a sensory stimulus , which has been foundational in the human 17–22 and non-human primate psychophysical literature 23 , 24 and serves as one of the most widely utilized behavioral paradigms in rodent literature 25–27 ., In an attempt to link the underlying neural variability to behavior , the principal framework for describing sensory perception of stimuli near the physical limits of detectability is signal detection theory 28 ., A key prediction of signal detection theory is that , on single trials , detection of the stimulus is determined by whether the neural response to the stimulus crosses a threshold ., Particularly large responses would be detected but smaller 
responses would not , so variability in neural responses would lead to , and perhaps predict , variability in the behavioral response ., From the perspective of an ideal observer , if variability in the sensory-evoked response can be forecasted using knowledge of cortical state , the observer could potentially make better inferences , but in traditional ( state-blind ) observer analysis , the readout of the ideal observer is not tied to the ongoing cortical state ., In this work , using network activity recordings from the whisker sensitive region of the primary somatosensory cortex in the awake mouse , we develop a data-driven framework that predicts the trial-by-trial variability in sensory-evoked responses in cortex by classifying ongoing activity into discrete states that are associated with particular patterns of response ., The classifier takes as inputs features of network activity that are known to be predictive of single-trial response from previous studies 9 , 14 , as well as more complex spatial combinations of such features across cortical layers , to generate ongoing discrete classifications of cortical state ., We optimize the performance of this state classifier by systematically varying the selection of predictors ., Finally , embedding this classification of state in a state-aware ideal observer analysis of the detectability of the sensory-evoked responses , we analyze a downstream readout that changes its detection criterion as a function of the current state ., We find that state-aware observers outperform state-blind observers and , further , that they equalize the detection accuracy across states ., Downstream networks in the brain could use such an adaptive strategy to support robust sensory detection despite ongoing fluctuations in sensory responsiveness during changes in brain state ., The foundation upon which the state-aware observer is constructed is a prediction of the sensory-evoked cortical response ., This prediction is based on 
classifying elements of the ongoing , pre-stimulus activity into discrete “states , ” and the goal is to find the features of ongoing activity and the classification rules that generate the best prediction of sensory-evoked responses ., Treating this as a discrete problem was a methodological choice motivated by the rationale that such an approach could find rules that are not linear in the features of ongoing activity and could lend more flexibility in the rules relating features of ongoing activity to variability in the response ., The features of ongoing activity include the power spectrum of pre-stimulus LFP and the instantaneous “LFP activation” ( Fig 2A ) ., To describe sensory-evoked responses , we define a parameterization of the LFP response using principal components analysis ( Fig 2B ) ., The state classifier is a function that takes as inputs features of pre-stimulus LFP and produces an estimate of the principal component ( PC ) weights and thus of the single-trial evoked response ( Fig 2C ) ., In the following sections , we describe this process in detail ., Next , within the general class of pre-stimulus features considered–power ratio and LFP activation–we optimized several choices: the range of frequencies used to compute the power ratio; the cortical depth from which the ongoing LFP signal is taken; and possible combinations of LFP signals across the cortical depth ., Changes in pre-stimulus features resulted in changes in the boundaries between states , and ultimately in changes in prediction performance ., First , we varied the bounds of the low-frequency range ( “L range” , Fig 3A ) ., The increase in fVE was on average 0 . 09 ± 0 . 
05 ( N = 11 recordings ) ( Fig 3B; classifier boundaries shown in S2 Fig ) , with a significant increase in 10 of 11 recordings ( Fig 3C , asterisks ) ., We found that the optimal L range could extend to frequencies up to 40 Hz ( Fig 3C ) , with the median bounds of the optimal L being from 1 to 27 Hz ., Using for each recording the power ratio based on the optimized range of low-frequency power ( Fig 3 ) , we next determined where along the cortical depth the most predictive activity was and whether taking spatial combinations of LFP activity could improve the prediction ., Note that in this analysis , the channel for the stimulus-evoked response was held fixed ( L4 ) and thus the parameterization of the evoked response using principal components did not change , but the pre-stimulus channel was varied ., For each recording , we thus built a series of classifiers , using single- and multi-channel LFP activity from across the array ( Fig 4A , S3 Fig ) , which again were optimized for prediction of the single-trial L4 sensory-evoked response ., Classifiers built from a single channel of LFP performed best when the channel was near L4 ( Fig 4B , single example; Fig 4C , average profile ) ., Because the LFP represents a volume-conducted signal , we also examined the current source density ( CSD ) 34–36 , estimated on single trials using the kernel method 37 ., There was no improvement in fVE using CSD to build classifiers ( fVE difference , CSD minus LFP: -0 . 07; range: ( -0 . 12 , -0 . 
01 ) ) ., For each recording , we defined an optimal classifier channel based on the spatial profile of fVE for single-channel predictors ( Fig 4B; S3 Fig ) ., In the “pair” combination , we paired the optimal classifier channel with each of the other possible 31 channels ( Fig 4B; green dashed line ) ., We optimized the classifier in the 3-dimensional space defined by power ratio ( on the optimal channel only ) and LFP activation from each of the two channels and compared the fVE to that obtained using the optimal classifier channel only ( Fig 4D ) ., We found no improvement in the prediction using the pair combination compared to using the optimal channel alone ( Fig 4D , mean fVE difference: 0 . 00 ± 0 . 01; 0/11 recordings with significant change , pair vs . single ) or using more complex combinations of channels ( S3 Fig ) ., To summarize , we optimized classifiers based on pre-stimulus features to predict single-trial sensory-evoked LFP responses in S1 cortex of awake mice ., We found that the classifier performance was improved by changing the definition of the power ratio ( L/W ) such that the low-frequency range ( L ) extended from 1 Hz to 27 Hz , depending on the recording , which differed from the range typically used from anesthetized recordings in S1 ( 1–5 Hz ) 8 , 9 ., We also found that the most predictive pre-stimulus LFP activation was near layer 4 ., After establishing a clear enhanced prediction of the single-trial stimulus-evoked response within the LFP by considering the pre-stimulus activity , we investigated the impact of this relationship on the detection of sensory stimuli from cortical LFP activity using a state-aware ideal observer analysis ., We first considered a simple matched-filter detection scheme 38 in which the ideal observer operated by comparing single-trial evoked responses to the typical shape of the sensory evoked response ( Methods , Detection ) ., The matched filter was defined by the trial-average evoked LFP response , and 
this filtered the raw LFP ( Fig 5A ) to generate the LFP score ( Fig 5B ) ., For the state-blind observer , a detected event was defined as a peak in the LFP score that exceeded a fixed threshold ( Fig 5B , stars ) ., The LFP score distributions from time periods occurring during known stimulus-evoked responses and from the full spontaneous trace were clearly distinct but overlapping ( Fig 5C ) , and detected events ( Fig 5B , stars ) included both “hits” ( detection of a true sensory input ) and “false alarms” ( detection of a spontaneous fluctuation as a sensory input ) ., Next , using the state classifier constructed in the first half of the paper , we analyzed the performance of a state-aware observer on a reserved set of trials , separate from those used for fitting and optimizing the state classifiers ( Methods ) ., Specifically , using the optimized state classifier ( Figs 3 and 4 ) , we continuously classified “state” at each time point in the recording ( Fig 5D ) ., The state-aware observer detects events exceeding a threshold , which changed as a function of the current state ( Fig 5E ) ., Instead of a single LFP score distribution , we now have one for each predicted state ( Fig 5F ) , leading to many possible strategies for setting the thresholds for detecting events across states ., In general , the overall hit rate and false alarm rate will depend on hits and false alarms in each individual state ( Fig 6A and 6B for single example; S4 and S5 Figs show all recordings ) , as well as the overall fraction of time spent in each state ( Fig 6A , inset ) ., We walk through the analysis for a single example , selected as one of the clearest examples of how state-aware detection worked ., While this example recording shows a relatively large improvement , it is not the recording with the largest improvement , and , moreover , the corresponding plots for all recordings are shown in S4 and S5 Figs ., To compare between traditional ( state-blind ) and state-aware 
observers , we compared hit rates at a single false alarm rate , determined for each recording as the false alarm rate at which 80%-90% detection was achieved by a state-blind ideal observer ., To select thresholds for the state-aware observer , we systematically varied the thresholds in state 1 and state 3 , while adjusting the state-2 threshold such that average false alarm rate was held constant ., For each combination of thresholds , we computed the overall hit rate ( Fig 6C ) ., For the example recording highlighted in Fig 6 , the state-aware observer ( hit rate: 96% ) outperformed the traditional one ( hit rate: 90% ) ., This worked because the threshold in state 3 could be increased with very little decrease in the hit rate ( Fig 6B ) , and this substantially decreased the false alarm rate in state 3 ( Fig 6A ) ., Because the overall false alarm rate is fixed , this meant more false alarms could be tolerated in states 1 and 2 ., Consequently , thresholds in states 1 and 2 could be decreased , which increased their hit rates ., Across recordings , we found that the state-aware observer outperformed the state-blind observer in 9 of 11 recordings ( Fig 6D; S4 and S5 Figs ) ., Hit rates slightly but significantly increased from a baseline of 81% for the state-blind observer to 84% for state-aware detection , or an average change of +3 percentage points ( SE: 3%; signed-rank test , p < 0 . 01 , N = 11 ) ., The overall change in hit rate reflects both the fraction of time spent in each state ( some fixed feature of an individual mouse ) and the changes in state-dependent hit rates ., To separate these factors , we analyzed the hit rate of the state-blind and state-aware observers by computing , for each observer , the hit rate conditioned on each pre-stimulus state ( Fig 6E ) ., For this recording , the state-blind observer had very low hit rate in state 1 and high hit rates in states 2 and 3 .
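The state-aware readout analyzed above can be sketched end to end: classify the pre-stimulus state from a low-band/wide-band power ratio ( the optimized low band extended to roughly 27 Hz ) , then compare the matched-filter LFP score against a state-dependent threshold ., The band edges , state boundaries , and thresholds below are placeholder values for illustration , not the per-recording fitted ones:

```python
import numpy as np

def lfp_score(lfp, template):
    """Matched-filter score: cross-correlate the LFP with the mean evoked response."""
    return np.convolve(lfp, template[::-1], mode="same")

def power_ratio(segment, fs, low=(1.0, 27.0), wide=(1.0, 50.0)):
    """Ratio of low-band to wide-band power in a pre-stimulus window."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2
    lo = power[(freqs >= low[0]) & (freqs <= low[1])].sum()
    wi = power[(freqs >= wide[0]) & (freqs <= wide[1])].sum()
    return lo / wi

def classify_state(ratio, edges=(0.4, 0.7)):
    """Discretize the power ratio into one of three states (placeholder edges)."""
    return int(np.searchsorted(edges, ratio))  # 0, 1, or 2

def detect(score_at_stim, state, thresholds=(0.5, 1.0, 1.5)):
    """State-aware rule: a lower threshold in the low-response state."""
    return score_at_stim > thresholds[state]
```

A state-blind observer corresponds to `thresholds` being the same in all three states; the state-aware variant trades false alarms between states at a fixed overall false alarm rate , as in the analysis above .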
In comparison , hit rates were similar across the three state for the state-aware observer ( Fig 6D ) ., Thus , in state 1 ( smallest responses , blue ) , we observed a large increase in the hit rate depending on whether the observer used state-blind or state-aware thresholds ., Averaged across all recordings , the state-1 hit rates increased from 60% to 76% , which is a relative increase of 26% ( SE 11% ) ., Because this is weighted by the fraction of time spent in state 1 , the overall impact on the hit rate is smaller ., Hit rates increased slightly on average in state 2 ( + 2% , SE 4% ) and decreased slightly in state 3 ( -7% , SE 9% ) ., The net impact of this is that across the majority of recordings , the cross-state range of hit rates for the state-blind ideal observer was much larger than that for the state-aware ideal observer ( Fig 6D and 6F; 19% , average state-blind minus state-aware hit rate range in percentage points ( SE: 5% ) ; p < 0 . 01 , signed-rank test , N = 11 ) ., Thus , while the overall differences between state-aware and state-blind hit rates are modest , the state-aware observer has more consistent performance across all pre-stimulus states than a state-blind observer ., Due to the rapid development of tools that enable increasingly precise electrophysiology in the awake animal , there is a growing appreciation that the “awake brain state” encompasses a wide range of different states of ongoing cortical activity , and that this has a large potential impact on sensory representations during behavior 39–44 ., Here , we constructed a framework for the prediction of highly variable , single-trial sensory-evoked responses in the awake mouse based on a data-driven classification of state from ongoing cortical activity ., In related work , past studies have used some combination of LFP/MUA features to predict future evoked MUA response 9 , 14 ., We used a similar approach for state classification and response prediction in cortical recordings 
in the awake animal , extending this to allow complex combinations of ongoing activity in space and different features of the pre-stimulus power spectrum as predictors ., We found that simple features of pre-stimulus activity sufficed to enable state classification that yielded single-trial prediction of sensory evoked responses ., These predictive features were analogous to the synchronization and phase variables found in previous studies 8 , 9 , 14 , though we found a revised definition of synchronization was more predictive ., In particular , we found that the very low-frequency band of the LFP power spectrum ( 1–5 Hz ) was less predictive of single-trial evoked responses in our recordings than a wider band ( e . g . 1 to 27 Hz ) ., This is consistent with findings from a recent study 40 that surveyed the power spectrum of LFP across different behavioral states in the awake animal and demonstrated differences in the power spectrum between quiet and active wakefulness up to 20 Hz ., While we have focused on the problem of state classification and prediction from the perspective of an internal observer utilizing neural activity alone , future work could investigate whether the state classifier is also tracking external markers of changes in state , such as those indicated by changes in pupil diameter 42 , 45 , whisking 40 , or other behavioral markers in the awake animal ., We fit classifiers for each individual recording rather than pooling responses across animals and recording sessions ., The structure of the classification rules was similar across recordings , showing that the relationship between pre-stimulus features and evoked responses is robust ., This suggests that a single classifier could be fit , once inputs and outputs are normalized to standard values ., This normalization could be accomplished by determining the typical magnitude of LFP sensory responses and rescaling accordingly ., Moreover , the ordered structure of the classification rules 
suggests that a continuous model of state , rather than a discrete model , would have worked as well ., To implement this as a continuous model , one would fit a regression of the evoked response coefficients using as independent variables LFP activation and power ratio ., Judging by the classification boundaries shown in S1 and S2 Figs , keeping only linear terms in activation and power ratio would give a good prediction ., In its current formulation , this framework utilizes only the features of ongoing cortical activity that are reflected in the LFP in order to classify state and predict the evoked LFP response ., Both as features underlying the state classifier and as the sensory-evoked response being predicted , LFP must be interpreted carefully , as the details of how underlying sinks and sources combine depend on the local anatomy and population spiking responses 46 ., In barrel cortex , the early whisker-evoked LFP response ( 0 to 25 ms ) is characterized by a current sink in L4 initially driven by thalamic inputs to cortex , but also reflecting cortical sources of activity: the evoked LFP is highly correlated with the layer-4 multi-unit activity response 47 , 48 ., We restricted our predictive framework to the high degree of variability in this initial response ., It remains to determine how LFP response variability is reflected in the sensory-evoked single-unit cortical spiking activity patterns ., Further , regarding LFP as a predictor used by the state classifier , LFP is a mesoscopic marker of cortical state that neglects finer details of cortical state organization ., In addition to establishing whether better predictions are made from more detailed representations of cortical state , it is an interesting question how microcircuit spiking dynamics are related to the mesoscopic markers of cortical state , or how much can be inferred about population spiking dynamics from the LFP ., Finally , thalamic and cortical activity are tightly linked , and the
results presented here may also reflect variations in ongoing thalamic activity ., Disentangling thalamic and cortical sources of variability in the evoked response will require paired recordings and perturbative experimental approaches designed to address issues of causality ., In the second part of the paper , we used ideal observer analysis to show that state-aware observers , with oracle knowledge of the spontaneous , ongoing state fluctuations informative of the single-trial sensory-evoked response , can out-perform a state-blind ideal observer ., Our analysis relied on classification of the markers of ongoing state ., This is not to suggest that this specific estimation takes place in the brain , but instead could potentially be achieved dynamically by a downstream network through the biophysical properties of the circuitry ., Theoretically , the gain and threshold of such a readout neuron or network could be dynamically modified on the basis of the ongoing activity as a biophysical manifestation of the adaptive state-aware ideal observer , though the identification of specific mechanisms was beyond the scope of the current study ., We found that the state-aware observer had higher accuracy than the traditional , state-blind observer , but the absolute gain in hit rate ( at fixed false alarm rate ) averaged across all states was modest ., When pre-stimulus states were analyzed separately , however , we found that accuracy in the low-response state was substantially higher for the state-aware observer , where there was a relative increase of 25% in the hit rate for this state ., Because small sensory responses are predictable from the ongoing activity , transiently lowering the threshold for detection resulted in more “hits” in the low-response state , while false alarms in high-response states could be avoided by raising the threshold when the state changed ., However , the cortical activity was classified to be in this particular state approximately 20% of 
the time , and thus had a relatively modest impact on the overall performance , averaged across all states ., What is not currently known is the overall statistics associated with the state transitions ( i . e . distribution of time spent in each state , rate of transitions , etc . ) during engagement within perceptual tasks , but in any case , what we observe here is a normalization of detectability across brain states ., For near-threshold sensory perception , the signal detection theory framework asserts that single-trial responses are predictive of perceptual report 28 ., While there are many previous studies that seem to support this 49–52 , several animal studies have called this into question , showing that primary sensory neural activity does not necessarily co-vary with perceptual report on simple detection tasks 23 , 25 , 27 ., It is possible that the conflicting findings in the literature are due to behavioral state effects , and that more consistent reports would emerge if the analysis of the neural activity incorporated elements of the state-classification approach developed here ., Our results show how single-trial response size can be decoupled from perception , if a downstream network predicts and then accounts for the variability in sensory responses ., Moreover , our analysis showed that some states of pre-stimulus activity should be associated with higher or lower performance on a near-threshold detection task , which has been observed in near-threshold detection studies in the rodent 26 and monkey 24 ., It should be noted that there is controversy regarding the relevance of primary sensory cortex in simple behavioral tasks 53 , 54 , but this is likely related to the task difficulty 55 , where a large body of literature has resolutely shown that processing in primary cortical areas is critical for difficult tasks that increase cognitive load , and we suspect that near threshold stimuli such as those shown here fall in that category ., Many 
studies have demonstrated a link between pre-stimulus cortical activity and perceptual report on near-threshold detection tasks in humans 17 , 18 , 56–59 ., Currently , it is not entirely clear how far the parallel in cortical dynamics between the mouse and human can be taken ., One challenge is that connecting invasive recordings in the mouse to non-invasive recordings in human studies is non-trivial ., Here , at the level of LFP , we observed similarities between species in the interaction between ongoing and evoked activity: the largest evoked responses tended to be preceded by positive deflection in the LFP , and the smallest evoked responses were preceded by negative deflection in the LFP ., This relationship , the negative interaction phenomenon , points to a non-additive interaction between ongoing and evoked activity and is also observed in both invasive and non-invasive recordings in humans 33 , 56 , 60 , 61 ., Establishing parallels between cortical dynamics on a well-defined task , such as sensory detection , between humans and animal models is an important direction for future studies ., In summary , we have developed a framework for the prediction of variable single-trial sensory-evoked responses and shown that this prediction , based on cortical state classification , can be used to enhance the readout of sensory inputs ., Utilizing state-dependent decoders for brain-machine interfaces has been shown to greatly improve the readout of motor commands from cortical activity 62 , 63 , at the very end-stage of cortical processing ., Others have raised the possibility of using state knowledge to ‘cancel out’ variability in sensory brain-machine interfaces , with the idea that this could generate a more reliable and well-controlled cortical response 64 , 65 , which would in theory transmit information more reliably ., This is intriguing , though our analysis suggests a slightly different interpretation: if downstream circuits also have some knowledge of 
state , canceling out encoding variability may not be the appropriate goal ., Instead , the challenge is to target the response regime for each state ., This could be particularly relevant if structures controlling state , including thalamus 66 , are upstream of the cortical area in which sensory BMI stimulation occurs ., The simple extension of signal detection theory we explored suggests a solution to the problem that the brain faces at each stage of processing: how to adaptively read out a signal from a dynamical system constantly generating its own internal activity ., All procedures were approved by the Institutional Animal Care and Use Committee at the Georgia Institute of Technology ( Protocol Number A16104 ) and were in agreement with guidelines established by the National Institutes of Health ., Six nine- to twenty-six-week-old male C57BL/6J mice were used in this study ., Mice were maintained under 1–2% isoflurane anesthesia while being implanted with a custom-made head-holder and a recording chamber ., The location of the barrel column targeted for recording was functionally identified through intrinsic signal optical imaging ( ISOI ) under 0 . 5–1% isoflurane anesthesia ., Recordings were targeted to B1 , B2 , C1 , C2 , and D2 barrel columns ., Mice were habituated to head fixation , paw restraint and whisker stimulation for 3–7 days before proceeding to electrophysiological recordings ., Following termination of the recordings , animals were anesthetized ( isoflurane , 4–5% , for induction , followed by a euthanasia cocktail injection ) and perfused ., Local field potential was recorded using silicon probes ( A1x32-5mm-25-177 , NeuroNexus , USA ) with 32 recording sites along a single shank covering 775 μm in depth ., The probe was coated with DiI ( 1 , 1′-dioctadecyl-3 , 3 , 3′ , 3′-tetramethylindocarbocyanine perchlorate , Invitrogen , USA ) for post hoc identification of the recording site ., The probe contacts were coated with a PEDOT polymer 67 to increase signal-to-noise ratio ., Contact impedance measured between 0 . 3 MOhm and 0 . 7 MOhm ., The probe was inserted at a 35° angle relative to the vertical , to a depth of about 1000 μm ., Continuous signals were acquired using a Cerebus acquisition system ( Blackrock Microsystems , USA ) ., Signals were amplified , filtered between 0 . 3 Hz and 7 .
5 kHz and digitized at 30 kHz ., Mechanical stimulation was delivered to a single contralateral whisker corresponding to the barrel column identified through ISOI using a galvo motor ( Cambridge Technologies , USA ) ., The galvo motor was controlled with millisecond precision using custom software written in Matlab ( Mathworks , USA ) ., The whisker stimulus followed a sawtooth waveform ( 16 ms duration ) of various velocities ( 1000 deg/s , 500 deg/s , 250 deg/s , 100 deg/s ) delivered in the caudo-rostral direction ., To generate stimuli of different velocities , the amplitude of the stimulus was changed while its duration remained fixed ., Whisker stimuli of different velocities were randomly presented in blocks of 21 stimuli , with a pseudo-random inter-stimulus interval of 2 to 3 seconds and an inter-block interval of a minimum of 20 seconds ., The total number of whisker stimuli across all velocities presented during a recording session ranged from 196 to 616 stimuli ., For analysis , the LFP was down-sampled to 2 kHz ., The LFP signal entering the processing pipeline is raw , with no filtering beyond the anti-aliasing filters used at acquisition , enabling future use of these methods for real-time control ., Prior to the analysis , signal quality on each channel was verified ., We analyzed the power spectrum of LFP recorded on each channel for line noise at 60 Hz ., In some cases , line noise could be mitigated by fitting the phase and amplitude of a 60-Hz sinusoid , as well as harmonics up to 300 Hz , over a 500-ms period in the pre-stimulus epoch , then extrapolating the sinusoid over the stimulus window and subtracting ., A small number of channels displayed slow , irregular drift ( 2 or 3 of 32 channels ) and these were discarded ., All other channels were used ., Current source density ( CSD ) analysis was used for two different purposes: first , to functionally determine layers based on the average stimulus-evoked response , and second , to analyze the
pre-stimulus activity ( in single trials ) to localize sinks and sources generating the predictive signal ., We describe the general method used here ., Prior to computing the current source density ( CSD ) , each channel was scaled by its standard deviation to normalize impedance variation between electrodes ., We then implemented the kernel CSD method 37 to compute CSD on single trials ., This method was chosen because it accommodates irregular spacings between electrodes , which occurs when recordings on a particular contact do not meet quality standards outlined above ., To determine the best values for the kernel method parameters ( regularization parameter , λ; source extent in x-y plane , r; and source extent in z-plane , R ) we followed the suggestion of Potworowski ( 2012 ) and selected the parameter choices that minimize error in the reconstruction of LFP from the CSD ., These parameters were similar across recordings , so for all recordings we used: λ = 0 . 0316; r = 200μm; R = 37 . 
5μm ., The trial-averaged evoked response was computed by subtracting the pre-stimulus baseline ( average over 200 ms prior to stimulus delivery ) from each trial and then averaging across trials ., The CSD of this response profile was computed as described above ., The center of layer 4 was determined by finding the largest peak of the trial-averaged evoked LFP response as well as the location of the first , large sink in the trial-averaged sensory-evoked CSD response ., We assume a width of 205 μm for layer 4 , based on published values for mice 32 ., The matched filter ideal observer analysis 38 is implemented as follows ., The score $s(t)$ is constructed by taking the dot product of the evoked response $y_t$ with a filter matched to the average evoked response: $s(t) = y_t \cdot \xi_0$ ., This is equivalent to computing the sum $s(t) = \sum_{\tau=1}^{N_\xi} \left( x(t+\tau) - x(t) \right) \xi_0(\tau)$ ., In the standard encoding model , if $\eta$ is zero-mean white noise , this gives a signal distribution $P(s) \sim \mathcal{N}( \|\xi_0\|^2 , \sigma^2 )$ , where $\sigma^2 = \|\xi_0\|^2 \sigma_\eta^2$ , and a noise distribution with mean 0 ., In practice , we do not parameterize the distribution , because $\eta$ is not uncorrelated white noise , and work from the score distribution directly ., For the state-aware decoder , we use the prediction $\hat{\alpha}_{t,k}$ of evoked responses $y_t = \xi_0 + \sum_{k=1}^{N_C} \hat{\alpha}_{t,k} \xi_k + \eta$ ., This changes the score to $s(t) = \|\xi_0\|^2 + \sum_k \hat{\alpha}_{t,k} \, \xi_k \cdot \xi_0 + \eta \cdot \xi_0$ ., Typically , one of the first two PCs ( $\xi_1$ or $\xi_2$ ) has a very similar shape to $\xi_0$ , while the other one has both positive and negative components ( Fig 2 , S1 and S2 Figs ) ., For the state-aware threshold , we use state predictions for the component that is more similar to $\xi_0$ , as indicated in S1 and S2 Figs ., An event is detected at time $t$ for threshold $\theta$ when $s(t) > \theta$ is a local maximum that is separated from the nearest peak by at least 15 ms and has a minimum prominence ( i . e .
drop in s before encountering another peak that was higher than the original peak ) of |ξ | Introduction, Results, Discussion, Methods | Cortical responses to sensory inputs vary across repeated presentations of identical stimuli , but how this trial-to-trial variability impacts detection of sensory inputs is not fully understood ., Using multi-channel local field potential ( LFP ) recordings in primary somatosensory cortex ( S1 ) of the awake mouse , we optimized a data-driven cortical state classifier to predict single-trial sensory-evoked responses , based on features of the spontaneous , ongoing LFP recorded across cortical layers ., Our findings show that , by utilizing an ongoing prediction of the sensory response generated by this state classifier , an ideal observer improves overall detection accuracy and generates robust detection of sensory inputs across various states of ongoing cortical activity in the awake brain , which could have implications for variability in the performance of detection tasks across brain states . 
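As a toy illustration of the classification-then-prediction scheme summarized above ( the feature , the generative rule , and the bin boundaries below are hypothetical stand-ins , not the authors' fitted quantities ) , one can discretize a pre-stimulus feature into states and predict each trial's evoked response by its state mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-stimulus features for 300 trials:
# "activation" (mean pre-stimulus LFP) and a band "power ratio".
activation = rng.uniform(-1.0, 1.0, 300)
power_ratio = rng.uniform(0.0, 1.0, 300)

# Toy generative rule (assumption, for illustration only):
# negative pre-stimulus deflection -> larger evoked response.
evoked = 1.0 - 0.8 * activation + 0.1 * rng.standard_normal(300)

# Discretize the feature axis into states (here: terciles of activation).
edges = np.quantile(activation, [1 / 3, 2 / 3])
state = np.digitize(activation, edges)  # assigns 0, 1, or 2

# "Train": mean evoked response per state; "predict": look up by state.
state_means = np.array([evoked[state == s].mean() for s in range(3)])
prediction = state_means[state]

# A state-aware prediction should beat the state-blind (grand-mean) one.
mse_blind = np.mean((evoked - evoked.mean()) ** 2)
mse_aware = np.mean((evoked - prediction) ** 2)
assert mse_aware < mse_blind
```

Replacing the grand-mean prediction with per-state means is the essence of the state-aware prediction; the same comparison extends to both features ( activation and power ratio ) by binning the plane instead of a single axis .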
| Establishing the link between neural activity and behavior is a central goal of neuroscience ., One context in which to examine this link is in a sensory detection task , in which an animal is trained to report the presence of a barely perceptible sensory stimulus ., In such tasks , both sensory responses in the brain and behavioral responses are highly variable ., A simple hypothesis , originating in signal detection theory , is that perceived inputs generate neural activity that crosses some threshold for detection ., According to this hypothesis , sensory response variability would predict behavioral variability , but previous studies have not borne out this prediction ., Further complicating the picture , sensory response variability is partially dependent on the ongoing state of cortical activity , and we wondered whether this could resolve the mismatch between response variability and behavioral variability ., Here , we use a computational approach to study an adaptive observer that utilizes an ongoing prediction of sensory responsiveness to detect sensory inputs ., This observer has higher overall accuracy than the standard ideal observer ., Moreover , because of the adaptation , the observer breaks the direct link between neural and behavioral variability , which could resolve discrepancies arising in past studies ., We suggest new experiments to test our theory .
| single channel recording, medicine and health sciences, engineering and technology, statistics, somatosensory cortex, signal processing, matched filters, brain, social sciences, neuroscience, signal filtering, multivariate analysis, animal anatomy, mathematics, membrane electrophysiology, bioassays and physiological analysis, zoology, research and analysis methods, mathematical and statistical techniques, principal component analysis, signal detection theory, electrophysiological techniques, animal physiology, psychology, anatomy, vibrissae, biology and life sciences, sensory perception, physical sciences, statistical methods | null |
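The matched-filter ideal observer described in the methods above ( a score formed by the dot product of the single-trial response with the mean-evoked template , compared against a threshold ) can be sketched on synthetic data; the template shape , noise level , and threshold here are illustrative assumptions , not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matched filter: xi0 stands in for the mean evoked-response
# template (a smooth bump over 50 samples).
t = np.arange(50)
xi0 = np.exp(-((t - 12) ** 2) / 30.0)

# Simulate single-trial responses y_t = xi0 + noise (signal trials)
# versus noise-only trials, as in the standard encoding model.
signal_trials = xi0 + 0.2 * rng.standard_normal((200, 50))
noise_trials = 0.2 * rng.standard_normal((200, 50))

# Score s(t) = y_t . xi0 (dot product with the template).
s_signal = signal_trials @ xi0
s_noise = noise_trials @ xi0

# Detection: score exceeds a threshold theta (here set midway between the
# expected signal mean ||xi0||^2 and the noise mean 0).
theta = 0.5 * np.dot(xi0, xi0)
hit_rate = np.mean(s_signal > theta)
false_alarm_rate = np.mean(s_noise > theta)
assert hit_rate > false_alarm_rate
```

A state-aware variant would shift theta trial by trial using the predicted response coefficients , lowering it in predicted low-response states and raising it in high-response states .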
1998 | journal.pcbi.1005524 | 2017 | A mathematical model coupling polarity signaling to cell adhesion explains diverse cell migration patterns | Rho GTPases are central regulators that control cell polarization and migration 15 , 16 , embedded in complex signaling networks of interacting components 17 ., Two members of this family of proteins , Rac1 and RhoA , have been identified as key players , forming a central hub that orchestrates the polarity and motility response of cells to their environment 18 , 19 ., Rac1 ( henceforth “Rac” ) works in synergy with PI3K to promote lamellipodial protrusion in a cell 16 , whereas RhoA ( henceforth “Rho” ) activates Rho Kinase ( ROCK ) , which activates myosin contraction 20 ., Mutual antagonism between Rac and Rho has been observed in many cell types 19 , 21 , 22 , and accounts for the ability of cells to undergo overall spreading , contraction , or polarization ( with Rac and Rho segregated to front versus rear of a cell ) ., The extracellular matrix ( ECM ) is a jungle of fibrous and adhesive material that provides a scaffold in which cells migrate , mediating adhesion and traction forces ., ECM also interacts with cell-surface integrin receptors , to trigger intracellular signaling cascades ., Important branches of these pathways are transduced into activating or inhibiting signals to Rho GTPases ., On one hand , ECM imparts signals to regulate cell shape and cell motility ., On the other hand , the deformation of a cell affects its contact area with ECM , and hence the signals it receives ., The concerted effect of this chemical symphony leads to complex cell behavior that can be difficult to untangle using intuition or verbal arguments alone ., This motivates our study , in which mathematical modeling of GTPases and ECM signaling , combined with experimental observations , is used to gain a better understanding of cell behavior , in the context of experimental data on melanoma cells ., There remains the question of
how to understand the interplay between genes ( cell type ) , environment ( ECM ) and signaling ( Rac , Rho , and effectors ) ., We and others 19 , 21–27 have previously argued that some aspects of cell behavior ( e . g . , spreading , contraction , and polarization or amoeboid versus mesenchymal phenotype ) can be understood from the standpoint of Rac-Rho mutual antagonism , with fine-tuning by other signaling layers 28 ., Here we extend this idea to couple Rac-Rho to ECM signaling , in deciphering the behavior of melanoma cells in vitro ., There are several overarching questions that this study aims to address ., In experiments of Park et al . 11 melanoma cells were cultured on micro-fabricated surfaces comprised of post density arrays coated with fibronectin ( FN ) , representing an artificial extracellular matrix ., The anisotropic rows of posts provide inhomogeneous topographic cues along which cells orient ., In 11 , cell behavior was classified using the well-established fact that PI3K activity is locally amplified at the lamellipodial protrusions of migrating cells 36 ., PI3K “hot spots” were seen to follow three distinct patterns about the cell perimeters: random ( RD ) , oscillatory ( OS ) , and persistent ( PS ) ., These classifications were then associated with three distinct cell phenotypes: persistently polarized ( along the post-density axis ) , oscillatory with two lamellipodia at opposite cell ends oscillating out of phase ( protrusion in one lamellipod coincides with retraction of the other , again oriented along the post-density axis ) , and random dynamics , whereby cells continually extend and retract protrusions in random directions ., The fraction of cells in each category was found to depend on experimental conditions ., Here , we focus on investigating how experimental manipulations influence the fraction of cells in different phenotypes ., For simplicity , we focus on the polarized and oscillatory phenotypes which can be most clearly 
characterized mathematically ., The following experimental observations are used to test and compare our distinct models of cell signaling dynamics ., For a graphical summary of cell phenotypes and experimental observations , see Fig 1 ., We discuss three model variants , each composed of ( A ) a subsystem endowed with bistability , and ( B ) a subsystem responsible for negative feedback ., In short , Model 1 assumes ECM competition for ( A ) and feedbacks mediated by GTPases for ( B ) ., In contrast , in Model 2 we assume GTPase dynamics for ( A ) and ECM mediated feedbacks for ( B ) ., Model 3 resembles Model 2 , but further assumes limited total pool of each GTPase ( conservation ) , which turns out to be a critical feature ., See Tables 1 and 2 for details ., We analyze each model variant as follows: first , we determine ( bi/mono ) stable regimes of subsystem ( A ) in isolation , using standard bifurcation methods ., Next , we parameterize subsystem ( B ) so that its slow negative feedback generates oscillations when ( A ) and ( B ) are coupled in the model as a whole ., For this to work , ( B ) has to force ( A ) to transition from one monostable steady state to the other ( across the bistable regime ) as shown in the relaxation loop of Fig 2d ., This requirement informs the magnitude of feedback components ., Although these considerations do not fully constrain parameter choices , we found it relatively easy to then parameterize the models ( particularly Models 1b and 3 ) ., This implies model robustness , and suggests that broad regions of parameter space lead to behavior that is consistent with experimental observations ., Parameters associated with rates of activation and/or feedback strengths are summarized in the S1 Text ., The parameters γi represent the strengths of feedbacks 1 or 2 in Fig 2, ( b ) and 2, ( c ) ., γR controls the positive feedback ( 2 ) of Rac ( via lamellipod spreading ) on ECM signaling , and γρ represents the magnitude of negative 
feedback ( 1 ) from Rho to ECM signaling ( due to lamellipod contraction ) ., γE controls the strength of ECM activation of Rho in both feedbacks ( 1 ) and ( 2 ) ., When these feedbacks depend on cell state variables , we typically use Hill functions with magnitude γi , or , occasionally , linear expressions with slopes γ ¯ i ., ( These choices are distinguished by usage of overbar to avoid confusing distinct units of the γ’s in such cases . ), Experimental manipulations in 11 ( described in Section “Experimental observations constraining the models” ) can be linked to the following parameter variations ., In view of this correspondence between model parameters and experimental manipulations , our subsequent analysis and bifurcation plots will highlight the role of feedback parameters γR , ρ , E in the predictions of each model ., Rather than exhaustively mapping all parameters , our goal is to use 1 and 2-parameter bifurcation plots with respect to these parameters to check for ( dis ) agreement between model predictions and experimental observations ( O1–O3 ) ., This allows us to ( in ) validate several hypotheses and identify the eventual model ( the Hybrid , Model 3 ) and set of hypotheses that best account for observations ., We first investigated the possibility that lamellipod competition is responsible for bistability and that GTPases interactions create negative feedback that drives the oscillations observed in some cells ., To explore this idea , we represented the interplay between lamellipodia ( e . g . , competition for growth due to membrane tension or volume constraints ) , using an elementary Lotka-Volterra ( LV ) competition model ., For simplicity , we assume that AE , LE depend linearly on Rac and Rho concentration , and set BE = 0 ., ( This simplifies subsequent analysis without significantly affecting qualitative conclusions . 
), With these assumptions , the ECM Eq ( 3c ) reduces to the well-known LV species-competition model ., First consider Eq ( 3c ) as a function of parameters ( AE , LE ) , in isolation from GTPase input ., As in the classical LV system 45 , competition gives rise to coexistence , bistability , or competitive exclusion , the latter two associated with a polarized cell ., These regimes are indicated on the parameter plane of Fig 3a with the ratios of contractile ( LE ) and protrusive ( AE ) strengths in each lamellipod as parameters ., ( In the full model , these quantities depend on Rac and Rho activities; the ratios LE ( ρk ) /AE ( Rk ) for lamellipod k = 1 , 2 lead to aggregate parameters that simplify this figure ., ) We can interpret the four parameter regimes in Fig 3a as follows: I ) a bistable regime: depending on initial conditions , either lamellipod “wins” the competition ., II ) Lamellipod 1 always wins ., III ) Lamellipod 2 always wins ., IV ) Lamellipods 1 and 2 coexist at finite sizes ., Regimes I-III represent strongly polarized cells , whereas IV corresponds to an unpolarized ( or weakly polarized ) cell ., We next asked whether , and under what conditions , GTPase-mediated feedback could generate relaxation oscillations ., Such dynamics could occur provided that slow negative feedback drives the ECM subsystem from an E1-dominated state to an E2-dominated state and back ., In Fig 3a , this corresponds to motion along a path similar to the one labeled ( d ) in Panel ( a ) , with the ECM subsystem circulating between Regimes II and III ., This can be accomplished by GTPase feedback , since both Rho and Rac modulate LE ( contractile strength ) and AE ( protrusion strength ) ., We show this idea more explicitly in Fig 3 ( c ) –3 ( e ) by plotting E1 vs LE1 while keeping LE1 + LE2 constant ., ( Insets similarly show E2 vs LE1 .
), Each of Panels ( c-e ) corresponds to a 1-parameter bifurcation plot along the corresponding path labeled ( c-e ) in Panel ( a ) ., We find the following possible transitions: In Fig 3c , we find two distinct polarity states: either E1 or E2 dominates while the other is zero regardless of the value of LE1; a transition between such states does not occur ., In Fig 3d , there is a range of values of LE1 with coexisting stable low and high E1 values ( bistable regime ) flanked by regimes where either the lower or higher state loses stability ( monostable regimes ) ., As indicated by the superimposed loop , a cycle of protrusion ( green ) and contraction ( blue ) could then generate a relaxation oscillation as the system traverses its bistable regime ., In Fig 3e , a third possibility is that the system transits between E1-dominated , coexisting , and E2-dominated states ., In brief , for oscillatory behavior , GTPase feedback should drive the ECM-subsystem between regimes I , II , and III without entering regime IV ., Informed by this analysis , we next link the bistable ECM submodel to a Rac-Rho system ., To ensure that the primary source of bistability is ECM dynamics , a monostable version of the Rac-Rho sub-system is adopted by setting n = 1 in the GTPase activation terms AR , Aρ in Eqs ( 3a ) and ( 3b ) ., We consider three possible model variants ( 1a-1c ) for the full ECM / GTPase model ., In view of the conclusions thus far , we now explore the possibility that bistability stems from mutual antagonism between Rac and Rho , rather than lamellipod competition ., To do so , we chose Hill coefficients n = 3 in the rates of GTPase activation , AR , Aρ ., We then assume that ECM signaling both couples the lamellipods and provides the requisite slow negative feedback ., Here we consider the case that GTPases are abundant , so that the levels of inactive Rac and Rho ( RI , ρI ) are constant ., We first characterize the GTPase dynamics with bR , ρ as parameters .,
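A minimal numerical caricature of such a mutually antagonistic Rac-Rho toggle switch ( simple Euler integration; the kinetic form and parameter values below are illustrative choices , not the paper's Eqs ( 3a ) and ( 3b ) ) shows the bistability that a Hill coefficient of n = 3 affords:

```python
import numpy as np

def simulate(R0, rho0, bR=2.0, brho=2.0, n=3, dt=0.01, steps=5000):
    """Euler integration of a mutually antagonistic Rac-Rho pair:
    each GTPase activates at a rate suppressed by the other (Hill
    function, coefficient n) and inactivates at a constant rate."""
    R, rho = R0, rho0
    for _ in range(steps):
        dR = bR / (1.0 + rho ** n) - R
        drho = brho / (1.0 + R ** n) - rho
        R += dt * dR
        rho += dt * drho
    return R, rho

# Same parameters, different initial conditions -> opposite polarity states.
R_hi, rho_lo = simulate(1.5, 0.5)  # converges to high-Rac / low-Rho
R_lo, rho_hi = simulate(0.5, 1.5)  # converges to low-Rac / high-Rho
assert R_hi > rho_lo and rho_hi > R_lo  # two coexisting stable states
```

With n = 1 the analogous mutual-repression system lacks the ultrasensitivity needed for two stable states , which is why the monostable n = 1 variant was used when ECM dynamics were meant to supply the bistability .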
Subsequently , we include ECM signaling dynamics and determine how the feedback drives the dynamics in the ( bR , bρ ) parameter plane ., Isolated from the ECM influence , each lamellipod is independent , so we only consider the properties of GTPase signaling in one ., This mutually antagonistic GTPase submodel is the well-known “toggle switch” 50 that has a bistable regime , as shown in the ( bR , bρ ) plane of Fig 4a ., ECM signaling affects the Rac / Rho system only as an input to bρ ., A linear dependence of bρ on Ek failed to produce an oscillatory parameter regime , so we used a nonlinear Hill type dependence with basal and saturating components ., Furthermore , for GTPase influence on ECM signaling we use Hill functions for the influence of Rho ( in LE ) and Rac ( in BE ) on protrusion and contraction ., We set AE = 0 in this model for simplicity ., ( Nonzero AE can lead to compounded ECM bistability that we do not consider here . ), Given the structure of the bρ − bR parameter plane and the fact that ECM signaling variables only influence bρ , we can view oscillations as periodic cycles of contraction and protrusion forming a trajectory along one of the horizontal dashed lines in Fig 4a ., This idea guides our parametrization of the model ., We select a value of bR that admits a bistable range of bρ in Fig 4a ., Next we choose maximal and minimal values of the function bρ ( Ek ) that extend beyond the borders of the bistable range ., This choice means that the system transitions from the high Rac / low Rho state to the low Rac / high Rho state over each of the cycles of its oscillation ., With this parametrization , we find oscillatory dynamics , as shown in Fig 4b ., We now consider the two-lamellipod system with the above GTPase module in each lamellipod; we challenge the full model with experimental observations ., Since each lamellipod has a unique copy of the Rac-Rho module , ECM signaling provides the only coupling between the two lamellipods ., First , we
observed that inhibition of ROCK ( reduction of γρ in Fig 4b ) suppresses oscillations ., However , the resulting stationary state is non-polar , in contrast to the experimentally observed increase in the fraction of polarized cells ( O1 ) ., We adjusted the coupling strength ( lc ) to ensure that this disagreement was not merely due to insufficient coupling between the two lamellipods ., While an oscillatory regime persists , the discrepancy with ( O1 ) is not resolved: the system oscillates , but inhibiting ROCK gives rise to a non-polarized stationary state , contrary to experimental observations ., Yet another problematic feature of the model is its undue sensitivity to the strength of Rac activation ( bR ) ., This is evident from a comparison of the dashed lines in Fig 4a ., A small change in bR ( vertical shift ) dramatically increases the range of bistability ( horizontal span ) , and hence the range of values of bρ to be traversed in driving oscillations ., This degree of sensitivity seems inconsistent with biological behavior ., It is possible that an alternate formulation of the model ( different kinetic terms or different parametrization ) might fix the discrepancies noted above , so we avoid ruling out this scenario altogether ., In our hands , this model variant failed ., However , a simple augmentation , described below , addresses all deficiencies , and leads to the final result ., In our third and final step , we add a small but significant feature to the bistable GTPase model to arrive at a working variant that accounts for all observations ., Keeping all equations of Model 2 , we merely drop the assumption of unlimited Rac and Rho ., We now require that the total amount of each GTPase be conserved in the cell ., This new feature has two consequences ., First , lamellipods now compete not only for growth , but also for limited pools of Rac and Rho ., This , along with rapid diffusion of inactive GTPases across the cell 30 , 31 , 51 provides an additional
global coupling of the two lamellipods ., This seemingly minor revision produces novel behavior ., We proceed as before , first analyzing the GTPase signaling system on its own ., With conservation , the bR − bρ plane has changed from its previous version ( Fig 4a for Model 2 ) to Fig 5a ., For appropriate values of bR , there is a significant bistable regime in bρ ., Indeed , we find three regimes of behavior as the contractile strength in lamellipod k , bρ ( Ek ) , varies: a bistable regime where polarity in either direction is possible , a regime where lamellipod j “wins” ( Ej > Ek , left of the bistable regime ) , and a regime where lamellipod k “wins” ( right of the bistable regime ) ., Only polarity in a single direction is possible on either side of the bistable regime ., As in Model 2 , we view oscillations in the full model as cycles of lamellipodial protrusion and contraction that modify bρ ( Ek ) over time , and result in transitions between the three polarity states ., To parameterize the model , we repeat the process previously described ( choose a value of bR consistent with bistability , then choose the dependence of bρ on ECM signaling so as to traverse that entire bistable regime . 
) We couple the GTPase system with ECM equations as before ., We then check for agreement with observations ( O1–O3 ) ., As shown in Fig 5e and 5f , the model produces both polarized and oscillatory solutions ., To check consistency with experiments , we mapped the dynamics of this model with respect to both ROCK-mediated contraction and PI3K-mediated protrusion ( Fig 5c ) ., Inhibiting ROCK ( Fig 5b , decreasing γρ ) results in a transition from oscillations to polarized states , consistent with ( O1 ) ., PI3K upregulation promotes oscillations ( increasing γR , Fig 5c ) , characteristic of the more invasive cell line 1205Lu ( consistent with O2 ) ., Finally , increased fibronectin density ( increased γE , Fig 5d ) also promotes oscillations , consistent with ( O3 ) ., We conclude that this Hybrid Model can account for polarity and oscillations , and that it is consistent with the three primary experimental observations ( O1–O3 ) ., Moreover , Model 3 can recapitulate such observations with more reasonable timescales for GTPase and ECM dynamics than were required for Model variant 1b ., It is apparent that Model 3 contains two forms of lamellipodial coupling: direct ( mechanical ) competition and competition for the limited pools of inactive Rac and Rho ., While the former is certain to be an important coupling in some contexts or conditions 52 , we find that it is dispensable in this model ( e .
g , see lc = 0 in Fig 5c ) ., We comment on the effect of such coupling in the Discussion ., In the context of this final model , we also tested the effect of ECM activation of Rac ( in addition to the already assumed effect on Rho activation ) ., As shown in Fig 5d ( dashed curves ) , the essential bifurcation structure is preserved when this modification is incorporated ( details in the S1 Text , and implications in the Discussion ) ., To summarize , Model 1b was capable of accounting for all observations , but required conservation of GTPase to do so ., This model was , however , rejected due to unreasonable time scales needed to give rise to oscillations ., Model 2 could account for oscillations with appropriate timescales , but it appears to be highly sensitive to parameters and , in our hands , inconsistent with experimental observations ., Model 3 , which combines the central features of Models 1b and 2 , has the right mix of timescales , and agrees with experimental observations ., In that final Hybrid Model , ECM-based coupling ( lc ) due to mechanical tension or competition for other resources is not essential , but its inclusion makes oscillations more prevalent ( Fig 5b and 5e ) ., Furthermore , in this Hybrid Model , we identify two possible negative feedback motifs , shown in Fig 2b ., These appear to work cooperatively in promoting oscillations ., As we have argued , feedbacks are tuned so that ECM signaling spans a range large enough that bρ ( Ek ) traverses the entire bistable regime ( Fig 5a ) ., This is a requirement for the relaxation oscillations schematically depicted in Fig 2c ., Within an appropriate set of model parameters , either feedback could , in principle , accomplish this ., Hence , if Feedback 1 is sufficiently strong , Feedback 2 is superfluous and vice versa ., Alternatively , if neither suffices on its own , the combination of both could be sufficient to give rise to oscillations ., Heterogeneity among these parameters could thus
be responsible for the fact that in ROCK inhibition experiments ( where Feedback 1 is essentially removed ) , most but not all cells transition to the persistent polarity phenotype ., The Hybrid Model ( Model 3 ) is consistent with observations O1–O3 ., We can now challenge it with several further experimental tests ., In particular , we make two predictions ., Migrating cells can exhibit a variety of behaviors ., These behaviors can be modulated by the cell’s internal state , its interactions with the environment , or mutations such as those leading to cancer progression ., In most cases , the details of mechanisms underlying a specific behavior , or leading to transitions from one phenotype to another are unknown or poorly understood ., Moreover , even in cases where one or more defective proteins or genes are known , the complexity of signaling networks makes it difficult to untangle the consequences ., Hence , using indirect observations of cell migration phenotypes to elucidate the properties of underlying signaling modules and feedbacks is , as argued here , a useful exercise ., Using a sequence of models and experimental observations ( O1–O3 ) we tested several plausible hypotheses for melanoma cell migration phenotypes observed in 11 ., By so doing , we found that GTPase dynamics are fundamental to providing ( 1 ) bistability associated with polarity and ( 2 ) coupling between competing lamellipods to select a single “front” and “rear” ., ( This coupling is responsible for the antiphase lamellipodial oscillations ) ., Further , slow feedback between GTPase and ECM signaling resulting from contraction and protrusion generates oscillations similar to those observed experimentally ., The single successful model , Hybrid Model ( Model 3 ) , is essentially a relaxation oscillator ., Mutual inhibition between the limited pools of Rac and Rho sets up a primary competition between lamellipods that produces a bistable system with polarized states pointing in opposite
directions ., Interactions between GTPase dynamics and ECM signaling provide the negative feedback required to flip this system between the two polarity states , generating oscillations for appropriate parameters ., Results of Model 3 are consistent with observations ( O1–O3 ) , and lead to predictions ( P1–P2 ) that are also confirmed by experimental observations 11 ., In 11 , it is further shown that the fraction of cells exhibiting each of these behaviors can be quantitatively linked to heterogeneity in the ranges of parameters representing the cell populations in the model’s parameter space ., In our models , we assumed that the dominant effect of ECM signaling input is to activate Rho , rather than Rac ., In reality , both GTPases are likely activated to some extent in a cell and environment-dependent manner 41 , 42 ., We can incorporate ECM activation of Rac by amending the term AR so that its magnitude is dependent on ECM signaling ( Ek ) ., Doing so results in a shift in the borders of regimes we have indicated in Fig 5d ( dashed versus solid borders , see S1 Text for more details ) ., So long as Rho activation is the dominant effect , this hardly changes the qualitative results ., As the feedback onto Rac strengthens , however , the size of the oscillatory regime is reduced ., Thus if feedback onto Rac dominates , loss of oscillations would be predicted ., This is to be expected based on the structure of these interactions ., Where ECM → Rho mediates a negative feedback , ECM → Rac mediates a positive feedback , which would be expected to suppress oscillatory behavior ., Thus while the ECM likely mediates multiple signaling feedbacks , this modeling suggests feedback onto Rho is most consistent with observations ., We have argued that conservation laws ( fixed total amount of Rac and fixed total amount of Rho ) in the cell play an important role in the competition between lamellipods ., Such conservation laws are also found to be important in
a number of other settings ., Fully spatial ( PDE ) modeling of GTPase function has shown that conservation significantly alters signaling dynamics 27 , 31 , 54 ., In 55 , it was shown that stochastically initiated hot spots of PI3K appeared to be globally coupled , potentially through a shared and conserved cytoplasmic pool of a signaling regulator ., Conservation of MIN proteins , which set up a standing wave oscillation during bacterial cell division , has been shown to give rise to a new type of Turing instability 56 ., Finally , interactions between conserved GTPase and negative regulation from F-actin in a PDE model were shown to give rise to a new type of conservative excitable dynamics 46 , 47 , which have been linked to the propagation of actin waves 57 ., These results provide interesting insights into the biology of invasive cancer cells ( in melanoma in particular ) , and shed light on the mechanisms underlying the extracellular matrix-induced polarization and migration of normal cells ., First , they illustrate that diverse polarity and migration patterns can be captured within the same modeling framework , laying the foundation for a better understanding of seemingly unrelated and diverse behaviors previously reported ., Second , our results present a mathematical and computational platform that distills the critical aspects and molecular regulators in a complex signaling cascade; this platform could be used to identify promising single molecule and molecular network targets for possible clinical intervention .
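The mutual-antagonism logic at the heart of the Hybrid Model can be sketched in a few lines of code. The following toy simulation (illustrative equations and parameter values, not those of the published model) shows how a mutually inhibitory Rac–Rho pair drawing on conserved pools yields two stable polarized states, selected by the initial condition:

```python
def toggle_step(R, rho, b=5.0, K=0.3, n=4, dt=0.01):
    """One Euler step of a mutually antagonistic GTPase pair.
    Active fractions R and rho are drawn from conserved pools (total = 1),
    so activation acts on the inactive fractions (1 - R) and (1 - rho),
    and each GTPase suppresses activation of the other via a Hill term."""
    dR = b / (1.0 + (rho / K) ** n) * (1.0 - R) - R
    drho = b / (1.0 + (R / K) ** n) * (1.0 - rho) - rho
    return R + dt * dR, rho + dt * drho

def settle(R, rho, T=50.0, dt=0.01):
    """Integrate to (numerical) steady state."""
    for _ in range(int(T / dt)):
        R, rho = toggle_step(R, rho, dt=dt)
    return R, rho

# Two mirrored initial conditions converge to opposite polarized states,
# demonstrating bistability of the conserved toggle switch.
R1, rho1 = settle(0.8, 0.1)  # Rac-dominated start -> Rac-high state
R2, rho2 = settle(0.1, 0.8)  # Rho-dominated start -> Rho-high state
print(round(R1, 3), round(rho1, 3))
print(round(R2, 3), round(rho2, 3))
```

With these (assumed) parameters the symmetric state is unstable, so any small asymmetry resolves into one of the two polarized states; in the full model, slow ECM feedback then moves the bistable range and flips the switch.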
| Introduction, Results, Discussion | Protrusion and retraction of lamellipodia are common features of eukaryotic cell motility ., As a cell migrates through its extracellular matrix ( ECM ) , lamellipod growth increases cell-ECM contact area and enhances engagement of integrin receptors , locally amplifying ECM input to internal signaling cascades ., In contrast , contraction of lamellipodia results in reduced integrin engagement that dampens the level of ECM-induced signaling ., These changes in cell shape are both influenced by , and feed back onto ECM signaling ., Motivated by experimental observations on melanoma cells lines ( 1205Lu and SBcl2 ) migrating on fibronectin ( FN ) coated topographic substrates ( anisotropic post-density arrays ) , we probe this interplay between intracellular and ECM signaling ., Experimentally , cells exhibited one of three lamellipodial dynamics: persistently polarized , random , or oscillatory , with competing lamellipodia oscillating out of phase ( Park et al . 
, 2017 ) ., Pharmacological treatments , changes in FN density , and substrate topography all affected the fraction of cells exhibiting these behaviours ., We use these observations as constraints to test a sequence of hypotheses for how intracellular ( GTPase ) and ECM signaling jointly regulate lamellipodial dynamics ., The models encoding these hypotheses are predicated on mutually antagonistic Rac-Rho signaling , Rac-mediated protrusion ( via activation of Arp2/3 actin nucleation ) and Rho-mediated contraction ( via ROCK phosphorylation of myosin light chain ) , which are coupled to ECM signaling that is modulated by protrusion/contraction ., By testing each model against experimental observations , we identify how the signaling layers interact to generate the diverse range of cell behaviors , and how various molecular perturbations and changes in ECM signaling modulate the fraction of cells exhibiting each ., We identify several factors that play distinct but critical roles in generating the observed dynamic: ( 1 ) competition between lamellipodia for shared pools of Rac and Rho , ( 2 ) activation of RhoA by ECM signaling , and ( 3 ) feedback from lamellipodial growth or contraction to cell-ECM contact area and therefore to the ECM signaling level . | Cells crawling through tissues migrate inside a complex fibrous environment called the extracellular matrix ( ECM ) , which provides signals regulating motility ., Here we investigate one such well-known pathway , involving mutually antagonistic signalling molecules ( small GTPases Rac and Rho ) that control the protrusion and contraction of the cell edges ( lamellipodia ) ., Invasive melanoma cells were observed migrating on surfaces with topography ( array of posts ) , coated with adhesive molecules ( fibronectin , FN ) by Park et al . 
, 2017 ., Several distinct qualitative behaviors they observed included persistent polarity , oscillation between the cell front and back , and random dynamics ., To gain insight into the link between intracellular and ECM signaling , we compared experimental observations to a sequence of mathematical models encoding distinct hypotheses ., The successful model required several critical factors ., ( 1 ) Competition of lamellipodia for limited pools of GTPases ., ( 2 ) Protrusion / contraction of lamellipodia influence ECM signaling ., ( 3 ) ECM-mediated activation of Rho ., A model combining these elements explains all three cellular behaviors and correctly predicts the results of experimental perturbations ., This study yields new insight into how the dynamic interactions between intracellular signaling and the cell’s environment influence cell behavior . | cell physiology, cell motility, engineering and technology, enzymes, signal processing, biological cultures, enzymology, cell polarity, developmental biology, gtpase signaling, cell cultures, melanoma cells, cellular structures and organelles, research and analysis methods, extracellular matrix signaling, proteins, extracellular matrix, guanosine triphosphatase, biochemistry, signal transduction, hydrolases, cell biology, cell migration, biology and life sciences, cultured tumor cells, cell signaling | null |
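The relaxation-oscillator mechanism summarized above (a fast bistable switch pushed back and forth across its hysteresis loop by a slow negative feedback) can be caricatured with the classic FitzHugh–Nagumo equations. This is a generic stand-in rather than the paper's GTPase–ECM model, and all parameter values are illustrative:

```python
import numpy as np

# FitzHugh-Nagumo caricature of a relaxation oscillator:
# v is the fast bistable variable (analogous to GTPase polarity),
# w is the slow recovery variable (analogous to ECM-mediated feedback).
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, T = 0.01, 400.0
v, w = -1.0, 1.0
vs = []
for _ in range(int(T / dt)):
    v += dt * (v - v ** 3 / 3.0 - w + I)
    w += dt * eps * (v + a - b * w)
    vs.append(v)

# Count upward zero-crossings of v to confirm sustained oscillation.
vs = np.asarray(vs)
crossings = int(np.sum((vs[:-1] < 0) & (vs[1:] >= 0)))
print(crossings)
```

Because the slow variable repeatedly drives the fast variable off each branch of its bistable nullcline, the trajectory cycles indefinitely; the same hysteresis-loop logic underlies the antiphase lamellipodial oscillations described above.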
1,170 | journal.pcbi.1002932 | 2,013 | A Protein Turnover Signaling Motif Controls the Stimulus-Sensitivity of Stress Response Pathways | Eukaryotic cells must constantly recycle their proteomes ., Of the approximately 109 proteins in a typical mouse L929 fibrosarcoma cell , 106 are degraded every minute 1 ., Assuming first-order degradation kinetics , this rate of constitutive protein turnover , or flux , imposes an average half-life of 24 hours ., Not all proteins are equally stable , however ., Genome-wide quantifications of protein turnover in HeLa cells 2 , 3 and 3T3 murine fibroblasts 4 show that protein half-lives can span several orders of magnitude ., Thus while some proteins exist for months and even years 5 , others are degraded within minutes ., Gene ontology terms describing signaling functions are highly enriched among short-lived proteins 3 , 6 , 7 , suggesting that rapid turnover is required for proper signal transduction ., Indeed , defects in protein turnover are implicated in the pathogenesis of cancer and other types of human disease 8 , 9 ., Conspicuous among short-lived signaling proteins are those that regulate the p53 and NFκB stress response pathways ., The p53 protein itself , for example , has a half-life of less than 30 minutes 10 , 11 ., Mdm2 , the E3 ubiquitin ligase responsible for regulating p53 , has a half-life of 45 minutes 4 ., And the half-life of unbound IκBα , the negative feedback regulator of NFκB , is less than 15 minutes 12 , 13 ( see Figure S1 ) , requiring that 6 , 500 new copies of IκBα be synthesized every minute 13 ., Given the energetic costs of protein synthesis , we hypothesized that rapid turnover of these proteins is critical to the stimulus-response behavior of their associated pathways ., To test our hypothesis we developed a method to systematically alter the rates of protein turnover in mass action models without affecting their steady state abundances ., Our method requires an analytical expression for the 
steady state of a model , which we derive using the py-substitution method described in a companion manuscript ., From this expression , changes in parameter values that do not affect the steady state are found in the null space of the matrix whose elements are the partial derivatives of the species abundances with respect to the parameters ., We call this vector space the isostatic subspace ., After deriving a basis for this subspace , linear combinations of basis vectors identify isostatic perturbations that modify specific reactions independently of all the others , for example those that control protein turnover ., By systematic application of these isostatic perturbations to a model operating at steady state , the effects of flux on stimulus-responsiveness can be studied in isolation from changes to steady-state abundances ( see Methods ) ., We first apply our method to a prototypical negative feedback module in which an activator controls the expression of its own negative regulator ., We show that reducing the flux of either the activator or its inhibitor slows the response to stimulation ., However , reducing the flux of the activator lowers the magnitude of the response , whereas reducing the flux of the inhibitor increases it ., This complementarity allows the activator and inhibitor fluxes to exert precise control over the module’s response to stimulation ., Given this level of control , we hypothesized that rapid turnover of p53 and Mdm2 must be required for p53 signaling ., A hallmark of p53 is that it responds to DNA damage in a series of digital pulses 14–18 ., These pulses are important for determining cell fate 19–21 ., To test whether high p53 and Mdm2 flux are required for p53 pulses , we applied our method to a model in which exposure to ionizing radiation ( IR ) results in oscillations of active p53 17 ., By varying each flux over three orders of magnitude , we show that high p53 flux is indeed required for oscillations ., In contrast , high Mdm2
flux is not required , but rather controls the refractory time in response to transient stimulation ., If the flux of Mdm2 is low , a second stimulus after 22 hours does not result in appreciable activation of p53 ., In contrast to p53 , the turnover flux of NFκB itself is very low , while the flux of its inhibitor , IκB , is very high ., Prior to stimulation , most NFκB is sequestered in the cytoplasm by IκB ., Upon stimulation by an inflammatory signal like tumor necrosis factor alpha ( TNF ) , IκB is phosphorylated and degraded , resulting in rapid but transient translocation of NFκB to the nucleus and activation of its target genes 22–24 ., Two separate pathways are responsible for the turnover of IκB 12 ., In one , IκB bound to NFκB is phosphorylated by the IκB kinase ( IKK ) and targeted for degradation by the ubiquitin-proteasome system ., In the other pathway , unbound IκB is targeted for degradation and requires neither IKK nor ubiquitination 25 , 26 ., We call these the “productive” and “futile” fluxes , respectively ., Applying our method to a model of NFκB activation , we show that the futile flux acts as a negative regulator of NFκB activation while the productive flux acts as a positive regulator ., We find that turnover of bound IκB is required for NFκB activation in response to TNF , while high turnover of unbound IκB prevents spurious activation of NFκB in response to low doses of TNF or ribotoxic stress caused by ultraviolet light ( UV ) ., As with p53 then , juxtaposition of a positive and negative regulatory flux governs the sensitivity of NFκB to different stimuli , and may constitute a common signaling motif for controlling stimulus-specificity in diverse signaling pathways ., To examine the effects of flux on stimulus-responsiveness , we built a prototypical negative feedback model reminiscent of the p53 or NFκB stress-response pathways ( Figure 1A ) ., In it , an activator "X" is constitutively expressed but catalytically degraded by an inhibitor ,
“Y” ., The inhibitor is constitutively degraded but its synthesis requires X . Activation is achieved by instantaneous depletion of Y , the result of which is accumulation of X until negative feedback forces a return to steady state ., The dynamics of this response can be described by two values: , the amplitude or maximum value of X after stimulation , and , the time at which is observed ( Figure 1B ) ., Parameters for this model were chosen such that the abundances of both X and Y are one arbitrary unit and X achieves its maximum value of at time , where the units of time are also arbitrary ., To address the role of these parameters in shaping the response of the activator , we first performed a traditional sensitivity analysis ., We found that increasing the synthesis of X ( Figure 1C ) , or decreasing the degradation of X ( Figure 1D ) or the synthesis of Y ( Figure 1E ) , all result in increased responsiveness ., However , these changes also increase the abundance of X . To distinguish between the effects caused by changes in flux and those caused by changes in abundance , we developed a method that alters the flux of X and Y while maintaining their steady state abundances at ., Using this method , we found that increasing the flux of X increases responsiveness ( Figure 1G ) , but not to the same extent as increasing the synthesis parameter alone ( Figure 1C ) ., In contrast , reducing the flux of Y yields the same increase in responsiveness as decreasing the synthesis of Y ( Figure 1E ) or the degradation of X ( Figure 1D ) ., These observations suggest that both the flux and abundance of X are important regulators of the response , as is the flux of Y , but not its abundance ., This conclusion is supported by the observation that when the abundance of Y is increased by reducing its degradation , there is little effect on signaling ( Figure 1F ) ., To further characterize the effects of flux on the activators response to stimulation , we applied systematic 
changes to the fluxes of X and Y prior to stimulation and plotted the resulting values of and ., Multiplying the flux of X over the interval showed , as expected , that the value of increases while the value of decreases ( Figure 2A ) ., In other words , a high activator flux results in a strong , fast response to stimulation ., If we repeat the process with the inhibitor , we find that both and decrease as the flux increases; a high inhibitor flux results in a fast but weak response ( Figure 2B ) ., This result illustrates that fluxes of different regulators can have different but complementary effects on stimulus-induced signaling dynamics ., Complementarity suggests that changes in flux can be identified such that is altered independently of , or independently of ., Indeed , if both activator and inhibitor fluxes are increased in equal measure , is held fixed while the value of decreases ( Figure 2C ) ., Increasing both fluxes thus simultaneously reduces the timescale of the response without affecting its magnitude ., An equivalent relationship can be found such that remains fixed while is affected ( Figure 2D ) ., Because an increase in either flux will reduce , to alter without affecting requires an increase in one flux but a decrease in the other ., Also , is more sensitive to changes in the inhibitor flux than in the activator flux; small changes in the former must be paired with larger changes in the latter ., This capability to achieve any value of or indicates that flux can precisely control the response to stimulation , without requiring any changes to steady state protein abundance ., Given that flux precisely controls the dynamic response to stimulation in a prototypical signaling module , we hypothesized that for p53 , oscillations in response to DNA damage require the high rates of turnover reported for p53 and Mdm2 ., To test this , we applied our method to a published model of p53 activation in response to ionizing gamma radiation ( IR ) , a common
DNA damaging agent ( Figure 3A ) 17 ., Because the model uses arbitrary units , we rescaled it so that the steady state abundances of p53 and Mdm2 , as well as their rates of synthesis and degradation , matched published values ( see Table S1 ) ., We note that these values are also in good agreement with the consensus parameters reported in 16 ., Next we implemented a multiplier of Mdm2-independent p53 flux and let it take values on the interval ., For each value we simulated the response to IR using a step function in the production of the upstream Signal molecule , , as previously described 17 ., To characterize the p53 response we let be the amplitude of stable oscillations in phosphorylated p53 ( Figure 3B ) , and use this as a metric for p53 sensitivity ., Where , we say the module is sensitive to IR stimulation ., We find that is greater than zero only when the flux of p53 is near its observed value or higher ( Figure 4A ) ., If the flux of p53 is reduced by 2-fold or more , p53 no longer stably oscillates in response to stimulation , but exhibits damped oscillations instead ., Interestingly , repeating this analysis with a multiplier for the Mdm2 flux over the same interval reveals that Mdm2 flux has little bearing on p53 oscillations ( Figure 4B ) ., For any value of the multiplier chosen , ., As with p53 , this multiplier alters the Signal-independent flux of Mdm2 but does not affect Signal-induced Mdm2 degradation ., If oscillations are already compromised by a reduced p53 flux , no concomitant reduction in Mdm2 flux can rescue the oscillations ( Figure 4C ) ., We therefore conclude that the flux of p53 , but not Mdm2 , is required for IR-sensitivity in the p53 signaling module ., What then is the physiological relevance of high Mdm2 flux ?, In the model , signal-mediated Mdm2 auto-ubiquitination 27 is a major contributor to Mdm2 degradation after stimulation ., If Signal production is transient , Mdm2 protein levels must be restored solely via 
Signal-independent degradation ., We therefore hypothesized that if the flux of Mdm2 is low , Mdm2 protein levels would remain elevated after stimulation and compromise sensitivity to subsequent stimuli ., To test this hypothesis , we again let the Mdm2 flux multiplier take values over the interval ., For each value we stimulated the model with a 2-hour pulse of Signal production , followed by 22 hours of rest , followed by a second 2-hour pulse ( Figure 3B ) ., We defined to be the amplitude of the first peak of phosphorylated p53 and to be the amplitude of the second peak ., Sensitivity to the second pulse is defined as the difference between and , with indicating full sensitivity ., As seen in Figures 4D and E , the flux of p53 has no bearing on the sensitivity to the second pulse while the flux of Mdm2 strongly affects it ., At one one-hundredth the observed Mdm2 flux – corresponding to a protein half-life of 3 days – over 20 , 000 fewer molecules of p53 are phosphorylated , representing more than a two-fold reduction in sensitivity ( Figure 4E ) ., This result is robust with respect to the interval of time chosen between pulses ( Figure S2 ) ., If the sensitivity to the second pulse is already compromised by a reduced Mdm2 flux , a concomitant reduction in p53 flux fails to rescue it , while an increase in p53 flux still further reduces it ( Figure 4F ) ., We therefore conclude that the flux of Mdm2 , and not p53 , controls the system’s refractory time , and a high Mdm2 flux is required to re-establish sensitivity after transient stimulation ., A second major stress-response pathway is that of NFκB ., NFκB is potently induced by the inflammatory cytokine TNF , but shows a remarkable resistance to internal metabolic perturbations or ribotoxic stresses induced by ultraviolet light ( UV ) 13 , or to triggers of the unfolded protein response ( UPR ) 28 ., Like p53 , the dynamics of NFκB activation play a major role in determining target gene expression programs 29 , 30
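The balanced "flux multiplier" used throughout this analysis has a simple one-species illustration: scaling synthesis and degradation by the same factor f leaves the steady state untouched while setting the relaxation, and hence refractory, timescale. A minimal sketch (hypothetical rate constants, not those of the p53 model):

```python
import numpy as np

def relax(f, x0, beta=1.0, delta=0.1, dt=0.01, T=30.0):
    """Integrate dx/dt = f*beta - f*delta*x from x0 by Euler stepping.
    Scaling synthesis and degradation by the same multiplier f is an
    'isostatic' perturbation: the steady state beta/delta is unchanged,
    but the relaxation rate f*delta is not."""
    x = x0
    traj = []
    for _ in range(int(T / dt)):
        x += dt * (f * beta - f * delta * x)
        traj.append(x)
    return np.asarray(traj)

x_ss = 1.0 / 0.1                   # beta/delta = 10, independent of f
lo = relax(f=0.1, x0=0.0)          # low-flux protein, half-life ~69 time units
hi = relax(f=10.0, x0=0.0)         # high-flux protein, half-life ~0.69 time units

# Time to recover to 90% of steady state after complete depletion.
t90_lo = 0.01 * np.argmax(lo >= 0.9 * x_ss) if (lo >= 0.9 * x_ss).any() else float('inf')
t90_hi = 0.01 * np.argmax(hi >= 0.9 * x_ss)
print(t90_hi, t90_lo)
```

Both proteins share the same abundance, but the high-flux species recovers within a few time units while the low-flux species has not recovered by the end of the simulation, mirroring the long refractory period seen when the Mdm2 flux is reduced.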
., Although NFκB is considered stable , the flux of IκBα – the major feedback regulator of NFκB – is conspicuously high ., We hypothesized that turnover of IκB controls the stimulus-responsiveness of the NFκB signaling module ., Beginning with a published model of NFκB activation 13 , we removed the beta and epsilon isoforms of IκB , leaving only the predominant isoform , IκBα ( hereafter , simply “IκB”; Figure 5A ) ., Steady state analysis of this model supported the observation that almost all IκB is degraded by either of two pathways: a “futile” flux , in which IκB is synthesized and degraded as an unbound monomer; and a “productive” flux , in which free IκB enters the nucleus and binds to NFκB , shuttles to the cytoplasm , then binds to and is targeted for degradation by IKK ( Figure 5B ) ., These two pathways account for 92 . 5% and 7 . 3% of the total IκB flux , respectively ., The inflammatory stimulus TNF was modeled as before , using a numerically-defined IKK activity profile derived from in vitro kinase assays 30 ( Figure 5A , variable ) ., Stimulating with TNF results in strong but transient activation of NFκB ., A second stimulus , ribotoxic stress induced by UV irradiation , was modeled as 50% reduction in translation and results in only modest activity 13 ., As above , we let be the amplitude of activated NFκB in response to TNF and the time at which is observed ., Analogously , we let be the amplitude of NFκB in response to UV , and the time at which NFκB activation equals one-half ( see Figure 5C ) ., We then implemented multipliers for the futile and productive flux and let each multiplier take values on the interval ., For each value we simulated the NFκB response to TNF and UV and plotted the effects on and ., The results show that reducing the productive flux yields a slower , weaker response to TNF ( Figure 6A ) ., By analogy to Figure 2 , this indicates that the productive flux of IκB is a positive regulator of NFκB activation ., In contrast , 
the futile flux acts as a negative regulator of NFκB activity , though its effects on and are more modest ( Figure 6B ) ., Thus , similar to p53 , the activation of NFκB is controlled by a positive and negative regulatory flux ., In response to UV , a reduction in either flux delays NFκB activation , but reducing the futile flux results in a significant increase in while reducing the productive flux has almost no effect ( Figure 6C and D ) ., Conversely , while an increase in the futile flux has no effect on , an increase in the productive flux results in a significant increase ., If we now define NFκB to be sensitive to TNF or UV when or are ten-fold higher than its active but pre-stimulated steady state abundance , then TNF sensitivity requires a productive flux multiplier , while UV insensitivity requires a productive flux multiplier and a futile flux multiplier ., This suggests that the flux pathways of IκB may be optimized to preserve NFκB sensitivity to external inflammatory stimuli while minimizing sensitivity to internal metabolic stresses ., In contrast to p53 , the negative regulatory flux of IκB dominates the positive flux ., We hypothesized that this imbalance must affect the sensitivity of NFκB to weak stimuli ., To test this hypothesis we generated dose-response curves for TNF and UV using the following multipliers for the futile flux: , , , and ( see Methods ) ., The results confirm that reducing the futile flux of IκB results in hypersensitivity at low doses of TNF ( Figure 7 , Row 1 ) ., At one one-hundredth the wildtype flux , a ten-fold weaker TNF stimulus yields an equivalent NFκB response to the full TNF stimulus at the wildtype flux ., Similarly , a high futile flux prevents strong activation of NFκB in response to UV ( Figure 7 , Row 2 ) ., At and times the futile flux , UV stimulation results in a 20-fold increase in NFκB activity , compared to just a 2-fold increase at the wildtype flux ., We therefore conclude that turnover of unbound IκB 
controls the EC50 of the NFκB signaling module , and that rapid turnover renders NFκB resistant to metabolic and spurious inflammatory stimuli ., Previous studies have shown that the fluxes of p53 10 , 11 , its inhibitor Mdm2 31 , 32 , and the unbound negative regulator of NFκB , IκB 12 , are remarkably high ., To investigate whether rapid turnover of these proteins is required for the stimulus-response behavior of the p53 and NFκB stress response pathways , we developed a computational method to alter protein turnover , or flux , independently of steady state protein abundance ., For p53 , we show that high flux is required for sensitivity to sustained stimulation after ionizing radiation ( Figure 4A ) ., Interestingly , inactivating mutations in p53 have long been known to enhance its stability 33 , either by interfering with Mdm2-catalyzed p53 ubiquitination 34 , 35 , or by affecting p53's ability to bind DNA and induce the expression of new Mdm2 36–39 ., Inactivation of p53 also compromises the cell's sensitivity to IR 40 , 41–43 ., Our results offer an intriguing explanation for this phenomenon: that p53 instability is required for oscillations in response to IR ., Indeed , IR sensitivity was shown to correlate with p53 mRNA abundance 44–46 , a likely determinant of p53 protein flux ., In further support of this hypothesis , mouse embryonic fibroblasts lacking the insulin-like growth factor 1 receptor ( IGF-1R ) exhibit reduced p53 synthesis and degradation , but normal protein abundance ., These cells were also shown to be insensitive to DNA damage caused by the chemotherapeutic agent etoposide 32 ., Like p53 , increased stability of Mdm2 has been observed in human leukemic cell lines 47 , and Mdm2 is a strong determinant of IR sensitivity 48 , 49 ., Again , our results suggest these observations may be related ., Activation of p53 in response to IR is mediated by the ATM kinase ( “Signal” in Figure 3 ) 50 , 51 ., Batchelor et al .
show that saturating doses of IR result in feedback-driven pulses of ATM , and therefore p53 17 ., In Figure 4B we show that these are independent of Mdm2 flux ., However , sub-saturating doses of IR ( 10 Gy versus 0 . 5 Gy ) 52 , 53 cause only transient activation of ATM 54 , after which constitutive Mdm2 synthesis is required to restore p53 sensitivity ( Figure 4E ) ., This suggests that high Mdm2 flux is required for sensitivity to prolonged exposure to sub-saturating doses of IR ., Indeed , this inverse relationship between flux and refractory time has been observed before ., In Ba/F3 pro-B cells , high turnover of the Epo receptor maintains a linear , non-refractory response over a broad range of ligand concentrations 55 ., For NFκB , our method revealed that an isostatic reduction in the half-life of IκB sensitizes NFκB to TNF ( Figure 7A ) , as well as to ribotoxic stress agents like UV ( Figure 7B ) ., This observation agrees with previous theoretical studies using a dual kinase motif , where differential stability in the effector isoforms can modulate the dynamic range of the response 56 ., For NFκB , the flux of free IκB acts as a kinetic buffer against weak or spurious stimuli , similar to serial post-translational modifications on the T cell receptor 57 , or complementary kinase-phosphatase activities in bacterial two-component systems 58 ., In contrast , increasing the half-life of IκBα alone – without a coordinated increase in its rate of synthesis – increases the abundance of free IκBα and actually dampens the activity of NFκB in response to TNF 25 ., This difference highlights the distinction between isostatic perturbations and traditional , unbalanced perturbations that also affect the steady state abundances ., It also calls attention to a potential hazard when trying to correlate stimulus-responsiveness with protein abundance measurements: observed associations between responses and protein abundances do not rule out implied changes in kinetic 
parameters as the causal link ., Indeed , static rather than kinetic measurements are the current basis for molecular diagnosis of clinical specimens ., Thus , while nuclear expression of p53 59–66 and NFκB 67–69 have been shown to correlate with resistance to treatment in human cancer , the correlation is not infallible 40 , 70–74 ., If stimulus-responsiveness can be controlled by protein turnover independently of changes to steady state abundance , then correlations between abundance and a therapeutic response may be masked by isostatic heterogeneity between cells ., For p53 and NFκB , we show that stimulus sensitivity can be controlled by a paired positive and negative regulatory flux ., We propose that this pairing may constitute a common regulatory motif in cell signaling ., In contrast to other regulatory motifs 75 , 76 , the “flux motif” described here does not have a unique structure ., The positive p53 flux , for example , is formed by the synthesis and degradation of p53 itself , while the positive flux in the NFκB system includes the nuclear import of free NFκB and export of NFκB bound to IκB ., For p53 , the negative flux is formed by synthesis and degradation of Mdm2 , while for NFκB it is formed by the synthesis , shuttling , and degradation of cytoplasmic and nuclear IκB ., Thus the reaction structure for each flux is quite different , but they nevertheless form a regulatory motif that is common to both pathways ( Figure 8 ) ., And since the mathematical models used here are only abstractions of the underlying network , the true structures of the p53 and NFκB flux motifs are in reality even more complex ., The identification of a flux motif that controls stimulus-responsiveness independently of protein abundances may prompt experimental investigation into the role of flux in signaling ., At a minimum , this could be achieved using fluorescently-labeled activator and inhibitor proteins in conjunction with tunable synthesis and degradation mechanisms ., The
tet-responsive promoter system 77 , 78 , for example , could provide tunable synthesis , while the CLP-XP system 79 could provide tunable degradation ., For the two-dimensional analysis presented here , and to avoid confounding effects on signaling dynamics caused by shared synthesis and degradation machinery 80 , independently tunable synthesis and degradation mechanisms may be required ., If these techniques are applied to mutants lacking the endogenous regulators , this would further allow decreases in protein flux to be studied in addition to strictly increases ., Finally , in this study we have examined the effects of flux on stimulus-responsiveness , but in a typical signaling module , many other isostatic perturbations exist ., For example , the isostatic subspace of our NFκB model has 18 dimensions , of which only a few were required by the analysis presented here ., By simultaneously considering all isostatic perturbations , some measure of the dynamic plasticity of a system can be estimated , perhaps as a function of its steady-state ., Such an investigation can inform diagnosis of biological samples , and whether information from a single , static observation is sufficient to predict the response to a particular chemical treatment , or whether live-cell measurements are required as well ., As we have shown that protein turnover can be a powerful determinant of stimulus-sensitivity , we anticipate that kinetic measurements will be useful predictors of sensitivity to chemical therapeutics ., To begin , we assume that the system of interest has been modeled using mass action kinetics and that the steady state abundance of every biochemical species is a known function of input parameters ., In other words , such that ( 1 ) Equation 1 is the well-known steady-state equation; is a vector of independent parameters and is the vector of species abundances ., We use an overbar to denote a vector that satisfies Equation 1 ., For excellent reviews on mass action 
models and their limitations , see 81–83 ., For a method on finding analytical solutions to the steady state equation , see our accompanying manuscript ., Next , we wish to find a change in the input parameters , Δp , such that the resulting change in the species abundances , Δx̄ , is zero , where Δx̄ is defined as Δx̄ = x̄ ( p + Δp ) − x̄ ( p ) ., Thus for Δx̄ = 0 , we require that x̄ ( p ) = x̄ ( p + Δp ) ., The right-hand side of this equation can be approximated by a truncated Taylor series , as follows: x̄ ( p + Δp ) ≈ x̄ ( p ) + J Δp , where J is the Jacobian matrix whose elements are the partial derivatives of each species with respect to each parameter ., Thus , for Δx̄ = 0 we require that J Δp = 0 ., In other words , Δp must lie in the null space of J ., We call this the isostatic subspace of the model – parameter perturbations in this subspace will not affect any of the steady-state species abundances ., If Δp lies within the isostatic subspace , it is an isostatic perturbation vector ., Let B be a matrix whose columns form a basis for the isostatic subspace ., Then a general expression for an isostatic perturbation vector is simply Δp = B c ( 2 ) where c is a vector of unknown basis vector coefficients ., Finally , Equation 2 can be solved for a specific linear combination of basis vectors that achieves the desired perturbation ., In our case we identified those combinations that result in changes to protein turnover ., Our prototypical negative feedback model consists of two species , an activator “X” and an inhibitor “Y” , and four reactions , illustrated in Figure 1A ., Let x denote the abundance of the activator and y denote the abundance of the inhibitor ., An analytical expression for the steady-state of this model was identified by solving Equation 1 for the rates of synthesis , giving ( 3 ) ( 4 ) To parameterize the model we first let ., Degradation rate constants were then calculated such that at time , where again is the maximum amplitude of the response ., Activation was achieved by instantaneous reduction of to ., To modify the flux , we defined flux multipliers and such that and ., Note that by virtue of
Equations 3 and 4 , values for and other than result in commensurate changes in and such that steady state is preserved ., See file “pnfm . sci” in Protocol S1 for details ., Figures 2A and 2B were achieved by letting and vary over the interval , then calculating the altered vector of rate constants and simulating the models response to stimulation ., Figure 2C required letting vary over this same interval while having ., Finally , Figure 2D was achieved by letting vary over the same interval , and for each value of , numerically calculating the value of that gave ., All species , reactions , and rate equations required by our model of p53 oscillations are as previously described 17 ., Our only modification was to scale the parameter values so that the rates of p53 and Mdm2 synthesis and degradation , as well as their steady-state abundances , matched published observations ( see Table S1 ) ., Specifically we let To derive a steady-state solution for this model , we solved Equation 1 for the steady-state abundance of Mdm2 and the rate of Mdm2-independent p53 degradation , giving To simulate the response to ionizing radiation we used the ( scaled ) stimulus given in 17 ., Namely , at time we let the rate of Signal production , , go to ., This stimulus was either maintained indefinitely ( Figure 4A–C ) or for just 2 hours , followed by 22 hours of rest , followed by a second 2 hour stimulation ( Figure 4D–F ) ., Changes in p53 or Mdm2 flux were achieved as above , by defining modifiers and such that ( 5 ) ( 6 ) ( 7 ) Prior to stimulation , we let one modifier take values on the interval while holding the other modifier constant ., Equations 6 and 7 ensure that the p53-independent flux of Mdm2 is modified without affecting its steady-state abundance ., Equation 5 , which is slightly more complicated , results in changes to the rate of Mdm2-independent p53 degradation , , by modifying the independent parameter , which controls the rate of p53 synthesis ., This yields 
the desired Numerical integration was carried out to time ., After each integration , we defined to be the minimum vertical distance between any adjacent peak and trough in phosphorylated p53 , and and to be the amplitudes of the first and second peak , respectively ., Details of this model can be found in the file “p53b . sci” in Protocol S1 ., For more information on the time delay parameters and , and their role in generating oscillations , see 84 , 85 ., Our model of NFκB activation is similar to the one described in 13 , except the beta and epsilon isoforms of IκB have been removed ., Our model has 10 species and 26 reactions , the majority of which are illustrated in Figure 5A ., Rate equations and parameter values are identical to those in 13 ., An analytical expression for the steady-state of this model was found by solving Equation 1 for the following dependent variables: , , , , and , and the rate constants , , and ., The precise expressions for these variables are extremely cumbersome but may be found in their entirety in the file “nfkb . sci” in Protocol S1 ., Activation of NFκB is achieved by either of two , time-dependent numerical input variables , and ., modifies the activity of IKK while modifies the efficiency of IκB translation ., Both have a finite range of and have unstimulated , wildtype values of and , respectively ., The inflammatory stimulus TNF is modeled using a unique function of derived from in vitro kinase assays 30 ., Since these assays only measured IKK activity out to 4 hours , we extended each stimulus by assuming the value of at 4 hours is maintained out to 24 hours ., Justification for this can be found in the 24-hour kinase assays in 86 , which shows no IKK activity between 8 and 24 hours after TNF stimulation ., UV stimulation is modeled using a step decrease in the value of from 1 . 0 to 0 . 
5 for the entire 24 hours ., This mimics the 50% reduction in translational efficiency observed in 13 ., Steady-state analysis of this model revealed that over 99% of all IκB was degraded via either of two pathways , futile ( 92% ) and productive ( 7% ) ., See Figure 5B for the composition of these pathways ., To modify the flux through either pathway without altering any of the steady-state abundances , the algebraic method described above proved absolutely necessary . | Introduction, Results, Discussion, Methods | Stimulus-induced perturbations from the steady state are a hallmark of signal transduction ., In some signaling modules , the steady state is characterized by rapid synthesis and degradation of signaling proteins ., Conspicuous among these are the p53 tumor suppressor , its negative regulator Mdm2 , and the negative feedback regulator of NFκB , IκBα ., We investigated the physiological importance of this turnover , or flux , using a computational method that allows flux to be systematically altered independently of the steady state protein abundances ., Applying our method to a prototypical signaling module , we show that flux can precisely control the dynamic response to perturbation ., Next , we applied our method to experimentally validated models of p53 and NFκB signaling ., We find that high p53 flux is required for oscillations in response to a saturating dose of ionizing radiation ( IR ) ., In contrast , high flux of Mdm2 is not required for oscillations but preserves p53 sensitivity to sub-saturating doses of IR ., In the NFκB system , degradation of NFκB-bound IκB by the IκB kinase ( IKK ) is required for activation in response to TNF , while high IKK-independent degradation prevents spurious activation in response to metabolic stress or low doses of TNF ., Our work identifies flux pairs with opposing functional effects as a signaling motif that controls the stimulus-sensitivity of the p53 and NFκB stress-response pathways , and may constitute 
a general design principle in signaling pathways . | Eukaryotic cells constantly synthesize new proteins and degrade old ones ., While most proteins are degraded within 24 hours of being synthesized , some proteins are short-lived and exist for only minutes ., Using mathematical models , we asked how rapid turnover , or flux , of signaling proteins might regulate the activation of two well-known transcription factors , p53 and NFκB ., p53 is a cell cycle regulator that is activated in response to DNA damage , for example , due to ionizing radiation ., NFκB is a regulator of immunity and responds to inflammatory signals like the macrophage-secreted cytokine , TNF ., Both p53 and NFκB are controlled by at least one flux whose effect on activation is positive and one whose effect is negative ., For p53 these are the turnover of p53 and Mdm2 , respectively ., For NFκB they are the TNF-dependent and -independent turnover of the NFκB inhibitor , IκB ., We find that juxtaposition of a positive and negative flux allows for precise tuning of the sensitivity of these transcription factors to different environmental signals ., Our results therefore suggest that rapid synthesis and degradation of signaling proteins , though energetically wasteful , may be a common mechanism by which eukaryotic cells regulate their sensitivity to environmental stimuli . | cellular stress responses, signaling networks, mathematics, stress signaling cascade, regulatory networks, biology, nonlinear dynamics, systems biology, biochemical simulations, signal transduction, cell biology, computational biology, molecular cell biology, signaling cascades | null |
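The isostatic-perturbation procedure laid out in the Methods above (solve the steady-state equation, form the Jacobian of the steady-state abundances with respect to the parameters, and take its null space as the isostatic subspace of Equation 2) can be sketched numerically. A minimal sketch in Python/NumPy follows; the two-species rate laws below are illustrative assumptions chosen for demonstration, not the published model equations, which are provided in Protocol S1.

```python
import numpy as np

# Illustrative activator/inhibitor module (activator X, inhibitor Y):
#   dX/dt = sx - dx*X*Y   (X degraded in a Y-dependent manner)
#   dY/dt = sy*X - dy*Y   (X induces its own inhibitor)
# These rate laws are assumptions for demonstration only. Setting both
# derivatives to zero gives the steady state in closed form:
def steady_state(p):
    sx, dx, sy, dy = p
    X = np.sqrt(sx * dy / (dx * sy))
    Y = np.sqrt(sx * sy / (dx * dy))
    return np.array([X, Y])

p0 = np.array([1.0, 1.0, 1.0, 1.0])   # (sx, dx, sy, dy)
xbar = steady_state(p0)

# Jacobian of the steady-state abundances with respect to the parameters,
# by central finite differences over the four rate constants.
eps = 1e-6
J = np.zeros((2, 4))
for j in range(4):
    dp = np.zeros(4)
    dp[j] = eps
    J[:, j] = (steady_state(p0 + dp) - steady_state(p0 - dp)) / (2 * eps)

# The isostatic subspace is the null space of J; the columns of B form a
# basis for it (the matrix B of Equation 2 in the text).
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-8 * s[0]))
B = Vt[rank:].T

# Any perturbation dp = B @ c leaves every steady-state abundance unchanged;
# for this model it rescales synthesis and degradation together, i.e. it
# alters flux without altering abundance.
c = np.ones(B.shape[1])
dp = 0.1 * (B @ c)
assert np.allclose(steady_state(p0 + dp), xbar, atol=1e-6)
```

The final assertion checks the defining property of an isostatic perturbation vector: a parameter change drawn from the null-space basis changes turnover while leaving both steady-state abundances untouched.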
2,446 | journal.ppat.1007818 | 2,019 | Clonorchis sinensis excretory-secretory products increase malignant characteristics of cholangiocarcinoma cells in three-dimensional co-culture with biliary ductal plates | Cholangiocarcinoma ( CCA ) is an aggressive malignancy of the bile duct epithelia associated with local invasiveness and a high rate of metastases ., It is the second most common primary hepatic tumor after hepatocellular carcinoma , which is considered to be a highly lethal cancer with a poor prognosis due to the difficulty in accurate early diagnosis 1 ., There are several established risk factors for CCA , including primary cholangitis , biliary cysts and hepatolithiasis 2 ., Another critical factor is infection with the liver flukes Opisthorchis viverrini and Clonorchis sinensis , resulting in the highest incidences of CCA being in Southeast Asian countries 3 ., The proposed mechanisms of liver fluke-associated cholangiocarcinogenesis include mechanical damage to bile duct epithelia resulting from the feeding activities of the worms , infection-related inflammation , and pathological effects from their excretory-secretory products ( ESPs , consisting of a complex mixture of proteins and other metabolites ) 4 ., These coordinated actions provoke epithelial desquamation , adenomatous hyperplasia , goblet cell metaplasia , periductal fibrosis , and granuloma formation , all contributing to the production of a conducive tumor microenvironment ., Eventually , malignant cholangiocytes undergo uncontrolled proliferation that leads to the initiation and progression of CCA 5 ., Like other parasitic helminths , liver flukes release ESPs continuously during infection , in this case into bile ducts and surrounding liver tissues ., These substances play pivotal roles in host–parasite interactions 6 ., Exposure of human CCA cells and normal biliary epithelial cells to liver fluke ESPs results in diverse pathophysiological responses , including proliferation and
inflammation 7 , 8 ., Additionally , profiling of differential cancer-related microRNA ( miRNA ) expression has revealed that the miRNAs involved in cell proliferation and the prevention of tumor suppression are dysregulated in both CCA cells and normal cholangiocytes exposed to C . sinensis ESPs 9 ., These results suggest that there are ESP-responsive pathologic signal cascades that are common to both cancerous and non-cancerous bile duct epithelial cells ., Another aspect of carcinogenic transformation is the tissue microenvironment , which consists of the extracellular matrix ( ECM ) and surrounding cells and is a crucial factor in the regulation of cancer cell motility and malignancy 10 ., The diverse responses of tumor cells , cholangiocytes , and immune cells in the CCA microenvironment cooperatively affect cancer progression , including invasion and/or metastasis 11 ., Chronic inflammation of the bile duct due to the presence of liver flukes is also closely associated with the development of CCA , because it causes biliary epithelial cells to produce various cytokines and growth factors including interleukin-6 , -8 ( IL-6 , -8 ) , transforming growth factor-β ( TGF-β ) , tumor necrosis factor-α ( TNF-α ) , platelet-derived growth factor and epithelial growth factor 12 ., Exposure to cytokines and growth factors induces their endogenous production by CCA cells through a crosstalk loop , enhancing malignant features such as invasion , metastasis , chemoresistance and epithelial-mesenchymal transition ( EMT ) 13 ., Cytokines driven by chronic inflammation contribute to the pathogenesis of CCA and should be collectively considered in studies on the tumor microenvironment ., We previously established a three-dimensional ( 3D ) cell culture assay that contains a gradient of C .
sinensis ESPs , and these CCA cells could only invade the type I collagen ( COL1 ) hydrogel scaffold in response to ESP gradient treatment ., This response was accompanied by an elevation of focal adhesion protein expression and the secretion of matrix metalloproteinase ( MMP ) isoforms 14 , suggesting that C . sinensis ESPs may promote CCA progression ., Additionally , this study revealed the chemoattractant effect of C . sinensis ESP gradients for CCA cells; to expand this work , we explored the more complex tumor microenvironment subjected to ESPs from C . sinensis ., In the present study , we developed an in vitro clonorchiasis-associated tumor microenvironment model that consisted of the following factors: ( 1 ) a 3D culture system of normal cholangiocytes using a microfluidic device as 3D quiescent biliary ductal plates on ECM; ( 2 ) physiological co-culture of CCA cells with normal cholangiocytes coupled to the directional application of C . sinensis ESPs to reconstitute a 3D CCA microenvironment; and ( 3 ) visualization and assessment of the interactions between tumor cells and their microenvironments to determine how the malignant progression of CCA corresponds with carcinogenic liver fluke infestation ( Fig 1 ) ., To reconstitute the microenvironment of a normal bile duct on an ECM , H69 cells were cultured three-dimensionally on a COL1 hydrogel within a microfluidic device ., The cells formed an epithelial layer and sprouted 3-dimensionally into the hydrogel one day after seeding ( Fig 2A ) ., The sprouts formed 3D tube-like structures resembling newly-developed small bile ducts ( Fig 2A and 2B ) ., This morphological change can be referred to as cholangiogenesis , that is , hepatic neoductule formation from an existing biliary ductal plate 15 ., The sprouting was suppressed in this study to form quiescent mature biliary ductal plates by varying the composition of the culture medium , namely complete , fetal bovine serum-free ( FBS ( - ) ) , and
FBS-free/epidermal growth factor-depleted ( FBS ( - ) /EGF ( - ) ) ., In complete culture medium , H69 cells dynamically sprouted and expanded into the COL1 hydrogel and the boundary between the biliary ductal plate and the COL1 hydrogel ( Fig 2C , Day 4 ) moved far from the initial cell seeding point ( Fig 2C , Day 1 ) ., In the absence of FBS and EGF , cholangiogenesis decreased dramatically ( Fig 2D ) ., Additionally , the H69 cells in FBS-free/EGF-depleted medium were in G0 phase ( Fig 2E ) and expressed a basolateral polarity marker ( Integrin α6 ) along the region of the COL1 hydrogel scaffold that was in contact with the cell layer ( Fig 2F ) 16 ., Therefore , we designated this cluster of H69 cells as representing a quiescent 3D biliary ductal plate ., The mechanical properties of the COL1 hydrogel were modulated by altering the initial pH or concentration to identify other factors that could suppress the H69 cell sprouting ., When the pH of the COL1 solution prior to gelation was basic ( pH 11 ) , the resulting hydrogel was stiffer than one gelled at pH 7 . 4 and one produced with a high concentration ( 2 . 5 mg/mL ) 17 ., H69 cells on stiffer COL1 hydrogel showed more numerous sprouts with larger surface areas ( Fig 3A and 3B ) ., H69 cells cultured on normal COL1 hydrogels ( 2 . 0 mg/mL and pH 7 . 4 ) and in FBS ( - ) /EGF ( - ) medium formed a quiescent biliary ductal plate ., To mimic C . sinensis infestation , ductal plates were treated with ESPs ( 4 μg/mL ) by either application to the channel containing the H69 cells ( direct application ) or to the other channel of the microfluidic device ( gradient application ) ., After gradient application , ESPs diffused through the COL1 hydrogel and toward the basal side of the biliary duct plate , forming a complex concentration profile ., Computational simulation results showed that after 24 hours ESP concentration reached 3 μg/mL at the apical side of the biliary ductal plate ( 2~2 . 
5 μg/mL at basal ) upon direct application , and 1 . 5~2 μg/mL at the basal side of the biliary ductal plate ( 1 μg/mL at apical ) upon gradient application ( Fig 4A ) ., Based on the observation that the H69 cells produced three stratified layers and each layer is 10 μm thick , ~40% of the local ESP concentration difference was applied across a single HuCCT1 cell that entered the biliary ductal plate under gradient application ., The 3D biliary ductal plate was stably maintained and remained healthy after either type of ESP treatment , and neither treatment induced cholangiogenesis ( Fig 4B and 4C ) ., HuCCT1 CCA cells labeled with GFP were seeded onto the apical side of the 3D quiescent biliary ductal plate formed by H69 cells under the culture conditions defined above ., The HuCCT1 cells were then exposed to ESPs ( direct or gradient ) for 3 days ., After the ESP treatment , the HuCCT1 cells actively invaded the biliary ductal plate and reached the COL1 hydrogel ( Fig 5A ) ., After gradient ESP application , 1 . 71-fold and 1 . 85-fold more HuCCT1 cells invaded the biliary ductal plate compared to the non-treated control , or those treated with direct application , respectively ( Fig 5B and 5D ) ., Interestingly , the number of individualized HuCCT1 cells in the biliary duct layer and COL1 hydrogel was similar , both after gradient and direct treatment ( Fig 5C and 5D ) ., It has been reported that elevated plasma levels of IL-6 and TGF-β1 are correlated with histopathological changes in the livers of C .
sinensis-infected mice 18 , 19 ., Moreover , interaction of these cytokines appears to be associated with an increased malignancy of CCA cells 13 ., These findings prompted us to examine whether IL-6 and TGF-β1 were involved in invasion and migration of HuCCT1 cells in our system ., First , we measured IL-6 and TGF-β1 levels in the culture supernatants of ESP-treated H69 cells using ELISA , and found that secretion of both IL-6 and TGF-β1 was significantly elevated at 12 hours post-ESP treatment , compared to the non-treated control ( Fig 6A ) ., Elevated secretion of IL-6 was maintained at 24 hours and increased further by 48 hours , while the TGF-β1 secretion level increased in a time-dependent manner ., To assess the crosstalk of IL-6 and TGF-β1 from H69 cells with co-cultured HuCCT1 cells , the induction of IL-6 and TGF-β1 in ESP-treated H69 cells was attenuated by means of small interfering ( si ) RNA transfection ., The culture supernatants from each of four groups of 48 hour-ESP-treated H69 cells ( transfected with scrambled-oligonucleotide siRNA or with siRNAs for IL-6 , TGF-β1 or both ) were substituted for 24 hour-ESP-treated medium in HuCCT1 cell cultures ., Then , these HuCCT1 cells were incubated further for 48 hours and their culture supernatants were analyzed using ELISA ., The levels of both ESP-induced IL-6 and TGF-β1 secretion by HuCCT1 cells , as well as H69 cells , were significantly decreased in the respective siRNA transfectants , when compared with those of untransfected and scrambled siRNA-transfected
cells ( Fig 6B ) ., Moreover , a greater reduction in the secretion of these cytokines was observed when using the supernatant from cells treated with siRNAs for both IL-6 and TGF-β1 than when using ones from cells treated with either siRNA alone , suggesting that an IL-6/TGF-β1 autocrine/paracrine signaling network may be in effect between non-cancerous and cancerous co-cultured cells ., Next , we examined ESP-mediated changes in E- and N-cadherin expression in HuCCT1 and H69 cells; these are , respectively , epithelial and mesenchymal markers that are regarded as functionally significant factors in cancer progression ., Decreased amounts of immunoreactive E-cadherin were detected in HuCCT1 cells at 24 hours post-ESP treatment , and this decreased further at 48 hours ., Increased expression of N-cadherin was obvious at 24 hours post-ESP treatment and was maintained up to 48 hours ( Fig 7A ) ., However , in H69 cells , the expression of E-cadherin was significantly elevated at 24 hours and increased further subsequently , while there was no substantial change in N-cadherin expression during the same period of ESP treatment ( Fig 7B ) ., This suggests that ESPs may contribute to facilitating EMT-like changes only in HuCCT1 cells , leading to the promotion of migration/invasion ., Finally , we evaluated the involvement of IL-6 and TGF-β1 in the cadherin switching of HuCCT1 cells treated with culture supernatants from the siRNA-treated cells described above ., Silencing of IL-6 and TGF-β1 markedly attenuated the reduction of E-cadherin and the elevation of N-cadherin expression induced by ESPs ., The levels of E- and N-cadherin expression in HuCCT1 cells treated with the double-silencing supernatant were almost the same as those of the non-treated control ( Fig 7C ) , indicating that the IL-6 and TGF-β1 expression induced by the ESPs contributed to EMT progression in HuCCT1 cells ., The microfluidic model of a CCA tumor microenvironment used in this study consisted of a
quiescent 3D biliary ductal plate formed by H69 cells ( cholangiocytes ) on a COL1 ECM that has been stimulated by C . sinensis ESPs ., HuCCT1 cells ( CCA cells ) responded to microenvironmental factors actively by proliferating , migrating and invading the 3D biliary ductal plate and passing into the neighboring ECM ., HuCCT1 cells exhibited different cellular behaviors when co-cultured on the biliary duct layer , compared to when they were cultured on ECM alone , as described previously 14 ., As a CCA tumor microenvironment factor , characteristics of normal cholangiocytes were carefully investigated , and compared with previous reports 20 ., Ishida Y et al . reported ductular morphogenesis and functional polarization of human biliary epithelial cells when embedded three-dimensionally in a COL1 hydrogel 21 ., Tanimizu N et al . also reported the development of a 3D tubular-like structure during the differentiation of mouse liver progenitor cells 16 , 22 ., However , these traditional dish-based culture platforms only generated 3D tube-like structures whose apical-basal polarities differed from those observed in vivo , and which were unsuitable for co-culturing with CCA cells to monitor tumor malignancy changes upon invasion/migration in a tumor milieu ., In the microfluidic 3D culture platform described here , H69 cells formed a cholangiocyte layer and sprouted into the COL1 hydrogel ., This mimicked an asymmetrical ductal structure at the parenchymal layer on the portal vein side and a primitive ductal structure during the early stage of biliary tubulogenesis during cholangiogenesis 15 ., H69 cholangiocytes lining small bile ducts are layer-forming biliary epithelial cells with a potential proliferative capacity , but under normal conditions are quiescent or in the G0 state of the cell cycle 23 ., The mechanical and biochemical properties of the ECM and culture medium within the microenvironment of the biliary epithelium were characterized and shown to be
conducive to the formation of a stable 3D biliary ductal plate and primitive ductal structure, which are crucial steps in cholangiogenesis. The reconstituted biliary ductal plate on the ECM formed a 3D CCA tumor microenvironment, which was then seeded with CCA tumor cells and treated with C. sinensis ESPs. Many publications have described the co-culture of tumor cells with stromal cells (mainly fibroblasts) and reported upregulated tumor cell malignancy; however, only a few of these studied CCA cells [24]. One study co-cultured various CCA cells (HuCCT1 and MEC) with hepatic stellate cells as a CCA stroma and reported increased invasion and proliferation by CCA cells [25]. To our knowledge, this is the first attempt to construct a co-culture system that facilitates the direct contact of normal and CCA tumor cells from a single type of tissue; both cell types were cultured separately and then combined to reproduce the pathophysiological effect. Therefore, our work describes an advanced method for orchestrating complex CCA microenvironments, especially in 3D. The first components of the CCA microenvironment are growth factors, which are present in the culture medium and are candidates for promoting the proliferation, differentiation and migration of cholangiocytes. FBS should preferentially be excluded to arrest the cells in a quiescent state in order to assess the direct effects of C.
sinensis ESPs on CCA malignancy. Although the precise effect of EGF on normal cholangiocytes remains to be elucidated, key roles of EGF in biliary duct development and cholangiocyte differentiation, including cholangiogenesis and neoductule formation from an existing biliary ductal plate, have been reported [16,26]. Additionally, signaling via EGF and its receptor (EGFR) facilitates the progression of hepato-cholangiocellular cancer [27,28]. Thus, we excluded EGF from our 3D co-culture model system because the complex and diverse roles of EGF might mask direct ESP-dependent effects on the CCA microenvironment. The second component of the microenvironment was the COL1 ECM. The mechanical properties of the COL1 hydrogel, such as fibril diameter and stiffness, can be altered by controlling the collagen concentration or adjusting the pH of the collagen solution prior to gel casting. High pH reduces the diameter of COL1 nanofibers after gelation, and this predominantly increases the stiffness of the COL1 hydrogel; the linear modulus of a COL1 hydrogel produced at pH 11 and 2.0 mg/mL is 2.7-fold and 3.1-fold higher than those produced at pH 7.4 and 2.5 mg/mL and at pH 7.4 and 2.0 mg/mL, being approximately 53 kPa, 20 kPa and 17 kPa, respectively [17]. It has been reported that hepatobiliary cells express diverse cellular behaviors with respect to proliferation, differentiation, adhesion and migration under stiff ECM conditions, with close associations with development, homeostasis and disease progression [29,30]. The morphological changes in H69 cells on stiff COL1 probably reflect the highly-activated proliferation of cholangiocytes (ductular reaction) in liver fibrosis, and the "atypical" proliferation of cholangiocytes commonly seen in patients with prolonged cholestatic liver diseases, such as primary sclerosing cholangitis or primary biliary cirrhosis [31,32]. C.
sinensis ESPs were the third component of the CCA microenvironment considered in this study, and ESP stimuli induced 3D morphological changes in the biliary ductal plate. Some H69 cells in the 3D biliary ductal plate grown on a COL1 hydrogel interacted with it by sprouting; however, the majority maintained the layer structure throughout the experimental period, independent of direct or gradient ESP application (Fig 4B and 4C). While no obvious changes in N-cadherin expression in H69 cells were observed, the expression of E-cadherin in H69 cells increased gradually in a time-dependent manner during the experiments (Fig 7B), implying that the ESPs may cause H69 cells to exhibit more epithelial characteristics. However, we do not rule out the possibility that more intense stimulation, such as a higher dose of ESPs and/or longer exposure times, could produce EMT-like effects in H69 cells. We determined that ESPs are implicated in the acquisition of malignant CCA characteristics, namely increased invasion and migration. Single cell invasion by HuCCT1 cells was similarly increased by both direct and gradient ESP application, while migration increased significantly upon gradient application only. The concentration profile produced by computational simulation explained these differential effects on HuCCT1 cells; the average concentration of ESPs over the entire area of the 3D biliary ductal plate was estimated at over 1.5 μg/mL, and the H69 cells forming the biliary ductal plate were exposed to high concentrations of ESPs (over 800 ng/mL), sufficient to induce significantly increased levels of IL-6 and TGF-β1 (Fig 4A, red dotted line). In contrast, the concentration of ESPs over the channel where HuCCT1 cells were seeded was considerably higher after direct application versus gradient application; yet, the magnitude of the effect on both migration and invasion was smaller (Fig 5). These results suggest multiple pathological effects of ESPs in the CCA microenvironment, such as the stimulation of normal tissues near the CCA and the chemoattraction of CCA cells. It has been reported previously that C. sinensis ESP-triggered CCA cell migration/invasion is mediated by the ERK1/2-NF-κB-MMP-9 and integrin β4-FAK/Src pathways, suggesting that ESPs may function as detrimental modulators of the aggressive progression of liver fluke-associated CCA [33,34]. In the present study, the morphological features of HuCCT1 cells exposed to ESPs in 3D co-culture with H69 cells differed from those of HuCCT1 cells cultured alone. Co-cultured HuCCT1 cells exhibited increased motility, as represented by single cell invasion, while HuCCT1 cells exhibited aggregation in the 3D culture [14]. This implies that the interaction between HuCCT1 and H69 cells contributes to a change in HuCCT1 cell phenotype. Cytokines generated by various types of cells within the tumor microenvironment play pro- or anti-tumorigenic roles, depending on the balance of different immune mediators and the stage of tumor development [35]. During liver fluke infection, chronically-inflamed epithelia are under constant stimulation to participate in the inflammatory response through continuous secretion of chemokines and cytokines. This creates a vulnerable microenvironment that may promote malignant transformation and even cholangiocarcinogenesis. IL-6 is considered a proinflammatory cytokine that typically has
pro-tumorigenic effects during infection. Liver cell lines, including H69 cells, preferentially take up O. viverrini ESPs by endocytosis, resulting in proliferation and increased secretion of IL-6 [7]. Elevated plasma concentrations of IL-6 are associated with a significant dose-dependent increase in the risk of opisthorchiasis-associated advanced periductal fibrosis and CCA [36]. The TGF-β-mediated signaling pathway is involved in all stages of liver disease progression, from initial inflammation-related liver injury to cirrhosis and hepatocellular carcinoma [37]. A crude antigen from C. sinensis differentiates RAW macrophage cells into dendritic-like cells and upregulates ERK-dependent secretion of TGF-β, which modulates the host's immune responses [38]. C. sinensis infection activates TGF-β1/Smad signaling, promoting fibrosis in the livers of infected mice [19]. Additionally, it has been reported that the E/N-cadherin switch via TGF-β-induced EMT is correlated with the cancer progression of CCA cells and the survival of patients with extrahepatic CCA [39,40]. Consistent with these studies, we observed that the decreased E-cadherin and increased N-cadherin expression in ESP-exposed HuCCT1 cells (Fig 7A) was associated with increased secretion of IL-6 and TGF-β1 by H69 cells (Fig 6A) as well as by HuCCT1 cells, as reported previously [41]. The cytokine-mediated interaction between H69 and HuCCT1 cells was evaluated by means of siRNAs; the levels of IL-6 and TGF-β1 secretion were suppressed in the culture supernatants of siRNA-IL-6 and siRNA-TGF-β1 H69 transfectants (Fig 6B). The suppression of these cytokines was correlated with an impairment of the change in E-/N-cadherin expression in HuCCT1 cells triggered by the ESPs (Fig 7C). This suggests that the local accumulation of these cytokines, as the result of constitutive and dysregulated secretion by both cell types, promotes a more aggressive pathogenic process in the tumor milieu. Therefore
, it is tempting to speculate that ESPs facilitate a positive feedback loop of elevated inflammatory cytokine secretion in both non-cancerous and cancerous cells, triggering an E/N-cadherin switch in HuCCT1 cells that subsequently increases invasion and/or migration mediated by the EMT. We will conduct future studies to explore this possibility. In conclusion, HuCCT1 cells exhibited elevated single cell invasion after both direct and gradient ESP application, with increased migration occurring only after gradient treatment (ESPs applied to the basal side). These changes were caused by coordinated interactions between normal cholangiocytes, CCA cells and C. sinensis ESPs, which resulted in increased secretion of IL-6 and TGF-β1 and a cadherin switch in ESP-exposed cells. Therefore, the combined effects of these detrimental stimulations of both cancerous and non-cancerous bile duct epithelial cells during C. sinensis infection may facilitate a more aggressive phenotype of CCA cells, such as invasion/migration, resulting in more malignant characteristics of the CCA tumor. Our findings broaden our understanding of the molecular mechanism underlying the progression of CCA caused by liver fluke infection. These observations provide a new basis for the development of chemotherapeutic strategies to control liver fluke-associated CCA metastasis and thereby help to reduce its high mortality rate in endemic areas. Cell culture medium components were purchased from Life Technologies (Grand Island, NY), unless otherwise indicated. Polyclonal antibodies against the following proteins were purchased from the indicated sources: Ki-67 and integrin α6 (Abcam, Cambridge, UK); E-cadherin (BD Biosciences, San Jose, CA); N-cadherin (Santa Cruz Biotechnology, Santa Cruz, CA); glyceraldehyde-3-phosphate dehydrogenase (GAPDH; AbFrontier Co.
, Seoul, Korea). Horseradish peroxidase (HRP)-conjugated secondary antibodies were obtained from Jackson ImmunoResearch Laboratory (West Grove, PA). All other chemicals were obtained from Sigma-Aldrich (St. Louis, MO). Human HuCCT1 cholangiocarcinoma cells (originally established by Miyagiwa et al. in 1989 [42]) were maintained in RPMI 1640 medium supplemented with 1% (v/v) penicillin/streptomycin and 10% FBS. Human H69 cholangiocyte cells, which are SV40-transformed bile duct epithelial cells derived from non-cancerous human liver [43], were kindly provided by Dr. Dae Ghon Kim of the Department of Internal Medicine, Chonbuk National University Medical School, Jeonju, Korea. H69 cells were grown in DMEM/F12 (3:1) containing 10% FBS, 100 U/mL penicillin, 100 μg/mL streptomycin, 5 μg/mL insulin, 5 μg/mL transferrin, 2.0 × 10⁻⁹ M triiodothyronine, 1.8 × 10⁻⁴ M adenine, 5.5 × 10⁻⁶ M epinephrine, 1.1 × 10⁻⁶ M hydrocortisone, and 1.6 × 10⁻⁶ M EGF. Both cell types were cultured at 37°C in a humidified atmosphere containing 5% CO2. Clonal cell lines that stably expressed green fluorescent protein (GFP) were generated by transfection of HuCCT1 cells. Briefly, HuCCT1 cells were grown to ~70% confluence and transfected for 24 hours using Lipofectamine 2000 (Invitrogen, Carlsbad, CA) and a pGFP-C1 vector (Clontech Laboratories, Inc., Palo Alto, CA). To generate stable lines, the cells were cultured for 3 weeks in complete medium containing 1 mg/mL G418 disulfate salt (Sigma-Aldrich) that was changed every 2–3 days. Colonies with uniform GFP fluorescence were screened, and two clonal cell lines with approximately similar levels of GFP overexpression were chosen for further experiments. Adult C.
sinensis specimens for the preparation of ESPs were obtained as adult worms from infected, sacrificed New Zealand albino rabbits. Animal care and experimental procedures were performed in strict accordance with the national guidelines outlined by the Korean Laboratory Animal Act (No. KCDC-122-14-2A) of the Korean Centers for Disease Control and Prevention (KCDC). The KCDC-Institutional Animal Care and Use Committee (KCDC-IACUC)/ethics committee reviewed and approved the ESP preparation protocols (approval identification number KCDC-003-11). The ESPs from C. sinensis adult worms were prepared as described previously [41]. Briefly, adult worms were recovered from the bile ducts of male New Zealand albino rabbits (12 weeks old) orally infected with ~500 metacercariae 12 weeks earlier. Worms were washed several times with cold phosphate-buffered saline (PBS) to remove any host contaminants. Five fresh worms were cultured in 1 mL of prewarmed PBS containing a mixture of antibiotics and protease inhibitors (Sigma-Aldrich) for 3 hours at 37°C in a 5% CO2 environment. The culture fluid was then pooled, centrifuged, concentrated with a Centriprep YM-10 (Merck Millipore, Billerica, MA) membrane concentrator, and filtered through a sterile 0.2-μm syringe membrane. After measuring the ESP protein concentration, the aliquots were stored at −80°C until use. The microfluidic device was prepared as described previously [14]. Briefly, the microfluidic devices were produced by curing polydimethylsiloxane (PDMS, Sylgard 184, Dow Chemical, Midland, MI) overnight on a micro-structure-patterned wafer at 80°C. The device was punched to produce ports for the hydrogel and cell suspension injections. After sterilization, the device and a glass coverslip (24 × 24 mm; Paul Marienfeld, Germany) were permanently bonded to each other, and the surfaces of the microchannels in the device were coated with poly-D-lysine by treatment with a 1 mg/mL solution. The devices were stored under sterile conditions until use. The gel region of the microfluidic device was filled with an unpolymerized COL1 solution (2.0 mg/mL, pH 7.4) and then placed in a 37°C humidified chamber to polymerize the hydrogel. EGF-depleted H69 medium containing 1% FBS was injected into the medium channels to prevent shrinkage of the COL1 hydrogel, and the devices were stored at 37°C in a 5% CO2 incubator until cell seeding. H69 cells (5 × 10⁵ cells) suspended in conditional medium (FBS-free, EGF-depleted) were loaded into one medium port. After the cell suspension had filled a medium channel by hydrostatic flow, the device was positioned vertically for 2 hours at 37°C in a 5% CO2 incubator to allow the cells to attach to the COL1 hydrogel wall by gravity. One day after seeding with H69 cells, HuCCT1-GFP cells suspended in conditional medium at 10 × 10⁵ cells/mL were seeded into the cell channel in a manner identical to the H69 cells. ESPs were diluted in conditional medium to a concentration of 4 μg/mL and then added either to the cell channel (direct application) or to the medium channel (gradient application). The medium was replaced every day with fresh conditional medium supplemented with ESPs (Fig 1B).
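The contrast between direct and gradient application can be illustrated with a minimal one-dimensional diffusion sketch: under gradient application, ESPs loaded at 4 μg/mL in the medium channel diffuse across the gel toward the opposite channel. The gel width, ESP diffusivity, and perfect-sink boundary below are illustrative assumptions for a generic two-channel device, not the geometry or parameters of the simulation used in this study.

```python
import numpy as np

# Hypothetical geometry and transport parameters (not the authors' model).
C0 = 4.0       # ug/mL, ESP concentration in the source (medium) channel
L = 1.0e-3     # m, assumed gel width between the two channels
D = 1.0e-10    # m^2/s, assumed diffusivity of ESP proteins in the hydrogel

nx = 101                       # grid points across the gel
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D           # satisfies the explicit FTCS stability limit
c = np.zeros(nx)
c[0] = C0                      # gel face at the source channel

# March the explicit finite-difference scheme to (near) steady state.
for _ in range(100_000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = C0, 0.0      # fixed source / perfect sink boundaries

# At steady state the profile is linear, so a point a fraction f across
# the gel sees roughly C0 * (1 - f).
print(c[nx // 2])              # midpoint, ~C0 / 2
```

Under this toy model, cells sitting partway across the gel are exposed to a sustained intermediate concentration, whereas direct application holds the seeded channel itself near the loading concentration, which is consistent with the qualitative distinction between the two application modes described above.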
H69 cells cultured in a microfluidic device were washed twice with PBS and fixed with a 4% paraformaldehyde solution for 30 minutes. A 0.1% Triton X-100 solution was applied for 10 minutes to permeabilize the cell membranes. The cells were incubated with 1% bovine serum albumin and primary antibodies against Ki-67 or integrin α6 (1:1000 dilution), followed by an Alexa Fluor 488 secondary antibody (1:1000 dilution; Invitrogen). After staining with 4′,6-diamidino-2-phenylindole (DAPI, 1:1000 dilution, Invitrogen) and rhodamine phalloidin (to stain F-actin, 1:200 dilution, Invitrogen), the cells were examined with a confocal laser-scanning microscope (LSM700; Carl Zeiss, Jena, Germany) and a fluorescence microscope (Axio Observer Z1; Carl Zeiss, Jena, Germany). We used siRNAs (Ambion Silencer Select; Thermo Fisher Scientific, Waltham, MA) against IL-6 and TGF-β1, with a scrambled oligonucleotide as a negative control. H69 or HuCCT1 cells were seeded on 24-well culture plates and transiently transfected with one or both target-specific siRNAs using Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's protocols. Each siRNA transfection was performed in quadruplicate. After 24 hours, the transfection mixture on the cells was replaced with fresh culture medium. At 60 hours after transfection, H69 cells were gradually depleted of FBS, followed by incubation in conditional medium supplemented with 800 ng/mL ESPs for 48 hours. The culture supernatants from the H69 cells were collected and clarified by brief centrifugation. Then, the 24-hour ESP (800 ng/mL)-treated medium of HuCCT1 cell | Introduction, Results, Discussion, Materials and methods | Clonorchis sinensis is a carcinogenic human liver fluke; prolonged infection provokes chronic inflammation, epithelial hyperplasia, periductal fibrosis, and even cholangiocarcinoma (CCA). These effects are driven by direct physical damage caused
by the worms, as well as chemical irritation from their excretory-secretory products (ESPs) in the bile duct and surrounding liver tissues. We investigated the C. sinensis ESP-mediated malignant features of CCA cells (HuCCT1) in a three-dimensional microfluidic culture model that mimics an in vitro tumor microenvironment. This system consisted of a type I collagen extracellular matrix, applied ESPs, GFP-labeled HuCCT1 cells and quiescent biliary ductal plates formed by normal cholangiocytes (H69 cells). HuCCT1 cells were attracted by a gradient of ESPs in a concentration-dependent manner and migrated in the direction of the ESPs. Meanwhile, single cell invasion by HuCCT1 cells increased independently of the direction of the ESP gradient. ESP treatment resulted in elevated secretion of interleukin-6 (IL-6) and transforming growth factor-beta1 (TGF-β1) by H69 cells and a cadherin switch (decrease in E-cadherin/increase in N-cadherin expression) in HuCCT1 cells, indicating an increase in epithelial-mesenchymal transition-like changes in HuCCT1 cells. Our findings suggest that C. sinensis ESPs promote the progression of CCA in a tumor microenvironment via the interaction between normal cholangiocytes and CCA cells. These observations broaden our understanding of the progression of CCA caused by liver fluke infection and suggest a new approach for the development of chemotherapeutics for this infectious cancer.
| The oriental liver fluke, Clonorchis sinensis, is a biological carcinogen of humans and a cause of death among infectious cancer patients in China and Korea. Chronic infection promotes cholangiocarcinogenesis through direct contact of host tissues with the worms and their excretory-secretory products (ESPs); however, the specific mechanisms underlying this pathology remain unclear. To assess its contribution to the progression of cholangiocarcinoma (CCA), we developed a 3-dimensional (3D) in vitro culture model that consists of CCA cells (HuCCT1) in direct contact with normal cholangiocytes (H69), which are subsequently exposed to C. sinensis ESPs; this model therefore represents a C. sinensis-associated CCA microenvironment. Co-cultured HuCCT1 cells exhibited increased motility in response to C. sinensis ESPs, suggesting that this model may recapitulate some aspects of tumor microenvironment complexity. Proinflammatory cytokines such as IL-6 and TGF-β1 secreted by H69 cells mediated a crosstalk effect on the epithelial-mesenchymal transition of HuCCT1 cells, thus promoting an increase in the metastatic characteristics of CCA cells. Our findings enable an understanding of the mechanisms underlying the etiology of C. sinensis-associated CCA, and this approach will therefore contribute to the development of new strategies for the reduction of its high mortality rate.
| biliary system, invertebrates, amorphous solids, medicine and health sciences, innate immune system, liver, immune physiology, cytokines, engineering and technology, helminths, gene regulation, immunology, carcinomas, cancers and neoplasms, gastrointestinal tumors, animals, trematodes, oncology, physiological processes, developmental biology, clonorchis sinensis, materials science, molecular development, adenocarcinomas, gels, microfluidics, fluidics, small interfering rnas, gene expression, flatworms, cholangiocarcinoma, immune system, bile ducts, biochemistry, rna, eukaryota, anatomy, nucleic acids, physiology, clonorchis, genetics, secretion, biology and life sciences, materials, physical sciences, non-coding rna, mixtures, organisms | null |
1652 | journal.pcbi.1006657 | 2019 | A data-driven interactome of synergistic genes improves network-based cancer outcome prediction | Metastases at distant sites (e.g. in bone, lung, liver and brain) are the major cause of death in breast cancer patients [1]. However, it is currently difficult to assess tumor progression in these patients using common clinical variables (e.g. tumor size, lymph-node status, etc.) [2]. Therefore, for 80% of these patients, chemotherapy is prescribed [3]. Meanwhile, randomized clinical trials showed that at least 40% of these patients survive without chemotherapy and thus unnecessarily suffer the toxic side effects of this treatment [3,4]. For this reason, substantial efforts have been made to derive molecular classifiers that can predict clinical outcome based on gene expression profiles obtained from the primary tumor at the time of diagnosis [5,6]. An important shortcoming of molecular classification is that 'cross-study' generalization is often poor [7]. This means that prediction performance decreases dramatically when a classifier trained on one patient cohort is applied to another one [8]. Moreover, the gene signatures found by these classifiers vary greatly, often sharing only a few or no genes at all [9–11]. This lack of consistency casts doubt on whether the identified signatures capture true 'driver' mechanisms of the disease or rather subsidiary 'passenger' effects [12]. Several reasons for this lack of consistency have been proposed, including small sample size [11,13,14], inherent measurement noise [15] and batch effects [16,17]. Apart from these technical explanations, it is recognized that traditional models ignore the fact that genes are organized in pathways [18]. One important cancer hallmark is that perturbation of these pathways may be caused by deregulation of disparate sets of genes, which in turn complicates marker gene discovery [19,20]. To alleviate these limitations, the
classical models are superseded by Network-based Outcome Predictors (NOPs), which incorporate gene interactions in the prediction model [21]. NOPs have two fundamental components: aggregation and prediction. In the aggregation step, genes that interact, belong to the same pathway or otherwise share a functional relation are aggregated (typically by averaging expression) into so-called "meta-genes" [22]. This step is guided by a supporting data source describing gene-gene interactions, such as cellular pathway maps or protein-protein interaction networks. In the subsequent prediction step, meta-genes are selected and combined into a trained classifier, similar to a traditional classification approach. Several NOPs have been reported to exhibit improved discriminative power, enhanced stability of the classification performance and signature, and better representation of the underlying driving mechanisms of the disease [18,23–25]. In recent years, a range of improvements to the original NOP formulation has been proposed. In the prediction step, various linear and nonlinear classifiers have been evaluated [26,27]. Problematically, the reported accuracies are often an overestimation, as many studies neglected to use a cross-study evaluation scheme, which more closely resembles the real-world application of these models [7]. Also for the aggregation step, which is responsible for forming meta-genes from gene sets, several distinct approaches have been proposed, such as clustering [23] and greedy expansion of seed genes into subnetworks [18]. Moreover, in addition to simple averaging, alternative means by which genes can be aggregated, such as linear or nonlinear embeddings, have been proposed [17,28]. Most recent work combines these steps into a unified model [8,29]. Efforts that extend these concepts to sequencing data by exploiting the concept of cancer hallmark networks have also been proposed [30]. Despite these efforts and initial positive findings
, there is still much debate over the utility of NOPs compared to classical methods, with several studies showing no performance improvement [21,31,32]. Perhaps even more striking is the finding that utilizing a permuted network [32] or aggregating random genes [10] performs on par with networks describing true biological relationships. Several meta-analyses attempting to establish the utility of NOPs have appeared, with contradicting conclusions. Notably, Staiger et al. compared the performance of the nearest mean classifier [33] in this setting and concluded that network-derived meta-genes are not more predictive than individual genes [21,32]. This contradicts Roy et al., who achieved improvements in outcome prediction when genes were ranked according to their t-test statistics rather than their PageRank property [34] in a PPI network [28,35]. It is thus still an open question whether NOPs truly improve outcome prediction in terms of predictive performance, cross-study robustness or interpretability of the gene signatures. A critical, yet often neglected, aspect of the successful application of NOPs is the contribution of the biological network. In this regard, it should be recognized that many network links are unreliable [36,37], missing [38] or redundant [39], and considerable efforts are being made to refine these networks [38,40–42]. In addition, many links in these networks are experimentally obtained from model organisms and may therefore not be functional in human cells [43–45]. Finally, most biological networks capture only a part of a cell's multifaceted system [46]. This incomplete perspective may not be sufficient to link the wide range of aberrations that may occur in a complex and heterogeneous disease such as breast cancer [47,48]. Taken together, these issues raise concerns regarding the extent to which outcome predictors may benefit from the inclusion of common biological networks in their models. In this work, we propose to
construct a network ab initio that is specifically designed to improve outcome prediction in terms of cross-study generalization and performance stability. To achieve this, we effectively turn the problem around: instead of using a given biological network, we aim to use the labeled gene expression datasets to identify the network of genes that truly improves outcome prediction (see Fig 1 for a schematic overview). Our approach relies on the identification of synergistic gene pairs, i.e. genes whose joint prediction power is beyond what is attainable by either gene individually [49]. To identify these pairs, we employed grid computing to evaluate all 69 million pairwise combinations of genes. The resulting network, called SyNet, is specific to the dataset and phenotype under study and can be used to infer a NOP model with improved performance. To obtain SyNet, and to allow for rigorous cross-study validation, a dataset of substantial size is required. For this reason, we combined 14 publicly available datasets to form a compendium encompassing 4129 survival-labeled samples. To the best of our knowledge, the data combined in this study represent the largest breast cancer gene expression compendium to date. Further, to ensure unbiased evaluation, sample assignments in the inner as well as the outer cross-validation folds are kept equal across all assessments throughout the paper. In the remainder of this paper, we demonstrate that integrating genes based on SyNet provides superior performance and stability of predictions when these models are tested on independent cohorts. In contrast to previous reports, in which shuffled versions of networks also performed well, we show that the performance drops substantially when SyNet links are shuffled (while containing the same set of genes), suggesting that SyNet connections are truly informative. We further evaluate the content and structure of SyNet by overlaying it with
known gene sets and existing networks, revealing marked enrichment for known breast cancer prognostic markers. While the overlap with existing networks is highly significant, the majority of direct links in SyNet are absent from these networks, explaining the observed lack of performance when NOPs are guided by phenotype-unaware networks. Interestingly, SyNet links can be reliably predicted from existing networks when more complex topological descriptors are employed. Taken together, our findings suggest that, compared to generic gene networks, phenotype-specific networks, which are derived directly from labeled data, can provide superior performance while at the same time revealing valuable insight into the etiology of breast cancer. We first evaluated NOP performance for three existing methods (Park, Chuang and Taylor) and the Group Lasso (GL) when supplied with a range of networks, including generic networks, tissue-specific networks and SyNet. As a baseline model, we used a Lasso classifier trained using all genes in our expression dataset (n = 11748) without network guidance. The Lasso exhibits superior performance among the many linear and non-linear classifiers evaluated on our expression dataset (see S3 for details). The AUC of the four NOPs, presented in Fig 2, clearly demonstrates that SyNet improves the performance of all NOPs, except for the Park method, for which it performs on par with the Correlation (Corr) network. Notably, SyNet is inferred using training samples only, which prevents "selection bias" in our assessments [50]. Furthermore, comparison of the baseline model performance (i.e.
Fig 2, rightmost bar) and other NOPs supports previous findings that many existing NOPs do not outperform regular classifiers that do not use networks [8,21,32]. The GL clearly outperforms all other methods, in particular when it exploits the information contained in SyNet. This corroborates our previous finding [8] that existing methods which construct meta-genes by averaging are suboptimal (see S1 for a more extensive analysis). The GL using the Corr network also outperforms the baseline model, albeit non-significantly (p ~ 0.6), which is in line with previous reports [23]. It should be noted that across all these experiments an identical set of samples is used to train the models, so that any performance deviation must be due to differences in (i) the set of utilized genes or (ii) the integration of the genes into meta-genes. In the next two sections, we investigate these factors in more detail. Networks only include genes that are linked to at least one other gene. As a result, networks provide a way of ranking genes based on the number and weight of their connections. One explanation for why NOPs can outperform regular classifiers is that networks provide an a priori gene (feature) selection [32]. To test this hypothesis and determine the feature selection capabilities of SyNet, we compare classification performances obtained using the baseline classifier (i.e.
Lasso ) that is trained using the genes enclosed in each network ., While this classifier performs well compared to other standard classifiers that we investigated ( see S3 for details ) , it cannot exploit information contained in the links of a given network ., So , any performance difference must be due to the genes in the network ., The number of genes in each network under study is optimized independently by varying the threshold on the weighted edges in the network and removing unconnected genes ( see section “Regular classifiers and Network based prediction models” for network size optimization details ) ., The edge weight threshold and the Lasso regularization parameter were determined simultaneously using a grid search cross-validation scheme ( see S5 for details ) ., Fig 3 provides the optimal performances for 12 distinct networks along with the number of genes used in the final model ( i . e . genes with non-zero Lasso coefficients ) ., We also included the baseline model where all genes ( n = 11748 ) are utilized to train the Lasso classifier ( rightmost bar ) ., The results presented in Fig 3a demonstrate that SyNet is the only network that performs markedly better than the baseline model which is trained on all genes ., Interestingly , we observe that SyNet is the top performing network while utilizing a comparable number of genes to other networks ., The second-best network is the Corr network ., We argue that the superior performance of SyNet over the Corr network stems from the disease specificity of genes in SyNet which helps the predictor to focus on the relevant genes only ., It should be noted that the data on which SyNet and the Corr network are constructed are completely independent from the validation data on which the performance is based due to our multi-layer cross-validation scheme ( see Methods and S5 ) which avoids selection bias 50 ., We conclude that dataset-specific networks , in particular SyNet which also exploits label information , provide a
meaningful feature selection that is beneficial for classification performance ., Our results show that none of the tissue-specific networks outperform the baseline ., Despite the modest performance , it is interesting to observe that performance for these networks increases as more relevant tissues ( e . g . breast and lymph node networks ) are utilized in the classification ., Additionally , we observe that tissue-specific networks do not outperform the generic networks ., This may be the result of the fact that generic networks predominantly contain broadly expressed genes with fundamental roles in cell function which may still be relevant to survival prediction ., A similar observation was made for GWAS where SNPs in these widely-expressed genes can explain a substantial amount of missed heritability 51 ., In addition to classifier performance , an important motivation for employing NOPs is to identify stable gene signatures , that is , the same genes are selected irrespective of the study used to train the models ., Gene signature stability is necessary to confirm that the identified genes are independent of dataset specific variations and therefore are true biological drivers of the disease under study ., To measure the signature consistency , we assessed the overlap of selected genes across all repeats and folds using the Jaccard Index ., Fig 3b shows that a Lasso trained using genes preselected by SyNet identifies more similar genes across folds and studies compared to other networks ., Surprisingly , despite the fact that the expression data from which SyNet is inferred changes in each classification fold , the signature stability for SyNet is markedly better than for generic or tissue-specific networks that use a fixed set of genes across folds ., Therefore , our results demonstrate that synergistic genes in SyNet truly aid the classifier to robustly select signatures across independent studies ., The ultimate goal of employing NOPs compared to classical
models that do not use network information is to improve prognosis prediction by harnessing the information contained in the links of the given network ., Therefore , we next aimed to assess to what extent connections between the genes , as captured in SyNet and other networks , can also help NOPs to improve their performance beyond what is achievable using individual genes ., As before , we utilized identical datasets ( in terms of genes , training and test samples ) in inner and outer cross-validation loops to train all four NOPs as well as the baseline model which uses Lasso trained using all genes ( n = 11748 ) ., Our results , presented in Fig 4a , clearly demonstrate that compared to other NOPs under study , GL guided by SyNet achieves superior prognostic prediction for unseen patients selected from an independent cohort ., To confirm that NOP performance using SyNet is the result of the network structure , we also applied the GL to a shuffled version of SyNet ( Fig 4a ) ., We observe a substantial deterioration of the AUC , supporting the conclusion that not only the genes , but also links contained in SyNet are important to achieve good prediction ., Moreover , this observation rules out that the GL by itself is able to provide enhanced performance compared to standard Lasso ., The result of a similar assessment for the Corr network is given in S12 ., Additionally , we found that SyNet remains predictive even when the dataset is down sampled to 25% of samples ( see S13 for details ) ., We also evaluated a recently developed set of subtype-specific networks for breast cancer 52 and found that SyNet markedly outperforms these networks in predictive performance ( see S18 for details ) ., We next assessed the performance gain of the network-guided model compared to a Lasso model that cannot exploit network information ., To this end , the GL was trained based on each network whereas the Lasso was trained based on the genes present in the network ., Fig 4b
demonstrates the results of this analysis ., We find that the largest gain in GL performance is achieved when using SyNet ( Fig 4b , x-axis ) , indicating that the links between genes in SyNet truly aid classification performance beyond what is obtained as a result of the feature selection capabilities of Lasso ., Fig 4c provides the Kaplan-Meier plot when each patient is assigned to a good or poor prognostic class according to the frequency of predicted prognosis across 10 repeats ( ties are broken by random assignment to one of the classes ) for Lasso as well as Group Lasso ., The result of this analysis suggests that the superior performance of the GL compared to the Lasso stems mostly from the GL's ability to better discern the patients with poor prognosis ., An important property of an outcome predictor is to exhibit constant performance irrespective of the dataset used for training the model ( i . e . performance stability ) ., This is a highly desirable quality , as concerns have been raised regarding the highly variable performances of breast-cancer classifiers applied to different cohorts 7 , 53 ., To measure performance stability , we calculated the standard deviation of the AUC for Lasso and GL ., The y-axis in Fig 4b represents the average difference of standard deviation for Lasso and GL across all evaluated folds and repeats ( 14 folds and 10 repeats ) ., Based on this figure , we conclude that a NOP model guided by SyNet not only provides superior overall performance , it also offers improved stability of the classification performance ., Finally , we investigated the importance of hub genes in SyNet ( genes with >4 neighbors ) and observe that a comparable performance can be obtained with a network consisting of hub genes exclusively at the cost of reduced performance stability ( see S14 for details ) ., Moreover , we did not observe performance gain for a model that is governed by combined links from multiple networks ( either by intersection or unification ,
see S15 for details ) ., We further confirmed that the performance gain of the network-guided GL is preserved when networks are restricted to have equal number of links ( see S7 for details ) , or when links with lower confidence are included in the network ( see S16 for details ) ., We also considered the more complex Sparse Group Lasso ( SGL ) , which offers an additional level of regularization ( see S1 Text for details ) ., No substantial difference between GL and SGL performance was found ( see S8 for details ) ., Likewise , we did not observe substantial performance differences when the number of genes , group size and regularization parameters were simultaneously optimized in a grid search ( see S9 for details ) ., Together , these findings can be considered as the first unbiased evidence of true classification performance improvement in terms of average AUC and classification stability by a NOP ., Many curated biological networks suffer from an intrinsic bias since genes with well-known roles are the subject of more experiments and thus get more extensively and accurately annotated 54 ., Post-hoc interpretation of the features used by NOPs , often by means of an enrichment analysis , will therefore be affected by the same bias ., SyNet does not suffer from such bias , as its inference is purely data driven ., Moreover , since SyNet is built based on gene pairs that contribute to the prediction of clinical outcome , we expect that the genes included in SyNet not only relate to breast cancer; they should play a role in determining how aggressively the tumor behaves , how advanced the disease is or how well it responds to treatment ., To investigate the relevance of genes contained in SyNet in the development of breast cancer and , more importantly , clinical outcome , we ranked all pairs according to their median Fitness ( Fij ) across 14 studies and selected the top 300 genes ( encompassing 3544 links ) ., This cutoff was frequently chosen by the GL as the 
optimal number of genes in SyNet ( see section “SyNet improves NOP performance” ) ., Fig 5 visualizes this network revealing three main subnetworks and a few isolated gene pairs ., We performed functional enrichment for all genes as well as for the subcomponents of the three large subnetworks in SyNet using Ingenuity Pathway Analysis ( IPA ) 55 ., IPA reveals that out of 300 genes in SyNet , 287 genes have a known relation to cancer ( 2e-06<p<1e-34 ) of which 222 are related to reproductive system disease ( 2e-06<p<1e-34 ) ., Furthermore , according to IPA analysis , the top five upstream regulators of genes in SyNet ( orange box , Fig 5 ) are CDKN1A , E2F4 , RABL6 , TP53 and ERBB2 , all of which are well known players in the development of breast cancer 56–60 ., The mean degree of the 300 genes in SyNet is 24 , but there are 12 genes which have a degree of 100 or above: ASPM 61 , BUB1 62 , CCNB2 63 , CDKN3 64 , CENPA 65 , DLGAP5 66 , KIF23 67 , MCM10 68 , MELK 69 , RACGAP1 70 , TTK 71 and UBE2C 72 ., All these genes play a vital role in progression through the cell cycle and mitosis , by ensuring proper DNA replication , correct formation of the mitotic spindle and proper attachment to the centromere ., In addition to a clear involvement of genes linked to breast cancer generically , IPA also finds clear indications that the genes in SyNet are relevant to clinical outcome and prognosis of the disease ., For instance , the most highly enriched cluster ( Fig 5; green cluster ) is found by IPA to be associated with histological grade of the tumor ( p = 6e-201 ) ., The histological grade , which is based on the morphological characteristics of the tumor , has been shown to be informative for the clinical behavior of the tumor and is one of the best-established prognostic markers 73 ., Notably , the blue cluster is enriched for genes involved in tamoxifen resistance ( p<2e-3 ) , one of the important treatments of ER-positive breast cancer ., Two other sub-clusters ( yellow
and purple in Fig 5 ) contain genes from biological processes distinctly different from those of the main cluster ., In these clusters we also observe clear hub genes: SLC7A7 and CD74 in the yellow and ACKR1 and MFAP4 in the purple cluster ., ACKR1 is a chemokine receptor involved in the regulation of the bio-availability of chemokine levels and MFAP4 is involved in regulating cell-cell adhesion ., The recruitment of cells , as regulated by chemokines , and reduced cell-cell adhesion both play an important role in the process of metastasis ., CD74 has also been linked to metastasis in triple negative breast cancer 74 ., Metastasis , and not the primary tumor , is the main cause of death in breast cancer 3 ., IPA highly significantly identifies the SyNet genes as upstream regulators of canonical pathways implicated in breast cancer ( Fig 5 ) , such as Cell Cycle Control of Chromosomal Replication ( 8e-18 ) , Mitotic Roles of Polo-Like Kinase ( 4e-15 ) , Role of CHK Proteins in Cell Cycle Checkpoint Control ( 6e-12 ) , Estrogen-mediated S-phase Entry ( 2e-11 ) , and Cell Cycle: G2/M DNA Damage Checkpoint Regulation ( 5e-10 ) ., Although all cancer cells deregulate cell cycle control , the degree of dysregulation may contribute to a more aggressive phenotype ., For instance , it is recognized that the downregulation of certain checkpoint regulators is related to a worse prognosis in breast cancer 75 , 76 ., In summary , SyNet predominantly appears to contain genes relevant to two main processes in the progression of breast cancer: increased cell proliferation and the process of metastasis ., Although many genes have not previously been specifically linked to breast cancer prognosis , their role in regulating different stages of replication and mitosis points to a genuine biological role in the progression and prognosis of breast cancer ., We next sought to investigate the similarity between SyNet and existing biological networks that directly or indirectly capture biological
interactions ., To enable a comparison with networks of different sizes , we compare the observed overlap ( both in terms of genes as well as links ) to the distribution of expected overlap obtained by shuffling each network 1000 times ( while keeping the degree distribution intact ) ., Overlap is determined for varying network sizes by thresholding the link weights such that a certain percentage of genes or links remains ., Results are reported in terms of a z-score in Fig 6 ., Fig 6a shows that for the majority of networks a significantly higher than expected number of SyNet genes is contained in the top of each network ., The overlap is especially pronounced for the tissue-specific networks , in particular the Breast-specific and Lymph node-specific networks , supporting our observation that SyNet contains links that are relevant for breast cancer ., The enrichment becomes even more significant when considering the overlap between the links ( Fig 6b ) ., In this respect , SyNet is also clearly most similar to the Breast-specific and Lymph node-specific networks ., We confirmed that these enrichments are not only driven by the correlation component of SyNet by repeating this analysis with a variant of the SyNet network without the correlation component ( i . e . 
only average and synergy of gene pairs are used for pair-ranking; see S10 for details ) ., It should moreover be noted that , although a highly significant overlap is observed , the vast majority of SyNet genes and links are not present in the existing networks , explaining the improved performance obtained with NOPs using SyNet ., Specifically , out of the 300 genes in SyNet , only 142 are contained within the top 25% of genes ( n = 1005 ) in the Breast-specific network , and 151 in the top 25% of genes ( n = 1290 ) in the Lymph node-specific network ., Similarly , out of the 3544 links in SyNet , only 1182 are contained within the top 25% of links ( n = 12500 ) in the Breast-specific network , and 617 in the top 25% ( n = 12500 ) of the Lymph node-specific network ( see S11 for details ) ., We further confirmed that the overall trend in observed overlaps between SyNet and other networks does not change when the size of these networks ( in terms of the number of links ) is increased or reduced ( see S17 for details ) ., In addition to direct overlap , we also aimed to investigate if genes directly connected in SyNet may be indirectly connected in existing networks ., To assess this for each pair of genes in SyNet , we computed several topological measures characterizing their ( indirect ) connection in the biological networks ., We included degree ( Fig 7a ) , shortest path ( Fig 7b ) and Jaccard ( Fig 7c ) ( see S1 Text for details ) ., To produce an edge measure from degree and page rank ( which are node based ) , we computed the average degree and page rank of the genes in a pair , respectively ., Furthermore , we produced an expected distribution for each pair by computing the same topological measures for one of the genes and another randomly selected gene ., The results from this analysis support our previous observation that the information contained in the links of SyNet is markedly—yet only partially—overlapping with the information in the existing networks
., Notably , the similarity increases for networks of increased relevance to the tissue in which the gene expression data is measured ( i . e . breast tissue ) ., Encouraged by the overlap with existing biological networks , we next asked whether links in SyNet can be predicted from the complete collection of topological measures calculated based on existing networks ., To this end , we characterized each gene-pair by a set of 12 graph-topological measures that describe local and global network structure around each gene-pair ., In addition to the degree , shortest path and Jaccard , we included several additional graph-topological measures including direct link , page rank ( with four betas ) , closeness centrality , clustering coefficient and eigenvector centrality ( see S1 Text for details ) ., While converting node-based measures to edge-based measures , in addition to using the average , we also used the difference between the scores for each gene in the pair , similar to our previous work 77 ., We applied these measures to all 10 networks in our collection yielding a total of 210 features ., The gene-pairs are labeled according to their presence or absence in SyNet ., Inspection of this dataset using t-SNE 78 reveals that the links in SyNet occupy a distinct part of the 2D embedding obtained ( Fig 8a ) ., We trained a Lasso and assessed classification performance in a 50-fold cross validation scheme where in each fold 1/50 of the pairs in SyNet is kept hidden and the rest of the pairs are utilized to train the classifier ., To avoid information leakage in this assessment , we removed gene pairs from the training set in case one of the genes is present in the test set ., Based on this analysis we find that a simple linear classifier can reach ~85% accuracy in predicting the synergistic gene relationships from SyNet ( Fig 8b , rightmost bar ) ., The contribution from generic networks is notably smaller than for the tissue-specific networks ., In particular the
networks relevant to breast cancer are highly informative , to the extent that combining multiple networks no longer improves prediction performance ., Further investigation of feature importance revealed that the page rank topological measure was commonly used as a predictive marker across folds ., Apparently , while direct overlap between SyNet and existing networks is modest , the topology of the relevant networks ( i . e . breast-specific and lymph node-specific networks ) is highly informative for the links contained in SyNet ., This corroborates findings from Winter et al . in which the page rank topological measure was proposed to identify relevant genes in outcome prediction 34 , 35 , 79 ., Although the principle of using existing knowledge of the cellular wiring diagram to improve performance , robustness and interpretability of gene expression classifiers appears attractive , contrasting reports on the efficacy of such an approach have appeared in the literature 21 , 28 , 35 ., Consensus in this field has particularly been frustrated by an evaluation of a limited set of sub-optimal classifiers 21 , 23 , 28 , 35 , small sample size 18 , 24 , 26 , or the use of standard K-fold cross-validation instead of cross-study evaluation schemes , which results in inflated performance estimates 24 , 26 ., For this reason , it remained unclear if network-based classification , and in particular network-based outcome prediction , is beneficial ., Here , we present a rigorously cross-validated procedure to train and evaluate Group Lasso-based NOPs using a variety of networks , including tissue-specific networks in particular , which have not been evaluated in the context of NOPs before ., Based on our analyses , we conclude that none of the existing networks achieve improved performance compared to using properly regularized classifiers trained on all genes ., In this work we therefore present a novel gene network , called SyNet , which is computationally derived directly from
the survival-labeled sam | Introduction, Results, Discussion, Materials and methods | Robustly predicting outcome for cancer patients from gene expression is an important challenge on the road to better personalized treatment ., Network-based outcome predictors ( NOPs ) , which considers the cellular wiring diagram in the classification , hold much promise to improve performance , stability and interpretability of identified marker genes ., Problematically , reports on the efficacy of NOPs are conflicting and for instance suggest that utilizing random networks performs on par to networks that describe biologically relevant interactions ., In this paper we turn the prediction problem around: instead of using a given biological network in the NOP , we aim to identify the network of genes that truly improves outcome prediction ., To this end , we propose SyNet , a gene network constructed ab initio from synergistic gene pairs derived from survival-labelled gene expression data ., To obtain SyNet , we evaluate synergy for all 69 million pairwise combinations of genes resulting in a network that is specific to the dataset and phenotype under study and can be used to in a NOP model ., We evaluated SyNet and 11 other networks on a compendium dataset of >4000 survival-labelled breast cancer samples ., For this purpose , we used cross-study validation which more closely emulates real world application of these outcome predictors ., We find that SyNet is the only network that truly improves performance , stability and interpretability in several existing NOPs ., We show that SyNet overlaps significantly with existing gene networks , and can be confidently predicted ( ~85% AUC ) from graph-topological descriptions of these networks , in particular the breast tissue-specific network ., Due to its data-driven nature , SyNet is not biased to well-studied genes and thus facilitates post-hoc interpretation ., We find that SyNet is highly enriched for known breast cancer genes and 
genes related to e . g . histological grade and tamoxifen resistance , suggestive of a role in determining breast cancer outcome . | Cancer is caused by disrupted activity of several pathways ., Therefore , to predict cancer patient prognosis from gene expression profiles , it may be beneficial to consider the cellular interactome ( e . g . the protein interaction network ) ., These so-called Network based Outcome Predictors ( NOPs ) hold the potential to facilitate identification of dysregulated pathways and delivering improved prognosis ., Nonetheless , recent studies revealed that compared to classical models , neither performance nor consistency ( in terms of identified markers across independent studies ) can be improved using NOPs ., In this work , we argue that NOPs can only perform well when supplied with suitable networks ., The commonly used networks may miss associations specially for under-studied genes ., Additionally , these networks are often generic with low coverage of perturbations that arise in cancer ., To address this issue , we exploit ~4100 samples and infer a disease-specific network called SyNet linking synergistic gene pairs that collectively show predictivity beyond the individual performance of genes ., Using a thorough cross-validation , we show that a NOP yields superior performance and that this performance gain is the result of the wiring of genes in SyNet ., Due to simplicity of our approach , this framework can be used for any phenotype of interest ., Our findings confirm the value of network-based models and the crucial role of the interactome in improving outcome prediction . 
| medicine and health sciences, genetic networks, breast tumors, statistics, protein interaction networks, cancers and neoplasms, oncology, mathematics, forecasting, network analysis, genome analysis, research and analysis methods, computer and information sciences, genomics, mathematical and statistical techniques, gene expression, breast cancer, proteomics, prognosis, biochemistry, diagnostic medicine, gene regulatory networks, gene identification and analysis, genetics, biology and life sciences, physical sciences, computational biology, gene prediction, statistical methods | null |
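The signature-stability measure used in the record above (the overlap of the gene sets selected across cross-validation folds and repeats, scored with the Jaccard index) is straightforward to compute. The sketch below is our own illustration, not the authors' code; the fold signatures are made-up examples that reuse hub-gene names mentioned in the text:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two gene sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def signature_stability(signatures):
    """Mean pairwise Jaccard index over the gene signatures selected
    in each cross-validation fold/repeat (higher = more stable)."""
    pairs = list(combinations(signatures, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy example: three folds, each selecting a slightly different signature.
folds = [
    {"ASPM", "BUB1", "CCNB2", "TTK"},
    {"ASPM", "BUB1", "MELK", "TTK"},
    {"ASPM", "CCNB2", "MELK", "TTK"},
]
print(round(signature_stability(folds), 3))  # → 0.6
```

Each pair of toy folds shares three of five distinct genes, so every pairwise Jaccard index is 3/5 and the mean stability is 0.6; a classifier that selected identical signatures in every fold would score 1.0.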
1,667 | journal.pcbi.1000303 | 2,009 | Canalization of Gene Expression and Domain Shifts in the Drosophila Blastoderm by Dynamical Attractors | Canalization refers to the constancy of the wild type phenotype under varying developmental conditions 1–4 ., In order to explain canalization , C . H . Waddington hypothesized that there must only be a finite number of distinct developmental trajectories possible , since cells make discrete fate decisions , and that each such trajectory , called a chreod , must be stable against small perturbations 5 ., One aspect of canalization , the buffering of phenotypic variability against genotypic variability in wild type , has received considerable experimental 2 , 6–10 and theoretical 11–13 attention ., The phenomenon of canalization of genotypic and environmental variation was seen by Waddington as a consequence of the underlying stability of developmental trajectories , an idea supported by theoretical analysis 13 ., But this central idea of Waddingtons has heretofore received little attention in real developmental systems because of a lack of relevant quantitative molecular data ., The further investigation of Waddingtons hypothesis is of great importance because it provides a scientific connection between the reliability and invariance of the formation of cell types and tissues in the face of underlying molecular variability , as we now explain ., Quantitative molecular data permitting the study of developmental canalization are now available for the segment determination process in Drosophila 14 ., The segmented body plan of the fruit fly Drosophila melanogaster is determined when the embryo is a blastoderm 15 by the segmentation genes 16 ., Quantitative spatiotemporal gene expression data show that the maternal protein gradients and the early expression patterns of the zygotic gap and pair-rule genes vary a great deal from embryo to embryo 14 , 17 ., The variation of the expression patterns of the gap and pair-rule genes 
decreases over time so that it is significantly lowered by the onset of gastrulation at the end of cellularization ( 14 , Fig . 1 ) ., The observed reduction of variability over time in the segmentation gene system suggests that the developmental trajectory of the Drosophila embryo is stable against perturbation ., The characterization of the stability properties of the developmental trajectory is central to our understanding of the mechanisms that underlie canalization 3 ., In the case of the gap genes , we have shown elsewhere 18 that variation reduction relative to the maternal gradient Bicoid ( Bcd ) occurs because of gap gene cross regulation ., Using a gene circuit model of the gap gene network 18–22 we identified specific regulatory interactions responsible for variation reduction in silico and verified their role in canalization experimentally ., Importantly , the model reproduces the observed low variation of gap gene expression patterns 18 , which provides an opportunity to analyze the properties of the system that give rise to developmental stability ., These results raise two generic problems that occur in the analysis of complex numerical models ., First , even if the model describes a natural phenomenon faithfully , understanding the natural phenomenon is only achieved when the models behavior can be understood as well ., The complexity of the model , unsurprising in terms of the underlying complexity of the biological system itself , poses a significant challenge to understanding model function ., Second , any model is an approximation to the actual mechanisms operating in an organism ., The models behavior must be robust to perturbation , since organisms develop and function reliably even though the underlying mechanisms are subject to a wide variety of perturbations and stresses ., There is extensive molecular variability among cells and embryos ( 14 , 17 , 23–26; reviewed in 27 ) and yet there is functional identity between equivalent cell types 
or conspecific individuals. René Thom tried to resolve this apparent contradiction between the constancy of biological function and the variability in biological substructure by proposing a qualitative topological view of the trajectories of dynamical models 28. The term “topology” is used here to refer to properties of developmental trajectories that are invariant under continuous deformation. The preservation of these properties ensures the robustness of model behavior, while a qualitative view often leads to an intuitive understanding of complex mechanisms. One such robust property is an attractor state, or a stable steady state of a dynamical system, that attracts all trajectories in some neighborhood of itself. Attractor states are locally stable under small perturbations of the dynamical model 29, and for this reason it has been proposed that cell fates are attractors 11, 13, 30–33. The presence of an attractor state in the phase space of a system implies that there exists a region of phase space, called the basin of attraction, in which all trajectories approach the attractor asymptotically 34, 35. This suggests that an attractor is the kind of qualitative robust property that could explain the stability of trajectories, and hence canalization. There are, however, three important considerations to keep in mind when using attractors to describe the Drosophila blastoderm. First, the reduction of variation due to attractors is only guaranteed at late times, but the reduction in the variation of the gap gene expression patterns takes place over about 100 minutes prior to gastrulation. The reduction of variation before gastrulation is biologically essential, as the expression patterns of engrailed and wingless, which form the segmentation prepattern, have a resolution of one nucleus and are created by the precise overlap of pair-rule and gap domains 14, 36. Furthermore, at about the time of gastrulation the embryo undergoes the midblastula transition 37, 38, at which time a qualitative change occurs in the genetic control of the embryo. Second, in general there can be more than one attractor in the phase space 39–43. Thus, the basins of attraction need to correspond to biological initial conditions and be large enough to ensure robustness. Finally, the set of attractors found must succeed not only in explaining canalization but also the morphogenetic properties of the system. One such property is the anterior shift of gap gene domains located in the posterior region 14, 21, 44. These shifts are biologically significant and are difficult to reconcile with stable point attractors. In this paper we show that the variation reduction of gap gene expression patterns is a consequence of the action of robust attracting states. We further show that the complex patterning of the gap gene system reduces to the three qualitative dynamical mechanisms of (1) movement of attractors, (2) selection of attractors, and (3) selection of states on a one-dimensional manifold. The last of the three mechanisms also causes the domain shifts of the gap genes, providing a simple geometric explanation of a transient phenomenon. In the Gap Gene Circuits section we briefly describe the gene circuit model; see 18 for a full description. For each nucleus in the modeled anteroposterior (A–P) region, we identified the attractors in the gap gene phase space and calculated the trajectories, the basins of attraction, and other invariant sets such as one-dimensional attracting manifolds (Stability Analysis of the Trajectories of the Gap Gene System section). The stability of the trajectories was tested by varying the initial conditions within a biological range, based on gene expression data, that represents the variability of early gap gene expression. We plotted the attractors and several trajectories corresponding to different initial conditions to make phase portraits that show the global qualitative behavior of the system. Finally, we studied how the phase portraits changed as A–P position was varied to infer qualitative pattern formation mechanisms. The biological conclusions about canalization and pattern formation arising from the dynamical characterization are presented in the Mechanisms of Canalization and Pattern Formation section. The gene circuit used in this study models the spatiotemporal dynamics of the protein expression of the gap genes hunchback (hb), Krüppel (Kr), giant (gt), and knirps (kni) during the last two cleavage cycles (13 and 14A) before gastrulation 37 in the Drosophila blastoderm. The protein products of these genes localize to nuclei 45–48, so that the state variables are the concentrations of the proteins in a one-dimensional row of nuclei along the A–P axis of the blastoderm. The concentration of the protein in the nucleus at time is denoted by . In the model we considered a region, from 35% to 92% egg length (EL) along the A–P axis, which corresponds approximately to the region of the blastoderm fated to form the segmented part of the adult body 49, 50. The gap genes are expressed in broad domains (Fig. 2A, B; 14) under the control of maternal cues. The anterior maternal system acts primarily through the protein gradient Bcd 51–53, which is essentially stationary and has an exponential profile (Fig. 2C; 14, 51, 54) during the modeled time period. The posterior maternal system is represented by the maternal Hb gradient (Fig. 2C; 55–57). The terminal system regulates gap gene expression by activating tailless (tll) and huckebein (hkb) 58–61. The terminal system is represented in the model by the Tll gradient, which is expressed posterior to 80% EL in the modeled region during cycles 13 and 14 (14 and Fig.
S1B). tll is considered upstream of the gap genes since its expression pattern is unchanged in gap gene mutants 62. The concentration of Bcd in nucleus is denoted by and was determined using Bcd data from a representative cycle 13 embryo by an exponential fit, so that (see 18 for details). The concentrations of Tll and another upstream regulator, Caudal (Cad) 63, 64, were determined by interpolating average data in time 18. The concentrations of Tll and Cad are denoted by respectively, with an explicit dependence on time, since these gradients are not stationary (Fig. S1). The dynamical equations governing are given by (1), where in a gene circuit with genes and nuclei. The first term on the right-hand side of Eq. (1) represents protein synthesis, the second one represents protein transport through Fickian diffusion, and the last term represents first-order protein degradation. The diffusion constant varies inversely with squared internuclear distance, and is the degradation rate. The synthesis term is set to zero during the mitosis preceding the thirteenth nuclear division, as synthesis shuts down 65. Following this mitosis, the nuclei are divided and daughter nuclei are given the same state as the mother nucleus. is the maximum synthesis rate, and is a sigmoidal regulation-expression function. The first term in the argument of represents the transcriptional cross regulation between the gap genes, and the genetic interconnectivity is specified by the matrix . Positive elements of imply activation, while negative ones imply repression. The regulation of the gap genes by Bcd is represented in the second term, and is the regulatory strength. The regulation of the gap genes by upstream time-varying inputs is represented in the third term, and is the number of such inputs. There are two such inputs in this model, Cad and Tll, and the elements of the matrix have the same meaning as those of . The last term represents the effect of ubiquitous transcription factors and sets the threshold of activation. The initial conditions for Hb are specified using cleavage cycle 12 data. Cycle 12 data are a good approximation to the maternal Hb gradient since the zygotic expression of hb appears to begin in cleavage cycle 13 17. The initial conditions for Kr, Gt, and Kni are taken to be zero, since their protein expression is first detected in cycle 13 14, 61, 66–68. The gene circuit's parameters were determined by performing a least-squares fit to a time series of averaged gap gene data 14 using the Parallel Lam Simulated Annealing algorithm (see Methods). This time series has nine points (time classes; see Table S1), one in cycle 13 and the rest in cycle 14A. The output of the gene circuit (Fig. 2E, F) fits the data (Fig. 2D) well, and its network topology (Fig. 2K) is consistent with previous results (see 18 for discussion and parameters). In order to characterize the stability of the trajectories of the gap gene system in terms of qualitatively robust features like attractors, we apply the tools of dynamical systems theory 34, 69. Since the gene circuit has variables (Gap Gene Circuits section), its state is represented as a point in an -dimensional concentration space, or phase space. In general the concentrations of gap proteins change with time, and hence a solution of the gene circuit is a curve in this phase space. The gene circuit can also have solutions which do not change with time. Such a solution, called an equilibrium or steady state solution, is represented as a single point in phase space. The positions of the equilibrium solutions in phase space and their stability properties determine the stability of a general time-varying solution of the gene circuit. The reader not familiar with linear analysis near an equilibrium point should see Protocol S2 for a pedagogical description of equilibria and their stability
in two dimensions. Based on the analysis in the previous section, the region of interest, from 35% EL to 71% EL, can be divided into an anterior and a posterior region (Fig. 2J) having distinct modes of canalization and pattern formation. The two regions are separated by a saddle-node bifurcation that occurs at 53% EL (Fig. 3A), that is, at the peak of the central Kr domain. We next demonstrate that in the anterior region (Fig. 2J), which extends from the peak of the third anterior gt domain to the peak of the central Kr domain, the state of a nucleus at gastrulation is close to a point attractor. The trajectories are stable by virtue of being in the basin of attraction of the nucleus's attractor state and hence canalize. Pattern formation occurs by the selection of one state from many in a multistable phase space. The concentrations of the Bcd and Cad gradients control pattern formation in the anterior by determining the sizes of the basins and the positions of the attractors, while maternal Hb concentration selects a particular attractor by setting the starting point in its basin. Previous experimental 73 and theoretical 19 work suggested that Bcd and maternal Hb patterned the anterior of the embryo synergistically; our results identify specific roles for Bcd and Hb in anterior patterning. The posterior region extends from the peak of the central Kr domain to the peak of the posterior gt domain (Fig. 2J), and its nuclei have phase spaces with very different properties. In this region, the state of the nucleus is far from any attractor state at gastrulation. Instead, the state of a nucleus is close to a one-dimensional manifold, and canalization is achieved due to attraction by this manifold. Even though the phase space is multistable, the biological range of maternal Hb concentrations in the posterior region places all nuclear trajectories in one basin of attraction. As a consequence, the modes of pattern formation operative in the anterior cannot function in the posterior. Maternal Hb patterns the posterior by determining the position on the attracting manifold which a particular trajectory reaches by the time of gastrulation. These results reveal the mechanism by which maternal Hb acts as a morphogen in the posterior 73–75 and also explain the dynamical shifts of gap gene domains 14, 21, a significant biological property of the posterior region. We begin the presentation of detailed results by describing the phase spaces of typical nuclei in the two regions, highlighting mechanisms for canalization and pattern formation. An equilibrium is labeled by either (point attractor) or (saddle equilibrium), denoted by a superscript, with subscripts denoting the number of eigenvalues having positive or negative real parts. For example, denotes the second saddle equilibrium in the modeled region, which has one eigenvalue with positive real part and three with negative real parts. Equilibria are also given descriptive names based on which proteins are at high levels (on), ignoring the proteins that are at low levels. For example, if a point attractor is at hb-on, Kr-off, gt-on, and kni-off, it is referred to as the “hb, gt-on” attractor. A discrete 6, 7 and buffered response to perturbations is the hallmark of a canalized developmental system. Without recourse to molecular data, Waddington sought to explain these two properties
of the response by postulating certain favored stable developmental trajectories, which he called chreods. Our results (see Fig. 7 for a summary) show that dynamical systems with multiple attracting states possess both of these properties. Small perturbations are damped because of phase space volume contraction driven by attractors. A discrete response to larger perturbations is a consequence of the discontinuous boundaries between the basins of attraction of a multistable system, or of bifurcations. Using a model based on gene expression data, we can conclude that the trajectory of the gap gene system is a chreod. The initial high variation of gap gene expression may arise from early events governed by stochastic laws. Previous observations indicate that the first nuclei in which gap gene transcription is activated are selected probabilistically 68, 76. Moreover, the gradients of Hb and Cad proteins are formed by translational repression from the Nanos and Bcd gradients respectively 57, 77, 78, under conditions of relatively low molecular number 23, which is likely to lead to intrinsic fluctuations 79. Our results show that a deterministic description of gap gene dynamics is sufficient to account for the reduction of initial variation regardless of its source. It is evident, however, that there are at least two other types of variation that the system might be subject to. First, a natural population will have genotypic variation which, in the framework of the model, would be reflected in the variation of its parameters. Second, gap gene expression itself is likely to be a stochastic process rather than a deterministic one. Notwithstanding this fact, there is no evidence in Drosophila for the coupling of molecular fluctuations to phenotypic fluctuations as seen in prokaryotes 80, suggesting that molecular fluctuations are buffered in some sense. We emphasize that an attractor is stable against small perturbations of the model itself 29, and hence is a model property that is preserved to an extent if there is genotypic variation in a population or if errors are introduced by stochastic gene expression. However, further study of both of these aspects of canalization is required in order to more fully understand their role. With regard to pattern formation in the blastoderm, the prevailing theory is that the border positions of downstream genes are determined at fixed values, or thresholds, of the Bcd gradient 23, 52. This idea cannot, however, account for either the low variability of downstream gene border positions 14, 17, 18, or the dynamical shifts of domains in the posterior 14, 21. Fixed threshold specification also cannot explain precise placement of the borders in the posterior, since the low molecular number of Bcd in the nuclei implies a high level of molecular noise 23, 81. In the dynamical picture (Fig. 7), contrary to the threshold view, Bcd ceases to have a role in positional specification posterior to the peak of the Kr domain since, posterior to this position, the geometry of the phase space does not change qualitatively with A–P position. Instead, maternal Hb acts as a morphogen, obviating the problems arising from a low molecular number of Bcd. Maternal Hb has long been recognized as a morphogen 74, 75 for the posterior region, but the mechanism with which it specifies the posterior region pattern was not clear. As is the case with Bcd, a threshold-based theory for positional specification by Hb 82 is incomplete and requires the postulation of thresholds that can be modified by their targets. The qualitative dynamics provides a viable mechanism for posterior patterning. The attracting manifold is the geometric manifestation of asymmetric repression between the gap genes in reverse order of gap gene domains. The initial Hb concentration determines which neighborhood of the manifold the trajectory traverses as it reaches the
manifold: Kr-on, kni-on, or gt-on. In other words, posterior patterning works by triggering particular feedback loops in the gap gene network based on maternal Hb concentration. This mechanism also accounts for domain shifts, a property particular to the posterior region, since the trajectories mimic the geometry of the manifold as they approach it. The dynamical analysis of the gap gene system provides a simple and integrative view of pattern formation in the blastoderm (Fig. 7). The existence of distinct anterior and posterior patterning systems was inferred from the effect of maternal mutations on larval cuticle phenotype and was subsequently characterized in terms of the effects of the Bcd 52, 73, 83, 84 and maternal Hb gradients 56, 57, 77. But where and how is the control of patterning transferred from Bcd to maternal Hb? Our analysis shows that the hand-off occurs at the A–P position where the Kr-on attractor is annihilated through a saddle-node bifurcation, implying a sharp rather than gradual transfer. With knowledge of the two dynamical regimes, the complex spatiotemporal dynamics of the gap gene system can be understood in the simple terms of three mechanisms: movement of attractors through phase space, selection of attractors by initial conditions, and the selection of states on an attracting manifold (Fig. 7). Finally, we mention the advantage of having the unexpected mechanism of a one-dimensional manifold for canalization and patterning. The Bcd concentration is a bifurcation parameter of the dynamical equations. If there were specific attractors corresponding to each gap gene state, with bifurcations creating and annihilating them successively as the Bcd concentration is varied, the molecular noise in Bcd 23 would give rise to “jitter” or rapid switching between attractors. The manifold, with its smooth dependence on maternal Hb, is qualitatively robust to such fluctuations. In a connectionist model of cognition 85, one-dimensional unstable manifolds connecting a sequence of saddle points have been proposed as a means of representing transient brain dynamics. The gap gene phase space is a low-dimensional projection of the high-dimensional phase space of all the molecular determinants in the blastoderm. It may well be that the attractors found in our analysis are actually saddle points in the high-dimensional phase space and are way points, with manifolds connecting them, rather than final end points. The methods used to obtain and characterize the quantitative data are as described in earlier work 14. All gene expression levels are on a scale of 0–255, chosen to maximize dynamic range without saturation. The numerical implementation of the gene circuit equations is as described 18, 21. The gap gene circuit was fit to integrated gap gene data 14 using Parallel Lam Simulated Annealing (PLSA) 86, 87. PLSA minimizes the root mean squared (RMS) difference between model output and data. For each nucleus, data were available at nine time points (Table S1). Search spaces, penalty function, and other annealing parameters were as described 22, 88. The circuit analyzed in detail had an RMS score of 10.76, corresponding to a proportional error in expression residuals of about 4–5%. Equilibria were determined by the Newton-Raphson method as described in Protocol S3. One-dimensional unstable manifolds of hyperbolic equilibria were calculated by solving the ODEs using the Bulirsch-Stoer 71 method, with starting points in the unstable eigenspace of the equilibria 72. The basin boundaries on the Hb axis were calculated by finding starting points for trajectories that reach saddle points with one positive eigenvalue (Protocol S3). The time evolution of phase space volume was calculated as described (Protocol S8). The methods used to calculate the equilibria branches and to determine the type of bifurcations are described in Protocol S4. | Introduction, Results, Discussion, Methods | The variation in the expression patterns of the gap genes in the blastoderm of the fruit fly Drosophila melanogaster reduces over time as a result of cross regulation between these genes, a fact that we have demonstrated in an accompanying article in PLoS Biology (see Manu et al., doi:10.1371/journal.pbio.
1000049). This biologically essential process is an example of the phenomenon known as canalization. It has been suggested that the developmental trajectory of a wild-type organism is inherently stable, and that canalization is a manifestation of this property. Although the role of gap genes in the canalization process was established by correctly predicting the response of the system to particular perturbations, the stability of the developmental trajectory remains to be investigated. For many years, it has been speculated that stability against perturbations during development can be described by dynamical systems having attracting sets that drive reductions of volume in phase space. In this paper, we show that both the reduction in variability of gap gene expression as well as shifts in the position of posterior gap gene domains are the result of the actions of attractors in the gap gene dynamical system. Two biologically distinct dynamical regions exist in the early embryo, separated by a bifurcation at 53% egg length. In the anterior region, reduction in variation occurs because of stability induced by point attractors, while in the posterior, the stability of the developmental trajectory arises from a one-dimensional attracting manifold. This manifold also controls a previously characterized anterior shift of posterior region gap domains. Our analysis shows that the complex phenomena of canalization and pattern formation in the Drosophila blastoderm can be understood in terms of the qualitative features of the dynamical system. The result confirms the idea that attractors are important for developmental stability and shows a richer variety of dynamical attractors in developmental systems than has been previously recognized. | C. H. Waddington predicted in 1942 that networks of chemical reactions in embryos can counteract the effects of variable developmental conditions to produce reliable outcomes. The experimental signature of this process, called “canalization,” is the reduction of the variation of the concentrations of molecular determinants between individuals over time. Recently, Waddington's prediction was confirmed in embryos of the fruit fly Drosophila by observing the expression of a network of genes involved in generating the basic segmented body plan of this animal. Nevertheless, the details of how interactions within this genetic network reduced variation were still not understood. We use an accurate mathematical model of a part of this genetic network to demonstrate how canalization comes about. Our results show that coupled chemical reactions having multiple steady states, or attractors, can account for the reduction of variation in development. The variation reduction process can be driven not only by chemical steady states, but also by special pathways of motion through chemical concentration space to which neighboring pathways converge. These results constitute a precise mathematical characterization of a healing process in the fruit fly embryo. | developmental biology, computational biology/transcriptional regulation, mathematics, developmental biology/pattern formation, computational biology, evolutionary biology, computational biology/systems biology | null |
1,068 | journal.pcbi.0030187 | 2007 | Heat Shock Response in CHO Mammalian Cells Is Controlled by a Nonlinear Stochastic Process | Complex biological systems are built out of a huge number of components. These components are diverse: DNA sequence elements, mRNA, transcription factors, etc. The concentration of each component changes over time. One way to understand the functions of a complex biological system is to construct a quantitative model of the interactions present in the system. These interactions are usually nonlinear in terms of the concentrations of the components that participate in the interaction process. For example, the concentration of a dimer is proportional to the product of the concentrations of the molecules that dimerise. Besides being nonlinear, the interactions are also stochastic. The production process of a molecule is not deterministic; it is governed by a probability rate of production. In what follows, a nonlinear stochastic model for the response to heat shocks in CHO mammalian cells will be developed. Heat stress is just one example of the many ways a molecular system can be perturbed. From a general perspective, the structure of a molecular system is uncovered by imposing different perturbations (input signals) on the system under study, and then the responses of the system (output signals) are measured. From the experimental collection of pairs of input–output signals, laws that describe the system can be uncovered. This is the fundamental idea in Systems and Synthetic Biology 1–5 and has long proved to be successful in the field of electronics. The input signals are applied through the use of signal generators 6–8. An input signal that is easy to manipulate is a heat pulse, the parameters to change being the pulse temperature and its duration. Members of the stress protein family such as the heat shock protein 70 (HSP70) are highly responsive to temperature variations. This protein is a molecular chaperone and is a critical component of a complex genetic network that enables the organism to respond to deleterious effects of stress 9–11. Since Hsp70 is thus an important regulator in a complex system, our goal was to find out whether it is possible to develop a mathematical model of the regulation of its expression in mammalian cells exposed to heat shock. Our specific objectives were to (1) determine an equation representing the average expression of Hsp70 over time in a cell population after an initial heat shock, (2) determine how the physical parameters of heat shock (temperature and duration) influence the parameters of this equation, and (3) determine the mathematical model that describes the expression of Hsp70 at the single-cell level. We first describe the process of inferring the mathematical model from the experimental data. Then a mathematical study of the model will follow. To acquire the experimental data, we elected to use a reporter gene system where the expression of the green fluorescent protein (GFP) is under the control of the promoter region of the mouse Hsp70 gene. The GFP reporter proved useful for quantitative analysis 12 and was used before in connection with Hsp70 in different biological systems 13–17. The Hsp70-GFP fusion gene was integrated into a plasmid and transfected in Chinese hamster ovary (CHO) cells. Stable transfectants were selected for their low level of basal expression of GFP and their capacity to upregulate GFP effectively and homogenously after exposure to heat shock. Flow cytometry was used to make precise quantitative measurements of the fluorescence of a large cell population. Since the quality of the experimental data was critical to the feasibility of the mathematical analysis, steps were taken to minimize sample-to-sample and experiment-to-experiment variability and to keep the experimental noise to a minimum. To that effect, temperature and time were tightly controlled for heat shocks, the cells were treated as a batch in a single tube for each condition (combination of temperature and time), and aliquots were taken at each time point. All samples were fixed for at least 24 h before analysis by flow cytometry, so that changes of fluorescence due to fixation would not be a factor, and all the samples from the same experiment were analyzed at the same time. Flow cytometry was chosen for analysis because it allows a very accurate quantitative measurement of the fluorescence of a large number of events, independently of the actual size of the sample. Within the same experiment and between experiments, the same instrument settings were used for the flow cytometer, and at least 1 × 10^4 cells were analyzed per sample. Detailed protocols and experimental conditions are available in the Materials and Methods section. First, we will follow a description of the time course of the mean response to a heat shock. At elevated temperatures (39 °C to 47 °C), the heat shock promoter HSP70 is active and GFP starts to be synthesized. The input signals were chosen in the form of a pulse at a temperature (T) and duration in time (D) (Figure 1A). In the first experiment, the dynamic response of GFP after a heat pulse at 42 °C for 30 min was monitored by taking samples every 30 min for 18 h. Before and immediately after the heat shock, the GFP intensity remains at approximately the same level; this phenomenon was observed in all subsequent experiments. The fold induction of GFP with respect to a reference (GFP0) was then determined as the ratio GFP/GFP0. The reference is the first measured sample away from the end of the heat shock (30 min after the shock in Figure 1A). Our finding is that the logarithm of the fold induction of GFP follows an exponential saturation trajectory (Figure 1B), with tight confidence bounds for the estimated parameters and tight prediction bounds for nonsimultaneous observations.
The tight prediction bounds appear even when almost half of the data is not used during fitting (Figure 1B). The time t is measured relative to the reference time t0. The initial fold induction at t = 0 (or equivalently t0 after the end of the heat shock) is 1. This value of 1 for the initial fold induction is consistent with the entire time evolution if a fit with the expression ln(GFP/GFP0) = a − a′e^(−bt) (writing a′ for the second fitted parameter) gives a value for parameter a′ very close to the value for parameter a. Theoretically, a′ must be equal to a to have a fold induction of 1 at t = 0. The result of the fit (Figure 1B) shows this consistency. From now on we will take a′ = a. The empirical law for the response of the cells to the heat pulse can thus be cast into the form ln(GFP(t)/GFP0) = a(1 − e^(−bt)). The same law appeared in repeated measurements of pulses at 42 °C for 30 min duration (unpublished data). Parameter b describes the quickness of the response. As b increases, the saturation value of the response is reached in less time. Parameter a specifies the saturation value of the response. The plateau reached by the fold induction is e^a and thus grows exponentially with parameter a. These findings suggest that the same law is valid for other heat shock pulses, parameters a and b being dependent on the heat pulse height T and its duration D (Figure 1C). To find the range of validity of the empirical law, measurements were taken of the responses to heat shocks at various heat pulse parameters T and D in a series of three experiments that partially overlapped (Figure 2). The law was again present in all responses for temperatures between 41.5 °C and 42.5 °C (examples selected in Figure 3A, fits 3, 4). For lower temperatures (39.5 °C to 40.5 °C), the law was valid, but with poor 95% confidence intervals for the estimated parameters a and b, as in Figure 3A, fits 1, 2 (the activity of the Hsp70 promoter was low). At high temperatures or long durations (Figure 3B), the double exponential law still explains the main characteristic of the stress response and is valid after a few hours from the end of the heat shock. In the following, a theoretical model will be developed to explain the experimentally discovered law. The exponential accumulation of the GFP shows that the derivative with respect to time of the mean GFP is proportional to itself: d(GFP)/dt = ab e^(−bt) GFP. There must thus be a molecular process, described by the exponential term ab e^(−bt), which controls the heat shock response. This theoretical suggestion is confirmed by previous studies of the heat shock system, which revealed that the accumulation and subsequent degradation of the heat shock transcription factor 1 (HSF1) regulates Hsp70 18–22. Experimental results 18 show that HSF1 activation is characterized by a rapid and transient increase in hsp70 transcription which parallels the kinetics of HSF1–DNA binding and inducible phosphorylation. This rapid increase in HSF1–DNA binding activity reaches a maximal level and thereafter attenuates to a low level. This rapid increase in activity followed by attenuation will form the starting point for our theoretical model. An activation–accumulation two-component model will be developed as a minimal theoretical description of the empirical law. The “activation” variable (X1) represents the first phase of the heat shock response and includes components like HSF1–DNA binding activity. X1 will increase during the heat shock and then, after the shock, will decay at a rate proportional to parameter b (Figure 4). The “accumulation” variable (X2) includes the products of transcription and translation. This second variable, at low levels before the shock, will
gain momentum after the shock. To connect the model with the experimental data, the GFP will be considered to be proportional to X2. The speed of accumulation of X2, that is, dX2/dt, will be proportional to the product X1X2. Immediately after the shock, X1 has a big value (the activation is high), and thus the speed of X2 is high (the accumulation is in full thrust). This will trigger an initial fast accumulation of GFP, which is proportional to X2. Later on, the activity X1 disappears, nullifying the product X1X2 and thus the speed of X2. The process is then terminated (the accumulation stops) (Figure 4). The empirical law follows directly as a solution of the activation–accumulation system of equations dX1/dt = −bX1, dX2/dt = cX1X2, with b and c as some constants. Indeed, given the initial conditions X1(0) and X2(0) at a zero time reference t0 = 0, the solution to this system of differential equations is X1(t) = X1(0)e^(−bt), X2(t) = X2(0)exp[(c/b)X1(0)(1 − e^(−bt))]. With the notation a = cX1(0)/b, the empirical law follows from X2(t): ln[X2(t)/X2(0)] = a(1 − e^(−bt)). The theoretical model contains two parameters: b and c. Parameter b is directly accessible to experimental measurements, whereas parameter c is not; however, the product cX1(0), which equals the product of a and b, can be measured. It is interesting to notice that the above time evolution can be re-expressed as a conservation law which is independent of any reference time. For any two time points t1 and t2, the following holds: b ln X2(t1) + cX1(t1) = b ln X2(t2) + cX1(t2). At this point, there is no more information in the activation–accumulation description above than is in the empirical law. However, one can search for more information hidden in the above two-component description by turning attention to the full data available, not only to the mean value of GFP. For each sampled time, the full data available consists of measured GFP levels for at least 10,000 single cells. These 10,000 single-cell measurements are typically distributed as in Figure 5.
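The closed-form solution of the activation–accumulation system can be checked against direct numerical integration. The sketch below is illustrative only: the values of b, c, and the initial conditions are chosen arbitrarily and are not fitted to the CHO data.

```python
import math

def simulate(x1_0, x2_0, b, c, t_end, dt=1e-4):
    # Forward-Euler integration of dX1/dt = -b*X1, dX2/dt = c*X1*X2.
    x1, x2 = x1_0, x2_0
    for _ in range(int(t_end / dt)):
        # Both right-hand sides are evaluated with the old (x1, x2) pair.
        x1, x2 = x1 - dt * b * x1, x2 + dt * c * x1 * x2
    return x1, x2

def closed_form(x1_0, x2_0, b, c, t):
    # Analytic solution; a = c*X1(0)/b is the saturation parameter.
    a = c * x1_0 / b
    x1 = x1_0 * math.exp(-b * t)
    x2 = x2_0 * math.exp(a * (1.0 - math.exp(-b * t)))
    return x1, x2

# Illustrative parameter values only (not fitted to the experiments).
b, c, x1_0, x2_0 = 0.5, 0.02, 100.0, 1.0
num = simulate(x1_0, x2_0, b, c, t_end=10.0)
ana = closed_form(x1_0, x2_0, b, c, t=10.0)
```

The numerical and analytic trajectories agree, and X2 saturates at X2(0)·e^a with a = cX1(0)/b, as the empirical law requires.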
There is a long tail at high values of GFP ., This biological variation in response to the stress is explained by turning the deterministic two-component system into a stochastic two-component system 6 , 7 ., The stochastic description must be completely enforced by ideas behind the deterministic two-component system ., The stochastic model is simple ., X1 is the mean value of a stochastic activation variable which will be denoted by q1 , X1 = 〈q1〉 ., After the heat shock , q1 will decrease with a probabilistic transition rate bq1 ., The activation–accumulation stochastic model is based on the same relation as before ( compare bq1 with bX1 ) , but now it describes the probabilistic transition rate and not a deterministic speed of attenuation ., By the same token , X2 is the mean value of q2 and its probabilistic accumulation rate is cq1q2 ., One notices that the transition probability rate cq1q2 is nonlinear in the variables q1 and q2 ., The stochastic two-component description is thus a mirror image of the deterministic two-component system ., However , the probabilistic system is more powerful as it predicts that the histograms of GFP ( proportional with q2 ) obtained from the flow cytometry measurements follow a Gamma distribution, with GFP ≡ x ., This prediction is confirmed experimentally ( Figure 5 ) ., The fact that the levels of proteins in gene networks tend to follow a Gamma distribution , which is a continuum version of a discrete negative-binomial distribution , was presented in 23 , 24 ., The papers 23 , 24 develop theoretical models describing the steady-state distribution of protein concentration in live cells ., Our interest lies in the non–steady-state behavior of these distributions ., Namely , the aim is to find the time evolution of the parameters that characterize these distributions ., The entire time evolution of the distributions is presented in Figure 6 . 
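The Gamma density quoted above can be written out explicitly , together with a simple method-of-moments estimator for ( ρ , θ ) ; the estimator below is a sketch and not necessarily the fitting procedure used for Figures 5 and 6:

```python
import math

def gamma_pdf(x, rho, theta):
    """Gamma density: x**(rho - 1) * exp(-x/theta) / (Gamma(rho) * theta**rho)."""
    return x ** (rho - 1.0) * math.exp(-x / theta) / (math.gamma(rho) * theta ** rho)

def fit_moments(samples):
    """Method-of-moments estimates from mean = rho*theta, var = rho*theta**2."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    theta = var / mean
    return mean / theta, theta  # (rho, theta)
```

Because the mean is ρθ and the variance is ρθ² , a constant ρ with a growing θ produces exactly the widening histograms reported for the time course .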
The distributions become wider as time passes ., The experimental data reveal that parameter ρ remains constant in time and only θ changes ., These experimental findings are theoretically explained in detail in the section Analysis of the Theoretical Model ., What follows summarizes the theoretical conclusions that are useful in understanding the experimental results of Figures 5 and 6 . The probability distribution for the discrete molecule number q2 , predicted by the stochastic activation–accumulation model , is the negative-binomial distribution ., This distribution appeared in earlier theoretical studies of genetic networks 23 , 24 and in physics 25 , 26 ., The GFP intensity is proportional with q2 and appears in measurements as a decimal number and not as a pure integer ., Thus , to describe the probability distribution of the GFP intensity , a continuous version of the discrete negative-binomial distribution is necessary ., This continuous version is the Gamma distribution observed experimentally in Figures 5 and 6 . 
The physical interpretation of parameter ρ will now be discussed ., At initial time t0 , immediately after the heat shock , there will be at least one cell from the entire cell population which contains the minimum number of molecules q2 ., Denote this number by N0 ., As the time passes , the molecule number q2 will grow , following the described stochastic process ., However , there is a nonzero probability , though extremely small , that the process of accumulation in one cell does not start even after 24 h ., This can happen in one of those cells that contain the minimum number of molecules q2 at the initial time t0 ., Thus , at any later time t > t0 , the lowest possible number of molecules q2 in a cell is N0 as it was at the initial time t0 ., It can be shown ( see the section Analysis of the Theoretical Model ) that ρ = N0 ., This explains the time independence of the experimental values of ρ; it also gives a physical meaning to ρ as being proportional to the minimum number of GFP molecules in a cell ., Parameter θ contains the time evolution of the stochastic accumulation of the GFP molecules ., This evolution can be again expressed as a time conservation property, valid between any two time points t1 and t2 ., The above relation Equation 10 contains parameters a and b and can be used to check the consistency of the model ., Using the data from Figure 6 , it follows that a = 3 . 159 with a 95% confidence interval ( 3 . 074 , 3 . 244 ) and b = 0 . 2572 with a 95% confidence interval ( 0 . 2358 , 0 . 2785 ) ., From the mean value for GFP , it results that a = 2 . 423 with a 95% confidence interval ( 2 . 351 , 2 . 496 ) and b = 0 . 2579 with a 95% confidence interval ( 0 . 2344 , 0 . 
2814 ) ., Parameter a is sensitive to the estimation procedure , a phenomenon connected with the fact that parameter ρ is not perfectly constant but decreases a bit with time ., The mean value of the Gamma distribution is ρθ ., For a perfectly constant ρ , the estimated value for a would be the same using either the θ values or the ρθ data ., Contrary to parameter a , parameter b is independent of the way it is estimated , and the estimation is highly reliable ., To further check the reality of the Gamma distribution for heat shock response , a comparison of the Gamma fit with the lognormal fit is presented in Figure 7 . The lognormal was chosen because it can be viewed as a result of many random multiplicative biological processes ., A loglikelihood ratio less than 1 favors the Gamma distribution against the lognormal ., Moreover , at 37 °C the Gamma distribution is not a good fit ( loglikelihood ratio is bigger than 1 ) as it should be because the promoter is not active ., The law, is useful in making predictions for the fold induction to many other heat shock pulses ., For a heat pulse of a given temperature and duration , parameters a and b can be read out from Figure 8 . The constant level contours were inferred from the experimental data ., The level patterns differ; parameter a increases monotonically with the temperature and duration of the heat pulse ( Figure 8A ) , while the levels of parameter b form an unstable saddle shape pattern ( Figure 8B ) ., The conclusion of this section will be rephrased using a control theory perspective ., The end result of this paper is an input–output relation for the response of the CHO cells to heat shocks , together with a theoretical model that explains it ., The input signals are pulses of a precise time duration D and temperature height T . 
The measured output signal is the GFP intensity ., The input–output relation is given by the time-dependent probability density for GFP intensity, with, Parameters a and b are functions of the input signal , that is a = a ( T , D ) and b = b ( T , D ) ., The dependence of parameters a and b on temperature T and duration D is given by the contour plots of Figure 8 . The functional forms of a = a ( T , D ) and b = b ( T , D ) are a consequence of biological phenomena that take place during the heat shock ., We do not have a theoretical model for the phenomena that take place during the heat shock ., To explain the time evolution of the output variable ( GFP intensity ) , we developed a coarse-grained model for the heat shock response ., This coarse-grained model is valid for the biological phenomena that take place after the end of the heat shock ., The model predicts the existence of a molecular factor that controls the GFP accumulation ( variable q1 ) ., We associated this theoretical factor with the heat shock factor HSF1-DNA binding activity ., The theoretical model is based on an activation variable q1 and an accumulation variable q2 ., The state of this two-component model is thus ( q1 , q2 ) , and any pair of positive integer numbers can be a possible state ., The main goal is to find the mean value and standard deviation for the activation and accumulation variable , respectively ., These quantities will be obtained from the equation for the probability P ( q1 , q2 , t ) that the system is in the state ( q1 , q2 ) at the time t ., The equation for P ( q1 , q2 , t ) depends on the multitude of transitions which can change a state ( q1 , q2 ) ., The experimental results suggest that two possible transitions change the state ( q1 , q2 ) ., One transition represents the decrease of the activation variable from q1 to q1 − 1 ., On the state ( q1 , q2 ) , this attenuation appears as ( q1 , q2 ) → ( q1 − 1 , q2 ) , with an unaffected accumulation variable q2 .,
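The two transitions with probability rates bq1 ( attenuation ) and cq1q2 ( accumulation ) define a simple stochastic simulation ., The sketch below uses the Gillespie direct method with illustrative parameters ( not values fitted to the CHO data ) and tracks only the final copy number , which is reached once q1 hits zero and both rates vanish:

```python
import random

def final_q2(q1, q2, b, c, rng):
    """Jump-chain (Gillespie direct method) simulation of the two transitions:
    (q1, q2) -> (q1 - 1, q2)  with probability rate b*q1     (attenuation)
    (q1, q2) -> (q1, q2 + 1)  with probability rate c*q1*q2  (accumulation)
    Both rates vanish once q1 = 0, freezing q2 at its final value; waiting
    times are not needed for the final state, so only jump choices are drawn."""
    while q1 > 0:
        r_att, r_acc = b * q1, c * q1 * q2
        if rng.random() * (r_att + r_acc) < r_att:
            q1 -= 1
        else:
            q2 += 1
    return q2
```

With q1 ( 0 ) = 20 , q2 ( 0 ) = 5 , b = 0 . 25 and c = 0 . 0125 ( so that a = cq1 ( 0 ) /b = 1 ) , the population mean of the final q2 sits near q2 ( 0 ) ·e^a ≈ 13 . 6; the exact stochastic mean is slightly larger , because averaging the exponential over q1 paths exceeds the exponential of the average ( Jensen's inequality ) .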
The second transition will describe the accumulation of the accumulation variable from q2 to q2 + 1 ., On the state ( q1 , q2 ) , this accumulation appears as ( q1 , q2 ) → ( q1 , q2 + 1 ) , with the activation variable q1 now being unaffected ., A notation for the transition direction can be introduced: ɛ−1 = ( −1 , 0 ) ., The degradation transition can thus be written as ( q1 , q2 ) → ( q1 , q2 ) + ɛ−1 ., The negative sign in the index −1 is just a reminder of the fact that the transition reduces the number of molecules; the 1 in the subscript tells us that the transition is on the first variable ., Likewise , the accumulation transition can be expressed as ( q1 , q2 ) → ( q1 , q2 ) + ɛ2 and ɛ2 = ( 0 , 1 ) ., The index 2 is positive ( accumulation ) and is associated with the second component ., To find the probability P ( q1 , q2 , t ) , the transition probabilities per unit time are needed ., The experiment suggests we use, as the transition probability rate for the attenuation of the activation component , and, as the transition probability rate for the increasing of the accumulation component ., The stochastic model can be represented with the help of a molecular diagram 7 ( Figure 9 ) ., The components q1 and q2 are represented by ovals and the transitions by squares ., The lines that start from the center of a transition square represent the sign of that transition and point to the component on which the transition acts ., The transition ɛ−1 is negative , so the line ends in a bar and acts on q1 ., The transition ɛ2 is positive and so the line ends with an arrow; it acts on q2 ., The lines that stop on the edges of the transition squares represent the transition probability rates ., The line that starts from q1 and ends on ɛ−1 represents the transition probability rate bq1 ., In other words , the transition ɛ−1 is controlled by q1 ., The lines that start on q1 and q2 and merge together to end on ɛ2 represent the product cq1q2 , ( the merging point 
represents the mathematical operation of taking the product ) ., At this point , the theoretical model is fixed and what comes next is a sequence of computations to extract information out of it ., This information will be compared with the experimental results ., Given the transition rates , the equation for the probability P ( q1 , q2 , t ) is given by the following equation 7 , 25 ., The above equation for P ( q1 , q2 , t ) is not easy to solve ., We will use the method outlined in 6 , 7 and work with the function X ( z1 , z2 , t ) defined by, The equation for the function X ( z1 , z2 , t ) is a consequence of the equation for P ( q1 , q2 , t ) :, The goal is to find the time variation of the mean value and standard deviation for the activation and accumulation variable: 〈q1〉 , 〈q2〉 ,, ,, , 〈q1 , q2〉 , etc ., Here 〈〉 is a notation for the mean value with respect to the probability distribution P ( q1 , q2 , t ) ., From X ( z1 , z2 , t ) , the above mean values can be obtained by taking partial derivatives of X ( z1 , z2 , t ) at z1 =1 , z2 = 1 ., These partial derivatives are actually the factorial cumulants of the probability distribution P ( q1 , q2 , t ) ., In what follows , the sign =: means that the right side is introduced as a notation ., The equations for X1 ( t ) , X2 ( t ) , X11 ( t ) , and higher factorial cumulants result from the equation for X ( z1 , z2 , t ) :, The activation–accumulation model being nonlinear , the equations for the factorial cumulants cannot be reduced to a finite system of equations , unless some approximation technique is employed ., All third-order cumulants were discarded to obtain the above system of equations ., In 7 it was shown , using simulations , that the effect of discarding higher-order factorial cumulants is negligible ., The finite system thus obtained contains X1 , X2 , X12 , X11 , and X22 as variables ., Although it can be solved for X1 and X2 , we found that the influence of the correlation term X12 is small 
and cannot be experimentally detected in the GFP response ., Taken thus , X12 = 0 , and the system of equations is reduced to:, The solution to X22 from the four-equation system is, with k a constant determined from the initial value X2 ( t0 ) at some time t0 after the heat shock ., The solution can be restated in terms of the variance , Var , of the variable q2 ., The transformation from the factorial cumulants to Var is, And , thus , remembering that the mean value of q2 is X2 , it follows that, Such a relation between Var and Mean is satisfied by the negative-binomial distribution , a point to which we will return later ., Employing the general procedure , we continue to solve the system of equations for X1 , X2 , X11 , and X22 ., However , for the case of negligible X12 , the stochastic process is decoupled in two stochastic processes , each of which is exactly solvable ., It is thus useful to solve directly for the probability distribution of q2 at this point ., The transition probability rate for the first stochastic process ( for the activation component q1 ) is the same as before:, ., For the second one , it changes from, to, ( the coupling between q1 and q2 is through the mean value of q1 now ) ., This simplifies the problem of finding the distribution of q2 ., Denote the mean value of cq1 with g ( t ) , which acts actually like a signal generator on q2 6 , 7 ., The time variation of g ( t ) from the first equation in Equation 20 is, so the stochastic process for q2 now has an accumulation transition rate, The origin of time , t = 0 , is taken at the end of the heat shock , so X1 ( 0 ) represents the mean value of the activation variable at the end of the heat shock ., The probability P ( q2 , t ) to have q2 number of molecules at time t can be found from the master equation for this process, To find the solution , an initial condition P ( q2 , t0 ) must be specified ., The time t = t0 is some time taken after the heat shock pulse ( t0 > 0 ) , when the
effects of the shock start to be detectable; it can be , for example , 30 min or 2 h after the pulse ., The probability distribution P ( q2 , t0 ) can be obtained , in principle , from the experimental values of GFP since GFP = fq2 ., There is an obstacle though: the proportionality factor f is unknown ., The factor f converts the number of molecules q2 into the laser intensity which is the output of the flow cytometry machine ., The conversion from the molecule numbers to the laser intensity can be more complicated than the proportionality relation GFP = fq2 ., For example , a background B can change the relation into: GFP = fq2 + B . We measured the GFP in regular CHO cells ( no Hsp70-GFP construct ) and found that the background B is about 50 times less than the minimum intensity of GFP in the transfected CHO cells ., The settings of the flow cytometry instrument were set in a linear response range , and thus we will use the scaling relation GFP = fq2 to connect the flow cytometry readings with the number of molecules ., To conclude this initial condition discussion , in a perfect setting we would know the scaling factor f and then get P ( q2 , t0 ) from the measured data ., Because the scaling factor f is unknown , the problem will be solved in two steps ., The first step in choosing P ( q2 , t0 ) is based on a simple assumption: all cells have the same number of molecules q2 = N at the time t = t0 ., That is P ( q2 , t0 ) = δ ( q2 , N ) where δ is the Kronecker delta function ., The solution to Equation 26 with this initial condition is, Here q2 can take only values greater than or equal to N , q2 = N , N + 1 , ··· ., This distribution appeared in the study of cosmic rays 26 , and in the context of protein production was presented in 23 ., In terms of the variable x = q2 − N , it is known as the negative-binomial distribution , with interpretations that are not connected with the present problem ., The number N also represents the minimum possible number of molecules q2
in any cell ., This physical interpretation of N will be helpful in what follows ., The variable p ( t ) in the distribution is time-dependent , since the signal generator g ( t ) acts on q2:, The mean and variance for q2 are, from which follows Equation 23, Although the assumption that all the cells contain the same number of molecules at t = t0 is unreal , it produces a valuable outcome ., The negative-binomial distribution implies a Gamma distribution for the GFP intensity ( through the scaling relation GFP = fq2 ) , a fact to be discussed shortly ., Because the Gamma distribution is a good fit for the experimental data , we conclude that the negative-binomial is the correct solution for the distribution of the accumulation variable q2 ., The second step in choosing the probability distribution P ( q2 , t0 ) will be guided by the experimental results ., The experimental results show that the biological system passes through a chain of events from an unknown distribution of GFP before the heat shock , to a Gamma distribution at some time t0 after the heat shock ( 2 h , for example ) ., Also , the experiment shows that the distribution of GFP is Gamma at later times t > t0 ., In other words , the distribution of q2 becomes a negative-binomial at some time t0 after the heat shock and then afterward remains negative-binomial ., These experimental observations are mathematically explained by showing that a solution to Equation 26 with a negative-binomial distribution at t0 remains negative-binomial for all later times t > t0 ., Indeed , the solution to Equation 26 with a negative-binomial initial condition, is, which is a negative-binomial at all times t > t0 ., The number N0 is the minimum number of molecules q2 to be found in a cell at t0 and also at all later times t > t0 ( because q2 cannot decrease ) ., The time evolution of the mean 〈q2〉 is, and represents , using Equation 24 , the same empirical law ( Equation 8 ) as before ., To conclude , the dynamical 
system is such that once the cells enter into a negative-binomial distribution at some time after the heat shock , the distribution remains negative-binomial at later times ., As the time passes , all the distributions will have the same parameter N0 but different parameters p ( t ) ., To connect the theory with the experimental results , the probability distribution for the GFP intensity is needed ., This distribution is the continuum limit of the distribution for q2 ., It is a well-known fact that the continuum limit of a negative-binomial distribution is the Gamma distribution ., This continuum limit is presented here in order to find parameters ρ and θ , which can be experimentally measured ., The change from the integer variable q2 to the real variable fq2 is simple if advantage is taken of the fact that the common parameter N0 is a small number ., Parameter N0 is less than any possible molecule number q2 present in the system after the time t0 , q2 ≫ N0 ., Then , writing for simplicity p ( t ) as p ,, In the last step , we used the approximation 1 − y ≅ e−y for small values of y ., To go from the discrete variable q2 to the continuous variable GFP , we write the above relation as an equation for the probability density, with Δq2 = 1; then scale to GFP , ( GFP = f q2 ) ., The probability density, P℘ for GFP is then, This is a Gamma distribution for GFP ≡ x, with, From Equation 24 and 28 , we get, The mean value of the Gamma distribution is ρθ from which the empirical law Equation 8 follows ., The way the material is organized and presented in this paper is an outcome of a series of guiding principles imposed upon the project ., These guiding principles were formulated to keep in balance the experimental data with both the mathematical and biological models ., The guiding principles are:, 1 ) start from experimental measurements and discover an empirical law from data using signal generators as input into the system;, 2 ) build a simple mathematical model with 
as few parameters as possible to explain the empirical law;, 3 ) check the mathematical model using additional experimental information;, 4 ) use a general math | Introduction, Results, Discussion, Materials and Methods | In many biological systems , the interactions that describe the coupling between different units in a genetic network are nonlinear and stochastic ., We study the interplay between stochasticity and nonlinearity using the responses of Chinese hamster ovary ( CHO ) mammalian cells to different temperature shocks ., The experimental data show that the mean value response of a cell population can be described by a mathematical expression ( empirical law ) which is valid for a large range of heat shock conditions ., A nonlinear stochastic theoretical model was developed that explains the empirical law for the mean response ., Moreover , the theoretical model predicts a specific biological probability distribution of responses for a cell population ., The prediction was experimentally confirmed by measurements at the single-cell level ., The computational approach can be used to study other nonlinear stochastic biological phenomena . 
| The structure of an unknown biological system is uncovered by experimentally perturbing the system with a series of input signals ., The response to these perturbations is measured as output signals ., Then , the mathematical relation between the input and the output signals constitutes a model for the system ., As a result , a classification of biological molecular networks can be devised using their input–output functional relation ., This article studies the input–output functional form for the response to heat shocks in mammalian cells ., The Chinese hamster ovary ( CHO ) mammalian cells were perturbed with a series of heat pulses of precise duration and temperature ., The experimental data , taken at the single-cell level , revealed a simple and precise mathematical law for the time evolution of the heat shock response ., Parameters of the mathematical law can be experimentally measured and can be used by heat shock biologists to classify the heat shock response in different experimental conditions ., Since the response to heat shock is the outcome of a transcriptional factor control , it is highly probable that the empirical law is valid for other biological systems ., The mathematical model explains not only the mean value of the response but also the time evolution of its probability distribution in a cell population . | mus (mouse), computational biology | null |
18 | journal.pcbi.1003158 | 2,013 | Expanding the Druggable Space of the LSD1/CoREST Epigenetic Target: New Potential Binding Regions for Drug-Like Molecules, Peptides, Protein Partners, and Chromatin | Lysine specific demethylase-1 with its corepressor protein CoREST ( LSD1/CoREST ) has emerged as one of the most promising epigenetic targets in drug discovery and design 1 ., LSD1/CoREST is widely investigated for its expanding biological roles in cancer , neurodegeneration , and viral infection 2–7 ., The precedence for drugging chromatin modifying epigenetic targets was established with FDA approval of vironostat and romidepsin , antineoplastic epigenetic drugs that target histone deacetylases 8–10 ., However , no promising therapeutics that target LSD1/CoREST have emerged to date ., A few LSD1 inhibitors have been reported 6 but they display modest activity , have non-ideal medicinal chemistry features due to their polycationic nature 11 , 12 or are poorly selective covalent inhibitors that bind to FAD in the H3-histone N-terminal tail-binding pocket ( Figure 1 ) 13–15 ., Alternatively , short peptide sequences have been recently designed to bind with affinities comparable to those displayed by the natural H3-histone substrate 16 and are inspiring the development of lead compounds ., Recently , our group proposed that druggable regions beyond the AOD active site ( Figure 1 ) might hold the key to developing pharmacologically relevant inhibitors by an allosteric mechanism revealed by extended molecular dynamics ( MD ) simulations 17 , 18 ., Moreover , these new druggable regions could target protein-protein interactions necessary to the formation of multi-protein complexes 19–25 and/or prevent LSD1/CoREST from binding to the nucleosome 18 , 26 ., Multiple solvent crystal structures ( MSCS ) is an experimental technique that can probe favorable binding regions for small molecular fragments on protein surfaces ., Still , only a reduced number of protein crystals 
are suited for such experiments because the conditions for MSCS can interfere with crystallization ., This limitation highlights the importance of developing reliable computational techniques that quickly and accurately identify potential binding hot spots on a protein receptor ., FTMap 27 and SiteMap 28 , 29 are two algorithms that were successfully and independently developed to predict druggable hot spots ., In order to investigate protein druggability while effectively including receptor dynamics , conformational clustering analysis has been shown to generate reduced receptor configurational ensembles with significant computational timesaving 30–33 ., Thus far , ensemble-based approaches have often employed clustering algorithms to select only a handful of dominant receptor MD centroids , which are the most representative structures extracted from a conformational clustering analysis , but this poses the general question whether a few most dominant structures are sufficient to capture more ephemeral states of the receptor , which could contribute to important mechanistic steps such as the opening of transient cavities available for binding ., Nichols et al . 
highlighted this problem in the context of blind virtual screening through ligand docking to MD generated receptor structures 34 , 35 ., In this study , we took a complete-ensemble approach by effectively including all the most relevant MD centroids in addition to available X-ray structures to probe the druggable space of the dynamic LSD1/CoREST epigenetic target ( Figure 1 ) ., A reduced number of tens of MD centroids allows effectively eliminating redundant information and efficient computational analysis ., The entire LSD1/CoREST protein complex was investigated using the independent algorithms FTMap and SiteMap so that previously uncharacterized hot spots could be identified ., The newly developed Druggable Site Visualizer ( DSV ) software tool was used to inspect favorable binding regions ., The resultant computational predictions were compared with the available experimental data including X-ray crystallography experiments that used small peptides to investigate protein-protein interactions on the LSD1/CoREST surface ., The co-crystallized Pro-Leu-Ser-Phe-Leu-Val peptide in a novel , predicted binding site on LSD1/CoREST shows the strength of the methods hereby presented ., The molecular systems and simulations used in this study were previously described 17 , 18 ., The atomic coordinates from the structure by Yang et al . ( PDB ID: 2IW5; 2 . 6 Å resolution ) 26 were used to initialize a 500 ns run of LSD1/CoREST ., A second 500 ns run of LSD1/CoREST bound to the H3-histone N-terminal tail ( 16 residues ) was initialized using the peptide substrate coordinates by Forneris et al . ( PDB ID: 2V1D; 3 . 1 Å resolution ) 36 ., Standard preparation , minimization , heat-up , and equilibration procedures were performed using GROMACS ( version 4 . 5 . 
4 ) compiled in double precision 37 , 38 , the GROMOS 53A6 force field parameter set 39 , the compatible SPC water model 40 , and compatible ion parameters 41 ., 50 , 000 MD snapshots were extracted every 10 ps from each trajectory and used for analysis ., An RMSD-based conformational clustering algorithm was used to extract reduced unbound and H3-bound configurational ensembles 42 as implemented in the GROMACS g_cluster program 37 , 38 ., The snapshots from each trajectory were aligned to each other by least-square fitting 43 of the Cα atoms of key residues from the amine oxidase domain ( Pro171-Glu427 and Ser517-Leu836 ) ., Conformational clustering was performed on all atoms of these residues by scanning a wide range of RMSD similarity thresholds , and the final choice was made by employing a similarity threshold of 2 Å ., See the Results section for a detailed discussion of the conformational clustering analysis ., Prior to the mapping calculations each structure was prepared using the Protein Preparation Wizard utility from Schrödinger 44 , 45 ., Water molecules were removed when present and hydrogen atoms added to reproduce a neutral apparent pH . The position of all hydrogen atoms was energy minimized using the OPLS 2005 force field 46 ., The FTMap and SiteMap alternative computational approaches were used to search for favorable binding regions on LSD1/CoREST structures ., The FTMap algorithm samples an order of 10^9 docked poses for 16 small molecule probes using Fast Fourier Transforms ., The docked probes are scored and reduced to sets containing the top 2 , 000 poses for each probe ., After minimization the probes are rescored and clustered using a 3-Å cutoff ., The SiteMap algorithm generates site points on a grid surrounding the receptor van der Waals surface ( 0 .
35 Å grid 3D resolution in our study ) ., Site points sheltered in a pocket or cleft of the protein are retained while points left exposed to solvent are eliminated; the criteria for retaining a site point is determined by the ratio of the squares of the distance of site points to a protein receptor atom and the van der Waals radius of that receptor atom being less than the default value of 2 . 5 29 ., The remaining site points that have neighbors in close proximity are grouped into SiteMap sites ., A probe simulating a water molecule explores each site and characterizes the sites based on van der Waals and electrostatic potentials ., Contour maps of each site are generated that describe the binding characteristics of the site ., Apart from grid resolution , the SiteMap default settings were employed in all cases and sites were merged with the receptor into a single PDB file for analysis ., The Druggable Site Visualizer ( DSV ) software was developed for this work as a plugin for graphical modeling with Visual Molecular Dynamics ( VMD ) 47 ., Figure 2 summarizes the DSV workflow and the underlying automated steps that remain blind to the user ., The DSV function Visualize takes FTMap and SiteMap output in PDB file format and processes it for convenient and data-rich visualization ., Visualize employs as arguments either a single receptor structure or an ensemble of structures; the latter scenario is subsequently described and used in this work for processing the reduced MD ensembles ., The user loads a first PDB structure through DSV and a QuickSurf representation is created ., Then the remaining structures with FTMap and SiteMap information are loaded as DSV performs their automated alignment to the first reference structure ., DSV converts FTMap consensus sites ( CSs ) to spheres centered about the geometric midpoint of each CS and sized according to CS rank ( largest sphere corresponding to highest ranking CS ) ., This graphical approach was inspired by previous 
work by Ivetac and McCammon 32 and automated in DSV ., DSV colors such FTMap spheres corresponding with the rank of the MD centroid they correspond to ( color coding goes from red for highest-ranking MD centroids to blue for lowest ranking MD centroids where rank is determined by population of the MD cluster from which the centroid was extracted by conformational clustering ) ., In parallel , DSV Visualize converts the SiteMap sites to isosurface representations colored according to their MD centroid rank ., By default , all of the FTMap spheres and SiteMap surfaces are displayed on the first-loaded reference structure ., For graphical purposes the user makes some system dependent , arbitrary decisions ., Typical user-defined inputs are: In this work the number of CSs displayed for each system are specified in the text and figure captions , LSD1/CoREST structures were aligned based on the Cα atoms of all protein residues , the largest sphere radius was set equal to the number of spheres displayed ( in Å ) , and the iValue was set to the default value 0 . 
5 ., Another automated feature of DSV is the Select-residues function ., This function may work with a single receptor structure or an ensemble of structures that contain FTMap and SiteMap output ., The latter scenario is subsequently described and used in this work for identifying residues defining new druggable regions as described in the Discussion section ., The first PDB reference structure file is loaded through DSV and a NewCartoon representation of the protein receptor is produced ., Subsequent structures are loaded through DSV and aligned to the initial reference structure , following an identical procedure described above for the Visualize function ., Select-residues then loops through all MD centroids and selects residues within 3 Å of FTMap CSs and produces licorice representations of the residues on the first structure while removing duplicate occurrences of residues across the ensemble of MD centroids ., A licorice representation of residues is created for all residues within 3 Å of SiteMap sites while eliminating redundancy ., At the last step , a third representation is created that shows residues in licorice representations for residues within 3 Å of both FTMap and SiteMap sites ., For graphical purposes the user inputs some system-dependent decisions ., Examples of user-defined inputs in the first release of DSV are: The first release of DSV ( version 1 . 0 ) can be freely downloaded at the software tools web page of the Baron lab , currently: http://barongroup . medchem . utah . edu/tools ., The crystallographic data and three-dimensional structure of LSD1/CoREST bound to the peptide Pro-Leu-Ser-Phe-Leu-Val were described before 16 ( PDB ID: 3ZMV ) ., Briefly , the peptide complex was obtained by crystal soaking in solutions consisting of 1 . 6 M sodium/potassium tartrate , 100 mM N- ( 2-acetamido ) -2-iminodiacetic acid pH 6 . 
5 , 10% ( v/v ) glycerol , and 2–5 mM peptide for 3 h ., X-ray diffraction data were collected at 100 K at the Swiss Light Source ( Villigen , Switzerland ) ., Data processing and refinement were carried out using programs of the CCP4 package 48 ., The reduced ensembles obtained from conformational clustering contained 52 ( unbound ) and 45 ( H3-bound ) MD centroids ., Figure 1 shows the MD centroids sorted according to their cluster rank as visualized by Druggable Site Visualizer ( DSV ) ., The top-ranking clusters contained 11 , 643 ( unbound ) and 10 , 995 ( H3-bound ) MD snapshots whereas four ( unbound ) and three ( H3-bound ) MD clusters were singly populated ., Overall , this result was consistent with the general observation of a moderate decrease in LSD1/CoREST flexibility upon H3-histone binding 17 , 18 ( Figure 1 ) ., Note that this study employed all the MD centroids in each ( unbound or H3-bound ) reduced ensemble , to account as well for transient and more rare MD snapshots ., It is therefore different from previous closely related approaches ( e . g . see Refs . 
32 , 33 that focused the analysis on the most dominant MD centroids only ) ., Druggability mapping was first explored using available X-ray structures of the LSD1/CoREST complex ., Results based on X-ray structures of LSD1/CoREST bound to the H3 ( PDB code 2V1D 36 ) and SNAIL ( PDB code 2Y48 49 ) N-terminal peptides were mapped with DSV for the five highest-ranking FTMap CSs ( Figure 3A , top row ) and the 10 highest-ranking FTMap CSs ( Figure 3A , bottom row ) ., Druggability mappings of these structures were performed both in the absence ( first column ) and presence ( second and third columns ) of the peptide ligands ., In all cases , the most likely druggable region picked by FTMap was clearly the well-known H3-pocket ., The FAD cofactor pocket was also similarly favored ( Figure S1 ) ., This result confirmed that new favorable regions were found independently of which X-ray structure was employed , and independently of which peptide substrates occupied the H3-binding site ., The observed ability of FTMap to blindly predict favorable LSD1/CoREST sites for non-covalent binding of peptide ligands or of the FAD cofactor confirmed analogous successes recently reported for different protein receptors 27 , 50 , 51 ., After achieving confidence in FTMap accuracy on the LSD1/CoREST complex , druggability mapping was investigated using complete reduced MD ensembles obtained through conformational clustering of each of our 500 ns MD simulations to evaluate the effects of LSD1/CoREST dynamics on the 3D druggable space ., Figure 3B shows the five highest-ranking FTMap CSs ( top row ) and the 10 highest-ranking FTMap CSs ( bottom row ) on the MD reduced ensembles ( Figure 1 ) ., The CSs from the unbound and bound reduced ensemble predicted that the H3-pocket and FAD cofactor sites were strongly favorable as observed for the X-ray structures ( Figure 3B ) ., However and most important , inclusion of LSD1/CoREST dynamics resulted in remarkably broader predicted druggable 
regions due to the opening of transient niches and cavities on the protein surface and in the H3-pocket ( cf . Figure 3A vs . 3B ) ., Most notably , new CSs were observed at the AOD/SWIRM ( solid arrows Figure 3B ) and AOD/Tower ( hollow arrows Figure 3B ) inter-domain interfaces , which widely expanded the druggable regions ., In addition to performing FTMap calculations on LSD1/CoREST experimental structures and MD reduced ensembles , SiteMap calculations were also performed to explore the druggable space of LSD1/CoREST by means of an alternative , independent algorithm ., Figure 4 shows the comparison of the top-five FTMap CSs and SiteMap sites obtained from DSV using the PDB ID 2V1D ( H3-histone tail present during FTMap and SiteMap calculations ) , the unbound MD reduced ensemble , and the H3-bound MD reduced ensemble ( H3-histone tail present during FTMap and SiteMap calculations ) ., Consensus between FTMap and SiteMap was expected and largely found , as inferred by the observation that every FTMap sphere overlapped with a predicted SiteMap surface ., In all cases , however , the SiteMap sites were also found in regions in which FTMap did not predict favorable sites ., Most prominently , SiteMap predicted binding sites in the CoREST-SANT2/Tower region , while FTMap did not ., In addition , SiteMap predicted more binding sites along the AOD/Tower inter-domain interface and on the SWIRM domain ., Overall , the diverse unbound and H3-bound configurational ensembles led to distinguishable distributions of SiteMap sites on the LSD1/CoREST domains , in line with what was observed using FTMap on the same MD ensembles ., Crystal contacts on protein surfaces and computational hot spot prediction have been used to predict protein-protein interactions in the past 52 , 53 ., We sought to compare the LSD1/CoREST regions involved in crystal packing with the sites revealed by the computational analysis to determine whether predicted druggable sites corresponded to
LSD1/CoREST crystal contacts ., It was very satisfactory to see ( Figure 5 ) that the regions involved in inter-molecular crystal-packing interactions overlapped closely with both FTMap CSs and SiteMap sites ., For instance , the Tower domain had minimal SiteMap and FTMap hot spots ., Nevertheless , the crystal-contact inspection showed that the Tower of an LSD1/CoREST molecule interacted through crystal-contacts with a SiteMap-predicted hot spot on the amine oxidase domain ( AOD ) of a symmetry-related LSD1/CoREST molecule ( Panel B in Figure 5 ) ., Likewise , the crystal-contact regions between the AOD and Tower/CoREST-SANT2 domain contained SiteMap-predicted hot spots on both partners ( Panel C in Figure 5 ) ., These results further validated our approach and supported the observation that the identified sites represented promising small-molecule or protein-protein interaction sites ., Additional support to the validity of our approach was given by the investigation of the crystal structure of LSD1/CoREST bound to Pro-Leu-Ser-Phe-Leu-Val ., This peptide was investigated in the framework of a study aimed at identifying the sequence features that confer specificity to the interaction between the LSD1/CoREST active site and the N-terminal SNAG domain of SNAIL1 and related transcription factors 16 , 49 ., Interestingly , the crystallographic analysis revealed that this peptide binds not only to the catalytic site but also in a distinct shallow cleft in the AOD domain ( Figure 6 ) ., The electron density was poorly defined for Pro1 , but showed well-defined conformations for all other ligand residues bound to this newly discovered site ., In particular , the peptide adopted an extended conformation that enabled its backbone to establish H-bond interactions with an adjacent β-strand ( residues 317–323 ) ., Furthermore , Phe4 and Val5 were both engaged in van der Waals contacts with nearby residues ( Ala318 , Thr319 , Phe320 , Leu329 , and Val747 ) ., It remains to be 
seen whether this region actually represents a potential site for interactions between LSD1 and other proteins; this will be the subject of future studies ., In the context of this work , it was most significant that the peptide-binding site was correctly identified by our computational analysis and showed that including LSD1/CoREST dynamics was crucial ., In more detail , neither FTMap nor SiteMap identified this region as a potential hotspot when the crystallographic coordinates were used ., However , when the calculations were performed using the LSD1/CoREST configurational ensemble generated from MD snapshots the binding site was correctly located by FTMap on one centroid and by SiteMap on 71% of the centroids ( Figure 7A ) ., Examination of the correlation between SiteMap hot spot prediction with specific protein conformational changes highlighted the importance of Arg312 and Phe320 ( Figures 6 and 7 ) ., During the MD simulations , these residues sampled conformations that enabled SiteMap to identify the region as potential binding site ( Figure 7B , second column ) ., Interestingly , Arg312 and Phe320 also sampled configurations that closed the binding pocket and led to negative SiteMap predictions ( Figure 7B , third column ) ., These results underscored the importance of including ensembles of LSD1/CoREST structures for exploring the presence of new binding regions even if peptide binding does not cause per se any conformational change as gathered by the comparison of the bound and unbound crystal structures ., Our findings were in line with a recent study by Johnson and Karanicolas indicating that druggable protein interaction sites are more predisposed to surface pocket formation compared with the rest of the protein surface 54 ., On the other hand , it remains to be validated whether all new binding regions identified are favorable binding sites for small drug-like molecules; as suggested by Eyrisch and Helms transient pocket formation on protein 
surfaces may not be relevant in the context of protein-protein interactions 55 ., Ongoing computational and experimental studies are being performed to target the newly predicted regions to discover new molecular probes ., An ensemble approach was designed to explore the druggability of dynamic protein receptors and applied to the LSD1/CoREST epigenetic target ., Overall , five well-distinct , new binding regions were revealed and display hot spot properties comparable to the well-known H3-histone site ( Figure 8 ) ., The regions at the SANT2/Tower interface ( region A ) and at the SWIRM/AOD interface ( region B ) overlap with the most prominent hinge points revealed by molecular dynamics simulations 17 , 18 ., We suggest that they could be of primary relevance for LSD1/CoREST chromatin binding ., A third interface region overlapping with a dynamic hinge point was discovered at the AOD/Tower interface ( region C ) ., These first three regions are optimal targets for the discovery of molecular probes that might block LSD1/CoREST dynamics and prevent chromatin and/or protein association ., Supporting experimental evidence of these computationally predicted properties can be obtained by examination of the LSD1/CoREST crystal contacts ( Figure 5 ) ., A fourth region encompassing the back of the AOD domain was also predicted to have strong propensity for molecular binding ( region D ) ., The computational prediction of this region was validated by X-ray crystallography experiments that used small peptides designed to investigate protein-protein interactions on the LSD1/CoREST surface ., The co-crystallized Pro-Leu-Ser-Phe-Leu-Val peptide in a novel , blindly predicted binding site on LSD1/CoREST shows the strength of the approach presented ., In addition , the observation that this true prediction would be prevented when using only the X-ray structures available ( including the structure bound to the same peptide ) underscores the relevance of including protein dynamics 
in the prediction of protein interactions ., A fifth region was highlighted corresponding to a small pocket on the AOD domain ( region E ) ., On the basis of our molecular dynamics simulations we propose that this predominantly hydrophobic pocket could be relevant as an allosteric site to hamper substrate binding ., This study sets the basis for future virtual screening campaigns targeting the five novel regions reported and for the design of LSD1/CoREST mutants to probe LSD1/CoREST binding with chromatin and various protein partners ., We developed and presented the Druggable Site Visualizer ( DSV ) that allows treatment of data of large-size protein configurational ensembles; it is freely distributed to the public , and readily transferable to other protein targets of pharmacological interest . | Introduction, Materials and Methods, Results, Discussion | Lysine specific demethylase-1 ( LSD1/KDM1A ) in complex with its corepressor protein CoREST is a promising target for epigenetic drugs ., No therapeutic that targets LSD1/CoREST , however , has been reported to date ., Recently , extended molecular dynamics ( MD ) simulations indicated that LSD1/CoREST nanoscale clamp dynamics is regulated by substrate binding and highlighted key hinge points of this large-scale motion as well as the relevance of local residue dynamics ., Prompted by the urgent need for new molecular probes and inhibitors to understand LSD1/CoREST interactions with small-molecules , peptides , protein partners , and chromatin , we undertake here a configurational ensemble approach to expand LSD1/CoREST druggability ., The independent algorithms FTMap and SiteMap and our newly developed Druggable Site Visualizer ( DSV ) software tool were used to predict and inspect favorable binding sites ., We find that the hinge points revealed by MD simulations at the SANT2/Tower interface , at the SWIRM/AOD interface , and at the AOD/Tower interface are new targets for the discovery of molecular probes to 
block association of LSD1/CoREST with chromatin or protein partners ., A fourth region was also predicted from simulated configurational ensembles and was experimentally validated to have strong binding propensity ., The observation that this prediction would be prevented when using only the X-ray structures available ( including the X-ray structure bound to the same peptide ) underscores the relevance of protein dynamics in protein interactions ., A fifth region was highlighted corresponding to a small pocket on the AOD domain ., This study sets the basis for future virtual screening campaigns targeting the five novel regions reported herein and for the design of LSD1/CoREST mutants to probe LSD1/CoREST binding with chromatin and various protein partners . | Protein dynamics plays a major role in determining the molecular interactions available to molecular binding partners , including druggable hot spots ., The LSD1/CoREST complex is one of the most relevant epigenetic targets discovered and was shown to be a highly dynamic nanoscale clamp using molecular dynamics simulations ., The general relationship between LSD1/CoREST dynamics and the molecular sites available for non-covalent interactions with an array of known binding partners ( from relatively small drug-like molecules and peptides , to larger proteins and chromatin ) remains relatively unexplored ., We employed an integrated experimental and computational biology approach to effectively capture the nature of non-covalent binding interactions available to the LSD1/CoREST nanoscale complex ., This ensemble approach relies on the newly developed graphical visualization by Druggable Site Visualizer ( DSV ) that allows treatment of large-size protein configurational ensembles data and is freely distributed to the public and readily transferable to other protein targets of pharmacological interest . 
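The sphere construction that DSV applies to FTMap consensus sites in the sections field above — a sphere centered at the geometric midpoint of each CS, with the largest sphere radius equal to the number of spheres displayed (in Å) — can be sketched in a few lines. DSV itself is a VMD plugin; the Python below is only an illustration, and the linear radius ramp down to 1 Å for the lowest-ranking CS is our assumption, not something stated in the text.

```python
# Sketch of DSV's sphere construction for FTMap consensus sites (CSs):
# each CS becomes a sphere at the geometric midpoint of its probe atoms,
# sized by CS rank (rank 1 = highest). Per the text, the largest sphere
# radius equals the number of spheres displayed (in Å); the linear ramp
# down to 1 Å for the lowest rank is an assumption of this sketch.

def cs_sphere(probe_coords, rank, n_displayed):
    """Return (center, radius) for one consensus site.

    probe_coords: list of (x, y, z) probe-atom coordinates of the CS
    rank:         CS rank, 1 = highest-ranking
    n_displayed:  number of CS spheres displayed for this system
    """
    xs, ys, zs = zip(*probe_coords)
    n = len(probe_coords)
    center = (sum(xs) / n, sum(ys) / n, sum(zs) / n)  # geometric midpoint
    radius = n_displayed - rank + 1                    # rank 1 -> n_displayed Å
    return center, radius

# Toy probe cloud, highest-ranking CS out of five displayed:
center, radius = cs_sphere([(0, 0, 0), (2, 0, 0), (1, 3, 0)], rank=1, n_displayed=5)
print(center, radius)  # (1.0, 1.0, 0.0) 5
```

With five spheres displayed, rank 1 gets a 5 Å sphere and rank 5 a 1 Å sphere, matching the "largest sphere corresponding to highest ranking CS" convention.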
| biomacromolecule-ligand interactions, medicine, medicinal chemistry, molecular dynamics, drugs and devices, enzymes, chemical biology, molecular mechanics, histone modification, protein structure, epigenetics, biophysics simulations, computer graphics, biochemistry simulations, proteins, chemistry, computing methods, biology, physical organic chemistry, biophysics, drug discovery, biochemistry, enzyme structure, protein chemistry, biochemical simulations, computational chemistry, computer science, computer modeling, organic chemistry, drug research and development, biophysical simulations, genetics, computational biology | null |
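The Select-residues procedure described for DSV reduces, per MD centroid, to a distance filter: keep every residue with at least one atom within 3 Å of an FTMap CS or SiteMap site, remove duplicates across the ensemble, and intersect the two selections for the third representation. A minimal Python sketch of that filter (DSV is actually a VMD/Tcl plugin; the residue names and coordinates below are invented purely for illustration):

```python
from math import dist  # Euclidean distance, Python 3.8+

CUTOFF = 3.0  # Å, the proximity cutoff used by DSV Select-residues

def residues_near_sites(residue_atoms, site_points, cutoff=CUTOFF):
    """Return the set of residue IDs with any atom within `cutoff` of any site point.

    residue_atoms: dict mapping residue ID -> list of (x, y, z) atom coordinates
    site_points:   list of (x, y, z) probe-site points (FTMap CS or SiteMap site)
    Collecting these sets over all MD centroids with a union removes the
    duplicate occurrences mentioned in the text.
    """
    selected = set()
    for res_id, atoms in residue_atoms.items():
        if any(dist(a, p) <= cutoff for a in atoms for p in site_points):
            selected.add(res_id)
    return selected

# Hypothetical toy data: two residues and one probe-site point.
residues = {
    "ALA318": [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0)],
    "PHE320": [(10.0, 0.0, 0.0)],
}
site = [(2.5, 0.0, 0.0)]

near_ftmap = residues_near_sites(residues, site)    # pass over FTMap CSs
near_sitemap = residues_near_sites(residues, site)  # same filter for SiteMap sites
both = near_ftmap & near_sitemap                    # residues flagged by both methods
print(sorted(both))  # ['ALA318']
```

Only ALA318 survives the 3 Å cutoff here; PHE320 sits 7.5 Å from the probe point and is excluded.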
1,862 | journal.pntd.0002899 | 2,014 | Spatio-Temporal Factors Associated with Meningococcal Meningitis Annual Incidence at the Health Centre Level in Niger, 2004–2010 | Meningococcal meningitis ( MM ) is caused by Neisseria meningitidis ( Nm ) , a commensal bacterium of the human nasopharynx transmitted by direct contact with respiratory droplets from carriers and causing meningitis after crossing the nasopharyngeal mucosa ., Epidemics of meningococcal meningitis recurrently strike countries of the African Meningitis Belt 1 ., In this sub-Saharan area , MM dynamics is characterized by seasonality and spatio-temporal heterogeneity: the disease is endemic all year round but every dry season , a hyper-endemic or epidemic increase in incidence is observed , the magnitude of which varies between years and regions 2 , 3 ., Within a country , localized outbreaks are reported at sub-district scales 4–6 ., Most epidemics have been caused by meningococci of serogroup A but C , W or X outbreaks have also been reported 7–9 ., Niger , a landlocked country of the Belt , reported between 1000 and 13500 suspected meningitis cases annually during 2004–2010 , with case-fatality rates of 4–12% 10 ., Over the study period ( 2004–2010 ) , the surveillance-based control strategy applied in Niger was to launch reactive vaccination campaigns with A/C or A/C/W polysaccharide vaccines once an outbreak has exceeded a threshold defined at the district level by the World Health Organization ( WHO ) 11 ., More than 100 years after the first major epidemic reported in the Belt , the reasons for the peculiar epidemiology of MM in Africa are still poorly understood 12 ., A combination of concomitant factors is probably necessary to trigger an epidemic in a particular place at a particular time , involving the organism ( e . g . strain virulence and transmissibility 13 ) , the host ( e . g . immune status and susceptibility 14 , 15 ) and the environment ( e . g . 
dry climate and dusty winds 16 ) ., Previous statistical ecologic studies aiming at explaining the spatio-temporal dynamics of MM epidemics in the Belt were mainly focused on climatic risk factors ., These studies sought for drivers of either the seasonality of epidemics ( i . e . when the onset/peak/end of the meningitis season occur ) or their intensity ( i . e . magnitude of incidence over a chosen time period ) at different spatial scales ., According to most authors , the temporal dynamics of sub-Saharan climate is the major driver of MM seasonality in the Belt 2 , 3 , 17–19 ., The suspected contribution of climatic factors to the intensity of epidemics is still under debate ., At the country level , Yaka et al partly related annual incidence in Niger to the northern component of wind during November to December 20 ., At the district level , annual incidence in four African countries was correlated to rainfall amount and dust load in the pre- , post- and epidemic season 21 and monthly incidence in one district of Ghana was modelled by a combination of various climatic and non-climatic variables 22 ., However , to our best knowledge , none of the published statistical models tried to explain intensity of meningitis outbreaks at a finer spatial scale than the district , whereas recent studies in Niger and Burkina Faso demonstrated that outbreaks occur at sub-district scales in highly localized clusters 4–6 ., Besides , whereas two neighbouring areas ( sharing similar climatic conditions ) can have different epidemic behaviours 4 , 6 , few models combined climatic factors with other types of risk factors suspected to interact , such as previous epidemics , vaccination campaigns , population density or proximity to infected regions ., The objective of our paper was therefore to study the influence of climatic and non-climatic factors on the spatio-temporal variations of annual incidence of MM serogroup A , the main etiologic agent over the study period , at the 
health centre catchment area ( HCCA ) scale in Niger , using a database of laboratory-confirmed cases and developing an explanatory Bayesian hierarchical model from 2004 to 2010 at the HCCA-year level ., This study was approved by the Clinical Research Committee of Institut Pasteur and authorized by the National Consultative Ethics Committee of Niger and the two French data protection competent authorities: CCTIRS ( Comité Consultatif sur le Traitement de lInformation en matière de Recherche dans le domaine de la Santé ) and CNIL ( Commission Nationale de lInformatique et des Libertés ) ., The data collected involving patients were anonymized ., Spatial analyses were based on the National Health Map of Niger , created by the Centre de Recherche Médicale et Sanitaire ( CERMES ) in 2008 , at the level of the HCCAs , areas which include all villages served by the same integrated health centre ., We used the 2010 updated version of this National Health Map of 732 HCCAs , in the WGS84 geodetic system with UTM 32N projection ., On average , a HCCA covers a 40×40 km2 area ., The study region comprised the 669 HCCAs of the southern most populated part of Niger , located roughly to the south of the 16th parallel ( Figure 1 ) ., It represents 96% of the national population of 17 138 707 inhabitants ( 2012 national census ) ., The semi-arid tropical climate of this Sahelian region is characterized by a long dry season from October to May and a rainy season from June to September ., In the North lies the Sahara desert , with less than one inhabitant per km2 ., The CERMES is the national reference laboratory in charge of the microbiological surveillance of bacterial meningitis in Niger ., Basically , cerebrospinal fluid ( CSF ) samples taken from suspected cases of meningitis by health care workers are routinely collected throughout Niger and the etiological diagnosis is carried out by Polymerase Chain Reaction ( PCR ) for all CSF ., This enhanced surveillance system is active 
in the whole country since 2002 , and has been described in detail elsewhere 4 ., We used the CERMES database for a retrospective study on confirmed MM A cases between 1 July 2003 and 30 June 2010 ., We aggregated MM A cases by HCCA and epidemiological year , defined as running from 1 July of the year n–1 to 30 June of the year n , in order to cover an entire meningitis season ., Health districts were contacted to obtain data on polysaccharide vaccines ( number of delivered vaccine doses and/or vaccine coverage ) at the HCCA level over the study period and the previous two years ., Full vaccination records could be collected only for Tahoua region over 2002–2010 ., Missing data in records from other regions did not enable us to use them in our analyses ., We thus studied the effect of previous vaccination campaigns conducted in Tahoua region during the years n-1 and n-2 on MM incidence of year n ., We considered different forms for the vaccination covariate: either the coverage rate ( as a continuous or categorized variable ) , the vaccination status ( vaccinated: yes/no ) , or the exceedance of several coverage thresholds ( above threshold: yes/no ) ., The cumulative effect of successive vaccination campaigns could not be studied as only one HCCA was vaccinated two years in a row ., The Institut National de la Statistique ( INS ) provided the number of inhabitants per village according to the 2001 national census ., We aggregated the villages populations at the HCCA level and applied a mean annual population growth rate of 3 . 3% ( provided by the INS ) ., We computed the population density covariate as the number of inhabitants per HCCA divided by the HCCA surface area calculated in ArcGIS software ( version 10 . 0 , ESRI Inc . 
Redlands , CA ) ., We retrieved a shapefile of primary roads from the HealthMapper application of the WHO ., This shapefile was superimposed onto the National Health Map in ArcGIS ., For each HCCA , we computed its minimum distance to the closest primary road and expressed it as a binary covariate ( road versus no road ) or a categorical covariate ( classes of distance ) ., The landcover classification for Niger was obtained at a 1 km2 resolution , from the Land Cover Map of Africa from the Global Land Cover 2000 Project 23 ., The main vegetation types represented in our geographical subset were different classes of shrublands , grasslands and croplands ., Gridded climate data from 2003 to 2010 were extracted from ERA-Interim reanalysis , produced by the European Centre for Medium-Range Weather Forecasts ( ECMWF ) 24 ., We retrieved relative humidity , temperature , total precipitation , U ( west-east ) and V ( south-north ) wind components at a 0 . 75° spatial resolution at a daily time-step ., To characterize the wind-blown mineral dust emission from the Sahara , we used the Absorbing Aerosol Index ( AAI ) , a dimensionless quantity which indicates the presence of ultraviolet-absorbing aerosols in the Earth's atmosphere 25 ., The AAI used in this study is derived from the reflectances measured by the SCIAMACHY ( Scanning Imaging Absorption Spectrometer for Atmospheric Chartography ) satellite instrument in the ultraviolet wavelength range 26 ., We retrieved monthly gridded data ( 1 . 00°×1 . 25° latitude-longitude grid ) from 2003 to 2010 ( www.temis.nl/airpollution/absaai/ ) ., As we were interested in how the climate of a given year or season can influence the annual epidemic magnitude , we calculated multi-monthly means of each climatic variable , averaged over periods relevant to the meningitis season or the seasonal cycles of each climatic variable , both for each HCCA and for the whole study region ( see Figure S1 and Text S1 for further details ) ., Shuttle Radar Topography Mission ( SRTM ) elevation data were obtained from the processed CGIAR-CSI ( Consortium for Spatial Information ) SRTM 90 m Digital Elevation Dataset version 4 . 1 27 , available as 5°×5° tiles at a 3 arc second resolution ( approximately 90 m ) ., Six tiles were downloaded and combined in ArcGIS to cover the whole study region ., Finally , we collated these multi-source and multi-format spatio-temporal datasets and reconciled data at the HCCA level ( i . e . cartographic , epidemiological , vaccination and demographic data ) and gridded data ( i . e . landcover , climate , AAI and altitude data ) by averaging the gridded values over each HCCA using the statistical computing software R ( version 2 . 15 .
3 , R Core Team , R Foundation for Statistical Computing , Vienna , Austria ) ., Then , in addition to the covariates described above , we created supplementary variables to include in the statistical analyses ., To take into account potential interactions with bordering countries , we calculated in ArcGIS the minimum distance of each HCCA to the closest border and expressed it as a binary variable ( border versus no border ) or a categorical variable ( classes of distance and classes of bordering countries ) ., The five bordering countries of our geographical subset are shown in Figure 1 ., To account for potential geographic disparities in accessibility to health centres , we computed for each HCCA the mean distance ( weighted by the villages population ) from villages to their health centre ., To represent the tendency of meningitis to occur in spatio-temporal clusters of neighbouring infected HCCAs , we computed “neighbourhood” variables , using various definitions for this spatio-temporal interaction ( presence/total number of MM A cases in neighbours , mean/maximum incidence and number/percentage of neighbours with MM A cases , over an epidemiological year ) ., Neighbours were defined as adjacent HCCAs ( first order neighbours ) , since a previous analysis showed that the median size of spatial clusters was of two neighbouring HCCAs 4 ., We also computed «historical» variables describing what happened the previous year in terms of presence/number of MM A cases and incidence , in each HCCA , in its neighbours and in its district ( upper administrative level ) as potential proxies for natural immunity ., We computed similar variables for other Nm serogroups at the HCCA level in order to explore potential interactions between serogroups ., Finally , we included in the analyses the presence of early cases in each HCCA , defined as cases occurring before 31 December following 20 , as an early start of the hyper-endemic increase could indicate a higher epidemic 
risk ., First , for descriptive purposes , we explored whether the annual epidemic magnitude in the study region could be related to the annual and early geographical distribution of MM A cases and localized outbreaks , using Pearson correlation coefficient ., We defined localized outbreaks as HCCAs exceeding an annual incidence threshold of 20/100000 , corresponding to the 95th percentile of incidence , following the primary reference used in 6 ., Then , to investigate the spatio-temporal association of MM A annual incidence at the HCCA level with climatic and non-climatic factors , we developed a retrospective hierarchical model in Niger for 2004–2010 , over two geographical subsets:, ( i ) over the whole study region of 669 HCCAs and, ( ii ) over a subset of 95 HCCAs ( located in Tahoua region ) for which vaccination data were fully available ., The modelling approach we adopted was a Bayesian negative binomial generalized linear mixed model ( GLMM ) ., We assumed that the number of observed MM A cases in each HCCA i and year t followed a negative binomial distribution with an unknown scale parameter κ and mean μit ., We modelled log ( μit ) as a function of covariates as described above and appropriate random effects ., Basically , we included spatial random effects at the HCCA level , separated into a spatially unstructured component to capture the influence of unknown factors that are independent across areas and a spatially structured component to capture the influence of spatially correlated effects ., The temporal structure was modelled by yearly random intercepts ., We included the expected number of cases in each HCCA i and year t as an offset in the model to estimate the incidence rate ratios ( IRRs ) associated with a unit increase in exposure , by exponentiating the regression coefficients ., A preliminary forward stepwise covariate selection was performed in R software , estimating parameters by maximum likelihood ., The Bayesian multivariate model 
was subsequently developed in WinBUGS 28 , using Markov chain Monte Carlo ( MCMC ) simulation methods ., Further details on the modelling approach are given in Text S2 ., In Niger , from 1 July 2003 to 30 June 2010 , 5512 cases of Nm were biologically confirmed ., Other aetiologies included Streptococcus pneumoniae ( N\u200a=\u200a850 ) and Haemophilus influenzae ( N\u200a=\u200a277 ) ., Serogroup A accounted for 72 . 4% ( N\u200a=\u200a3988 ) of Nm cases and was largely predominant every year , except during 2006 and 2010 when serogroups X and W represented 48 . 9% and 71 . 6% of Nm cases , respectively ., The median age of Nm A cases was 8 . 3 years ( interquartile range ( IQR ) 5–13 ) ., Among all Nm A cases , 97 . 0% originated from our study region and 28 . 0% from our Tahoua subset ( Figure 1 ) ., Nm A cases essentially occurred over a six-month period: 98 . 1% of them were observed between December and May , with a peak during February–April ( 80 . 4% ) ., MM A temporal evolution during July 2003–June 2010 ( Figure 2 ) was characterized by considerable between-year variations ( 17-fold increase between the lowest annual incidence of 0 . 7 per 100000 in 2005 and the highest annual incidence of 11 . 3 per 100000 in 2009 ) ., Among the seven years of the study period , the annual MM A incidence across the whole study region was correlated to the number of HCCAs having at least one MM A case ( r\u200a=\u200a0 . 95 , p<0 . 01 ) , to the number of localized outbreaks ( r\u200a=\u200a0 . 99 , p<0 . 01 ) , to the maximum annual incidence of the localized outbreaks ( r\u200a=\u200a0 . 80 , p\u200a=\u200a0 . 03 ) , to the number of HCCAs with at least one early case ( r\u200a=\u200a0 . 96 , p<0 . 01 ) and to the early incidence across the study region ( r\u200a=\u200a0 . 93 , p<0 . 
01 ) ., The corresponding graphs are displayed in Figure S2 ., The median duration of the localized outbreaks ( time between first and last cases ) was 45 days ( IQR 24–75 ) ., In the Bayesian multivariate model over the whole study region , the overdispersion parameter of the negative binomial ( κ−1 ) had a posterior mean value of 2 . 586 ( 95% CI = 2 . 223–2 . 998 ) ( Table 1 ) ., This was significantly different from zero , which confirmed that the negative binomial formulation was necessary to account for extra-Poisson variation in the dataset ., Regarding fixed effects , five covariates were significantly associated with MM A incidence ( the 95% CI of their associated IRR did not contain 1 ) ., A reduced risk was associated with higher average relative humidity during the meningitis season ( November–June ) over the study region ( posterior mean IRR = 0 . 656 , 95% CI 0 . 405–0 . 949 ) ., Early rains in March in an HCCA represented a protective spatio-temporal factor ( IRR = 0 . 353 , 95% CI 0 . 239–0 . 502 ) ., The analyses identified three non-climatic factors; a positive association was found between disease incidence and percentage of neighbouring HCCAs having at least one MM A case during the same epidemiological year ( IRR = 2 . 365 , 95% CI 2 . 078–2 . 695 ) , as well as presence of a road crossing the HCCA ( IRR = 1 . 743 , 95% CI 1 . 173–2 . 474 ) and occurrence of early cases before 31 December in a HCCA ( IRR = 6 . 801 , 95% CI 4 . 004–10 . 910 ) ., The variances of the spatially structured and unstructured random effects were respectively 0 . 174 ( 95% CI 0 . 010–0 . 488 ) and 2 . 579 ( 95% CI 1 . 974–3 . 294 ) ( Table 1 ) ., The posterior mean estimate of the spatial fraction was 0 . 062 ( 95% CI 0 . 004–0 .
166 ) , meaning that most of the residual area-specific variability was spatially unstructured ., Spatial correlation was almost totally captured by the multivariate model ., The year-specific random effects also significantly contributed to the model ( Table 1 ) and the inclusion of covariates helped to decrease the temporal random effects variance compared to the null model ., A scatter plot of the 4683 fitted posterior mean MM A cases versus the observed MM A cases shows the overall goodness of fit of the model ( Figure 3 . A ) ., The inter-annual variations of incidence at the study region level were correctly captured by the model ( Figure 3 . B ) ., In the Tahoua subset during 2002–2010 , mass campaigns of A/C or A/C/W polysaccharide vaccination have been conducted in 53 HCCAs-years out of 665; the median reported vaccination coverage was 80 . 0% ( IQR 53 . 5–89 . 2% ) ., The final multivariate model over the Tahoua subset yielded similar results to the model over the whole study region ( see Table S1 ) ., The same covariates were independently associated with disease incidence , except that early rains were no longer significant over this smaller geographical subset ., No vaccination covariates were significant ., To our knowledge , this study is the first spatio-temporal statistical model in the African Meningitis Belt developed at a spatial scale as fine as the health centre catchment areas and using laboratory confirmed cases of meningococcal meningitis ., Relying on advanced statistical methods , we demonstrated that both climatic and non-climatic factors ( occurrence of early rains , mean relative humidity , occurrence of early cases , presence of roads and spatial neighbourhood interactions ) are important for explaining spatio-temporal variations in MM A annual incidence at the HCCA level ., Appropriate statistical methods are necessary to investigate the underlying drivers of observed patterns of count data in small areas with spatio-temporal correlations .,
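To make the likelihood concrete, the following Python sketch simulates negative binomial case counts for a single hypothetical HCCA-year via the standard gamma-Poisson mixture. The offset and covariate values are illustrative assumptions; the IRRs and the overdispersion parameter are the posterior means reported above. The fitted model additionally contains spatial and yearly random effects, omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values only: two covariates with log-IRRs taken from the
# posterior means reported in the text (humidity 0.656, road 1.743).
log_irr = np.array([np.log(0.656), np.log(1.743)])
x = np.array([0.0, 1.0])          # hypothetical: centred humidity = 0, road present = 1
expected_cases = 3.0              # hypothetical offset E_it for this HCCA-year
mu = expected_cases * np.exp(x @ log_irr)   # negative binomial mean mu_it

# Negative binomial counts via the gamma-Poisson mixture, with the
# reported posterior mean overdispersion kappa^-1 = 2.586, so that
# var = mu + mu^2 * kappa_inv.
kappa_inv = 2.586
shape = 1.0 / kappa_inv
lam = rng.gamma(shape=shape, scale=mu / shape, size=100_000)
counts = rng.poisson(lam)

print(counts.mean())  # sample mean close to mu
print(counts.var())   # far exceeds the mean: strong overdispersion
```

Because the mixing step inflates the variance to mu + mu^2 * kappa_inv, the simulated counts are markedly overdispersed relative to a Poisson model with the same mean, which is why the negative binomial formulation was required.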
Hierarchical regression models of the Bayesian family have proven useful to analyse the spatio-temporal dynamics of infectious diseases in different settings , such as dengue in Brazil 29 , soil-transmitted helminth infections in Kenya 30 or schistosomiasis in China 31 ., The Bayesian formulation makes it possible to acknowledge the uncertainty associated with all model ( hyper ) parameters ( fixed and random ) and to include a spatial correlation structure within a prior distribution 32 , and leads to more robust estimates in particular when the geographical level is small and the disease rare 33 ., Such models are still rarely applied to MM in Africa ., The modelling approach we adopted was a negative binomial GLMM using Bayesian estimation , to control for unobserved confounding factors and take into account the dependencies over space and time encountered in our dataset , incorporating year-specific and area-specific random effects 32 , 34 ., Ignoring these multiple correlations could lead to overestimating the significance of the covariates 31 ., This model also accounted for extra-Poisson variation ( overdispersion ) in the count data via the negative binomial formulation , allowing the variance to be larger than the mean ., Another noteworthy feature of our analysis lies in its spatio-temporal resolution , uncommon for a country of the Belt ., Since outbreaks have been shown to occur in spatially localized clusters at a sub-district level 4–6 , we considered it essential to analyse MM A dynamics at a finer spatial scale than the more usual country or district levels 17–21 , 35 , 36 ., From a public health perspective , the health centre catchment area used in this study is also a judicious choice ., Indeed , the Nigerien health care system is based on these integrated health centres , which constitute the lowest health level ( sub-district ) and whose locations are chosen according to accessibility for populations ., Regarding the time scale , to comply with our objective of
explaining the overall annual burden of MM A in an area during each meningitis season , we chose to conduct analyses at the year level ., We did not seek in this paper to model the seasonality of meningitis , which would have implied working at least at the monthly ( like in 22 ) or weekly ( like in 17 , 18 ) level ., Our approach did not allow us to explain intra-seasonal temporal dynamics and diffusion patterns – which were partly described in a previous paper 4 ., This would constitute a distinct research question from the one tackled in this paper and should be further investigated ., The results of this study bring new insights into the epidemiology of MM in the Belt and the risk factors that play a role in the spatio-temporal variations of incidence ., First , we observed that , at the study region level , higher annual incidence correlated with larger number of HCCAs having at least one MM A case , with larger number of localized outbreaks and , to a lesser extent , with higher intensity of these localized outbreaks ., This brings support to Mueller and Gessner hypothesis that the magnitude of incidence during meningitis seasons in a region or country can increase if the geographical expansion and/or the intensity of localized epidemics increase 3 ., The epidemiology thus changes from a regular year with a small number of localized epidemics in the region to an epidemic wave with many localized epidemics ., We then sought to evaluate factors that could be associated with the occurrence of these localized incidence increases in a particular area during a particular year ., Based on the factors that emerged from our model and that we discuss below , we hypothesize that spatio-temporal variations in MM A incidence between years and HCCAs result from variations in the intensity or duration of the dry season climatic effects on disease risk , and are further impacted by factors of spatial contacts , representing facilitated pathogen transmission ., First , the presence of
primary roads and neighbourhood effects in the multivariate model indicates that human contacts and movements are important contributing factors that we assume are likely to play a role in the transmission of the meningococcus and/or of an epidemic co-factor ( e . g . respiratory virus 3 ) ., HCCAs crossed by a road would be statistically more prone to re-infections from distant areas than isolated HCCAs outside the primary road network , and would experience higher transmission levels due to higher intensity of human movements and contacts ., Yet , we cannot exclude that differences in accessibility to health centres contributed to bringing out primary roads as a risk factor ., One could also argue that health centres served by a road sent more CSF samples due to easier logistics ., However , another study conducted in Niger and based on reported suspected cases ( not affected by a potential logistic bias ) also showed fewer absences and higher reappearance rates of meningitis cases in districts along primary roads 36 ., The percentage of neighbours with cases , representing local spatio-temporal interactions , is not a surprising risk factor ., Indeed , a previous study 4 showed that cases usually tended to be clustered in space and that these clusters most often encompassed a small number of HCCAs ., Areas with more infected neighbours would be more likely to be infected by local spatial transmission ., Then , the presence of climatic parameters in the multivariate model indicates that , beyond an influence on MM seasonality agreed upon by several authors 2 , 3 , 17–19 , climate can have a quantitative impact on inter-annual variations of incidence ., The main physiopathological hypothesis for the role of climate is that dryness and dusty winds would damage the nasopharyngeal mucosa and increase the risk of bloodstream invasion by a colonizing meningococcus , and thus the case-to-carrier ratio 2 ., Here , we found that annual incidence was negatively correlated to mean
seasonal humidity over the study region ., This factor was purely temporal ( equal IRR for all spatial units within the same year ) , suggesting that humidity had only a temporal , not a spatial , effect ., At the study region level , the seasons of highest MM A incidence were also the seasons of lowest mean humidity ., The between-year variations in humidity were not large but the results suggest that even a small decrease in humidity , resulting in a small increase in the case-to-carrier ratio according to the physiopathological hypothesis , can have a significant impact on the global MM A risk in all HCCAs , as these drier conditions start in October and persist over several months ( cumulative effect ) and over a large geographical region ., Similarly , Yaka et al . detected a quantitative effect of climate on inter-annual variations of meningitis at the country level but November and December northerly winds were their best predictors 20 ., This difference might be explained by the fact that they only considered the climatic conditions over the early dry season and not over the whole meningitis season ., Interestingly , a second climatic factor , the occurrence of early rains in March , has a significant effect at the HCCA level ., It has been noticed that the meningitis season seemed to stop at the onset of the rainy season , again explained by a decrease in invasiveness possibly due to less irritating conditions for the pharyngeal mucosa 2 ., Our results are in agreement with this observation and , more precisely , show that the local occurrence of first rains in March , i . e .
before the real beginning of the rainy season in the country , is a protective factor ., The rains would thus stop the harmful effect of dryness and prevent local outbreaks from developing further ., The last and particularly strong factor that emerged from our model is the presence of early cases in a HCCA ( before 31 December ) ., It can be interpreted as a risk factor in itself ( an outbreak would have more time to develop if it starts earlier ) , as an indicator of longer exposure to irritating climatic conditions of the dry season , or as a proxy of other factors responsible for the presence of the bacteria and higher levels of carriage and/or invasion ., In any case , this parameter remains a strong determinant of high incidence in a HCCA ., At the study region level , we also showed that the annual MM A incidence was correlated to the number of HCCAs with at least one early case and to the overall early incidence ., Two other studies stressed the importance of early cases in the final size of the epidemic: an early onset was a good predictor of an epidemic at the district level in 37 and the number of cases during the peak months increased with the number of early cases occurring between October and December at the country level ( Niger ) in 20 ., WHO also considers early cases in the season as a warning sign of a large epidemic 11 ., Surprisingly , vaccination in the previous year or the two previous years was not found to be a protective factor in the Tahoua subset ., However , we cannot rule out the possibility that the low number of vaccinated HCCAs-years in our subset ( 8% ) may have induced a lack of power to show a true protective effect of vaccination ., This result could also be partially due to the decline of polysaccharide vaccine efficacy to 87% and 70% at one and two years after vaccination , respectively 38 ., It is also possible that the provided data lack representativeness and over-estimate the real coverage ., Of note , we decided not to study the impact
of year n vaccination on year n incidence within this model formulation , as reactive vaccination would be associated with larger outbreaks ( those which required vaccination ) and , considering delays in implementing vaccination campaigns , would artificially appear as a risk factor in the model 39 , 40 ., Residual spatio-temporal variations that remained unexplained by the covariates included in our model suggest that other unknown or unmeasured factors contributed to the observed incidence ., First , because our study was an ecological investigation , suspected factors at the individual level ( e . g . age , immuno-depression , smoking… ) could not be accounted for ., Then , the temporal variations at the country level could be suspected to be influenced by higher susceptibility due to waning pre-existing immunity 15 or emergence of a new variant that can escape herd immunity 13 , 41 ., However , the length of the study period did not enable us to study these effects: molecular characterization of Nm A isolates showed that the same sequence type ( ST-7 ) was predominantly circulating in Niger during 2004–2010 42 , 43 ., At the spatial level , the residual purely spatial variation observed in our model was mainly unstructured ., The covariates better explained the spatial correlation , which reflects both shared environmental conditions and true epidemic diffusion , than the unstructured spatial variations ., This suggests that other factors specific to each HCCA , such as quality of local health services or local behavioural practices , could help explain the between-area heterogeneity in MM A incidence ., The difficulty of measuring such factors made the inclusion of area-level random effects necessary ., Finally , other unexplained factors , such as respiratory viral co-infections , might contribute to the residual spatio-temporal heterogeneity , via an effect on transmission , colonization and/or invasion 3 ., Although difficult to collect
retrospectively , these factors should be further investigated at the health centre level and at least properly accounted for in any modelling attempt ., Mathematical models , still little developed on this topic 44 , could also help us to better understand the role of carriage and immunity in the epidemic dynamics ., This study relied on a unique dataset which provided a very precise picture of MM A spatio-temporal dynamics in Niger over seven years , and has already been used in published spatio-temporal analyses 4 , 5 ., The cases were all biologically confirmed by CERMES laboratory , which allowed us to exclude misclassified infectious agents that give similar clinical signs of meningitis ., Databases commonly used by most statistical studies on M | Introduction, Methods, Results, Discussion | Epidemics of meningococcal meningitis ( MM ) recurrently strike the African Meningitis Belt ., This study aimed at investigating factors , still poorly understood , that influence annual incidence of MM serogroup A , the main etiologic agent over 2004–2010 , at a fine spatial scale in Niger ., To take into account data dependencies over space and time and control for unobserved confounding factors , we developed an explanatory Bayesian hierarchical model over 2004–2010 at the health centre catchment area ( HCCA ) level ., The multivariate model revealed that both climatic and non-climatic factors were important for explaining spatio-temporal variations in incidence: mean relative humidity during November–June over the study region ( posterior mean Incidence Rate Ratio ( IRR ) = 0 . 656 , 95% Credible Interval ( CI ) 0 . 405–0 . 949 ) and occurrence of early rains in March in a HCCA ( IRR = 0 . 353 , 95% CI 0 . 239–0 . 502 ) were protective factors; a higher risk was associated with the percentage of neighbouring HCCAs having at least one MM A case during the same year ( IRR = 2 . 365 , 95% CI 2 . 078–2 .
695 ) , the presence of a road crossing the HCCA ( IRR = 1 . 743 , 95% CI 1 . 173–2 . 474 ) and the occurrence of cases before 31 December in a HCCA ( IRR = 6 . 801 , 95% CI 4 . 004–10 . 910 ) ., At the study region level , higher annual incidence correlated with greater geographic spread and , to a lesser extent , with higher intensity of localized outbreaks ., Based on these findings , we hypothesize that spatio-temporal variability of MM A incidence between years and HCCAs results from variations in the intensity or duration of the dry season climatic effects on disease risk , and is further impacted by factors of spatial contacts , representing facilitated pathogen transmission ., Additional unexplained factors may contribute to the observed incidence patterns and should be further investigated . | Meningococcal meningitis ( MM ) is a severe infection of the meninges caused by a bacterium transmitted through respiratory droplets ., During January–May , epidemics of MM recurrently strike sub-Saharan countries , including Niger ., Understanding why epidemics occur in a particular place at a particular time would help public health authorities to develop more efficient prevention strategies ., To date , factors that govern the occurrence of localized outbreaks are still poorly understood and epidemics remain unpredictable ., In this retrospective study ( 2004–2010 ) , we developed a statistical model in order to investigate the influence of various factors ( climatic , demographic , epidemiologic , etc .
) on the annual incidence of MM serogroup A at a fine spatial scale ( the health centre catchment area ) in Niger ., We found that mean relative humidity and occurrence of early rains were protective climatic factors and that a higher risk was associated with the presence of a road , the percentage of neighbouring areas having cases and the occurrence of early cases before January ., These findings contribute to improve our understanding of MM epidemics in Africa and the associated factors , and might be used in the future for the subsequent development of an early warning system . | medicine and health sciences, atmospheric science, spatial epidemiology, geoinformatics, bacterial diseases, mathematics, statistics (mathematics), population modeling, spatial analysis, human geography, public and occupational health, infectious diseases, computer and information sciences, geography, infectious diseases of the nervous system, epidemiology, spatial autocorrelation, cartography, infectious disease modeling, meningococcal disease, climatology, meningitis, earth sciences, biology and life sciences, physical sciences, computational biology, geographic information systems | null |
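As a quick arithmetic check on the variance decomposition reported in this study, the spatial fraction can be recomputed from the two posterior mean variance components. Note that the ratio of posterior means need not equal the reported posterior mean of the ratio (0.062), but it should be close.

```python
# Posterior mean variances of the random effects reported in the text
v_structured = 0.174    # spatially structured component
v_unstructured = 2.579  # spatially unstructured component

# Spatial fraction: share of residual area-level variability that is
# spatially structured
spatial_fraction = v_structured / (v_structured + v_unstructured)
print(round(spatial_fraction, 3))  # 0.063, consistent with the reported 0.062
```

The small value confirms the text's statement that most of the residual area-specific variability was spatially unstructured.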
1,994 | journal.pcbi.1006204 | 2,018 | Anticipating epidemic transitions with imperfect data | There are numerous causative factors linked with disease emergence , including pathogen evolution , ecological change and variation in host demography and behavior 1–5 ., Combined , they can make each pathogen’s emergence seem idiosyncratic ., In spite of this apparent particularity , there is a recent literature on the possibility of anticipating epidemic transitions using model-independent metrics 6–14 ., Referred to as early-warning signals ( EWS ) , these metrics are summary statistics ( e . g . the variance and autocorrelation ) which undergo characteristic changes as the transition is approached ., In addition to infectious disease transmission , EWS have been investigated for transitions in a broad range of dynamical systems , including ecosystem collapse and climate change 15–21 ., The motivation for EWS comes from the theories of dynamical systems and stochastic processes , in particular the slowing down that universally occurs in the vicinity of dynamical critical points 22–24 ., Theoretical results for disease emergence are promising , and suggest that the transition from limited stuttering chains of transmission ( R0 < 1 ) to sustained transmission and outbreaks ( R0 > 1 ) is preceded by detectable EWS 8 , 13 , 14 ., A major obstacle to deploying early-warning systems is the type of data available to calculate the EWS ., Theoretical predictions assume the data will be sequential recordings ( or “snapshots” ) of the true number of infected in the population through time 8–13 ., In this paper we refer to this as snapshots data ., However , epidemiological data originate instead from notifications by public health practitioners whenever a new case is identified ., Public health bodies aggregate individual cases into regular case reports ( e . g . 
the US Centers for Disease Control and Prevention's Morbidity and Mortality Weekly Report ) , as shown in Fig 1 ., Different combinations of serial interval ( difference in time of symptom onset between primary and secondary cases ) and aggregation period lead to time series which have very different appearances ., Even assuming perfect reporting , variability in both the incubation period and onset of clinical symptoms means that snapshots data cannot be reconstructed from case report data ., In addition to aggregation , case reports are subject to reporting error ( see Fig 2 ) ., Underreporting may occur due to asymptomatic infection , poorly implemented notification protocols , or socio-political factors 25–29 ., Misdiagnoses and clerical errors in the compilation of reports can result in both under- and over-reporting 30–32 ., Due to self-reporting and contact tracing , once an index case has been positively identified , secondary cases are more likely to be diagnosed , which may lead to clustering in case reports ., The combination of case aggregation and reporting error results in a mismatch between snapshots and imperfect epidemiological data ., EWS , such as the variance ( Fig 3 , top panel ) , are affected by imperfect data ( Fig 3 , bottom panel ) and may not display the characteristic trends that form the basis for detecting disease emergence ., This provides reason to question the direct application of EWS to observed data ., In this paper we report on a simulation study aimed at investigating the robustness of a range of EWS to case report data ., We simulated a stochastic SIR model of a pathogen emerging via increasing R0 , and corrupted the simulated case reports by applying a negative binomial reporting error ., The area under the curve ( AUC ) statistic was computed to quantify how well trends in an EWS identify emergence ., We find that performance depends on both the EWS and the reporting model ., Broadly , the mean , variance , index of dispersion
and first differenced variance perform well ., The autocorrelation , autocovariance and decay time perform well unless either ( i ) the data are highly overdispersed or ( ii ) the aggregation period is less than the infectious period ., The coefficient of variation , kurtosis , and skewness have a more subtle dependence on the reporting model , and are not reliable ., We conclude that seven of ten EWS perform well for most realistic reporting scenarios ., The dynamics of disease spread in a host population are modeled as a stochastic process using an SIR model with birth and death 37 ., The model compartments and parameters are listed in Table 1 . Transition rates and effects are listed in Table 2 . The basic reproductive number for the SIR model is R0 ( t ) = β ( t ) / ( γ + α ) , where β ( t ) varies due to nondescript secular trends in the transmissibility ., Simulated data are generated using the Gillespie algorithm 33 , which simulates a sequence of transition events ( infection , recovery , birth and death ) , and returns the number of individuals in each model compartment through time ., The SIR simulations are of a population with average size N0 = 10^6 ., The parameter ζ gives the rate at which new cases arise due to external sources , and is set to ζ = 1 per week ., The death rate , α , is the reciprocal of the life expectancy , set to 70 years ., Case counts , Ct , are given by the number of recovery events ( at rate γIt ) within each aggregation period , and are included in the model as an additional variable ( see Table 1 ) ., Reporting error is applied to the case count at the end of each aggregation period by sampling a negative binomial distribution , P ( Kt = k | Ct ) = Γ ( ϕ + k ) / ( k ! Γ ( ϕ ) ) × ( ρCt / ( ρCt + ϕ ) ) ^k × ( ϕ / ( ρCt + ϕ ) ) ^ϕ , ( 1 ) , with reporting probability ρ and dispersion parameter ϕ 38 ., Given Ct cases , the mean number reported is μt = ρCt ., The variance is specified by the dispersion parameter via the relation σt^2 = μt + μt^2 / ϕ ., Increasing ϕ reduces the overdispersion of the data , so that for large ϕ the distribution of reports is approximately Poisson ., Previous work has proposed a range of different EWS to anticipate dynamical transitions 8 , 12–15 , 17 , 18 ., The ten candidate EWS considered in this paper are listed in Table 3 . We consider additional indicators to those most frequently studied in the EWS literature ( the variance , autocorrelation and coefficient of variation ) ., As R0 approaches 1 , the mean number of cases caused by introductions rises , making it a potential EWS ., The index of dispersion is a similar measure to the coefficient of variation , and is defined as the variance to mean ratio ., The decay time ( or correlation time ) is a log-transform of the autocorrelation , which diverges as R0 approaches 1 ( the definition of critical slowing down ) ., In addition to the autocorrelation , which is normalized by the variance , we consider the unnormalized autocovariance ., As both the autocorrelation and variance increase , the autocovariance may outperform these two measures ., Theoretical results show the increase in variance accelerates as R0 approaches 1 , suggesting the first differenced variance as a complementary EWS ., Additionally we investigate the performance of two higher-order moments , the skewness and kurtosis ., Functional expressions for the dependence of each EWS on R0 can be found using the Birth-Death-Immigration ( BDI ) process , a variation of the SIR model which neglects susceptible depletion ( i . e .
St = N0 ) ., The BDI process is a one-dimensional stochastic process , depending only on the number of infected It , and possesses an exact mathematical solution ( for full details see 13 ) ., This allows expressions for the moments and correlation functions of It to be found ( Table 3 , fourth column ) ., BDI theory predicts that most EWS ( the mean , variance , index of dispersion , autocovariance , decay time and first differenced variance ) are expected to grow hyperbolically as R0 approaches one ., The autocorrelation is expected to grow exponentially , the kurtosis quadratically and the skewness linearly ., The coefficient of variation is the only EWS which does not grow , instead remaining constant ., We propose observing these trends in data as a basis for anticipating disease emergence ., The numerical estimators used in this paper are listed in Table 3 , third column , and discussed in more depth below ., Theoretical predictions from the BDI process are based on It and do not take into account effects of reporting error and aggregation ., The focus of this paper is to examine the robustness of each EWS to reporting process parameters , using simulated case report data , Kt ., BDI theory predicts that 9 out of 10 EWS increase as the transition is approached ., We quantify the association of each EWS with time using Kendall's rank correlation coefficient 19 ., A coefficient close to ( +/− ) 1 implies consistent increases/decreases of the EWS in time ., As the underlying dynamics of the case reports are stochastic , the value of the rank correlation coefficient is itself a random variable ., Multiple simulations of the test ( emerging ) and null ( stationary/not emerging ) scenarios result in two distributions of correlation coefficients for each EWS ., We measure performance using the AUC statistic , defined as the overlap of the two distributions , which may be interpreted as the probability that a randomly chosen test coefficient is higher than a randomly
chosen null coefficient , AUC = P ( τtest > τnull ) 39 , 40 ., The name comes from one method of calculating it , the area under the receiver operating characteristic ( ROC ) curve , a parametric plot of the false positive rate against the true positive rate as the decision threshold is varied 41 ., Instead of explicitly calculating the ROC curve , the AUC can be efficiently calculated after ranking the combined set of test and null correlation coefficients by value 40 , AUC = ( rtest − ntest ( ntest + 1 ) / 2 ) / ( ntest nnull ) , ( 2 ) , where rtest is the sum of the ranks of test coefficients and ntest and nnull are the number of realizations of the test and null models respectively ., In this paper the AUC statistic quantifies how successfully an EWS distinguishes whether or not a disease is approaching an epidemic transition ., An AUC = 0 . 5 implies that an observed rank coefficient value conveys no information about whether or not the disease is emerging , i . e . the EWS is ineffective ., If the AUC < 0 . 5 then a decreasing trend in the EWS indicates emergence , whereas if AUC > 0 . 5 an increasing trend indicates emergence ., A larger |AUC − 0 . 5| implies better performance; if |AUC − 0 . 5| = 0 .
5 the rank coefficient value classifies the two scenarios perfectly ., The mathematical definitions of the EWS depend on expectations of the stochastic process , E [ f ( X ) ] ( Table 3 , second column ) ., To calculate EWS from non-stationary time series data we use centered moving window averages with bandwidth b as estimators for expectation values ., For example , the mean at time t is estimated using μ̂t = ∑_{s = t − ( b − 1 ) δ}^{t + ( b − 1 ) δ} Xs / ( 2b − 1 ) , ( 3 ) , where δ is the size of one time step ., Near the ends of the time series ( t < bδ and t > T − bδ ) , the normalization factor 2b − 1 is reduced to ensure it remains equal to the number of data points within the window ., Applying Eq 3 to the time series for X results in a time series for μ̂ ., Certain EWS depend on others , for example the variance depends on the mean ., EWS are therefore calculated iteratively , for example μ̂ is first calculated using Eq 3 , and then σ̂^2 is found using σ̂t^2 = ∑_{s = t − ( b − 1 ) δ}^{t + ( b − 1 ) δ} ( Xs − μ̂s ) ^2 / ( 2b − 1 ) ., ( 4 ) , Estimators for each EWS are in Table 3 . For snapshots data Xt = It , and for case report data Xt = Kt ., Throughout this paper we use a bandwidth of b = 35 time steps ( weeks or months depending on aggregation period ) ., Results have been found to be similar for a bandwidth of b = 100 time steps ., To quantify the sensitivity of each EWS to the reporting process , we calculate the AUC from simulated data for a range of different model parameter combinations ., The experimental design is fully factorial ( i . e . considers all parameter value combinations ) ., The following four parameters are varied: ( i ) the infectious period , 1/γ , which can be either 7 or 30 days , ( ii ) the reporting probability , ρ = 2^( −8x ) for x in {0 , 0 . 05 , 0 . 1 , … , 1} , ( iii ) the dispersion parameter , ϕ , which is one of {0 .
01 , 1 , 100} ,, ( iv ) the aggregation period , δ , which is either weekly or monthly ., For the test model , the disease emerges over T = 20 years , via an increase R0 ., For the null model , R0 is constant ., One epidemiological interpretation for the test scenario is it models transmission in a population with high vaccine coverage , where gradual pathogen evolution results in increasing evasion of host immunity ., An alternative interpretation is it models zoonotic spillover , where pathogen evolution within an animal reservoir results in gradually increasing human transmissibility 42 ., In both interpretations , the null model assumes no change in host-to-host transmissibility ., The transmission dynamics were simulated using the Gillespie algorithm 33 ., The Gillespie algorithm assumes all model parameters ( including the transmissibility ) are constant ., To simulate disease emergence we modify the Gillespie algorithm , discretely increasing β at the end of each day and after each reaction to ensure an approximately linear increase in R0 over T = 20 years , from R0 ( 0 ) = 0 to R0 ( T ) = 1 . For the null model , transmission is simulated for 20 years at a constant rate , R0 = 0 ., Our choice of null has no secondary transmission , making the classification problem easy under perfect reporting ., This enables clearer identification of responses to reporting process effects as results span the full range of the AUC statistic ., We repeated the experiment with null model R0 = 0 . 
5 , and found no qualitative differences ., For both scenarios transmission is subcritical , with disease presence maintained by reintroduction from an external reservoir ., For each parameter combination 1000 replicates of both scenarios are generated ., We perform these computational experiments in R using the pomp package 43 to simulate the SIR model and the spaero package 44 to calculate the EWS ., Code was written to simulate aggregation and reporting error ., All code to reproduce the results is archived online at doi:10 . 5281/zenodo . 1185284 ., Provided the data are aggregated monthly , with high reporting probability and low overdispersion , the coefficient of variation , skewness and kurtosis have similar AUC values when calculated from snapshots data ( Fig, 4 ) and case report data ( Fig 5 , right column ) ., Unlike the other seven EWS , this it is not the case for weekly data ., If calculated from weekly snapshots data with 1/γ = 1 week , the coefficient of variation has an AUC = 0 . 18 ( Fig 4 , bottom right ) ., With reporting , if ρ = 1 , ϕ = 100 the AUC = 0 . 005 ( Fig 6 , top right ) ., By switching to case report data the performance of the coefficient of variation has improved dramatically ., Similar improvements are seen for the skewness and kurtosis ., In addition , and perhaps counterintuitively , these three EWS’s performances are further enhanced at lower reporting probabilities ( compare the top right and bottom right panels of Fig 6 ) ., At low overdispersion and low reporting probability , the coefficient of variation ( |AUC − 0 . 5| = 0 . 5 ) is joint with the mean and variance as the best performing statistic , closely followed by the skewness ( |AUC − 0 . 5| = 0 . 497 ) and kurtosis ( |AUC − 0 . 5| = 0 . 
491 ) ., The improvement in performance at low reporting probability is acutely sensitive to other model parameters ., Both overdispersion in the reporting ( for example Fig 6 , left column ) and larger aggregation period ( Fig 5 , right column ) severely dampen the sensitivity to ρ ., All three EWS perform poorly if ϕ = 0 . 01 , regardless of the other model parameters ., This group of EWS are all measures of the correlation between neighboring data points ., At high reporting probability ( ρ > 0 . 33 ) and low overdispersion ( ϕ = 100 ) , all three perform well ( AUC > 0 . 77 ) , regardless of infectious and aggregation periods ( see Fig 5 ) ., Performance is comparable with snapshots data ( Fig 4 ) ., Overall , they perform best if 1/γ = 1 week ( Fig 5 , top row ) and worst if 1/γ > δ ( Fig 5 , bottom left ) ., At low overdispersion , decreasing the reporting probability reduces the AUC ( compare the top right and bottom right panels of Fig 6 , AUC = 1 . 000 vs 0 . 831 ) ., The performance drop is largest if 1/γ = 1 month and δ = 1 week ., The performance of all three EWS is negatively affected by overdispersion ., Sensitivity to overdispersion is least for 1/γ = δ = 1 week , performance is only poor if ϕ = 0 . 01 and/or ρ ≲ 0 . 036 ( Fig 5 , top left ) ., These three EWS are reliable indicators of emergence provided δ ≥ 1/γ and ϕ = 100 ., Unless reporting error is highly overdispersed ( ϕ = 0 . 01 ) , the mean , variance and first differenced variance perform extremely well ( AUC ≈ 1 , see Fig 5 ) ., If case reports are aggregated weekly and have high overdispersion ( ϕ = 0 . 01 ) , they are among the best performing EWS ., The mean and variance have AUC > 0 . 85 , and the first differenced variance has AUC ≈ 0 . 66 , but is largely unaffected by reporting probability and infectious period ., However , if case reports are aggregated monthly and ϕ = 0 . 
01 , then all three perform poorly ., This holds regardless of reporting probability and infectious period , and is in line with the results for other EWS ., The index of dispersion ( unrelated to the dispersion parameter ) has a similar performance to the previous group of EWS , however with certain differences ., We first consider low overdispersion ( ϕ = 100 ) ., At low reporting probabilities the index of dispersion performs best if 1/γ = 1 week and δ = 1 month ( Fig 5 , top right ) ., For other combinations of infectious period and aggregation period , performance suffers a sharp drop as reporting probability decreases ., This drop occurs at a reporting probability dependent on the infectious period and aggregation period , around ρ = 0 . 047 for δ = 1 week , and around ρ = 0 . 027 for δ = 1/γ = 1 month ., Unique among the EWS , the index of dispersion performs best at intermediate overdispersion ( ϕ = 1 ) , in particular at small reporting probability ., This is true for all infectious and aggregation periods , although most pronounced if 1/γ = 1 month and δ = 1 week ., For ϕ = 0 . 01 the index of dispersion performs better if the data are aggregated weekly , and best if the infectious period is also one week , with AUC ≈ 0 . 71 for ρ = 0 . 047 ( Fig 6 , bottom left ) ., Provided ρ ≳ 0 . 05 and ϕ > 0 . 01 , performance is good for all aggregation and infectious periods ., Overall performance is best if 1/γ = 1 week and δ = 1 month ., Taken in isolation , the mean and variance are the EWS least impacted by reporting ., Unless the overdispersion in the observation process is high ( ϕ = 0 . 01 ) , their performance is largely unaffected by reporting process parameters ., At low reporting probabilities they outperform the autocorrelation , autocovariance , decay time and index of dispersion , and are independent of aggregation period and infectious period ., EWS sensitive to correlation between neighboring data points perform well unless, i ) ϕ = 0 . 
01 and/or, ii ) 1/γ > δ and ρ ≲ 0 . 06 ., While it is clear how high overdispersion in reporting reduces correlation in the data , an explanation for, ii ) is less clear ., If calculated from snapshots data , the coefficient of variation , kurtosis and skewness are the worst performing statistics ( |AUC − 0 . 5| ≈ 0 ) ., Using case report data improves performance under certain conditions ., If cases are aggregated weekly with low reporting probability and low overdispersion then they are among the best performing EWS , with |AUC − 0 . 5| ≈ 0 . 5 ., In addition the trends of the skewness and kurtosis ( both decreasing ) are opposite those given by the BDI process ( both increasing ) ., Overall , we conclude that these three EWS are unreliable indicators of disease emergence as their performance is conditional on a limited range of reporting process parameters ., For mathematical reasons , proposed EWS for disease emergence have assumed access to regular recordings ( “snapshots” ) of the entire infectious population 8–13 ., However , epidemiological data are typically aggregated into periodic case reports subject to reporting error ., To examine the practical consequences of this mismatch between theory and data , in this paper we calculated EWS from case report data ., We performed extensive numerical simulations to determine the sensitivity of each candidate EWS to imperfect data ., Case aggregation and reporting error change the statistical properties of the data , and can have subtle effects on an EWS’s performance ., We identified four groups of EWS based on their sensitivity to the various reporting process parameters ., The performance of one group , consisting of the EWS with either polynomial or no growth with R0 , has a nuanced relationship with the reporting process parameters ., We therefore conclude that the coefficient of variation , kurtosis and skewness perform poorly as EWS ., In general , the other EWS ( the mean , variance , first differenced 
variance, index of dispersion, autocorrelation, autocovariance and decay time) all performed well and are strong candidates for incorporation in monitoring systems intended to provide early warning of disease emergence. Surprisingly, the combination of reporting error and aggregation of data does not always have a detrimental effect on EWS performance. The coefficient of variation, kurtosis and skewness perform best when both reporting probability and overdispersion are low. At first glance this result appears counter-intuitive: as an increasingly large fraction of cases is missed, performance improves. The point to stress here is that by changing the parameters of the reporting process we are systematically changing the statistical properties of the time series. For instance, the BDI process predicts no trend in the coefficient of variation, due to the standard deviation and mean increasing with R0 at an identical rate [13]. With aggregation and reporting error this identity does not necessarily hold, introducing a trend in the coefficient of variation and improving its performance. To fully explain this phenomenon requires an analytical solution for the statistics of K_t, which requires solving the stochastic process including aggregation and reporting error. However, it can be seen to be plausible if we focus only on the stochasticity resulting from reporting error. For low overdispersion (e.g. ϕ = 100), the reporting probability distribution can be approximated by a Poisson distribution with parameter λ = ρC_t. Ignoring demographic stochasticity, we replace C_t with E[C_t] = ηδ(1 − R0)^(−1). Both the coefficient of variation and the skewness for this distribution are λ^(−1/2) = {(1 − R0)/(ρηδ)}^(1/2), and the kurtosis is λ^(−1) = (ρηδ)^(−1)(1 − R0). These two expressions both decrease as R0 increases from 0 to 1, consistent with the experimentally observed AUC < 0.5. The improved performance at low ρ is a consequence of the increased stochasticity in reporting outweighing demographic stochasticity. Can these EWS be used to anticipate disease emergence? If overdispersion and reporting probability are known to be low, then yes. However, it is unlikely that the reporting process is sufficiently understood for an emerging disease. We conclude that these three EWS are unreliable and therefore not good indicators of emergence. There is a similar reason for why the index of dispersion has a peak in performance at intermediate reporting overdispersion. The negative binomial reporting distribution, conditioned on E[C_t] as above, has index of dispersion given by σ²/μ = 1 + ρηδ{ϕ(1 − R0)}^(−1). Therefore increasing reporting overdispersion (i.e. increasing ϕ^(−1)) amplifies the response of σ²/μ to changes in R0. This leads to a greater differential, improving the performance of the index of dispersion as an EWS. However, increased reporting overdispersion also implies increased volatility of data within a finite-sized window, which reduces reliability. These two countervailing factors provide an explanation for the optimal performance at intermediate overdispersion values. In our analysis we considered an SIR model with epidemiologically plausible parameters. The negative binomial distribution is meant to provide a stringent test of EWS performance, and the parameter ranges are conservative (especially for overdispersion). For instance, if there are 10 actual cases in a week, and reporting error is negative binomially distributed with ρ = 0.1 and ϕ = 0.01, then the mean number of reported cases is 1. However, the probability of no cases being reported is P(K = 0) = 0.955, whereas the variance in reported cases is σ² = 101. The resulting time series is highly volatile, with little similarity in appearance to the underlying time series of actual cases. It is unlikely that case reports for an emerging disease will have such high overdispersion. In addition, for a highly pathogenic emerging disease, such as Middle East respiratory syndrome (MERS) or H7N9 avian influenza, the reporting probability is likely much higher than ρ = 1/256 (the smallest value we studied). Nonetheless, one of the encouraging findings of this study is that high reporting is not essential for reliable early warning. Clear trends in the EWS can still be identified, provided there are sufficiently many introductions for cases to be sporadically detected prior to emergence. These dynamics are typical for a reemerging vaccine-controllable disease, such as measles, where cases are continually introduced into disease-free regions from endemic regions [45, 46]. The performance of EWS which depend on correlation between neighboring case reports was found to be contingent on the aggregation period being larger than the serial interval (equal to the infectious period for the SIR model). If this is not the case, there is a smaller probability that successive links in a chain of transmission fall into neighboring case reports. We speculate that this reduces the impact of fluctuations in a particular report on the subsequent report, diminishing their correlation. This effect is exacerbated if the reporting probability is low. A more rigorous explanation requires a full solution to the stochastic process with aggregation and reporting error. For many known pathogens the serial interval is larger than one week, for example measles virus and Bordetella pertussis [47]. For other pathogens it is less than one week, such as SARS coronavirus [48] and influenza virus [47, 49]. In order for the autocorrelation, autocovariance and decay time to
be reliable EWS, our results suggest the data need to be aggregated by periods larger than the serial interval. The performance boost outweighs the costs associated with having fewer data points. We expect that these three EWS will work best for pathogens with short serial intervals; for pathogens with extremely long serial intervals (such as HIV) reliable use of these EWS is unlikely. The purpose of this study was not to identify the best EWS, but to investigate the robustness of this approach to the reporting process. In order to isolate the effects of incomplete reporting and aggregation error, we ignored parameter uncertainty by fixing epidemiological parameters (e.g. the infectious period and the introduction rate), rather than drawing them from a distribution. As shown in Table 3, both the mean and the variance scale with the introduction rate, which is a product of the per capita introduction rate and the susceptible population size. On the other hand, the index of dispersion, autocorrelation and decay time are all independent of the introduction rate. Uncertainty in important factors, such as the susceptible population size, is a key challenge to anticipating emergence, and these three EWS may outperform the mean and the variance if uncertainty is included. Thus, while the mean and the variance are most robust to imperfect data, they are not necessarily the best EWS. Instead, our results suggest that imperfect data is not a barrier to the use of EWS. One challenge to early warning stems from the potential suddenness of novel pathogen emergence; for example, SARS was unknown prior to the global outbreak in 2002-2003. For known pathogens, intermittent data availability presents a separate challenge. Mumps was excluded from the US CDC's MMWR in 2002 following a period of low incidence. Subsequently, there was a series of large outbreaks, notably in 2006 in the Midwest, and mumps was reincluded. Methods such as EWS are contingent on surveillance efforts being maintained. In addition to underlining the importance of disease surveillance, our work suggests ways it can be improved. Case reports sometimes include additional metadata, for example whether all suspected cases are counted or only clinically confirmed cases. The reporting error of case reports with differing case identification criteria is expected to be very different, as has been seen for instance with MERS [50]. This paper shows that EWS depend on the reporting process, and cross-validating EWS calculated from each data stream could improve performance. Provided it is available, how to leverage metadata is a promising avenue for future research into enhancing EWS. These results provide an essential stepping stone from previous theoretically focused works to implementable early-warning systems. Our findings further reinforce the hypothesis that disease emergence is preceded by detectable EWS. While epidemiological factors preclude early warning for certain pathogens, for example Ebola virus (estimates of R0 have consistently been greater than one [51]) and HIV (see above), they do not rule out many others, including reemerging childhood diseases [52], H7N9 avian influenza virus [53], and MERS coronavirus [54]. These pathogens all present public health risk, and EWS may be able to play an important role in monitoring for their emergence.
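As a concrete illustration of the rank-based AUC computation described in the Methods above (Eq 2), here is a minimal Python sketch. The function name and example values are ours, not from the paper (which uses the R spaero/pomp toolchain), and ties between pooled coefficients are ignored for simplicity:

```python
import numpy as np

def auc_from_ranks(test, null):
    """AUC = P(tau_test > tau_null) via pooled ranks (Eq 2):
    AUC = (r_test - n_test*(n_test + 1)/2) / (n_test * n_null)."""
    pooled = np.concatenate([test, null])
    ranks = np.empty(pooled.size)
    ranks[np.argsort(pooled)] = np.arange(1, pooled.size + 1)  # 1-based ranks, assuming no ties
    r_test = ranks[: len(test)].sum()  # sum of the ranks of the test coefficients
    n_test, n_null = len(test), len(null)
    return (r_test - n_test * (n_test + 1) / 2) / (n_test * n_null)

# Example: 11 of the 12 (test, null) pairs have test > null, so AUC = 11/12
test = np.array([0.9, 0.8, 0.4])
null = np.array([0.1, 0.2, 0.3, 0.5])
print(auc_from_ranks(test, null))
```

The rank formula is exactly the Mann-Whitney U statistic divided by n_test·n_null, which is why it equals the pairwise probability P(τ_test > τ_null) without ever constructing the ROC curve.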
| Introduction, Methods, Results, Discussion | Epidemic transitions are an important feature of infectious disease systems. As the transmissibility of a pathogen increases, the dynamics of disease spread shifts from limited stuttering chains of transmission to potentially large-scale outbreaks. One proposed method to anticipate this transition is early-warning signals (EWS), summary statistics which undergo characteristic changes as the transition is approached. Although theoretically predicted, their mathematical basis does not take into account the nature of epidemiological data, which are typically aggregated into periodic case reports and subject to reporting error. The viability of EWS for epidemic transitions therefore remains uncertain. Here we demonstrate that most EWS can predict emergence even when calculated from imperfect data. We quantify performance using the area under the curve (AUC) statistic, a measure of how well an EWS distinguishes between numerical simulations of an emerging disease and one which is stationary. Values of the AUC statistic are compared across a range of different reporting scenarios. We find that different EWS respond to imperfect data differently. The mean, variance and first differenced variance all perform well unless reporting error is highly overdispersed. The autocorrelation, autocovariance and decay time perform well provided that the aggregation period of the data is larger than the serial interval and reporting error is not highly overdispersed. The coefficient of variation, skewness and kurtosis are found to be unreliable indicators of emergence. Overall, we find that seven of the ten EWS considered perform well for most realistic reporting scenarios. We conclude that imperfect epidemiological data is not a barrier to using EWS for many potentially emerging diseases.
| Anticipating disease emergence is a challenging problem; however, the public health ramifications are clear. A proposed tool to help meet this challenge is early-warning signals (EWS), summary statistics which undergo characteristic changes before dynamical transitions. While previous theoretical studies are promising, and find that epidemic transitions are preceded by detectable trends in EWS, they do not consider the effects of imperfect data. To address this, we developed a simulation study which assesses how case aggregation and reporting error impact 10 different EWS' performance. Case report data were simulated by combining a stochastic SIR transmission model with a model of reporting error. Temporal trends in an EWS were used as a method of distinguishing between an emerging disease (R0 approaching 1) and a stationary disease (constant R0). We investigated the robustness of EWS to reporting process parameters, namely the aggregation period, reporting probability and overdispersion of reporting error. Seven of ten EWS perform well for realistic reporting scenarios, and are strong candidates for incorporation in disease emergence monitoring systems. | medicine and health sciences, pathology and laboratory medicine, infectious disease epidemiology, pathogens, mathematical models, epidemiological methods and statistics, probability distribution, mathematics, stochastic processes, skewness, research and analysis methods, statistical distributions, infectious diseases, epidemiology, mathematical and statistical techniques, probability theory, statistical dispersion, epidemiological statistics, physical sciences | null |
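The centered moving-window estimators used by the paper above (Eqs 3 and 4) can be sketched in Python as follows. This is a minimal illustration with our own function name; the paper itself uses the R spaero package with bandwidth b = 35, and the edge behavior below implements the shrinking normalization described after Eq 3:

```python
import numpy as np

def moving_window_mean_var(x, b):
    """Centered moving-window mean (Eq 3) and variance (Eq 4).
    The window spans s = t-(b-1) .. t+(b-1) steps; near the ends of
    the series the normalization shrinks to the number of points
    actually inside the window."""
    n = len(x)
    mu = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - (b - 1)), min(n, t + b)
        mu[t] = np.mean(x[lo:hi])
    var = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - (b - 1)), min(n, t + b)
        var[t] = np.mean((x[lo:hi] - mu[lo:hi]) ** 2)  # uses mu_s at each s, per Eq 4
    return mu, var
```

Note that, matching the iterative scheme in the text, the variance at each window point uses the already-estimated local mean μ̂_s rather than a single window-wide mean; for a linear ramp the interior residuals X_s − μ̂_s therefore vanish and the Eq-4 variance is zero.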
1,037 | journal.pcbi.1002206 | 2,011 | An Integrated Disease/Pharmacokinetic/Pharmacodynamic Model Suggests Improved Interleukin-21 Regimens Validated Prospectively for Mouse Solid Cancers | Cancer is a multi-faceted disease, involving complex interactions between neoplastic cells and the surrounding microenvironment [1]. The prospect of immunotherapy, i.e. stimulating endogenous immune responses by various molecular and cellular factors, is emerging as a promising approach against this disease [1, 2, 3]. One of the latest candidates for solid cancer immunotherapy is Interleukin (IL)-21, a γc-signaling protein of the IL-2 cytokine family with versatile immune-modulating properties [4, 5]. IL-21 has demonstrated substantial antitumor responses in several independent preclinical studies, in which mice inoculated with diverse transplantable syngeneic tumor lines were treated with the drug via cytokine-gene transfection, plasmid delivery, or injection of the recombinant protein [9]. In Phase I and IIa clinical trials, IL-21 was well tolerated and triggered moderate antitumor activity in some renal cell carcinoma (RCC) and metastatic melanoma (MM) patients [10, 11, 12, 13, 14]. More recently, clinical trials of IL-21 in combination with the tyrosine kinase inhibitor sorafenib for the treatment of RCC, and Rituximab for the treatment of non-Hodgkin's lymphoma, have also been investigated with encouraging results [15]. Yet, the intricate biology of IL-21 may set hurdles for its clinical development. Produced mainly by activated CD4+ T cells, IL-21 induces anticancer immunity predominantly by stimulation of natural killer cells (NKs) and/or cytotoxic T lymphocytes (CTLs) [4, 5, 6, 7]. The cytokine regulates various cellular and humoral pathways of immunity, and exerts conflicting stimulatory and inhibitory effects on several cell types [9, 16, 17]. Recent evidence for anti-angiogenic effects of IL-21 [18] further complicates its dynamical influence on the tumor microenvironment. Considering this biological complexity, traditional "trial-and-error" methodologies for clinical IL-21 therapy design are likely inefficient, and ought to be replaced by new guided approaches to maximize drug efficacy. Rational and systematic planning of anticancer therapy may be directed by mathematical modeling and computer-aided analysis, which provides a better understanding of the involved dynamics. Over the past 25 years, mathematical modeling strategies have been applied in oncology-focused studies investigating tumor progression, angiogenesis and interactions with the immune system [19, 20, 21, 22, 23, 24]. Models for cytotoxic, cytostatic and cytokine-based direct and supportive cancer drugs have been introduced, with some being subsequently validated in preclinical and clinical settings [23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. These strategies have highlighted the importance of adequate selection of therapeutic regimens to achieve desired outcomes, by carrying out in-depth analysis of optimal times, dosages, and durations of treatment. Pharmacokinetic (PK) and pharmacodynamic (PD) modeling of anticancer agents can be particularly useful for clinical design of treatment [37, 38]. We have previously developed a mathematical model for the local dynamic effects of IL-21 on solid cancers. The model focused on interactions of IL-21 with NKs/CTLs, effector cytotoxicity against target cells, and immune memory, providing initial understanding of the optimal conditions for IL-21 gene therapy [39, 40]. Here, we have designed a new comprehensive PK/PD/disease model to predict clinically relevant scenarios of IL-21 treatment following intravenous (IV), subcutaneous (SC) or intraperitoneal (IP) administration in different cancer indications. The model forecasts long-term effects of the drug by integrating newly described PK/PD processes together with a
disease model, based on our initial in situ model [39, 40]. This new combined model was retrospectively and prospectively validated by in vivo experiments in IL-21-treated mice bearing melanoma (B16) or renal cell carcinoma (RenCa). Model predictions provide substantial insights concerning adequate planning of systemic IL-21 therapy in solid cancers. All experiments were conducted according to Novo Nordisk principles for animal studies, as approved by the Danish National Ethics Committee on Experimental Animals, and in accordance with National Institutes of Health guidelines for the care and use of laboratory animals. Data were collected from a published preclinical study in which mice bearing B16 and RenCa tumors were treated with IL-21 by various strategies [41]. Briefly, tumors were induced at day 0, and a daily (B16) or 3×/week (RenCa) IL-21 regimen (50 µg/dose) was applied SC or IP either at an "early" stage (day 3 in B16; day 7 in RenCa), or at a "late" stage (day 8 in B16; day 12 in RenCa) of tumor development. The tumor was measured several times until experiment termination. Data were available from additional unpublished dose-titration experiments in RenCa: IL-21 was given SC, 1× or 3×/week, and groups of mice (n = 6) were assigned a dose between 1-50 µg. The complete database was a priori divided into "training datasets" for model parameter estimation, and "validation datasets" for model verification. In new prospective experiments designed to test model-suggested regimens, 7-8-week-old wild-type C57BL/6 mice (Taconic Europe A/S, Denmark) were inoculated SC in the right flank with 1×10^5 B16F0 melanoma cells (American Type Culture Collection (ATCC), CRL-6322) on day 0.
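The a priori division just described, training data for parameter estimation and held-out validation data for model verification, can be illustrated with a toy workflow. This is our own sketch: a hypothetical exponential tumor-growth stand-in fitted by least squares, not the paper's ODE system or its Hooke-Jeeves estimation procedure, and the function names are illustrative:

```python
import numpy as np

def fit_growth_rate(days, volumes):
    """Least-squares fit of log(volume) vs time on the training set,
    assuming a toy model V(t) = V0 * exp(g * t)."""
    g, log_v0 = np.polyfit(days, np.log(volumes), 1)
    return g, np.exp(log_v0)

def r_squared(observed, predicted):
    """Goodness-of-fit of model predictions against held-out validation data."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - np.mean(observed)) ** 2)
    return 1.0 - ss_res / ss_tot

# Training set: estimate parameters; validation set: verify predictions.
train_days = np.array([0.0, 2.0, 4.0, 6.0])
train_vols = 100.0 * np.exp(0.3 * train_days)
g, v0 = fit_growth_rate(train_days, train_vols)
val_days = np.array([8.0, 10.0])
val_obs = 100.0 * np.exp(0.3 * val_days)
print(r_squared(val_obs, v0 * np.exp(g * val_days)))  # close to 1 for noise-free data
```

The point of the split is that the validation R² is computed on data the fit never saw, which is the same principle behind the paper's retrospective and prospective validation steps.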
Recombinant murine IL-21 (Novo Nordisk A/S, Denmark) or PBS was injected SC from day 3, when tumors were visible. IL-21 was given at 12 µg/day, 50 µg/day, or 25 µg twice a day, each group including n = 10 mice. Tumor volumes were calculated by the formula based on the two perpendicular diameters d1 and d2, measured approximately 3×/week with digital callipers. All experiments were carried out blindly, without the investigators' knowledge of model predictions. Animals were randomized and ear-tagged prior to treatment onset and euthanized when individual tumor volumes reached 1000 mm³. The new comprehensive systemic model for IL-21 immunotherapy contains PK/PD effects merged with disease interactions, as schematized in Fig. 1. The system is described hereafter, and the coupled ordinary differential equations (ODEs) are fully detailed in the Text S1 (sections A-B). The model (Fig. 1) was implemented in C (Microsoft Visual Studio .NET) and MATLAB (The MathWorks, Natick, MA) programming platforms. The system was solved by fourth-order Runge-Kutta integration. Model parameters were evaluated by a customized numerical method based on Hooke and Jeeves optimization [48], combining global and local search heuristics and least-squares curve-fitting. Parameter sets achieving maximal model agreement with experimental training data were selected (see Text S1, Tables S1-S2). The model was simulated under numerous IL-21 regimens, differing in onset, duration, dose, inter-dosing interval, route, etc. All simulations were repeated several times to ensure output consistency. Retrospective verification of the model was accomplished by checking its prediction accuracy, via statistical comparison of its output with prior independent validation datasets (see Experimental data and [41]): model simulations were conducted under the specific tumor settings and treatment conditions of each prior experiment. For prospective model
validation, selected model-identified regimens were tested experimentally, and results were statistically compared to model predictions at the data sampling times. The goodness-of-fit between the model output and experimental data was determined by calculating the coefficient of determination (R²). To compare between experimental datasets, Student's t-test (two-tailed, assuming equal variance) was applied. A P < .05 value was considered statistically significant. First, we examined the sensitivity of the model to small variations in the value of the plasma-tissue correlation factor s, given that this pivotal parameter simplifies rather complex PK processes. Simulations of the experimental early-onset IL-21 regimen (50 µg/day applied SC/IP) in the B16-challenged setting were carried out under diverse s values, in the vicinity of those obtained through curve-fitting (see Materials and methods and Fig. 2). After increasing or decreasing s values by two-fold, model predictions still accurately retrieved the murine data (R² > 0.90; Fig. 2), and were comparable to the original fits (Fig. 2, "Model fit"). Interestingly, model predictions remained precise even when modifying the values of the effector-tumor interaction coefficients k1 and k2 (see Text S1, section C, and also Fig. S3). These results indicate that model predictions are robust even when s, k1 and k2 values slightly diverge, meaning that different numeric combinations of these parameters, i.e., multiple NK:CTL ratios, can accomplish the same therapeutic effect. This implies a potentially wide window of IL-21 doses within which effects may be comparable. Our primary goal was to validate the model's predictive accuracy. We therefore compared its output to the experimental B16 progression following a late (day 8) onset regimen of IL-21, given at 50 µg/day SC/IP for 3 weeks [41]. All late-treatment simulations were strongly in line with the independent validation data (R² > 0.90; Fig. 3A), thus verifying the model. Notably, the model was able to recapitulate the biological behavior even under the aforementioned modifications in s, k1 and k2 parameter values (data not shown). Next we assessed the model's generality by investigating whether it can predict IL-21 therapy outcomes in other solid cancer indications, such as RenCa. Tumor growth and selected immune system parameters were set for RenCa, using training data in untreated mice and previously calibrated parameter values for moderately immunogenic cancers (see Materials and methods and [39]). Other parameter values were set exactly as in the B16 case. Simulations of the experimentally applied IL-21 treatment of 50 µg at 3×/week, given for 3 weeks [41], showed model predictions to be strongly akin to the observed dynamics under late (day 12) therapy administered SC, as well as in early (day 7) and late IP regimens (R² > 0.90; Fig. 3B). Under the early SC regimen, predicted responses were slightly weaker than observed, yet still remained within the measurements' standard deviation (R² > 0.73; Fig. 3B, upper panel). To further validate the model for RenCa, we simulated it to predict the effects of another experiment that applied lower IL-21 doses (between 1-20 µg, SC 3×/week for 3 weeks). Predictions were in agreement with the validation set readouts at most doses (R² > 0.94; Fig. 3C), collectively demonstrating a moderate dose-dependent decrease in IL-21-mediated tumor eradication. The 10 µg (3×/week) simulation experiment gave a good, but slightly lower, model-data correlation (R² > 0.83; Fig. 3C). The model also successfully retrieved a retrospective experiment testing a 30 µg (1×/week) IL-21 treatment schedule (R² > 0.90; Fig. 3C). Having validated the model, we used it to gain insights into better IL-21 therapy in the B16 setting. In particular, we searched for regimens that would be superior to the standard daily SC 50 µg treatment applied previously [41]. First, we tested whether the treatment initiation time is a critical factor in determining IL-21 effects, by simulating different onsets of the standard daily regimen. The model predicted that earlier therapy initiation results in stronger anticancer responses, as expected (Fig. 4A). The simulated tumor mass at the end of therapy (day 20) was lowest under the earliest regimen, which began one day after B16 challenge: this final tumor load was roughly 15% lower than that obtained in the standard treatment initiated at day 3. In contrast to this early regimen, the tumor load resulting from a delayed regimen, initiated at day 10, was doubled (Fig. 4A). Further delayed regimens (with onsets as late as day 17) were even less favorable (data not shown). These results collectively emphasize the importance of early-onset therapies. Notably, however, not even the earliest treatment onset was able to fully eradicate the tumor. Simulations were also performed to see whether the anticancer response could be improved by fractionating the IL-21 regimen into a more intensive high-dosing protocol, as suggested for other drugs [34, 49]. To design alternative schedules, the daily IL-21 regimen (16 SC injections, 50 µg each, given from day 3; [41]) was taken as a reference point: the same total dose (800 µg) was distributed differently across the treatment window, using various doses and inter-dosing intervals, creating a collection of regimens to be tested. Intriguingly, the model predicted that a more intensive schedule, applying two 25 µg doses per day at a 12-hour inter-dosing interval, would lead to a 45% lower tumor mass than that obtained under the standard daily 50 µg regimen (Fig. 4B). Fractionation into even smaller doses given every few hours produced slightly lower tumor sizes, yet these responses were not significantly better than the 25 µg regimen outcomes (Fig. 4B). In fact, not even the most fractionated schedule could arrive at full eradication of the tumor. At the other end, less fractionated regimens comprising large IL-21 doses given every few days had significantly weaker efficacy (Fig. 4B). In order to verify our prediction that the fractionated 25 µg/12 hour regimen would be superior to the standard 50 µg/24 hour schedule, the two were experimentally applied in B16-challenged mice. Even though both schedules effectively attenuated tumor progression as compared to control PBS-treated mice (*p < .001; Fig. 4C), the 25 µg/12 hour regimen was considerably more successful than the standard 50 µg daily regimen (**p < .
05; Fig 4C ) , as mathematically predicted ., The observed tumor dynamics under the 25 µg regimen had an excellent fit with the prior model predictions ( R2>0 . 90; Fig . 4C ) , providing strong and quantitative prospective validation of the models precision ., We considered that the fractionated regimen may not be clinically practical , since it could involve increased costs of therapy , and , at least in IV delivery , would possibly require hospitalizing patients ., Therefore , the search for better treatment was limited to simple , widely-acceptable daily administration schedules ., Regimens of one IL-21 dose per day ( e . g . 16 SC injections given between days 3–20 following B16 inoculation ) were simulated under different dose intensities: A dose-dependent increase in the response , reflected by lowered tumor masses , was predicted for very low ( <5 µg ) or very high ( >50 µg ) levels ( Fig . 5A ) ., Yet interestingly , similar outcomes were predicted for the 5–50 µg dose range ( Fig . 5A ) ., This might be explained by the conflicting roles of IL-21 , enhancing CTL activation while drastically reducing NK numbers at the same time 39; It is likely that in this dosing range , IL-21-increased CTL responses fail to promote further tumor shrinkage due to the IL-21-inhibition of NK availability ., A prospective experiment in B16-induced mice examined whether a low dosing regimen in the plateau range ( i . e . 12 µg/day ) could indeed be as effective as the standard 50 µg/day treatment ., Beginning on day 3 following tumor challenge , the two doses were applied SC , and the tumor mass was measured until day 17 ., Both the 12 µg and 50 µg doses induced sufficient antitumor responses in the mice ( *p< . 05 and **p< . 001 compared with PBS-treated mice; Fig . 5B ) ., Although the 12 µg dose appeared slightly less potent , its effect was not significantly different from the 50 µg schedule ( ns , p> . 05; Fig . 
5B), as anticipated by the model. Indeed, the model prediction (Fig. 5A) fit the 12 µg/day outcome to a good degree (R2 = 0.89; Fig. 5B). These findings further validate the model, and are the first indication of an IL-21 dosing range executing equally potent effects. Immune-targeted therapy is increasingly apparent in the battle against cancer. Several reagents are in development within this scope, some already approved for use in certain indications [1, 9, 50]. In this study, we have devised and validated a clinically-relevant mathematical model integrating the PK/PD effects on immune and disease interactions of IL-21, one of the recent immunotherapeutic drugs under focus in solid cancers [9]. Following its verification, our model was used for suggesting beneficial IL-21 treatment policies. Previous attempts to model cytokine-based immune modulation of solid malignancies have been mainly theoretical, helping to elucidate certain characteristics of the tumor-immune system cross-talk and providing important insights into treatment success (see for example [23, 31, 33]). Our former model focused on the heart of the IL-21 response, retrieving the effects of cytokine gene therapy to a good extent. Yet, its predictions could not be extrapolated to the clinical realm. The current work is thus among the first biomathematical studies accounting for practical treatment aspects of cytokine immunotherapy in general, and IL-21 treatment in particular. Our current model deals with realistic PK and PD effects on disease progression, clinically-feasible scheduling, patient compliance, etc. Moreover, in contrast to the customary stand-alone PK/PD modeling approach, we have integrated IL-21 PK/PD with specific effects on the involved biological processes, to give a mechanistic, yet minimal, model. Particularly, our PD/disease model accounts for real entities of the IL-21 biological processes (effector cells, etc.
), which enabled us to use measurable data and make testable quantitative predictions. At the same time, we kept our model concise thanks to condensation of other overly complex biological entities (cytotoxic proteins, etc.) which are less cardinal and often not measured experimentally. Overall, our approach provides a robust model that can forecast the long-term anticancer effects of a specific immunotherapeutic cytokine, via a clinically-oriented prism. Our integrated PK/PD model was constructed by an advanced “multiple-modeling” approach, which we found most suitable for the IL-21 scenario. The selection of a favorable model out of many analyzed structures and complexities, and the use of non-linear kinetics, enabled us to explore significantly more functional possibilities, and allowed for flexibility in the design. Moreover, rather than forming a model per scenario, we were able to create a generalized model by describing processes that are mutual to different therapeutic settings (administration routes, etc.) and tumor types. This enhanced the robustness of the model, since its structure was subject to testing under diverse conditions. Indeed, the model encompasses IL-21-induced outcomes in a wide range of treatment conditions, under different times and administration routes. Despite its simplicity, the model accurately predicted IL-21-relayed effects in B16- and RenCa-challenged mice, both prospectively and retrospectively. Moreover, the model demonstrated robust behavior, and predictions were largely insensitive to modulation of key parameters. With this combined generality and accuracy, the model can potentially accommodate other clinical settings and solid cancers where similar immune processes apply and where IL-21 has been useful (i.e.
adenocarcinoma, glioma, neuroblastoma) [7, 8]. A systematic design of clinically applicable IL-21 immunotherapy strategies has long been called for. Considering the modest responses of MM and RCC patients to IL-21 therapy [10–14], it is worthwhile to examine whether the drug can be more powerful under different treatment approaches. Previous trial regimens of IL-21 were determined based on the US Food and Drug Administration guidelines for high-dose IL-2 therapy in MM patients [13], as the two cytokines share homology and certain effector-inducing functions. Yet, recent findings demonstrate that IL-2 and IL-21 do not entirely align in their actions [17, 51, 52], inferring that the optimal administration strategies (administration routes, dose intensities, inter-dosing intervals, etc.) likely vary between the two agents. Local IL-21 delivery or expression have been proposed, by us and others, to be potentially effective and safe approaches [17, 39, 53], yet such therapeutic methods are not yet available for clinical use. Our systemic model analysis therefore represents a new effort to identify improved, clinically-appropriate IL-21 therapies, using the preclinical tumor models B16 and RenCa as case studies. Simulations of differently dosed IL-21 schedules gave rise to central new insights. According to the model, comparable antitumor responses are induced by daily IL-21 doses within the 5–50 µg range. This was prospectively confirmed in B16-challenged mice, in which a substantially lower IL-21 dose (roughly 12 µg/day) was as effective as the standard 50 µg/day treatment. An insensitive range of IL-21 doses with similar efficacy is not unreasonable, considering that the drug respectively inhibits or induces NKs and CTLs, two cells which complement one another in the process of cancer targeting. This model-aided identification of smaller doses with similar therapeutic efficacy could have immense clinical value
, possibly reducing putative IL-21-associated toxicities. Adverse events have indeed been reported in IL-21-treated patients [10–14]. IL-2 and interferon-α, other cytokine drugs, are associated with severe hematological and neuropsychiatric side effects complicating their use [2]. Recent PK/PD models of toxic IL-21 effects on body temperature and red blood cell regulation [46, 47] present a possible framework in which our improved regimens can be confirmed for clinical safety. Another interesting concept surfacing from our simulations addresses IL-21 fractionation. The model predicted improved antitumor responses by simple partitioning of the experimental regimen (a single 50 µg dose/day) into an equally intense regimen of 25 µg doses given twice daily. This was prospectively validated by experiments in which the fractionation-treated mice ended therapy with ca. half of the tumor load observed after the standard treatment. Model-predicted halving of a daily dose was sufficient to significantly enhance IL-21 efficacy, and further division of the doses was not imperative. Indeed, fractionation of cancer therapeutics was recommended in the past by mathematical modeling [29], and its beneficial effects have been validated preclinically for a chemotherapy supportive drug [34]. This strategy has mostly been applied in the context of radiation therapy and chemotherapy [54], yet our results, which clearly indicate the benefit of fractionated IL-21 dosing, propose its relevance also to immune-modulating drugs. Notwithstanding, fractionation may be impractical, reducing patient compliance and requiring hospitalization in certain cases. Moreover, embarking on new clinical studies to test fractionation therapy is a large and expensive task, and further adjustment of the mathematical model to humans is needed before engaging in such endeavors. Our findings also raise the question whether IL-21 ought to be administered by available
“slow and continuous release” drug delivery methods, which can be viewed as regimens of maximal partitioning. Past cytokine-gene therapy experiments in mice showed complete eradication of IL-21-secreting tumors in which the drug was released in low continuous levels directly in the target tissue [7, 9], supporting the possible advantage of fractionated regimens. Future implementation of such routes of drug delivery within our model can allow to specifically analyze the benefit of such strategies for IL-21 therapy. Our present results set the stage for constructing a humanized IL-21 model, to serve as a tool for streamlining development of the drug, and in the future, hopefully, also for personalizing cytokine immunotherapy. The model, up-scaled to the clinical arena, can entertain diverse cancer indications, patient-specific characteristics, and different modes of therapy. Newly-discovered IL-21 properties of relevance to the anticancer response, such as modulation of T regulatory cell functions [17] and anti-angiogenic properties [18], may be introduced in the evolving IL-21 model. Finally, considering the growing interest in combination therapies for solid cancers, and the promising preclinical and clinical responses observed when applying IL-21 with monoclonal antibodies or signaling inhibitors [9, 50], a future model will also study IL-21 therapy in combination with additional therapeutic reagents.
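The fractionation result discussed above (one 50 µg dose per day versus two 25 µg doses per day, same total dose) can be illustrated with a generic linear one-compartment pharmacokinetic sketch. This is not the authors' PK/PD model: the 6 h elimination half-life, unit volume and hourly time grid are hypothetical placeholders chosen only to show the qualitative effect of splitting a dose.

```python
import numpy as np

def concentration(t, dose_times, dose_amount, k_el, volume=1.0):
    """Superpose first-order exponential decays from repeated bolus doses
    (generic one-compartment model; parameters are illustrative only)."""
    t = np.asarray(t, dtype=float)
    c = np.zeros_like(t)
    for td in dose_times:
        active = t >= td
        c[active] += (dose_amount / volume) * np.exp(-k_el * (t[active] - td))
    return c

def auc(c, t):
    """Trapezoidal area under the concentration-time curve."""
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

k_el = np.log(2) / 6.0                 # hypothetical 6 h elimination half-life
t = np.linspace(0.0, 96.0, 9601)       # four days, time in hours
daily = concentration(t, np.arange(0, 96, 24), 50.0, k_el)  # 50 ug every 24 h
split = concentration(t, np.arange(0, 96, 12), 25.0, k_el)  # 25 ug every 12 h
```

With the same total dose, the two regimens give a nearly identical AUC under linear kinetics, but the fractionated schedule maintains higher trough concentrations between doses — one plausible, model-agnostic reading of why dose splitting could sustain effector stimulation.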
| Introduction, Materials and Methods, Results, Discussion | Interleukin (IL)-21 is an attractive antitumor agent with potent immunomodulatory functions. Yet thus far, the cytokine has yielded only partial responses in solid cancer patients, and conditions for beneficial IL-21 immunotherapy remain elusive. The current work aims to identify clinically-relevant IL-21 regimens with enhanced efficacy, based on mathematical modeling of long-term antitumor responses. For this purpose, pharmacokinetic (PK) and pharmacodynamic (PD) data were acquired from a preclinical study applying systemic IL-21 therapy in murine solid cancers. We developed an integrated disease/PK/PD model for the IL-21 anticancer response, and calibrated it using selected “training” data. The accuracy of the model was verified retrospectively under diverse IL-21 treatment settings, by comparing its predictions to independent “validation” data in melanoma and renal cell carcinoma-challenged mice (R2 > 0.90). Simulations of the verified model surfaced important therapeutic insights: (1) fractionating the standard daily regimen (50 µg/dose) into a twice daily schedule (25 µg/dose) is advantageous, yielding a significantly lower tumor mass (45% decrease); (2) a low-dose (12 µg/day) regimen exerts a response similar to that obtained under the 50 µg/day treatment, suggestive of an equally efficacious dose with potentially reduced toxicity. Subsequent experiments in melanoma-bearing mice corroborated both of these predictions with high precision (R2 > 0.89), thus validating the model also prospectively in vivo. Thus, the confirmed PK/PD model rationalizes IL-21 therapy, and pinpoints improved clinically-feasible treatment schedules. Our analysis demonstrates the value of employing mathematical modeling and in silico-guided design of solid tumor immunotherapy in the clinic.
| Among the many potential drugs explored within the scope of cancer immunotherapy are selected cytokines which possess promising immune-boosting properties. Yet, the natural involvement of these proteins in multiple, often contradicting biological processes can complicate their use in the clinic. The cytokine interleukin (IL)-21 is no exception: while its strength as an anticancer agent has been established in several animal studies, response rates in melanoma and renal cell carcinoma patients remain low. To help guide the design of effective IL-21 therapy, we have developed a mathematical model that bridges between the complex biology of IL-21 and its optimal clinical use. Our model integrates data from preclinical studies under diverse IL-21 treatment settings, and was validated by extensive experiments in tumor-bearing mice. Model simulations predicted that beneficial, clinically practical IL-21 therapy should be composed of low-dose schedules, and/or schedules in which several partial doses are administered rather than a single complete dose. These findings were subsequently confirmed in mice with melanoma. Thus, future testing of these strategies in solid cancer patients can be a promising starting point for improving IL-21 therapy. Our model can thus provide a computational platform for rationalizing IL-21 regimens and streamlining its clinical development. | medicine, immune physiology, immune cells, cytokines, clinical research design, immune activation, animal models of disease, immunology, anatomy and physiology, preclinical models, adaptive immunity, genetics and genomics, immunoregulation, immunomodulation, immunotherapy, t cells, biology, immune response, immune system, systems biology, clinical immunology, nk cells, immunity, physiology, computational biology, modeling | null |
651 | journal.pcbi.1005673 | 2017 | Exploring the inhibition mechanism of adenylyl cyclase type 5 by N-terminal myristoylated Gαi1 | Many proteins are involved in cell communication, one type being the G-protein-coupled receptor (GPCR), embedded in the membrane. GPCRs are part of a major signalling pathway, the GPCR signal transduction pathway, which enables the transfer of a signal from the extracellular region to the intracellular side and is a key target for drug development. A large diversity of GPCRs can be found in nature, as about 800 human genes are involved in encoding different types of GPCRs that can interact with, for example, neurotransmitters, hormones or exogenous ligands [1]. In the cytosol, G proteins, composed of an α, β and γ subunit, are the first interaction partner of activated GPCRs. When a heterotrimeric G protein is activated by a GPCR, the trimer dissociates, resulting in an α subunit and a βγ dimer [2]. Activated Gα subunits transport the signal from the membrane to other regions of the cell by stimulating or inhibiting reactions via protein-protein interactions. Besides direct activation by GPCRs, the function of G proteins can also be influenced by other environmental factors, such as lipidation. Permanent N-myristoylation, for instance, is known to change the structure and function of the inhibitory G-protein subunit Gαi1 in its active GTP-bound state [3–6]. While a wide range of GPCRs exists, a relatively low diversity is present in the G protein family, e.g.
in the human body. The human body includes only a relatively small variety of 21 α, 6 β and 12 γ subunits [1]. The Gα subunits are divided into four major subfamilies based on their sequence homology and function [7]: stimulatory Gαs, inhibitory Gαi, Gαq and Gα12 [8, 9]. Overall the structures of the Gα subfamilies are similar (S1 Fig, Fig 1), including a Ras domain and an alpha helical (AH) domain. The Ras domain is present in all members of the G-protein superfamily and can perform hydrolysis of GTP to GDP during deactivation of the Gα subunit [10]. In addition, the domain includes an interaction site for GPCRs as well as regions that can interact with the βγ dimer. Moreover, the Ras domain can also undergo lipidation. Except for Gαt, all Gα proteins are able to reversibly bind a palmitate to their N-terminal helix. Besides palmitoylation, Gαi can also permanently bind a myristoyl moiety to the N-terminus that appears to be crucial for the function of the subunit (Fig 1C) [4, 5, 9, 11]. The AH domain is unique to the Gα subfamilies; it is composed of six α helices and interacts with the Ras domain when GTP or GDP is present (Fig 1C). However, this interaction between the AH and Ras domain is weakened when a nucleotide is absent in Gα's active site [12–14]. The high structural similarity among members of the Gα subfamilies is illustrated by aligning the X-ray structures of stimulatory Gαs and inhibitory Gαi1, resulting in a root mean square deviation (RMSD) of only 1.07 Å between the Cα atoms of the two structures (Fig 1A) [15, 16]. Hence, from a comparison of the structures it is difficult to conclude what the origin of their inverse action is, i.e.
, how the structure can be related to a stimulatory or an inhibitory effect, respectively. An example of a protein in which both Gαi and Gαs are important for regulation is adenylyl cyclase (AC). Ten isoforms of AC are known, of which nine are membrane-bound (AC1-9) and one is soluble (sAC) [17]. These different types of AC are found throughout the body in different concentrations. AC5, for instance, is present in high quantities in the brain, the spinal cord and the heart, and is associated with congestive heart failure and pain perception [18, 19]. G proteins have the ability to either stimulate (Gs) or inhibit (Gi) adenylyl cyclases' conversion of adenosine triphosphate (ATP) to cyclic adenosine monophosphate (cAMP) and pyrophosphate [20, 21]. ACs consist of two membrane-bound regions, each built from six trans-membrane domains, and a catalytic region in the cytosol that includes two pseudo-symmetric domains, C1 and C2 (Fig 1) [22]. GTP-bound Gαs is known to bind to the C2 domain, for which the interaction site is known from X-ray structures of Gαs interacting with AC (Fig 1B) [23]. Such data is absent for the case of Gαi. In the absence of direct experimental information, a putative interaction site of GTP-bound Gαi has been suggested, in analogy to the known structure of the complex of Gαs and AC (Gαs:AC), as the pseudo-symmetric site on the C1 domain (Fig 1B). However, how the interaction of Gαi on the C1 domain should induce inhibition is not obvious [5]. Furthermore, since with this hypothesis the interaction sites of Gαi1 and Gαs are highly similar in addition to their structures, it is unclear how the α subunits can differentiate the two binding sites on AC and what the cause is of the stimulatory versus the inhibitory effect induced by the subunits. A factor that could play an important role in differentiating the action of Gαi and Gαs is the difference in lipidation of both subunits. Although the X-ray structures
in the Protein Data Bank (PDB) [24] of the active inhibitory and stimulatory Gα subunits tightly align, the N-terminus, which is not resolved for Gαi or Gαs, is not myristoylated during the expression process of Gαi, as lipidation can hinder crystallisation [4]. Hence, it is not clear to what extent the missing N-terminal myristoyl moiety affects the structure of the remaining Gαi protein, even though the bound myristoyl group is known to be crucial for Gαi's conformation and function, as the ability to interact with AC5 is abolished upon removal of the myristate [4–6]. Classical molecular dynamics (MD) simulations of myristoylated GTP-bound Gαi1, Gαi1myr, demonstrate the stability of the myristoyl moiety on the Ras domain due to a hydrophobic pocket formed by β2-β3, α1 and the C-terminus α5 (Fig 1C), and show that myristoylation can have a significant effect on the conformation of the subunit [25]. The findings suggest the possibility of an alternative novel interaction mode and open up new possibilities for selective interactions with AC. This is because the structural changes found in the classical MD simulations of Gαi1myr [25] suggest that the subunit will not be able to interact with C1 in the way Gαs interacts with C2. Here, we investigate the interaction between Gαi1myr and AC, using classical MD simulations. To this end, the initial structure of Gαi1myr was taken from reference 25, in which a 2 μs classical MD simulation of Gαi1myr is described. Gαi1myr can inhibit only particular isoforms of AC: AC1, AC5 and AC6 [26]. In this study AC5 is used because X-ray structures of AC's catalytic domains are composed of isoforms AC2 and AC5. Ca. 16 AC structures can be found in the PDB with different resolutions and/or crystallisation conditions. All available structures have been co-crystallised with a Gαs subunit and correspond therefore to stimulated conformations at various levels of activation, depending on the nature of bound cofactors (e.g.
cofactor-free complex of AC, substrate-bound AC complex). When AC5 becomes active, roughly three conformational options are possible: a complex of Gαs and AC5, Gαs:AC5; Gαs in complex with ATP-bound AC5, Gαs:AC5(ATP); or a complex of Gαs and AC5 bound to the reaction products cAMP and pyrophosphate, Gαs:AC5(cAMP). Currently, it is not known which one of these forms is most likely to interact with Gαi1myr, or if Gαi1myr can inhibit all of them. In this study, the structure of the AC5 protein was taken from a crystal structure of the cofactor-free Gαs:AC5 complex. This apo AC5 structure was used as it could provide insight into Gαi1myr's inhibitory effect on a stimulated conformation of AC5 in the absence of ATP. The selected AC5 structure was employed to build a Gαi1myr:AC5 complex (Fig 1C) and to explore if the binding of Gαi1myr is able to affect the active conformation initially induced by Gαs. The absence of ATP in the active site provides the opportunity to investigate Gαi1myr's ability to prevent the formation of AC's fully activated form by altering AC's conformation unfavourably prior to substrate association. In order to verify which changes are due to the interaction of AC5 with Gαi1myr and which alterations are a result of the removal of Gαs, a second simulation of AC5, with the Gαs subunit removed, was performed on the same time scale as the Gαi1myr:AC5 complex. Hence, in this study the impact of the presence of myristoylated Gαi on the function of AC5 is explored via investigating the conformational features of the Gαi1myr:AC5 and the free AC5 complex (a system that only includes AC's catalytic region in solution) in comparison with the Gαs:AC X-ray structure. The Gαi1myr:AC5 complex has been obtained via docking the Gαi1myr structure onto the C1 domain of AC5. Already the initial docking results confirm the possible importance of the myristoyl-induced structural changes of Gαi1myr, as a new interaction mode
for Gαi1myr could be identified ., The comparison of the performed classical MD simulation ( 2 . 5 μs ) of the Gαi1myr:AC5 complex and the free AC5 system suggest two possible ways of AC inhibition in its apo form ., First , Gαi1myr seems to inhibit AC’s conversion of ATP to cAMP by preventing active-site formation as the Gαi1myr subunit perturbs the conformation of the active site at the C1/C2 interface ., Second , the effect of Gαi1myr on the AC structure leads to a closed conformation of the Gαs binding site on C2 , decreasing the probability of Gαs association and thus of a counter-balancing re-stimulation of the AC5 activity ., Taken together , the observed events lead to a suggestion for a putative Gαi1myr inhibition mechanism of apo AC5 in which lipidation is crucial for Gαi1myr’s function and its protein-protein interactions ., Hence , the results of this study provide a possible indication that lipidation could play a significant role in regulating G protein function and therefore could impact signal transduction in G protein mediated pathways 4–6 ., The PDB structure 1AZS , including the Gαs:AC complex with AC in the apo form , was used as a template , including 1AZS’s C1 and C2 domain , for the initial AC5 structure of Rattus norvegicus ( UniprotKB Q04400 ) 27–29 ., The structure of the Rattus norvegicus Gαi1myr subunit ( UniprotKB P10824 ) interacting with GTP and Mg2+ was taken from reference 25 ( S2 and S3 Figs ) ., The HADDOCK web server 30 was used for docking ten conformations of Gαi1myr on the catalytic domains of AC5 of Rattus norvegicus ., The Gαi1myr snapshots were extracted at the end of the Gαi1myr classical MD trajectory ( around 1 . 9 μs ) discussed in reference 25 , with a time interval of 0 . 
5 ns ., The active region of Gαi1 was defined in HADDOCK as a large part of the AH domain ( 112-167 ) , the switch I region ( 175-189 ) and the switch II region ( 200-220 ) , allowing for a large unbiased area on the Gαi1myr protein surface to be taken into account during docking ., The active region of AC5’s C1 domain was defined as the α1 helix ( 479-490 ) and the C-terminal region of the α3 helix ( 554-561 ) because experimentally it has been found that Gαi1myr is unable to interact with C2 and its main interactions with AC are with the C1 domain 5 ., Passive residues , residues that could take part in protein-protein interaction , were defined as residues around the active residues that are on the protein surface and within a radius of 6 . 5 Å of any active residue 30 ., The initial Gαi1myr:AC5 complex for the classical MD simulations was selected based on ( 1 ) the absence of overlap between the C2 domain and Gαi1myr , ( 2 ) no overlap with Gαi1myr’s GTP binding region and the interaction site of Gαi1myr with C1 and ( 3 ) presence of similar complexes in the top-ten docking results of the docking calculations performed for all ten Gαi1myr snapshots ., The first property of the selection criteria is important since Gαi1myr is unable to interact with C2 5 ., The second criterium has been defined since GTP is located in the active site of Gαi1myr in the classical MD simulations , but was not incorporated in the docking procedure because this is not possible in HADDOCK ., Therefore , no overlap between the GTP binding site and the C1 domain should be present in the docking result as otherwise the GTP molecule will not be able to fit in Gαi1myr’s active site ., The last criterium is the presence of similar Gαi1myr:AC5 complexes of the selected complex in all top-ten docking results which increases the probability that complexation of the two proteins is not conformation specific , but is robust as similar complexes can be obtained using different conformations of 
Gαi1myr ., The Gαi1myr:AC5 complex was used to simulate the protein complex for 2 . 5 μs at 310 K and 1 bar using a Nosé-Hoover thermostat and an isotropic Parrinello-Rahman barostat ., In the active site of Gαi1myr one Mg2+ ion and a GTP molecule are present ., In order to closer mimic an AC5 system with which ATP or a product such as pyrophosphate would be able to interact , a Mg2+ ion was added to the active site of AC5 ( see S1 Appendix ) ., Additionally , about 68 000 water molecules and 150 mM KCl are present in the simulated system ., The force fields used for the protein and the water molecules are AMBER99SB 31 and TIP3P 32 , which were employed by Gromacs 4 . 6 . 6 33 , 34 to perform the runs ., For GTP , the force field generated by Meagher et al . was used 35 ., The adjusted force field parameters for the K+ ions and the Cl- ions were taken from Joung et al . 36 ., The Mg2+ ion parameters originated from Allnér et al . 37 and the parameter set for the myristoyl group was taken from reference 25 ., The charges for the myristoyl group were obtained with Gaussian 09 38 based on Hartree Fock calculations in combination with a 6-31G* basis set and using the AMBER RESP procedure 39 ., Appropriate atom types from the AMBER99SB force field were selected to complete the myristoyl description ., Electrostatic interactions were calculated with the Ewald particle mesh method with a real space cutoff of 12 Å ., Bonds involving hydrogen atoms were constrained using the LINCS algorithm 40 ., The time integration step was set to 2 fs ., The free AC5 system was simulated with the same setup as the Gαi1myr:AC5 complex ., The system was solvated in 30 000 water molecules and a 150 mM KCl concentration ., The initial location of the Mg2+ ion in the active site of the enzyme was the same as in the Gαi1myr:AC5 complex system ., Multiprot 41 and VMD 42 were used to align protein structures ., Uniprot 43 was used to align protein sequences ., Images were prepared with VMD 42 ., 
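The run parameters listed above map onto a GROMACS `.mdp` parameter file roughly as follows. This is an illustrative fragment only, using standard GROMACS option names; values are taken from the setup described in the text, and all other required settings (output control, neighbour searching, temperature-coupling groups, etc.) are omitted:

```
integrator            = md
dt                    = 0.002            ; 2 fs time step
tcoupl                = nose-hoover      ; thermostat
ref-t                 = 310              ; K
pcoupl                = parrinello-rahman
pcoupltype            = isotropic
ref-p                 = 1.0              ; bar
coulombtype           = PME              ; particle mesh Ewald electrostatics
rcoulomb              = 1.2              ; 12 A real-space cutoff (nm)
constraint-algorithm  = lincs
constraints           = h-bonds          ; constrain bonds involving hydrogen
```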
The initial conformation of the Gαi1myr:AC5 complex suggests that Gαi1myr’s proposed interaction site (see S1 Appendix) affects the conformation of C1 in a different way than Gαs stabilises the C2 domain (Figs 1, 2 and S3 Fig). Unlike Gαs, Gαi1myr is not located between the helices of AC5’s catalytic domain, but appears to clamp the C1 domain into its inactive conformation. Gαi1myr is positioned around AC5’s α3, interacting with α1, α2, and α3 via its switch I, II and III regions together with the C-terminal domain of αB (Fig 2 and S3 Fig). Since C1’s α1 helix appears to decrease its distance with respect to the C2 domain when an ATP analog, adenosine 5-(α-thio)-triphosphate (ATPαS), is present in the active site (S2 Fig), the interactions between Gαi1myr and C1’s α1 in the Gαi1myr:AC5 complex could suggest that one way by which Gαi1myr is able to inhibit ATP’s conversion is by preventing C1’s α1 from rearranging upon ATP binding. The simulation of Gαi1myr:AC5, in comparison with the free AC5 trajectory and the Gαs:AC X-ray structure, demonstrates that the first step in decreasing AC5’s activity in the apo form is the relocation of the β7-β8 loop (Fig 7, step one). In fact, the β7-β8 loop seems to have an important role in the stimulatory response, since the presence of Gαs leads to the stabilisation of the loop, forming ATP’s binding site (Fig 7, starting conformation of AC in left panel) [23]. This loop conformation is lost as soon as Gαs is absent, as observed for both free AC5 and Gαi1myr:AC5. In step two of Fig 7 the Gαi1myr:AC5 complex undergoes a rearrangement in the C2 domain (absent in free AC5), which leads to a further perturbation of AC5’s active site. The classical molecular dynamics simulations also show that in the presence of Gαi1myr, there appears to be a decrease in probability for Gαs association (Fig 7, step 3 and Fig 6). Hence, through these rearrangements Gαi1myr could deactivate apo AC5 as well
as decrease the probability of reactivation via Gαs. The results of this study suggest that Gαi1myr deactivates the apo form of adenylyl cyclase type 5 by constraining C1’s active-site region. Inhibition and stimulation of AC5 appear to follow different pathways. While Gαs binds between the helices of C2, increasing the stability of the C1:C2 dimer, Gαi1myr is able to clamp the helices of the C1 domain, promoting an inactive conformation of AC5’s catalytic domains and a possible decrease in affinity for Gαs on the C2 domain. Structurally, Gαs and non-myristoylated Gαi1 are very similar; however, when myristoylation has taken place on the N-terminus of Gαi1, the conformation of the subunit changes drastically, leading to a structure that differentiates itself from the active Gαs subunit and enables the protein to function in an inhibitory fashion, as is shown via the presented classical MD simulations. Hence, in line with experimental studies, myristoylation appears to be crucial for Gi’s function, and this demonstrates how important even relatively small changes to a protein structure can be for its function.
| Introduction, Materials and methods, Results and discussion | Adenylyl cyclase (AC) is an important messenger involved in G-protein-coupled-receptor signal transduction pathways, which is a well-known target for drug development. AC is regulated by activated stimulatory (Gαs) and inhibitory (Gαi) G proteins in the cytosol. Although experimental studies have shown that these Gα subunits can stimulate or inhibit AC’s function in a non-competitive way, it is not well understood what the difference is in their mode of action, as both Gα subunits appear structurally very similar in a non-lipidated state. However, a significant difference between Gαs and Gαi is that while Gαs does not require any lipidation in order to stimulate AC, N-terminal myristoylation is crucial for Gαi’s inhibitory function, as AC is not inhibited by non-myristoylated Gαi. At present, only the conformation of the complex including Gαs and AC has been resolved via X-ray crystallography. Therefore, understanding the interaction between Gαi and AC is important, as it will provide more insight into the unknown mechanism of AC regulation. This study demonstrates via classical molecular dynamics simulations that the myristoylated Gαi1 structure is able to interact with apo adenylyl cyclase type 5 in a way that causes inhibition of the catalytic function of the enzyme, suggesting that Gα lipidation could play a crucial role in AC regulation and in regulating G protein function by affecting Gαi’s active conformation.
| Communication between cells is essential for the survival of any multicellular organism. When these mechanisms cannot function properly, diseases can occur such as heart failure or Parkinson’s disease. Understanding cell communication is therefore crucial for drug development. Important proteins in cellular signalling are the ones that initiate mechanisms in the cell after the signal of an extracellular trigger is transported from outside to inside the cell. G proteins (GPs) are an example of such proteins. Experimental studies have shown that GPs can perform stimulatory or inhibitory functions; however, it is not well understood what the difference is in their mode of action, especially as they are structurally very similar. Adenylyl cyclase (AC) is an enzyme which can be stimulated or inhibited by GPs, depending on which type of GP is active. Hence, AC is a good candidate for investigating the difference in function between GPs. However, only the structure of the stimulatory GP interacting with AC is known. Here, we investigate for the first time the effect of the interaction of an inhibitory GP with AC via classical molecular dynamics simulations, in order to obtain a better understanding of the difference between stimulatory and inhibitory GP association and AC regulation.
| medicine and health sciences, molecular dynamics, diagnostic radiology, crystal structure, enzymes, condensed matter physics, enzymology, protein structure, lyases, crystallography, g protein coupled receptors, research and analysis methods, solid state physics, imaging techniques, proteins, chemistry, transmembrane receptors, molecular biology, physics, protein structure comparison, biochemistry, signal transduction, biochemical simulations, radiology and imaging, diagnostic medicine, cell biology, adenylyl cyclase, genitourinary imaging, biology and life sciences, physical sciences, computational chemistry, computational biology, macromolecular structure analysis | null |
1,425 | journal.pcbi.1002188 | 2,011 | Neural Computation via Neural Geometry: A Place Code for Inter-whisker Timing in the Barrel Cortex? | A fundamental question in computational neuroscience asks how the brain represents the relative timing of stimuli as they move between sensory receptors, e.g. as a light source moves relative to the retina, or as contact moves between touch sensors on the fingertip. For over 60 years Jeffress’ place theory [1] has remained the dominant model. The idea is that coincidence detector neurons receive input from sensors after delays governed by the distance of the neuron from either sensor. The inter-sensor time difference is encoded by the location of neurons that are active because their connection delays exactly compensate the inter-sensor stimulation interval. The place theory therefore suggests an important role for neural geometry in computing the motion of sensory stimuli. Strong support for Jeffress’ place theory has been provided by a number of studies of midbrain neurons in auditory specialists like the barn owl, who locate sound sources by resolving small differences in the arrival time of sounds at either ear (see ref. [2] for a review). Evidence from the mammalian auditory system is less conclusive because, for example, rabbit auditory cortex neurons are tuned to inter-ear time differences that are too long to attribute to inter-neuron distances alone [3] (see also refs. [4, 5], and ref.
[6] for an alternative mechanism based on slow lateral connections). However, few studies have investigated how inter-sensor time differences might be resolved in specialist mammalian sensory systems. Tactile specialists like rats, mice, shrews, and seals determine the form and motion of tactile stimuli using prominent arrays of whiskers (vibrissae) on the face [7, 8]. For example, shrews hunting in the dark can use their whiskers to localise particular body-part shapes on fast-moving prey animals [9]. Specific to the whisker system is a precise topographic correspondence between the individual sensor and its neural representation. Deflection of adjacent whiskers A and B on the face evokes the largest-amplitude and shortest-latency responses in adjacent cortical columns A and B in the somatosensory (barrel) cortex. This precise mapping, as well as observations of sub-millisecond temporal precision throughout [10–12], makes the whisker-barrel system ideal for exploring the impact of neural geometry on neural computation. A consistent finding across studies in the rat and mouse somatosensory cortex is that responses vary with the time interval between adjacent whisker stimulation [13–24]. A useful metric for comparing the response to a two-whisker stimulus with the response to the individual whisker deflections is the facilitation index [17], defined as ‘the response to paired deflection of whiskers A and B divided by the sum of the response to deflection of whisker A deflected alone and the response to whisker B deflected alone’, i.e. FI = R_AB / (R_A + R_B). In layer 2/3 barrel cortex (L2/3) in particular, paired stimuli in which the adjacent whisker deflection precedes by 20– typically evoke sublinear responses (FI < 1). For a range of near-simultaneous deflections, a number of studies have also reported supralinear responses (FI > 1), again particularly in L2/3 neurons [16–18, 22, 23] (but see ref. [25]). Interestingly, Shimegi et al.
[18] reported that septa-related neurons in L2/3, located at the midline area between two barrels, were more likely to show response facilitation for short-interval stimuli, whereas barrel-related neurons were more likely to show response suppression by prior deflection of the distal whisker at longer intervals (see Figure 1). Plots of the relationship between the inter-whisker interval and the response magnitude for individual neurons showed evidence of tuning to particular short intervals. Together these results suggest that the location of the L2/3 neuron relative to the underlying barrel geometry is important in determining its response to a two-whisker stimulus. One explanation for the different responses of barrel-related and septa-related neurons, as summarised in Table 1, is that they reflect the operation of different mechanisms for integrating adjacent-whisker signals in distinct barrel and septal circuits (see refs. [26–28]). However, an alternative hypothesis, inspired by the place theory, is that the differences reflect an underlying continuum of responses, which are determined by the location of the neuron with respect to the two cortical columns. This hypothesis would allow for, although it would not require, an essentially homogeneous population in L2/3. According to this alternative hypothesis, the relationship between the inter-whisker deflection interval and the facilitation index in L2/3 neurons may be determined by differences in the arrival times of synaptic inputs that originate from either barrel. These differences may be attributed to inter-soma distance-dependent delays in the feed-forward projection from the major input in layer 4 barrel cortex (L4). This hypothesis is supported by estimates of the speed of the projection between L4 and L2/3 neuron pairs that are relatively slow, around 0.2 meters per second for excitatory and inhibitory post-synaptic neurons [29, 30]. In this paper we show that simulated barrel cortex neurons that receive synaptic inputs with onset times constrained to embody this hypothesis can account for all of the trends relating to the stimulus interval in the data of ref. [18]. We show that a natural prediction of the model is the existence of a topographic mapping of the inter-whisker deflection interval across the surface of L2/3. Specifically, supralinear population responses will peak at short non-zero intervals in neurons located closer to the barrel representing the later of the two deflected whiskers. The responses of individual L2/3 neurons satisfy the basic requirements for a motion detector, and across the population these responses encode a range of stimulus motion velocities. Results therefore suggest that two-whisker timing is represented by a place code in L2/3 barrel cortex. More generally, the lateral displacement of active neurons due to distance-dependent delays on projections between cortical columns can be used to compute the sequence and timing of events between the sensory stimuli represented by activity in those columns. The results are interpreted as evidence in support of the place theory as a general model of cortical processing of spatiotemporal information. We hypothesise that distance-dependent delays associated with inter-columnar projections in sensory cortex can be used to extract the relative timing of sensory events. Specifically, delays in the projection from layer 4 (L4) to layer 2/3 (L2/3) barrel cortex might generate selectivity to the inter-whisker deflection interval for adjacent whiskers. To test the hypothesis, the latencies of synaptic inputs to a leaky integrate-and-fire neuron were constrained to reflect the range of geometries that characterise the L4 to L2/3 projection. To validate the model, we recreated an adjacent-whisker paired-deflection study
[18], and compared responses of neurons in different cortical locations to stimuli in which the whiskers were deflected through a range of intervals. The simplified model is based on three main assumptions, which are described with respect to the validation data in terms of adjacent whiskers A and B, but which in principle apply to a general model of cortical responses to arbitrarily complex multi-whisker deflection patterns. The first assumption is that, upon whisker stimulation, inputs to L2/3 tend to originate from L4 neurons at the center of the corresponding barrel in L4. Therefore, in the model, the input layer L4 is collapsed down to just two point sources, with activity at each source representing the deflection of the corresponding whisker A or B. The second assumption is that the excitatory and inhibitory synaptic inputs evoked by deflection of whisker A and by deflection of whisker B arrive at a population of L2/3 neurons situated above and between corresponding barrels A and B.
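The collapsed geometry above, combined with the distance-dependent delay hypothesis, can be sketched in a few lines of Python. This is a minimal sketch under assumed round-number values: the text quotes only ~0.2 m/s for the excitatory projection, and requires that inhibition travel faster with a fixed extra onset latency; the barrel positions, layer separation, inhibitory speed, and latency below are illustrative assumptions.

```python
import math

# Illustrative constants -- assumptions for this sketch, not values taken
# verbatim from the paper.
A_POS, B_POS = -150e-6, 150e-6  # L4 point sources at the barrel centers (m)
Z = 300e-6                      # vertical L4 -> L2/3 separation (m)
V_EXC = 0.2                     # excitatory connection speed (m/s)
V_INH = 0.33                    # inhibitory connection speed (m/s), faster
C_INH = 1e-3                    # fixed interneuron spike-generation delay (s)

def onset_times(x, iwi):
    """PSC onset times at an L2/3 neuron at horizontal position x (m).

    iwi is the inter-whisker interval: the deflection time of whisker A
    relative to whisker B, which is always deflected at t = 0.
    Returns (exc_A, inh_A, exc_B, inh_B) in seconds.
    """
    d_a = math.hypot(x - A_POS, Z)  # straight-line distance to source A
    d_b = math.hypot(x - B_POS, Z)  # straight-line distance to source B
    return (iwi + d_a / V_EXC,
            iwi + d_a / V_INH + C_INH,
            d_b / V_EXC,
            d_b / V_INH + C_INH)
```

With these settings, close to a barrel center the excitation evoked by that whisker leads its own inhibition (a window in which the neuron can spike), whereas at longer inter-soma distances the faster inhibitory pathway overtakes the excitatory one.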
Therefore, in the model, each L2/3 neuron receives just four inputs, although each represents the total contribution of many similar synaptic contacts. The third assumption is that the time taken for a L2/3 neuron to register a synaptic input is proportional to the straight-line distance between the L4 and L2/3 neurons. Therefore, in the model, we assume that the time of arrival of each synaptic input is a linear function of the distance of the L2/3 neuron from either point source in L4, and we refer to the associated constant of proportionality as the connection speed. This simplified model of the neural geometry may deviate from the true situation. For example, if the signalling delays are due to the axonal propagation speeds, then delays could be modified by the morphology of L4 axons, which branch vertically and laterally into L2/3 [31, 32]. Delays could also be modified by particular branching patterns that vary systematically with the location of the neuron in the home barrel [33]. We choose not to explicitly model the variety of axonal morphologies, firstly to keep the model formulation simple, secondly because L4 to L2/3 signalling delays are well predicted by the straight-line inter-soma distance [29, 30, 34], and thirdly because post-hoc simulations which considered a laterally-branching axonal morphology did not significantly alter the results. Furthermore, recurrent interactions within L2/3 are not modelled explicitly, because they would occur subsequent to the initial activation of L2/3, and thus could only affect the afferent response after the critical first spike response has been determined (see Discussion). Similarly, modelling each L4 input source as a discrete representation of one whisker is justified because multi-whisker responses in L4 are thought to be due to latent contributions from intra-cortical mechanisms [19] (see Discussion). The following sections outline how each assumption is represented formally in
a model that we refer to as the distance-dependent delay hypothesis. The plausibility of each assumption, the impact of each simplification, and the alternatives to each are considered in Discussion. The thalamocortical volley of excitation from thalamus to L4 and then up into L2/3 [34, 35] is closely followed by a volley of disynaptic inhibition, mediated by a small number of interneurons in L4 [36], with a diverse range of morphologies [32]. We posit that the main excitatory input to L2/3 is derived from direct synaptic connections from excitatory neurons in L4, and the main inhibitory inputs are derived indirectly from excitation of L4 inhibitory interneurons. The circuit therefore consists of three connections: an excitatory connection from L4 to L2/3, an excitatory connection onto the L4 inhibitory interneuron, and an inhibitory connection from the L4 interneuron to the L2/3 neuron. According to the distance-dependent delay hypothesis each connection has an associated delay. The onset time of the direct excitatory synaptic input at the L2/3 neuron is proportional to its distance from the barrel center. To model the indirect inhibition through an inhibitory interneuron we use a time delay proportional to the L4 to L2/3 distance plus a constant time delay accounting for the distance of the interneuron and its spike generation time. The circuit therefore has three parameters: the speed of the excitatory pathway between L4 and the L2/3 target neuron ( ), the speed of the inhibitory pathway between L4 and the L2/3 target neuron ( ), and a fixed latency representing the delayed onset of the spike in the inhibitory interneuron ( ) relative to the onset of excitation in L4. For neurons in the barrel cortex, the principal whisker is typically defined as the one which, upon deflection, elicits the shortest-latency and/or the largest-amplitude response. Neurons of a particular barrel column tend to share the same principal whisker, the one
which on the face is isomorphic with the position of the barrel in the grid of barrels. For a given neuron all three criteria usually select the same whisker. These constraints can be built into the model if, for progressively longer inter-soma distances, whisker-evoked inhibition arrives progressively earlier than excitation. This pattern of delays requires that inhibitory connections are faster than excitatory connections, and that the onset of inhibition is delayed relative to the excitation. This is achieved in the model by setting and . In the analysis presented by Shimegi et al. [18], against which the model will be validated, L2/3 neurons were characterised by their horizontal location with respect to two underlying barrel columns. The geometry is shown in Figure 2. In the model, axes and refer to orthogonal axes of the plane tangent to the pia mater of the brain (i.e., the plane tangential to the cortical surface) [37]; specifically is aligned with barrels that correspond to a row of whiskers on the face, and is orthogonal in the ‘tangential plane’. The axis is normal to the tangential plane. Axes and will henceforth be referred to as the horizontal and vertical axes respectively. In the model, L2/3 neurons will be parameterised only by their horizontal location relative to the two input sources in L4. In effect, this means reducing the three spatial dimensions in which intra-cortical connections are defined to just two spatial dimensions by setting . In this way we can define the position of two sources in L4 at . Similarly we can describe L2/3 as a one-dimensional string and uniquely describe the location of individual L2/3 neurons along the string in terms of . For example the neurons at , , and are L2/3 neurons located directly above barrel A, above barrel B, and above the midline respectively. The Euclidean distance of each L2/3 neuron from the two sources can now be written in terms of : (1) (2) For the
analyses presented in Results, the input sources were located at and the two layers were separated by vertical distance . We will henceforth refer to and as inter-soma distances. Reducing the description of the neural geometry in this way makes interpretation of the behaviour of the model tractable, and it allows for a direct comparison with the available electrophysiological data. We note that using an alternative geometry has little impact on the main results, as considered in detail in Discussion. The L2/3 neuron receives excitatory and inhibitory synaptic inputs from each stimulated whisker. Thus, under two-whisker stimulation, the time of each input is given by: (3) (4) (5) (6) The inter-whisker interval ( ) is the time of deflection of whisker A, relative to whisker B, which is always deflected at time 0. Thus if whisker A was deflected before whisker B, if whisker B was deflected before whisker A, and if then the whiskers were deflected simultaneously. The relationship between the inter-soma distance and the onset time of excitation and inhibition is illustrated in Figure 3A. The connection speeds were chosen to be and , which are in the range of estimates derived from electrophysiological data [29, 30], but we note that similar analyses have estimated speeds as slow as [34]. The constant was chosen to delay the onset of inhibition relative to excitation by for the neuron located closest to either barrel center, i.e., . With the inter-soma distance constrained by the geometry of Equations 1 and 2, the input onset times, described by the linear functions in Figure 3A, become hyperbolic functions of the neuron location, as shown in Figure 3B. The model neuron is a simple integrate-and-fire neuron with inputs in the form of excitatory and inhibitory post-synaptic conductance changes (EPSCs and IPSCs). Parameters followed those reported by Puccini et al.
[38] as a guide for neurons in the barrel cortex. The time course of each input, following its onset at time , , or , was modelled as a normalised difference of two exponentials: (7) The normalisation term , where , ensures that the potential peaks at 1. For excitatory synapses and simulating AMPA receptor channel opening [39], and ensuring that excitatory inputs peak at . For inhibitory synapses and as used by Puccini et al. [38] to model GABA receptor channel opening, peaking later than the EPSC at as seen in electrophysiological data (e.g., ref. [40]). The maximum EPSC amplitude was and the maximum IPSC conductance amplitude was (similar to ref. [38]). The relative amplitude and time course of the excitatory and inhibitory post-synaptic currents are illustrated in Figure 3C. For the L2/3 neuron we used a standard leaky integrate-and-fire neuron [41], again with parameters guided by those from ref. [38]: (8) where the membrane time constant , the resting potential , the reversal potential for synapses of type inhibitory , and for excitatory synapses . The leak conductance was and hence the membrane resistance . Gaussian noise with standard deviation was added to the membrane potential at each time step. Integration was by the forward Euler method ( ). When the membrane potential reached a spike was recorded, and the membrane potential was set to . To anticipate how a L2/3 neuron might respond to independent deflections of either whisker, we first determine when the onset times of the EPSC and IPSC evoked by deflection of that whisker will be coincident. We derive the time of coincidence by setting the onset times to be equal and rearranging: (9) (10) Therefore we can determine that when and hence we would expect to see the largest responses to deflection of whisker A because the excitatory input precedes the inhibitory input. To test this, neurons through the range of locations were stimulated by applying a deflection to either
whisker A or whisker B in isolation. Analogous to the experimental procedure of ref. [18], each trial began prior to the onset of the first whisker deflection and ended after the onset of the second deflection. Spike counts were calculated over this time window for the results of all simulations; however, we note that spikes were precisely timed to the whisker stimuli and so this choice of time window is not critical for the behaviour of the model (see Figure S1). The spike rate is shown as an average over 50 trials in Figure 4A to allow direct comparison with the results of ref. [18], and averaged over 5000 trials for clarity in Figure 4B. As expected, neurons located closer to a particular barrel spike more often in response to deflection of the corresponding whisker. As the distance of the neuron from either source increases, the excitatory and inhibitory inputs evoked by the corresponding whisker register at the neuron closer together in time, and thus the window of opportunity in which the EPSC can cause a spike decreases. At longer inter-soma distances, the IPSC precedes the EPSC, and effectively silences the neuron. These observations agree with the notion of the principal whisker as that represented by the barrel closest to the neuron, and which evokes the shortest-latency and largest-amplitude response. Figure 4 shows the linear sum of the response to independent deflection of both whiskers. These values for the linear sum are later used to construct facilitation index scores from the average spike counts obtained in paired whisker-deflection trials. For independent deflections of either whisker, we have seen that the spike rate is dictated by the sequence and relative timing of the synaptic inputs. Responses to paired whisker deflection stimuli are more complex because they are dictated by four PSCs rather than two and also by the . However, similar analysis of the relative arrival times of PSCs can be used to anticipate these
responses. To this end it is useful to consider regions of the space of possible neuron locations and inter-whisker deflection intervals (henceforth – space, see Figure 5A) that are delineated by different orderings of arrival times of the four PSCs. These regions are delineated by loci representing coincident arrival of each possible pair amongst the four PSCs. Equations 9–10 represent two such pairs. As their solutions are not dependent on the , Equations 9–10 describe four loci, which when plotted are straight lines at constant values of that divide – space into five columns in Figure 5A. Solutions for the other four pairs of PSCs can be written as functions of as follows: (11) (12) (13) (14) The solutions to Equations 11–14 are also plotted in Figure 5A, and they further divide the columns into ‘rows’. For each region of the graph we can use the equations to state the sequence of inputs for each synaptic pair. This is done by setting all signs to signs in Equations 9–14. The eight inequalities that define each region of the graph can then be combined to give the order of all four synaptic PSCs, and the twenty-four possible PSC orderings take the form, for example, in the top-left region of – space shown in Figure 5A. Considering now only whether each synaptic event in the input sequence is excitatory or inhibitory, we can describe the input to the L2/3 neuron more simply. This effectively reduces the twenty-four PSC sequences to just six different orders in which excitation and inhibition can arrive at the neuron. Figure 5B shows how each of the six orderings delineates a zone in – space. For a range of short-interval stimuli, neurons situated near the midline receive both excitatory inputs before both inhibitory inputs. They receive inputs in the order , which can be read as ‘two excitations followed by two inhibitions’. This zone is coloured dark blue in Figure 5B. It is in this zone that we would expect to
observe the greatest spike rate because neither IPSC precedes the EPSCs. Notice that this zone is oriented diagonally in – space, and therefore neurons in different locations near the midline will prefer a range of (short) . Similarly we can expect that the greatest suppressive interactions will be displayed in the yellow ( ), brown ( ), and orange zones ( ), in which an IPSC event is always registered first. Of these zones the orange will be expected to yield the smallest suppression, as the second IPSC is preceded by both EPSCs. In the blue zone ( ) we might expect just one of the whisker deflections to evoke a response, as the second EPSC will be silenced by two preceding IPSCs. In the cyan zones ( ) both EPSCs are followed immediately by an IPSC. Therefore we might expect that if the two EPSC/IPSC pairs are separated sufficiently in time for the neuron to respond to them independently, i.e., if the first inhibition has little effect on the second excitation, then the response will resemble the linear sum of that evoked by either whisker deflected independently, and hence the facilitation index score here will be around one. Neurons through the range of locations were stimulated by applying paired deflections to whisker A and whisker B in sequence. By analogy with the experimental procedure of ref. [18], each trial began prior to the onset of the first whisker deflection and ended after the onset of the second. The spike rate is shown as an average over 50 trials in Figure 5C. As anticipated, the greatest activity was evoked in neurons around the midline ( ) when the whiskers were deflected through a range of short inter-whisker intervals ( ). Within this range neurons located left of the midline and therefore closer to barrel A responded maximally to slightly positive inter-whisker intervals where whisker B was deflected before whisker A.
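The paired-deflection response described above can be sketched end-to-end in Python: PSC onset times from the collapsed geometry, the normalised difference-of-exponentials PSC (Equation 7), and a forward-Euler leaky integrate-and-fire neuron (Equation 8). All numeric constants below are illustrative assumptions, since the source elides most parameter values; membrane noise is omitted so the sketch is deterministic, and only the structure of the model is reproduced.

```python
import math

# Illustrative constants -- assumptions for this sketch, not the paper's values.
A_POS, B_POS, Z = -150e-6, 150e-6, 300e-6  # geometry (m)
V_EXC, V_INH, C_INH = 0.2, 0.33, 1e-3      # speeds (m/s), interneuron latency (s)
TAU_R_E, TAU_D_E = 0.5e-3, 2e-3            # AMPA-like rise/decay (s)
TAU_R_I, TAU_D_I = 1e-3, 8e-3              # GABA-like rise/decay (s)
G_EXC, G_INH = 30e-9, 60e-9                # peak conductances (S), many contacts pooled
TAU_M, G_LEAK = 10e-3, 10e-9               # membrane time constant (s), leak (S)
E_REST, E_EXC, E_INH, V_TH = -70e-3, 0.0, -75e-3, -50e-3
DT = 0.1e-3                                # forward Euler step (s)

def psc(t, tau_r, tau_d):
    """Equation 7: difference of exponentials, normalised to peak at 1."""
    if t < 0.0:
        return 0.0
    t_pk = tau_d * tau_r / (tau_d - tau_r) * math.log(tau_d / tau_r)
    norm = math.exp(-t_pk / tau_d) - math.exp(-t_pk / tau_r)
    return (math.exp(-t / tau_d) - math.exp(-t / tau_r)) / norm

def run_neuron(x, iwi, t_stop=0.06):
    """Simulate an L2/3 neuron at horizontal position x (m) for a paired
    deflection with inter-whisker interval iwi (s); returns spike times."""
    d_a, d_b = math.hypot(x - A_POS, Z), math.hypot(x - B_POS, Z)
    exc = (iwi + d_a / V_EXC, d_b / V_EXC)            # EPSC onsets
    inh = (iwi + d_a / V_INH + C_INH, d_b / V_INH + C_INH)  # IPSC onsets
    t0 = min(exc + inh) - 5e-3   # start the window before the first onset
    v, spikes, t = E_REST, [], 0.0
    while t < t_stop:
        tt = t + t0
        g_e = G_EXC * sum(psc(tt - s, TAU_R_E, TAU_D_E) for s in exc)
        g_i = G_INH * sum(psc(tt - s, TAU_R_I, TAU_D_I) for s in inh)
        i_syn = g_e * (v - E_EXC) + g_i * (v - E_INH)
        v += DT / TAU_M * (E_REST - v - i_syn / G_LEAK)
        if v >= V_TH:            # spike: record, reset to rest (assumed)
            spikes.append(tt)
            v = E_REST
        t += DT
    return spikes
```

For a midline neuron and a simultaneous paired deflection (`run_neuron(0.0, 0.0)`), both excitations arrive before both inhibitions and the sketch produces spikes, consistent with the facilitatory ordering described for the dark blue zone.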
Neurons to the right of the midline and therefore closer to barrel B responded maximally when whisker A was deflected before whisker B at short intervals. For intervals longer than around in either direction, and for neurons further from the midline than around half a millimetre, responses were much smaller. In a region of – space roughly corresponding with the light blue zone in Figure 5B, responses were more variable at around 0.2 spikes per stimulus. These results from the full spiking model fit well with those expected based on the relative timing of the synaptic inputs. Thus changing the relative timing of the synaptic inputs with distance-dependent delays alters the response of the neuron to paired whisker stimuli in a predictable way. A major feature predicted by the simulation data is a mapping of short-interval stimuli to the location of the most active L2/3 neuron. The simulation data presented thus far suggest that distance-dependent delays in the L4 to L2/3 projection can generate a spatial encoding of the relative timing of whisker inputs for short-interval stimuli. But to what extent do these observations match up with experimental data? To answer this question we look first at the responses of individual model neurons to the range of different interval stimuli. Figures 6A and 6B show the average spike rate for an individual neuron located either close to barrel B or between barrels A and B respectively. The neuron in Figure 6A was located approximately to the right of the midline. Also indicated in the figure is the linear sum of the response of this neuron to either whisker deflected in isolation. Where paired stimuli evoke responses equal to this value, a facilitation index of 1 would be measured and we would conclude that no facilitatory interaction had occurred. Where it is less, suppression would have been measured, and where it is greater facilitation would have been measured. The neuron in Figure 6A shows no
facilitatory interaction when whisker B ( the principal whisker ) is deflected prior to the adjacent whisker A . However for slightly negative intervals strong facilitation was measured , with the average spike count exceeding the linear sum baseline three-fold or more around a peak when whisker A is deflected before whisker B . When whisker A precedes by more than the response is strongly suppressed and almost no spikes are evoked ., The suppression recovers towards the linear sum baseline for intervals exceeding ., For the example midline neuron shown in Figure 6B facilitation appears more symmetrical around the zero inter-whisker interval ., Facilitation peaks for simultaneous intervals and fluctuates around baseline for longer intervals in either direction ., The peak in the average spike count is larger than that for the previous neuron , as is the linear sum response used to compute the strength of its facilitatory interaction ., Equivalent plots for individual L2/3 neurons , found in refs ., 17 , 18 , 23 , 24 , display similar qualitative trends to those in Figure 6A and 6B , in terms of both the facilitatory interactions and of the average spike counts for independent and paired whisker stimuli ., In Figure 6C we group the L2/3 neurons by location as either above barrel A , above barrel B or in the septal region between the barrels ., This allows for a direct comparison between the simulation data ( Figure 6C ) and the available experimental data of ref ., 18 ( compare with Figure 1 ) ., The simulation data share many of the qualities of the experimental data , as summarised in Table 1 ., Septal neurons show a large facilitatory peak for near simultaneous paired whisker deflections and for longer intervals in either direction respond with an average , equivalent to the response to either independently deflected whisker ., Neurons located above barrel B display on average a lesser facilitatory peak at interval stimuli , are suppressed by prior deflection of 
whisker B , and display no facilitatory interactions when whisker B is deflected first ., Geometry in the model is symmetrical about the midline and therefore the responses are symmetrical about the zero inter-whisker interval ., Therefore the above barrel B population display the exact opposite interactions with respect to the interval compared with the above barrel A population ., This includes a lesser peak for interval stimuli not apparent in the electrophysiological data ., Notice too that the peak of the septal group in the experimental data is for a slightly negative inter-whisker interval ., We will shortly demonstrate how an extension to the model , which introduces asymmetries related to the direction in which each whisker is deflected , may account for these differences ., For now we note that the population response predicted by the model affords a good match to the experimental data ., Instead of asking how L2/3 neurons in particular locations respond to different interval stimuli , we can ask how particular interval stimuli are represented across the population of L2/3 ., It is particularly important to consider the population response because even the most effective stimuli typically elicit less than one spike per stimulus in any particular neuron , and so individual spikes yield ambiguous information about the stimulus 42 ., Figure 7 shows the distribution of average responses across the population for a range of positive intervals ., Each of the short inter-whisker deflection intervals is clearly associated with a tuning curve across the population , with a peak that shifts to the left ( negative ) and scales systematically with the increase in interval ., Negative intervals also evoke symmetrical results , i . e . 
, a shift in peak responses towards neurons on the right , but we do not show them in the figure for clarity ., Viewed in this way , it is clear that the model predicts the existence of a topographic map for the inter-whisker deflection interval across the surface of L2/3 barrel cortex ., According to the model , paired whisker stimuli should elicit supralinear responses and display a systematic shift in tuning across the population for stimulus intervals ranging to ., As well as the representation of the inter-whisker interval across cortical space , it is useful to consider how the stimulus is represented in the timing of spikes ., Inspection of maps for the spike timing revealed that in paired-whisker stimulations , spikes were precisely timed to the whisker stimuli ., Moreover the largest responses reflected a combination of the delayed response to the principal whisker , as well as the superposition of excitatory influences from both whiskers ( see Figure S1 ) ., Therefore the model predicts that the effects measured by ref ., 18 primarily operate on the first somatosensory-evoked spikes in L2/3 ., Barrel cortex neurons are selective for the direction in which the whiskers are deflected ., The mechanism thought to underlie directional selectivity in L4 neurons is similar to that which we have outlined for two-whisker timing , but with distances measured in degrees from the preferred stimulus direction 38 , 40 ., Several studies have suggested that direction preferences vary systematically within the barrel column , such that deflection of the principal whisker to the left or right is correlated with increased activity in neurons located to the equivalent left or right of the barrel column 43 , 44 ., Therefore we can model the effect of deflecting the whisker in either direction by moving the L4 point source for that whisker in either direction in L4 ., Accordingly , to represent a deflection of whisker A to the left ( away from whisker B ) we offset the point 
source in L4 that corresponds to whisker A by a fixed distance to obtain a new source location at ., Deflecting whisker A to the right means moving the point source to and similarly deflecting whisker B to the left or right means moving the second source to ., For two whiskers and two deflection directions , possible combinations are both deflections to the left ( leftwards ) , both right ( rightwards ) , A left & B right ( outwards ) , and A right & B left ( inwards ) . | Introduction, Materials and Methods, Results, Discussion | The place theory proposed by Jeffress ( 1948 ) is still the dominant model of how the brain represents the movement of sensory stimuli between sensory receptors ., According to the place theory , delays in signalling between neurons , dependent on the distances between them , compensate for time differences in the stimulation of sensory receptors ., Hence the location of neurons , activated by the coincident arrival of multiple signals , reports the stimulus movement velocity ., Despite its generality , most evidence for the place theory has been provided by studies of the auditory system of auditory specialists like the barn owl , but in the study of mammalian auditory systems the evidence is inconclusive ., We ask to what extent the somatosensory systems of tactile specialists like rats and mice use distance dependent delays between neurons to compute the motion of tactile stimuli between the facial whiskers ( or ‘vibrissae’ ) ., We present a model in which synaptic inputs evoked by whisker deflections arrive at neurons in layer 2/3 ( L2/3 ) somatosensory ‘barrel’ cortex at different times ., The timing of synaptic inputs to each neuron depends on its location relative to sources of input in layer 4 ( L4 ) that represent stimulation of each whisker ., Constrained by the geometry and timing of projections from L4 to L2/3 , the model can account for a range of experimentally measured responses to two-whisker stimuli ., Consistent with 
that data , responses of model neurons located between the barrels to paired stimulation of two whiskers are greater than the sum of the responses to either whisker input alone ., The model predicts that for neurons located closer to either barrel these supralinear responses are tuned for longer inter-whisker stimulation intervals , yielding a topographic map for the inter-whisker deflection interval across the surface of L2/3 ., This map constitutes a neural place code for the relative timing of sensory stimuli . | To perceive how stimuli move over sensor surfaces like the retina or the fingertips , neurons in the brain must report the relative timing of signals arriving at different locations on the sensor surface ., The rat whisker system is ideal for exploring how the brain performs this computation , because the layout of a small number of sensors ( whiskers ) maps directly onto the layout of corresponding columns of neurons in the sensory cortex ., Previous studies have found that neurons located between adjacent cortical columns are most likely to respond when the corresponding adjacent whiskers are stimulated in rapid succession ., These results suggest a link between the location of the neuron and the relative timing of sensory signals reported by its activity ., We hypothesized that , if the time taken for whisker signals to arrive at a neuron is related to its distance from each cortical column , then neurons closer to a particular column will report stimuli moving towards that particular whisker ., In a model approximating the geometry of cortical connections , responses of artificial neurons matched those of real neurons on a wide range of details ., These results suggest an important role for neural geometry in neural computation . | circuit models, auditory system, computational neuroscience, biology, computational biology, sensory systems, neuroscience, sensory perception, coding mechanisms | null |
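The direction-of-deflection extension described in the Results above — representing each whisker's deflection direction by shifting its L4 point source a fixed distance — can be made concrete with a small sketch. The source coordinates and the offset magnitude here are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the direction-dependent source offset: each
# whisker's L4 point source is shifted a fixed distance in the
# deflection direction. Positions and offset are assumed values.

def offset_sources(direction_a, direction_b, xa=-0.25, xb=0.25, offset=0.1):
    """Shifted L4 source positions for whiskers A and B.
    'left' shifts a source toward negative x, 'right' toward positive x."""
    shift = {"left": -offset, "right": +offset}
    return xa + shift[direction_a], xb + shift[direction_b]

# The four paired-deflection conditions named in the text:
conditions = {
    "leftwards":  ("left", "left"),
    "rightwards": ("right", "right"),
    "outwards":   ("left", "right"),  # sources move apart
    "inwards":    ("right", "left"),  # sources move together
}
```

With source A left of source B, the "outwards" condition moves the two sources farther apart and "inwards" moves them closer together, so the same distance-dependent delays as before now also encode deflection direction.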
1,412 | journal.pcbi.1000742 | 2,010 | Simultaneous Clustering of Multiple Gene Expression and Physical Interaction Datasets | Heterogeneous genome-wide datasets provide different views of the biology of a cell , and their rapid accumulation demands integrative approaches that exploit the diversity of views ., For instance , data on physical interactions such as interactions between two proteins ( protein-protein ) , or regulatory interactions between a protein and a gene via binding to upstream regions of the gene ( protein-DNA ) inform how various molecules within a cell interact with each other to maintain and regulate the processes of a living cell ., On the other hand , data on the abundances or expression of molecules such as proteins or transcripts of genes provide a snapshot of the state of a cell under a particular condition ., These two data sources on physical interaction and molecular abundance provide complementary views , as the former captures the wiring diagram or static logic of the cell , and the latter the state of the cell at a timepoint in a condition-dependent , dynamic execution of this logic 1 ., Researchers have fruitfully exploited this complementarity by studying the topological patterns of physical interaction among genes with expression profiles that are condition-specific 2 , periodic 3 , or correlated 4; and similarity of the expression profiles of genes with regulatory , physical , or metabolic interactions among them 5 ., Another line of research focuses on integrating the physical and expression datasets to chart out clusters or modules of genes involved in a specific cellular pathway ., Methods were developed to search for physically interacting genes that have condition-specific expression ( i . e . , differential expression when comparing two or more conditions , as in “active subnetworks” 6 ) , or correlated expression ( eg . 
subnetworks in the network of physical interactions that are coherently expressed in a given expression dataset 7–9 ) ., A challenge in expanding the scope of this research is to enable a flexible integration of any number of heterogeneous networks ., The heterogeneity in the connectivity structures or edge density of networks could arise from the different data sources used to construct the networks ., For instance , a network of coexpression relations between gene pairs is typically built using expression data of a population of samples ( extracted from genetically varying individuals , or individuals subject to varying conditions/treatments ) ., Whereas a network of physical interactions between protein or gene pairs is typically built by testing each interaction in a specific individual or in-vitro condition ., Towards addressing this challenge , we propose an efficient solution to a well-defined computational framework for combined analysis of multiple networks , each describing pairwise interactions or coexpression relationships among genes ., The problem is to find common clusters of genes supported by all of the networks of interest , using quality measures that are normalized and comparable across heterogeneous networks ., Our algorithm solves this problem using techniques that permit certain theoretical guarantees ( approximation guarantees ) on the quality of the output clustering relative to the optimal clustering ., That is , we prove these guarantees to show that the clustering found by the algorithm on any set of networks reasonably approximates the optimal clustering , finding which is computationally intractable for large networks ., Our approach is hence an advance over earlier approaches that either overlap clusters arising from separate clustering of each graph , or use the clustering structure of one arbitrarily chosen reference graph to explore the preserved clusters in other graphs ( see references in survey 10 ) ., JointCluster , an 
implementation of our algorithm , is more robust than the earlier approaches in recovering clusters implanted in simulated networks with high false positive rates ., JointCluster enables integration of multiple expression datasets with one or more physical networks , and is hence more flexible than other approaches that integrate a single coexpression or similarity network with a physical network 7–9 , or multiple , possibly cross-species , expression datasets without a physical network 11–13 ., JointCluster seeks clusters preserved in multiple networks so that the genes in such a cluster are more likely to participate in the same biological process ., We find such coherent clusters by simultaneously clustering the expression data of several yeast segregants in two growth conditions 14 with a physical network of protein-protein and protein-DNA interactions ., In a systematic evaluation of clusters detected by different methods , JointCluster shows more consistent enrichment across reference classes reflecting various aspects of yeast biology , or yields clusters with better coverage of the analysed genes ., The enriched clusters enable function predictions for uncharacterized genes , and highlight the genetic factors and physical interactions coordinating their transcription across growth conditions ., To integrate the information in multiple physical interaction and gene expression datasets , we first represent each dataset as a network or graph whose nodes are the genes of interest and edges indicate relations between gene pairs such as physical interaction between genes or gene products in physical networks , or transcriptional correlation between genes in coexpression networks ., Given multiple graphs defined over the same set of nodes , a simultaneous clustering is a clustering or partition of the nodes such that nodes within each set or cluster in the partition are well connected in each graph , and the total cost of inter-cluster edges ( edges with endpoints in 
different clusters ) is low ., We use a normalized measure to define the connectedness of a cluster in a graph , and take the cost of a set of edges to be the ratio of their weight to the total edge weight in the graph ., These normalized measures on clustering quality , described in detail in Methods , enable integration of heterogeneous graphs such as graphs with varying edge densities , and are beneficial over simpler formulations as described in detail in a previous study on clustering a single graph 15 ., Our work extends the framework used in the single graph clustering study to jointly cluster multiple graphs , such that the information in all graphs is used throughout the algorithm ., The algorithm we designed , JointCluster , simultaneously clusters multiple graphs using techniques that permit theoretical guarantees on the quality of the output clustering relative to the optimal clustering ., Since finding the optimal clustering is a computationally hard problem , we prove certain approximation guarantees that show how the cluster connectedness and inter-cluster edge cost measures of the clustering output by our algorithm are reasonably close to that of the optimal clustering ( as formalized in Methods , Theorem 2 ) ., The basic algorithm , to which these guarantees apply , works with sparse cuts in graphs ., A cut refers to a partition of nodes in a graph into two sets , and is called sparse-enough in a graph if the ratio of edges crossing the cut in the graph to the edges incident at the smaller side of the cut is smaller than a threshold specific to the graph ., Graph-specific thresholds enable search for clusters that have varying connectedness in different graphs ., The main steps in the basic JointCluster algorithm are: approximate the sparsest cut in each input graph using a spectral method , choose among them any cut that is sparse-enough in the corresponding graph yielding the cut , and recurse on the two node sets of the chosen cut , until well 
connected node sets with no sparse-enough cuts are obtained ., The JointCluster implementation employs a novel scaling heuristic to reduce the inter-cluster edge cost even further in practice ., Instead of finding sparsest cuts in input graphs separately as in the basic algorithm , the heuristic finds sparsest cuts in mixture graphs that are obtained from adding each input graph to a downscaled sum of the other input graphs ., The mixture graph with unit downscaling is the sum graph whose edge weights are the sum of weights of the corresponding edges in all input graphs , and the mixture graphs with very large downscaling approach the original input graphs ., The heuristic starts with mixture graphs with small downscaling to help control inter-cluster edges lost in all graphs ., But the resulting clusters are coarse ( eg . clusters well connected in some graphs but split into smaller clusters in the rest are not resolved further ) ., The heuristic then refines such coarse clusters at the expense of more inter-cluster edges by increasing the downscaling factor ( see Figure 1 ) ., The scaling heuristic works best when combined with a cut selection heuristic: if for a particular downscaling , more than one mixture graph yields a sparse-enough cut , choose among them the cut that is sparse-enough in the largest number of input graphs ( breaking ties toward the cut with the least cost of edges crossing the cut in all graphs ) ., A rigorous description of the algorithm with heuristics for advancing the downscaling factor and selection of cuts is provided in Methods ., Our method runs in an unsupervised fashion since algorithm parameters such as graph-specific thresholds are learnt automatically ., The recursive cuts made by our algorithm naturally lead to a hierarchical clustering tree , which is then parsed objectively to produce the final clusters 16 using a modularity score function used in other biological contexts 17 , 18 ., The modularity score of a cluster in a graph 
is the fraction of edges contained within the cluster minus the fraction expected by chance in a randomized graph obtained from degree-preserved shuffling of the edges in the original graph , as described in detail in Supplementary Methods in Text S1 ., To aggregate the scores of a cluster across multiple graphs , we take their minimum and use this min-modularity score as the cluster score ., The ( min-modularity ) score of a clustering is then the sum of the ( min-modularity ) scores of the constituent clusters ., We used simulated datasets to benchmark JointCluster against other alternatives:, ( a ) Tree: Choose one of the input graphs as a reference , cluster this single graph using an efficient spectral clustering method 16 to obtain a clustering tree , and parse this tree into clusters using the min-modularity score computed from all graphs;, ( b ) Coassociation: Cluster each graph separately using the spectral method , combine the resulting clusters from different graphs into a coassociation graph 19 , and cluster this graph using the same method ., Tree method resembles the marginal cluster analysis in 20 as it analyses multiple networks using the clustering tree of a single network ., The simulated test data was generated as in an earlier study 18 , under the assumption that the true classification of genes into clusters is known ., Specifically , one random instance involved generating two test graphs over nodes each , and implanting in each graph the same “true” clustering of equal-sized clusters ., A parameter controlled the noise level in the simulated graphs by controlling the average number of inter-cluster edges incident at a node ., The average number of total edges incident at a node was set at 16 , so measures the false positive rate in a simulated graph ., We used the standard Jaccard index , which ranges from 0 to 1 , to measure the degree of overlap between the true clustering and the clustering detected by the methods ., Please see 
Supplementary Text S1 for more details ., Figure 2 A shows the performance of different methods in recovering common clusters in graphs with the same noise level , averaged over random instances of for each value of the noise level parameter ., When the noise level is low ( or false positive rate at most 25% ) , the clusters output by all methods are close to the true set of clusters ( a Jaccard index close to ) ., But when the noise level is high ( or false positive rate 25%–50% ) , the cluster structure becomes subtler , and JointCluster starts to outperform other methods and achieves the best improvement in Jaccard index over other methods at ., Note that values where false positive rates are above 50% do not lead to a meaningful cluster structure , and are only shown for context ., Thus , within the setting of this benchmark , JointCluster outperformed the alternatives in recovering clusters , especially ones with a weak presence in multiple graphs ., To simulate real-world scenarios where the integrated networks could have different reliabilities , we benchmarked the methods on clustering graphs with different noise levels ., Instead of varying the common value of the graphs as above , we fixed the noise level of at and varied the of the other graph from 0 to 16 ., The relative performance of the two Tree methods , each using a different graph as its reference ( see Figure 2 B ) , showed that better clusters were obtained when the clustering tree of the graph with the lower noise level was used ., JointCluster integrated the information in the two graphs to produce a joint clustering tree , which when parsed yielded better clusters than Coassociation and single tree clusters for a larger range of the parameter values ( see Figure 2 B ) ., The empirical evaluation of JointCluster and competing methods was done using large-scale yeast datasets , and is described in detail next ., Expression of transcripts was measured in segregants derived from a cross between the BY and RM strains of the yeast Saccharomyces cerevisiae 
( denoted here as the BxR cross ) , grown under two conditions where glucose or ethanol was the predominant carbon source , by an earlier study 14 ., From these expression data , we derived glucose and ethanol coexpression networks using all 4 , 482 profiled genes as nodes , and taking the weight of an undirected edge between two genes as the absolute value of the Pearson correlation coefficient between their expression profiles ., The network of physical interactions ( protein-protein indicating physical interaction between proteins and protein-DNA indicating regulatory interaction between a protein and the upstream region of a gene to which it binds ) among the same genes or their protein products , collected from various interaction databases ( eg . BioGRID 21 ) , was obtained from an earlier study 9 ., The physical network was treated as an undirected graph after dropping interaction orientations , and contained 41 , 660 non-redundant interactions ., We applied JointCluster and other clustering methods to integrate the yeast physical and glucose/ethanol coexpression networks , and assessed the biological significance of the detected clusters using reference sets of genes collected from various published sources ., The reference sources fall into five diverse classes ( GO Process , TF Binding Sites , eQTL Hotspots , TF Perturbations , and Compendium of Perturbations ) ., We overlapped the detected clusters with the reference sets in these classes to differentiate clusters arising from spurious associations from those with genes coherently involved in a specific biological process , or coregulated due to the effect of a single gene , TF , or genetic factors ., The results are summarized using standard performance measures , sensitivity ( fraction of reference sets significantly enriched for genes of some cluster output by a method ) and specificity ( fraction of clusters significantly enriched for genes of some reference set ) , both reported as percentages for each reference class ., The significance cutoff for the enrichment P-value ( denoted hereafter ) is 0 . 
005 , after Bonferroni correction for the number of sets tested ., The sensitivity measures the “coverage” of different biological processes by the clusters , and the specificity the “accuracy” of the clusters ., We compared JointCluster with Coassociation 19 , single graph 16 , and single tree ( Tree ) methods , and when applicable with competing methods , Matisse 9 and Co-clustering 7 , which integrate a single coexpression network with a physical network ., All reported results focus only on clusters with at least 10 genes ., To provide context , we present results from clustering each network separately using the single graph method ( Glucose/Ethanol/Physical Only ) in Figure 3 A . Physical Only performs better than the other two methods wrt ( with respect to ) GO Process and TF Binding Sites , and Glucose/Ethanol Only fare well wrt eQTL Hotspots ., This relative performance is not surprising due to the varying levels of bias in the reference classes , and the different data sources used to construct the networks ., Though physical interactions between genes or gene products are known to be predictive of shared GO annotations , certain GO annotations inferred from physical interactions introduce bias ., The same ChIP binding data 26 was used to predict TF binding sites and protein-DNA interactions , so validation of clusters derived from the physical network using TF Binding Sites is biased ., Finally , the same expression data underlying the coexpression networks was used with the independent genotype data to define the eQTL hotspots 14 ., Hence the eQTL Hotspots class does not by itself provide a convincing validation of the coexpression clusters; however it can be used to understand the extent of genetic control of coordinated transcription and to validate clusters derived from networks comprising only physical interactions ., The reference classes offering truly independent validation of clusters are TF Perturbations and Compendium of Perturbations , and 
the three single graph methods perform similarly in these perturbation classes ., Integration of the yeast physical network with the glucose/ethanol coexpression networks was done to find sets of genes that clustered reasonably well in all three networks ., JointCluster performed a better integration of these networks than Coassociation for all reference classes except eQTL Hotspots ( Figure 3 B ) ., The enrichment results of single tree methods in Figure 3 B followed a trend similar to the single graph methods in Figure 3 A , reflecting the bias in the reference classes ., In the two truly independent perturbation classes , JointCluster showed better sensitivity than the other methods at comparable or better specificity ., In summary , though different single graph and single tree methods were best performers in different reference classes ( from Figures 3 A and 3 B ) , JointCluster was more robust and performed well across all reference classes characterizing diverse cellular processes in yeast ( Figure 3 B , first bar ) ., The clusters identified by JointCluster that were consistently enriched for different reference classes are explored in depth next ., The clusters in a clustering were ordered by their min-modularity scores , and identified by their rank in this ordering ., We highlight the biology and multi-network connectivity of the top-ranked clusters detected by JointCluster in an integrated analysis of the yeast physical and glucose/ethanol coexpression networks ., The member genes and enrichment results of all preserved clusters detected by JointCluster are provided as Supplementary Data in Text S1 ( see also Table 1 in Supplementary Text S1 for GO Process enrichment of many top-ranked clusters ) ., The preserved cluster with the best min-modularity score , Cluster #1 , comprised genes with a min-modularity score of ., The respective modularity scores in the physical , glucose , and ethanol networks were , , and , which were significantly higher than 
the modularity of a random set of genes of the same size in the respective networks ( see Figure 1 in Supplementary Text S1 for the cluster's connectivity in the three networks ) ., This cluster was significantly enriched for genes involved in the GO Processes , translation ( 1e-20; see Table 1 in Supplementary Text S1 ) , mitochondrion organization ( 1e-20 ) , mitochondrial translation ( 1 . 8e-17 ) and cellular respiration ( 3 . 1e-8 ) ., The enrichments noted for Cluster #1 are consistent with and even extend published results on this dataset ., The shift in growth conditions from glucose to ethanol triggers large changes in the transcriptional and metabolic states of yeast 28 , with the primary state being fermentation in glucose and respiration in ethanol ., The transcription of functionally related genes , measured across different timepoints during the shift , is highly coordinated 28 ., The coregulation of related genes is also evident from the clusters of coexpressed genes found under the glucose condition , using expression profiles of genetically perturbed yeast segregants from the BxR cross 29 ., Our results take this evidence a step further , because the coexpression of cluster genes is elucidated by genetic perturbations in both growth conditions ( regardless of the expression level changes of cluster genes between the conditions ) ., We also note that the top-ranked cluster is significantly enriched for genes linking to the eQTL hotspot region glu11 in Chromosome 14 14 ( 4 . 6e-25 ) , which highlights the role of genetic factors in the coregulation of genes involved in ( mitochondrial ) translation and cellular respiration ., A different perspective on yeast biology in the glucose medium is offered by Cluster #2 consisting of genes ( with a significant min-modularity score 0 . 00021; see Figure 1 in Supplementary Text S1 ) ., This cluster is significantly enriched for ribosome biogenesis ( 2 . 
4e-37; see Table 1 in Supplementary Text S1 ) , and related GO Process terms such as ribonucleoprotein complex biogenesis and assembly ( 9 . 4e-37 ) , ribosomal large subunit biogenesis ( 8 . 8e-35 ) , and rRNA processing ( 3 . 8e-33 ) ., Genes in this cluster significantly overlap with the perturbation signature of BUD21 , a component of small ribosomal subunit ( SSU ) processosome ( 4 . 1e-15 ) , and with genes whose expression links to genetic variations in the eQTL hotspot region glu12 in Chromosome 15 14 ( 7 . 9e-16 ) ., These results are consistent with the literature on the regulation of yeast growth rate in the glucose or ethanol medium , achieved by coregulation of genes involved in ribosome biogenesis and subsequent protein synthetic processes 28 ., To further understand the biological significance of these preserved clusters in physical and coexpression networks , we used the reference yeast protein complexes in MIPS 30 ( comprising literature-based , small-scale complexes of at least five genes , at level at most two in the MIPS hierarchy ) ., The enrichment of the joint clusters wrt this MIPS Complex class was % sensitivity and % specificity ., Of the clusters not enriched for any MIPS complex , some were significantly enriched for other functionally coherent pathways ( eg . Cluster #13 was enriched for amino acid biosynthetic process; see Table 1 in Supplementary Text S1 ) ., So the clusters detected by JointCluster overlapped with several known complexes or other functional pathways ., One of the goals of jointly clustering multiple networks is to identify subtle clusters: sets of genes that cluster reasonably well , but not strongly , in all networks ., We start with biologically significant clusters i . e . 
, clusters enriched for some reference set wrt all five reference classes , and test if any such cluster has a weak modularity score in some graph ., We identified 5 biologically significant clusters using JointCluster: Clusters #4 , #13 , #15 , #19 , and #28 ., Table 2 in Supplementary Text S1 shows the reference sets they were enriched for , and Figure 2 in Supplementary Text S1 the modularity scores of Clusters #4 and #28 ., Cluster #28 , the biologically significant cluster with the lowest min-modularity score , had genes and was enriched for the GO Processes , multi-organism process ( 2 . 5e-12 ) and conjugation ( 2e-10 ) ., This cluster's role in mating was further supported by its significant enrichment for perturbation signatures of STE12 ( 5 . 7e-9 ) and FUS3/KSS1 ( 8 . 1e-21 ) , because Ste12p is a TF regulating the expression of mating genes and is activated by the Fus3p/Kss1p kinases in the well-studied mitogen-activated protein kinase ( MAPK ) cascade 31 ., Such a cluster of well-studied genes was recovered just by the single graph method Physical Only , but not by Glucose/Ethanol Only ., Here we considered a cluster of genes to be recovered by a method if this cluster is significantly enriched for some cluster found by the given method ( as in reference set enrichment ) ., JointCluster was able to detect this cluster due to its high modularity in the physical network combined with its significant , albeit weak , modularity in the coexpression networks ( see Figure 2 in Supplementary Text S1 ) ., To explore more subtle clusters , we focused on the clusters identified by JointCluster that were enriched for at least four reference classes , instead of all five required above ., Clusters #52 and #54 had the two lowest min-modularity scores among such clusters , and were each recovered just by the Physical Only method , but not by Glucose/Ethanol Only ., Cluster #52 comprised of genes had a significant min-modularity score ( see Figure 3 in Supplementary
Text S1 ) , and was enriched for the GO Process , ubiquitin-dependent protein catabolic process ( 4 . 4e-23 ) ., RPN4 is a TF involved in regulation of the protein catabolic process 32 , and this cluster was significantly enriched for genes in the deletion signature of RPN4 ( 1 . 4e-9 ) and genes with predicted binding sites of RPN4 ( 1 . 8e-23; see Supplementary Data in Text S1 for other enrichments ) ., These examples reiterate how a combined analysis of multiple networks by JointCluster detects meaningful clusters that would be missed by separate clustering of the networks ., Despite the intense focus on elucidating yeast biology by many researchers , roughly 1 , 000 Open Reading Frames ( ORFs ) are still uncharacterized 33 ., Therefore , predicting the function of these ORFs is important to guide future experiments towards strains and perturbations that likely elucidate these ORFs 33 ., While there have been many network-based function prediction studies ( see survey 34 ) , our study provides a different perspective by using clusters preserved across multiple coexpression and physical networks ., Our prediction strategy , based on a module-assisted guilt-by-association concept 34 , annotates the uncharacterized ORFs in a cluster detected by JointCluster to the GO Process reference set for which this cluster is most significantly enriched ., To test the utility of these predictions for a well-studied process in yeast , we focused on clusters enriched for ribosome biogenesis ( Clusters #2 and #22; see Table 1 in Supplementary Text S1 ) ., Two ORFs in Cluster #2 , a top-ranked cluster discussed above , were marked as uncharacterized by SGD 35 ( April 2009 version ) : YER067W and YLR455W ., Our predictions for these ORFs have different types of support: YER067W is significantly correlated with 67 and 33 of the 76 genes in this cluster in glucose/ethanol expression datasets respectively ( Pearsons correlation test , Bonferroni corrected for the cluster size ) , and 
YLR455W has known protein-protein interactions with five other genes in the cluster , NOC2 , BRX1 , PWP1 , RRS1 , EBP2 , all of which were implicated in ribosome biogenesis ., Cluster #22 had 9 uncharacterized ORFs , YIL096C , YOR021C , YIL091C , YBR269C , YCR087C-A , YDL199C , YKL171W , YMR148W , and YOR006C ., Two of them ( YIL096C and YOR021C ) have predicted roles in ribosome biogenesis based on function predictions collected from the literature by SGD for some of the uncharacterized ORFs ., This lends support to the two predictions and leaves the other novel predictions for further validation ., All of the uncharacterized ORFs in Cluster #22 except YBR269C were significantly correlated with more than three-fourths of the 35 genes in the cluster in both glucose/ethanol expression datasets ( using the same criteria above based on Pearsons correlation test ) ., The predictions here were based on either support from the physical network ( for YLR455W ) or from both coexpression networks ( for the rest ) , and hence illustrates the advantage of using multiple data sources ., Of the 990 ORFs classified as uncharacterized by SGD ( April 2009 version ) , 524 overlapped with the genes used to build the yeast networks ., We could predict the function for 194 of them , by virtue of their membership in preserved clusters significantly enriched for some GO Process term ., Using single graph ( Glucose/Ethanol/Physical Only ) clusters in place of the preserved clusters detected by JointCluster yielded predictions for 143 , 148 and 247 uncharacterized ORFs respectively , reflecting the relative GO Process specificity of these methods ( Figures 3 A and 3 B ) ., The relative number of predictions from different methods should be viewed in context of the systematic evaluations above , which showed that whereas Physical Only performed best wrt GO Process , JointCluster produced clusters that were more coherent across all reference classes ., The predictions from JointCluster were 
also complementary to those from Physical Only , with the functions of only uncharacterized ORFs predicted by both methods ., The functions predicted using the preserved clusters are available as Supplementary Data in Text S1 , and point to well-studied biological processes that have escaped complete characterization ., To compare JointCluster against methods that integrate only a single coexpression network with a physical network , such as Matisse and Co-clustering , we considered joint clustering of a combined glucose+ethanol coexpression network and the physical network ., The glucose+ethanol network refers to the single coexpression network built from expression data that is obtained by concatenating the normalized expression profiles of genes under the glucose and ethanol conditions ., The results of different methods on this two-network clustering are shown in Figure 4 A . Since our results focus on clusters with at least 10 genes , we set the minimum cluster size parameter in Matisse to 10 ( from its default 5 ) ., All other parameters of Matisse and other competing methods were set at the default values ., The default size limit of 100 genes for Matisse clusters was used for JointCluster as well to enable a fair comparison ., Co-clustering didn't have a parameter to directly limit cluster size ., Despite setting its parameter for the number of clusters at 45 to get an expected cluster size of 100 , Co-clustering detected very few ( 26 ) clusters of size at least 10 genes , half of which were large with more than 100 genes ( including one coarse cluster with more than 800 genes ) ., So Co-clustering achieves greater specificity than other methods ( Figure 4 A ) at the expense of a coarser clustering comprising few large clusters ., JointCluster has sensitivity and specificity that are comparable to or slightly lower than those of Matisse across all reference classes except TF Binding Sites ., However , JointCluster produces clusters that cover significantly more genes than
Matisse ( 4382 vs 2964 genes respectively; see also Figure 4 A ) ., Matisse assumes that the physical network is of better quality , and searches for coexpression clusters that are each connected in the physical network ., This connectivity constraint excludes genes whose physical interactions are poorly studied or untested ., JointCluster does not use such a constraint when parsing the clustering tree into clusters , and hence identifies clusters supported to varying extents in the two networks , including ones with weak support in the physical network ., This could be a huge advantage in organisms such as human and mouse where the knowledge of physical interactions is far less complete than in yeast , especially for interactions that are tissue-specific or condition-specific ., The extreme examples among the roughly 1500 genes excluded by Matisse clusters were the physically isolated genes ( i . e . , genes that do n | Introduction, Results, Discussion, Methods | Many genome-wide datasets are routinely generated to study different aspects of biological systems , but integrating them to obtain a coherent view of the underlying biology remains a challenge ., We propose simultaneous clustering of multiple networks as a framework to integrate large-scale datasets on the interactions among and activities of cellular components ., Specifically , we develop an algorithm JointCluster that finds sets of genes that cluster well in multiple networks of interest , such as coexpression networks summarizing correlations among the expression profiles of genes and physical networks describing protein-protein and protein-DNA interactions among genes or gene-products ., Our algorithm provides an efficient solution to a well-defined problem of jointly clustering networks , using techniques that permit certain theoretical guarantees on the quality of the detected clustering relative to the optimal clustering ., These guarantees coupled with an effective scaling heuristic and the 
flexibility to handle multiple heterogeneous networks make our method JointCluster an advance over earlier approaches ., Simulation results showed JointCluster to be more robust than alternate methods in recovering clusters implanted in networks with high false positive rates ., In systematic evaluation of JointCluster and some earlier approaches for combined analysis of the yeast physical network and two gene expression datasets under glucose and ethanol growth conditions , JointCluster discovers clusters that are more consistently enriched for various reference classes capturing different aspects of yeast biology or yield better coverage of the analysed genes ., These robust clusters , which are supported across multiple genomic datasets and diverse reference classes , agree with known biology of yeast under these growth conditions , elucidate the genetic control of coordinated transcription , and enable functional predictions for a number of uncharacterized genes . | The generation of high-dimensional datasets in the biological sciences has become routine ( protein interaction , gene expression , and DNA/RNA sequence data , to name a few ) , stretching our ability to derive novel biological insights from them , with even less effort focused on integrating these disparate datasets available in the public domain ., Hence a most pressing problem in the life sciences today is the development of algorithms to combine large-scale data on different biological dimensions to maximize our understanding of living systems ., We present an algorithm for simultaneously clustering multiple biological networks to identify coherent sets of genes ( clusters ) underlying cellular processes ., The algorithm allows theoretical guarantees on the quality of the detected clusters relative to the optimal clusters that are computationally infeasible to find , and could be applied to coexpression , protein interaction , protein-DNA networks , and other network types ., When combining 
multiple physical and gene expression based networks in yeast , the clusters we identify are consistently enriched for reference classes capturing diverse aspects of biology , yield good coverage of the analysed genes , and highlight novel members in well-studied cellular processes . | genetics and genomics/bioinformatics, computational biology/systems biology, computational biology/genomics, computational biology/transcriptional regulation | null |
667 | journal.pcbi.1002883 | 2,013 | Important miRs of Pathways in Different Tumor Types | MicroRNAs ( miRs ) are small non-coding RNAs that interact with their gene target coding mRNAs ., Such small RNAs putatively inhibit translation by direct and imperfect binding to the 3′- and 5′-untranslated regions ( UTR ) 1 and exert expression control with other regulatory elements such as transcription factors 2 , 3 , 4 ., The elementary role of miRs in gene expression has been indicated in tissue- and organ-specific development 5 ., miRs also play an important role in tumors 6 , 7 , 8 , where over-expressed miRs might diminish the level of expression of targeted tumor suppressor genes 9 ., In turn , miRs may act as tumor suppressors , when their down-regulation leads to enhanced expression of targeted oncogenes 10 or are involved in various steps of the metastatic process 11 ., Generally , aberrant expression of miRs in cancers can arise from the deletion or mutation as well as methylation of miR coding regions 12 ., Furthermore , miRs may be located in common breakpoint regions and genomic areas of amplification and loss of heterozygosity 13 ., Such alterations of miR-expression levels have been implicated in the de-regulation of critical players in major cellular pathways , modifying the differentiation , proliferation and survival of tumor cells ., For example , miR-7 and miR-221/222 have been shown to be involved in the activation of the Akt and epidermal growth factor receptor ( EGFR ) signaling pathways in gliomas 14 , 15 while miR-34a was found to be a key regulator of p53 16 ., To provide a better understanding of the involvement of miRs in pathways , we computationally determined miRs that are significantly associated with molecular pathways ., In particular , we utilized gene expression profiles to determine a pathway specific enrichment score in diverse cancer types , such as glioblastomas , ovarian and breast cancers ., Using data of physical interactions 
between miRs and the 3′UTR of mRNAs we counted the numbers of leading edge genes ( LEG ) in each pathway that were targeted by a given miR ., We assumed that the topology of interactions between LEGs of pathways and miRs allows an assessment of the tumor-specific importance of the given miR for the expression of the underlying pathways ., Therefore , we used a machine learning approach to fit pathway-specific enrichment scores as a function of the corresponding number of LEGs that were targeted by an array of miRs ., Despite the diversity of the underlying cancer types , we obtained a large , overlapping set of important miRs ( IM ) that significantly influenced the regression process in all cancer types considered ., Furthermore , IMs that were important for an increasing number of pathways were enriched with literature-curated cancer miRs and differentially expressed miRs ., Such sets of IMs also coincided well with clusters of miRs that were experimentally indicated in numerous other cancer types ., Focusing on such an overlapping set of overall important miRs ( OIM ) in glioblastomas , ovarian and breast cancers , we investigated their interactions with LEGs in differentially expressed pathways ., We observed that such interactions were characterized by considerable changes in their expression correlations ., Such gains or losses of expression correlations indicated OIM/LEG pairs that may influence expression changes in the underlying pathways ., Using The Cancer Genome Atlas ( TCGA , http://cancergenome . nih . gov/ ) , we utilized 77 glioblastoma samples and 10 non-tumor control samples that provided matching gene and miR expression profiles ., We also used 77 samples of ovarian cancer and 8 non-cancer control samples , as well as 79 breast cancer and 19 non-cancer control samples ., Comparing disease and control samples , we determined differentially expressed miRs by a Student's t-test if FDR<0 .
01 ., Accordingly , we found 164 differentially expressed miRs in GBMs , 282 in ovarian and 82 in breast cancers ., We collected overlapping sets of 35 oncomiRs , 42 tumor suppressor-miRs 6 , 9 , 17 , 18 , 19 , 20 , 21 and 32 miRs that were involved in metastasis 11 , 19 , 20 , 21 , 22 ( Fig . S1 ) ., The HMDD database 23 collects reports from the literature that experimentally indicated a miR's involvement in different tumor types ., Specifically , we utilized sets of 45 miRs in glioblastomas , 81 in ovarian and 125 in breast cancer ., As a source of reliable protein pathway information , we used 429 annotated pathways from the Reactome database 24 ., Utilizing human-specific data from PicTar 25 , miRanda 26 , 27 and TargetScanS 28 we assembled 48 , 939 interactions between 386 miRNAs and 6 , 725 mRNAs , demanding that each interaction was reported by at least two sources 29 ., All interaction pairs are presented in Table S1 ., Using gene expression data of a cancer type , we applied GSEA 30 to calculate a normalized enrichment score of each pathway ., We represented each pathway by a profile of miRs that reflected the number of leading edge genes ( LEG ) in the underlying pathway a given miR interacts with ., Focusing on a given miR we normalized such numbers by a Z-score averaging over all pathways ., Finally , we used the random forest algorithm 31 to perform a regression of the pathways' normalized enrichment scores as a function of the miR profiles of Z-scores ., In each of 10 , 000 regression trees , we randomly sampled of all n miRs and of all x pathways 21 , 29 ., As for the assessment of a miR's importance for each pathway in the fitting process , we permuted enrichment scores and the number of targeted LEGs , calculating randomized local importance values for each miR/pathway pair ., We repeated the randomization process 100 times and constructed null-distributions of randomized importance scores for each miR/pathway pair ., Fitting such distributions with a
Z-test , we calculated P-values for each miR/pathway pair ., We corrected for multiple testing by calculating the corresponding false discovery rate ( FDR ) 32 and defined an important miR ( IM ) of a pathway if FDR<0 . 01 ., We grouped important miRs ( IM ) according to their number of pathways ., Specifically , we represented each group by IMs that had at least k pathways ., In each group we calculated the number of IMs with a certain feature i ( i . e . being differentially expressed or a cancer miR ) , ., Randomly assigning feature i to IMs we defined as the enrichment of IMs with feature i where was the corresponding random number of IMs with feature i among all IMs ., After averaging Ei over 10 , 000 randomizations Ei>1 pointed to an enrichment and vice versa , while Ei∼1 indicated a random process 33 ., Analogously , we determined the enrichment of differentially expressed pathways as a function of the number of their IMs ., Assuming ND cancer and NC non-tumor control samples , we calculated Pearsons correlation coefficient of an interacting miR i and gene j in the disease ( ) and control ( ) samples ., Subsequently , we Fisher transformed correlation coefficients into a Z-score reflecting the difference of correlation coefficients defined as ., Therefore , a positive ΔZ corresponded to a gain of correlation in the disease case and vice versa ., In the first step of our procedure ( Fig . 
1A ) , we applied Gene Set Enrichment Analysis ( GSEA ) 30 to determine a normalized enrichment score of each pathway , comparing expression profiles in cancer cases to their non-cancer controls ., Accounting for the expression characteristics of different cancer types , we represented each pathway by ‘leading edge genes’ ( LEG ) , a subset of genes that significantly drove the enrichment of a given pathway in the disease cases 30 ., Furthermore , we assembled 48 , 939 interactions between 386 miRs and 6 , 725 mRNAs ( Table S1 ) ., Pooling such miR-gene interaction data from PicTar 25 , miRanda 26 , 27 and TargetScan 28 , we demanded that each interaction was reported by at least two sources 29 ., Considering each pathway as a set of LEGs , we counted the number of such genes that a given miR interacted with ., Consequently , each pathway was further represented by a miR interaction profile , indicating the number of LEGs in a pathway a given miR interacted with ( Fig . 1B ) ., Averaging over all pathways , we normalized miR-specific entries in this matrix by a Z-score ., Representing each pathway by its normalized enrichment score , we applied the random forest algorithm , allowing the calculation of an importance value for each miR/pathway pair ., Such an importance measure reflects the impact of the given miR on the fitting process of the underlying pathway's enrichment score ., To assess the statistical significance of local importance scores we resorted to permutation tests ( Fig . 1C ) ., Randomizing both pathway enrichment scores and the miRs' numbers of targeted LEGs we generated null-distributions of importance scores for each miR/pathway pair ., Utilizing a Z-test we determined P-values and observed a pair of an important miR ( IM ) and a pathway if FDR<0 . 01 32 ., In glioblastomas , we found a total of 2 , 320 significant pairs between 167 IMs ( 49 . 6% out of all miRs that interacted with LEGs in 429 pathways ) and 265 pathways ( 61 .
8% out of all 429 pathways ) ., Furthermore , we observed that the set of pathways was significantly enriched with differentially expressed pathways as provided by GSEA ( FDR<0 . 01 ) applying Fisher's exact test ( P<10−12 ) ., Similarly , we found 2 , 564 pairs between 171 IMs ( 50 . 3% ) and 322 pathways ( 75 . 1% ) in ovarian cancer ( P<10−7 ) while 156 IMs ( 47 . 3% ) were linked to 309 pathways ( 72 . 0% ) through 2 , 041 pairs in breast cancer ( P<10−9 ) ., For a complete list of all IM/pathway pairs see Tables S2 , S3 , S4 ., In Fig . 1D , we observed that sets of IMs largely overlapped , allowing us to find 99 overall important miRs ( OIM ) , corresponding to 59 . 2% of IMs in GBM , 57 . 9% in ovarian and 63 . 5% in breast cancer ., In turn , we also found that pathways overlapped strongly ( Fig . S2A ) with 182 pathways present in all cancer types considered , a value that translated into 68 . 7% of pathways in GBMs , 56 . 5% in ovarian and 58 . 9% in breast cancers ., Furthermore , we observed a small overlap of 98 IM-pathway pairs that appeared in all cancer types considered ( Fig . S2B , Table S5 ) ., Since we determined the impact of each interacting miR on the fit of each pathway's enrichment score , an IM may be important to more than one pathway and vice versa ., In Fig . S3A , we observed a logarithmic decay in the frequency distribution of the number of pathways an IM targeted in all cancer types ., In turn , the frequency distribution of the number of IMs a given pathway is significantly linked to decreased exponentially as well ( inset , Fig . S3B ) ., Obtaining auxiliary cancer-related information , we collected 72 cancer-related miRs from the literature , consisting of overlapping sets of 35 onco- , 42 tumor suppressor- and 32 metastamiRs 6 , 9 , 17 , 18 , 19 , 20 , 21 ( Fig .
S1 ) ., Furthermore , we utilized the HMDD database 23 pooling experimental evidence that a miR was involved in given cancer types ., We also determined differentially expressed miRs with a t-test ( FDR<0 . 01 ) 32 using miR expression profiles of glioblastomas , ovarian and breast cancer ., In Table S6 we ordered IMs according to their corresponding number of pathways in each cancer type ., Specifically , IMs that were linked to an increasing number of pathways seemed to be enriched with literature curated cancer miRs , tend to be differentially expressed and experimentally indicated in the given cancer types ., On a more quantitative basis , we grouped IMs according to their number of pathways in a given cancer type ., In groups of IMs that were linked to at least k pathways we determined the number of literature-curated miRs ., In a null-model , we randomly picked sets of literature-curated miRs and determined their enrichment in each group as the ratio of the observed and expected numbers ., Fig . S4A suggests that groups of IMs with increasing numbers of pathways tend to be enriched with cancer miRs in all given cancer types ., Analogously , we determined the enrichment of differentially expressed miRs and observed that such groups of IMs were predominantly enriched with differentially expressed miRs as well ( inset , Fig . S4A ) ., Similarly , we calculated the enrichment of differentially expressed pathways as a function of the number of IMs of a given pathway ., Distributions in Fig . 
S4B suggested that pathways with an increasing number of IMs had a heightened tendency to be differentially expressed in all tumor types considered ., Utilizing data from the HMDD database 23 we collected information about miRs that were experimentally found to play a role in more than 90 cancer types ., Focusing on 25 cancer types with at least 25 different , implicated miRs ( including glioblastoma , ovarian and breast cancer ) we constructed a bipartite matrix , indicating if a given miR was experimentally reported in a certain cancer type ., Ward-clustering such a binary matrix , we observed two large clusters of miRs ( Fig . 2 ) ., Counting the number of different cancer types a miR was experimentally found in , we observed that such clusters consisted of the most frequently indicated miRs ( histogram , Fig . 2 ) ., Therefore , we expected that such clusters may be enriched with IMs ., Indeed , our separate sets of IMs in glioblastoma , ovarian and breast cancer overlapped well with this general pattern of miR involvement in different tumor types ., Applying a hypergeometric test we further checked if IMs were enriched among miRs that appeared in at least 3 different cancer types ., Indeed , 106 IMs in GBMs occurred in such a set of miRs ( P<10−5 ) , while we found 107 in ovarian ( P<10−4 ) and 100 in breast cancers ( P<10−4 ) ., Focusing on our set of 99 overlapping , overall important miRs ( OIM ) in GBMs , ovarian and breast cancers we also observed a significant overlap of 74 miRs ( P<10−5 ) ., Furthermore , literature curated cancer miRs were largely placed in previously mentioned clusters as well ., In particular , 38 cancer miRs overlapped with our set of 99 OIMs ( P<10−10 ) , suggesting that OIMs may play a central role in different cancer types ., Utilizing such an overlapping set of 99 OIMs , we focused on connections to differentially expressed pathways and found a total of 93 pathways in glioblastoma , 55 in ovarian and 87 in breast cancers ., 
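The Ward clustering of the binary miR-by-cancer-type matrix described above can be sketched in a few lines; this is a minimal agglomerative implementation using the Lance-Williams update on squared Euclidean distances, not the exact tool used in the analysis (which the text does not name), and the function name is illustrative:

```python
def ward_clusters(rows, n_clusters):
    """Agglomerative Ward clustering via the Lance-Williams update.

    rows: list of equal-length numeric profiles (e.g. one binary
    vector per miR over cancer types); returns a sorted list of
    clusters, each a sorted list of row indices."""
    n = len(rows)
    # squared Euclidean distances between all singleton clusters
    dist = {(i, j): sum((a - b) ** 2 for a, b in zip(rows[i], rows[j]))
            for i in range(n) for j in range(i + 1, n)}
    size = {i: 1 for i in range(n)}
    members = {i: [i] for i in range(n)}
    nxt = n  # id assigned to the next merged cluster
    while len(members) > n_clusters:
        i, j = min(dist, key=dist.get)  # closest pair under Ward's criterion
        dij = dist.pop((i, j))
        ni, nj = size.pop(i), size.pop(j)
        merged = sorted(members.pop(i) + members.pop(j))
        for k in list(members):
            dik = dist.pop((min(i, k), max(i, k)))
            djk = dist.pop((min(j, k), max(j, k)))
            nk = size[k]
            # Lance-Williams recurrence for Ward linkage
            dist[(k, nxt)] = ((ni + nk) * dik + (nj + nk) * djk
                              - nk * dij) / (ni + nj + nk)
        members[nxt] = merged
        size[nxt] = ni + nj
        nxt += 1
    return sorted(members.values())
```

Cutting the resulting hierarchy at two clusters on rows of miR-versus-cancer-type indicators would separate the frequently co-reported miRs from the rest, mirroring the two large clusters seen in Fig. 2.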
Mapping the corresponding links between OIMs and these pathways in glioblastoma we constructed a binary matrix ., Ward clustering allowed us to obtain two large clusters of either up- or down-regulated pathways that strongly corresponded to two groups of largely down- or up-regulated , differentially expressed OIMs ( Fig . 3A ) ., Down-regulated pathways mostly revolved around neurotransmitter-specific pathways while up-regulated pathways covered prominent signaling , regulation and transcription functions ( see for an enlargement ) ., As for ovarian ( Fig . S6A ) and breast cancers ( Fig . S7A ) , we obtained similar results ., Notably , we only observed interactions between OIMs and up-regulated pathways in ovarian cancers that largely revolved around signaling and regulation functions ., Using such pairs of OIMs and pathways in GBMs , we retrieved all interactions between OIMs and LEGs in the corresponding pathways that were placed in the previously found clusters ., Merging gene and miR expression data , we calculated Pearson's correlation coefficients using gene and miR expression profiles in glioblastoma and non-tumor control samples ., As a measure of the difference between expression correlation coefficients in the disease ( rD ) and non-tumor control cases ( rC ) we Fisher-transformed correlation coefficients into Z-scores and calculated the corresponding change in correlation , ΔZ ., A negative/positive value of ΔZ indicates a loss/gain of correlation in the disease case ., Focusing on interactions between OIMs and the corresponding LEGs of pathways in these clusters we observed bimodal distributions of ΔZs in glioblastoma ( Fig . 3B ) ., Notably , interactions between OIMs and LEGs that corresponded to down-regulated pathways and predominantly up-regulated miRs were characterized by a peak at ΔZ = −1 .
0 , pointing to a loss of expression correlation ., Focusing on miR/gene interactions in the cluster of up-regulated pathways and largely down-regulated miRs we observed a peak at ΔZ = +1 . 0 , pointing to a gain of correlation ., Analogously , we obtained such distributions for pairs of OIMs and LEGs in ovarian ( Fig . S6B ) and breast cancer ( Fig . S7B ) ., Focusing on GBMs , we mapped all interactions between OIMs and LEGs we found in the corresponding clusters if their correlation change was |ΔZ|>1 . 0 ., As for the cluster that revolved around down-regulated pathways and up-regulated OIMs ( Fig . 3C ) , we observed many interactions between differentially expressed OIMs and ITPR1 ( inositol 1 , 4 , 5-trisphosphate receptor type 1 ) with losses of expression correlations ., Overall important miRs mapped in this analysis included miR-34a , -27b , -128ab and -15b ., Focusing on the cluster composed of down-regulated pathways and largely up-regulated OIMs ( Fig . 3D ) , we found miR-21 and let-7i in interactions with losses of expression correlation and miR-137 in interactions that gained expression correlation ., We mapped miRs and associated pathways in ovarian ( Fig . S6C ) and breast cancers as well ( Fig . S7CD ) ., While we found a strong presence of signaling , transcription and translation-related pathways in ovarian cancers , we also observed pathways that revolved around transcription factor E2F and the SFRS1 protein ., Focusing on a cluster of up-regulated pathways in breast cancers and largely up-regulated miRs ( Fig .
S7C ) we found down-regulated AKT3 that was interacting with a couple of up-regulated miRs ., These results are discussed below ( see Discussion ) ., Although a growing appreciation of the importance of miRs in cancers is emerging , much remains unknown about their regulatory impact ., Current knowledge appears rather scattered , focusing on single interactions between miRs and target genes of interest in a given cancer type ., Here , we chose a different approach by utilizing pairwise interactions between miRs and target genes to identify combinations of important miRs ( IM ) and pathways in a given cancer type ., A major criterion that may influence our results is the accuracy of computational methods that predict interactions between miRs and the UTRs of genes ., Since such computational approaches suffer from false positives , we chose results of three different algorithms and demanded that each interaction was at least predicted twice , potentially allowing us to limit spurious signals 34 ., We modeled the expression change of pathways comparing sets of cancer to non-tumor control cases as a function of the number of interactions between leading edge genes that drive the expression of a given pathway and miRs ., We stress our initial assumption that the mere number of targeted LEGs in a pathway is a reasonable proxy to model the expression change of pathways in a disease , therefore allowing us to capture tumor-specific effects ., Although our approach did not account for any expression levels of miRs in given tumor types , we assume that the expression change of pathways is not only a matter of leading edge genes but of the binding miRs as well ., As such , we modeled expression change as a skeleton of miR interactions ., Since such links strongly influence the flow of molecular information , we conclude that the consideration of miR expression putatively won't override results that were largely imposed by the underlying topology of miR interactions .,
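The correlation-change measure ΔZ used in the preceding analysis can be sketched as follows; the Fisher transform atanh(r) is standard, while the optional standardization by the sqrt(1/(N−3)) standard errors is an assumption, since the text leaves the exact formula implicit:

```python
import math


def fisher_z(r):
    """Fisher transform: variance-stabilizes a Pearson correlation."""
    return math.atanh(r)


def delta_z(r_disease, r_control, n_disease=None, n_control=None):
    """Difference of Fisher-transformed correlations between disease
    and control samples; positive values indicate a gain of
    correlation in the disease case, negative values a loss.
    If sample sizes are given, standardize by the usual
    sqrt(1/(n-3)) standard errors of the transform (an assumption:
    the paper does not spell out this normalization)."""
    dz = fisher_z(r_disease) - fisher_z(r_control)
    if n_disease is not None and n_control is not None:
        dz /= math.sqrt(1.0 / (n_disease - 3) + 1.0 / (n_control - 3))
    return dz
```

Applied to an OIM/LEG pair measured in, say, 77 glioblastoma and 10 control samples, a strongly negative ΔZ flags an interaction whose coexpression collapses in the tumor, matching the |ΔZ| > 1.0 cutoff used above.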
Furthermore , such an approach allows us to determine combinations of important miRs that potentially influence such expression changes through their targeted LEGs in the given pathways ., Utilizing data of diverse cancer types , such as glioblastomas , ovarian and breast cancers , we clearly observed largely overlapping sets of IMs that were predominantly linked to differentially expressed pathways ., Confirming our initial hypotheses , IMs with many pathways were predominantly enriched with literature-curated cancer miRs and differentially expressed miRs ., Besides , such pathway-specific connections may be harnessed to predict meaningful sets of miRs that play a role in the underlying cancers ., Notably , overall important miRs ( OIM ) in all cancer types coincided well with the most frequently indicated cancer-related miRs in different cancer types , indicating the relevance of our predictions ., While the consideration of miR expression levels may change the number of IMs , such observations strongly suggest that a diminished set of OIMs will continue to show similar characteristics ., Focusing on specific details of glioblastomas , ovarian and breast cancers , such cancer types are typically stratified by certain subtypes as indicated by subtle changes in gene expression profiles ., While we acknowledge that pairs of pathways and important miRs may vary , we don't expect that the sets of IMs will dramatically change: considering that completely different cancer types with significant differences in their gene expression profiles provided largely overlapping sets of IMs , we expect that results that account for subtype information will be largely robust ., Focusing on our set of 99 OIMs , we identified all interactions with LEGs in differentially expressed pathways ., Comparing non-tumor control to disease cases , such interactions partially suffered a massive loss of ( anti- ) correlation , as indicated by multimodal distributions of expression
correlation changes ., Dramatic changes of the expression correlation of interactions may therefore be considered to significantly influence the expression of LEGs , contributing to the perturbation of pathways in the underlying cancer types ., As for qualitative observations of such OIM-LEG pairs , we found that many differentially expressed miRs appeared to interact with ITPR1 in GBMs ( Fig . 3C ) ., This receptor is central to many GBM-relevant signaling pathways , including the NGF and Plc-γ1 signaling pathways as well as insulin regulation and diabetes-related pathways ., miR-34a has been found to play an important role in glioblastoma as a tumor suppressor 16 , 35 while being a mediator of p53 14 , 36 , 37 , 38 , 39 in an interaction with a loss of expression correlation ., Important targets of miR-34a included members of the Notch family and the oncogene c-met 40 ., Specifically , we found an association of miR-34a with phospholipase C ( PLCB1 ) , which has recently been identified as a regulator of glioma cell migration 41 ., The result of miR-27b was rather unexpected , since this miR has been reported as up-regulated in gliomas 42 ., However , the observed discrepancy may result from the experimental setup , where the up-regulated miR-27b might have resulted from an inflammatory reaction 43 and originated from cells other than the glioma cells ., Moreover , miR-27b has been identified as a pro-angiogenic miR in endothelial cells 44 and found to be involved in tumor angiogenesis 45 ., Regarding the up-regulation of miR-27b in glioma cells , the cell culture conditions used in 42 promote cell differentiation ( medium containing fetal bovine serum ) and may artificially affect the miR expression profile ., Therefore , we believe that the down-regulation of miR-27b and its effects on calcium metabolism ( CALM3 , CACNB2 ) and exocytosis-related ( SNAP25 ) genes reflect the actual situation in GBMs ., The down-regulation of miR-128ab in human glioma and glioblastoma cell lines
has previously been reported 46 to increase the expression of ARP5 , Bmi-1 and E2F-3a , promoting neural stem cell renewal and regulating cell-cycle progression 46 ., Besides miR-128ab being an important regulator of brain cell proliferation , we indicated that miR-128ab may also affect the expression of genes involved in energy metabolism ( PFKM ) and transmembrane signal transduction ( SYT1 , EPB41 , ADCY3 ) ., miR-15b has been identified as an inhibitor of glioma growth while cyclin E1 has been found as a target of miR-15b , suggesting its role in cell cycle regulation 47 ., Here , we observed that serotonin receptor 4 ( HTR4 ) was down-regulated in glioblastoma samples , a process that is associated with up-regulation of miR-15b ., The cluster composed of down-regulated pathways and largely up-regulated OIMs ( Fig . 3D ) revealed miR-21 , let-7i , and miR-137 to be involved in interactions with losses and gains of expression correlation , respectively ., Putatively , miR-21 works as an ‘oncomiR’ , decreasing apoptosis in malignant cells , while down-regulated miR-137 is involved in the differentiation of glioma stem cells 48 ., miR-21 is implicated in the development of glioblastomas 49 , 50; its knockdown leads to reduced cell proliferation , invasiveness and tumorigenicity and to increased apoptosis 49 , 50 , 51 ., Furthermore , miR-21 was reported to be involved in at least three tumor-suppressive pathways , including the mitochondrial apoptosis , p53 and TGF-β pathways 50 , 52 , 53 , 54 ., Our results revealed further cancer-relevant target genes including STAG2 , CNOT6 , SOX2 , CDC25A and SFRS3 ( Fig . 
3D ) ., Specifically , STAG2 encodes a subunit of cohesin , a multimeric protein complex required for the cohesion of sister chromatids after DNA replication ., Furthermore , STAG2 is cleaved at the metaphase-to-anaphase transition to enable chromosome segregation 55 , 56 , 57 ., Chromosomal instability , which leads to aneuploidy , loss of heterozygosity , translocations and other chromosomal aberrations , is one of the hallmarks of cancer 57 ., Robust STAG2 expression has been shown in non-neoplastic tissues while significant fractions of glioblastomas had completely lost expression of STAG2 58 , suggesting that miR-21 may have both oncogenic and tumor-suppressive effects ., A link between miR-21 and the p53 pathway could be CNOT6 ( Ccr4a ) , a deadenylase subunit of the Ccr4-Not complex that is involved in mRNA degradation 59 ., Ccr4a , together with Ccr4b , has been identified as a key regulator of insulin-like growth factor-binding protein 5 , mediating cell cycle arrest and senescence through the p53-dependent pathway 60 , 61 ., Moreover , CNOT6 plays an important role in chemotherapy resistance to cisplatin through down-regulation of the DNA-damage response by targeting Chk2 62 ., miR-21 expression was shown to be up-regulated in response to ionizing radiation , while the inhibition of miR-21 enhanced the radiation-induced glioblastoma cell growth arrest and increased the level of apoptosis ., While this effect may be mediated by CDC25A 63 , our results suggested that CDC25A was targeted by miR-21 ( Fig . 
3D ) ., Additionally , Cdc25A appears to be a promising therapeutic target in glioblastomas , as its levels were reported to correlate with the Ki-67 labeling index 64 ., Another target gene that we identified to be controlled by miR-21 , SFRS3 , is a pro-oncogene involved in mRNA and rRNA processing ., Furthermore , SFRS3 has been reported as a critical factor for tumor induction , progression and maintenance 65 , 66 ., Lastly , the association of miR-21 with SOX2 , a marker for undifferentiated and proliferating cells with up-regulated expression in glioblastomas 67 , further underlined the importance of miR-21 for the pathogenesis of these tumors ., Let-7 appears to be a tumor suppressor that inhibits K-ras and C-myc 68 , 69 ., In glioblastomas , overexpression of let-7 has been shown to decrease cell proliferation 70 ., We found a link between let-7i and integrin β3 ( ITGB3 ) , whose pro-apoptotic role has been reported in glioma cells 71 ., miR-137 is also a putative tumor suppressor and is down-regulated in gliomas through a DNA hypermethylation mechanism 48 ., Cooperating with miR-124 , miR-137 may suppress the expression of phosphorylated Rb and CDK6 while inducing cell cycle arrest at G0/G1 in glioma cells 48 ., Our results further suggested glioma-relevant targets that are involved in AKT-mTOR signaling ( MAPKAPK2 and YBX1 ) ( Fig . 3D ) ., The significance of other associated partners , such as the genes that encode ribosomal proteins RPL28 and RPS13 , remains to be established ., Mapping OIMs and their pathways in ovarian cancer revealed interactions between several miRs and transcription factor E2F , particularly between E2F3 and miRs-148b , -124 and -34a ( Fig . 
S6C ) ., Indeed , miR-34a was shown to epigenetically govern the expression of E2F3 through methylation of its promoter 72 ., In our analysis , miR-132 and miR-212 gain expression correlation in interactions with SFRS1 , a proto-oncogene that is involved in pre-mRNA splicing with the ability to change the splicing patterns of crucial cell cycle regulators and suppressor genes ., Of particular interest is the observation that SFRS1 is up-regulated in many cancer types and is therefore a potential target for cancer therapy 73 ., Importantly , the role of these miRs and their interactions with target genes in ovarian cancers is not well understood ., However , indications exist that both miRs , which share a seed sequence , may play a role , since both were found to be down-regulated by promoter methylation that contributes to pancreatic cancers 74 ., The down-regulation of AKT3 upon interaction with several up-regulated miRs was the most notable observation in the cluster of up-regulated pathways in breast cancers ( Fig . S7D ) ., AKT kinases are regulators of cell signaling in response to insulin and growth factors and are involved in a wide variety of biological processes including cell proliferation , differentiation , apoptosis and tumorigenesis as well as glycogen synthesis and glucose uptake ., In our analysis , we found that AKT3 interacted with miRs-181ac , gaining expression correlation , while miR-15a , -16 and -20a lost expression correlations with their target genes ., In particular , miR-15a and -16 were already indicated as relevant in different cancers 75 ., Furthermore , members of the miR-181 family were shown to induce sphere formation in breast cancer cells 76 . 
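The gains and losses of expression correlation discussed throughout this section can be quantified, for example, with Fisher's z-transform of the Pearson correlation; that this is exactly the ΔZ used in the analysis is our assumption, so the sketch below is only illustrative:

```python
import math

def fisher_z(r):
    # Variance-stabilizing transform of a Pearson correlation r in (-1, 1).
    return 0.5 * math.log((1 + r) / (1 - r))

def delta_z(r_control, r_tumor):
    # Change of (z-transformed) miR/gene expression correlation
    # between control and tumor samples.
    return fisher_z(r_tumor) - fisher_z(r_control)

# An anti-correlated miR/target pair in controls (r = -0.8) that
# decorrelates in tumors (r = 0.2) shows a positive shift of about +1.3,
# comparable in magnitude to the |ΔZ| > 1.0 threshold used above.
dz = delta_z(-0.8, 0.2)
```

A symmetric loss of correlation (for instance r = +0.8 dropping to 0.2) would give a negative shift of the same size.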
| Introduction, Methods, Results, Discussion | We computationally determined miRs that are significantly connected to molecular pathways by utilizing gene expression profiles in different cancer types such as glioblastomas , ovarian and breast cancers ., Specifically , we assumed that the knowledge of physical interactions between miRs and genes indicated subsets of important miRs ( IM ) that significantly contributed to the regression of pathway-specific enrichment scores ., Despite the different nature of the considered cancer types , we found strongly overlapping sets of IMs ., Furthermore , IMs that were important for many pathways were enriched with literature-curated cancer and differentially expressed miRs ., Such sets of IMs also coincided well with clusters of miRs that were experimentally indicated in numerous other cancer types ., In particular , we focused on an overlapping set of 99 overall important miRs ( OIM ) that were found in glioblastomas , ovarian and breast cancers simultaneously ., Notably , we observed that interactions between OIMs and leading edge genes of differentially expressed pathways were characterized by considerable changes in their expression correlations ., Such gains/losses of miR and gene expression correlation indicated miR/gene pairs that may play a causal role in the underlying cancers . 
| We assume that a network of physical interactions between miRs and genes allows us to determine miRs that influence the expression of whole pathways in different tumor types ., Specifically , we represented each pathway by an enrichment score and an array of miRs counting the number of genes in the pathway a given miR can bind ., Despite the different nature of the considered tumor types , we obtained a large set of overlapping miRs using a machine-learning algorithm ., Such associated miRs were enriched with literature-curated cancer and differentially expressed miRs and also coincided well with clusters of miRs that were experimentally indicated in numerous other cancer types ., Focusing on such sets of miRs we observed that interactions with genes in differentially expressed pathways were characterized by massive gains/losses of expression correlations ., Such drastic changes of miR and gene expression correlation indicate miR/gene pairs that may play a causal role in the underlying cancers . | systems biology, regulatory networks, biology, computational biology | null |
933 | journal.pcbi.1005762 | 2,017 | Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models | Correlated activity between pairs of cells was observed early on in the history of neuroscience 1 , 2 ., Immediately the question arose whether there is a functional interpretation of this observation 3 , and this question is still with us ., Hypotheses range from synchronous activation of neurons to bind representations of features into more complex percepts 4–7 , to the involvement of correlations in efficiently gating information 8 ., Direct experimental evidence for a functional role of correlated activity is the observation that the synchronous pairwise activation of neurons significantly deviates from the uncorrelated case in tight correspondence with behaviour ., Such synchronous events have been observed in motor cortex 9 , 10 at time points of expected , task-relevant information ., In primary visual cortex they appear in relation to saccades ( eye movements ) 11 , 12 ., Another argument for the functional relevance of correlations is the robustness of signals represented by synchronous activity against noise 13 ., Non-Gaussian distributions of membrane potentials of neurons indeed point towards the synchronized arrival of synaptic events 14 , 15 ., An opposite view regards correlated activity merely as an unavoidable epiphenomenon of neurons being connected and influencing one another 16 ., In the worst case , both these views are partly true , prompting us to find ways to distinguish functionally relevant correlated events from the uninformative background ., In the context of experimental paradigms that perform repeated trials , the co-variability of neurons across trials has been termed “noise correlation” ., Recurrent network models are able to reproduce and explain the weak magnitude and wide spread across pairs of second-order 17–23 and higher-order correlations 24 , 25 ., These simple dynamical models effectively map the statistics 
of the connectivity to the statistics of the activity ., Even though they explain the uninformative part of correlated activity , it is unclear how to use them to distinguish this background from departures thereof ., The separation of the noise- or background correlation from functionally meaningful correlation is further hampered by the fact that the diverse dimensions of information processing are not completely orthogonal ., Indeed , correlation transmission may be modulated by changes of firing rate 9 ., Theory 26 , 27 confirmed this entanglement in the regime of Gaussian fluctuating membrane potentials ., The dynamical-model approaches just outlined pivot on a more or less realistic physical description of the network , with some stochastic features ., A complementary approach is also possible , fully pivoting on statistical models ., The latter try to predict and characterize neuronal activity without relying on a definite physical network model ., Statistical models have two convenient features ., First , intuitive statistical working hypotheses usually translate into a unique statistical model 28 , 29; this fact streamlines the construction and selection of such a model ., For example , the assumption that first- and second-order correlations recorded in an experiment are sufficient to predict the activity recorded in a new experiment uniquely selects a truncated Gaussian model 29 , 30 ., Second , a successful statistical model implicitly restricts the set of possible dynamical physical models of the network: only those reflecting the well-modelled statistical properties are acceptable ., Statistical models thus help in modelling the actual physical network structure ., A limit case of this kind of statistical model is obtained by choosing probability distributions having maximum entropy under the constraints of experimentally observed quantities 31 , 32; in neuroscience see e . g . 
33 ., The suitability of such maximum-entropy distributions for neuronal activities has been tested in various experimental and simulated set-ups ., For example , to explore the sufficiency of pairwise correlations or higher-order moments , or their predictive power for distribution tails e . g . 34–48 , and to characterize dynamical regimes 36 , 49–51 ., The probability distribution thus obtained , which includes the single-unit and pairwise statistics of the observation by construction , could help us to solve the background-correlation problem described above ., In assigning to every observed activity pattern a probability , we obtain a measure of “surprise” for each such pattern; this surprise measure e . g . 52 , 53 is related to the logarithm of the probability and thus to Shannon’s entropy ., Periods of activity with low probability correspond to large surprise: these patterns cannot be explained by the statistical properties that entered the construction of the probability distribution ., In this way , we are able to effectively differentiate expected , less surprising events from those that are unexpected , surprising , and functionally meaningful ., Computing the maximum-entropy distribution from moment constraints—usually called the inverse problem—is simple in principle: it amounts to finding the maximum of a convex function ., Hence optimization is straightforward 54 , 55 ., The maximum can be searched for with a variety of methods ( downhill simplex , direction set , conjugate gradient , etc . 56 , ch . 
10 ) ., The convex function , however , involves a sum over 2^N terms , where N is the number of neurons ., For 60 neurons , that is roughly twice the universe’s age in seconds , and modern technologies enable us to record hundreds of neurons simultaneously 57–60 ., Owing to the combinatorial explosion for such large numbers of neurons , the convex function cannot be calculated , not even numerically ., It is therefore “sampled” , usually via Markov-chain Monte Carlo techniques 61 , 62 ., In neuroscience the Glauber dynamics , also known as Gibbs sampling 61 , 63 , chap . 29 , is usually chosen as the Markov chain whose stationary probability distribution is the maximum-entropy one ., Boltzmann learning 64 is the iterative combination of sampling and search for the maximum , and is still considered the most precise method of computing a maximum-entropy distribution ., Alternatively one may try to approximate the convex function by an analytic expression , as done with the mean-field 65 , 66 , Thouless-Anderson-Palmer 66 , 67 , and Sessak-Monasson 68 , 69 approximations ., The goodness of these approximations is usually checked against a Boltzmann-learning calculation cf . 45 ., Moment-constrained maximum-entropy models have also been used 70 , 71 as generators of surrogate data , again via a Glauber dynamics ., Such surrogates are used to implement a null hypothesis to estimate the statistical significance level of correlations between spike trains 70 , 72–77 ., The pairwise maximum-entropy model is applicable to experimentally recorded activities of populations of a couple hundred neurons at most , so far; but its success , or lack thereof , cannot be automatically extrapolated to larger population sizes ., Roudi et al . 
78 gave evidence that the maximized Shannon entropy and other comparative entropies of such a model may present qualitatively different features above a particular population size ., In the present paper we discuss a feature of the pairwise maximum-entropy model that may be problematic or undesirable: the marginal distribution for the population-averaged activity becomes bimodal , and one of the modes may peak at high activities ., In other words , the maximum-entropy model claims that the population should fluctuate between a regime with a small fraction of simultaneously active neurons and another regime with a higher fraction of simultaneously active neurons; the fraction of the second regime can be as high as 90% ., This feature of the maximum-entropy model has been observed before in several theoretical studies that assumed a homogeneous neuronal population see e . g . 34 , 41 , 79 , 80 ., Our analysis has several points in common with Bohte et al . ’s 34 ., Bohte et al . wanted to see whether a maximum-entropy distribution can correctly predict the distribution of total activity , given only firing rates and pairwise correlations from a simulated network model as constraints ., They found that both the simulation and the maximum-entropy model yield a bimodal distribution of total activity within particular ranges of firing rates and correlations ., The fundamental difference from our work is that our experimental data do not show a bimodal distribution , but the maximum-entropy model wrongly predicts such bimodality from the measured rates and correlations ., More quantitatively , the pairwise correlation found in our data is much lower than that reported in Bohte et al . ; in particular , it seems to belong to the range in which their simulation yielded a unimodal distribution 34 , p . 
169 ., Their simulations therefore seem to corroborate that a second mode is biologically implausible in our correlation regime ., Amari et al . 79 notice the appearance of bimodal distributions for the averaged activity and analyse some of their features in the N → ∞ limit ., Their focus is on the correlations needed to obtain a “widespread” distribution in that limit ., Our focus is on the bimodality appearing for large but finite N , and we find some mathematical results that might be at variance with Amari et al . ’s ., They seem to find 79 , p . 135 that the Dirac-delta modes are at values 0 and 1; we find that they can also appear strictly within this range ., They say 79 , p . 138 that the “bigger peak” dominates as N → ∞; we find that the height ratio between the peaks is finite and depends on the single and pairwise average activity , and for our data is about 2000 as N → ∞—an observable value for recording lengths achievable in present-day experiments ., We provide evidence that the bimodality of the pairwise model is bound to appear in applications to populations of more than a hundred neurons ., It renders the pairwise maximum-entropy model problematic for several reasons ., First , in neurobiological data the coexistence of two regimes appears unrealistic—especially if the second regime corresponds to 90% of all units being simultaneously active within a few milliseconds ., Second , two complementary problems appear with the Glauber dynamics and the Boltzmann learning used to find the model’s parameters ., In the Glauber dynamics the activity alternately hovers about either regime for sustained periods , which is again unrealistic and rules out this method for generating meaningful surrogate data ., In addition , the Glauber dynamics becomes practically non-ergodic , and the pairwise model cannot be calculated at all via Boltzmann learning or via the approximations previously mentioned cf . 62 , S 2 . 1 . 3; 61 , chap . 
29 ., This case is particularly subtle because it can go undetected: the non-ergodic Boltzmann learning yields a distribution that is not the maximum-entropy distribution one was looking for ., Bohte et al . 34 remark that their neuronal-network simulation had to incorporate one inhibitory neuron , with the effect of “curtailing population bursts” 34 , p . 175 , because “the absence of inhibitory neurons makes a network very quickly prone to saturation” 34 , p . 162 ., This is something that a standard maximum-entropy distribution cannot do , hence a limitation in its predictive power ., It is intuitively clear that lack of inhibition and bimodality are related problems: we show this in section “Intuitive understanding of the bimodality: Mean-field picture” using a simple mean-field analysis ., In the present work we propose a modified maximum-entropy model; more precisely , we propose a reference probability measure to be used with the method of maximum relative entropy e . g . 31 , 81 ( also called minimum discrimination information 82; see 83 for a comparison of the two entropies ) ., The principle and reference measure can be used with pairwise or higher-order constraints; standard maximum entropy corresponds to a uniform measure ., The proposed reference measure , presented in section “Inhibited maximum-entropy model” , solves three problems at once: ( 1 ) it leads to distributions without unrealistic modes and eliminates the bistability in the Glauber dynamics; ( 2 ) it leads to a maximum-entropy model that can be calculated via Boltzmann learning; ( 3 ) it can also “rescue” interesting distributions that otherwise would have to be discarded as incorrect ., The reference measure we propose is neurobiologically motivated ., It is a minimal representation of the statistical effects of inhibition naturally appearing in brain activity , and directly translates Bohte et al . ’s device of including one inhibitory neuron in the simulated network ., Moreover , the
reference measure has a simple analytic expression and the resulting maximum-entropy model is still the stationary distribution of a particular Glauber dynamics , so that it can also be used to generate surrogate data ., In the final “Discussion” we argue that the use of such a measure is not just an ad hoc solution , but a choice required by the underlying biology of neuronal networks: the necessity of non-uniform reference measures is similarly well-known in other statistical scientific fields , like radioastronomy and quantum mechanics ., The plan of this paper is the following: after some mathematical and methodological preliminaries , we show the appearance of the bimodality problem in the maximum-entropy model applied to an experimental dataset of the activity of 159 neurons recorded from macaque motor cortex ., Then we use an analytically tractable homogeneous pairwise maximum-entropy model to give evidence that the bimodality problem will affect larger and larger ranges of datasets as the population size increases ., We show that typical experimental datasets of neural activity are prone to this problem ., We then investigate the underlying biological causes of the bimodality problem and propose a way to eliminate it: using a minimal amount of inhibition in the network , represented in a modified Glauber dynamics that includes a minimal asymmetric inhibition ., We show that this correction corresponds to using the method of maximum entropy with a different reference measure , as discussed above , and that the resulting maximum-entropy distribution is the stationary distribution of a modified Glauber dynamics ., We finally conclude with a summary , a justification and discussion of the maximum-entropy model with the modified reference measure , and a comparison with other statistical models used in the literature ., Our study uses three main mathematical objects: the pairwise maximum-entropy distribution , a “reduced” pairwise maximum-entropy
distribution , and the Glauber dynamics associated with them ., We review them here; some remarks about their range of applicability are given in ., Towards the end of the paper we will introduce an additional maximum-entropy distribution ., We first show how the bimodality problem subtly appears with a set of experimental data , then explore its significance for larger population sizes and other samples of experimental data of brain activity ., Let us briefly summarize our results so far and the reason why a maximum-entropy model yielding a bimodal distribution in the population-averaged activity is problematic: We will propose a solution that addresses all three issues at once ., This solution pivots on the idea of inhibition and can be grasped with an intuitive explanation of how the bimodality arises ., In this work we have shown that pairwise maximum-entropy models , widely used as reference distributions in the statistical description of the joint activity of hundreds of neurons , are poised to suffer from three interrelated problems when constrained with mean activities and pairwise correlations typically found in cortex: We have given an intuitive explanation of the common cause of these issues: positive pairwise correlations imply positive Lagrange multipliers between pairs of neurons , corresponding to a symmetric network that is excitatory on average ., For typical values of correlations observed in neuroscientific experiments , this network can therefore possess two metastable dynamic regimes , given sufficiently many units ., The mechanism is identical to the ferromagnetic transition in the Ising model , as explained in “Bimodality of the inhomogeneous model for large N” ., An analogous bimodality appears in the statistical mechanics of finite-size systems e . g . 
108 , 115 , and refs therein—but it is experimentally expected and verified there , unlike our neurobiological case ., Although we did not study maximum-entropy models typically used in other fields , like structural biology and genetic networks 116–118 , social behavior in mammals 119 , 120 , natural image statistics 121 , 122 , and economics 123 , the problems we have addressed are generic and emerge as soon as we study a large network with positive pairwise correlations on average; hence they might be of relevance to these fields ., In this work we have also suggested a remedy , based on the explanation above: the intuitive idea is to add a minimal asymmetric inhibition to the network , in the guise of an additional , asymmetrically coupled inhibitory neuron ( Fig 8A ) cf . 34 , p . 175 ., This leads to an “inhibited” Glauber dynamics that is free from bistable regimes and has a unimodal stationary distribution Pi ( s ) , Eq ( 22 ) ., This dynamics depends on an inhibition-coupling parameter JI and a threshold parameter θ ., Most importantly , we have shown that this new stationary distribution Pi ( s ) belongs to the maximum-entropy family: it can be obtained with the maximum-relative-entropy method with respect to a reference measure , Eq ( 25 ) ( Fig 9 ) , that represents the neurobiologically natural presence of inhibition in the network ., We call this model an “inhibited” pairwise maximum-entropy model ., The inhibited pairwise model solves all three problems above: We wish to stress that the presence of bimodality and non-ergodicity can easily go unnoticed ., When sampling from a bimodal distribution , the probability of switching to the second mode may be so small that a switch requires more sampling steps than typically used in the literature , and the high mode is not visited during Boltzmann learning or surrogate generation ., We then face a subtle situation: The obtained distribution is not a pairwise maximum-entropy distribution Eq ( 3 ) —the
Lagrange multipliers are incorrect—yet a consistency check ( also affected by undersampling ) may wrongly seem to validate it , and also analytic approximations ( outside of their convergence domain ) may wrongly validate it ., The distribution found in this circumstance is not a standard pairwise distribution , but our inhibited maximum entropy distribution Eq ( 22 ) , for appropriately chosen JI and θ ., In this regard we urge researchers who have calculated pairwise ( and even higher-order ) maximum-entropy distributions for more than 50 neurons using short Boltzmann-learning procedures , to check for the possible presence of higher metastable regimes ., The presence of bimodality and non-ergodicity can be checked , for example , by starting the sampling from different initial conditions , at low and high activities , looking out for bistable regimes cf . 62 , S 2 . 1 . 3 ., Another way out of this problem is to use other sampling techniques or Markov chains different from the Glauber one 61 , 62 , 97 , 98 ., Alternatively , one may use the inhibited model Eq ( 22 ) with the standard approaches ., In the presence of inhomogeneous and randomly chosen parameters and large network sizes , the standard pairwise maximum-entropy distribution is mathematically identical with the Boltzmann distribution of the Sherrington & Kirkpatrick infinite-range spin glass 124 , 125 ., A more systematic analysis of the effect of inhomogeneity on the appearance of the second mode could therefore employ methods developed for spin glasses 126 , which could produce approximate expressions for the inverse problem: the determination of Lagrange multipliers from the data ., One may think of modifying the Thouless-Anderson-Palmer ( TAP ) mean-field approach 67 , 127 , generalizations of which exist for the asymmetric non-equilibrium case 93 appearing here due to the inhibitory unit ., An appropriate modification of the ideas of Sessak and Monasson 68 , 69 could also be an alternative ., 
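The suggested check, running the Glauber dynamics from both low- and high-activity initial conditions and comparing where the population activity settles, can be sketched for a homogeneous pairwise model; the parameter values below are illustrative and not fitted to any dataset:

```python
import math
import random

def glauber_mean_activity(N, h, J, sweeps, init_active, seed):
    """Glauber dynamics for a homogeneous pairwise model:
    P(s_i = 1 | rest) = 1 / (1 + exp(-(h + J * sum_{j != i} s_j))).
    Returns the population activity averaged over all sweeps."""
    rng = random.Random(seed)
    s = [1] * init_active + [0] * (N - init_active)
    active = init_active
    total = 0.0
    for _ in range(sweeps):
        for i in range(N):
            field = h + J * (active - s[i])      # input from the other units
            p_on = 1.0 / (1.0 + math.exp(-field))
            new = 1 if rng.random() < p_on else 0
            active += new - s[i]
            s[i] = new
        total += active / N
    return total / sweeps

# Homogeneous positive couplings chosen so that two metastable regimes exist.
N, h, J = 200, -3.0, 6.0 / 200
low = glauber_mean_activity(N, h, J, sweeps=300, init_active=10, seed=1)
high = glauber_mean_activity(N, h, J, sweeps=300, init_active=190, seed=2)
# The two runs hover around very different activity levels for the whole
# simulation: practical non-ergodicity of the sampler.
```

With these couplings a run started at low activity stays near a small active fraction while a run started at high activity stays near a large one, which is exactly the symptom to look for before trusting a Boltzmann-learning fit.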
Another possibility is the use of cumulant expansions 17 , 128 , which unlike TAP-based approaches have the advantage of being valid also in regimes of strong coupling; recent extensions allow us to obtain the statistics at the level of individual units 129 ., In this work we have not investigated other models , like generalized linear models or kinetic Ising models for example ., Considering the fundamental mechanism by which the bimodality arises , we expect similar problems in other models ., The reasoning backing this hypothesis is this: Pairwise correlations in cortical areas are on average positive but very weak ., In this limit we expect that these correlations require slightly positive “excitatory” couplings between units in most other models; an independent-pair approximation also suggests this 127 ., As a result of this rough approximation determined at the level of individual pairs , we expect the couplings to be independent of the number of units of a dynamic or statistical model ., With an increasing number of units in the model the overall “excitatory feedback” ∑_j^N J_ij will increase , and a simple mean-field analysis makes us expect the appearance of a second mode at a certain critical number—what in statistical mechanics is called a ferromagnetic transition; cf . Fig 7B ., We expect similar ferromagnetic transitions to happen in a wide class of statistical models that only represent the observed , on average positively correlated units ., Similar transitions are also reported in Bohte et al .
34 for a biological—as opposed to statistical—neuron model composed of excitatory neurons only ., In fact , they had to introduce one inhibitory neuron in their model to avoid such transitions , which is also the idea behind our inhibitory term ., The bimodality problem could be cured by allowing for asymmetric connections , enabling the implementation of possibly hidden inhibitory units that stabilize the activity ., For example , kinetic Ising models 130–132 , which are maximum-entropy models over the possible histories of network activity 133–135 , can have positive correlations among excitatory units in the asynchronous irregular regime , while their dynamics is stabilized by inhibitory feedback see e . g . 136 , Fig 3A ., Scaling of network properties with the number of units N is often studied in this context ., In the asynchronous regime , mean pairwise correlations decrease as N^−1 18 , 22 , 110 , 136 ., This scaling is the result of a fictive experiment , typically used to derive theoretical results in the N → ∞ limit—any biological neuronal network has of course a certain fixed size N . The mean correlation measured in a sample of size M , with 1 ≪ M ≤ N , is by sampling theory expected to be roughly equal to the mean correlation of the full network , and does not vary much with M; only the variance around this expectation declines to 0 as M approaches N .
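The mean-field argument above—fixed per-pair excitatory couplings plus a growing number of units eventually produce a second stable solution—can be sketched by counting the fixed points of a self-consistency equation m = σ ( NJm + h ) ., The values of J and h below are illustrative assumptions , chosen only to exhibit the transition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def count_fixed_points(N, J=0.04, h=-6.1, grid=2001):
    """Count solutions of the mean-field self-consistency m = sigmoid(N*J*m + h)
    by detecting sign changes of g(m) = sigmoid(N*J*m + h) - m on a grid."""
    m = np.linspace(0.0, 1.0, grid)
    g = sigmoid(N * J * m + h) - m
    return int(np.sum(g[:-1] * g[1:] < 0))
```

With these parameters a small network has a single low-activity solution , while a large one acquires two additional fixed points—an unstable separatrix and a stable high-activity mode , i . e . the “ferromagnetic” regime .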
The inhibited maximum-entropy model Pi , Eq ( 22 ) , solves the problems discussed above; but we may ask if this is enough to motivate its use ., We consider it an interesting model for at least two reasons ., First , it actually is a class of models rather than a single specific model ., In the present work we have focused on its use with pairwise constraints because these are still widely discussed in the literature ., But the inhibition reference measure Eq ( 25 ) can be used with higher-order constraints or other kinds of constraints as well ., We leave to future works the analysis of this possibility ., Second , there are neurobiological reasons why the reference measure Eq ( 25 ) can be methodologically more appropriate than the uniform measure of the standard maximum-entropy method ., Let us argue this point in more depth ., Standard ( i . e . uniform reference measure ) maximum-entropy distributions are often recommended as “maximally noncommittal” 137 ., But this adjective needs qualification ., Jaynes was more precise: ‘“maximally noncommittal” by a certain criterion’—that the possible events or states be deemed to have a priori equal probabilities before any constraints are enforced 31 ., When the initial probabilities are not deemed equal , for physical or biological reasons for example , reference measures appear ., An important example of a reference measure is the “density of states” that multiplies the Boltzmann factor e^−E/ ( kT ) in statistical mechanics e . g . 138 , ch . 16: we cannot judge energy levels to be a priori equally probable because each one comprises a different number of degrees of freedom ., The proper choice of this reference measure is so essential as to be the first manifest difference between classical and quantum statistical mechanics , from “classical counting” to “quantum counting” of phase-space cells 138 , ch .
16 ., Owing to quantized energy exchanges , a quantum density of states is necessary in statistical mechanics; likewise we could say that owing to inhibitory feedback an inhibitory reference measure is necessary in the statistical mechanics of neuronal networks ., The uniform reference measure of standard maximum-entropy expresses that network units have a priori equally probable {0 , 1} states ., But these units are neurons , whose states are not a priori equally likely ., The measure of the inhibited model Pi reflects this a priori asymmetry in a simplified way ., There are surely other reference measures that reflect this asymmetry in a more elaborated way , but the one we have found is likely one of the simplest; cf ., Bohte et al . ’s 34 inhibitory solution ., The choice of an appropriate reference measure is critically important in neuroscientific inferences also for another reason ., When maximum-entropy is used to generate an initial distribution to be updated by Bayes’s theorem , the choice of reference measure is not critical , because a poor choice gets anyway updated and corrected as new data accumulate ., Not so when maximum-entropy is used to generate a sort of reference distribution that will not be updated , as is often done in neuroscience: an unnaturally chosen reference measure will then bias and taint all conclusions derived from comparisons with the maximum-entropy distribution ., The inhibited pairwise model can therefore be quite useful in all applications of the maximum-entropy model mentioned in “Introduction” ., For example , it can serve as a realistic hypothesis against which to check or measure the prominence of correlations in simulated or recorded neural activities , to separate the low baseline level of correlation from the potentially behaviourally relevant departures thereof ., The surprise measure to effect such separation would , according to the inhibited model , take into account the presence of inhibition and the overall low 
level of activity that are natural in the cortex ., The inhibited model can also be used for the generation of surrogate data which include the natural effect of inhibition besides the observed level of pairwise activity ., It can also be useful in the study of the predictive sufficiency of pairwise correlations as opposed to higher-order moments , for example for distribution tails e . g . 34–36 , 38–44; and in the characterization of dynamical regimes of neuronal activity 36 , 49–51 ., The inhibition reference measure Eq ( 25 ) contains the threshold θ and the inhibitory coupling JI as parameters ., The choice of their values depends on the point of view adopted about the measure ., Three avenues seem possible: ( 1 ) One might think of choosing ( θ , JI ) to better fit the specific dataset under study , but this would counter the maximum-entropy spirit: the threshold cannot be a constraint , and the inhibitory coupling would acquire infinite values , as explained in section “Inhibited maximum-entropy model” ., Moreover for our dataset this strategy would only give a worse fit ( cf .
Fig 2B ) because the inhibition term flattens the distribution tails ., ( 2 ) One might only want to get rid of the bistability of the Glauber dynamics and the bimodality of the distribution ., In this case the precise choice of ( θ , JI ) is not critical within certain bounds ., The inhibition coupling JI must be negative and sufficiently large in magnitude to suppress activity once the population-averaged activity reaches θ ., The self-consistency condition Eq ( 21 ) then gives [ 1 + exp ( −∑_{j ≠ i} J_ij m_j − h_i − JI ) ]^−1 ⪡ θ for all i ., The threshold θ can be safely set to any value between the highest observed population activity s ¯ and the second fixed point of the self-consistency equation Eq ( 21 ) , which is indicative of the second mode and is beyond s ¯ > 1 / 2 ( see Fig 7B ) for the typically low mean activities observed in the cortex ., ( 3 ) A methodologically sounder possibility , in view of the remarks about maximum-entropy measures given above , is to choose ( θ , JI ) from general neurobiological arguments and observations ., This was implicitly done in Bohte et al . ’s neuron model 34 for example , but unfortunately they did not publish the values they chose ., We leave the discussion of the neurobiological choice of these parameters to future investigations ., Our inhibition term J I N G ( s ¯ − θ ) , Eq ( 22 ) , formally includes Shimazaki et al .
’s “simultaneous silence” constraint 44 as the limit JI → −∞ , θ = 1/N ., Because of this limit their model has a sharp jump in probability at s ¯ = 1 / N: their constraint uniformly removes probability for s ¯ > 1 / N and assigns it to the single point s ¯ = 0 ., In contrast , our inhibited model Pi presents a kink but no jump for s ¯ = θ , with a discontinuity in the derivative proportional to JI ., But besides this mathematical relationship , our inhibition term and the “simultaneous silence” constraint have different motivations and uses ., As discussed at length above and in section “Inhibited maximum-entropy model” , our term is best interpreted as a reference measure expressing the effects of inhibition , providing a biologically more suitable starting point cf . 34 for maximum-entropy , rather than a constraint ., Its goal is not to improve the goodness-of-fit for activities well below threshold , in contrast to earlier works e . g . 35 , 40 , 50 , 78 , 80 and to the “simultaneous silence” constraint 44 ., The goodness-of-fit is determined by the constraints alone ., In this regard we do not present any improvement of the fit compared to a pure pairwise model ., Future work could explore combinations of the here proposed reference measure and additional constraints that improve the fitness of the model ., Maximum-entropy models are an approximate limit case of probability models by exchangeability 139–141 , or sufficiency 141 , 142 , §§4 . 2–5 ., This approximation holds if the constraints are empirical averages ( e . g . 
time averages in our case ) over sufficiently many data compared with the number of points in the sample space ., How much is “enough” depends on where the empirical averages lie within their physically allowed ranges: If they are well within their ranges , then a number of data values large but still smaller than the number of sample-space points may be enough ., If the empirical averages are close or equal to their physically allowed extreme values , then the number of data values should be much larger than the number of sample-space points ., If these conditions are not met the maximum-entropy method gives unreasonable or plainly wrong results , as can be ascertained by comparison with the non-approximat | Introduction, Results, Discussion, Materials and methods | Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations , given only the time-averaged correlations of the neuron activities ., This paper provides evidence that the pairwise model , applied to experimental recordings , would produce a bimodal distribution for the population-averaged activity , and for some population sizes the second mode would peak at high activities , that experimentally would be equivalent to 90% of the neuron population active within time-windows of a few milliseconds ., Several problems are connected with this bimodality: 1 . The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds ., 2 . Boltzmann learning becomes non-ergodic , hence the pairwise maximum-entropy distribution cannot be found: in fact , Boltzmann learning would produce an incorrect distribution; similarly , common variants of mean-field approximations also produce an incorrect distribution ., 3 .
The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data ., This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey ., Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundreds or more neurons ., The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition ., To eliminate this problem a modified maximum-entropy model is presented , which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure ., This model does not lead to unrealistic bimodalities , can be found with Boltzmann learning , and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition . | Networks of interacting units are ubiquitous in various fields of biology; e . g . 
gene regulatory networks , neuronal networks , social structures ., If a limited set of observables is accessible , maximum-entropy models provide a way to construct a statistical model for such networks , under particular assumptions ., The pairwise maximum-entropy model only uses the first two moments among those observables , and can be interpreted as a network with only pairwise interactions ., If correlations are on average positive , we here show that the maximum entropy distribution tends to become bimodal ., In the application to neuronal activity this is a problem , because the bimodality is an artefact of the statistical model and not observed in real data ., This problem could also affect other fields in biology ., We here explain under which conditions bimodality arises and present a solution to the problem by introducing a collective negative feedback , corresponding to a modified maximum-entropy model ., This result may point to the existence of a homeostatic mechanism active in the system that is not part of our set of observable units . | statistical mechanics, neural networks, vertebrates, neuroscience, animals, mammals, primates, probability distribution, mathematics, statistics (mathematics), thermodynamics, old world monkeys, computer and information sciences, entropy, monkeys, animal cells, probability theory, macaque, approximation methods, physics, statistical models, cellular neuroscience, eukaryota, cell biology, neurons, biology and life sciences, cellular types, physical sciences, amniotes, organisms | null |
22 | journal.pcbi.1002917 | 2,013 | The Vast, Conserved Mammalian lincRNome | The great majority of mammalian genome sequences are transcribed , at least occasionally , a phenomenon known as pervasive transcription 1–4 ., More specifically , tiling array analyses of several human chromosomes have shown that over 90% of the bases are transcribed in at least one cell type 1 , 5–8 ., The analogous analysis in mouse has demonstrated transcription for over 60% of the genome 9–11 ., Among the transcripts there are numerous long intergenic non-coding RNA ( lincRNA ) , i . e . RNA molecules greater than 200 nucleotides in length that are encoded outside other identified genes ., Some of the lincRNAs have been shown to perform various regulatory roles but the majority remain functionally uncharacterized 7 , 12–17 ., Furthermore , the fraction of the genome allotted to lincRNAs remains unknown ., A popular view that the vast majority of lincRNAs are by-products of background transcription , “simply the noise emitted by a busy machine” 18 , 19 , is rooted in their typically low abundance and poor evolutionary conservation compared to protein-coding sequences and small RNAs such as miRNAs and snoRNAs 20 ., However , some of the lincRNAs do contain strongly conserved regions 21 , and most lincRNAs show reduced substitution and insertion/deletion rates suggestive of purifying selection 12 , 22 , 23 ., Given the general lack of strong sequence conservation , identification of lincRNAs on genome scale relies on expression analysis which makes comprehensive characterization of the mammalian lincRNome an elusive goal ., The combination of different experimental approaches applied to transcriptomes of several species has resulted in continuous discovery of new transcripts 24 , with the FANTOM project alone cataloguing more than 30 , 000 putative long non-coding transcripts in mouse tissues by full-length cDNA cloning 11 , 25 ., The Support Vector Machine method has been applied to 
classify transcripts from the FANTOM3 project into coding and non-coding ones and accordingly estimate the number of long non-coding RNA in mouse ., This analysis has led to the identification of 14 , 000 long non-coding RNAs and an estimate of the total number of such RNAs in the FANTOM3 data at approximately 28 , 000 26 ., Here we re-analyze the most reliable available sets of human and mouse lincRNAs using the latest next generation sequencing ( RNAseq ) data and apply a maximum likelihood approach to obtain a robust estimate of the size of the mammalian lincRNome ., The results suggest that mammalian genomes are likely to encode at least twice as many lincRNAs as proteins ., We performed comparative analysis of the recently reported validated sets of 4662 human lincRNAs 27 and 4156 mouse lincRNAs 12 , 20 , 23 ( see Methods for details ) in an attempt to produce robust estimates of the human and mouse lincRNome sizes , and to measure the turnover of lincRNA genes in mammalian evolution ., The validated sets consist of lincRNA species for which a specific profile of expression across tissues – and hence distinct functionality – are supported by multiple lines of evidence ., Assuming that these sets of lincRNAs are random samples from human and mouse lincRNomes , comparison of the validated sets should yield robust estimates of the lincRNome size for each species ., For this analysis , we deliberately chose to employ the validated sets only rather than the available larger sets of reported putative lincRNAs in order to reduce the effect of transcriptional noise and other artifacts ., A substantial fraction of the vast mammalian transcriptome , most likely the lower expressed transcripts , is expected to be non-functional ., Therefore , to minimize the contribution of transcriptional noise , cut-off values were imposed on expression levels of lincRNA genes and their putative orthologs that were used for the lincRNome size estimation ., Similarly , a series of 
cut-off values was applied for the fraction of indels in pairwise genomic alignments ( see Methods for details ) ., A computational pipeline was developed to compare the sets of validated lincRNAs from human and mouse and to identify expressed orthologs by mapping the sequences to the respective counterpart genome and searching the available RNAseq data 28 ( Figure 1 ) ., We then applied a maximum likelihood ( ML ) technique to estimate the total number of lincRNA genes in the human and mouse genomes as well as the number of orthologous lincRNA genes ( see Online Methods ) ., The following simplifying assumptions were made: Let Lh and Lm be the sizes of the experimentally validated sets of lincRNAs for human and mouse , respectively ., Also let Kh be the number of confirmed human lincRNAs that have an expressed orthologous sequence in mouse and Km be the corresponding number of mouse lincRNAs ., Finally , Kb is the number of confirmed , expressed human lincRNAs whose orthologs in mouse are also confirmed lincRNAs ., If the orthology relations between the human and mouse lincRNAs are strictly one-to-one , the number of confirmed mouse lincRNAs for which the human ortholog is also a confirmed lincRNA should be Kb as well ., This is indeed the case in practice , with a few exceptions ., Given assumption ( 1 ) , the lincRNAs can be partitioned into three pools: i ) those present in both species , pool size Nb , ii ) unique to human , Nh-Nb , and iii ) unique to mouse , Nm-Nb; here Nh and Nm are the total sizes of the complete human and mouse lincRNomes , respectively ., Assumption ( 2 ) allows us to compute the probability of observing a particular set of Kh , Km and Kb simply by counting the number of possible samples of Lh and Lm lincRNAs drawn at random from the respective pools of Nh and Nm that result in the given set of Kh , Km and Kb values: ( 1 ) Maximizing the probability P in Eq . ( 1 ) with respect to Nh , Nm and Nb , we obtain ( see Methods for details )
: ( 2 ) To assess the robustness of the estimates , ranges of open reading frame size thresholds used to eliminate putative protein-coding genes and RPKM ( reads per kilobase of exon model per million mapped reads 29 ) thresholds used to gauge the expression level were employed ( Tables 1 and 2 ) ., The ML estimates converged at approximately 50 , 000 lincRNAs encoded in the human genome and approximately 40 , 000 lincRNAs encoded in the mouse genome ( Table 1 and Figure 2 ) ., These are conservative estimates given the use of strict thresholds on predicted open reading frame size and expression level ( Table 1 ) , so the actual numbers of lincRNAs are expected to be even greater ., Approximately two-thirds of the lincRNA genes were estimated to share orthologous relationships ( Figure 2 and Table 1 ) ., The subsets of lincRNAs with the increasing expression levels were found to be smaller and slightly but consistently more conserved ( Table 2 ) , a result that is compatible with our previous observation of positive correlation between sequence conservation and expression level among lincRNAs 23 ., We next used the length distributions of human and mouse lincRNAs in the validated sets to estimate the total lengths of the lincRNomes and the fraction of the genome occupied by the lincRNA-encoding sequences , once again under the assumption that the validated sets are representative of the entire lincRNomes ., Strikingly , the fraction of the human and mouse euchromatic genome sequence dedicated to encoding lincRNAs was found to be more than twofold greater than the fraction allotted to protein-coding sequences and greater even than the total fraction encoding mRNAs ( including untranslated regions ) ( Table 3 ) ., The relatively poor sequence conservation and often low expression of lincRNAs hamper robust estimation of the size of the lincRNome from expression data alone and render comparative-genomic estimation an essential complementary approach ., Strikingly , the 
estimates obtained here by combining comparative genomic and expression analysis suggest that the mammalian lincRNome is at least twice the size of the proteome 30 , 31 ., Given that intron-encoded long-non-coding RNAs and non-coding RNAs encoded in complementary strands of protein-coding genes ( long antisense RNAs ) 32 are disregarded in these estimates , the total set of lncRNAs and the fraction of the genome dedicated to the lincRNA genes are likely to exceed the respective values for protein-coding genes several-fold ., In order to assess the reliability and robustness of the model with respect to parameters , we produced a series of estimates of the total size of the human and mouse lincRNomes and their conserved subset with varying thresholds on expression level , extent of sequence similarity and the maximum allowed open reading frame size ., Nevertheless , it is impossible to rule out some sources of bias that might have affected the estimates ., For example , some orthologous lincRNA genes might remain undetected because they were not included in the UCSC genome alignments due to high divergence or synteny breaks ( caused , for example , by inversions or translocations ) ., Such under-detection of orthologs could cause an underestimate of evolutionarily conserved lincRNA genes although it has been reported that the number of breakpoints is not large ( <250 ) for the human/mouse genomic comparison 33 , so this type of bias is likely to be negligible ., Another , potentially more serious source of bias could be a correlation between the two lists of lincRNA genes which again would result in biased estimates of evolutionarily conserved lincRNA genes ., However , because the human and mouse lincRNA sets were obtained using quite different approaches 12 , 20 , 23 , 27 , there is no reason to expect that any strong correlation between the two lists would be caused by the employed experimental and/or computational procedures ., An under-estimate of the number of orthologous lincRNAs as
well as the total size of the mouse lincRNome also might be caused by the smaller RNAseq dataset for mouse ( 10 tissue/cell types , see Methods for details ) compared to human ( 16 tissue/cell types ) ., This difference could explain the systematically smaller predicted numbers of mouse lincRNA genes ( Tables 1 and 2 ) ., More generally , given that expression of a large fraction of lincRNAs appears to be tissue-specific , the availability of sufficient data for relatively small numbers of tissue/cell types could cause a substantial underestimate of the size of both lincRNomes and their conserved fraction ., Thus , the estimates obtained here should be regarded as highly conservative , essentially lower bounds on the lincRNome size and the set of orthologous lincRNA genes ., Some of the transcripts identified as lincRNAs potentially might represent fragments generated from long ( alternative ) 5′UTRs or 3′UTRs of protein-coding genes ., Such transcripts could result from utilization of alternative poly ( A ) addition signals and/or could represent alternative splice forms separated by long introns 3 , 18 , 19 , 34 ., If many purported lincRNAs actually are fragments of protein-coding genes , one would expect a strong correlation to exist between the expression of lincRNAs and neighboring protein-coding genes ., Cabili and co-workers analyzed this correlation for the set of validated human lincRNA genes 27 ., Their analysis focused on those protein-coding genes that had a lincRNA neighbor on one side and a coding neighbor on the other side , and used a paired test to compare the correlation between each protein-coding gene and its lincRNA neighbor with that between the same protein-coding gene and its protein-coding gene neighbor ., This comparison showed a weak opposite trend , namely that expression of pairs of coding gene neighbors was , on average , slightly but significantly more strongly correlated than the expression of neighboring lincRNA/protein-coding gene pairs .,
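Returning to the size estimates discussed above: the sampling scheme behind Eq ( 1 ) follows mark-recapture logic , and the resulting point estimates for Nh , Nm and Nb can be sketched with moment-style estimators ., This is my reading of the scheme , not the paper's exact ML expressions of Eq ( 2 ) , and the overlap counts in the example are hypothetical , not taken from the paper:

```python
def lincrnome_estimates(Lh, Lm, Kh, Km, Kb):
    """Mark-recapture-style point estimates of lincRNome sizes.
    Kh/Lh estimates the shared fraction Nb/Nh of the human pool, and
    Kb/Kh estimates the mouse sampling fraction Lm/Nm (symmetrically for
    mouse), which yields closed-form estimates."""
    Nh = Lh * Km / Kb  # total human lincRNome
    Nm = Lm * Kh / Kb  # total mouse lincRNome
    Nb = Kh * Km / Kb  # lincRNAs shared by both species
    return Nh, Nm, Nb

# Hypothetical illustration (NOT the paper's data): validated sets of 4662
# human and 4156 mouse lincRNAs, with assumed ortholog counts.
print(lincrnome_estimates(4662, 4156, 1500, 1600, 150))
```

The estimates scale inversely with Kb: the rarer it is for a validated lincRNA in one species to have a validated counterpart in the other , the larger the inferred total pools .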
The results of this analysis appear to be best compatible with the hypothesis that any co-expression between lincRNAs and their protein-coding neighbors results from proximal transcriptional activity in the surrounding open chromatin 35 ., These findings effectively rule out the possibility that the majority of lincRNAs are fragments of neighboring protein-coding genes although there are anecdotal observations that 3′UTR-derived RNAs can function not only in cis to regulate protein expression but also intrinsically and independently in trans , likely as non-coding RNAs 36 ., The possibility that some lincRNA genes encode short peptides that are translated , perhaps in a tissue-specific manner , is the subject of an ongoing debate 13 , 37–40 ., It is extremely hard to rule out such a role for a fraction of purported lincRNAs as becomes obvious from the long-standing attempts to investigate potential functions of the thousands of upstream open reading frames ( uORFs ) that are present in 5′UTRs of protein-coding genes in eukaryotes 41–44 ., Although some of the uORFs are translated , the functions , if any , of the produced peptides remain unclear 45 ., Even the application of modern high-throughput techniques in simple eukaryotic model systems has so far failed to clarify this issue ., For example , analysis of 1048 uORFs in yeast genes has supported translation of 153 uORFs 46 ., Furthermore , numerous uORF translation start sites were found at non-AUG codons; the frequency of these events was even higher than for uAUG codons even though the frequency of non-AUG start codons is extremely low for protein-coding genes 46 ., Another intriguing recent discovery is the potential presence , in the yeast genome , of hundreds of transiently expressed ‘proto-genes’ that are suspected to reflect the process of de novo gene birth 40 ., However , the functionality of these peptides remains an open question ., Establishing functionality of short ORFs in mammalian genomes is an even
more difficult task ., For example , analysis of translation in mouse embryonic stem cells revealed thousands of currently unannotated translation products ., These include amino-terminal extensions and truncations and uORFs with regulatory potential , initiated at both AUG and non-AUG codons , whose translation changes after differentiation 47 ., However , contrary to these emerging indications of abundant production of short peptides , a recent genome-wide study has reported very limited translation of lincRNAs in two human cell lines 48 ., In general , at present it appears virtually impossible to annotate an RNA unequivocally as protein-coding or noncoding , with overlapping protein-coding and noncoding transcripts further confounding the issue ., Indeed , it has been suggested that because some transcripts can function both intrinsically at the RNA level and to encode proteins , the very dichotomy between mRNAs and ncRNAs is false 38 ., Taking all these problems into account , here we adopted a simple , conservative approach by excluding from the analysis lincRNAs containing relatively long ORFs , under a series of ORF length thresholds ., However , it should be noted that the human and mouse lincRNAs used in this study had been previously filtered for the presence of evolutionarily conserved ORFs and the presence of protein domains , and the most questionable transcripts were removed at this stage 12 , 20 , 23 , 27 ., For example , 2305 human transcripts were excluded from the stringent human lincRNA set 27 under the coding potential criteria ( the presence of a Pfam domain , a positive PhyloCSF score , or previous annotation as pseudogenes ) ., The majority of these discarded transcripts ( 1533 ) were previously annotated as pseudogenes 27 ., Similar to the stringent set of lincRNAs , these transcripts are expressed at lower levels and in more tissue-specific patterns than bona fide protein-coding genes , suggesting that these effectively are non-coding transcripts .,
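The ORF-length filtering step described above can be sketched as follows ., Scanning only the three forward-strand reading frames and requiring an in-frame stop codon are simplifying assumptions of this illustration , as is the threshold value:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf_nt(seq):
    """Length in nucleotides (including ATG, excluding the stop codon) of the
    longest stop-terminated ORF in the three forward reading frames."""
    seq = seq.upper()
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i  # open a candidate ORF at the first in-frame ATG
            elif start is not None and codon in STOP_CODONS:
                best = max(best, i - start)
                start = None  # ORF closed; look for the next one
    return best

def passes_orf_filter(seq, max_orf_nt=300):
    """Keep a transcript as a lincRNA candidate only if its longest ORF is
    below the threshold (the threshold value here is illustrative)."""
    return longest_orf_nt(seq) < max_orf_nt
```

Applying such a filter under a series of decreasing thresholds progressively removes transcripts with any appreciable coding potential , at the cost of also discarding some genuine non-coding transcripts that contain ORFs by chance .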
Nevertheless , Cabili and co-workers employed a conservative approach and excluded them from the stringent lincRNA set 27 ., Questions about functional roles of lincRNAs and the fraction of the lincRNAs that are functional loom large ., For a long time , the prevailing view appeared to be that , apart from a few molecular fossils such as rRNA , tRNA and snRNAs , RNAs did not play an important role in extant cells ., More recently , the opposite position has become popular , namely that ( almost ) every detectable RNA molecule is functional ., It has been repeatedly pointed out that this view is likely to be too extreme 49 , 50 ., Although it has been shown that many lincRNA genes are evolutionarily conserved and perform various functions 7 , 12–17 , an unknown fraction of lincRNAs should be expected to result from functionally irrelevant background transcription 19 ., In the present work , phylogenetic conservation is the principal support of functional relevance of lincRNAs ., Given that neutrally evolving sequences in human and mouse genomes are effectively saturated with mutations and show no significant sequence conservation 51–53 , expression of non-coding RNAs at orthologous genomic regions in human and mouse should be construed as strong evidence of functionality ., It should be noted , however , that sequence conservation gives a lower bound on the number of functional lincRNAs , and the lack of conservation is not a reliable indication of lack of function ., First , the possibility exists that orthologous genes diverge to the point of being undetectable by sequence comparison , e . g .
because short conserved , functionally important stretches are interspersed with longer non-conserved regions , as is the case in Xist , H19 , and similar lincRNAs 54 , 55 20 ., The results of this work predict that , despite the fact that on average sequence conservation between orthologous lincRNAs is much lower than the conservation between protein-coding genes 12 , 23 , 60 to 70% of the lincRNAs appear to share orthologous relationship between human and mouse , which is only slightly lower than the fraction of protein-coding genes with orthologs , approximately 80% 51 ., These findings imply that , even if many of the species-specific lincRNAs are non-functional , mammalian lincRNAs perform thousands of evolutionarily conserved functional roles most of which remain to be identified ., As the human lincRNA data set , the ‘stringent set’ of 4662 lincRNAs , which is a subset of the over 8000 human lincRNAs described in a recent comprehensive study 27 , was used ., The validated set of mouse lincRNA genes was constructed by merging our previously published set of 2390 lincRNA transcripts with the set of 3051 transcripts produced by Ponting and coworkers 12 ., After the merge , a unique list of 4989 GenBank transcript IDs was generated , coordinates of the newest mouse assembly , mm9 , were downloaded in BED format from the UCSC Table Browser 56 , and entries shorter than 200 nt were discarded ., Overlapping chromosomal coordinates were merged using the mergeBed utility from BEDtools package 57 , with the command line option -s ( “force strandedness” , i . e . 
merge overlapping features only if they are on the same strand), and unique IDs were assigned to the resulting 4156 mouse lincRNA clusters (format: mlclust_N, where mlclust stands for mouse lincRNA cluster and N is a unique integer number; see Supporting Table S1). Expression of the lincRNAs was assessed by analysis of the available RNAseq data. For human, the run files of the Illumina Human Body Map 2.0 project for adipose, adrenal, brain, breast, colon, heart, kidney, liver, lung, lymph node, ovary, prostate, skeletal muscle, testis, thyroid and white blood cells were downloaded from the NCBI Sequence Read Archive (SRA, http://www.ncbi.nlm.nih.gov/Traces/sra; Study ERP000546; runs ERR030888 to ERR030903). For mouse, RNAseq data of the ENCODE project 58 for the tissues bone marrow, cerebellum, cortex, ES-Bruce4, heart, kidney, liver, lung, mouse embryonic fibroblast cells (MEF) and spleen were downloaded from the UCSC Table Browser 56 FTP site (ftp://hgdownload.cse.ucsc.edu/goldenPath/mm9/encodeDCC/wgEncodeLicrRNAseq/). Pre-built Bowtie indices of human and mouse, based on UCSC hg19 and mm9, were downloaded from the Bowtie FTP site (ftp://ftp.cbcb.umd.edu/pub/data/bowtie_indexes/hg19.ebwt.zip and ftp://ftp.cbcb.umd.edu/pub/data/bowtie_indexes/mm9.ebwt.zip, respectively). The reads were aligned with the cognate genomic sequences using TopHat 59. The TopHat-generated alignments were analyzed using an ad hoc Python script that accepts alignments and genomic coordinates in SAM and BED formats, respectively, and uses the HTSeq Python package (http://www-huber.embl.de/users/anders/HTSeq) to calculate the number of aligned reads ("counts"). The RPKM (i.e.
reads per kilobase of exon model per million mapped reads 29) values were calculated from the counts. Because we were interested in determining whether particular regions are expressed in any of the analyzed tissues, the maximum value among all tissues was assigned as the expression level of lincRNA genes and putative orthologous lincRNA genes. An ORF was defined as a continuous stretch of codons starting from an ATG codon or from the beginning of the cDNA (to take into account potentially truncated cDNAs) and ending with a stop codon. The ORFs were identified using the ATG_EVALUATOR program 60 combined with the ORF predictor from the GeneBuilder package 61 with relaxed parameters (the program was required to correctly predict 95% of the human and mouse cDNA training sets 61). Control experiments with independent human and mouse cDNA data sets 61 showed a 94–98% true-positive rate depending on the ORF length threshold (90, 120 or 150 nucleotides). However, a high rate of false positives is expected for such relaxed parameters. Analysis of human and mouse intron and UTR data sets showed false-positive rates of 10–20% depending on the threshold 60, 61. For the purpose of the present analysis, false positives in ORF identification represent random removal of lincRNA sequences from the samples, resulting in conservative estimates of the total lincRNA number. Thus, we used the ORF cut-off values of 90, 120 or 150 nucleotides to remove putative mRNAs for short proteins separately from the human and mouse sets of lincRNAs. To obtain the subset of human lincRNAs with expressed orthologs in mouse (Kh), human lincRNA gene coordinates of assembly hg19 were converted to mouse mm9 using the liftOver tool of the UCSC Genome Browser 62. Putative orthologous regions were identified in the mouse genome for 3529 of the 4662 human lincRNAs (Lh). These sequences were checked for evidence of expression in mouse tissues using the
RNAseq data ., Exon coordinates of putative lincRNAs were obtained by mapping their coordinates onto exons of all known genes of mm9 assembly of UCSC Genome Browser ., The sums of exons were then used in expression level calculation to normalize for sequence length ., Out of the 3369 putative lincRNAs for which the exon models could be determined , 2872 had expression level greater than zero ., Similarly , the subset of mouse lincRNAs with expressed putative orthologs in human ( Km ) was found by converting the coordinates of initial 4156 mouse lincRNAs ( Lm ) from mm9 to hg19 and searching for the evidence of expression in human tissues ., The exon models could be determined for 3656 of the 3677 putative lincRNAs , out of which 3157 had expression level greater than zero ., The subset of orthologous lincRNAs ( Kb ) was obtained by selecting those lincRNAs whose putative orthologs in another species overlap with the validated lincRNAs of that species ., That is , we searched for the overlap of putative orthologs of human lincRNAs ( in hg19 coordinates ) with the mouse lincRNAs ( in mm9 coordinates , minimal overlap 100 nucleotides ) ., The overlap was determined using intersectBed from BEDtools package with the command line option -s ( “force strandedness” ) ., This resulted in 196 pairs of unique human and mouse lincRNAs ., Approximate indel values were estimated from the sequence length differences between the lincRNAs and their orthologs , i . e . 
the following formula was used: where LlincRNA is the total length of lincRNA exons, and Lortholog is the total length of the exons of the lincRNA ortholog. Manual examination of orthologous lincRNA alignments and putative orthologs suggested that approximately 5% of the alignments with the largest INDEL values were unreliable. Thus, all lincRNA alignments with INDEL >95% were removed from further analysis. Similarly, a cut-off was imposed on the expression level of putative human and mouse orthologs of lincRNAs. This cut-off was set at the lowest 5% of the expression levels of the 196 orthologous validated lincRNA genes (Supporting Table S1). All putative orthologs of lincRNA genes with lower expression values were discarded under the premise that these low values could represent experimental noise, i.e. the top 95% of the expression values, EXP95%, was used for all analyses (Table 1 and Supporting Table S1). In addition, EXP90%, 80%, 70%, 60%, 50%, 40%, 30%, 20% and 10% were calculated to compare subsets of lincRNAs expressed at different levels (Table 2). We also used different sets of expression/indel filters combined with the 5 input parameters (see Results) in different experiments (Tables 1 and 2); no substantial differences between the results were found (see Discussion for details). For calculating the 5 input parameters (see Results), all the collected information was stored in an SQLite database, and after applying the ORF, indel and expression thresholds, the final data sets were assembled (Tables 1, 2 and Supporting Table S1). Using the experimentally validated sets of human and mouse lincRNAs and the assumptions described in the main text, the probability of observing a particular set of Kh, Km and Kb for the given values of Lh and Lm is given by equation (1) in the main text. Using Stirling's approximation for the factorial, we obtain the system of nonlinear equations for the sizes Nh and Nm of the pools and
their overlap Nb that maximize the likelihood P in Eq. (1), namely Eqs. (3), (4) and (5). Solving the system (3–5) for Nh, Nm and Nb, we obtain Equation (2) (see main text). The confidence region around the maximum likelihood estimate, Eq. (5), is an ellipsoid in the {Nh, Nm, Nb} space. The directions of its axes are given by the eigenvectors of the Jacobian matrix J of second derivatives of log P, and the magnitudes of the ellipsoid's axes are given by the inverse square roots of the negatives of the eigenvalues. Computing the second derivatives of log P and evaluating them at the maximum likelihood point, we obtain Eq. (6). We found that the confidence ellipsoid is highly elongated, and therefore the estimates for the pool sizes are strongly correlated with each other. The analytically estimated 95% confidence intervals are shown in Table 1. In addition, a bootstrap analysis of the lincRNA numbers was performed. For this purpose, the initial sets of human and mouse lincRNAs were randomly resampled 1000 times, and the calculation of the final numbers was performed using the 95% indel and expression (RPKM) thresholds and all ORF thresholds. The results of the bootstrap analysis are given in Supporting Table S1. The 95% confidence intervals estimated using the bootstrapping procedure (Supporting Table S1) were smaller than the analytically obtained 95% confidence intervals (Table 1); thus, we used the latter as conservative estimates of the 95% confidence intervals.
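The closed-form solution, Eq. (2), is not reproduced in this text. Assuming the natural sampling model (each validated lincRNA is an independent draw from its species' pool, so that E[Kh] = Lh·Nb/Nh, E[Km] = Lm·Nb/Nm and E[Kb] = Lh·Lm·Nb/(Nh·Nm)), solving the moment equations yields a Lincoln–Petersen-like estimator. The sketch below is an illustration under that assumption, not the paper's code; the function names are invented, and the simplified binomial resampling stands in for the study's resampling of the initial lincRNA sets.

```python
import random

def pool_estimates(Lh, Lm, Kh, Km, Kb):
    """Closed-form solution of the moment equations stated above."""
    Nb = Kh * Km / Kb   # size of the conserved (orthologous) lincRNome
    Nh = Lh * Km / Kb   # total size of the human lincRNA pool
    Nm = Lm * Kh / Kb   # total size of the mouse lincRNA pool
    return Nh, Nm, Nb

def bootstrap_ci(Lh, Lm, Kh, Km, Kb, n_rep=1000, seed=0):
    """Percentile bootstrap for Nh: resample the observed counts binomially,
    re-estimate, and take the 2.5% / 97.5% quantiles."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_rep):
        kh = sum(rng.random() < Kh / Lh for _ in range(Lh))
        km = sum(rng.random() < Km / Lm for _ in range(Lm))
        kb = sum(rng.random() < Kb / Kh for _ in range(Kh))
        if kb > 0:
            draws.append(pool_estimates(Lh, Lm, kh, km, kb)[0])
    draws.sort()
    return draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
```

Because Kb appears in the denominator, the estimates are most sensitive to the small number of orthologous validated pairs, which is consistent with the elongated confidence ellipsoid reported above.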
| Introduction, Results, Discussion, Methods | We compare the sets of experimentally validated long intergenic non-coding ( linc ) RNAs from human and mouse and apply a maximum likelihood approach to estimate the total number of lincRNA genes as well as the size of the conserved part of the lincRNome ., Under the assumption that the sets of experimentally validated lincRNAs are random samples of the lincRNomes of the corresponding species , we estimate the total lincRNome size at approximately 40 , 000 to 50 , 000 species , at least twice the number of protein-coding genes ., We further estimate that the fraction of the human and mouse euchromatic genomes encoding lincRNAs is more than twofold greater than the fraction of protein-coding sequences ., Although the sequences of most lincRNAs are much less strongly conserved than protein sequences , the extent of orthology between the lincRNomes is unexpectedly high , with 60 to 70% of the lincRNA genes shared between human and mouse ., The orthologous mammalian lincRNAs can be predicted to perform equivalent functions; accordingly , it appears likely that thousands of evolutionarily conserved functional roles of lincRNAs remain to be characterized . 
| Genome analysis of humans and other mammals reveals a surprisingly small number of protein-coding genes , only slightly over 20 , 000 ( although the diversity of actual proteins is substantially augmented by alternative transcription and alternative splicing ) ., Recent analysis of the mammalian genomes and transcriptomes , in particular , using the RNAseq technology , shows that , in addition to protein-coding genes , mammalian genomes encode many long non-coding RNAs ., For some of these transcripts , various regulatory functions have been demonstrated , but on the whole the repertoire of long non-coding RNAs remains poorly characterized ., We compared the identified long intergenic non-coding ( linc ) RNAs from human and mouse , and employed a specially developed statistical technique to estimate the size and evolutionary conservation of the human and mouse lincRNomes ., The estimates show that there are at least twice as many human and mouse lincRNAs than there are protein-coding genes ., Moreover , about two third of the lincRNA genes appear to be conserved between human and mouse , implying thousands of conserved but still uncharacterized functions . | genomics, biology, computational biology, evolutionary biology, genetics and genomics | null |
913 | journal.pcbi.1000494 | 2,009 | Googling Food Webs: Can an Eigenvector Measure Species Importance for Coextinctions? | The robustness of ecosystems to species losses is a central question in ecology given the current pace of extinctions and the many species threatened by human impacts 1–3 ., The loss of species in complex ecological networks can cascade into further extinctions because of the mutual dependence of species ., Of all the possible causes leading to these “cascading” extinctions , the simplest case to analyze is that of species left with no exploitable resources 4–8 ., These extinctions due to lack of nutrient flows represent the most predictable subset of secondary losses and also the best case scenario , since the addition of other effects 9 , 10 , related to the loss of dynamical regulation , will result in additional losses ., The former scenario is the simplest to analyze because the extinction of consumers that are left with no resources will happen with certainty , unless the consumers can switch to a different set of resources ., Because modern data sets are obtained by sampling extensively a system over time , it is unlikely that potential resources resulting from switching prey go unregistered ., If these potential interactions have been included in the prey of a given predator , then the dynamics of extinction for this flow-based case are completely described by the network structure ., This simple analysis also represents the best case scenario , since other causes of extinction such as low population abundance can increase the loss of species in response to the original disturbance , but cannot prevent flow-based extinctions from happening ., From the flow-based perspective , the effects of a single species loss can be easily analyzed 7 , but those of multiple losses and sequences of extinctions rapidly become an intractable problem ., Species importance in this context has been traditionally measured using local network properties , 
such as the number of species connections 4, 5. In particular, species with a large number of links are considered keystones (or hubs 11) for the robustness of ecological networks 5, 6, 8, 12. A different take on species importance in networks makes use of centrality measures: species that are central mediate the interaction among those that are more peripheral and therefore should be considered the most important species 13–15. Here we propose a new algorithm for assessing the importance of species for food web robustness that takes into account the full topology of the network. When species importance from the perspective of robustness is correctly measured, the ordered removal of species according to this ranking should lead to the fastest collapse of the network. Our approach, inspired by PageRank, the algorithm at the heart of Google 16, uses a recursive definition: a species is important if important species rely on it for their survival. Results show that the algorithm outperforms all other measures of species importance from the perspective of the fastest route to collapse. Moreover, it performs as well as a genetic algorithm 17, 18, an intensive evolutionary search that can evaluate millions of solutions, even though the eigenvector implementation is much simpler and faster. A biological interpretation of species importance follows naturally as the amount of matter flowing through a given species, for both qualitative networks constructed from the presence and absence of links and quantitative networks for which interaction strengths are explicitly specified 19–21. The proposed approach provides the basis for a more comprehensive treatment of extinction risk in food webs. The World Wide Web is a directed network in which web pages (nodes) are connected with each other by hyper-links. We can write a matrix in which the presence and absence of a link from the row-page to the column-page are represented as entries 1 and 0, respectively. PageRank rates pages as important if they receive links from pages that are in turn also rated as important. The PageRank algorithm solves this recursive definition using a clever application of linear algebra 16. Each page is assigned an importance, and each link (from the exiting page to the entering page) carries an equal fraction of that importance. The importance of a page is the sum of the importance assigned to its incoming connections. The recursive problem can be solved by building a matrix in which each element represents the fraction of importance assigned to a link: the corresponding entry of the link matrix divided by the number of links exiting the source page. When this matrix satisfies two conditions (it is both irreducible and primitive 16), the problem of assigning importance is solved by computing a fundamental and well-known quantity in linear algebra, the eigenvector associated with the dominant eigenvalue of the matrix. If the two conditions are met, the Perron-Frobenius Theorem guarantees the existence of this dominant eigenvector (Text S1). One main problem, besides the numerical challenge of computing the eigenvectors of a matrix with several billion rows and columns, is that the World Wide Web is not irreducible 16. For irreducible matrices, the associated network must be strongly connected, with any two nodes connected by a directed pathway. Because the WWW clearly does not meet this condition, the matrix is modified by applying a "damping factor" d. A new matrix is constructed whose entries are d times the original entries plus (1 − d)/n, where n is the number of nodes in the network. The damping factor effectively mimics the probability that a user browsing the web can decide to move directly to another (random) page 16. The eigenvector is then computed for this modified matrix. Here we propose an algorithm to rank the importance of species for food web robustness that uses a similar principle. Nutrients move from one species to another in a food web through feeding links. For their survival, species must be able to receive energy and matter from
primary producers through some pathway in the network 7, 22. Thus, we define a species as important if it supports (directly or indirectly) other species that are in turn important. The problem is similar to that of ranking web pages, with the difference that importance now moves in the direction opposite to that of the links (i.e. a web page is important if important pages point to it; species are important if they point to important species). Also, food webs are neither irreducible nor primitive, but we can find a biologically sound solution to this problem. A damping factor would be completely unrealistic, since nutrients cannot randomly "jump" among links in the food web. We make instead two observations: first, all matter in the food web must originate from primary producers, which receive it from the external environment and channel it through the food web to all other species through feeding pathways 21, 23. We therefore attach to the network a special node (a "root") that points to all the primary producers 7, 22. Second, every species has an intrinsic loss of matter, which can be represented by adding a link from every node to the root. This process represents the buildup of detritus that in turn is partly recycled into the food web 21, 23. With these two modifications any food web becomes irreducible and primitive (Fig.
1, Text S1) and we can now solve the problem of assigning importance by computing the eigenvector associated with the dominant eigenvalue. For simplicity, we consider the normalized eigenvector, so that its entries sum to one. Recent research on food web robustness has emphasized the role of connectivity: species with a high number of connections are likely to be essential for the survival of other species 4–8. In-silico extinction experiments also showed that random removal sequences rarely cascade into the secondary loss of species, whereas the removal of highly connected species is likely to generate many secondary extinctions. Another line of research borrowed measures of centrality from sociology. Central species mediate the spread of disturbances through the network. In this sense, species with high centrality would be considered "keystone" to the maintenance of connectivity in networks 13–15. To test our algorithm, we performed in-silico extinction experiments in which a single species is removed at each step and the number of secondary extinctions is recorded. We compared several simple algorithms: (a) the removal of the most connected species at each step (we measured the number of connections coming out of each node); (b) the removal of species according to closeness centrality: nodes are highly central from this point of view if they have a short distance to many nodes in the network; (c) removal according to betweenness centrality: a node has high betweenness if it lies on the shortest path between many pairs of nodes; (d) removal according to dominators: a node dominates another if all the paths from the "root" to the latter contain the former, so the removal of the dominator will drive the dominated species extinct 7; finally, (e) removal according to the eigenvector-based algorithm outlined above. All the algorithms are "greedy": at each step, we compute the "importance" of each node according to a particular algorithm, and we remove the one with the highest
importance. The procedure is repeated until all the species have gone extinct or have been removed. The algorithms are explained in detail in Text S1. For each extinction sequence, we measured the "extinction area", a quantity that equals 1 when all species go extinct after the first removal and tends to 1/2 when no secondary extinction is observed (Fig. 2). In this way, we can assess the performance of each algorithm with a single number. If important species are removed early on, then the area will be larger. The algorithms could yield ties, i.e. nodes with the same importance. Whenever we encountered ties, we considered all the possible sequences of extinctions that may result from exploring all the ties. Therefore, algorithms with low ranking power (i.e. yielding many ties) could produce very many extinction sequences. We followed all extinction sequences generated by ties whenever they numbered less than half a million. If there were more possible solutions, we analyzed the first half million. We applied all the algorithms to 12 published food webs (Table 1). For each algorithm and network, we tracked the total number of solutions produced by the algorithm, the minimum, maximum and mean "extinction area", and the number of solutions yielding the maximum area (Text S1). We then evaluated the value of the maximal extinction area. Because the number of possible removal sequences is S!, where S is the number of species in the network, the enumeration of all possible cases is clearly unfeasible. We therefore programmed a Genetic Algorithm 17 that seeks to find the best possible sequence using an evolutionary search. This type of algorithm has been shown to be effective for similar problems in food web theory 18, even though it is computationally expensive and its performance declines with food web size. Here, the search performs at least as well as the best among the other algorithms, as expected for an effective search (
Fig. 3). In all cases, the best solutions for the degree-based algorithm and for closeness centrality did not match that of the genetic algorithm: these measures do not correctly identify the fastest route to collapse (Fig. 3). Betweenness centrality yields an area as large as that of the genetic algorithm in only 1 case (benguela). The dominators-based procedure finds the best solution in 2/3 of the cases. The eigenvector-based algorithm finds the best solution in 11 cases out of 12. To improve the algorithm, we build upon a previous approach of ours 22, based on the observation that not all the links in a food web contribute to robustness. The idea that more complex networks would contain a multiplicity of pathways that would in turn render the networks more robust was put forward by MacArthur more than fifty years ago 24. We recently showed that, while this is generally true, some links do not contribute to robustness, while others dampen the effects of species removal and increase robustness (Fig. 1) 22. Thus links can be classified as "redundant" or "functional" from the perspective of their effects on secondary extinctions. From this classification, one can obtain a simplified food web, with all redundant connections removed, that has exactly the same robustness properties as the original network in terms of secondary extinctions. For the improved algorithm, then, we repeated the removal-sequence experiment but computed the eigenvector-based importance for the simplified food web obtained by first removing the redundant connections. The results indicate that this algorithm is capable of finding the best solution provided by the genetic algorithm in all cases (Fig. 3, Text S1). We have developed two algorithms to rank species in food webs according to their role in extinction cascades. We considered a flow-based perspective in which species go extinct if they lack a connection through some pathway to primary producers. Although it is evident that many other types of extinctions can increase total species loss, the subset considered here provides a baseline and corresponds to the best-case scenario in which the minimum impact to the network is taken into account. Species left with no resources will go extinct, unless they can switch their choice of prey sufficiently fast. It is known that species can exhibit this type of adaptive behavior in response to the relative abundance of prey, with consequences for the stability of predator-prey systems 25. Because the food webs we have analyzed are sampled in the field over time and space, it is most likely that the links included in the networks already reflect prey switching. An important source of additional secondary extinctions will be related to the population dynamics of species. The complete consideration of dynamics, with a system of nonlinear differential equations that simulates the outcome of species losses, will only increase the number of species predicted to go extinct beyond the simplest scenario. The analysis of removal effects remains very challenging if not prohibitive for large ecological networks (but see 9, 10, 26), requiring information, most often unavailable, on the functional form of a large number of interactions and their associated parameters, the exploration of different assumptions, and a huge parameter space. The simple and elegant solution for the flow-based case provides a baseline from which additional impacts can be considered. The results obtained here with a simple algorithm emphasize that the position of a species in the food web, rather than its sheer number of connections, is the main determinant of its impact on
extinction cascades ., This contrasts with the emphasis given so far to the number of connections and to the concept of “hubs” in networks ., We have shown that the performance of the algorithm , which considers only the neighbors of a given species , is considerably worse than that of the eigenvector based algorithms at finding the fastest route to collapse ., The latter algorithms solve the problem of importance by considering the full topology of the network and the particular position that each species occupies ., We further showed that an algorithm that first removes “redundant” connections provides a valuable improvement , because it relies on the functional role of connections in maintaining the flow of nutrients through the food web ., Interestingly , a parallel problem has been analyzed in computer science ( Text S1 ) ., Srinivasan et al . 27 have shown that many realistic removal sequences are not likely to cascade in massive species losses , with the loss of threatened species not necessarily resulting in further extinctions ., It is therefore difficult to discriminate importance among species whose removal has little direct effect on network structure ., The eigenvector approach provides a simple and effective way of comparing species importance even when their removal does not result in extinction cascades ., This should help assessing the relative importance of threatened species for network robustness and from the perspective of network structure ., Coll et al . 
analyzed the effect of actual human-induced extinctions in the Mediterranean Sea and found that removing commercially valuable species typically had a higher impact than random removals, but a lower impact than maximum-degree-driven removals 28. The dominant eigenvector also has a simple biological interpretation. To show this, we assume for the moment that we can fully describe the interacting community by means of differential equations representing the dynamics of species abundance. We further consider that the system is at a feasible equilibrium point (positive abundance and zero net growth for all species). For this case, we can measure the flow of biomass entering and exiting each species (for example, in kilograms of biomass per year per hectare), and the amount entering and exiting each node must be equal given the equilibrium condition 19–21. These quantities are proportional to the eigenvector used here: specifically, the eigenvector provides an estimate of the flow through each species (Text S1, Fig. S1). In the absence of available information on diet preferences, it measures the flow that each species would receive if each of its prey provided equal amounts of nutrients. When quantitative information on these inputs is available, the eigenvector-based and the flow-based descriptions become exactly equivalent (Text S1, Fig.
S2 ) ., The proposed algorithm further provides a measure of eigenvector centrality in directed , rooted networks ., Other centrality measures have been proposed to evaluate species importance 13–15 , but they typically consider undirected networks and have not been adapted to food webs ., This is reflected in the poor performance achieved by the centrality algorithms ., Here we have shown that consideration of ecological knowledge on food web processes can improve algorithms that have been developed in other branches of science ., It should be possible to adapt the methods presented here to other types of biological networks , especially metabolic ones ., For food webs , the next challenge is to add other dynamical effects to this framework , to obtain a more complete description of extinction risk in complex ecological networks . | Introduction, Materials and Methods, Results, Discussion | A major challenge in ecology is forecasting the effects of species extinctions , a pressing problem given current human impacts on the planet ., Consequences of species losses such as secondary extinctions are difficult to forecast because species are not isolated , but interact instead in a complex network of ecological relationships ., Because of their mutual dependence , the loss of a single species can cascade in multiple coextinctions ., Here we show that an algorithm adapted from the one Google uses to rank web-pages can order species according to their importance for coextinctions , providing the sequence of losses that results in the fastest collapse of the network ., Moreover , we use the algorithm to bridge the gap between qualitative ( who eats whom ) and quantitative ( at what rate ) descriptions of food webs ., We show that our simple algorithm finds the best possible solution for the problem of assigning importance from the perspective of secondary extinctions in all analyzed networks ., Our approach relies on network structure , but applies regardless of the 
specific dynamical model of species interactions , because it identifies the subset of coextinctions common to all possible models , those that will happen with certainty given the complete loss of prey of a given predator ., Results show that previous measures of importance based on the concept of “hubs” or number of connections , as well as centrality measures , do not identify the most effective extinction sequence ., The proposed algorithm provides a basis for further developments in the analysis of extinction risk in ecosystems . | Predicting the consequences of species extinction is a crucial problem in ecology ., Species are not isolated , but connected to each others in tangled networks of relationships known as food webs ., In this work we want to determine which species are critical as they support many other species ., The fact that species are not independent , however , makes the problem difficult to solve ., Moreover , the number of possible “importance” rankings for species is too high to allow a solution by enumeration ., Here we take a “reverse engineering” approach: we study how we can make biodiversity collapse in the most efficient way in order to investigate which species cause the most damage if removed ., We show that adapting the algorithm Google uses for ranking web pages always solves this seemingly intractable problem , finding the most efficient route to collapse ., The algorithm works in this sense better than all the others previously proposed and lays the foundation for a complete analysis of extinction risk in ecosystems . | ecology/theoretical ecology, ecology/community ecology and biodiversity, ecology, computational biology | null |
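The dominant-eigenvector flow ranking described in the food-web row above can be sketched as a short power iteration ., The three-species web , the detritus "root" recycling rule , the equal-split assumption and the function name below are illustrative assumptions , not the paper's exact algorithm:

```python
# Illustrative sketch only: power-iteration estimate of the dominant
# eigenvector ("flow through each species") in a rooted food web.
# The web, the root-recycling rule and the equal-split assumption are
# hypothetical choices, not the paper's exact formulation.

def rank_species(prey_of, n_iter=500):
    """prey_of maps each species to the list of species it eats.

    Biomass flows prey -> predator; every species also leaks an equal
    share to a 'root' (detritus) node, which recycles to the basal
    species (those with no prey), making the flow chain irreducible.
    The stationary distribution of this flow is the eigenvector-style
    ranking discussed in the text.
    """
    species = sorted(prey_of)
    basal = [s for s in species if not prey_of[s]]  # primary producers
    predators_of = {s: [] for s in species}
    for pred in species:
        for p in prey_of[pred]:
            predators_of[p].append(pred)
    # start from a uniform flow over all species plus the root node
    v = {s: 1.0 / (len(species) + 1) for s in species}
    v_root = 1.0 / (len(species) + 1)
    for _ in range(n_iter):
        new = {s: 0.0 for s in species}
        new_root = 0.0
        for s in species:
            targets = predators_of[s]
            share = v[s] / (len(targets) + 1)  # +1 for the root leak
            for t in targets:
                new[t] += share
            new_root += share
        for b in basal:  # root recycles nutrients to primary producers
            new[b] += v_root / len(basal)
        v, v_root = new, new_root
    return v
```

Under these assumptions , the simple chain plant → herbivore → carnivore converges to flows of 4/11 , 2/11 and 1/11 , matching the intuition that basal species carry the largest share of the biomass flow .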
1,306 | journal.pcbi.1006557 | 2018 | Representations of regular and irregular shapes by deep Convolutional Neural Networks, monkey inferotemporal neurons and human judgments | Recently , several studies compared the representations of visual images in deep Convolutional Neural Networks ( CNN ) with those of biological systems , such as the primate ventral visual stream 1–4 ., These studies showed that the representation of visual objects in macaque inferior temporal ( IT ) cortex corresponds better with the representations of these images in deep CNN layers than with representations of older computational models such as HMAX 5 ., Similar findings were obtained with human fMRI data 6–10 ., The images used in these studies were those of real objects in cluttered scenes , which are the same class of images as those employed to train the deep CNNs for classification ., Other single unit studies of IT neurons employed two-dimensional ( 2D ) shapes and observed highly selective responses to such stimuli ( for review see 11 ) ., If deep CNNs provide a realistic model of IT responses , then the CNNs should also capture the selectivity observed for such two-dimensional shapes in IT ., To our knowledge , thus far there has been no comparison between the 2D-shape representation of IT neurons , measured with such reduced stimuli , and that of deep CNN models ., It is impossible to predict from existing studies that compared deep CNN activations and neurophysiology whether the deep CNNs , which are trained with natural images , can faithfully model the selectivity of IT neurons for two-dimensional abstract shapes ., Nonetheless , such correspondence between CNN models and single unit selectivity for abstract shapes is critical for assessing the generalizability of CNN models to stimuli that differ markedly from those of the trained task but have been shown to selectively drive IT neurons ., Previously , we showed that a linear combination of units of deep convolutional layers
of CNNs trained with natural images could predict reasonably well the shape selectivity of single neurons recorded from an fMRI-defined body patch 4 ., However , in that study , we adapted for each single unit the shapes to the shape preference of that neuron , precluding a comparison between the shape representation of the population of IT neurons and deep CNNs ., To perform such a comparison , one should measure the responses of IT neurons to the same set of shapes ., Furthermore , the shape set should include variations in shape properties IT neurons were shown to be sensitive to ., Also , the IT response selectivities for such shapes should not trivially be explainable by physical image similarities , such as pixel-based differences in graylevels ., Kayaert et al . 12 measured the responses of single IT neurons to a set of shapes that varied in regularity and the presence of curved versus straight boundaries ( Fig 1 ) ., The first group of stimuli of 12 was composed of regular geometric shapes ( shown in the first two rows of Fig 1 and denoted as Regular ( R ) ) that all have at least one axis of symmetry ., These shapes are simple , i . e . 
, have low medial axis complexity 13 ., The stimulus pairs in each column of these two rows ( denoted by a and b ) differed in a non-accidental property ( NAP ) ., NAPs are stimulus properties that are relatively invariant with orientation in depth , such as whether a contour is straight or curved or whether a pair of edges is parallel or not ., These properties can allow efficient object recognition at different orientations in depth not previously experienced 14–16 ., NAPs can be contrasted with metric properties ( MPs ) , which vary with orientation in depth , such as aspect ratio or the degree of curvature ., The three other groups are all ‘Irregular’ ., They differed from the Regular shapes in that they do not have a single axis of symmetry ., The two shapes in each row of the three Irregular groups differed in the configuration of their concavities and convexities or corners ., The shapes in the Irregular Simple Curved ( ISC ) set all had curved contours ., The Irregular Simple Straight ( ISS ) shapes were derived from the ISC shapes by replacing the curved contours with straight lines ., Thus , the corresponding stimuli in the ISS and ISC shapes differed in a NAP ., Last , the Irregular Complex ( IC ) group was more complex in that the shapes in that group had a greater number of contours ., Kayaert et al . 
12 found that anterior IT neurons distinguished the four groups of shapes ., Importantly , the differences in IT responses amongst the shapes could not be explained by pixel-based gray level differences , nor by HMAX C2 unit differences ., In fact , none of the tested quantitative models of object processing could explain the IT response modulations ., Furthermore , the IT response modulations were greater for the Regular shapes and when comparing the curved and straight Irregular Simple shapes than within the 3 Irregular shape groups , suggesting a greater sensitivity for NAPs than for MPs ( see also 17 , 18 ) ., We reasoned that this shape set and corresponding IT responses were useful to examine to what degree different layers of deep CNNs and IT neurons represent abstract shapes similarly ., We employed deep CNNs that were pretrained to classify ImageNet data 19 , consisting of images of natural objects in scenes ., Hence , the CNNs were not exposed during training to silhouette shapes shown to the IT neurons ., Deep CNNs have a particular architecture with early units having small receptive fields , nonlinear pooling of units of the previous layer , etc ., Such a serial , hierarchical network architecture with increasing receptive field size across layers may in itself , i . e . without training , result in changes in the representational similarity across layers ., To assess whether potential correlations between IT and CNN layer response modulations resulted from classification training or from the CNN architecture per se , we also compared the activations of untrained CNNs with the IT response modulations ., Kayaert et al .
12 also had human subjects sort the same shapes based on similarity and found that human subjects had a markedly higher sensitivity to the difference between the curved and straight simple irregular shapes ( relative to the regular shapes ) than the IT neurons ., We examined whether a similar difference in response pattern between macaque IT neurons and human similarity judgements would emerge in the deep CNNs ., We expected that deeper layers would resemble the human response patterns while the IT response pattern would peak at less deep layers ., Kayaert et al 12 recorded the responses of 119 IT neurons to the 64 shapes shown in Fig 1 ., The 64 shapes are divided into four groups based on their regularity , complexity and whether they differed in NAPs ., We presented the same shapes to 3 deep CNNs: Alexnet 20 , VGG-16 and VGG-19 21 , and measured the activations of the units in each layer of the deep nets ., These deep nets differ in their number of layers , the number of units in each layer and the presence of a normalization stage , but each has a rectifying non-linearity ( ReLU ) and max pooling stages ( Fig 2 ) ., We employed deep nets that were pre-trained in classification of a database of natural images , which were very different in nature from the abstract shape stimuli that we employ here to test the models and neurons ., The aim was to compare the representations of the shapes between IT neurons and each layer of the deep nets ., To do this , we employed representational similarity analyses 22 , 23 , following the logic of second order isomorphism 24 , 25 , and examined the correlation between the neural IT-based similarities and CNN-based similarities in responses to shapes ., We are not trying to reconstruct the shapes based on IT neuron or CNN unit outputs but we are examining whether shapes that are represented close to each other in the neural IT space are also represented close to each other in the CNN layer space ., In a first analysis , we computed
the pairwise dissimilarity between all 64 stimuli using the responses of the IT neurons and the activations in each of the CNN layers ., We employed two dissimilarity metrics: Euclidean distance and 1 – Spearman rank correlation ρ ., The dissimilarity matrices computed with the Euclidean distance metric for the IT neurons and for 5 layers of the trained CNNs are illustrated in Fig 3B and 3C , respectively ., In this and the next figures , we will show only the data for Alexnet and VGG-19 , since VGG-16 and VGG-19 produced similar results ., In addition , Fig 3A shows the pixel-based dissimilarities for all image pairs ., Visual inspection of the dissimilarity matrices suggests that ( 1 ) the pattern of dissimilarities changes from the superficial to deep layers in a relatively similar way in the CNNs , ( 2 ) the dissimilarity matrix of the first layer ( e . g . conv1 . 1 ) resembles the pixel-based similarities ( Fig 3A ) and ( 3 ) the deeper layers more closely resemble the IT neural data ( Fig 3B ) ., We quantified the similarity between the IT shape representation and that of each layer by computing the Spearman Rank correlation between the corresponding pairwise dissimilarities of IT and each layer ., Thus , we could assess to what degree stimuli that produce a very different ( similar ) pattern of responses in IT also show a different ( similar ) pattern of activations in a CNN layer ., We found that for both dissimilarity metrics the similarity between IT neuronal responses and trained CNN layer activations increased significantly with the depth of the layer ., This is shown using the Euclidean distance metric for Alexnet and VGG-19 in Fig 4 ( see S1 Fig for the data of both distance metrics and the 3 networks ) ., In the VGG nets , the similarity peaked at the deepest convolutional layers ( Fig 4 ) and then decreased for the deeper , fully connected layers ., In fact , the Spearman correlations for the last two fully connected layers did not differ significantly from that of the
first convolutional layer in each CNN ( Fig 4 ) ., The decrease in similarity for the deepest layers was weaker in Alexnet ., The peak similarity was similar amongst the 3 nets , with ρ hovering around 0 . 60 , and was larger for the correlation ( mean peak ρ = 0 . 64 ) compared with the Euclidean distance metric ( mean peak ρ = 0 . 58 ) ., To assess the degree to which the models explained the neural data , we computed the reliability of the neural-based distances given the finite sampling of the IT neuron population ., This noise ceiling was computed by randomly splitting the neurons into two groups , computing the dissimilarities for each group , followed by computation of the Spearman rank correlation between the dissimilarities of the two groups ., This split-half reliability computation was performed for 10000 random splits ., Fig 4 shows the 2 . 5 , 50 ( median ) and 97 . 5 percentiles of the Spearman-Brown corrected correlations between the two groups ., The correlations between ( some ) CNN layers and neural responses were close but still below the estimated noise of the neural dissimilarities ., In order to assess to what degree the similarity between neural data and the CNN layers reflects the architecture of the CNNs versus image classification training , we also computed the similarity for untrained networks with random weights ., Fig 3C illustrates dissimilarity matrices computed using Euclidean distances for 5 untrained layers of two CNNs ., Visual inspection suggests little change in the dissimilarity matrices of the different layers of the CNNs , except for fc8 ., Furthermore , the pattern of dissimilarities resembled the pixel-based dissimilarities shown in Fig 3A ., Both observations were confirmed by the quantitative analysis ., The Spearman correlations of the neural data and untrained CNNs increased only weakly with depth , except for a marked decrease in correlation for the last two fully connected layers ., Except for the deep convolutional
and the last two layers , the trained and untrained networks showed similar Spearman correlations of the neural and CNN distances ( Fig 4 ) ., This suggests that overall the similarity between the IT data and the shallow CNN layers is unrelated to classification training but merely reflects the CNN architecture ., Significant differences between trained and untrained CNNs were observed for the deeper convolutional layers ( Fig 4 ) , suggesting that the similarity between IT and the deep convolutional layers depends on classification training ., The similarities for the first fully connected layer ( fc6 and relu6 in Fig 4 ) did not differ significantly between the trained and untrained layers ( except for the correlation metric in AlexNet; S1 Fig ) ., The deepest two ( fully connected ) layers again showed a significantly greater similarity for the trained compared with the untrained networks ., However , this can be the result of the sharp drop in correlations for these layers in the untrained network ., Overall , these data suggest that the shape representations of the trained deep convolutional layers , but not of the deepest layers , show the highest similarity with shape representations in macaque IT ., Receptive field ( RF ) size increases along the layers of the CNNs , allowing deeper layer units to integrate information from larger spatial regions ., The difference in IT-CNN similarity between untrained and trained layers shows that the increase in RF size cannot by itself explain the increased IT-CNN similarity in deeper layers , since untrained CNNs also increase their RFs along the layer hierarchy ., Also , the decrease in similarity between IT responses and the fully connected layers argues against RF size being the sole factor ., Nonetheless , although not the only contributing factor , RF size is expected to matter since arguably small RFs cannot capture overall shape when the shape is relatively large ., Hence , it is possible that the degree of
IT-CNN similarity for different layers depends on shape size , with smaller shapes showing a greater IT-CNN similarity at earlier layers ., We tested this by computing the activations to shapes that were reduced in size by a factor of two in all layers of each of the 3 trained CNNs ., Fig 5 compares the correlations between dissimilarities of the trained Alexnet and VGG-19 networks and IT dissimilarities for the original and reduced sizes , with dissimilarities computed using Euclidean distances ., The stars indicate significant differences between the similarities for the two sizes ( tested with an FDR-corrected randomization test; same procedure as in Fig 4 when comparing trained and untrained correlations ) ., In each of the CNNs ( S2 Fig ) , the IT-CNN similarity increased at more superficial layers for the smaller shape ., The overall peak IT-CNN similarity was highly similar for the two sizes in the VGG networks and occurred at the deep convolutional layers ., For Alexnet , the overall similarity was significantly higher for the smaller shapes in the deep layers ., This analysis indicates that shape size is a contributing factor that determines at which layer the IT-CNN similarity increases , but that for the VGG networks , peak similarity in the deep layers does not depend on size ( at least not for the twofold variation in size employed here ) ., Note that also for the smaller size the IT-CNN similarity drops markedly for the fully connected layers in the VGG networks ., Thus , the overall trends are independent of a twofold change in shape size ., In the preceding analyses , we included all units of each CNN layer ., To examine whether the similarity between the CNN layers and the IT responses depends on a relatively small number of CNN units or is distributed amongst many units , we reran the representational similarity analysis of deep CNN layers and IT neurons for the whole shape set for smaller samples of CNN units ., We took for each network the layer
showing the peak IT-CNN similarity and for that layer sampled 10000 times at random a fixed percentage of units ., We restricted the population of units to those that showed a differential activation ( standard deviation of activation across stimuli greater than 0 ) since only those can contribute to the Euclidean distance ., Fig 6A plots the median and 95% range of Spearman rank correlation coefficients between IT and CNN layer dissimilarities for the whole shape set as a function of the percent of sampled units for two CNNs ., We found that the IT-CNN similarity was quite robust to the number of sampled units ., For instance , for Alexnet , the IT-CNN similarity for the original and the 95% range of the 10% samples overlap , indicating that 315 Alexnet units can produce the same IT-CNN similarity as the full population of units ., Note also that the lower bound of the 95% range is still above the IT-CNN similarities observed for the untrained network ( median Spearman rho about 0 . 40; see Fig 4 ) ., This indicates that the IT-CNN similarity does not depend on a small subset of units , since otherwise the range of similarities ( Spearman rho correlations ) for the 10% samples would be much greater ., The same holds for the other CNNs ( S3 Fig ) , except that these tolerated even smaller percent sample sizes ( for VGG-19 even 0 . 1% , which corresponds to 100 units ) ., The above analysis appears to suggest that the activations of the CNN units to the shapes are highly correlated with each other ., To address this directly , we performed Principal Component Analysis ( PCA ) of unit activations of the same peak CNN layers as in Fig 6 and computed Euclidean distance based dissimilarities between all stimulus pairs for the first , first two , etc .
principal components ( PCs ) , followed by correlation with the neural dissimilarities as done before for the distances computed across all units of a CNN layer ., For both the Alexnet and VGG-19 layers , the first 10 PCs explained about 70% of the variance in CNN unit activations to the 64 stimuli ( Fig 7B ) ., Only the first 3 ( Alexnet ) or 5 ( VGG-19 ) PCs were required to obtain a similar correlation between the model and neural distances as observed when using all model units of the layer ( Fig 7A; about 7 PCs were required for VGG-16; see S4 Fig ) ., This analysis shows that the neural distances between the abstract shapes relate to a relatively low dimensional shape representation in the CNN layer , with a high redundancy between the CNN units ., In the above analyses , we compared the overall similarity of the shape representations in IT and CNN layers ., However , a more stringent comparison between the shape representations in IT and the CNNs involves response modulations for the shape pairs for which Kayaert et al 12 observed striking differences between predictions of pixel-based models or computational models like HMAX and the neural responses ., The average response modulations ( quantified by pairwise Euclidean distances ) for the different group pair comparisons are shown in Fig 8 for the IT neural data , the HMAX C2 layer and the pixel differences ., Kayaert et al 12 showed that the mean response modulation in IT ( Fig 8A ) was significantly greater for the regular shape pairs ( 1–8 in Fig 1 ) than for the 3 irregular shape group pairs , despite the pixel differences between members of a pair being , on average , lower or similar for the regular group than for the 3 irregular groups ( Fig 8D ) ., In addition , the response modulation to ISC vs . ISS was significantly greater than the modulations within IC , ISC and ISS , although the average pixel-difference within the ISC vs .
ISS-pairs was much lower than the pixel-differences within the other pairs ., This differential neural response modulation to ISC vs ISS was present for both members of the ISC and ISS pairs ( a and b members: “ISCa vs ISSa” and “ISCb vs ISSb” ) and thus was highly reliable ., Note that the difference between ISC vs . ISS and the IC and ISS shape groups that is present in the neural data is not present for the HMAX C2 distances ( Fig 8C ) ., Kayaert et al . 12 also reported a relatively higher sensitivity to the straight vs . curved contrast of the ISC vs . ISS comparison relative to the regular shapes in human similarity ratings ( Fig 8B ) , compared with the IT neural data ., In other words , human subjects appear to be more sensitive to the curved versus straight NAP difference than macaque IT neurons ., In a second analysis , we determined whether the marked differences in IT response modulations and human judgements shown in Fig 8 are present in the dissimilarities for the different layers of the deep CNNs ., Fig 9 illustrates the results for 8 layers of VGG-19 ., The left column of the figure plots the distances for the trained network ., The dissimilarities for the first convolutional layer fit the pixel-based distances amongst the shape pairs ( Fig 8D; Pearson correlation between pixel-based distances and first layer distances = 0 . 966 ) , but differ from those observed in IT and for human judgements ., Similar trends are present until the very deep convolutional layers where the dissimilarities became strikingly similar to those observed in macaque IT ( e . g . compare trained conv5 . 4 or pool5 of Fig 9 with Fig 8A ) ., The dissimilarities for the last two layers ( e . g .
trained relu7 and fc8 in Fig 9 ) are strikingly similar to those observed for the human judgements ( Fig 8B ) , and differ from the pattern seen in macaque IT neurons ., Indeed , as noted above , the human judgements differ from the IT responses in their sensitivity for the ISC vs ISS comparison relative to that for the regular shape pairs: for the human judgement distances , the ISC vs ISS distances are greater than for the regular shape distances while for the neural distances both are statistically indistinguishable ( Kayaert et al . 12 ) ., Therefore , we tested statistically for which CNN layer the ISC vs ISS distances were significantly greater than the regular shape distances ( Wilcoxon test ) , thus mimicking the human distances ., We found a significant difference for the very deep VGG-19 layer fc8 ( p = 0 . 039 ) and VGG-16 layers fc7 ( p = 0 . 039 ) , relu7 ( p = 0 . 023 ) , and fc8 ( p = 0 . 023 ) ., Although the deepest Alexnet ( fully connected ) layers showed the same trend , this failed to reach significance ., These tests showed that only the very deep CNN layers mimicked the human judgements ., None of the untrained CNN layers showed a dissimilarity profile similar to that observed in monkey IT or in human judgements ( Fig 9 , right column ) ., In fact , the untrained data more closely resembled the pixel-based distances ( see Fig 8D ) ., Indeed , the Pearson correlation between the pixel-based distances and the conv1 . 1 distances was 0 .
999 for the untrained VGG-19 ., We quantified the correspondence between the neural response dissimilarities of Fig 8A and the CNN layer dissimilarities ( as in Fig 9 ) by computing the Pearson correlation coefficient between the dissimilarity profiles ( n = 6 dissimilarity pairs ) ., The same quantification was performed for the human judgements ( Fig 8B ) and the CNN dissimilarities ( n = 5 pairs ) ., These correlations are plotted in Fig 10A and 10B as a function of layer for two CNNs , trained and untrained ., For the neural data , the correlations are negative for the shallow layers and highly similar for the trained and untrained CNNs ., The negative correlations are a result of the nearly inverse relationship between neural and low-level ( pixel ) differences between the shapes ( Fig 8D ) ., This was not accidental , but by design: when creating the stimuli , Kayaert et al 12 ensured that the NAP differences ( e . g . between ISC and ISS ) were smaller than MP differences ., For both VGG networks ( S5 Fig; Fig 10B ) , there was a sharp increase in correlations at the trained deep 5 .
1 convolutional layer , followed by a decrease of the correlations for the fully connected layers ., This trend was similar , although more abrupt , to that observed for the global dissimilarities of Fig 4 ., For Alexnet , the increase of the correlations with increasing depth of the trained convolutional layers was more gradual , but like the VGG networks , high correlations were observed for the deeper trained convolutional layers ., For the human judgement data , the correlations were already higher for the trained compared with the untrained CNNs at the shallow layers , although still negative ., Like the neural data , there was a marked increase in correlation at the very deep trained layers ., Contrary to the neural data , the correlations for the human judgements continued to increase along the trained fully connected layers , approaching a correlation of 1 at the last layer ., These data show that the average response modulations of IT neurons for the shape groups of Fig 1 correspond nearly perfectly with those of the deeper layers of CNNs , while the differences in human similarity judgements between the groups are captured by the later fully connected layers of the CNNs ., This holds for Alexnet and VGG nets ., Note that the deep CNN layers performed better at predicting the neural IT and human perceptual dissimilarities than the HMAX C2 layer output ( Fig 10C ) ., As for the representational similarity analysis for all shapes ( Fig 6A ) , we also computed the Pearson correlation coefficients between the dissimilarity profiles ( n = 6 dissimilarity pairs ) of the same peak CNN layers and the IT distances for the 6 shape groups ( as in Fig 10 ) for smaller samples of units ., As shown in Fig 6B , we observed similar IT-CNN correlations for the within-group distances up to the 1% and 0 .
1% samples compared with the full population of units for Alexnet and VGG , respectively ., Again , this suggests that IT-CNN similarity does not depend on a small number of outlier CNN units ., The greater tolerance for percent sample size for the VGG units is because the VGG layers consisted of a larger number of units ( the total numbers of units are indicated in the legend of Fig 6 ) ., In addition , we computed the mean distances for the same layers and their correlation with the mean neural modulations as a function of retained PCs ( Fig 7B ) ., Up to 30 PCs were required to obtain a similar correlation between neural and CNN layer distances for the six groups of shapes as when including all units of the layer ( Fig 7B ) ., This suggests that the close to perfect modeling of the mean response modulations across the 6 shape groups required a relatively high dimensional representation of the shapes within the CNN layer ., The particular set of shapes that we employed in the present study was originally designed to test the idea that the shape selectivity of IT neurons reflects the computational challenges posed when differentiating objects at different orientations in depth 12 , 14 ., Here , we show that deep CNNs that were trained to classify a large set of natural images show response modulations to these shapes that are similar to those observed in macaque IT neurons ., We show that untrained CNNs with the same architecture as the trained CNNs , but with random weights , demonstrate a poorer IT-CNN similarity than the CNNs trained in classification ., The difference between the trained and untrained CNNs emerged at the deep convolutional layers , where the similarity between the shape-related response modulations of IT neurons and the trained CNNs was high ., Unlike macaque IT neurons , human similarity judgements of the same shapes correlated best with the deepest layers of the trained CNNs ., Early and many later studies of IT neurons employed shapes
as stimuli ( e . g . 26–31 , 22 , 32–37 ) , in keeping with shape being an essential object property for identification and categorization ., Deep CNNs are trained with natural images of objects in cluttered scenes ., If deep CNNs are useful models of biological object recognition 38 , their shape representations should mimic those of the biological system , although the CNNs were not trained with such isolated shapes ., We show here that indeed the representation of the response modulations by rather abstract , unnatural shapes is highly similar for deep CNN layers and macaque IT neurons ., Note that the parameters of these CNN models are set via supervised machine learning methods to do a task ( i . e . classify objects ) rather than to replicate the properties of the neural responses , as done in classic computational modeling of neural selectivities ., Thus , the same CNN model that fits neural responses to natural images 1–4 also simulates the selectivity of IT neurons for abstract shapes , demonstrating that these models show generalization across highly different stimulus families ., Of course , the high similarity between deep CNN layer and IT neuron activation patterns we show here may not generalize to ( perhaps less fundamental ) shape properties that we did not vary in our study ., Kubilius et al . 39 showed that deep nets captured shape sensitivities of human observers ., They showed that deep nets , in particular their deeper layers , show a NAP advantage for objects ( “geons” ) , as does human perception ( and macaque IT 18 ) ., Although we also manipulated NAPs , our shapes additionally differed in other properties such as regularity and complexity ., Furthermore , our shapes are unlike real objects and more abstract than the shaded 3D objects employed by Kubilius et al .
39 when manipulating NAPs ., In both the representational similarity analysis and the response modulation comparisons amongst shape groups , we found that the correspondence between IT and deep CNN layers peaked at the deep convolutional layers and then decreased for the deeper layers ., Recently , we observed a similar pattern when using deep CNN activations of individual layers to model the shape selectivity of single neurons of the middle Superior Temporal Sulcus body patch 4 , an fMRI-defined region of IT that is located posterior with respect to the present recordings ., The increase with deeper layers of the fit between CNN activations and neural responses has also been observed when predicting with CNN layers macaque IT multi-unit selectivity 40 , voxel activations in human LO 9 and the representational similarity of macaque and human ( putative ) IT 8 , 10 using natural images ., However , the decrease in correlation between CNNs and neural data that we observed for the deepest layers was not found in fMRI studies that examined human putative IT 8 , 10 , although such a trend was present in 6 when predicting CNN features from fMRI activations ., The deepest layers are close to or at the categorization stage of the CNN and hence strongly dependent on the classifications the network was trained on ., The relatively poor performance of the last layers is in line with previous findings that IT neurons show little invariance across exemplars of the same semantic category 41 , 42 , unlike the deepest CNN units 43 ., The question of what the different layers in the various CNN models with different depths represent neurally remains largely unanswered ., Shallow CNN layers can be related to early visual areas ( e . g . V1; V4 ) and deeper layers to late areas ( e . g . IT ) ., However , different laminae within the same visual area ( e . g .
input and output layers ) may also correspond to different layers of CNNs ., Furthermore , units of a single CNN layer may not correspond to a single area , but the mapping might be more mixed with some units of different CNN layers being mapped to area 1 , while other units of partially overlapping CNN layers to area 2 , etc ., Finally , different CNN layers may represent different temporal processing stages within an area , although this may map partially to the different laminae within an area ., Further research in which recordings in different laminae of several areas will be obtained for the same stimulus sets , followed by mapping these to units of different layers in various CNNs , may start to answer this complex issue ., In contrast with IT neurons , human similarity judgements of our shapes | Introduction, Results, Discussion, Materials and methods | Recent studies suggest that deep Convolutional Neural Network ( CNN ) models show higher representational similarity , compared to any other existing object recognition models , with macaque inferior temporal ( IT ) cortical responses , human ventral stream fMRI activations and human object recognition ., These studies employed natural images of objects ., A long research tradition employed abstract shapes to probe the selectivity of IT neurons ., If CNN models provide a realistic model of IT responses , then they should capture the IT selectivity for such shapes ., Here , we compare the activations of CNN units to a stimulus set of 2D regular and irregular shapes with the response selectivity of macaque IT neurons and with human similarity judgements ., The shape set consisted of regular shapes that differed in nonaccidental properties , and irregular , asymmetrical shapes with curved or straight boundaries ., We found that deep CNNs ( Alexnet , VGG-16 and VGG-19 ) that were trained to classify natural images show response modulations to these shapes that were similar to those of IT neurons ., Untrained 
CNNs with the same architecture as trained CNNs , but with random weights , demonstrated a poorer similarity than CNNs trained in classification ., The difference between the trained and untrained CNNs emerged at the deep convolutional layers , where the similarity between the shape-related response modulations of IT neurons and the trained CNNs was high ., Unlike IT neurons , human similarity judgements of the same shapes correlated best with the last layers of the trained CNNs ., In particular , these deepest layers showed an enhanced sensitivity for straight versus curved irregular shapes , similar to that shown in human shape judgments ., In conclusion , the representations of abstract shape similarity are highly comparable between macaque IT neurons and deep convolutional layers of CNNs that were trained to classify natural images , while human shape similarity judgments correlate better with the deepest layers . | The primate inferior temporal ( IT ) cortex is considered to be the final stage of visual processing that allows for object recognition , identification and categorization of objects ., Electrophysiology studies suggest that an object’s shape is a strong determinant of the neuronal response patterns in IT ., Here we examine whether deep Convolutional Neural Networks ( CNNs ) that were trained to classify natural images of objects show response modulations for abstract shapes similar to those of macaque IT neurons ., For trained and untrained versions of three state-of-the-art CNNs , we assessed the response modulations for a set of 2D shapes at each of their stages and compared these to those of a population of macaque IT neurons and human shape similarity judgements ., We show that an IT-like representation of similarity amongst 2D abstract shapes develops in the deep convolutional CNN layers when these are trained to classify natural images ., Our results reveal a high correspondence between the representation of shape similarity of deep
trained CNN stages and macaque IT neurons and an analogous correspondence of the last trained CNN stages with shape similarity as judged by humans . | medicine and health sciences, diagnostic radiology, functional magnetic resonance imaging, neural networks, visual object recognition, vertebrates, social sciences, neuroscience, animals, mammals, learning and memory, magnetic resonance imaging, primates, perception, cognitive psychology, cognition, brain mapping, memory, vision, neuroimaging, old world monkeys, research and analysis methods, computer and information sciences, imaging techniques, monkeys, animal cells, macaque, cellular neuroscience, psychology, eukaryota, diagnostic medicine, radiology and imaging, cell biology, neurons, biology and life sciences, cellular types, sensory perception, cognitive science, amniotes, organisms | null |
1,991 | journal.pcbi.1004849 | 2,016 | A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina | Implantable electrode arrays are widely used in clinical studies , clinical practice and basic neuroscience research and have advanced our understanding of the nervous system ., Implantable electronic devices can be used to record neurological signals and stimulate the nervous system to restore lost functions ., Sensing electrodes have been used in applications such as brain-machine interfaces 1 and localization of seizure foci in epilepsy 2 ., Stimulating electrodes have been used for the restoration of hearing 3 , sight 4 , 5 , bowel control 6 , and balance 7 , and in deep brain stimulation ( DBS ) to treat a range of conditions 8 ., Most neuroprostheses operate in an open-loop fashion; they require psychophysics to tune stimulation parameters ., However , devices that can combine both sensing and stimulation are desirable for the development of a new generation of neuroprostheses that are controlled by neural feedback ., Feedback in neuroprostheses is being explored in applications such as DBS for the enhancement of memory 9 , abatement of seizures 10 , control of Parkinson’s disease 11 , and the control of brain machine interfaces 12 ., Models that can accurately characterize a neural system and predict responses to electrical stimulation are beneficial to the development of improved stimulation strategies that exploit neural feedback ., Volume conductor models are typically used to describe retinal responses to electrical stimulation , however these are computationally intensive and can be difficult to fit to neural response data 13–15 ., Simpler models that can be constrained using neural recordings are necessary for real-time applications ., Linear-nonlinear models based on a spike-triggered average ( STA ) have been successfully used to characterize retinal responses to light 16–19 ., Models that incorporate higher 
dimensional components identified through a spike-triggered covariance ( STC ) analysis have been explored to describe higher order excitatory and suppressive features of the visual system 20–25 ., Generally , STA and STC models make use of white noise inputs and have the advantage that a wide repertoire of possible inputs patterns can be explored ., White noise models have previously been explored to describe the temporal properties of electrical stimulation in the retina 26 , 27 ., Spatial interactions between stimulating electrodes has not been previously investigated ., An example of a stimulation algorithm that could benefit from an accurate description of the spatial interactions is current steering , which attempts to improve the resolution of a device by combining stimulation across many electrodes to target neurons at a particular point 28 ., Two benefits obtained by using neural feedback algorithms are ( 1 ) the accurate prediction of the response to an arbitrary stimulus across the electrode array and ( 2 ) the ability to fit the device to individual patients from the recorded neural responses to a set of stimuli presented in a reasonable amount of time ., Here , we combine whole cell patch clamp recordings from individual retinal ganglion cells ( RGCs ) with stimulation using a multi-electrode array to demonstrate a model with the above advantages ., We find that a simple linear-nonlinear model accurately captures the effects of multi-electrode interactions and estimates the spatial relationship between stimulus and response ., The approach is scalable to a large number of electrodes , which is prohibitive to accomplish with psychophysics ., In contrast to conventional volume conductor models of electrical stimulation 13–15 , our model is straightforward to constrain using neural response data and is orders of magnitude more computationally efficient , making it suitable for use in real-time applications ., Methods conformed to the policies of the 
National Health and Medical Research Council of Australia and were approved by the Animal Experimentation Ethics Committee of the University of Melbourne ( Approval Number: 1112084 ) ., Data were acquired from retinae of Long-Evans rats ranging from 1 to 6 months of age ., Long-Evans rats were chosen for several reasons ., First , rat RGC morphological types have been examined in detail 29 , 30 and have similarities to RGCs found in other species , including the macaque monkey 31 and cat 32 , 33 ., Second , the size of the rat retina is larger than the mouse retina and so we were able to cover the entire stimulating electrode array with half of the retina ., The animals were initially anesthetized with a mixture of ketamine and xylazine prior to enucleation ., After enucleation , the rats were euthanized with an overdose of pentobarbital sodium ( 350 mg intracardiac ) ., Dissections were carried out in dim light conditions to avoid bleaching the photoreceptors ., After hemisecting the eyes behind the ora serrata , the vitreous body was removed and each retina was cut into two pieces ., The retinae were left in a perfusion dish with carbogenated Ames medium ( Sigma ) at room temperature until used ., Pieces of retina were mounted on a multi-electrode array ( MEA ) with ganglion cell layer up and were held in place with a perfusion chamber and stainless steel harp fitted with Lycra threads ( Warner Instruments ) ( Fig 1A ) ., Once mounted in the chamber , the retina was perfused ( 4–6 mL/min ) with carbogenated Ames medium ( Sigma-Aldrich , St . 
Louis , MO ) at room temperature ., The chamber was mounted on the stage of an upright microscope ( Olympus Fluoview FV1200 ) equipped with a x40 water immersion lens and visualized with infrared optics on a monitor with x4 additional magnification ., Electrical stimulation was applied subretinally through a custom-made MEA fabricated on a glass coverslip consisting of 20 platinum stimulating electrodes ( Fig 1 ) ., Each electrode had an exposed disc of 400 μm diameter , and a vertical pitch of 1 mm ., The stimulating area of the MEA spanned an area of 3 . 5 x 3 . 5 mm2 ( excluding the outer ring which was not used ) ., Glass coverslips were cleaned in an oxygen plasma chamber for 20 minutes ( Fig 1B ) ., Next , a positive ( UV-removable ) photoresist ( AZ1415H , Microchemicals ) was spin-coated onto the surface at 4000 revolutions per minute for 60 seconds ( Fig 1C ) ., A laser-printed chrome on soda glass photolithography mask was used to expose a pattern in the photoresist , then developed chemically ( MIF726 , Microchemicals ) revealing openings for electrode pads and tracks ( Fig 1D and 1E ) ., The developed cover slips were loaded into an electron beam deposition chamber ( Thermionics ) and pumped to a vacuum pressure of 1 . 5×10−6 mbar ., A 20 nm titanium adhesion layer was deposited at a rate of 0 . 2 Å/sec , followed by a platinum deposition of 130 nm at a rate of 0 . 
6 Å/sec ., Residual photoresist was removed by soaking in acetone for 30 minutes , rinsing with deionized water , and finally oxygen plasma cleaning for 10 minutes ., For electrode isolation , a negative ( UV-curing ) photoresist ( SU8-2002 , Microchemicals ) was spin-coated onto the coverslip and exposed through a different photolithography mask leaving only metal exposed for stimulating electrodes and contact pads ( Fig 1F and 1G ) ., The entire device was then cured at 200°C on a temperature-controlled hotplate ., Whole cell intracellular recordings were obtained using standard procedures 34 at room temperature ., The main reason for recording at room temperature was to ensure that recordings lasted for many hours ., To obtain a whole cell recording , a small hole was made in the inner limiting membrane to expose a small number of RGC somas ., A pipette was then filled with internal solution containing ( in mM ) 115 K-gluconate , 5 KCl , 5 EGTA , 10 HEPES , 2 Na-ATP , 0 . 25 Na-GTP ( mosM = 273 , pH = 7 . 3 ) , Alexa Hydrazide 488 ( 250 mM ) , and biocytin ( 0 . 
5% ) ( Fig 1A ) ., Initial pipette resistance in the bath ranged between 5–10 MΩ ., Prior to recording , the pipette voltage was nulled , pipette resistance was compensated with the bridge balancing circuit of the amplifier , and capacitance was compensated on the amplifier head stage ., Voltage recordings were collected in current clamp mode and amplified ( SEC-05X , NPI electronic ) , digitized with 16-bit precision at 25 kHz ( National Instruments BNC-2090 ) , and stored for offline analysis ., Intracellular recordings lasting up to 4 hours were obtained ., Stimulation artefacts that were present in the intracellular recording were removed offline by setting the membrane potential to a constant value for the duration of the stimulus ., Spikes in the remaining membrane potential waveform could be easily detected by finding peaks that crossed a set value ., Spike times were calculated as the time that the action potential reached its peak value ., Spike delay times were calculated by taking the difference between the spike time and the preceding stimulus offset time ., Intrinsic physiological differences , such as spike width , membrane time constant , and input resistance , among RGC types have been described 35 , 36 , which could lead to differences in response latencies to electrical stimulation ., Therefore , we performed a k-means cluster analysis on the spike latency from stimulation offset time ., The number of clusters ( k ) to fit was set manually by visual inspection of the clusters ., From the cluster analysis , we could detect if there were two or more clusters that might be attributed to direct activation or indirect activation via activation of presynaptic neurons ., Unless otherwise stated , responses to electrical stimulation were evaluated by analyzing the short-latency responses ., Short-latency responses were spikes that fell within two standard deviations of the mean of the shortest-latency cluster ., Long-latency responses fell within the 
cluster that occurred directly after the short-latency response ., Stimulation consisted of biphasic pulses of equal phase duration ( 500 μs ) with an interphase gap ( 50 μs ) and random amplitude ., The random amplitudes were sampled from a Gaussian distribution with variance σ2 ., Fig 2 illustrates the random amplitude pulses applied to all electrodes ., Stimulation waveform signals were generated by a custom-made MATLAB ( MathWorks version 2014a ) interface commanding a multi-channel stimulator ( Tucker Davis Technologies: RZ2 base station and IZ2 multichannel stimulator ) ., All stimulus amplitudes were bounded by the limits of the stimulator ( ±300 μA ) ., Biphasic pulses were applied to all electrodes at a frequency of 10 Hz and the numbers of short-latency responses were recorded ., To avoid overstimulation of a cell , an appropriate value of σ was chosen for each cell ., Three stimulus trains of 500 pulses were initially generated with fixed σ = 50 μA and applied to the tissue ., Next , a new set of stimulus trains were generated using a σ that varied between 50 μA and 250 μA in steps of 50 μA ., The number of spikes detected within 5 ms from the stimulus time was used to compute a response probability ., A sigmoidal curve was fit to the data of σ versus response probability to find the value of σ that resulted in half the maximum level of response ., For cells where the maximum response probability was close to 1 , σ was chosen to be a value that resulted in a response probability of 0 . 
5 ., For other cells that saturated at a response probability less than 1 , σ was a lower value ., Once an appropriate value for σ was chosen for the cell , a stimulus vector ,, St→, , of length 20 ( equal to the number of electrodes ) was generated by sampling each element from a Gaussian distribution ., If the amplitude of stimulation of an electrode exceeded the stimulator limits ( ±300 μA ) , then the amplitude value was discarded and a new value was generated from the distribution ., Each stimulus was applied 3–5 times before a new, St→, was generated ., The experiment continued for as long as the cell’s responses remained stable ( usually 3–4 hours ) ., Once cells started to show signs of deterioration ( e . g . unstable high frequency spontaneous activity ) , the experiment was terminated ., After recording , the retinal tissue was removed from the chamber and mounted onto filter paper ., The tissue was then fixed for ~45 min in phosphate-buffered 4% paraformaldehyde and stored for up to 2 weeks in 0 . 1 M phosphate-buffered saline ( PBS; pH 7 . 4 ) at 4°C ., The tissue was then immersed in 0 . 
5% Triton X-100 ( 20 μg/ml streptavidin conjugated Alexa 488; Invitrogen ) in PBS overnight to expose biocytin-filled cells ., Tissue was washed thoroughly in PBS , mounted onto Superfrost+ slides , and coverslipped in 60% glycerol ., Cells were then reconstructed in 3D using a confocal microscope ( FluoView FV1200 ) ., RGC types were initially identified by their focal light response at the beginning of each experiment ., ON cells showed an increase in spike rate at the onset of light; OFF cells showed an increase in spike rate at the offset of light; ON-OFF cells showed an increase in spike rate at the onset or offset of light ., Additionally , RGC classification was correlated with morphology based on dendritic stratification in the inner plexiform layer ( IPL ) 29 , 30 ., The level of stratification was defined as 0–100% from the level of the inner nuclear layer to the level of the ganglion cell layer ., The stratification depth ( s ( x ) ) was quantified as a percentage of the inner plexiform layer ( IPL ) thickness , according to $s(x) = 100\left(\frac{L_s - x}{L_s - L_e}\right)$ ( 1 ) ., Here , x refers to the depth of a terminal dendrite and $L_s$ and $L_e$ represent the IPL-GCL border and the GCL-INL border of the inner plexiform layer , respectively , where depth decreases from the ganglion cell layer towards the photoreceptor layer ., Cells that stratified in the inner part of the IPL ( s ( x ) ≤ 40% ) are denoted as OFF-cells ., Cells that stratified in the outer part of the IPL ( s ( x ) ≥ 60% ) are referred to as ON-cells ., For all cells in this study , the physiological and morphological classifications correlated well ., Dendritic field sizes were calculated by tracing out a region connecting the dendritic tips of each cell and fitting an ellipse to the region ., The major axis of the ellipse was used to estimate the dendritic field size ., Our objective was to find a mathematical description able to accurately capture the spiking probability of RGCs to subretinal stimulation
using a MEA ., We characterized neural responses using an N-dimensional linear subspace of the stimulus space , combined with a nonlinearity describing the intrinsic nonlinear firing properties of neurons ., Using STC analysis , we derived the lower dimension stimulation subspace that led to a short-latency response in the neuron ., By projecting the raw and spike-triggered stimuli onto the lower dimension subspace , we estimated the intrinsic nonlinearity ., The probability of generating a spike , given stimulus $\vec{S}_t$ , was estimated as $P(R=\mathrm{spike} \mid \vec{S}_t) = \mathcal{N}_N(\vec{v}_1 \cdot \vec{S}_t , \vec{v}_2 \cdot \vec{S}_t , \ldots , \vec{v}_N \cdot \vec{S}_t)$ ( 2 ) , where $\mathcal{N}_N$ represents the static nonlinear function operating on arguments in μA and $\vec{v}_i$ ( i = 1 , 2 , … , N ) represent the N significant principal components ., To find $\vec{v}_i$ ( i = 1 , 2 , … , N ) , the stimulus data were first separated into a matrix containing only stimuli generating a short-latency response , $S_D$ , and a matrix containing all stimuli , $S_T$ ( Fig 3A ) ., We found the low-dimensional linear subspace that best captured the difference between the spike-triggered stimuli and the raw ensemble by performing principal component analysis ( PCA ) on the covariance matrix of the spike-triggered ensemble , $C_s = \mathrm{cov}(S_D)$ ( 3 ) , and comparing it to the variance of the raw ensemble , which was approximately $\sigma^2$ in all stimulus directions due to the Gaussian nature of $S_T$ .
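The subspace-identification step just described — PCA on the spike-triggered covariance of Eq ( 3 ) , compared against the raw-ensemble variance $\sigma^2$ — can be sketched in a few lines ., The paper’s analysis was done in MATLAB; the following is a minimal NumPy rendering under stated assumptions ( the function name `stc_subspace` and the data layout are illustrative , not the authors’ code ) :

```python
import numpy as np

def stc_subspace(S_T, spike_mask, sigma):
    """Rank stimulus directions by spike-triggered covariance (STC).

    S_T        : (n_pulses, n_electrodes) raw Gaussian stimulus ensemble.
    spike_mask : boolean array marking pulses that elicited a short-latency spike.
    sigma      : standard deviation of the raw stimulus amplitudes (in the same
                 units as S_T, e.g. muA).

    Returns the eigenvalues/eigenvectors of the spike-triggered covariance
    (Eq 3), sorted by how far each eigenvalue deviates from the raw-ensemble
    variance sigma**2, so the most informative directions come first.
    """
    S_D = S_T[spike_mask]                     # spike-triggered ensemble
    C_s = np.cov(S_D, rowvar=False)           # Eq (3): covariance across electrodes
    evals, evecs = np.linalg.eigh(C_s)        # eigen-decomposition (ascending)
    order = np.argsort(np.abs(evals - sigma**2))[::-1]
    return evals[order], evecs[:, order]
```

Eigenvalues well above $\sigma^2$ flag directions of increased variance ( excitatory components ) , and eigenvalues well below $\sigma^2$ flag suppressive ones , as in the text .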
PCA on $C_s$ produces a set of eigenvectors giving a rotated set of axes in stimulus space and a corresponding set of eigenvalues giving the variance of the spike-triggered ensemble along each of the axes ., Eigenvalues that are greater than the variance of the input stimuli reveal the directions where the spike-triggered stimuli have experienced an increase in variance from the raw ensemble ., Similarly , eigenvalues that are smaller than the variance of the input stimuli reveal directions where the spike-triggered stimuli have experienced a decrease in variance from the raw ensemble ., The eigenvalues that are sufficiently different from the raw ensemble correspond to eigenvectors ( $\vec{v}_i$ , i = 1 , 2 , … , N ) pointing in directions in the stimulus space that carry information about the spiking probability of the neuron ., To test if the neural response could be accurately characterized by a one-dimensional model , we examined how many eigenvalues resulting from PCA were significantly different from chance 20 ., We compared the eigenvalues obtained by PCA on spike-triggering stimuli to a distribution of eigenvalues for a randomly chosen ensemble of stimuli ., This was done by randomly time-shifting the spike train and performing PCA on the corresponding randomized spike-triggered stimuli to recover a new set of eigenvalues ., By repeating this 1000 times , we constructed a distribution of eigenvalues and set a confidence criterion outside of which we presumed the magnitude of the true eigenvalues to be significant ., The confidence criterion used was two standard deviations , or a 95% confidence interval ., If the greatest or least eigenvalue fell outside these bounds , we rejected the null hypothesis that the spike-triggered ensemble was not significantly different from the full ensemble ., We then projected out the axis corresponding to this eigenvalue from the raw data ., We repeated the test until all remaining eigenvalues fell within the bounds of the null
distribution , suggesting that the remaining eigenvalues were insignificant in affecting the variance of the neuron ., Components having an eigenvalue significantly greater than the variance of the randomly time-shifted ensemble were considered to be components that generate an excitatory response in the cell ., Conversely , components that are significantly smaller than the variance of the raw ensemble were considered to be components that suppressed the cell’s response ., For the majority of cells , we found that a simplification to one dimension ( $\vec{v}_1$ ) accurately captured the spike-triggering information , thereby reducing the equation to one dimension ., Using this simplification , Eq ( 2 ) becomes $P(R=\mathrm{spike} \mid \vec{S}_t) = \mathcal{N}_1(\vec{v}_1 \cdot \vec{S}_t)$ ( 4 ) ., Results in the literature indicate that response thresholds to electrical stimulation for some cell types might differ depending on pulse polarity 37 ., To explore differences in response to pulse polarity , we allowed the probability to be described by two different nonlinear functions and we found the electrical receptive fields ( ERFs ) for stimuli having a net effect that was either cathodic-first or anodic-first ., Eq ( 4 ) then becomes $P(R=\mathrm{spike} \mid \vec{S}_t) = \mathcal{N}_+(\vec{w}_+ \cdot \vec{S}_t) + \mathcal{N}_-(\vec{w}_- \cdot \vec{S}_t) + c R_s$ ( 5 ) , where $\mathcal{N}_+$ and $\mathcal{N}_-$ represent static nonlinear functions and $\vec{w}_+$ and $\vec{w}_-$ represent the ERFs for stimuli with positive projections ( $\vec{v}_1 \cdot \vec{S}_t > 0$ , net anodic-first ) and negative projections ( $\vec{v}_1 \cdot \vec{S}_t < 0$ , net cathodic-first ) , respectively ., $R_s$ represents the spontaneous firing rate in Hz and c represents a scaling factor with units of $\mathrm{Hz}^{-1}$ ., To find the nonlinearities and the ERFs , the first principal component ( $\vec{v}_1$ ) was used to divide the stimulus space into positive and negative regions by projecting all stimuli of $S_D$ and $S_T$ onto the first principal component ( Fig 3B ) ., Positive and negative regions were defined by the stimuli having either a positive or negative projection onto the first principal component ., This
produced two spike-triggered stimulus matrices , $S_D^+$ and $S_D^-$ ., The means of the matrices are analogous to the spike-triggered average for net anodic-first and net cathodic-first stimuli 16 , and provide an estimate of the ERFs , $\vec{w}_+$ and $\vec{w}_-$ , respectively ., Fig 3B shows an example of the stimuli projected onto the first two principal components and the ERFs , $\vec{w}_+$ and $\vec{w}_-$ ., After the stimuli were separated into two regions , the nonlinear functions , $\mathcal{N}_+$ and $\mathcal{N}_-$ , were recovered by projecting all stimuli onto the normalized vectors $\vec{w}_+$ and $\vec{w}_-$ and segmenting the projected stimuli into 30 bins ( 15 for the net anodic-first and 15 for the net cathodic-first regions ) such that each bin contained an equal number of spikes ., By comparing the number of spike-eliciting stimuli to the total number of stimuli in each bin , an estimate of the spike probability was generated ., The nonlinear function from Eq ( 5 ) was then fit to the data , with the following equations for the sigmoidal curves: $\mathcal{N}_+(x_+) = \frac{a_+}{1 + \exp(-b_+(x_+ - c_+))}$ ( 6 ) and $\mathcal{N}_-(x_-) = a_- - \frac{a_-}{1 + \exp(-b_-(x_- - c_-))}$ ( 7 ) , where $x_+ = \vec{w}_+ \cdot \vec{S}_t$ and $x_- = \vec{w}_- \cdot \vec{S}_t$ ., Coefficients $a_+$ and $a_-$ represent scaling factors that determine the saturation amplitudes , $b_+$ and $b_-$ represent the gain of the sigmoidal curves , and $c_+$ and $c_-$ represent the thresholds ( 50% of the saturation level ) for the net anodic-first and net cathodic-first stimulation , respectively ., Note that the vectors $\vec{w}_+$ and $\vec{w}_-$ might not necessarily be parallel to each other , nor parallel to $\vec{v}_1$ ., This may result in electrodes that differentially influence the neuron’s response to anodic-first or cathodic-first stimulation ., To test the similarity between the positive and negative ERFs , we calculated the correlation coefficient between them ., A correlation coefficient close to -1 indicated that the two ERFs are approximately equal in magnitude but opposite in sign , and therefore the cell was equally influenced by the
same combination of electrodes ., A value closer to 0 indicates that the two ERFs have no correlation , and therefore the cell is not influenced by the same electrodes ., Positive correlation coefficients were not expected and did not occur ., The spatial extent over which a cell was influenced by electrical stimulation was estimated by computing a weighted mean of the distance between the cell and the electrodes ., The distance between the cell and each electrode center was weighted by the electrode’s influence on the cell as given by the ERFs ., The weighted mean for both ERFs was given by $D_+ = \frac{\sum_{i=1}^{20} w_i^+ d_i}{\sum_{i=1}^{20} w_i^+}$ ( 8 ) and $D_- = \frac{\sum_{i=1}^{20} w_i^- d_i}{\sum_{i=1}^{20} w_i^-}$ ( 9 ) , where $w_i^+$ and $w_i^-$ are the weights given by $\vec{w}_+$ and $\vec{w}_-$ respectively , and $d_i$ is the distance between the cell and electrode i ., To test which electrodes significantly affected the cell’s response , $\vec{w}_+$ and $\vec{w}_-$ were recalculated 1000 times by projecting the data onto the first eigenvector of the randomly time-shifted distribution of eigenvectors from the significance test ., A distribution for $\vec{w}_+$ and $\vec{w}_-$ was constructed against which the true $\vec{w}_+$ and $\vec{w}_-$ could be compared ., Electrodes from the true $\vec{w}_+$ and $\vec{w}_-$ were compared to the root mean square ( RMS ) of the distribution and electrodes that were larger than the RMS bounds were considered significant ., For cells where more than one significant principal component was obtained from the significance test , we compared the variance explained by the first principal component to that of the next most significant component ., This was done by comparing the separation of the first eigenvalue $e_1$ from the mean of the randomized distribution of eigenvalues ( $\bar{e}_{rnd}$ ) with the separation between the next most significant eigenvalue ( $e_2$ ) and the same mean ., The strength was defined as $G = \frac{|e_1 - \bar{e}_{rnd}|}{|e_2 - \bar{e}_{rnd}|}$ ( 10 ) , and gives a relative measure of how much larger $e_1$ is compared to the next most significant eigenvalue ., $\bar{e}_{rnd}$ was calculated
from the first iteration of the hypothesis test ., For each cell , 80% of the data were used to fit the model parameters , while the remaining data were used to validate the model ., To obtain a quantitative estimate of the performance of the model , the probability of response given a stimulus was calculated from the validation data and compared to the model prediction ., The validation stimuli were assigned 1 if they produced a direct response and 0 otherwise ., Each stimulus was also assigned a predicted probability using the model ( Eq ( 5 ) ) recovered from the training data ., The stimuli were then binned into segments in the range of 0 to 1 depending on their predicted probability and an actual probability for each bin was calculated as the fraction of stimuli assigned a 1 ., The mean square error ( $E_{MS}$ ) was then calculated as $E_{MS} = \frac{1}{B} \sum_{i=1}^{B} (\hat{P}_i - P_i)^2$ ( 11 ) , where B is the number of bins , $\hat{P}_i$ is the predicted probability , and $P_i$ is the calculated probability from the data for a particular bin ., For all cells , B was equal to 10 ., The root mean square error ( $E_{RMS}$ ) of the model , given by $E_{RMS} = \sqrt{E_{MS}}$ ( 12 ) , was used as a quantitative measure of the model accuracy ., We also compared the error of a one-dimensional model to that of a two-dimensional model ., The two-dimensional spike probability was estimated by $P(R=\mathrm{spike} \mid \vec{S}_t) = \mathcal{N}_2(\vec{v}_1 \cdot \vec{S}_t , \vec{v}_2 \cdot \vec{S}_t)$ ( 13 ) , where $\vec{v}_2$ represented the next most significant component , either the second ( excitatory ) or last ( suppressive ) principal component ., To find the two-dimensional nonlinearity ( $\mathcal{N}_2$ ) , a surface was fit to the spike-triggered data projected onto these two most significant components ., The surface fit was obtained using cubic spline interpolation in MATLAB’s curve fitting toolbox ., Once the surface was fit , the validation data were used to compute the mean model error using Eqs ( 11 ) and ( 12 ) ., Intracellular recordings lasting up to 4 hours were obtained
from 25 cells ., This population included 7 ON , 13 OFF , 3 ON-OFF , and 2 cells where 3-D morphological reconstructions were not possible ., Our comparison of histological and physiological results were consistent with those of Huxlin and Goodchild 29: ON center cells stratify in the inner IPL ( 40–100% depth ) , while OFF center cells stratify in the outer IPL ( 0–40% depth ) ., ON-OFF types stratify in both the inner and outer layers of the IPL ., Fig 4 shows an example of an ON-OFF RGC with dendrites stratifying in both inner and outer layers of the IPL ., A summary of the stratification depths for the ON , OFF , and ON-OFF cells are given in Table 1 ., To fit the model parameters , cells were simultaneously stimulated with biphasic pulses on all electrodes , where the amplitude of the pulses were randomly chosen from a Gaussian distribution of zero mean and standard deviation σ ( here after white noise stimuli ) ., To determine an appropriate value of σ for each cell , three short stimulus trains ( approximately 3 min each ) of white noise stimuli with different σ were initially presented to the cell ( σ varied from 50–250 μA in steps of 50 μA ) ., The number of times the cell responded within 5 ms was used to obtain a response probability ., Each cell responded with a different maximum response probability when stimulated with white noise at the highest value of σ; some cells could respond with a spike probability close to one , while others only responded with a spike probability less than one ., However , cells that responded to fewer pulses tended to show an increased level of long-latency activity ( > 5 ms ) , most likely due to intensified network activation ., The value of σ used for white noise stimulation for each cell in the rest of the experiment was the value corresponding to half the saturation level ., Fig 5 shows examples of two cells with different σ values ., Cell 2 responded with a spike probability close to one even at low σ values while 
cell 1 responded maximally with a spike probability of around 0 . 6 ( Fig 5A ) ., The value of σ used for white noise stimulation for cell 1 was 85 μA and for cell 2 was 145 μA ., Note that we used this method to calibrate our experiments and the nonlinear curves do not show the maximum probability of firing , as each point is an average over a variety of stimulus amplitudes ., Following this calibration , longer trains of white noise stimulation ( approximately 2 minutes each ) with the corresponding value of σ for each cell were used to obtain data for recovering the model parameters ., The corresponding Gaussian distributions for cells 1 and 2 are shown in Fig 5B ., The experiment for each cell lasted approximately 3–4 hours ., Stimulation artefacts were present in the recordings that could be removed by blanking without affecting the ability to detect the cells’ spikes ., Fig 6A shows examples of some of the spiking patterns observed during experiments:, ( i ) a failed anodic-first stimulus ,, ( ii ) a successful short-latency anodic-first stimulus ,, ( iii ) a successful short-latency cathodic-first stimulus , and, ( iv ) a successful long-latency cathodic-first stimulus ., The top panel in each subplot shows the raw recording and the bottom panels show the same signals with the artefact removed by blanking ., Also shown in the bottom panels are the thresholds used to detect spikes ( horizontal lines ) ., These figures show that spikes could be easily identified without interference from the stimulus artefact ., The spike latencies after a stimulus pulse were analyzed for each cell ., Some cells produced a bimodal distribution attributed to the short- and long-latency responses ( N = 13 ) , with four cells showing overlapping distributions for the two latencies ., The remaining cells only produced short-latency spikes that were close to the timing of the stimulus pulse ( N = 8 ) ., Fig 6B depicts the spike latencies for all cells ., The average short-latency 
cluster mean for all cells was 1.75 ms from stimulus offset (SD 1 ms). The longest short-latency cluster for a cell had a mean of 4.35 ms (SD 1.37 ms). Fig 6C and 6D show the distributions of spike latencies for two sample cells, along with fitted Gaussian distributions obtained from the cluster analyses. Fig 6C shows a cell with two distinct clusters, with a short-latency cluster mean at 1.95 ms. Fig 6D shows two overlapping clusters with the short-latency cluster mean at 4.35 ms. Our aim was to find a mathematical description that could accurately capture the response probability of neurons to concurrent stimulation using a MEA. To do this we first performed a principal components analysis on the ensemble of stimuli that triggered a short-latency spike. For all cells we found that the neural response could be well predicted by projection onto a subspace spanned by the first principal component, v₁. The variance explained by v₁ was significantly higher than that of the next greatest component, v₂, suggesting that the spiking information was well captured by v₁. Fig 7A shows the spike-triggered probabilities projected onto v₁ and v₂ for the sample cell in Fig 3B. The histograms show the number of stimuli (gray) and responses (black) along each axis; the ratio of the bars of the two histograms is used to determine the spike probabilities along each axis. From the histograms, it is clear that the distribution of the spike-triggered stimuli was bimodal along the v₁ axis; however, it remained unimodal along v₂, similar to the Gaussian distribution of the full stimulus ensemble. A statistical hypothesis test was used to determine how many eigenvalues recovered by PCA revealed a significant amount of the spike-eliciting information. The test compares the eigenvalues recovered from the data to a set of eigenvalues produced by randomly time-shifting the spike train and performing PCA on the new set of stimuli
., From the set of time-shifted eigenvalues , a 95% confidence limit was set to determine | Introduction, Materials and Methods, Results, Discussion | Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system ( e . g . cochlear , retinal , and cortical implants ) ., Currently , most neural prostheses use serial stimulation ( i . e . one electrode at a time ) despite this severely limiting the repertoire of stimuli that can be applied ., Methods to reliably predict the outcome of multi-electrode stimulation have not been available ., Here , we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells ( RGCs ) stimulated with a subretinal multi-electrode array ., In the model , the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability ., The low-dimensional subspace is estimated using principal components analysis , which gives the neuron’s electrical receptive field ( ERF ) , i . e . the electrodes to which the neuron is most sensitive ., Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes ., We find that the model captures the responses of all the cells recorded in the study , suggesting that it will generalize to most cell types in the retina ., The model is computationally efficient to evaluate and , therefore , appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy . | Implantable multi-electrode arrays ( MEAs ) are used to record neurological signals and stimulate the nervous system to restore lost function ( e . g . 
cochlear implants ) ., MEAs that can combine both sensing and stimulation will revolutionize the development of the next generation of devices ., Simple models that can accurately characterize neural responses to electrical stimulation are necessary for the development of future neuroprostheses controlled by neural feedback ., We demonstrate a model that accurately predicts neural responses to concurrent stimulation across multiple electrodes ., The model is simple to evaluate , making it an appropriate model for use with neural feedback ., The methods described are applicable to a wide range of neural prostheses , thus greatly assisting future device development . | medicine and health sciences, action potentials, engineering and technology, signal processing, membrane potential, ocular anatomy, retinal ganglion cells, electrophysiology, neuroscience, surgical and invasive medical procedures, mathematics, functional electrical stimulation, ganglion cells, algebra, white noise, membrane electrophysiology, bioassays and physiological analysis, research and analysis methods, animal cells, electrophysiological techniques, cellular neuroscience, retina, electrode recording, linear algebra, cell biology, anatomy, physiology, neurons, biology and life sciences, cellular types, physical sciences, afferent neurons, ocular system, eigenvalues, neurophysiology | null |
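The modeling pipeline described in this row can be sketched end to end on synthetic data: spike-triggered PCA to recover the one-dimensional stimulus subspace, a histogram-ratio nonlinearity, and the binned calibration error of Eqs (11) and (12). Everything below (the 9-electrode layout, the hidden filter w_true, the sigmoidal response rule, and all constants) is an illustrative assumption, not the recorded dataset or the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: white-noise amplitudes on 9 electrodes.
n_stim, n_elec = 20000, 9
S = rng.normal(0.0, 1.0, size=(n_stim, n_elec))

# Hidden "electrical receptive field" and sigmoid used only to generate responses.
w_true = np.array([0.1, 0.3, 1.0, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0])
p_true = 1.0 / (1.0 + np.exp(-(S @ w_true - 1.5)))
R = rng.random(n_stim) < p_true                      # 0/1 "direct responses"

n_tr = int(0.8 * n_stim)                             # 80% fit / 20% validate
S_tr, R_tr, S_va, R_va = S[:n_tr], R[:n_tr], S[n_tr:], R[n_tr:]

# Leading component of the (uncentered) spike-triggered ensemble spans the
# one-dimensional stimulus subspace that predicts spiking.
ste = S_tr[R_tr]
evals, evecs = np.linalg.eigh(ste.T @ ste / len(ste))
v1 = evecs[:, -1]                                    # most significant component

# One-dimensional nonlinearity: ratio of response to stimulus histograms.
edges = np.linspace(-4.0, 4.0, 17)
def bin_of(x):
    return np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)
idx = bin_of(S_tr @ v1)
n_all = np.bincount(idx, minlength=16).astype(float)
n_spk = np.bincount(idx[R_tr], minlength=16).astype(float)
nl = np.divide(n_spk, n_all, out=np.zeros_like(n_spk), where=n_all > 0)

# Validation: binned calibration error, Eqs (11)-(12), with B = 10 bins.
p_hat = nl[bin_of(S_va @ v1)]
B = 10
bins = np.clip((p_hat * B).astype(int), 0, B - 1)
sq_err, used = 0.0, 0
for b in range(B):
    m = bins == b
    if m.any():
        sq_err += (p_hat[m].mean() - R_va[m].mean()) ** 2
        used += 1
e_rms = float(np.sqrt(sq_err / used))
print(f"E_RMS = {e_rms:.3f}")
```

Because the sign of an eigenvector is arbitrary, the recovered v1 may be flipped relative to the hidden filter; the histogram-ratio nonlinearity is learned on the same projection, so predictions are unaffected.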
1,761 | journal.pcbi.1000536 | 2,009 | Antigenic Diversity, Transmission Mechanisms, and the Evolution of Pathogens | There are two major principles by which pathogens avoid their elimination: escaping the host immune response via antigenic variation or immune evasion , or transmission to a new immunologically naive host ., Directly transmitted pathogens which cause chronic diseases , such as many sexually transmitted infections ( STIs ) , tend to rely more on the former , while many acute infections , for instance measles , rely more on high transmissibility ., Indeed pathogens such as measles show very little antigenic diversity , with immune responses being strongly cross-reactive between strains ., There are then those pathogens which have intermediate levels of both immune escape and transmissibility — such as influenza , rhinovirus and RSV ( here referred to as FLIs — flu-like infections ) ., The evolutionary success of directly transmitted pathogens can also be seen to depend on the nature , frequency and structure of contacts between hosts ., Infections transmitted to a small number of hosts ( per time unit and infected individual ) via intense contact ( e . g . , via fluids ) are usually caused by pathogens of high antigenic diversity and long duration of infection , while those transmitted via casual contact ( e . g . 
, via aerosol ) with a large number of hosts may typically have lower diversity and much shorter durations of infection ., While many of the evolutionary constraints are different 1 , 2 , vector-borne infections typically fall in the former of these two classes 3 , 4 ., The relationship between so-called infection and transmission modes with respect to substitution rates of RNA viruses has been investigated in 5 ., It is straightforward to explain the long duration of infection and consequent antigenic diversity of sexually transmitted or blood-borne infections: the frequency of relevant contacts between hosts is low , meaning infection needs to be extended to ensure the reproduction number ( the number of secondary cases per primary case 6 ) exceeds one ., However , many childhood diseases ( ChDs ) — at least those caused by RNA viruses — would also seem to have the genetic potential to prolong their survival within one host via by generating antigenic variants ., The fact this is not observed is much harder to explain ., At its root are the tradeoffs between maximizing between-host transmissibility and within-host duration of infection , and these are what we focus on exploring in this paper ., The molecular genetic basis of transmissibility is still poorly understood for most pathogens ., However , all other things being equal , the level of pathogen shedding by a host ( whatever route is relevant ) must be positively correlated with infectiousness ., A first-pass analysis might therefore postulate that overall transmissibility ( as quantified by the basic reproduction number , ) might be proportional to the total number of pathogen copies produced during an infection — the cumulative pathogen load ., Past work using a simple model of the interaction between a replicating pathogens and adaptive host immune responses examine what rate of antigenic diversification within the host would maximize cumulative pathogen load 7 ., This showed that the combination of 
resource-induced (whether nutrients or target cells) limits on peak pathogen replication rates and an ever more competent immune response mean that the optimal strategy is not to diversify as rapidly as possible, but instead to adopt an intermediate rate of diversification. In addition, there are further tradeoffs associated with high mutation rates — the ultimate being the error catastrophe associated with error rates in genome replication which exceed those seen in RNA viruses 8–11. However, the assumption that transmission fitness (as quantified by R0) is linearly proportional to total pathogen load is clearly naïve. The instantaneous hazard of infection for a susceptible host in contact with an infected host at a point in time may indeed be linearly related to pathogen load at that time, but going from this assumption to a calculation of the overall reproduction number is far more complex than simply calculating the area under the pathogen load curve. Integrating a hazard over the finite time of contact gives an exponential dependence between the probability of infection and pathogen load, i.e., P(infection) = 1 − exp(−V/V₀) for pathogen load V and threshold V₀. Such an expression fits experimental data 12 on the relationship between HIV viral load and transmission rates well (cf. Fig.
1). This means the threshold parameter represents a pathogen load below which the probability of infection declines rapidly, and above which it rapidly saturates to some maximal value. Hence the threshold can be thought of as the characteristic pathogen load required for transmission — though it is not a true minimum infectious dose — there is a finite probability of infection below the threshold, but that probability decays exponentially fast with reducing load. A key insight (and assumption) of the work presented here is that while we might expect pathogens to be able to evolve to reduce (or increase) this threshold, there are fundamental physical constraints imposed by transmission routes on the minimum value attainable. An STI might have a minimum threshold approaching a single pathogen particle (e.g. virion) but, for respiratory infections, the much lower proportion of all pathogen particles emitted from a host which have any chance of contacting epithelial tissues of a susceptible host (even conditioning on a susceptible host being in the near vicinity of the infected individual) necessarily means that the threshold must be orders of magnitude larger for such pathogens. We will show that there is a critical value of the threshold above and below which two different sets of pathogen types are evolutionarily favored (in terms of having maximal R0). Within each set, the particular type which has maximal R0 will be seen to depend on the local structure of the contact network between hosts. Our approach is to construct a model of within-host pathogen dynamics which incorporates adaptive host immunity and antigenic diversification. The key output from this model is how pathogen load varies through time during an infection. We then calculate the basic reproduction number, R0, for that infection assuming a particular local contact network structure and frequency of contacts. The within-host model developed here is an extension of a model studied earlier by one of us 7. Our work builds on a range of past
work examining the tradeoffs between within-host replication and persistence, antigenic variation and between-host transmission success, initiated by 13, and followed by 14, 15, which first include immune response and explore cross-immunity. More recent studies, to mention a few, investigate pathogen evolution under limited resources 16, include virulence 17, consider the immunological response in more detail 18, examine the impact of between-host contact structure on pathogen evolution 19, 20, and explore host-pathogen co-evolution 21, 22. We use R0 as our fitness measure for determining evolutionarily optimal phenotypic strategies. We do not explicitly model competition between pathogen strains with different phenotypes co-circulating in a host population, since for infinite populations, R0 has been shown to be the fitness measure which determines the outcome of such competition 23. This holds even when comparing strains with different rates of antigenic diversification — if the strain with lower R0 induces no long-lived immunity in the host (giving SIS dynamics) and the higher-R0 strain induces life-long immunity (giving SIR dynamics), the higher-R0 strain will still always (eventually) outcompete the lower-R0 strain. There are limitations to the use of R0 as a fitness measure (further considered in the Discussion) — for instance, in situations where strains interact asymmetrically via cross-immunity, or when populations are small and stochastic extinction is significant. In addition, while we take account of local (egocentric) network structure in defining R0 in our analysis, large-scale network structure might also affect the determinants of evolutionary fitness. However, we feel these limitations are outweighed for an initial analysis by the analytical and computational tractability afforded by use of a relatively simple transmission measure, and the consequent ability not to rely on unintuitive large-scale simulations. We do not
explicitly consider how a pathogen could evolve its biological characteristics to maximize transmission fitness ( i . e . the evolutionary trajectory a pathogen would take through parameter space ) ., There are undoubtedly many constraints on the possible paths which pathogens can take 24 , however , and exploring how these affect , for instance , pathogen adaptation to a new host species , will be an important topic for future work ., The multi-strain model used extends past work 7 by adding cross-immunity between strains ( see Methods for details ) ., The infection within one host starts with a single strain , with further strains arising through random mutation ., All strains compete for resources ( e . g . target cells ) to replicate ., Immune responses to strains are assumed to be predominantly strain-specific , albeit with a degree of cross-immunity , the strength of which decays with the genetic distance between strains ., Pathogen replication depletes resource , and independently from immunity , limits to pathogen growth are set by the replenishment rate of resource ., This quantity only determines the short-term dynamics of the model whereas immunity is also responsible for the long-term behavior ., The dynamics of the model is characterized by an initial period of exponential growth of the pathogen load , which eventually slows due to immune responses and resource limitations ., One observes a latency period and an initial peak ., Pathogen load then declines exponentially ., If the trough load of a pathogen strain drops below a threshold level we assume the pathogen is eliminated from the host ( to avoid persistence at unrealistically low , fractional , loads ) ., However if a novel strain emerges before the seed strain goes extinct , pathogen load can recover , so long as there is sufficient resource available and cross-immunity is not too strong — leading to a second , albeit lower peak in pathogen load ., Further peaks in pathogen load can occur via 
the same mechanism. The rate at which new strains arise is the most important determinant of the number of pathogen load peaks seen and thus the overall duration of infection. Less intuitively, this rate also determines the size of the initial peak (discussed below). Since mutation is modeled stochastically, we average over multiple realizations (e.g. Fig. 2A, B) of the model to calculate an average pathogen load distribution over time (Fig. 2C). The average distribution consists of a first latency period, a large initial peak, a second latency period and possibly an irregular oscillating part of low pathogen load. The point at which the viral load vanishes determines the duration of infection. We systematically calculate average pathogen load curves from the within-host model for wide ranges of two biological parameters: the antigenic mutation rate (i.e., the rate of mutations which lead to antigenically novel strains) and the pathogen replication rate. These two parameters span what we call pathogen parameter space, in which evolutionarily favored pathogens are represented by points that are associated with maximal fitness values. From the discussion in the introduction, we can immediately identify the cumulative pathogen load and duration of infection as epidemiologically relevant quantities. Fig. 3A, B show these as a function of the two parameters. In addition, Fig. 3C shows a quantity — interpolating between the two former — evaluated only for the initial period of the infection (utilizing the expression relevant for transmission, quantified at the initial peak of the pathogen load). We will see below that all the surfaces shown in Fig. 3A–C crudely represent fitness surfaces associated with three distinct pathogen types. The plots in Fig.
2 show the corresponding within-host dynamics for the different pathogen types ., The within-host dynamics generate a tradeoff between initial peak pathogen load and antigenic diversity: high initial peak load corresponds to low diversity and vice-versa ( see Methods for more details ) ., This tradeoff has implications for transmission , giving an enhanced spread of pathogens of low antigenic diversity during the initial peak of pathogen load ., This effect explains the emergence of ( ChD-like ) infections with short durations of infection within our model framework ( Fig . 3C vs . 3F ) ., Long durations of infections ( Fig . 3B ) are also obtained , as expected , for pathogens with greater antigenic variation ., To calculate the reproduction number ( i . e . , the pathogen fitness ) , we model a dynamic contact network in the neighborhood of one initially infected host ., The profiles of pathogen load over time obtained from the within-host model then determine the infectiousness of the infected host to its neighbors ., ( We utilize the mean-load profiles averaged over individual hosts . ), Epidemiological dynamics are determined by 4 parameters ., Two of these relate to properties of the transmission route: the infectiousness parameter and the contact rate between hosts ., Together these define a two-dimensional parameter space we term transmission space ., The other two define properties of the contact network between hosts: the replacement rate of neighbors and the cliquishness/clustering of the network ( i . e . , the proportion of pairs of contacts of a host who are also contacts of each other ) ., These two parameters define what we term contact space ., We build a model ( cf . 
Methods) incorporating these 4 parameters (plus implicitly the within-host pathogen space parameters) to calculate the number of first-generation infections from an infected individual in an entirely susceptible population. Varying the 4 parameters of transmission and contact space, we obtain three different classes of fitness landscapes over pathogen space — as represented by Fig. 3D–F. The maxima of each landscape differ with respect to their antigenic mutation rate (and hence the resulting level of antigenic diversity) and within-host pathogen replication rate. By changing the contact rate and keeping the other transmission as well as the contact space parameters fixed, one can shift between these classes. In general (as shown further below), low, intermediate, and high contact rates induce moderate, high, and low antigenic diversity, respectively, as evolutionarily favored outcomes (represented by the locations of the fitness maximum in Fig. 3D–F). There are clear similarities between the three classes of fitness landscapes (Fig. 3D–F) and the different within-host infection characteristics plotted in Fig. 3A–C. Low contact rates induce landscapes that resemble the cumulative pathogen load, intermediate contact rates give landscapes resembling the duration of infection surface, and high contact rates map onto the surface of Fig. 3C which characterizes the relative importance of the initial peak in the pathogen load profile. We classify the optima of these 3 classes of fitness landscape as infection types, labeling them A, B, and C, respectively. Varying the infectiousness parameter can also move the fitness landscape between these types — as the infectiousness threshold tends to its minimum (the STI limit), the fitness landscape becomes more similar to the duration of infection surface (Fig 3B), while for large thresholds (the FLI limit), it becomes more similar to the cumulative pathogen load surface (Fig.
3A); cf. (7) and (6). It is important to note that both of these limits involve substantial antigenic diversity — when transmission fitness is dominated by cumulative pathogen load (infection type A), moderate antigenic diversity is seen, and when infection duration dominates fitness (infection type B), high antigenic diversity is selected for. Neither maps onto the special case of infection type C (Fig. 3F), in which optimal transmission fitness is achieved by a set of parameters giving very low antigenic diversity (in essence a single strain). For low antigenic diversity to be optimal, it is necessary for fitness to be dominated by the peak pathogen load achieved during primary infection (i.e., the first peak of pathogen load). Varying the transmission and contact space parameters more systematically, one can map out the regions of parameter space for which particular infection types are optimal (Fig. 4). This shows how the emergence of pathogens of different types depends on the properties of the between-host contact network. Pathogens with low antigenic diversity (and thus short infectious periods) are favored when network cliquishness is high (i.e., when an individual's contacts are contacts of each other — as is the case for household and school contacts) and the rate of turnover of network neighbors is low (again the case for household and school contacts). So far we have assumed only the pathogen space parameters (the antigenic mutation rate and the replication rate) can change during pathogen evolution. Now we examine making the infectiousness threshold a parameter which can evolve under selection — albeit with constraints on its lower bound set by the transmission route of the pathogen concerned. Fig.
5 shows the results as a function of contact rate for two different choices of contact space parameters and lower bounds on the infectiousness threshold parameter, suitable for a respiratory pathogen and an STI respectively. Reproduction numbers (Fig. 5B) lie in the expected range, and the three regimes of antigenic diversity (corresponding to the types A/B/C) can be found in the evolutionarily optimal values of the antigenic mutation rate (Fig. 5A, C). Note that only type A and type C diversity is seen for the respiratory pathogen parameter choices, while only type B is seen for the STI parameter set. Indeed for the STI parameter set, the evolutionary stable state is independent of the contact rate, and is determined by the infectiousness threshold evolving to its minimum value. As expected, the evolutionarily optimal value of the infectiousness parameter (Fig. 5B) is always close to the minimal attainable value, except in the type C pathogen regime (where cliquishness is necessary; cf. Fig. 4). The reason for the deviation from the minimum value lies in a reduced local network saturation, which is characteristic for type C: concentrating infectiousness over the shortest possible time period (and consequently lengthening the latent period) shortens the overlap between generations of infections, and this reduces the chance that the secondary cases of an index case infect remaining susceptible contacts of the index (before the index can infect them). The effect (which yields an enlarged susceptible number in (6)) is minor, however — the difference between the optimal value and the minimum bound set for a pathogen type is typically very small. The evolutionarily optimal replication rate is always low for STI-like contact parameters (giving type B pathogens), reflecting the need for long-lived infections, but shows greater variability for respiratory pathogen parameter regimes (Fig. 5D) — being high in the type A regime, but low for type C.
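The dependence of fitness on the infectiousness threshold can be made concrete with a minimal numerical sketch. The load profiles, threshold values and contact rate below are assumptions chosen for illustration, and the calculation ignores the network structure and local saturation effects included in the paper's full model:

```python
import numpy as np

t = np.linspace(0.0, 300.0, 30001)   # time since infection, in days
dt = t[1] - t[0]

# Two illustrative within-host load profiles (arbitrary units, assumed shapes):
# an "acute" strain with one high early peak, and a "chronic" strain whose
# antigenic variants sustain a much lower load for far longer.
v_acute = 1e6 * np.exp(-(((t - 5.0) / 2.0) ** 2))
v_chronic = np.where(t < 250.0, 1e3, 0.0)

def r0(load, threshold, contact_rate):
    """Expected secondary infections, ignoring network saturation: contacts
    arrive at a constant rate, each transmitting with prob 1 - exp(-V/threshold)."""
    p_infect = 1.0 - np.exp(-load / threshold)
    return contact_rate * float(p_infect.sum()) * dt

c = 0.5  # contacts per day (assumed)
for theta in (1.0, 1e5):  # low (STI-like) vs high (respiratory-like) threshold
    print(f"threshold={theta:g}: acute R0={r0(v_acute, theta, c):.1f}, "
          f"chronic R0={r0(v_chronic, theta, c):.1f}")
```

With the low threshold, the long-lived low-load strategy yields the larger R0; with the high threshold, only the acute peak transmits appreciably, mirroring the phase-transition-like change of transmission mode described in the text.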
The latter result reflects a tradeoff between the height of the initial peak in pathogen load and the length of the latent period — longer latency, as explained above, can increase the number of direct infections caused by an index case by reducing the overlap between generations of infection. Only higher (minimal) infectiousness values — realistic for ChDs utilizing the respiratory transmission route — increase the optimal replication rate for type C infections (cf. Text S1, Sect. B.2). Note that these results are consistent with a recently formulated hypothesis on tradeoffs between reproductive rate and antigenic mutability 25, proposing a reciprocal relationship between these two (pathogen space) parameters in real-world infections. Re-examining Fig. 4, it is clear that type A infections (green areas) only exist when the infectiousness parameter exceeds some minimum value (indicated on the graphs in Fig. 4 with an arrow). In the absence of constraints, selection for maximal transmissibility will clearly cause the infectiousness threshold to evolve towards 0. Hence the effect of constraints imposing a lower bound on the threshold has a critical effect on what range of pathogen types are expected. We define the value of the lower bound on infectiousness below which infection type A is no longer found as the critical infectiousness threshold. Evolutionary dynamics show a phase transition at this point, as can be seen in Fig. 6, which maps the areas of contact parameter space for which different infection types are seen for choices of the lower bound just above and below the critical point. As discussed already, the transmission route is likely to be the most important determinant of the lower bound, with STIs and other non-airborne pathogens, including those requiring a vector, being likely to achieve a much lower value than respiratory pathogens (as assumed in Fig.
5). This is clear if one views the threshold as quantifying how much shed pathogen is typically wasted to achieve a single infectious contact. We therefore speculate that the critical infectiousness threshold may have a significant biological effect, with STIs — and also vector-borne infections — being within the sub-critical domain (Fig. 6B), and with ChDs and FLIs — not necessarily relying on a respiratory transmission route — being super-critical (Fig. 6A). Within the super-critical regime, the presence of low-diversity ChD-like type C infections depends less on the precise value of the critical infectiousness threshold and more on the contact rate and contact parameters. Infections of type C occur in contact networks with high cliquishness and low replacement rates — but not in the opposite case (cf. presence of blue areas in Figs. 4 and 5A). Vector-borne infections (representing contact networks of large neighborhood sizes or high replacement rates, with cliquishness not playing a role) are thus excluded from being type C. At first sight they seem to be type A, because of large reproduction numbers. A large R0, however, can also be the result of large neighborhood sizes or high replacement rates — immediate from (6) and (8). The quantity that is important in this context is the lower bound on possible infectiousness values, which is small (i.e.
, sub-critical , ) — this identifies vector-borne infections as type B ., The work in this paper was motivated by a desire to understand why the most transmissible human pathogens — archetypal childhood diseases such as measles and rubella — show remarkably little antigenic variation , while less transmissible diseases — such as influenza ( and many other respiratory viruses ) and sexually transmitted diseases show substantial diversity ., Addressing this question requires consideration of how evolvable parameters governing the natural history of infection within a host affect the transmission characteristics of a pathogen in the host population ., We developed a relatively simple multi-strain model of the within-host dynamics of infection ., Pathogen particle consume resource to replicate , and their replication is inhibited by a dynamically modeled immune response with two components: strain-specific immunity , and cross-immunity ., Cross-immunity was assumed to be the key fitness cost of antigenic diversity within the host; the benefit is a much enhanced duration of infection ( and thus transmission ) ., Pathogens which have a low rate of generating new antigenic variants are cleared from the host much faster than those with a high rate of antigenic diversification , but also maximize the initial peak level of parasite load reached prior to clearance ( cf . 
Methods). The second evolvable within-host parameter we considered was the within-host pathogen replication rate. Given the resource-dependent model of replication assumed, this has a more limited effect than in some models, but can set the timescale for pathogen load to initially peak and thus determine the effective latent period of the disease. At the between-host level, we assume a simple relationship between pathogen load and infectiousness which has been shown to be appropriate to model HIV transmissibility 12, and which incorporates the concept of a soft threshold level of pathogen load needed for a substantial level of transmissibility. As argued above, this parameter is perhaps best viewed as the amount of excreted pathogen which is wasted to achieve an infectious contact. For a perfect pathogen, the value could correspond to a single pathogen particle, but in reality the physics of transmission will typically mean the threshold is much higher. The final element we incorporate into the framework developed is contact between hosts, assumed to occur at some rate within a contact network of hosts with a certain mean neighborhood size and cliquishness. We derive a simple model to calculate the reproduction number of a single infected host in this network, allowing for local saturation effects in the network caused by clustering. It is this network-specific reproduction number we have used as our overall measure of pathogen fitness, and we examine what within- and between-host pathogen characteristics maximize fitness for
different types of transmission route and host contact network. Putting these elements together, we found that optimizing reproductive fitness in this way leads to well-defined infection types A, B and C as contact rates (and reproduction numbers) increase (cf. Fig. 5). Types A and B both represent infections with low reproduction numbers, with A being influenza-like and B mapping more to sexually transmitted diseases. When contact rates are very low, only one of these two types is evolutionarily stable, with the stable type being determined by the assumed minimum infectiousness threshold. The latter serves as an order parameter and determines the mode of transmission. Consistently, type A corresponds to a high minimum infectiousness threshold whereas type B results from a low minimum threshold. The change of transmission mode as a function of the transmission threshold is phase transition-like. Infection type C represents childhood diseases with the highest reproduction numbers. This regime is not possible for small network neighborhood sizes or low values of cliquishness (i.e. random networks). It relies on the existence of large, persistent and highly clustered contact neighborhoods. In this context, maximizing the number of secondary infections (and thus overall fitness) requires a pathogen strain able to (a) infect as many of the index host's contacts as possible in as short a time as possible, and (b) minimize the extent to which generations of infections overlap. The latter constraint is a result of network clustering — if secondary cases become infectious while the index case is still infectious, they may deplete susceptibles from the contact neighborhood before the index case has the chance to infect them. A latent period of the same or longer duration as the infectious period results in more discrete generations and maximizes the reproduction number of the index case. The need for a long latent period means the evolutionarily optimal value of the within-host replication rate is relatively low for type C pathogens. The limited antigenic diversity and short infectious periods of type C pathogens are determined by the higher infectiousness threshold and the consequent need to maximize the peak pathogen load attained early in infection. When contact rates are high, the increase in duration of infection resulting from higher rates of antigenic diversity is insufficient to compensate for the reduction in peak pathogen load (and therefore infectiousness) caused by cross-immunity being generated against multiple pathogen strains simultaneously. A single-strain pathogen generating a single immune response is able to produce a larger primary infection peak — though at the cost of being unable to sustain infection further. It is encouraging to see that the classification of infection types our model predicts closely corresponds to many of the pathogen regimes identified in other work 24. However, our focus has been slightly different from that work, which concentrated more on the effect of different
intensities of cross-immunity on between-host phylodynamics. In contrast, we have focused more on examining how differences in transmission routes and contact rates determine pathogen characteristics — though the influence of different levels of cross-immunity could be explored in future work. Furthermore, it is interesting to note that in the context of our model only the concept of a minimal infectiousness threshold — introduced to characterize transmission modes — is necessary to explain the findings of 25 on tradeoffs between reproductive rate and antigenic mutability. Reference to the host's age is not needed here. The key limitation of our analysis is our highly simplified treatment of between-host transmission — namely, using a network-corrected reproduction number as our measure of strain fitness. Doing so assumes evolutionary competition occurring in infinite (non-evolving) host populations over infinite timescales. It would clearly be substantially more realistic to explicitly simulate the transmission process in a large host population. The computational challenges are considerable — while large-scale simulations of influenza A evolution and transmission have been undertaken 7, 26, 27, these have not included within-host dynamics, and have simulated evolution for decades rather than millennia. Other work 20, 28 has simulated the evolution of pathogen strains on a contact network for longer time periods, but only in very small populations, and without modeling within-host dynamics. However, continuing advances in computing performance mean that it may now be feasible to explicitly model multiple strains evolving within hosts and being transmitted independently in a large population. Such an approach would allow exploration of the relationship between antigenic diversity (and cross-immunity) within single hosts and strain dynamics at a population level. Perhaps even more importantly, it would allow extinction processes to be properly captured, while our current approach implicitly assumes fixation probabilities to be 1 even when fitness differences are marginal. Proper representation of finite population sizes and extinction will also allow the evolutionary emergence of childhood diseases (such as measles) as a function of early urbanization to be modeled. A second limitation is that we only consider a single, highly simplified within-host model. Future work to test the sensitivity of our results to the choice of within-host model would be valuable (cf. Text S1, Sect. A, which investigates an extension of the model presented here). That said, we would argue that the key qualitative feature of our within-host model driving the evolutionary results is the tradeoff — mediated by cross-immunity — between the maximum value of parasite load attained in i | Introduction, Results, Discussion, Methods | Pathogens have evolved diverse strategies to maximize their transmission fitness. Here we investigate these strategies for directly transmitted pathogens using mathematical models of disease pathogenesis and transmission, modeling fitness as a function of within- and between-host pathogen dynamics. The within-host model includes realistic constraints on pathogen replication via resource depletion and cross-immunity between pathogen strains. We find that three distinct types of infection emerge as maxima in the fitness landscape, each characterized by particular within-host dynamics, host population contact network structure, and transmission mode. These three infection types are associated with distinct, non-overlapping ranges of levels of antigenic diversity and well-defined patterns of within-host dynamics and between-host transmissibility. Fitness, quantified by the basic reproduction number, also falls within distinct ranges for each infection type. Every type is optimal for certain contact structures over a range of contact rates. Sexually
transmitted infections and childhood diseases are identified as exemplar types for low and high contact rates, respectively. This work generates a plausible mechanistic hypothesis for the observed tradeoff between pathogen transmissibility and antigenic diversity, and shows how different classes of pathogens arise evolutionarily as fitness optima for different contact network structures and host contact rates. | Infectious diseases vary widely in how they affect those who get infected and how they are transmitted. As an example, the duration of a single infection can range from days to years, while transmission can occur via the respiratory route, water or sexual contact. Measles and HIV are contrasting examples — both are caused by RNA viruses, but one is a genetically diverse, lethal sexually transmitted infection (STI) while the other is a relatively mild respiratory childhood disease with low antigenic diversity. We investigate why the most transmissible respiratory diseases such as measles and rubella are antigenically static, meaning immunity is lifelong, while other diseases — such as influenza, or the sexually transmitted diseases — seem to trade transmissibility for the ability to generate multiple diverse strains so as to evade host immunity. We use mathematical models of disease progression and evolution within the infected host, coupled with models of transmission between hosts, to explore how transmission modes, host contact rates and network structure determine antigenic diversity, infectiousness and duration of infection. In doing so, we classify infections into three types — measles-like (high transmissibility, but antigenically static), flu-like (lower transmissibility, but more antigenically diverse), and STI-like (very antigenically diverse, long-lived infection, but low overall transmissibility).
| computational biology/evolutionary modeling, public health and epidemiology/infectious diseases, infectious diseases, public health and epidemiology/epidemiology | null |
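The soft-threshold infectiousness mapping and the cluster-corrected reproduction number discussed in the record above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual equations: the Hill form of the load-to-infectiousness map, the finite-pool saturation formula, and all parameter values are assumptions made for demonstration.

```python
import numpy as np

def infectiousness(load, v50, hill=2.0):
    """Soft-threshold (Hill-type) map from within-host pathogen load to
    per-contact transmission probability; v50 is the load at which
    infectiousness reaches half its maximum (a stand-in for the 'wasted
    pathogen' threshold discussed in the text)."""
    load = np.asarray(load, dtype=float)
    return load**hill / (v50**hill + load**hill)

def reproduction_number(contact_rate, loads, dt, v50,
                        neighborhood, cliquishness):
    """Crude network-corrected R: expected secondary infections summed
    over the load trajectory, then discounted for local susceptible
    depletion in a clustered neighborhood (illustrative saturation form)."""
    # expected secondary infections ignoring network structure
    r_raw = contact_rate * dt * infectiousness(loads, v50).sum()
    # clustering shrinks the effective pool of distinct susceptibles
    pool = neighborhood * (1.0 - cliquishness)
    return pool * (1.0 - np.exp(-r_raw / pool))

# toy load trajectory: rapid rise to a peak, then clearance
t = np.arange(0.0, 20.0, 0.1)                   # days
loads = 1e6 * np.exp(-((t - 5.0) / 2.0) ** 2)   # arbitrary units
R = reproduction_number(contact_rate=0.5, loads=loads, dt=0.1,
                        v50=1e5, neighborhood=20, cliquishness=0.2)
```

The saturating form `pool * (1 - exp(-r_raw / pool))` is one standard way to bound secondary infections by the size of the local susceptible pool; the paper's derivation of its network-specific reproduction number may differ.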
1,566 | journal.pcbi.1003651 | 2,014 | The Effects of Theta Precession on Spatial Learning and Simplicial Complex Dynamics in a Topological Model of the Hippocampal Spatial Map | Considerable effort has been devoted over the years to understanding how the hippocampus is able to form an internal representation of the environment that enables an animal to efficiently navigate and remember the space 1. This internal map is made possible, in part, by the activity of pyramidal neurons in the hippocampus known as place cells 2, 3. As an animal explores a given environment, different place cells will fire in different, discrete regions of the space that are then referred to as that cell's “place field” 2, 3. Despite decades of research, however, the features of the environment that are encoded, the identity of the downstream neurons that decode the information, and how the spiking activity of hundreds of cells is actually used to form the map all remain unclear. We recently developed a computational model for spatial learning, focusing on what information is available to the still-unidentified downstream neurons 4. We reasoned that the information they decode must be encapsulated in the temporal patterns of the place cell spike trains, specifically place cell co-firing 4, 5. Because place cell co-firing implies that the respective place fields overlap, the resulting map should derive from a sequence of overlaps between parts of the environment. The information encoded by the hippocampus would therefore emphasize connectivity between places in the environment, which is a topological rather than a geometric quality of space 4. One advantage of this line of reasoning is that a topological problem should be amenable to topological analysis, so we developed our model using conceptual tools from the field of algebraic topology and, in particular, persistent homology theory 6, 7. We simulated a rat exploring several topologically distinct
environments and found that the information encoded by place cell co-firing can, in fact, reproduce the topological features of a given spatial environment. We also found that, in order to form an accurate spatial map within a biologically reasonable length of time, our simplified model hippocampus had to function within a certain range of values that turned out to closely parallel those obtained from actual experiments with healthy rodents. We called this sweet spot for spatial learning the learning region, L 4. As long as the values of the three parameters (firing rates, place field sizes, and number of active neurons) remain within the learning region, spatial learning is reliable and reproducible. Beyond the perimeter of L, however, spatial learning fails. Several features of this model are intuitively appealing. First, the size and shape of L vary with the difficulty of the task: the greater the complexity of the space to be learned, the narrower the range of values that can sustain learning and thus the more compact the learning region. Second, there is a certain tolerance for variation among the three parameters within L: if one parameter begins to fall outside the sweet spot, spatial learning can still occur if there is sufficient compensation in the other two parameters. Our model suggests that certain diseases (e.g., Alzheimer's) or environmental toxins (e.g., ethanol, cannabinoids) disrupt spatial learning over time by gradually shifting mean neuronal function (place cell firing, neuronal number, or place field size) beyond the perimeter of the learning region. This notion receives support from studies of mouse models that show a correlation between impairment in spatial cognition and larger, more diffuse place fields, lower place cell firing rates, and smaller numbers of active cells 8, 9. All this corresponds well with our subjective experiences of learning: the complexity of the task influences learning time; when the task is difficult we can feel we are at or just beyond the limits of our capacity; disease or intoxication can reveal limits in our spatial cognition that would normally be compensated for. In this paper we focus on analyzing the structure of the learning region itself. We begin by making the computational model more physiologically accurate. There is a θ (theta) component of subcortical LFP oscillations that occurs in the frequency range of 6–12 Hz and regulates spiking activity 10. The timing of place cell spiking in the hippocampus is coupled with the phase of θ-oscillations so that, as a rat progresses through a particular place field, the corresponding place cell discharges at a progressively earlier phase of each new θ-cycle 11. This phenomenon, called theta phase precession, reproduces short sub-sequences of an animal's current trajectory during each θ-cycle 11. This has been construed to suggest that θ-phase precession helps the hippocampus remember the temporal sequence of the rat's positions in a space (i.e.
, its trajectory) 12, 13, thereby enhancing spatial learning and memory. If this is the case, θ-phase precession should enhance learning in our computational model. Indeed, we find that it significantly improves and stabilizes spatial learning. We also find that different temporal windows used to define co-firing exert a pronounced influence on learning time, and the most efficacious window widths correspond with experimental predictions. Finally, we analyze simplicial complex formation within the learning region, examining both the structure of the complexes and the dynamics of loop formation, and find an explanation for the poor efficiency of ensembles at the boundary of the learning region compared to peak-performing ensembles. We will first briefly describe the fundamental concepts on which our model is based (this section is an abbreviated version of the approach described in 4). Central to this work is the concept of a nerve simplicial complex, in which a space X is covered by a number of smaller, discrete regions 14. If two regions overlap, the corresponding vertices, vi and vj, are considered connected by a 1D link vij (Figure 1). If three regions overlap, then vij, vjk, and vki support a 2D triangular facet or simplex σijk, and so on as the number of overlaps and links increases. The structure of the simplicial complex approximates the structure of the environment: the complex N(X) obtained from a sufficiently dense cover of the space X will reproduce the correct topological indices of X (see 4 for details). For our model we developed a temporal analogue of the simplicial complex, i.e., a simplicial complex that builds over time: when the animal is first introduced to the environment, there will be only a few data points from place cell firing, but as the animal explores the space the place cell firing data accumulate. (Rodent experiments indicate that place fields take about four minutes to develop 15.) As the animal explores its environment and more place cells fire (and co-fire), the simplicial complex T grows with time T (T = T(T)). Eventually, after a certain minimal time Tmin, the space's topological characteristics will stabilize and produce the correct topological indices, at which point the topological information is complete. Tmin is thus the minimal learning time, the time at which a topologically correct map is first formed. The correct topological indices are indicated by Betti numbers, which in turn are manifested in persistent cycles (see 4, 7, 16). As the rat begins to explore an environment, the simplicial complex T(T) will consist mostly of 0-cycles that correspond to small groups of cofiring cells that mark contractible spatial domains. As the rat continues to explore the environment, the co-firing cells will produce links between the vertices of T(T), and higher-dimensional cycles will appear. As T increases, most cycles in each dimension will disappear as so much “topological noise,” leaving only a few persisting cycles that express stable topological information (Figure 1C). The persistent homology method 6 (see 4, Methods) enables us to distinguish between cycles that persist across time (reflecting real topological characteristics) and transient cycles produced by the rat's behavior (e.g.
, circling in a particular spot during one trial or simply not venturing into one part of the space during early explorations). The pattern of cycles is referred to as a barcode 16 that can be easily read to give topological information about a given environment (Figure 1C) 6, 7. If theta precession serves to enhance learning, as has been predicted 17–19, then it should enhance spatial learning in our model. This could occur by any of several means. First, theta precession might enlarge the number of ensembles capable of the task by expanding the scope of the parameters (including firing rates or place field sizes normally out of the bounds of L). Second, it might make the ensembles that are in L converge on the correct topological information more rapidly. Third, it might make the same ensembles perform more reliably (e.g., succeeding in map formation a greater percentage of times in our simulations). To test the effect of theta precession in our model, we compared the rates of map formation with and without θ-precession. We tested 1710 different place cell ensembles by independently varying the number of place cells (N; 19 independent values, from 50 to 500), the ensemble mean firing rate (f; 10 independent values, from 4 to 40 Hz), and the ensemble mean place field size (s; 9 independent values, from 5 to 30 cm) (Methods; see 4 and Methods therein for further details). For statistical analysis, we simulated each map 10 times so that we could compute the mean learning time and its relative variability, ξ = ΔTmin/Tmin, for each set of (s, f, N) values. In the following we will suppress the bar in the notation for the mean f, s, N, and Tmin. Figure 2 shows the results of these simulations in a 1×1 m space with one hole. (The size of the environment in this study is smaller than the ones used in 4, for two reasons: to avoid the potential problem of place cells with more than one field, and to reduce computational cost; see Methods.) The learning region L is small and sparse in the θ-off case, but notably larger and denser in the θ-on case (Figure 2A). Values that would be just beyond the learning region — N that may be too small, or place fields that are too large or too small, or firing rates too high or too low 4 — thus become functional with the addition of θ-precession. Two criteria reveal the quality of the map-forming ensembles: speed and consistency in converging toward the correct topological signature. The fastest map formation times (under 4 minutes) are represented by blue dots; as the color shifts toward red, map formation times become longer and the error rate (failure to converge) increases. The size of the dot represents the success rate: small dots represent ensembles that only occasionally converge on the correct information, large dots represent ensembles that converge most or all of the time. θ-precession increases the probability of convergence across all ensembles that can form accurate maps at all (Supplemental Figure S1). Since we were interested in understanding the dynamics of efficient learning, however, we created a more stringent definition of the learning region to focus on the core of L where map formation is most rapid and reliable, as well as to make the results more legible (L can be quite dense, as in Figure 2A and Supplemental Figure S1); if θ-precession truly enhances learning, its effect should be apparent even in the most successful ensembles, and indeed this was the case. The point clouds in Figure 2B depict those ensembles that formed maps with a convergence rate of ρ ≥ 0.7 (i.e., those that produced correct topological information at least 70% of the time) and simultaneously had low relative variability of the Tmin values, ξ < 0.
3. Even within this more efficient core of L, the effect of θ-precession was pronounced. The histograms of the computed mean learning times are closely fit by the Generalized Extreme Value (GEV) probability distribution (Figure 2B). The distributions show that θ-precession reduced the mean learning times Tmin: the mode of all the θ-on GEV distributions decreased by ∼50% compared with the θ-off case for the learning region as a whole (Figure 2C) and by ∼15% for the efficient ensembles at the core of L (Figure 2D). Moreover, the effects of adding θ-precession — reducing map formation time and decreasing the relative variability of the Tmin values — were manifested in all maps, not just those with high (ρ ≥ 0.7) convergence rates (Figure 2C, D). The histograms for all maps (all ρ-values) fit by the GEV distribution reveal that the typical variability (the mode of the distributions) in the θ-on cases is about half the size of the θ-off case (Figure 2E). In our model, therefore, θ-precession strongly enhances spatial learning. Since we do not know what features of θ-oscillations might be important 20, we studied four different θ-oscillations, two simulated and two derived from electrophysiological experiments in wild-type rodents. Specifically, we modeled the effect of theta precession on the topological map by coupling the place cells' Poisson firing rates, λc, with the phase of the following four θ-oscillations: (1) θ1 – a single 8 Hz sinusoidal wave; (2) θ4 – a combination of four sinusoids; (3) θM – a subcortical EEG signal recorded in a wild-type mouse; and (4) θR – a subcortical EEG signal recorded in a rat (Supplemental Figure S2; see Methods). The last three signals were filtered in the θ-domain of frequencies (6–12 Hz). The distributions of the learning times, the histograms of the mean learning times, and the histograms of the relative variability, ξ, for all four theta cases are shown in Supplemental Figures S3 and S4. To compare the θ-off and θ-on cases, we performed two-sample Kolmogorov-Smirnov (KS) tests for all pairwise combinations of the studied sample sets 21. This produced a 5×5 matrix of p-values, pij, where i, j = 0 (no theta), θ1, θ4, θM, and θR. Black squares signify a statistically significant difference between cases i and j (p < 0.05); gray squares signify no statistically significant difference. The statistical difference diagrams for the sets of Tmin values (Supplemental Figure S3) and for the learning time variability (Supplemental Figure S4) indicate that the distributions of learning times in the various θ-on cases were very similar, but the difference between all of these and the θ-off case was statistically significant. So far we have described the outcome of place cell ensemble activity in terms of the time at which the correct number of loops in the simplicial complex T emerges. But the learning process can also be described by how spurious loops are handled in the system. These loops are a fair representation of the subjective experience of learning. It takes time to build a framework into which new information can be properly slotted: until that framework is in place — whether it's a grasp of the layout of a neighborhood or the basic principles of a new field of study — we have incomplete hunches and many incorrect notions before experience (more learning) fills in our understanding. Translating this into topological terms, as the knowledge gaps close, the spurious loops contract. We therefore wanted to study the effects of theta precession on the dynamics of loop formation. Does a “smarter” ensemble form more spurious loops or fewer? Does it resolve those loops more quickly? We concentrated on the 1D cycles, which represent path connectivity within the simplicial complex, because they are more numerous and thus produce more robust statistics than the 0D cycles.
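The pairwise KS comparison described above can be sketched as follows. This is a minimal illustration using simulated learning-time samples (not the paper's data); the case labels and gamma-distributed toy values are assumptions, while the test itself is SciPy's standard two-sample KS test.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_pvalue_matrix(samples):
    """Pairwise two-sample KS tests; samples maps a case label
    (e.g. 'no-theta', 'theta1', ...) to an array of Tmin values.
    Returns the label order and the symmetric matrix of p-values."""
    labels = list(samples)
    n = len(labels)
    p = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            p[i, j] = p[j, i] = ks_2samp(samples[labels[i]],
                                         samples[labels[j]]).pvalue
    return labels, p

# toy data: theta-off learning times drawn slower than the theta-on cases
rng = np.random.default_rng(0)
cases = {"no-theta": rng.gamma(8.0, 2.0, 200)}
for name in ("theta1", "theta4", "thetaM", "thetaR"):
    cases[name] = rng.gamma(8.0, 1.0, 200)

labels, p = ks_pvalue_matrix(cases)
significant = p < 0.05   # analogue of the black squares in Figs. S3-S4
```

With the toy parameters chosen here, the no-theta samples differ sharply from every theta-on case, so the first row of `significant` (off-diagonal) is True, mirroring the black-square pattern described in the text.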
Figure 3 shows that θ-precession shortened the duration of the spurious loops. The KS test reveals a statistically significant difference between the lifetimes of spurious loops in the θ-off case and those in all the θ-on cases (Figure 3B). To simplify the presentation of the results produced by the statistically similar θ-on cases, we combined the data on spurious loop duration from all four θ-driven maps into a single histogram. It is interesting to note that the probability distributions for loop dynamics are typically better fit by the gamma distribution (Figure 3B–D). In the θ-driven maps, a typical spurious loop persisted for 50% less time than it would without θ-precession (Figure 3B). It is worth noting that the spurious loops persisted longer at the lower boundary of the learning region, where the mean firing rates and place field sizes are smallest. This makes sense, insofar as whatever incorrect information appears will take longer to be corrected. Statistical analysis of the largest number of loops observed at any given point over the course of the map formation period also differentiated θ-driven from θ-off maps. Curiously, θ-driven cases tended to produce a significantly higher mean number of spurious loops than the θ-off case (Figure 3C), but with a lower peak number of loops (Figure 3D). This implies that θ-precession enhances the speed of spatial learning overall at the price of creating more (transient) errors; lots of spurious loops are formed early on, but they disappear faster. The KS test shows that the distributions of the mean number of loops in all θ-on cases differ from one another; only the maps driven by the two simulated θ-signals gave statistically similar results. In our model, spatial learning can be quantified by the time required for the emergence of correct topological information, but it can also be quantified by studying the simplicial complex itself. We noted earlier that the structure of the simplicial complex approximates the structure of the environment. Similarly, it is possible to conceive of a simplex as a mathematical analogue of a cell assembly (a group of at least two cells that repeatedly co-fire and form a synapse onto a readout neuron), and to view the simplicial complex as analogous to the realm of possible connections within the hippocampus. We were therefore curious: since it is in the interest of neural function to be efficient, how many cell assemblies (simplices) does it take to encode a given amount of information? We would predict that the fewer the connections, the better, for the sake of efficiency. One of the major characteristics of a simplicial complex T is the number of n-dimensional simplices it contains, traditionally denoted fn. The list of all fn values, (f1, f2, …, fn), is referred to as the f-vector 22. Since the D-dimensional simplices in T correspond to (D+1)-ary connections, the number of which depends on the number of vertices, N, we considered the fn values normalized by the corresponding binomial coefficients, which count the possible simplices connecting vertices in the complex T. We can consider η an index of the connectivity of the simplicial complex. Since we model 2D spatial navigation, we analyzed the connections between two and three vertices, i.e., the 1D and 2D simplices, of T (the number of 0D simplices normalized by the number of vertices in T is η0 ≡ 1). Figure 4 shows the distribution of the normalized number of simplices at the time the correct signature is achieved (η1 and η2, for 1D and 2D, respectively). As expected, the number of simplices was smaller at the lower boundary of the learning region L (the base of the point cloud) and increased towards the top of L, where place fields are larger and the firing rates are higher, each of which would produce more place cell co-firing events. Remarkably, the number of simplices depended primarily on the mean place field size and on the mean firing rate of the ensemble, and not on the number of cells within the ensemble. This suggests a certain universality in the behavior of place cell ensembles that is independent of their population size. In the ensembles with smaller place fields and lower firing rates, about 1.5% of place cell pairs and 1.7% of the triplets were connected, and this was enough to encode the correct topological information, whereas in the ensembles with low spatial selectivity and higher firing rates, 25% of pairs and 8% of triplets were connected. These ensembles, in which the place fields and spike trains will by definition have a lot of overlap, are forced to form many more 1D and 2D simplices in order to encode the same amount of information and are thus less efficient (Figure 4, third column). According to our model, such ensembles and the hippocampal networks whose activity they represent are inefficient on two counts. First, these larger, more complex temporal simplicial complexes (analogous to a larger number of coactive cell groups) will take longer to form correct topological information, if they can manage it at all. Second, a larger number of coactive place cells would hamper the training of downstream readout neurons, thereby impeding reliable encoding of spatial information.
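The normalization described above can be sketched in a few lines. This is an illustrative reconstruction in which the connectivity index is taken as ηn = fn / C(N, n+1), i.e., the count of n-dimensional simplices divided by the number of possible (n+1)-vertex connections; the toy complex is invented for demonstration, and the paper's exact normalization may differ in detail.

```python
from math import comb

def f_vector(simplices, max_dim=2):
    """Count simplices by dimension; a d-simplex is a frozenset of d+1 vertices."""
    f = [0] * (max_dim + 1)
    for s in simplices:
        d = len(s) - 1
        if d <= max_dim:
            f[d] += 1
    return f

def connectivity_indices(simplices, n_vertices, max_dim=2):
    """eta_n = f_n / C(N, n+1): fraction of possible n-simplices present."""
    f = f_vector(simplices, max_dim)
    return [f[d] / comb(n_vertices, d + 1) for d in range(max_dim + 1)]

# toy complex on 4 vertices: all three edges of {0,1,2} plus one filled triangle
simplices = [frozenset({0}), frozenset({1}), frozenset({2}), frozenset({3}),
             frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2}),
             frozenset({0, 1, 2})]
eta = connectivity_indices(simplices, n_vertices=4)
# eta[0] = 4/4 = 1.0, eta[1] = 3/6 = 0.5, eta[2] = 1/4 = 0.25
```

Here η1 = 0.5 would correspond to 50% of cell pairs being connected, directly comparable to the pair/triplet percentages quoted in the text.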
This is consistent with studies showing that the number of cells participating in a particular task decreases until it reaches an optimal number of cells that fire at a slightly higher rate than their no-longer-participating neighbors 23. Our model depends on patterns of place cell co-firing, but we had not previously explored what the optimal temporal window for defining co-activity might be. Experimental work supports the widely held assumption that the temporal unit for defining coactivity ranges between tens 24 and hundreds of milliseconds 25–27. Our model, however, enables us to approach the question of the optimal width of the coactivity window theoretically. Clearly, if the time window w is too small, then the spike trains produced by the presynaptic place cells will often “miss” one another, and the map will either require a long time to emerge or may not be established at all. One would thus expect large values of Tmin(w) for small w. On the other hand, if w is too large, it will allow cells whose place fields are actually distant from one another to be grouped together, yielding incorrect topological information. Theta rhythm itself will have a tendency to group sequential spike trains together, but clearly there must be limits to this, or else some place cells would be read downstream as co-firing when they actually are not. Therefore, there should exist an optimal value of w that reliably produces a finite, biologically relevant learning time Tmin at which the learning region L is robust and stable. We assume that the capability of a read-out neuron to detect place cell co-activity is specified by a single parameter, the width of the integration time window w, over which the co-appearance of the spike trains is detected. (We considered the possible effect of time bin position on co-activity, but found this did not affect the outcome; see Methods.
), We defined cell coactivity as follows: if presynaptic neurons c1 and c2 send a signal to a read-out neuron within a certain time window w , their activity will be interpreted as contemporaneous ., The width of this time window may be positive ( c2 becomes active w seconds after c1 ) or negative ( c2 becomes active while c1 is still firing ) ., We studied window widths w for which the place cell spike trains would eventually be able to produce the correct topological signature ( the Betti numbers , see Methods in 4 ) ., In order to describe the dependence of learning times on the window width , Tmin ( w ) , we scanned an array of 24 values of w ( ranging from 0 . 1 to 5 θ-periods ) for each combination of the parameters ( mean s , f , and N ) and noted the width of the value wo , at which the map began to produce the correct topological signature ., We call this initial correct window width the “opening” value ., A typical result is provided by an ensemble with f = 28 Hz , s = 23 cm and N = 350 , in which an accurate topological map emerges at a fairly small window width , wo∼25 msec ( Figure 5 ) ., The distribution of the opening window widths shows that wo may exceed 1 .
5 θ-periods ( ∼25 msec ) , which matches the slow γ-period 24 , 28 ( Supplemental Figure S5 ) ., Since at this stage γ-oscillations have not been explicitly built into the model , this correspondence is coincidental , if suggestive ., As expected , the values of learning times at wo were rather large: Tmin ( wo ) ∼20 minutes in the θ-off case and Tmin ( wo ) ∼30 minutes in the θ-on case , and in some cases exceeding one hour ( mostly for the ensembles with low firing rates ) ., For small window widths , the value of the learning time Tmin ( w ) was very sensitive to variations in w ( Supplemental Figure S6 ) ., As w increased , however , the learning time reached a plateau around some larger value ws ., This implies that in order to produce stable values for Tmin that are biologically plausible , the values of the window widths should start around ws ., The distribution of the ws values demonstrates that in the θ-off case the stabilization is typically achieved at approximately one θ-period , and in the θ-on case at about ∼1 . 2–1 . 5 θ-cycles ( Figure 6 ) , which justifies our choice of a two θ-period window width for the computations and corresponds well with the predicted limit of 150 msec for θ-cycle cofiring in sequence coding 29 ., Further increasing the integration time window w did not significantly alter the learning time Tmin in L; instead , the rate of map convergence decreased until the maps completely fail to encode the correct topological information at w ≳ 4 . 5 θ-periods ., From the perspective of our current model , the range of optimal window widths w is between 20–25 msec and 0 .
5 secs ., Finally , we sought to uncover a relationship between learning time and window width ., Our analysis suggests that Tmin is inversely proportional to a power of the window width ( Figure 5B , Supplemental Figure S7 ) ., Numerous experiments have demonstrated that θ precession is important for spatial learning ., θ-power increases with memory load during both spatial and non-spatial tasks in humans 30 , 31 and in rodents 32 , 33; spatial deficits correlate with a decrease in the power of theta oscillations in Alzheimers disease 34 and in epilepsy 35 , 36 ., If θ-signal is blocked by lesioning the medial septum ( which does not affect hippocampal place cell representations ) , it severely impairs memory 37 and the acquisition of new spatial information 38 ., Recent experiments demonstrate more directly that destroying θ precession by administering cannabinoids to rats correlates with behavioral and spatial learning deficits 17 , 39 ., But at what level , and through what mechanisms , does θ precession exert its influence ?, The effect of θ-precession on the structure of the spike trains is rather complex 40 ., On the one hand , it groups cell spikes closer together in time and enforces specific sequences of cell firing , which is typically interpreted as increasing the temporal coherence of place cell activity 41–43 ., One might predict that grouping spikes together would ( somehow ) speed up learning ., On the other hand , θ-precession imposes extra conditions on the spike timing that depend on θ-phase and on the rats location with respect to the center of the place field through which it is presently moving ., Since every neuron precesses independently , one could just as well predict that θ modulation would either restrict or enlarge the pool of coactivity events , which in turn would slow down learning at the level of the downstream networks , and that the beneficial effect of the θ rhythm is a higher-order phenomenon that occurs elsewhere in the brain 
., Our results suggest that θ precession may not just correlate with , but actually be a mechanism for , enhancing spatial learning and memory ., The interplay of θ precession and window width , especially the extremely long learning times at the opening window width wo , is particularly illuminating here ., As noted , theta precession acts at both the ensemble and the individual neuron level: it groups spikes together , but each neuron precesses independently ., When the time window is sufficiently wide , the coactivity events are reliably captured , the first effect dominates , and the main outcome of theta precession is to supply grouped spikes to downstream neurons ., For very small time windows , however , the system struggles to capture events as coactive , and the extra condition imposed by phase precession acts as an impediment: detected coactivities are rarer , and learning slows down ., Put more simply , imagine in Suppl ., Figure S8 that the window is only one spike wide: in a train of 10 spikes that overlaps by one spike with another train , it will take 10 windows before the overlap is detected ., It is noteworthy that the presence of theta precession was clearly more important than the details of the oscillation ., Although theta precession enhanced learning in our simulations , learning times were relatively insensitive to the details of the theta precession chosen ., One might expect differences in spike train structure induced by the four different θ-signals studied to alter the dynamics of the persistent loops and thus learning efficiency ., Our results show , however , that differences that would matter at the level of individual cells are averaged out at the level of a large ensemble of cells ., Here again the model shows its particular strength: it allows us to correlate parameters of activity at the level of individual neurons with the outcome at the level of an ensemble of hundreds of cells , providing a framework for understanding how 
micro-level changes play out at the behavioral level ., Interestingly , we also saw a difference between the micro and macro levels when we considered whether the placement of a temporal window affected what would be considered co-activity ( see Methods and Supplementary Figure S8 ) ., In theory it should , but the effect at the macro level washes out and we found that only the temporal width of the window matters for learning time ., Beyond validating the model as a reliable way to study physiological aspects of spatial learning , we have gone further in this work to analyze the simplicial complex itself as a way of describing learning ., As a rat starts to explore an environment , some cells begin to form place fields ., Then , the co-firing of two or more place cells will define the respective places as connected in space and temporal experience and will create corresponding simplices in the simplicial complex T . With time , these simplices form a chain corresponding to the animal’s route through the space ., If the environment is bounded , the rat will discover new connections between the same places ( arriving to the same location via different routes ) ., As a result , the chains of connected simplices grow together to form loops ., Existing loops become thicker and may eventually “close up” and disappear , yielding surfaces ., The appearance of such surfaces is significant: the closing up of a D-dimensional surface corresponds to the contraction , or disappearance , of one of its boundaries , which itself is a ( D-1 ) -dimensional loop ., Eventually , the structure of the simplicial complex saturates such that no new simplices ( connections between places ) are produced and no more loops contract because all that could close have already closed ., At this point , the saturated simplicial complex T encodes not only the possible locations of the rat , but also connections between the locations , along with the information about how these | Introduction, Results,
Discussion, Materials and Methods | Learning arises through the activity of large ensembles of cells , yet most of the data neuroscientists accumulate is at the level of individual neurons; we need models that can bridge this gap ., We have taken spatial learning as our starting point , computationally modeling the activity of place cells using methods derived from algebraic topology , especially persistent homology ., We previously showed that ensembles of hundreds of place cells could accurately encode topological information about different environments ( “learn” the space ) within certain values of place cell firing rate , place field size , and cell population; we called this parameter space the learning region ., Here we advance the model both technically and conceptually ., To make the model more physiological , we explored the effects of theta precession on spatial learning in our virtual ensembles ., Theta precession , which is believed to influence learning and memory , did in fact enhance learning in our model , increasing both speed and the size of the learning region ., Interestingly , theta precession also increased the number of spurious loops during simplicial complex formation ., We next explored how downstream readout neurons might define co-firing by grouping together cells within different windows of time and thereby capturing different degrees of temporal overlap between spike trains ., Our models optimum coactivity window correlates well with experimental data , ranging from ∼150–200 msec ., We further studied the relationship between learning time , window width , and theta precession ., Our results validate our topological model for spatial learning and open new avenues for connecting data at the level of individual neurons to behavioral outcomes at the neuronal ensemble level ., Finally , we analyzed the dynamics of simplicial complex formation and loop transience to propose that the simplicial complex provides a useful working description 
of the spatial learning process . | One of the challenges in contemporary neuroscience is that we have few ways to connect data about the features of individual neurons with effects ( such as learning ) that emerge only at the scale of large cell ensembles ., We are tackling this problem using spatial learning as a starting point ., In previous work we created a computational model of spatial learning using concepts from the field of algebraic topology , proposing that the hippocampal map encodes topological features of an environment ( connectivity ) rather than precise metrics ( distances and angles between locations ) —more akin to a subway map than a street map ., Our model simulates the activity of place cells as a rat navigates the experimental space so that we can estimate the effect produced by specific electrophysiological components —cell firing rate , population size , etc ., —on the net outcome ., In this work , we show that θ phase precession significantly enhanced spatial learning , and that the way downstream neurons group cells together into coactivity windows exerts interesting effects on learning time ., These findings strongly support the notion that theta phase precession enhances spatial learning ., Finally , we propose that ideas from topological theory provide a conceptually elegant description of the actual learning process . | computational neuroscience, biology and life sciences, computational biology, neuroscience, learning and memory | null |
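The coactivity rule described in this row — two presynaptic place cells counted as coactive by a read-out neuron when their spike times fall within an integration window of width w — can be sketched as follows ., This is an illustrative Python sketch , not code from the paper; the function name , spike-time lists , and window value are our own:

```python
def coactivity_events(spikes_c1, spikes_c2, w):
    """Count spike pairs from two cells whose times differ by at most w.

    spikes_c1, spikes_c2: sorted lists of spike times (seconds).
    w: integration window width in seconds (the paper's optimal range
       starts around 20-25 msec, i.e. roughly one theta-period).
    """
    events = 0
    j = 0
    for t in spikes_c1:
        # skip c2 spikes that fall before the window around t
        while j < len(spikes_c2) and spikes_c2[j] < t - w:
            j += 1
        # count c2 spikes inside [t - w, t + w]
        k = j
        while k < len(spikes_c2) and spikes_c2[k] <= t + w:
            events += 1
            k += 1
    return events

# toy spike trains with two near-coincidences at ~25 msec resolution
c1 = [0.10, 0.50, 0.90]
c2 = [0.12, 0.70, 0.91]
print(coactivity_events(c1, c2, 0.025))  # -> 2
```

Widening the window detects more events ( with w = 0 . 25 sec the same toy trains yield 4 coincidences ) , mirroring the trade-off described in this row: too narrow a window misses genuine overlaps and inflates learning time , while too wide a window groups cells with distant place fields and corrupts the topological signature .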
1,440 | journal.pcbi.1005150 | 2,016 | How Do Efficient Coding Strategies Depend on Origins of Noise in Neural Circuits? | Our sensory systems encode information about the external environment and transmit this information to higher brain areas with remarkable fidelity , despite a number of sources of noise that corrupt the incoming signal ., Noise—variability in neural responses that masks the relevant signal—can arise from the external inputs to the nervous system ( e . g . , in stochastic arrival of photons at the retina , which follow Poisson statistics ) and from properties intrinsic to the nervous system , such as variability in channel gating , vesicle release , and neurotransmitter diffusion ( reviewed in 1 ) ., This noise places fundamental limits on the accuracy with which information can be encoded by a cell or population 2–5 ., An equally important consideration , however , is that noise dictates which processing strategies adopted by the nervous system will be most effective in transmitting signal relative to noise ., Efficient coding theory has been an important principle in the study of neuroscience for over half a century , and a number of studies have found that neural circuits can encode and transmit as much useful information as possible given physical and physiological constraints 6–13 ., Foundational work by Laughlin successfully predicted the function by which an interneuron in the blowfly eye transformed its inputs 7 ., This and other early work prompted a myriad of studies that considered how neurons could make the most efficient use of their output range in a variety of systems and stimulus conditions 14–19 ., Efficient coding theory has played an important role in how we interpret biological systems ., However , one cannot know how efficiently a neuron or population is encoding its inputs without understanding the sources of noise present in the system ., Several previous studies have recognized noise as an important factor in determining 
optimal computations 8 , 11 , 12 , 20 , 21 ., These and related studies of efficient coding often make strong assumptions about the location of noise in the system in question , and these assumptions are typically not based on direct measurements of the underlying noise sources ., For example , noise is often assumed to arise at the output stage and follow Poisson statistics ., Yet experimental evidence has shown that spike generation itself is near-deterministic , implying that most noise observed in a neuron’s responses is inherited from earlier processing stages 22–24 ., Indeed , several different sources of noise may contribute to response variability , and the relative contributions of these noise sources can change under different environmental and stimulus conditions 25–27 ., Importantly , the results of efficient coding analyses depend on the assumptions made about the locations of noise in the system in question , but there has been to date no systematic study of the implications that different noise sources have for efficient coding strategies ., In particular , identifying failures of efficient coding theory—i . e . 
, neural computations that do not optimally transform inputs—necessitates a broad understanding of how different sources of noise alter efficient coding predictions ., Here , we consider how the optimal encoding strategies of neurons depend on the location of noise in a neural circuit ., We focus on the coding strategies of single neurons or pairs of neurons in feedforward circuits as simple cases with physiologically relevant applications ., Indeed , early sensory systems often encode stimuli in a small number of parallel channels , including in vision 28–30 , audition 31 , chemosensation 32 , thermosensation 33 , and somatosensation 34 ., We build a model that incorporates several different sources of noise , relaxing many of the assumptions of previously studied models , including the shape of the function by which a neuron transforms its inputs to outputs ., We determine the varied , and often competing , effects that different noise sources have on efficient coding strategies and how these strategies depend on the location , magnitude , and correlations of noise across neurons ., Much of the efficient coding literature is impacted by these results ., For example , Laughlin’s predictions assume that downstream noise is identical for all responses; when this is not true , a different processing strategy will be optimal ., Other recent work , considering such questions as when it is advantageous to have diverse encoding properties in a population and when sparse firing is beneficial , bears reinterpretation in light of these results 21 , 35 ., Our work demonstrates that understanding the sources of noise in a neural circuit is critical to interpreting circuit function ., The model is schematized in Fig 1 , and is detailed below ., We constructed this model with retinal circuitry in mind , though the model could be reinterpreted to represent other primarily feedforward early sensory systems , or even small segments of cortical circuitry ., We begin with a simple 
feature of neural circuits that captures a ubiquitous encoding transformation: a nonlinear conversion of inputs to outputs ., Nonlinear processing arises from several biological processes , such as dendritic integration , vesicle release at the synapse , and spike generation 36 , 37 ., Such nonlinearities appear in most neural coding models ( such as the commonly used linear-nonlinear-Poisson ( LNP ) models or generalized linear models 38–40 ) ., Although there are likely several sites with some level of nonlinear processing in the retinal circuitry , there is a single dominant nonlinearity at most light levels which can be localized to the output synapse of the bipolar cells 41 ., Our goal is to determine the shape of the nonlinearity in this model that most faithfully encodes a distribution of inputs—i . e . , the optimal encoding strategy ., Indeed , in the retina , the shape of this nonlinearity has been shown to adapt under different stimulus conditions , suggesting that this adaptation might serve to improve encoding of visual stimuli as environmental conditions ( and hence noise ) change 18 , 42 ., The pathway receives an input signal or stimulus s , which is drawn from the standard normal distribution ., Generally , an individual value of s can represent any deviation from the mean stimulus value , and the full distribution of s represents the set of inputs that might be encountered over some time window in which the circuit is able to adapt ., In the context of the retinal circuitry , s can be understood as the contrast of a small region , or pixel , of the visual stimulus ., The contrast in this pixel might be positive or negative relative to the ambient illumination level ., The full distribution of s would then represent the distribution of contrasts encountered by this bipolar cell as the eye explores a particular scene ., ( We use Gaussian distributions here for simplicity in analytical computations , though similar results are obtained in simulations
with skewed stimulus distributions , similar to the distributions of pixel contrast of natural scenes 43 . ), We assume the distribution of s is fixed in time ., If properties of the signal distribution varied randomly in time ( for example , if the variance of possible signals the circuit receives fluctuates between integration times ) , over long times the circuit would see an effectively broader distribution due to this extra variability ., Conversely , if the particular visual scene being viewed or other environmental conditions change suddenly , the input distribution as a whole ( for example , the range of contrasts , corresponding to the width of the input distribution ) also changes suddenly ., Therefore we expect the shape of the optimal nonlinearity to adapt to this new set of signal and noise distributions ., We do not model the adaptation process itself; our results for the optimal nonlinearity correspond to the end result of the adaptation process in this interpretation ., We incorporate three independent sources of noise , located before , during , and after the nonlinear processing stage ( Fig 1A and 1B ) ., The input stimulus is first corrupted by upstream noise η ., This noise source represents various forms of sensory noise that corrupt signals entering the circuit ., This might include noise in the incoming stimulus itself or noise in photoreceptors ., The strength of this noise source is governed by its variance , σup2 ., The signal plus noise ( Fig 1B , purple ) is then passed through a nonlinearity f ( ⋅ ) , which sets the mean of a scaled Poisson process with a quantal size κ ., The magnitude of κ determines the contribution of this noise source , with large values of κ corresponding to high noise ., This noise source captures quantal variations in response , such as synaptic vesicle release , which can be a significant source of noise at the bipolar cell to ganglion cell synapse 26 ., Finally , the scaled Poisson response is corrupted by
downstream noise ζ ( with variance σdown2 ) to obtain the output response ( Fig 1B , green ) ., This source of noise captures any variability introduced after the nonlinearity , such as noise in a postsynaptic target ., In the retina , this downstream noise captures noise intrinsic to a retinal ganglion cell , and the final output of the model is the current recorded in a ganglion cell ., If the sources of upstream and downstream noise are independent ( e . g . , photoreceptor noise and retinal ganglion cell channel noise , respectively ) , then the two kinds of noise will be uncorrelated in a feedforward circuit like we model here ., Lateral input from other channels , which we do not consider , could potentially introduce dependence between upstream and downstream noise ., Feedback connections operating on timescales within a single-integration window could also potentially introduce correlations between additive upstream and downstream noises ., However , while such connections could be important in cortical circuits , they are not significant in the sensory circuits that inspired this model , so we assume independent upstream and downstream noise in this work ., For further biological interpretation of the model , see Discussion ., We begin by studying a model of a single pathway ., We then consider how two pathways operating in parallel ought to divide the stimulus space to most efficiently code inputs ., These models are constructed of two parallel pathways of the single pathway motif ( Fig 1C ) , with the addition that noise may be correlated across both pathways ., The study of two parallel channels is motivated by the fact that a particular area of visual space is typically encoded by paired ON and OFF channels with otherwise similar functional properties , but similar parallel processing occurs throughout early sensory systems and in some cortical areas 29 , 31 , 32 ., We will return to further discussion of parallel pathways in the second half of the
Results ., We begin with the case of a single pathway ., For simplicity , we start with cases in which one of the three noise sources dominates over the others ., Considering cases in which a single noise source dominates isolates the distinct effects of each noise source on the optimal nonlinearity ., We then show that these same effects govern how the three noise sources compete in setting the optimal nonlinearity when they are all of comparable magnitude ., Information in many sensory systems is encoded in parallel pathways ., In vision , for example , inputs are encoded by both ON cells and OFF cells ., In audition , an incoming stimulus is encoded in many parallel channels , each encoding a particular frequency band ., Allowing for multiple parallel channels raises fundamental questions about how these resources should be allocated: should multiple channels have the same or different response polarities ?, Should an input be encoded in multiple channels redundantly , or should different channels specialize in encoding a particular range of inputs ?, To understand these tradeoffs , we solved our model for the optimal nonlinearities for a pair of parallel pathways , the simplest case in which these questions can be investigated ., Indeed , in many cases , a small number of sensory neurons are responsible for carrying the relevant signal 46–49 ., Our circuit model for multiple pathways comprises parallel copies of the single pathway model ( Fig 1C ) , with the additional detail that both upstream and downstream noise may be correlated across pathways ., We show below that the sign and strength of these correlations can strongly affect optimal encoding strategies ., To focus on the effects of noise on optimal encoding strategies , we added complexity to the noise structure , while making significant simplifications in the stimulus structure ., In particular , we assume that both channels receive the same stimulus ., Correlated but non-identical stimuli in the two 
channels would likely affect optimal encoding strategies , but we did not explore this possibility and leave it as a direction for future inquiry ., We discuss the parallel pathway results in the following order: first , we discuss the possible pairs of nonlinearities , which are richer than the single-pathway case ., We then discuss the functional effects that each of the parameters , or in some cases combinations of parameters , has on the shapes of the nonlinearities , with a focus on which parameter regimes favor highly overlapping versus minimally overlapping encoding of inputs ( hereafter referred to as “overlapping” and “non-overlapping” ) ., Finally , we discuss factors that determine whether a circuit should encode inputs with channels of opposite polarity versus channels of the same polarity ., Noise in neural circuits arises from a variety of sources , both internal and external to the nervous system ( reviewed in 1 ) ., Noise is present in sensory inputs , such as fluctuations in photon arrival rate at the retina , which follow Poisson statistics , or variability in odorant molecule arrival at olfactory receptors due to random diffusion and the turbulent nature of odor plumes ., Noise also arises within the nervous system due to several biophysical processes , such as sensory transduction cascades , channel opening , synaptic vesicle release , and neurotransmitter diffusion ., Past work has focused on two complementary , but distinct aspects of neural coding:, 1 ) how noise limits coding fidelity , and, 2 ) how circuits should efficiently encode inputs in the presence of such noise ., Much of the work to date has focused on the first aspect , investigating how noise places fundamental limits on information transfer and coding fidelity for fixed neural coding strategies ( e . g . 
, tuning curves ) 2–5 ., Examples include studying how noise correlations lead to ambiguous separation of neural responses 2 and which correlation structures maximally inhibit coding performance 5 ., The second perspective dates back to the pioneering work of 6 and 7 ., These early works primarily considered how efficient codes are affected by constraints on neural responses , such as limited dynamic range ., Recent studies have built upon these foundational studies , investigating further questions such as how circuit architecture shapes optimal neural codes 20 , 21 , 35 , 56–58 ., However , this body of work has not systematically studied how efficient coding strategies depend on assumptions made about the nature of noise in a circuit ., Previous work has shown that the amount of noise in a circuit can qualitatively change optimal coding strategies 8 , 59 ., We also find that noise strength can be an important factor in determining efficient coding strategies ., A 5- to 10-fold decrease in the signal-to-noise ratio produces dramatic qualitative changes in the optimal nonlinearities ( Fig 2 ) , and those changes depend on noise location ., The SNR values used in our study correspond to a range of SNR values commonly observed in responses of neurons in early sensory systems 60 , 61 , suggesting that this result could be observed in biological circuits ., Our analysis goes beyond considerations of noise strength to reveal how efficient coding strategies change depending on where noise arises in a circuit , showing that different noise sources often having competing effects ., Other work in the context of decision making has similarly shown that the location of noise can impact the optimal architecture of a network , thus demonstrating that noise location in a circuit is important not only for signal transmission but also for computation 62 ., Knowledge of both noise strength and where noise arises is therefore crucial for determining whether a neural circuit is 
encoding efficiently or not ., Notably , even when the SNR of the circuit outputs is the same , the optimal nonlinearity can be very different depending on the location of the dominant noise source ., The locations of different noise sources have perhaps been most clearly elucidated in the retina ., Several studies have investigated noise within the photoreceptors , and in some cases have even implicated certain elements within the transduction cascade 61 , 63 , 64 ., Additional noise arises at the photoreceptor to bipolar cell synapse , where stochastic fluctuations in vesicle release obscure the signal 45 , 65–67 ., It has also been suggested that noise downstream of this synapse contributes a significant amount of the total noise observed in the ganglion cells , with some studies pointing to the bipolar cell to ganglion cell synapse specifically 26 , 67 ., Several pieces of evidence show that the relative contributions of different noise sources can change under different conditions as a circuit adapts ., For example , in starlight or similar conditions , external noise due to variability in photon arrival dominates noise in rod photoreceptors and the downstream retinal circuitry 61 , 68–70 ., As light levels increase , noise in the circuits reading out the photoreceptor signals—particularly at the synapse between cone bipolar cells and ganglion cells—can play a more prominent role 26 , 67 ., Moreover , even in cases where the magnitude of a given noise source remains unchanged , adaptation can engage different nonlinearities throughout the circuit , shifting the location of the dominant nonlinearity and thereby effectively changing the location of the noise sources relative to the circuit nonlinearity ., The fact that noise strength and nonlinearity location in neural circuits is subject to change under different conditions underscores the importance of understanding how these circuit features shape optimal encoding strategies ., In the retina , it has been 
observed that the nonlinearity at the cone bipolar to ganglion cell synapse can change dramatically depending on ambient illumination ., Under daylight viewing conditions , this synapse exhibits strong rectification ., Yet under dimmer viewing conditions , this synapse is nearly linear 42 ., The functional role of this change is unclear , though the fact that noise sources are known to change under different levels of illumination points to a possible answer ., If the dominant source of noise shifts from external sources to sources within downstream circuitry with increasing light level , as suggested by the evidence in 42 , our results indicate that the circuit indeed ought to operate more nonlinearly at higher light levels ., Furthermore , it is known that the strength of correlations not only varies between different types of retinal ganglion cells 71 , but these correlations may be stimulus dependent 72 , 73 ., Based on our results for paired nonlinearities , we predict that types of neurons that receive highly correlated input will have nonlinearities with small overlap , while cells that receive uncorrelated input will have highly overlapping nonlinearities ., Fully understanding this adaptation , and adaptations in other systems , will require further elucidation of the noise sources in the circuit ., Understanding how different aspects of circuit architecture shape efficient coding strategies has been a recent area of interest 20 , 21 , 35 , 56–58 ., However , a systematic study of the effects of noise was not the goal of these works , and so the properties of the noise in these studies has been limited , bound by specific assumptions on noise strength and location , and the allowed shapes of nonlinearities ., As a result , while there is some overlap in the conclusions of these studies , the differences in assumptions about the noise and nonlinearities also lead to some apparent disagreement ., Fortunately , we can investigate many similar questions within 
our model , and thereby complement the results of these previous studies and enrich our understanding of the role of circuit architecture and function ., We briefly discuss the connections that other published studies have to the work presented here , focusing on studies with questions that can be most directly investigated as special cases of our model ., Early work by Laughlin suggested a simple solution for how a single neuron can maximize the amount of information transmitted: a neuron should utilize all response levels with equal frequency , thereby maximizing the response entropy 7 ., Laughlin found that an interneuron in the compound eye of the blowfly transforms its inputs according to this principle ., More recent work investigated nonlinearities in salamander and macaque retinal ganglion cells , predicting that optimal nonlinearities should be steep with moderate thresholds 35 ., Experimental measurements of nonlinearities in ganglion cells were found to be near-optimal based on these predictions ., Although both of these studies ( along with many others ) predict that neurons are efficiently encoding their inputs , assumptions about noise are not well-constrained by experiment ., ( In one case , the model assumes very low noise of equal magnitude for all output levels , while in the other all noise is at the level of the nonlinearity output . 
), As our work shows , one can arrive at different—even opposite—conclusions depending on where noise is assumed to enter the circuit ., Without experimentally determining the sources of noise in each circuit , it is impossible to determine whether that circuit is performing optimally ., Going beyond single neurons or pathways , several recent studies have investigated the benefits of using multiple channels to encode stimuli and assigning different roles to each of those channels depending on circuit inputs ., For example , Gjorgjieva and colleagues investigated when it is beneficial to encode inputs with multiple neurons of the same polarity versus encoding inputs with neurons of different polarity 56 ., They conclude that ON-ON and ON-OFF circuits generally produce the same amount of mutual information , with ON-OFF circuits doing so more efficiently per spike ., Our results provide a broader context in which we can interpret their findings , showing that when additive downstream noise ( which was not included in their model ) is anti-correlated , encoding with same polarity neurons can become a more favorable solution ., Another recent study investigated under what conditions it is beneficial for multiple neurons of the same polarity to have the same threshold and when it is beneficial to split thresholds 21 ., In particular , 21 find that nonlinearities split when the strength of upstream noise is weak ., Our results are consistent with this finding and again broaden our understanding of why this splitting occurs: by incorporating correlations , we show that it is not simply the amount of noise that determines splitting , but the combination of noise strength and noise correlations ., This identifies additional possibilities for testing these efficient coding predictions , by looking not just for cells that receive noisy input with similar magnitudes , but by looking for types of cells that receive correlated versus uncorrelated input and determining the 
degree of overlap of their nonlinearities ., We find that even in relatively simple circuit models , assumptions about the location and strength of multiple noise sources in neural circuits strongly impact conclusions about optimal encoding ., In particular , different relative strengths of noise upstream , downstream , or associated with nonlinear processing of signals yield different optimal coding strategies , even if the overall signal-to-noise ratio is the same ., Furthermore , correlations between noise sources across multiple channels alter the degree to which optimal channels encode overlapping portions of the signal distribution , as well as the overall polarity of the channels ., On the other hand , different combinations of noise sources can also yield very similar nonlinearities ., Consequently , measurements of noise at various locations in neural circuits are necessary to verify or refute ideas about efficient coding and to more broadly understand the strategies by which neurons mitigate the effects of unwanted variability in neural computations ., Our model is schematized in Fig 1 ., Biophysical interpretation is discussed in detail in the Results and Discussion ., We model the input to the circuit as a signal or stimulus s that comes from a distribution of possible inputs within a short integration time window , and hence is a random variable in our model ., Before this input can be encoded by the circuit , it is corrupted by noise η , which we also take to be a random variable ., The circuit then encodes total signal s + η by nonlinearly transforming it , f ( s + η ) ., This transformed signal sets the mean of a variable circuit response ., That is , the circuit does not respond deterministically , but stochastically ., We do not take this stochastic response to be spiking , due to the fact that spike generation has been shown to be repeatable , attributing variability in spiking to other sources 22–24 ., Instead , inspired by quantal 
neurotransmitter release, which results in post-synaptic potentials of integer multiples of a fixed minimum size, we model the stochastic response as a scaled Poisson distribution: responses come in integer multiples of a minimum non-zero response size κ, with an overall mean response f(s + η), conditioned on the total input, s + η. This stochastic response is then corrupted by downstream noise ζ, which we also take to be a random variable. The total response r of a single-path circuit is thus

r = κm + ζ,     (2)

where m is a Poisson-distributed random variable with mean κ⁻¹f(s + η), such that the mean of κm is f(s + η). Our circuit model thus has three sources of intrinsic variability: the additive noise sources (η and ζ) and the stochastic scaled-Poisson response. We assume the statistics of the signal and noise are held fixed over a time window long enough that the circuit can adapt its nonlinearity to the full distribution of signal and noise. That is, in a small integration time window Δt, the channel receives a draw from the signal and noise distributions to produce a response. Thus, we model the signal s, the noises η and ζ, and the scaled Poisson responses as random variables rather than stochastic processes. In this work, we assume the distribution of possible inputs to be Gaussian with fixed variance σ_s²; without loss of generality we can take the mean to be zero (i.e.
, the signal represents variations relative to a mean background). We assume the upstream and downstream noise to be Gaussian with mean 0 and variances σ_up² and σ_down², respectively. The assumption of Gaussian distributions for the input and noise is not a restriction of the model, but a choice we make to simplify our analyses and because we expect physiologically relevant noise sources to share many of the properties of a Gaussian distribution. Even in cases where the input distribution is not Gaussian, pre-processing of inputs can remove heavy tails and lead to more Gaussian-like input distributions. It has been shown that stimulus filtering in the retina indeed has this effect 74. An additional scenario to consider is the possibility that the signal properties, such as the variances, could themselves be random. We might then wonder how this would impact the predicted nonlinearities. As a "trial" of our model is a single draw from the stimulus and noise distributions, there is no well-defined variance on a single trial. A changing variance on every trial would be equivalent to starting with a broader noise distribution of fixed variance. We can thus interpret the stimulus distribution we use in the study to be the effective distribution after trial-by-trial variations in variance have already been taken into account. The results for a signal of constant variance can thus be adapted, qualitatively, to the case of random trial-by-trial variance by increasing the stimulus variance in order to mimic the impact that trial-by-trial changes in variance have on the shape of the nonlinearity. In order to understand how noise properties and location impact efficient coding strategies, we seek the nonlinearity that best encodes the input distribution for a variety of noise conditions. We primarily consider the mean squared error (MSE) of a linear estimator of the stimulus, as outlined below, as our criterion of optimality. This is
not the only possible optimality criterion , so to check the effects that other criteria might have , we also consider maximizing the mutual information ( MI ) between stimulus and response ., MI provides a measure of coding fidelity that is free from assumptions about how information is read out from responses ., However , MI is difficult to evaluate analytically for all but the simplest models ., Indeed , for our model , deriving exact analytic equations for the optimal nonlinearities using MI is intractable ., We turn to simulations in this case ., We determine the nonlinearities obtained by minimizing the MSE using two complementary methods ., First , we take variational derivatives of the MSE with respect to the nonlinearities themselves to derive a set of exact equations for the optimal nonlinearities , free from any assumptions about their shape or functional form , as described below ., The only constraints we apply are that the nonlinearity must be non-negative and saturate at a value of 1 ., ( The choice of saturation level is arbitrary and does not affect the results . ), Applying such constraints are non-trivial—in most variational problems constraints enforce an equality , but in our method we are enforcing an inequality , discussed in the next section ., Using this analytic approach , we minimize the assumptions we make about the nonlinearities and obtain insights into the behavior of the model that are otherwise inaccessible ., Second , we parametrize the nonlinearities as sigmoidal or piecewise linear curves with two parameters that control the slope and offset ., We simulate the model , sweeping over the slope and offset parameters ( Fig 8A ) until we find the parameter set that minimizes the MSE of the linear readout ., This parametric approach makes strong assumptions about the form of the nonlinearity but also has distinct advantages ., Simulations allow us to test to what extent our conclusions about the shape ( i . e . 
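The parametric approach described here can be condensed into a short sketch: sweep the slope and offset of a sigmoidal nonlinearity and keep the pair that minimizes the MSE of the best linear readout. This is a simplified illustration, not the authors' simulation code: the noise magnitudes, grid ranges, and sample size are illustrative assumptions, and the scaled-Poisson response step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, size=5_000)        # Gaussian stimulus, sigma_s = 1
eta = rng.normal(0.0, 0.2, size=s.shape)    # upstream noise (illustrative)
zeta = rng.normal(0.0, 0.2, size=s.shape)   # downstream noise (illustrative)

def readout_mse(slope, offset):
    """MSE of the optimal linear readout for one sigmoidal nonlinearity."""
    f = 1.0 / (1.0 + np.exp(-slope * (s + eta - offset)))
    r = f + zeta                            # noisy response (Poisson step omitted)
    a, b = np.polyfit(r, s, 1)              # least-squares linear estimator
    return np.mean((a * r + b - s) ** 2)

# Sweep the two nonlinearity parameters, as in the parametric approach.
slopes = np.linspace(0.5, 8.0, 16)
offsets = np.linspace(-2.0, 2.0, 17)
errs = np.array([[readout_mse(sl, off) for off in offsets] for sl in slopes])
i, j = np.unravel_index(np.argmin(errs), errs.shape)
best_slope, best_offset = slopes[i], offsets[j]
```

Because the least-squares fit can always fall back on the constant predictor, every grid point's error is bounded by the stimulus variance; the grid minimum identifies the best (slope, offset) pair under these noise assumptions.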
, slope and offset ) of the optimal nonlinearity depend on its specific functional form ., For example , we find from our analytical calculations that optimal nonlinearities are roughly piecewise linear ( Fig 8C ) , but one might expect biophysical constraints to restrict neurons to having smooth nonlinearities ., For this reason , we also test sigmoidal shaped nonlinearities , a smooth approximation of the piecewise linear solutions that emerge from the nonparametric analytical approach , and use simulations to find the optimal parameters ., We find the results with sigmoidal nonlinearities qualitatively very similar to the analytical solution ( Fig 8C ) ., | Introduction, Results, Discussion, Methods | Neural circuits reliably encode and transmit signals despite the presence of noise at multiple stages of processing ., The efficient coding hypothesis , a guiding principle in computational neuroscience , suggests that a neuron or population of neurons allocates its limited range of responses as efficiently as possible to best encode inputs while mitigating the effects of noise ., Previous work on this question relies on specific assumptions about where noise enters a circuit , limiting the generality of the resulting conclusions ., Here we systematically investigate how noise introduced at different stages of neural processing impacts optimal coding strategies ., Using simulations and a flexible analytical approach , we show how these strategies depend on the strength of each noise source , revealing under what conditions the different noise sources have competing or complementary effects ., We draw two primary conclusions: ( 1 ) differences in encoding strategies between sensory systems—or even adaptational changes in encoding properties within a given system—may be produced by changes in the structure or location of neural noise , and ( 2 ) characterization of both circuit nonlinearities as well as noise are necessary to evaluate whether a circuit is performing 
efficiently . | For decades the efficient coding hypothesis has been a guiding principle in determining how neural systems can most efficiently represent their inputs ., However , conclusions about whether neural circuits are performing optimally depend on assumptions about the noise sources encountered by neural signals as they are transmitted ., Here , we provide a coherent picture of how optimal encoding strategies depend on noise strength , type , location , and correlations ., Our results reveal that nonlinearities that are efficient if noise enters the circuit in one location may be inefficient if noise actually enters in a different location ., This offers new explanations for why different sensory circuits , or even a given circuit under different environmental conditions , might have different encoding properties . | medicine and health sciences, statistical noise, engineering and technology, nervous system, signal processing, ocular anatomy, electrophysiology, neuroscience, mathematics, statistics (mathematics), computational neuroscience, coding mechanisms, gaussian noise, computer and information sciences, animal cells, neural pathways, source coding, cellular neuroscience, retina, neuroanatomy, anatomy, cell biology, synapses, signal to noise ratio, neurons, physiology, information theory, biology and life sciences, cellular types, physical sciences, computational biology, ocular system, neurophysiology | null |
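The single-path response model of this paper's Methods (Equation 2, r = κm + ζ) is straightforward to simulate. Below is a minimal sketch assuming an illustrative sigmoidal nonlinearity saturating at 1 and arbitrary values for κ, σ_up, and σ_down; the linear-readout MSE computed at the end corresponds to the optimality criterion used in the study.

```python
import numpy as np

def simulate_circuit(s, f, kappa=0.05, sigma_up=0.1, sigma_down=0.1, rng=None):
    """Simulate r = kappa*m + zeta: upstream Gaussian noise eta corrupts the
    signal, the nonlinearity f sets the mean of a scaled-Poisson response,
    and downstream Gaussian noise zeta corrupts the output."""
    if rng is None:
        rng = np.random.default_rng()
    eta = rng.normal(0.0, sigma_up, size=s.shape)     # upstream noise
    mean_rate = f(s + eta)                            # nonlinear transform
    m = rng.poisson(mean_rate / kappa)                # quantal (Poisson) events
    zeta = rng.normal(0.0, sigma_down, size=s.shape)  # downstream noise
    return kappa * m + zeta

def sigmoid(x, slope=4.0, offset=0.0):
    """Illustrative nonlinearity saturating at 1."""
    return 1.0 / (1.0 + np.exp(-slope * (x - offset)))

rng = np.random.default_rng(0)
s = rng.normal(0.0, 1.0, size=10_000)   # Gaussian stimulus, zero mean
r = simulate_circuit(s, sigmoid, rng=rng)

# MSE of the optimal linear estimator s_hat = a*r + b (least squares).
a, b = np.polyfit(r, s, 1)
mse = np.mean((a * r + b - s) ** 2)
```

Varying sigma_up and sigma_down here reproduces the qualitative point of the paper: the same overall signal-to-noise ratio can yield different readout errors depending on where the noise enters.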
2,267 | journal.pcbi.0030169 | 2,007 | A Numerical Approach to Ion Channel Modelling Using Whole-Cell Voltage-Clamp Recordings and a Genetic Algorithm | Ion channels are trans-membrane proteins that close and open in reaction to changes in membrane potential , among other factors , thus leading to a change in ion flow across the membrane ., Membrane potential may be the most significant factor affecting the activity of ion channels , for not only do the kinetics of many channels depend on membrane potential but also changes in membrane potential are the main instigator of neuronal activity 1 , 2 ., The kinetics of these voltage-dependent ion channels are complex , requiring the construction of intricate kinetic models to understand ion channel behaviour ., The dominant paradigm for ion transport over the past 50 years is based on the seminal experiments of Hodgkin and Huxley 3–8 ., Their detailed kinetic models derived from the giant axon of Loligo are still extremely useful in studies of ion channels and of neuronal physiology ., However , a much more detailed picture of the mechanisms underlying membrane excitation has emerged over the years , emphasizing several disagreements with the Hodgkin-Huxley model ., These include their proposed lack of connectivity between the activating and inactivating “gates” of the voltage-dependent sodium channel 9 and their premise that the inactivation gate voltage dependence is due to its coupling to the activation process; i . e . 
, the inactivation gate can close only after the activation gate opens 10 , 11 ., Nevertheless , the Hodgkin-Huxley model is still predominant in many simulations of neuronal physiology , mostly due to its ease of use and conceptualization combined with the relative small number of free parameters needing estimation for the quantification of channel behavior ., Most models proposed to replace the Hodgkin-Huxley model still use the same kinetic formalism and are only satisfactory in explaining certain aspects of channel behaviour but fail in others ., For example , the classical Hodgkin-Huxley paradigm does not consider interactions between varying kinetic states , particularly that between activation and inactivation ., This failure leads to an erroneous estimation of the kinetic parameters and thus of the predicted channel dynamics 12 ., The best method , so far , for discovering the kinetics of ionic channels is by analysis of single-channel recordings 13 ., However , single-channel recording and analysis suffer from several problems ., One needs to accurately subtract the capacity of the electrodes , and the analysis of first latencies is extremely difficult 13–18 ., Most data on neuronal voltage-gated conductances obtained in studies of cellular physiology have thus been collected when many channels were activated simultaneously , either in the whole-cell mode or in excised patches containing many channels ., That is , most models of voltage-gated channels that aim to explain the physiological function of the channels are generated by analysing simultaneous activity in many channels ., Here we propose a method for analyzing whole-cell recordings of voltage-gated channels ., Our working hypothesis is that it is possible to verify the viability of voltage-dependent ion channel models using a genetic optimization algorithm concurrently with a full-trace fit of experimental data to the model ., We therefore scanned several of the better-known models of 
voltage-gated sodium and potassium channels to examine their accuracy in predicting and reproducing measured currents ., Though the data provided for this paper derive from somata of L5 pyramidal neurons of the rat cortex , the method suggested here is applicable to all types of voltage-clamp recordings from different neuron classes ., Note that our data were obtained using the nucleated patch configuration; thus , the models proposed here are not as detailed as those that may be obtained using single-channel recording ., However , they are functionally significant models which allow us ( and hopefully future researchers ) to predict , simulate , and analyze neuronal physiology ., The model and the genetic algorithm ( GA ) were programmed using NEURON 5 . 7 and 5 . 8 19 ., We parallelized the process using a cluster of ten Pentium 4 computers with a 3 GHz clock speed sharing the same network file system ( NFS ) ., One of the machines functioned as a master , submitting and managing the jobs using a Parallel Virtual Machine ( PVM ) , while the rest were slaves , reading and writing information from a shared directory in the network file system ., Ion channel models were implemented using the NMODL extension of NEURON 20 ., Results were analyzed using custom procedures written in IgorPro 5 . 01 ( Wavemetrics , http://www . wavemetrics . 
com/ ) ., A GA is a search algorithm based on the mechanisms of Darwinian evolution ., It uses random mutation , crossover , and selection operators to breed better models or solutions ( individuals ) from an originally random starting population 21 ., In this study , we started each search with a random population that was at least 20 times larger than the number of free parameters in the fitted model ., Each individual in the population described a parameter set and the model was evaluated for each one of them ., A search space was defined for each parameter; this avoided parameter combinations causing instability to the set of differential equations , while covering most of the physiological range expected for the parameters ., Thus , for rate constants ( k ) the range was set from zero to 2 , 000 s−1 , for voltage-dependence parameters ( z ) from zero to 2000 V−1 , and zero to 100 pS for the conductance ., Only in model C , where a parameter describing a voltage shift was required , was the range set from zero to +100 mV , disregarding the negative range of this parameter , since the expected value for this parameter was positive in both the simulated and the real data ., The population was sorted according to the value of the cost function ( Equation 1 ) of each individual , and a new generation was created using selection , crossover , and mutation as operators ., Selection used a tournament in which two pairs of individuals were randomly selected and the individual with the better score from each pair was transferred to the next generation ., This procedure was repeated N/2 times ( where N is the size of the population ) until the new population was full ., The one exception to this selection process ( and later to the crossover and mutation operators ) was the best individual that was transferred unchanged to the next generation to prevent a genetic drift ., Each pair selected for transfer to the new population was subjected to a one-point crossover 
operator with a probability of 0 . 5 ., After the new population was created , each parameter value in the new population was subjected to mutation with a probability of 0 . 01 ., This allowed , albeit with a low occurrence frequency , the creation of double and even triple mutations to the same individual , thus increasing the variability in the new population ., As detailed in Figures 1 and 2 , two types of mutation operators were used ., The first was a substitution of the parameter value with a random value drawn from a flat random number distribution that spanned the entire search space of the parameter ., The second mutation operator , depicted in Figure 2 , was a relative operator which changed the value of a parameter relative to its current value using a random number drawn from a Gaussian distribution centred on the current value of the parameter with a relative variance of 5% ., We tested several values of mutation and crossover probabilities and found these values to be the optimal for the current project ., Ideally , the termination criterion should be that some cost function reaches a value of zero ., In practice this is not possible since the run time of the process is limited and reaching this score can take a long time ., The simulations ranged from less than an hour for a simple model with a small amount of data ( see also the demonstration code in Text S1 , which takes two hours on a single Pentium 4 with a 3 GHz clock speed and less than 20 minutes on our cluster and the animation of the convergence of the GA to a model of a voltage-gated ion channel in Video S1 ) to more than a week for a 20-parameter model fitting many experimental points ., Therefore , the process was terminated when the value of the best individual had not changed for several hundred generations ., Depending on the complexity of the model , this occurred after 1 , 000–30 , 000 generations ., The cost function calculated root mean distance between the target and the test 
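The selection, crossover, and mutation operators described above can be condensed into a toy sketch. A quadratic stand-in replaces the trace-fitting cost of Equation 1, and the target vector, population size, and generation count below are illustrative assumptions; only the operator probabilities (one-point crossover 0.5, per-parameter mutation 0.01), the flat-range mutation, and the elitism rule follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([800.0, 120.0, 40.0])   # hypothetical "true" parameters
LOW, HIGH = 0.0, 2000.0                   # search range, as for the rate constants

def cost(p):
    # Stand-in for Equation 1: RMS distance between target and test values.
    return np.sqrt(np.mean((p - TARGET) ** 2))

def tournament(pop, scores):
    """Return the better of two randomly chosen individuals."""
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if scores[i] < scores[j] else pop[j]

def evolve(pop, p_cross=0.5, p_mut=0.01):
    scores = np.array([cost(ind) for ind in pop])
    elite = pop[np.argmin(scores)].copy()   # best individual passes unchanged
    new = []
    while len(new) < len(pop) - 1:
        a, b = tournament(pop, scores).copy(), tournament(pop, scores).copy()
        if rng.random() < p_cross:          # one-point crossover
            pt = rng.integers(1, len(a))
            a[pt:], b[pt:] = b[pt:].copy(), a[pt:].copy()
        for ind in (a, b):
            for k in range(len(ind)):       # flat-range mutation per parameter
                if rng.random() < p_mut:
                    ind[k] = rng.uniform(LOW, HIGH)
        new.extend([a, b])
    return [elite] + new[:len(pop) - 1]

pop = [rng.uniform(LOW, HIGH, size=3) for _ in range(60)]  # >= 20x free parameters
init_best = min(cost(ind) for ind in pop)
for _ in range(300):
    pop = evolve(pop)
best = min(pop, key=cost)
```

Because the elite individual is carried over unchanged each generation, the best score is guaranteed never to worsen, which is the point of exempting it from crossover and mutation.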
ionic current:

Score = √[ (1/(M·N)) Σⱼ Σᵢ (Tᵢⱼ − tᵢⱼ)² ],     (1)

where T represents the target dataset and t the test dataset, N was the total number of points in each simulated ionic current trace, and M the number of voltage-clamp sweeps simulated in the model. To rank the ability of various models to fit the data, we used the Log Error Ratio (LER), the logarithm of the ratio of the fit errors:

LER = log(χA/χB),     (2)

where χA and χB are the sums of squared errors for fitting the data to models A and B, respectively 22. Equation 2 applies in theory to models containing a similar number of parameters. This can be corrected for by using the asymptotic information criterion AIC = 2·(NPA − NPB)/n 23, where NPA and NPB are the number of free parameters in each model and n is the number of data points. In this study a large dataset with 15,000–30,000 data points was used for fitting. Therefore, the AIC correction was small and not applied in the calculations. Only LER values are reported. The models used to generate simulated currents are described below. Some minor changes to these models were made when voltage-gated K+ currents recorded from nucleated patches were analyzed. The modifications are noted in Table 1. Moreover, many published models contain mathematical expressions that hinder the proper use of minimization algorithms. For example, a common expression of a rate constant in a model for a voltage-gated channel can often be seen to assume the general form k = A·exp(−z(V − V1/2)). However, the expression exp(z·V1/2) can also be absorbed into the pre-exponential value, leading to the simpler equation k = A′·exp(−zV). Using the former expression in a minimization scheme invariably leads the algorithm astray due to the interchangeability of the pre-exponential factor and the fixed parameters in the exponent. Therefore, in all the models we have taken from the modeling literature and that appear below, we verified that such interchangeability was eliminated by modifying the formal description of the model. Slices (
sagittal , 300 μm thick ) were prepared from the somatosensory cortex of 13–15 days old Wistar rats that were killed by rapid decapitation , according to the guidelines of the Bar-Ilan University animal welfare committee ., Slice preparation followed Stuart 27 ., Slices were perfused throughout the experiment with an oxygenated artificial cerebrospinal fluid ( ACSF ) containing: ( mM ) 125 NaCl , 25 NaHCO3 , 2 . 5 KCl , 1 . 25 NaH2PO4 , 1 MgCl2 , 2 CaCl2 , and 25 Glucose ( pH 7 . 4 with 5% CO2 ) ., All experiments were carried out at room temperature ( 20–22 °C ) ., Pyramidal neurones from L5 in the somatosensory cortex were visually identified using infrared differential interference contrast ( IR-DIC ) videomicroscopy 27 ., To record voltage-gated K+ currents , the standard pipette solution contained ( mM ) : 125 K-gluconate , 20 KCl , 10 HEPES , 4 MgATP , 10 Na-phosphocreatine , 0 . 5 EGTA , and 0 . 3 GTP ( pH 7 . 2 with KOH ) ., 10 mM 4-AP was included in the bath solution to reduce the amplitude of the A-type K+ conductance 28 ., The pipette solution for recording voltage-gated Na+ currents contained ( mM ) 120 Cs-gluconate , 20 CsCl , 10 HEPES , 4 MgATP , 10 Na-phosphocreatine , 1 EGTA , and 0 . 3 GTP ( pH 7 . 
2 with CsOH ) ., In addition , 30 mM TEA was added to the bath solution to reduce residual K+ current amplitude ., A similar amount of NaCl was removed from the bath solution to maintain constant osmolarity ., Nucleated outside-out patches 29 were extracted from the soma of L5 pyramidal neurons ., Briefly , negative pressure ( 180–230 mbar ) was applied when recording in the whole-cell configuration , and the pipette was slowly retracted ., Gentle and continuous retraction created large patches of membrane engulfing the nucleus of the neuron ., Following extraction of the patch , the pressure was reduced to 20–40 mBar for the duration of the experiment ., All measurements from nucleated and cell-attached patches were carried out with the Axopatch-200B amplifier ( Axon Instruments , http://www . axon . com ) ., The capacitive compensation circuit of the amplifier reduced capacitive transients ., Nucleated patches were held at −60 mV unless otherwise stated ., Linear leak and capacitive currents were subtracted off-line by scaling of 20–30 average pulses measured during hyperpolarization ( −80 to −100 mV ) ., Currents were filtered at 5–10 kHz and sampled at rates two to ten times higher than the filtering frequency ., The reference electrode was an Ag-AgCl pellet placed in the pipette solution and connected to the experimental chamber via an agar bridge containing 150 mM KCl ., Under these conditions , the total voltage offset due to electrode and liquid junction potentials 30 was 2 mV ., Membrane potential was not corrected for this potential difference ., Recordings were made with fire-polished Sylgard-coated ( General Electric , RTV615 ) pipettes ( 5–8 MΩ ) ., One of the main issues concerning GAs ( or any minimizing algorithm ) is computing power and , consequently , execution times ., One of the main goals , when dealing with minimization algorithms , is therefore to produce algorithms which are non-cumbersome , while retaining their ability to efficiently 
sample the parameter space ., A common method for achieving this goal is to limit the parameter search space , for example by adaptive trimming down of the ranges for parameter search 31 ., These methods did not reduce the complexity and time detriments of our algorithm ., Instead , while running and analyzing the performance of the GA , we observed a recurring behavior ., In 14 preliminary runs , the GA demonstrated a comparable pattern of convergence , where after several hundred generations the best individual in each generation quickly converged to a region in the parameter space ., This is illustrated in Figure 1 , which describes the steps taken by each parameter of the best individual in each generation in a nine-parameter model ., The steps toward each target value are displayed as the percentage change from that parameter value in the previous generation ., Figure 1A displays the steps taken by the parameters over the first two hundred generations of the GA progress ., During the first fifty generations , large changes were observed in the values of some of the parameters in the best individual vector ., In the further generations , parameter values in the best individual changed only in small steps ., Figure 1B shows a detailed view of the algorithm convergence presented above , showing clearly that after several hundred generations the parameters have mutated to only a few percent from their values in the previous generation ., Since the parameter search space remained constant , the small changes in value as parameters converged became more insignificant in comparison with the full range of the parameter search space ., We solved this problem by defining a new range for each parameter mutation search space using a Gaussian centered on the best value of that parameter obtained in the previous generation with a relative variance of 5% ., The randomly picked value was then multiplied by the current value of each individual found by the GA , providing a new 
value limited to ∼±22% of the current value of each individual ( Figure 2A ) ., The Gaussian range was implemented only after the GA went through at least 500 generations , taking advantage of the fast convergence observed in the initial stages of the run ., This new adaptive range for parameter mutation search space proved more efficient than the previously attempted fixed ranges ., It increases the probability of the algorithm searching for better individuals around the best value of the individual found so far , as was indeed observed in the preliminary runs ., At the same time the possibility of a better individual existing farther away is not dismissed ( as the Gaussian is infinite on both ends ) ., A comparison between the GA with and without the adaptive Gaussian range revealed that , while the original algorithm gradually converged and reached a score of 0 . 1 after 10 , 000 generations ( Figure 2B , smooth line ) for the nine-parameter Model A described in the Methods , the new GA converged after 1 , 000 generations to a score of 10−4 ( Figure 2B , dashed line ) ., Subsequently , we simulated potassium currents using several basic kinetic models for potassium channels ( see the Channel Models section in Methods ) ., The potassium current data were saved and used as a reference for the GA convergence ., We initially tested a nine-parameter model ( Model A ) with equal value parameters ., This did not prevent the resulting simulated currents from resembling experimentally recorded ones ., Figure 3A displays the activation of the conductance in response to 20 mV voltages steps from −80 to +60 mV ., Figure 3B shows the deactivation of the conductance after 20 ms activation to +60 mV from potentials ranging from −120 mV to −20 mV in steps of 10 mV ., This entire set of data was used as the target dataset for calculating the score function ( Equation 1 ) for each individual generated by the GA ., After approximately 5 , 000 generations , the GA converged all of 
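One way to read the relative mutation operator described above: draw a multiplicative factor from a Gaussian of mean 1 and relative variance 5%, so a mutated parameter typically stays within about ±22% of its current value (one standard deviation, √0.05 ≈ 0.224). The hard clipping in the sketch below is an assumption made for illustration; the text only reports the ∼±22% range.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_mutate(value, rel_var=0.05, rng=rng):
    """Adaptive-range (relative) mutation: scale the current value by a
    Gaussian factor with mean 1 and variance rel_var, clipped here to
    one standard deviation (about +/-22% for rel_var = 0.05)."""
    sd = np.sqrt(rel_var)
    factor = np.clip(rng.normal(1.0, sd), 1.0 - sd, 1.0 + sd)
    return value * factor

# Mutating a parameter whose current value is 100 keeps it near 100,
# unlike the flat-range operator that samples the whole search space.
samples = np.array([adaptive_mutate(100.0) for _ in range(2000)])
```

Switching from the flat-range operator to this relative operator after the initial generations concentrates the search near the current best individual, which is why convergence accelerated once the adaptive range was enabled.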
the nine parameters to within 40% of their true target values ( Figure 3C ) ., After the GA run was stopped , the data were run through a hill-climbing Principal Axis algorithm ( PrAxis ) 32 , which is part of the NEURON simulation environment ., This converged the remaining parameters to 1%–2% of the target values ( Figure 3C ) ., To determine the efficiency of the PrAxis routine alone , we generated 100 random parameter vectors and used them as starting guesses for this routine ., In none of the cases was the PrAxis routine able to provide even a rough fit of the data ., Thus , it is only the combination of the GA followed by PrAxis that produced a good fit ., We further simulated potassium currents using a thirteen-parameter model ( Model B ) ., Again , the potassium current data for activation ( Figure 4A ) and deactivation ( Figure 4B ) were saved and used as the target dataset for the GA convergence ., As in the previous experiment , after ∼5 , 000 generations the GA converged all but two parameters to within 40% of their target values ( the remaining two reaching values less than 200% of their target value ) ., Once more , the parameter that is independent of membrane potential converged to within a few hundredths of a percent of its target value ., As before , the GA data were run through PrAxis , producing a fit that was only a few percent from the target value for all parameters ( Figure 4C ) ., Similar results were obtained for several different target parameter sets ., Using the dataset produced by Models A and B , we also investigated the ability of simulated annealing ( SA ) and random sampling to replace the GA scheme we present here ., The SA algorithm performed as efficiently as the GA in some cases and much worse in others ( simulations not shown ) ., Analysis of the simulations revealed that , unlike the GA , the cooling scheme of the SA algorithm had to be repeatedly fine-tuned to properly constrain the parameters of different models ., We also tested the
ability of extensive random sampling as a substitute for the GA ., After 1 , 000 , 000 iterations , the best score obtained for Model B was ∼6 , 000 , at least two orders of magnitude larger than the score obtained by the GA after a similar number of iterations ., Since our analysis uses a large amount of data , we tested its accuracy in converging on the target parameters using varying amounts of data ., We compared the accuracy of the combined convergence of the GA and PrAxis for an increasing number of stimulation sweeps ( Figure 7 ) previously used with Model B to produce the activation traces in Figure 4A ., The score function produced a seemingly near-perfect fit when using a small number of stimulation sweeps , but its divergence from zero increased as the number of sweeps grew ( Figure 7A ) ., However , an opposing trend was observed when comparing the mean deviation of the best parameter values reached by the GA and PrAxis from their target values ( Figure 7B ) ., We therefore conclude that an increasing number of data sweeps is needed for an accurate estimate of the kinetic parameters of a model and does not lead to overfitting of the model ., This conclusion was further supported by simulations run on Model C , which proved that an increasing number of stimulation protocols , expressing varying aspects of channel kinetics , also adds to the accuracy of the process ( unpublished data ) ., As our aim was to use our GA in fitting a model to currents recorded from neurons , we needed to test our GA's ability to converge in the presence of noise ., We therefore created several simulations consisting of randomly generated noise of varying amplitudes ., We first tested a 14-parameter model with equal parameters in the presence of random noise whose amplitude was 5% of the current value ., ( This model was identical to the 13-parameter Model B plus a maximal conductance density for the deactivation protocol , simulating variability between consecutive recordings or
, alternatively , data obtained from two different recordings . ) ., The final convergence of both GA and PrAxis resulted in a fit averaging 5% deviation from the true values ( fit not shown ) ., We then ran three similar experiments with constant amplitude noise varying between ±10 , ±20 , and ±30 pA in each experiment ., The results illustrate the GA's ability to converge in spite of the noisy data ( Figure 8 ) ., The average distance of the parameters from their target values was 1 . 4% , 2 . 5% , and 1 . 4% for the three noise levels , respectively ., The response of voltage-gated channels to changes in membrane potential is traditionally measured using a step change in the membrane potential ., This is the essence of the voltage-clamp method , which stems from the relative ease of analytically solving the differential equations describing channel relaxation following step activation ., Here we have used numerical methods to solve the equations describing channel activation ., Therefore , it occurred to us that the GA may also be able to estimate the parameters of a model using a set of data obtained with not-so-standard voltage-clamp protocols ., One such protocol is the voltage ramp , which is appealing mainly from the experimental point of view since it gives the experimentalist a glimpse of the full voltage dependence of a channel in a single fast sweep ., Furthermore , of all voltage-clamp protocols , the voltage ramp is unique in that during the ramp the contribution of the capacitance to the current is constant ., This allows simpler and cleaner leak subtraction than when a square pulse is applied and the capacitive current approaches infinity at the onset of the pulse ., Figure 9A displays the activation ( top traces ) of a nine-parameter model of a voltage-gated K+ channel ( Model A ) in response to ramps of varying duration with potential ranging from −100 mV to +50 mV ( Figure 9A , bottom traces ) ., Figure 9B
displays the response of the same model to deactivating voltage ramps from +50 mV to −100 mV ( following a 50-ms step to +50 mV to fully activate the conductance ) ., The dataset displayed in Figure 9A and 9B was used as the target dataset for the GA to determine whether it can be used to constrain the parameters of Model A . After 5 , 000 generations , the error in the parameters , relative to the target parameters , ranged from 0 . 01% to 42% ( Figure 9C ) ., When this set of parameters was used as an initial guess for the PrAxis hill-climbing algorithm , the error range was reduced to between 10−4% and 5% ( Figure 9C ) ., Similar results were obtained when Model B was used to generate the target dataset ., Thus , this set of simulations demonstrated that the GA could locate the global minimum even when the target dataset was generated using non-classical voltage-clamp protocols ., Following our successful simulations of potassium channel models , we proceeded to more complex simulations of sodium channels ., One model tested was based on the sodium current recorded from retinal ganglion cells 24 ., The small changes we made to the model ( Model C in Methods ) were necessary to eliminate redundancy in the original model , which would have prevented the GA from constraining the parameters to the appropriate values ., When considering a voltage-gated channel that displays both activation and inactivation , the repertoire of voltage protocols increases substantially ., The five basic voltage protocols routinely applied in such cases are activation ( Figure 5A ) , deactivation ( Figure 5B ) , steady-state inactivation ( Figure 5C ) , pulse inactivation ( Figure 5D ) , and recovery from inactivation ( Figure 5E ) ., Using this target dataset , the GA , after ∼11 , 000 generations , generated a parameter set that deviated from the target parameter set by 10%–200% ( Figure 5F ) ., Using this set of parameters as an initial guess for the PrAxis hill-climbing algorithm
reduced the error range from 10−4% to 10−2% , practically a perfect fit ( Figure 5F ) ., Next we tested the GA on voltage-gated K+ currents recorded in nucleated outside-out patches extracted from layer 5 neocortical pyramidal neurons ., These neurons contain several voltage-gated K+ channels 28 , 33 ., To reduce the number of channels , we blocked the A-type K+ channel with 10 mM 4-AP ., The residual current , the slow voltage-gated K+ channel 28 , 33 , was activated by voltage steps from −80 to +40 mV ( Figure 10A ) ., All the traces in Figure 10 were recorded with an extracellular K+ concentration of 6 . 5 mM to generate larger deactivating tail currents ., The activation and deactivation data traces were used as the target dataset for the GA , and the fitness of several models was tested ., Following convergence of the GA , the best parameter set was used as an initial guess for the PrAxis routine ., The results of these minimizations are summarized in Table 1 where models are compared using their LER 22 ., The best fit was obtained for a model with three closed states and one open state in which the rate constants were exponential ., However , the fitness of this model , containing 21 free parameters , did not differ substantially from a model with two closed states and only 11 parameters ., Thus , to avoid overfitting , we decided that the latter model produced the best fit for this dataset ., This fit is shown in Figure 10A and 10B as red lines ., Similar results were obtained in two more patches ., Next , we investigated whether the model obtained in this way could predict the response of the conductance to other , less traditional , voltage-clamp protocols ., Responses to activation ramps from −100 mV to +40 mV , with varying durations starting from 40 ms and increasing in steps of 10 ms ( blue lines ) , were simulated ., These were compared with currents recorded from the patch used in Figure 10A and 10B using identical voltage-clamp protocols ( Figure 
10C ) ., Simulations of responses to deactivation ramps from +40 mV to −80 mV with durations increasing in 5-ms steps from 2 ms ( blue lines ) were similarly compared with recorded current traces ( Figure 10D ) ., Figure 10E and 10F depict a further comparison of data recorded from the patch and data simulated on the best model using the stimulation protocols shown in Figure 10A and 10B ., The voltage-clamp protocols in Figure 10E and 10F were sine waves from −70 mV to +70 mV with frequencies of 50 Hz and 100 Hz , respectively ., Again , the blue lines depict the current produced by Model A which had the best fit to the activation and deactivation data in Figure 10A and 10B ., Subsequently , we used the GA on voltage-gated Na+ currents recorded in nucleated outside-out patches extracted from layer 5 neocortical pyramidal neurons ., Data traces from activation , steady-state inactivation , pulse inactivation , and recovery from inactivation were used as the target dataset for the GA , and the fitness of several models was tested ., The activation , steady-state inactivation , and pulse inactivation currents of the voltage-gated Na+ channel are shown in Figure 6 ., The activation currents were produced in response to voltage steps from −30 to +35 mV in steps of 5 mV from a holding potential of −110 mV ., The steady-state inactivation currents were produced in response to a voltage step to 0 mV from holding potentials varying from −105 mV to −5 mV in 10-mV increments ., The pulse inactivation currents were produced in response to a voltage step to −50 mV ( from a holding potential of −110 mV ) for durations varying from 0 ms to 45 ms in increments of 5 ms followed by another voltage step to 0 mV for 30 ms .
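Stepped voltage-clamp protocols like the Na+ activation protocol just described are straightforward to generate programmatically. The sketch below is illustrative, not the paper's code: the function name, the hold/step durations, and the sample interval are placeholder assumptions; only the holding potential (−110 mV) and the test-potential range (−30 to +35 mV in 5 mV increments) come from the text.

```python
def activation_protocol(v_hold=-110.0, v_start=-30.0, v_stop=35.0, dv=5.0,
                        t_hold_ms=20.0, t_step_ms=30.0, dt_ms=0.025):
    # Build one command waveform (a list of mV samples) per test potential:
    # hold at v_hold, then step to the test potential. The hold/step
    # durations and the sample interval are placeholder values.
    n_hold = int(round(t_hold_ms / dt_ms))
    n_step = int(round(t_step_ms / dt_ms))
    sweeps, v = [], v_start
    while v <= v_stop + 1e-9:  # tolerance guards against float drift
        sweeps.append([v_hold] * n_hold + [v] * n_step)
        v += dv
    return sweeps
```

The same pattern extends to the deactivation, steady-state inactivation, and pulse inactivation protocols by varying which segment of the waveform is swept.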
After convergence of the GA , the best parameter set was used as an initial guess for the PrAxis routine ., The results of these minimizations are summarized in Table 2 , where models are compared using their LER ., The best fit was obtained for a model containing two closed states , an open state , and two inactivated states ( Model D , see Figure 6 ) ., Examination of the visual fit reveals that Model D has a more accurate description of the channel inactivation , while Model E ( Figure 6 ) is slightly better in depicting the channel activation ., Our study describes a method for analyzing voltage-dependent ionic currents ., We tested and affirmed the ability of the GA presented here to fit current traces using the full-trace analysis method ., The GA was then used to test the fit of various previously published and new voltage-dependent ion channel models ( Figures 3 , 4 , 7–9 ) ., The models were further used to produce currents equivalent to the potassium ( Figure 10 ) and sodium currents ( Figure 6 ) measured in patch-clamp experiments ., We conclude that it is possible to verify the viability of voltage-dependent ion channel models using a genetic optimization algorithm concurrently with a full-trace fit of experimental data to the model ., The advantage of our scheme over the more commonly used gradient descent algorithms is that GAs do not require an initial guess of the parameters 12 , 34 , 35 ., With a large enough dataset , it is possible to arrive at the global minimum even from a random starting point ., This is very important since , given wrong starting parameters , gradient descent algorithms may arrive at a local minimum ., However , although this study obtained good results in almost all cases , this does not constitute a proof that this method will work each time , since there may be a random combination of parameters that defies the GA ., Still , our results show that a usable and functionally significant ion channel model describing whole-cell
currents may be produced using a GA with a dataset containing a “complete” whole-cell activation of the channel as its input ., In contrast to the disjoint conventional method , the approach to data analysis and model fitting used here has been designated global curve fitting or the full-trace method 12 , 34 ., However , the analysis presented here shows several important differences from previously suggested full-trace analyses ., The most obvious difference is the use of multiple stimulation protocols; previous full-trace analyses used only activation and deactivation protocols for their data and fit 12 , 34 , 35 ., Furthermore , while using the full current | Introduction, Methods, Results, Discussion | The activity of trans-membrane proteins such as ion channels is the essence of neuronal transmission ., The currently most accurate method for determining ion channel kinetic mechanisms is single-channel recording and analysis ., Yet , the limitations and complexities in interpreting single-channel recordings discourage many physiologists from using them ., Here we show that a genetic search algorithm in combination with a gradient descent algorithm can be used to fit whole-cell voltage-clamp data to kinetic models with a high degree of accuracy ., Previously , ion channel stimulation traces were analyzed one at a time , the results of these analyses being combined to produce a picture of channel kinetics ., Here the entire set of traces from all stimulation protocols are analysed simultaneously ., The algorithm was initially tested on simulated current traces produced by several Hodgkin-Huxley–like and Markov chain models of voltage-gated potassium and sodium channels ., Currents were also produced by simulating levels of noise expected from actual patch recordings ., Finally , the algorithm was used for finding the kinetic parameters of several voltage-gated sodium and potassium channels models by matching its results to data recorded from layer 5 pyramidal 
neurons of the rat cortex in the nucleated outside-out patch configuration ., The minimization scheme gives electrophysiologists a tool for reproducing and simulating voltage-gated ion channel kinetics at the cellular level . | Voltage-gated ion channels affect neuronal integration of information ., Some neurons express more than ten different types of voltage-gated ion channels , making information processing a highly convoluted process ., Kinetic modelling of ion channels is an important method for unravelling the role of each channel type in neuronal function ., However , the most commonly used analysis techniques suffer from shortcomings that limit the ability of researchers to rapidly produce physiologically relevant models of voltage-gated ion channels and of neuronal physiology ., We show that conjugating a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols enables the semi-automatic production of models of voltage-gated ion channels ., Once fully automated , this approach may be used for high throughput analysis of voltage-gated currents ., This in turn will greatly shorten the time required for building models of neuronal physiology to facilitate our understanding of neuronal behaviour . | physiology, neuroscience, rattus (rat), computational biology | null |
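The two-stage minimization summarized in the abstract above, a stochastic global search followed by a derivative-free local refinement, can be sketched in miniature. This is a deliberately minimal stand-in under stated assumptions: a plain random search replaces the genetic algorithm, a crude axis-aligned descent replaces the PrAxis routine, and a toy quadratic score replaces the full-trace comparison between simulated and recorded currents; all function names are illustrative.

```python
import random

def score(params, target):
    # Sum of squared deviations; a toy quadratic standing in for the
    # full-trace comparison between simulated and recorded currents.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def coarse_search(target, n_iter=3000, lo=-10.0, hi=10.0):
    # Stage 1: stochastic global search (plain random search here,
    # standing in for the GA), needing no initial guess.
    best, best_s = None, float("inf")
    for _ in range(n_iter):
        cand = [random.uniform(lo, hi) for _ in target]
        s = score(cand, target)
        if s < best_s:
            best, best_s = cand, s
    return best

def hill_climb(params, target, step=1.0, tol=1e-8):
    # Stage 2: derivative-free local refinement (a crude coordinate
    # descent standing in for the PrAxis routine).
    params = list(params)
    while step > tol:
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                if score(trial, target) < score(params, target):
                    params, improved = trial, True
        if not improved:
            step *= 0.5
    return params
```

As in the paper's scheme, the global stage lands near the basin of the global minimum and the local stage polishes the estimate; run alone from a random start, the local stage tends to stall in whatever basin it begins in.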
2,280 | journal.pcbi.1005436 | 2,017 | Computation and measurement of cell decision making errors using single cell data | Each individual cell receives signals from the surrounding environment and is supposed to respond properly through a variety of biochemical interactions among its signaling molecules ., Single cell studies and modeling approaches have emerged in recent years 1 , 2 , 3 , to understand the biochemical processes in each individual cell , as opposed to a large population of cells and their average behavior ., Due to signal transduction noise , a cell can respond differently to the same input , which may result in incorrect ( unexpected ) cell decisions and responses 2 ., Upon providing an input signal , however , it is not clear whether the cell is going to make a correct decision or not ., Due to the random nature of the transduction noise , this decision making becomes somewhat probabilistic 2 ., Here we introduce a method for characterization and quantification of decision making processes in cells , using statistical signal processing and decision theory concepts 4 used in radar and sonar systems ., The basic goal of such systems is the ability to correctly decide on the presence or absence of an object ., For example , in a radar system it is of interest to decide if there is an object transmitting a constant signal , while noise is present ., If the received signal is much stronger than noise , the system can correctly declare the presence of the object ., However , if the received signal is much weaker than noise , the system will miss the presence of the object ., This erroneous decision is called a miss event ., The radar system can make another type of erroneous decision , called a false alarm event , where there is no object but noise misleads the system to falsely declare the presence of an object ., A mathematical model for this example 4 , including received signal and noise models , the decision making algorithm , probabilities for 
making incorrect decisions and some numerical results are presented in Materials and Methods ., To explain the method in a practical way and in the context of molecular computational biology , we use the tumor necrosis factor ( TNF ) signaling pathway 2 which regulates the transcription factor nuclear factor κB ( NF-κB ) ( Fig 1A ) ., NF-κB is a nuclear transcription factor that regulates numerous genes which play important roles in cell survival , apoptosis , viral replication , and is involved in pathological processes such as inflammation , various cancers and autoimmune diseases ., In the TNF signaling pathway ( Fig 1A ) , the molecule A20 has an inhibitory feedback effect , whereas TRC stands for the TNF receptor complex 2 ., TNF is a cytokine that can mediate both pro-apoptotic and anti-apoptotic signals 5 ., In wild-type cells and upon binding of TNF ligands , NF-κB translocates to the nucleus , temporarily increasing the level of nuclear NF-κB ., NF-κB activation rescues the cell from apoptosis ., Then due to the negative feedback of A20 , the nuclear NF-κB level decreases ., This short period of NF-κB activity is sufficient to activate transcription of the so called early genes , including numerous cytokines and its inhibitor A20 ., In A20-deficient cells , the level of nuclear NF-κB remains relatively high for several hours ., Loss or mutation of A20 can result in chronic inflammation and can promote cancer 6 , 7 ., The signal transduction noise considered in our analysis encompasses all factors that make cell responses to the same signal variable or heterogeneous ., In reference 3 it is demonstrated that both intrinsic and extrinsic noise contribute to the transduction noise in the NF-κB pathway ., Extrinsic noise results from the fact that at the time of stimulation , cells are not identical and may have different levels of TNF receptors and other components of the signal transduction cascade ., Intrinsic noise , on the other hand , results from the 
randomness of the biochemical reactions that involve a small number of molecules ., Recent information theoretical analysis of single cell data has demonstrated that in the TNF signaling pathway , a cell can only decide whether the TNF level at the system input is high or low 2 ., In other words , based on the nuclear NF-κB level , the cell can only tell whether there is a high TNF level at the input or not 2 ., During this process , we formulate that a cell can make two types of incorrect decisions: deciding that TNF is high at the system input whereas in fact it is low , or missing TNF’s high level when it is actually high ., These two incorrect decisions can be called false alarm and miss events , respectively , by analogy with the terminology used in radar and sonar 4 ., The likelihood of occurrence of these incorrect decisions depends on the signal transduction noise ., To understand how a cell makes a decision on whether TNF is high or low , we first studied two TNF concentrations of 8 and 0 . 0021 ng/mL , respectively ( other TNF levels are discussed later ) ., The histograms representing NF-κB responses of hundreds of cells to each TNF stimulus after 30 minutes are shown in Fig 1B ., By using a probability distribution such as a Gaussian ( Fig 1C ) ( see Materials and Methods ) for histograms , we specified the regions associated with incorrect decisions ( Fig 1C ) ( see Materials and Methods ) ., These regions are determined by the optimal decision threshold obtained using the maximum likelihood principle 4 ( see Materials and Methods ) , which simply indicates that the best decision on some possible scenarios is selecting the one that has the highest likelihood of occurring 4 ., The area to the right of the decision threshold under the low TNF response curve is the false alarm region ( Fig 1C ) , meaning that the nuclear NF-κB level could be greater than the threshold due to the noise , which falsely indicates a high level of TNF at the system input ., The size of this shaded
area specifies PFA , the false alarm probability ., On the other hand , the area to the left of the decision threshold under the high TNF response curve is the miss region ( Fig 1C ) , meaning that due to the noise , nuclear NF-κB level could be smaller than the threshold , which results in missing the presence of high TNF level at the system input ., The size of this shaded area is PM , the miss probability ., Using the single cell experimental data we calculated PFA = 0 . 04 and PM = 0 . 1 ( see Materials and Methods ) ., The higher value for PM can be attributed to the broader response curve when TNF is high ( Fig 1C ) ., The overall probability of error Pe for making a decision is given by Pe = ( PFA + PM ) /2 = 0 . 07 ( see Materials and Methods ) , which is the average of false alarm and miss probabilities ., We also collected the histograms of NF-κB responses of hundreds of cells to each TNF stimulus after 4 hours ( Fig 1D ) , which seem to have more overlap , compared to the response histograms collected at 30 min ., This can be better understood by looking at the two response curves and the larger false alarm and miss regions ( Fig 1E ) ., In fact , we observed higher values for false alarm and miss probabilities , i . e . , PFA = 0 . 2 and PM = 0 . 29 ( see Materials and Methods ) ., These higher values for false alarm and miss probabilities , as well as the higher overall probability of error Pe = ( 0 . 2 + 0 . 29 ) /2 = 0 . 
245 can be due to the negative feedback of A20 ( Fig 1A ) , which reduced the level of nuclear NF-κB in 4 hours , when TNF was high ( notice the considerable shift of the TNF-high response curve to the left that we observe in Fig 1E , compared to Fig 1C ) ., To understand the decision making process based on both early and late responses , we computed ( see Materials and Methods ) high and low TNF joint response curves of the nuclear NF-κB at 30 minutes and 4 hours ( Fig 1F ) ., The top view of the response curves ( Fig 1G ) shows that while high and low TNF concentrations produce relatively distinct distribution patterns in the early response domain , they have a higher degree of overlap in the late response domain ., Using a more sophisticated approach to determine decision thresholds and decision probabilities based on joint early and late response data ( see Materials and Methods ) , we calculated PFA = 0 . 03 , PM = 0 . 1 and Pe = 0 . 065 ., These results turned out to be about the same as early decision probabilities , i . e . , PFA = 0 . 04 , PM = 0 . 1 and Pe = 0 . 
07 ., It appears that in this signaling pathway , maximum likelihood decisions based on joint early/late events and early event alone provide the same finding on whether TNF level at the system input is high or low ., In the presence of abnormalities in a cell , such decision making processes can significantly change , compared to a wild-type cell ., For example , in the absence of A20 , a cell is unable to inhibit the TNF-induced NF-κB response 2 , 8 ., Under this condition , response curves of hundreds of A20-/- cells to high and low TNF levels after 30 minutes ( Fig 2A ) show significant overlap , compared to the response of wild-type cells ( Fig 1C ) ., This is because the negative feedback was no longer present in A20-/- cells , which resulted in the broadening of the TNF-low response curve and the increase in its mean value ( Fig 2A ) ., Therefore , the false alarm and miss regions in A20-/- cells turned out to be much larger ( Fig 2A ) , for which we computed PFA = 0 . 37 and PM = 0 . 15 ( see Materials and Methods ) ., Both false alarm and miss probabilities were greater than those of wild-type cells ( Fig 2B ) ., In biological terms , the higher false alarm rate in this abnormal TNF signaling system means perceiving more signals which in fact do not exist at the system input , whereas the higher miss rate indicates that it is more likely to miss signals that actually exist ., Using the response curves after 4 hours in A20-/- cells ( Fig 2C ) , we computed PFA = 0 . 73 and PM = 0 . 
12 ( see Materials and Methods ) ., The increase in PFA and decrease in PM , compared to the wild-type cells , reflected a more profound effect of the lack of negative feedback after 4 hours in A20-/- cells , which resulted in an increase in the mean nuclear NF-κB level for both low and high TNFs ( Fig 2C ) ., Computations using both early and late response data ( see Materials and Methods ) revealed that in this signaling pathway , decisions based on joint early/late events and early events in A20-/- cells provide about the same results and probabilities on whether TNF level at the system input is high or low ( Fig 2B ) ., To study the impact of different TNF concentrations on cell decisions , we computed the overall probability of error Pe in making decisions after 30 minutes and 4 hours in both wild-type and A20-/- cells ( Fig 2D ) , after treatment with six different TNF concentrations ., This analysis shows that in wild-type cells a higher decision error rate Pe is observed over time for all TNF concentrations ., Also in wild-type cells Pe decreases as TNF concentration increases up to about 3 ng/mL , and then becomes less sensitive to the higher concentrations of TNF ., On the other hand , depletion of A20 increases the decision error rate Pe , compared to the wild-type cells , after 30 minute treatment ( Fig 2D ) ., Interestingly , A20-/- cells show higher Pe after the 4 hour treatment that is nearly insensitive to the increase in TNF concentration ., Overall , for each time course , there is a significant increase in Pe in A20-/- cells , compared to wild-type cells ( Fig 2D ) ., This is because of the failure of the signaling pathway due to A20 deficiency , where cells fail to stop TNF-induced NF-κB response ., This observation further confirms the usefulness of the decision error rate Pe as a metric and method for modeling and measuring cell decision making processes under normal and abnormal conditions and in the presence of transduction noise uncertainty 
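For Gaussian response curves, the decision threshold and the error probabilities PFA, PM and Pe used throughout this analysis can be computed in closed form. The sketch below assumes equal-variance Gaussians and equal priors, for which the maximum likelihood threshold reduces to the midpoint of the two means (the measured NF-κB curves have unequal widths, in which case the threshold solves a quadratic instead); the function name and the numbers are illustrative, not the measured values.

```python
import math

def norm_cdf(x, mu, sigma):
    # Standard normal CDF evaluated via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def decision_errors(mu_low, sigma_low, mu_high, sigma_high):
    # Equal-variance case: the maximum likelihood threshold is the
    # midpoint of the two means. PFA is the tail of the low-TNF curve
    # above the threshold, PM the tail of the high-TNF curve below it,
    # and Pe their average (equal priors assumed).
    thr = 0.5 * (mu_low + mu_high)
    p_fa = 1.0 - norm_cdf(thr, mu_low, sigma_low)   # false alarm
    p_m = norm_cdf(thr, mu_high, sigma_high)        # miss
    return thr, p_fa, p_m, 0.5 * (p_fa + p_m)
```

With the measured response histograms substituted for the illustrative Gaussians, the same three quantities reproduce the PFA, PM and Pe values reported above.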
., The developed approach can be extended to more complex and larger signaling networks , where inputs could be ligands or secondary messengers , and outputs could be several transcription factors that produce certain cellular functions 9 ., Then by analyzing the concentration levels of these transcription factors at single or multiple time points using the proposed approach , probabilities of various cell fates in response to the input signals can be computed ., In a broader context , one notes that in various organisms ranging from simple ones such as viruses to bacteria , yeast , lower metazoans and finally complex organisms such as mammals , various decisions are made in the presence of noise 10 ., Depending on the concentration levels of certain molecules and their changes , regulated by some intracellular molecular networks , a cell may select from several possible fates ., For example , in embryonic stem cells in mammals , the Nanog transcription factor expression level , which might be affected by molecular noise , is a determinant of cell differentiation , if proper signals are present 10 ., In this context , one can use the approach presented here to compute false alarm and miss probabilities at different time instants , to better understand how precise or erroneous the decision to differentiate is ( given that noise is present ) , and how it changes over time ., In a broader context , one may envision studying cell decision making processes in other organisms , such as those reviewed in 10 , using the developed approach ., This study shows that compared to the overall probability of error Pe introduced in this paper for signaling systems , the signaling capacity defined as the maximum amount of information between the system input and output , may not be a convenient metric for revealing dysfunctionalities in the system ., The rationale is that while in the TNF—NF-κB pathway ( Fig 1A ) a reduction in capacity is observed in A20-/- cells in 30 minutes , 
compared to wild-type cells , an opposite effect , i . e . , capacity increase , is observed after 4 hours 2 ., Therefore , the impact of A20 deficiency on the pathway capacity appears in different directions over time ., The introduced error probability metric , on the other hand , consistently shows the increased level of erroneous behavior of this signaling pathway , in both short and long terms ., The difference between decision error probability and capacity in the context of dysfunctionalities can be anticipated ., This is because decision error probability is a metric defined such that it directly reflects departure of the pathway from normal behavior and its expected response ., Capacity , on the other hand , is defined to measure the maximum amount of information that can flow from the pathway input to its output ., While , in general , one may expect that a higher capacity in a pathway is a desired outcome , one can also note that the increased capacity might be caused by an alteration or loss of some otherwise important molecular functions in the pathway ., In the TNF—NF-κB pathway , it has indeed been observed 2 that after 4 hours , A20-deficient cells exhibit a higher capacity , compared to wild-type cells ., The point we are making here is that the higher amount of information that can travel from TNF to NF-κB in A20-deficient cells may not necessarily reflect biologically appropriate functioning of the pathway ., To be able to understand dysfunctionalities in a pathway and how they affect cell decision makings , one can therefore benefit from a complementary metric and approach to characterize cell decision making errors in abnormal pathways , which we have studied here ., In summary , capacity is a useful metric for studying information transmission in signaling pathways , whereas the introduced metrics of false alarm , miss and overall error rates are suitable for modeling decision making errors caused by noise and signaling failures ., The goal of 
dynamical modeling is to use tools such as differential equations or stochastic processes , to model changes in the concentration levels of molecules with time ., On the other hand , our approach aims at statistical characterization of decision making processes in cells , based on the concentration levels of certain molecules that control cell decisions , using statistical signal processing and decision theory tools ., The concentration levels can be obtained via either experiments or stochastic simulations ., As an example , in reference 3 a stochastic dynamical model is developed , which mimics nuclear NF-κB level changes with time , in response to a given TNF dose ., The model is designed to assess the kinetics of molecular activities in a representative cell , provides information about single cell responses , and can also be used to simulate distributions of given protein levels across a population ., It does not quantify the chance of missing a signal ., The proposed approach provides methods to analyze single cell data in the context of cell decision making ., For example , TNF high level of 8 ng/mL indicates the presence of a strong signal ., However , due to noise , there is a chance for a cell to miss this signal ., The approach presented here addresses probabilistic decision making , and the fidelity of decision making in noisy signaling networks ., In the particular example of TNF = 8 ng/mL , our approach reveals that there is a 10% chance for a cell not to respond to the signal , based on the measured nuclear NF-κB levels after 30 minutes ., We also note that while our approach is not meant to provide tools to model temporal variations of concentration levels , it allows to analyze and quantify the dynamics of signaling pathways and helps to understand cell decision making processes ., In the above example , our approach shows that based on the measured nuclear NF-κB levels after 4 hours of TNF stimulation , the chance for missing the strong signal 
increases to 29% ., This observation agrees with the dynamics of the TNF- NF-κB pathway activity , where due to the negative feedback of A20 , the level of nuclear NF-κB decreases after 4 hours , as discussed in the paper ., To further relate the developed approach to the dynamics of signaling , here we have also developed a more sophisticated method to determine cell decision making probabilities , if a cell can make decisions based on the nuclear NF-κB level at the two time points jointly , compared to deciding based on 30 minute or 4 hour levels only ., Our results show that in this example , joint decision based on the two time points has a 10% chance of missing the signal ., As discussed in the paper , for this specific pathway , our results suggest that decisions based on joint early/late signaling events versus the early event alone show similar chance for missing the presence of the signal ., In other pathways and signaling systems , however , this does not have to be the case , and the presented method can still be used to determine the probability of missing a signal and taking a certain cell fate road , based on multiple observations at different time points ., Overall , the approach complements dynamic modeling by providing quantitative results for assessing the dynamical decision-making performed by a cell in the presence of an external stimulus ., In contrast to the more common dynamical modeling analysis , the approach presented here does not explicitly characterize changes in the concentration levels of molecules with time ., These approaches are compatible , as a stochastic dynamical model can yield distributions of input-conditioned output levels , expressed in the form of the concentration of a singling molecule of interest ., Then our approach can use the simulated concentration level distributions to determine decision thresholds , false alarm and miss probabilities , etc ., While it is preferred to use experimental data directly to understand 
cell decisions , it may be advantageous to use data generated by dynamical models , including those that were developed to describe the TNF-stimulated NF-κB signaling 11 ., Furthermore , by perturbing kinetic parameters of a dynamical model , one can investigate the sensitivity of both the concentration level distributions and false alarm and miss probabilities to those parameters ., This analysis may reveal that some kinetic parameters can significantly affect cell decisions , while others may play less important roles ., In summary , the proposed method of the analysis of possible cellular decisions , as applied to the TNF—NF-κB pathway , yields insights that are biologically meaningful and are in agreement with the known pathway functionality ., NF-κB is a potent transcription factor regulating expression of numerous genes controlling cell fate decisions , including those regulating proliferation , apoptosis , or transition to the antiviral state ., The accuracy of transmitting information between TNF stimulation and NF-κB activation is therefore crucial for proper fate decisions ., Based on our analysis we found that the pathway can transmit within 30 minutes the information about the increase of TNF concentration , from a very low level to a high value of 8 ng/mL , with the transmission error of 0 . 07 ., Interestingly , when the NF-κB translocation is measured at 4 hours post-stimulation , the transmission error increases to 0 . 
245 ., This finding reflects the presence of a negative feedback that attenuates the strength of the response at longer times and shifts the TNF-high response histogram to the left ( Fig 1D ) ., This causes a greater overlap between the two response histograms after 4 hours ( Fig 1D ) and therefore results in a higher decision error probability , compared to that corresponding to the lower overlap between the response histograms after 30 minutes ( Fig 1B ) ., Consistent with this result , our analysis also indicates a dramatic increase in the decision error in the feedback deficient cells , lacking expression of A20 ., This implies that cells are not able to compensate for the loss of A20 feedback controlling NF-κB activity ., This finding can help account for experimental observations that a loss or mutation of A20 can lead to chronic inflammation and can promote cancer due to the persistent activation of anti-apoptotic genes induced by NF-κB 12 ., The decision is expected to become less uncertain with an increasing input dose ., Our method can help analyze and quantify this effect ., For instance , increasing the TNF dose from 0 . 2 to 0 . 51 ng/mL reduces the decision error probability from 0 . 25 to 0 . 
11 in 30 minute data ., The same behavior is observed for 4 hour data ., The method described here can be expanded to describe the performance of more complex and larger signaling networks , including those with multiple ligands or second messengers as network inputs and several transcription factors involved in certain cellular functions as network outputs ., By analyzing the concentration levels of these transcription factors using the proposed approach , probabilities of various cell fates in response to the input signals can be computed ., We also note that the proposed decision error metrics complement the previously introduced analysis of the information capacity of signaling pathways and networks 2 ., The information capacity is a useful metric to study information transmission in signaling pathways , but it does not address how the information transmitted by a signaling network can be converted into cellular decision making ., Our results show that the introduced metrics of false alarm , miss and overall error rates can on the other hand be used for modeling decision making errors caused by noise and signaling failures ., Overall , our analysis presents a powerful and widely applicable methodology to evaluate the expected fidelity of cellular decision making that can be used to further evaluate the performance of cellular signaling and communication ., This radar example is presented for illustrative purposes to show how statistical signal processing and decision theory concepts and tools are used in an engineering discipline ., It paves the way for understanding the proposed methods and concepts in the context of molecular computational biology and cellular decision making ., In radar systems , the system makes a decision based on samples of the received input waveform xn , where n is the time index ., Based on the N samples x0 , x1 , … , xN−1 , the system should decide between two hypotheses about xn: H0 which indicates that only noise is received , i . 
e . , no object is present , and H1 which represents that signal plus noise is received , i . e . , an object is present ., With wn and A representing noise and constant amplitude signal , respectively , these two hypotheses can be written as
\[ H_0 : x_n = w_n , \qquad H_1 : x_n = A + w_n , \qquad n = 0 , 1 , \ldots , N-1 . \tag{1} \]
To simplify the notation for computing the optimal decision metric , typically it is reasonable to assume both hypotheses have the same probability , i . e . , \( P(H_0) = P(H_1) = 1/2 \) , especially when we do not have a priori information about these probabilities ( the case of non-equal probabilities is discussed in the next section ) ., It can be proved 4 that the optimal decision making system which minimizes the decision error probability is the one that compares probabilities of x under H0 and H1 ., More specifically , let \( p(\mathbf{x}|H_0) \) and \( p(\mathbf{x}|H_1) \) represent conditional probability density functions ( PDFs ) of x under H0 and H1 , respectively ., Then the optimal system decides H1 if \( p(\mathbf{x}|H_1) > p(\mathbf{x}|H_0) \) , otherwise decides H0 ., This simply means that the optimal decision making system , after observing the input data , picks the hypothesis which is more probable ., This decision strategy is also called the maximum likelihood 4 decision , since it chooses the hypothesis with the highest likelihood ., To compute \( p(\mathbf{x}|H_0) \) and \( p(\mathbf{x}|H_1) \) , we need the PDF of noise wn ., Upon using a Gaussian noise model with zero mean and variance \( \sigma^2 \) in ( 1 ) , the univariate conditional PDFs of xn for each n under H0 and H1 can be written as \( p(x_n|H_0) = (2\pi\sigma^2)^{-1/2} \exp[ -x_n^2 / (2\sigma^2) ] \) and \( p(x_n|H_1) = (2\pi\sigma^2)^{-1/2} \exp[ -(x_n - A)^2 / (2\sigma^2) ] \) , respectively ., These two PDFs are graphed in S1 Fig for A = 2 and σ = 1 ., When noise samples are independent , the joint PDF of x0 , x1 , … , xN−1 becomes the product of individual univariate PDFs ., This results in the following expressions for \( p(\mathbf{x}|H_0) \) and \( p(\mathbf{x}|H_1) \)
\[ p(\mathbf{x}|H_0) = p(x_0 , x_1 , \ldots , x_{N-1}|H_0) = (2\pi\sigma^2)^{-N/2} \exp\Big[ -\sum_{n=0}^{N-1} x_n^2 / (2\sigma^2) \Big] , \]
\[ p(\mathbf{x}|H_1) = p(x_0 , x_1 , \ldots , x_{N-1}|H_1) = (2\pi\sigma^2)^{-N/2} \exp\Big[ -\sum_{n=0}^{N-1} (x_n - A)^2 / (2\sigma^2) \Big] . \tag{2} \]
To compare the above two PDFs , we set them equal , to find the optimal decision metric as well as the optimal decision threshold
\[ p(\mathbf{x}|H_0) = p(\mathbf{x}|H_1) \;\Rightarrow\; \exp\Big[ -\sum_{n=0}^{N-1} x_n^2 / (2\sigma^2) \Big] = \exp\Big[ -\sum_{n=0}^{N-1} (x_n - A)^2 / (2\sigma^2) \Big] \]
\[ \Rightarrow\; \sum_{n=0}^{N-1} (x_n - A)^2 - \sum_{n=0}^{N-1} x_n^2 = 0 \;\Rightarrow\; -2A \sum_{n=0}^{N-1} x_n + N A^2 = 0 \;\Rightarrow\; N^{-1} \sum_{n=0}^{N-1} x_n = A/2 . \]
The above equation indicates that the radar system makes an optimal decision , by comparing the average of N observed samples with the optimal threshold A/2 ., It decides H1 , i . e . , an object generating a constant signal with amplitude A is present , if the average of observed samples is greater than A/2
\[ \bar{x} = \frac{x_0 + x_1 + \ldots + x_{N-1}}{N} > \frac{A}{2} , \quad \text{decide } H_1 . \tag{3} \]
Otherwise , the radar decides H0 , i . e . , no object is present and there is only noise ., This optimal radar system still may make mistakes in its decisions due to noise , although the probability of its incorrect decisions is minimized ., To calculate the probability of error in making decisions , first we need to calculate the probability of deciding H1 when H0 is true , i . e . , the false alarm probability , and the probability of deciding H0 when H1 is true , i . e . , the miss probability
\[ P_{FA} = P( \text{deciding } H_1 \,|\, H_0 ) , \qquad P_M = P( \text{deciding } H_0 \,|\, H_1 ) . \]
To compute the above probabilities , we need to determine the PDF of the decision variable \( \bar{x} = N^{-1} \sum_{n=0}^{N-1} x_n \) introduced earlier , under the two hypotheses ., As discussed previously , under H0 , x0 , x1 , … , xN−1 are noise samples , independent and Gaussian with zero mean and variance σ2 ., Using properties of Gaussian random variables , it can be shown that \( \bar{x} \) here is Gaussian with zero mean and variance \( \sigma^2/N \)
\[ p(\bar{x}|H_0) = (2\pi\sigma^2/N)^{-1/2} \exp[ -\bar{x}^2 / (2\sigma^2/N) ] . \]
Under H1 , on the other hand , x0 , x1 , … , xN−1 are signal plus noise samples , independent and Gaussian with mean A and variance σ2 ., Using properties of the sum of Gaussian random variables , it can be shown that now \( \bar{x} \) is Gaussian with mean A and variance \( \sigma^2/N \)
\[ p(\bar{x}|H_1) = (2\pi\sigma^2/N)^{-1/2} \exp[ -(\bar{x} - A)^2 / (2\sigma^2/N) ] . \]
To compute PFA , we note that a false alarm occurs when H0 is true , but according to Eq ( 3 ) we have \( \bar{x} > \bar{x}_{th} \) , where \( \bar{x}_{th} = A/2 \) ., This results in
\[ P_{FA} = P( \bar{x} > \bar{x}_{th} \,|\, H_0 ) = \int_{\bar{x}_{th}}^{\infty} p(\bar{x}|H_0) \, d\bar{x} . \]
Integrating the expression for \( p(\bar{x}|H_0) \) , derived earlier , provides us with the following formula for the false alarm probability
\[ P_{FA} = Q\!\left( \frac{\sqrt{N}\, \bar{x}_{th}}{\sigma} \right) , \]
where Q is a commonly-used Gaussian probability function
\[ Q(\eta) = (2\pi)^{-1/2} \int_{\eta}^{\infty} \exp( -u^2/2 ) \, du . \]
To compute PM , we similarly note that a miss occurs when H1 is true , but we have \( \bar{x} < \bar{x}_{th} \) ., This results in
\[ P_M = P( \bar{x} < \bar{x}_{th} \,|\, H_1 ) = \int_{-\infty}^{\bar{x}_{th}} p(\bar{x}|H_1) \, d\bar{x} . \]
Integration of the expression for \( p(\bar{x}|H_1) \) , derived earlier , gives the following formula for the miss probability in terms of the Q function
\[ P_M = Q\!\left( \frac{\sqrt{N}\, ( A - \bar{x}_{th} )}{\sigma} \right) . \]
The overall probability of error in making decisions by the radar system is a mixture of false alarm and miss probabilities
\[ P_e = P(H_0) P( \text{deciding } H_1 \,|\, H_0 ) + P(H_1) P( \text{deciding } H_0 \,|\, H_1 ) = P(H_0) P_{FA} + P(H_1) P_M . \]
By substituting P ( H0 ) = P ( H1 ) = 1/2 , and the PFA and PM formulas , finally the probability of error can be written as
\[ P_e = \tfrac{1}{2}\, Q\!\left( \frac{\sqrt{N}\, \bar{x}_{th}}{\sigma} \right) + \tfrac{1}{2}\, Q\!\left( \frac{\sqrt{N}\, ( A - \bar{x}_{th} )}{\sigma} \right) . \]
The above formula holds true for the optimal threshold \( \bar{x}_{th} = A/2 \) , as well as other choices for \( \bar{x}_{th} \) ., To understand the importance of the decision threshold and how it affects Pe , the above formula is graphed in S2 Fig versus \( \bar{x}_{th} \) , for A = 2 , σ = 1 and N = 4 ., We observe that the probability of error is minimal when \( \bar{x}_{th} \) is the optimal threshold of A/2 = 1 , and departure of the decision threshold from the optimal value increases Pe ., With the choice of the optimal threshold , \( \bar{x}_{th} = A/2 \) , the above Pe formula simplifies to
\[ P_e = Q\!\left( \frac{\sqrt{N}\, A}{2\sigma} \right) . \]
This formula is graphed in S3 Fig versus the signal-to-noise ratio A/σ , for N = 4 ., We observe that the probability of error in making decisions decreases as the signal-to-noise ratio increases , as expected ., Making a decision on whether the TNF level at the signaling system input is high or low is a binary hypothesis testing problem ., The two hypotheses are H1: TNF is high , and H0: TNF is low ., Due to the signal transduction noise or signaling malfunctions in a cell , it can respond differently to the same input , which may result in incorrect ( unexpected ) cell decisions and responses ., Cells can make two types of incorrect decisions: deciding that TNF is high at the system input whereas in fact it is low ( deciding H1 when H0 is true ) , and missing TNF’s high level when it is actually high ( deciding H0 when H1 is true ) ., These two incorrect decisions can be called false alarm and miss events , respectively ., Let x be the measured quantity based on which the decision is going to be made ., With p ( x|H0 ) and p ( x|H1 ) as the conditional probability density functions ( PDFs ) of x under H0 and H1 , respectively , false alarm and miss probabilities can be written as 4
\[ P_{FA} = \int_{x \,\in\, \text{false alarm region}} p(x|H_0) \, dx , \tag{4} \]
\[ P_M = \int_{x \,\in\, \text{miss region}} p(x|H_1) \, dx , \tag{5} \]
where the false alarm and miss regions will be specified later ., The overall probability of error Pe for making a decision is given by ,
Pe=P ( H0 ) PFA+P ( H1 ) PM ,, ( 6 ), where P ( H0 ) and P ( H1 ) are probabilities of H0 and H1 , respectively ., It can be shown 4 the op | Introduction, Results and discussion, Extensions to more complex settings and broader signaling contexts, Comparison with other approaches, Conclusion, Materials and methods | In this study a new computational method is developed to quantify decision making errors in cells , caused by noise and signaling failures ., Analysis of tumor necrosis factor ( TNF ) signaling pathway which regulates the transcription factor Nuclear Factor κB ( NF-κB ) using this method identifies two types of incorrect cell decisions called false alarm and miss ., These two events represent , respectively , declaring a signal which is not present and missing a signal that does exist ., Using single cell experimental data and the developed method , we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level ., We also show that in the presence of abnormalities in a cell , decision making processes can be significantly affected , compared to a wild-type cell , and the method is able to model and measure such effects ., In the TNF—NF-κB pathway , the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells , caused by cell’s inability to inhibit TNF-induced NF-κB response ., In biological terms , a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input , whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist ., Overall , this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions , and in the presence of transduction noise uncertainty ., Compared to the previously reported pathway 
capacity metric , our results suggest that the introduced decision error metrics characterize signaling failures more accurately ., This is mainly because while capacity is a useful metric to study information transmission in signaling pathways , it does not capture the overlap between TNF-induced noisy response curves . | Cell continuously receives signals from the surrounding environment and is supposed to make correct decisions , i . e . , respond properly to various signals and initiate certain cellular functions ., Modeling and quantification of decision making processes in a cell have emerged as important areas of research in recent years ., Due to signal transduction noise , cells respond differently to similar inputs , which may result in incorrect cell decisions ., Here we develop a novel method for characterization of decision making processes in cells , using statistical signal processing and decision theory concepts ., To demonstrate the utility of the method , we apply it to an important signaling pathway that regulates molecules which play key roles in cell survival ., Our method reveals that cells can make two types of incorrect decisions , namely , false alarm and miss events ., We measure the likelihood of these decisions using single cell experimental data , and demonstrate how these incorrect decisions are related to the signal transduction noise or absence of certain molecular functions ., Using our method , decision making errors in other molecular systems can be modeled ., Such models are useful for understanding and developing treatments for pathological processes such as inflammation , various cancers and autoimmune diseases . 
| decision making, engineering and technology, radar, gene regulation, signal processing, regulatory proteins, dna-binding proteins, social sciences, signaling networks, neuroscience, cognitive psychology, mathematics, probability distribution, cognition, network analysis, remote sensing, transcription factors, computer and information sciences, proteins, gene expression, probability theory, biochemistry, signal transduction, psychology, cell biology, genetics, biology and life sciences, physical sciences, statistical signal processing, cognitive science | null |
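The radar detection derivation in this record lends itself to a short numerical check. The sketch below is an illustration added here, not part of the original paper; the function names (`Q`, `error_probabilities`, `simulate_error`) are ours. It implements the N-sample average detector with threshold A/2 and compares the analytic false alarm, miss and overall error probabilities against a Monte Carlo estimate, for the paper's example values A = 2, σ = 1, N = 4.

```python
import math
import random

def Q(eta):
    # Gaussian tail probability Q(eta) = P(Z > eta) for standard normal Z.
    return 0.5 * math.erfc(eta / math.sqrt(2.0))

def error_probabilities(A, sigma, N, threshold):
    # Analytic false-alarm, miss and overall error probabilities of the
    # N-sample average detector, assuming equal priors P(H0) = P(H1) = 1/2.
    p_fa = Q(math.sqrt(N) * threshold / sigma)
    p_miss = Q(math.sqrt(N) * (A - threshold) / sigma)
    return p_fa, p_miss, 0.5 * (p_fa + p_miss)

def simulate_error(A, sigma, N, threshold, trials=200_000, seed=1):
    # Monte Carlo estimate of the same overall decision error probability.
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        h1_true = rng.random() < 0.5               # draw the true hypothesis
        mean = A if h1_true else 0.0               # signal present only under H1
        xbar = sum(rng.gauss(mean, sigma) for _ in range(N)) / N
        decided_h1 = xbar > threshold              # decision rule of Eq (3)
        if decided_h1 != h1_true:
            errors += 1
    return errors / trials

# Example from the text: A = 2, sigma = 1, N = 4, optimal threshold A/2 = 1,
# for which Pe = Q(sqrt(N) * A / (2 * sigma)) = Q(2) ≈ 0.0228.
p_fa, p_miss, p_e = error_probabilities(A=2.0, sigma=1.0, N=4, threshold=1.0)
```

At the optimal threshold the false alarm and miss probabilities coincide, and any suboptimal threshold (e.g. 0.5) yields a larger overall error, mirroring the behavior the paper describes for S2 Fig.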
1,540 | journal.pcbi.1006855 | 2,019 | An enriched network motif family regulates multistep cell fate transitions with restricted reversibility | Cell fate transition , including differentiation , de-differentiation and trans-differentiation , is a fundamental biological process in which the function of a cell gets specialized , reprogrammed or altered ., The process often involves significant changes of multiple cellular properties , including the morphology , the self-renewal capacity and the potentials to commit to alternative lineages 1 , 2 ., These changes are controlled by the dynamics of interacting transcription factors ( TFs ) and the modulation of chromatin structures , which in turn are governed by complex regulatory networks in the cells 3–5 ., Interestingly , the fate transitions in many systems are achieved by sequential commitments to a series of cellular states with stepwise changes in their transcriptional profile towards the final stage of the program ( Fig 1 ) 6–11 ., The intermediate states between the initial state ( e . g .
the undifferentiated state in the case of cell differentiation ) and the final state may be important for multiple purposes , such as facilitating ‘checkpoints’ that ensure appropriate development of cellular behaviors , or allowing the cells to make correct decisions at the lineage branching points 11–15 ., One example of these stepwise cell lineage transitions is the development of T lymphocytes in the thymus ., The differentiation from multipotent pre-thymic progenitor cells to committed T cells involves multiple cellular states with stepwise changes of their cellular properties and the transcriptional profiles ( Table 1 ) 16–19 ., Several lines of evidence suggest that the transition states at an early phase of the differentiation can serve as stable checkpoints for sequential lineage commitments ., The progress through these intermediate states is accompanied by stepwise loss of their potentials to differentiate into other cell types: pre-thymic progenitor cells can be converted to a few types of cells , including B cells , natural killer ( NK ) cells , dendritic cells ( DCs ) etc . , whereas the multipotency of the intermediate cell types is more limited but not completely lost 20–26 ., In addition , the stability of these intermediate states is substantial because the loss of differentiation signals does not result in de-differentiation of some intermediate states 20 , suggesting restricted reversibility ( or complete irreversibility ) of the multiple transitions ., In addition , the lymphoid progenitor cells need to divide for a certain number of times at an intermediate state before committing to the T cell lineage , and the stable activities of the lineage defining transcriptional program at the intermediate stages may be important for the proliferations 27 ., Finally , the loss of certain transcription factors ( e . g . 
BCL11B ) can lead to the termination of the differentiation at some intermediate states , which is often associated with diseases such as leukemia 18 , 20 , 28 ., This further suggests that the intermediate states are cellular ‘attractors’ between the initial and the final stages of the differentiation ( Fig 1 , bottom panel ) ., Similar stable intermediate states during cell lineage transitions are observed in other systems , such as the epithelial-mesenchymal transition , and the skin development ( Table 1 ) , and those states also serve as regulatory stages for altering cellular properties including self-renewal and migration 10 , 29–37 ., Therefore , the multiple intermediate states are involved in diverse normal development and pathological conditions ., Understanding the regulatory programs for the sequential cell lineage commitments is a key step towards the elucidation of mechanisms underlying various biological processes involving multistep lineage transitions ., Despite the accumulating data and observations on these stepwise lineage commitments , general mechanisms governing these differentiation processes with multiple intermediate cellular states remain unclear ., In this study , we explored the strategies in terms of the transcriptional network design that gives rise to stepwise transitions during cell differentiation ., We first used a generic form of networks containing three interacting TFs to find network motifs that can produce four attractors ( the minimum number of attractors in the examples of T cell development , epithelial-mesenchymal transition and skin development ) with stepwise changes of transcriptional factor levels ., We found two types of network motifs , both involving interconnections of positive feedback loops , which can generate the four-attractor systems ., These motifs constitute a large family of gene regulatory networks ., We found that there is an enrichment of these motifs in a network controlling the early T cell 
development ., We built a specific model using known interactions among key transcription factors in developing T cells , and the model shows that the transcriptional network governs multistep and irreversible transitions in the development process ., To investigate the stochastic dynamics for early T cell development , we mapped out the quasi-energy landscape for the early T cell development ., This landscape characterizes the four attractors representing four stages of early T cell development quantitatively ., In addition , by calculating the minimum action paths ( MAPs ) between different attractors , we quantified the dynamics of the key factors in response to Notch signal with fluctuations , which are in good agreement with experimental observations ., Finally , we identified the critical factors influencing T cell development by global sensitivity analysis based on the landscape topography ., Overall , our model for early T cell development elucidates the mechanisms underlying the stepwise loss of multipotency and multiple stable checkpoints at various stages of differentiation ., The network topologies for multiple attractors found in this study and our motif discovery strategy combined with the landscape methodology can be useful for analyzing a wide range of cell differentiation systems with multiple intermediate states ., To find transcriptional network topologies that can generate multiple intermediate states during cell fate transition , we first performed random parameter sampling with a network family containing up to 3 nodes ( Fig 2A ) ., In this framework of network topology , each node represents a transcription factor ( TF ) that can potentially influence the transcription levels of other two TFs and itself ., Topology searching with a 3-node network was used for motif discovery for various performance objectives in previous studies 38 , 39 ., We performed exhaustive search for topologies with up to 6 regulations from a total of 9 regulations of 
the network family , and constructed a mathematical model for each topology ( see Methods for details ) ., For each model , we performed random sampling in the parameter space from uniformly distributed values ( S1 Table ) ., We selected topologies containing at least one parameter set that is able to generate four attractors with stepwise changes of transcriptional levels ., We define the system with four attractors with the stepwise changes of transcriptional levels as the scenario in which there are four stable steady states and they can be consistently ordered by the concentrations of any pairs of TFs ., In other words , one TF always monotonically increases or decreases with another TF in these four states , and we term these states ‘ordered’ attractors in this paper ., Among the 2114 network topologies that we searched , we found 216 topologies that can produce such behavior ., In addition , we found 417 topologies that can only produce four unordered steady states ( TF concentrations are non-monotonically correlated among the states ) ( S11 Fig , S12 Fig ) ., To visualize the relationships among these topologies , we constructed a complexity atlas ( Fig 2B ) , in which the nodes represent the network structures that gave rise to four attractors , and the edges connect pairs of topologies that differ by a single regulation ( addition or removal of a transcriptional interaction ) 40 ., We define the minimum topologies as those of which the reduction of complexity , or the removal of any regulation from the network , will abolish its capability to generate four attractors ( solid nodes in Fig 2B and examples in Fig 2C ) ., We found 29 such minimum topologies which represent the non-redundant structures for producing the four-attractor system ., Interestingly , all of the 216 topologies obtained from our search contain three distinct positive feedback loops ( including double-negative feedback loops ) , and they can be categorized into two types of motifs ( Fig 
2B , bottom panel ) ., The Type I motif contains three positive feedback loops that are closed at a single TF ( red nodes and edges in Fig 2B ) ., The Type II motif contains three connected positive feedback loops , two of which do not share any TF but are connected via the third loop ( blue nodes and edges in Fig 2B ) ., There is a remarkable diversity of each of the motif types because the interconnected positive feedback loops can share multiple TFs ( S1 and S2 Figs ) ., Based on the complexity atlas ( Fig 2B ) , we found that Type II motifs contain 4–6 regulations , and Type I motifs contain 5–6 regulations ., Some of the networks with 6 regulations contain subnetworks of both Type I and Type II motifs ( Hybrid type , green nodes ) ., The four attractors in the space of two TFs exhibit a variety of patterns of nonlinear monotonic correlations ( Fig 2C , S3 Fig ) , which are governed by intersections of highly nonlinear nullclines in the state space containing the two TFs ( Fig 2D , S1 and S2 Figs ) ., The definitions of various types of motifs are listed in Table 2 , and the statistics of the topologies discovered are summarized in Table 3 ( also see S11 Fig for an illustration ) ., Overall , this motif family represents a large number of networks that can produce a common type of behaviors: multiple stable intermediate states in terms the transcriptional activity ., We next asked whether there is a difference between Type I and Type II motifs in terms of their ability to generate systems with four ordered attractors ., We found that with the same number of sampled parameter sets , Type II motifs have greater fractions of parameter sets that give rise to four ordered attractors than Type I motifs do ( Fig 3A and 3B , p-value < 0 . 
0001 , Mann-Whitney U test ) ., This suggests that Type II motifs may be more robust for governing the four ordered attractors ., However , among the 15 Type I and 14 Type II minimum motifs , all Type II motifs are able to generate four unordered attractors ( some TFs are not monotonically correlated; see S12 Fig ) , whereas there is no Type I motif that has any parameter set that gives rise to four unordered attractors ( Fig 3C ) ., This suggests that as compared to Type II motifs , Type I motifs have higher specificity in generating four ordered attractors , which is more relevant to the stepwise cell fate transitions than the unordered ones ., Moreover , we observed that the inter-attractor distances between neighboring attractors in the gene expression space were generally more variable with Type I motifs than those with Type II motifs ( Fig 3D , magenta boxes ) ., In particular , among the three inter-attractor distances for each model , Type I motifs generated smaller minimum distance than Type II motifs did ( Fig 3D , orange boxes . p-value < 0 .
0001 , Mann-Whitney U test ) ., We did not observe any significant difference between Type I and Type II motifs in terms of the stabilities of the attractors and the kinetic paths that they generate ( S13 Fig See Methods for calculation of quasi-energy landscape and kinetic path ) ., In addition to the effects of motif types , we also asked whether the fractions of positive or negative regulations in the network can influence the function of generating four-attractor systems ., We found that the fraction of positive regulations has a weak but significant correlation with the fraction of successful parameter sets generating four-attractor systems ( S14A and S14B Fig ) ., Although negative regulations are slightly less favorable , they might be important to ensure that the levels of some TFs are inversely correlated during multistep cell fate transitions , which is necessary for having at least one highly expressed TFs in each of the initial and the final cell states ( S14C Fig ) ., In summary , we found two types of network motifs that generate four attractors with stepwise changes of the transcriptional profile ., Two of these attractors represent the multiple intermediate states observed in various biological systems ., This exploratory analysis elicits several interesting questions: what are the biological examples of such network motifs ?, Can the conclusions with respect to the two types of motifs be generalized to networks with more than three TFs ?, Is there any advantage of combining both types of motifs ?, How are the transitions among these states triggered deterministically and stochastically ?, To provide insights into these questions in a more biologically meaningful context , we will use a specific biological system to describe more detailed analysis of these motifs and their underlying gene regulatory networks in the following sections ., We asked whether the motifs that we discovered can be found in any known transcriptional network that potentially 
controls multistep cell differentiation. We used early T cell differentiation in the thymus as an example to address this question. The differentiation from multipotent lymphoid progenitor cells to unipotent early T cells involves multiple stages at which the cells have differential potentials to commit to non-T lineages, as well as other cellular properties such as proliferation rates. At the early phase of this process, four developmental stages of T cells (ETP/DN1, DN2a, DN2b, DN3) were identified experimentally, and the progression through these stages is controlled by a myriad of transcription factors, including four core factors: TCF-1, PU.1, GATA3 and BCL11B. These TFs form a complex network among themselves (see Fig 4A and supporting experimental observations in S3 Table), and stepwise changes in the levels of these TFs were observed in the four developmental stages of T cells 20, 28. The interactions involving these core TFs were shown to be critical for the irreversible commitment to the T cell lineage by forming a bistable switch 41. Among these factors, the PU.1 level decreases as the cells commit to later stages, whereas the levels of the other three factors increase in this process. It is unclear, however, whether this transcriptional network can serve as a regulatory unit that governs the multistep nature of T cell differentiation. We noticed that this T cell transcriptional network contains the motifs that we found in our analysis using the generic form of networks; we therefore hypothesized that models based on this network can have four attractors with sequential changes of the four TFs. Indeed, using random sampling we were able to find parameter sets that give rise to four-attractor systems similar to what we obtained with the generic 3-node framework. To find the functional components that generate this behavior, we analyzed the subnetworks of the complex T cell regulatory network 42. We removed regulations from the network systematically, and we found that, out of the 1553 non-redundant topologies (2047 subnetworks), there are 568 topologies (701 subnetworks) that can generate four attractors with stepwise changes of the TFs (Fig 4B). We used a complexity atlas to visualize the relationships among these subnetworks (Fig 4C). We found that the network can be reduced to one of 66 minimum topologies (97 minimum subnetworks) that retain the four-attractor property (solid nodes in Fig 4C). Notably, these networks can be classified into the two types of motifs described earlier (Fig 2B). Similar to the networks that we obtained through the generic framework, the two types of minimum motifs have 4–6 regulations. Subnetworks with both types of motifs (green nodes and edges) start to appear when the number of regulations reaches six. The numbers of motifs and subnetworks obtained for the generic framework and the T cell model are summarized in Table 3. We next quantified the enrichment of the two motif families in the early T cell transcription network. We
first generated random networks by perturbing the existing regulations in the network model and computed empirical p-values for observing the numbers of different types of network motifs. The T cell network contains a large number of positive feedback loops and the two types of motifs that we described earlier (Fig 5, top panel). As expected, the network is significantly enriched with positive feedback loops in general (Fig 5, middle panel, red bars). However, the enrichments of Type I motifs and of combinations of Type I and Type II motifs are even more significant than that of single positive feedback loops (Fig 5, middle panel, red bars). To exclude the possibility that this differential significance was observed because our way of generating random networks gives low p-values (<10−4) in general, we used another method to generate random networks with an augmented number of regulations (Fig 5, middle panel, blue bars): each pair of TFs was assigned a pair of random regulations (positive, negative or none). Consistent with the previous method, the T cell transcriptional network is enriched with positive feedback loops overall, but the enrichment is more significant for Type I motifs and for the combination of Type I and Type II motifs. Interestingly, motifs that are similar to the Type I motif but have higher complexity (more positive feedback loops) do not show more significant enrichment than the Type I motif does (S15 Fig). In addition, we found that networks controlling switch-like behaviors, but not multistep cell fate transitions, are not enriched with Type I or Type II motifs 43–46. These results suggest the possibility that the network has evolved to reach more complex performance objectives than those enabled by simple positive feedback loops alone. Since the minimum motifs alone can generate the four-attractor system, we asked whether the combination of these motifs enhances the ability of the network to produce the system. We therefore compared a subnetwork containing only one minimum Type I motif with another one containing multiple such motifs in terms of the performance in generating a particular four-attractor system (Fig 6A; see Methods and S1 Text for details). We found that the subnetwork with multiple Type I motifs (Network 1) outperforms the one with only one motif (Network 2) in that Network 1 gives a better fit to a hypothetical four-attractor system (Fig 6A–6C). In this hypothetical ‘target’ system, the four attractors are assumed to be determined by the dynamics of PU.1 with multiple feedback loops. The assumed degradation (Fig 6B, gray curve) and production rate functions of PU.1 (Fig 6B, green curve) are specified. The curves of these two functions have 7 intersections, four of which represent attractors. The optimized production function obtained from Network 1 (Fig 6B, purple curve) has more robust intersections with the degradation curve than the one obtained from Network 2 (Fig 6B, red curve). This difference was observed for production functions of these two categories from multiple runs of optimization (Fig 6C). This suggests the advantage of combining multiple motifs with similar functions to enhance overall performance. We next compared the 66 minimum motifs (Fig 4C, bottom solid nodes) and the full topology (Fig 4C, top node) in terms of the parameter values that gave rise to four-attractor systems. We found that the full model admits more parameter sets with low nonlinearity of the regulations than the minimum motifs do (S16A Fig), and the parameter values are distributed over larger regions for the full model than for the minimum motifs (S16B Fig). We next asked whether topologies that contain both Type I and Type II motifs have greater probabilities of generating the four-attractor system than topologies with only one type of motif do. When we
explored the parameter space randomly for each topology with a fixed number of samples, a larger number of parameter sets that can generate the four-attractor system was found for topologies containing both motifs than for those containing either Type I or Type II motifs only (S5 Fig and Fig 6D). This suggests that the combination of both motifs might be a robust strategy for generating the four-attractor system. This pattern was observed for all the topologies in the complexity atlas (S6 Fig) as well as for those with the same degree of complexity (Fig 6D; networks with 7 regulations were chosen because they have comparable fractions of the three types of motifs). In summary, we found that the core transcriptional network controlling early T cell differentiation is enriched with Type I and Type II network motifs. The network composed of these two types of motifs governs a dynamical system containing four attractors, corresponding to four known stages in early T cell development. Networks with both types of motifs, and with greater numbers of such motifs, have a more robust capability of generating four-attractor systems than networks with fewer types or numbers of motifs do. We next characterized the dynamical features of the four-attractor system of the T cell development model in response to differentiation signals. For this and subsequent analyses, we focused on a model describing the network topology shown in Fig 4A (the full model). We first performed bifurcation analysis of the system with respect to changes in Notch signaling (Fig 7A). With increasing Notch signal, the system undergoes three saddle-node bifurcations, at which the stability of the preceding cellular states is lost (Fig 7A, black arrows). These bifurcation points therefore represent the cell state transitions from one stage to the next. The structure of the bifurcation diagram shows a remarkably robust multistep commitment program governed by the T cell transcription network: the commitment to each stage of the program has restricted reversibility, in that attenuation or withdrawal of Notch signaling does not result in de-differentiation of the developing T cells (i.e., the return of the transcription profile to earlier stages that may have greater multipotency). It was previously shown that the commitment from DN2a to DN2b is an irreversible process with respect to Notch signaling, and this transition eliminates the developing T cells’ potential to be diverted to any other lineages when Notch signaling is abolished 20, 41. However, simple toggle-switch models do not explain the observation that the multipotency of the early T cells is lost in a stepwise manner. For example, cells at the ETP stage can differentiate into B cells, macrophages, dendritic cells (DCs), granulocytes, natural killer (NK) cells and innate lymphoid cell subset 2 (ILC2), whereas the potential to commit to many of these lineages is blocked even in the absence of Notch signaling at the DN2a stage, at which the cells can only differentiate into NK cells and ILC2 20. Therefore, the stepwise, irreversible transcriptional transitions revealed by our model are consistent with the experimental observations with respect to the stepwise loss of multipotency. Although the absence of the Notch signal does not allow the reversal of lineage progression, it was previously shown that the absence of BCL11B in lymphoid progenitor cells blocks their ability to progress to the DN2b stage, whereas the Cre-controlled knockout of Bcl11b in committed T cells (e.g., DN3 cells) reverts their transcriptional profile to DN2a-like cells 28. Upon blocking the production of BCL11B in our model, we observed the loss of the DN2b and DN3 attractors, and the DN2a state is the only stable stage even in the presence of strong Notch signaling (Fig 7B). As a result, increasing Notch signaling triggers only one saddle-node bifurcation, representing the transition from ETP to the DN2a state (Fig 7B, black arrow), whereas decreasing BCL11B production triggers the transition back to DN2a instead of ETP (Fig 7C). These results are in agreement with previous experimental findings 28, and they further support the importance of the multistep differentiation system revealed by our model. The bifurcation analysis shows how lineage progression is influenced by stably increasing or decreasing Notch signal strengths. We next asked how the duration of the Notch signal may control the multistep lineage transition. By inducing differentiation with varying durations of Notch signaling, we found that cells experiencing transient Notch signals may only commit to intermediate stages of differentiation (Fig 8A). In addition, the system is able to integrate information on signal intensity and duration to make decisions on lineage progression. These results suggest that the multistep lineage transition can be triggered by increasing signal strength, increasing signal duration, or a combination of both types of signal dynamics. Earlier experimental studies have shown that transient Notch signaling can irreversibly drive the cells to an intermediate but committed stage with a definitive T cell identity (DN2b) 28, 41, 47. This is in agreement with our results, and our model further suggests that the commitment to other intermediate states is also irreversible with respect to lineage progression (note that this irreversibility does not refer to the establishment of T
cell identity). One possible advantage of the multistable system is its robustness of response in the face of fluctuating signals. We therefore performed numerical simulations of the dynamical system under increasing Notch signaling with significant fluctuations. Under this condition, transient reductions of Notch signaling halted the progression of lineage commitment but did not trigger de-differentiation (Fig 8B). Our model suggests that the design of the transcriptional network allows the system to stop at intermediate stages before proceeding to the next ones. This strategy has several potential physiological benefits: 1) it protects cell lineage progression against sporadic fluctuations of Notch signaling; 2) it facilitates ‘checkpoints’ before lineage commitment in the middle of the developmental process; and 3) it allows the stable storage of differentiation intermediates, which can differentiate into mature T cells rapidly when there is an urgent need for new T cells with a diverse T cell receptor repertoire. With the deterministic modeling and bifurcation approaches, we described the local stability of the multistable T cell model. However, the global stability is less clear from the bifurcation analysis alone. In addition, it is important to consider the stochastic dynamics of the T cell development model because intracellular noise may play crucial roles in cellular behaviors 48, 49. The Waddington landscape has been proposed as a metaphor to explain the development and differentiation of cells 50. Recently, the Waddington epigenetic landscape for biological networks has been quantified and employed to investigate the stochastic dynamics of stem cell development and cancer 51–57. Following a self-consistent approximation approach (see Methods), we calculated the steady-state probability distribution and then obtained the energy landscape for the model of early T cell development (full model in Fig 4A)
. For visualization, we selected two TFs (PU.1 and TCF-1) as the coordinates and projected the 4-dimensional landscape onto a two-dimensional space by integrating over the other two TF variables (Fig 9A). Here TCF-1 is a representative T cell lineage TF, and PU.1 is a TF for alternative cell fates. We also displayed the landscape in a four-dimensional figure, where the three axes represent three TFs (TCF-1, BCL11B and PU.1) and the color represents the energy U (Fig 9B). Note that our major conclusions do not depend on the specific choice of coordinates (see S7 and S8 Figs for landscapes with PU.1/BCL11B and PU.1/GATA3 as the coordinates). In the case without Notch signal (N = 0), four stable cell states emerge on the landscape for the T cell developmental system (Fig 9). On the landscape surface, the blue region represents lower potential or higher probability, and the yellow region represents higher potential or lower probability. The four basins of attraction on the landscape represent four different cell states characterized by different TF expression patterns in the 4-dimensional state space (Fig 9A and 9B provide two types of projections of the whole 4-dimensional landscape). These states correspond, respectively, to the ETP/DN1 state (high PU.1/low TCF-1/low BCL11B/low GATA3 expression), the DN3 state (low PU.1/high TCF-1/high BCL11B/high GATA3 expression), and two intermediate states (DN2a and DN2b, with intermediate expression of the four TFs). The existence of four stable attractors is consistent with experiments 16–19. As the Notch signal (N) increases, the landscape changes from quadristable (four stable states coexisting), to tristable (DN2a, DN2b and DN3), to bistable (DN2b and DN3), and finally to a monostable DN3 state (S9 Fig). These results provide a straightforward explanation for the irreversibility observed in experiments for the stepwise T cell lineage commitment. To check whether our modelling results match experimental data quantitatively, we acquired two sets of gene expression data of the four core TFs for T cell development from previous publications 17, 47, and mapped the normalized values (see Methods) onto the landscape (Fig 9A and 9B, S7 and S8 Figs). Here, the golden balls represent the four steady states (characterizing the four stages of T cell development) from the models, the red balls represent the gene expression data points (Data1) from Zhang et al. 47, and the green balls represent the gene expression data points (Data2) from Mingueneau et al. 17. We found that these gene expression data agree well with our landscape models in the sense that each data point is positioned almost within the corresponding basin (Fig 9A and 9B, S7 and S8 Figs). We found that the landscapes give a better fit to Data2 (green points), since each green data point can be well positioned in one of the four basins corresponding to the four stages of T cell development. In fact, the two sets of data points are not very close to each other or to the steady states (golden points) from the models. This is reasonable because the two sets of experimental data were measured separately, probably under different conditions, and such data usually represent the average of multiple measurements from different samples. Also, gene expression fluctuations are common in biological systems. Therefore, our landscape pictures provide a natural way to reconcile the two different experimental data sets, i.e., the gene expression data do not have to lie at the positions of the steady states. Instead, the gene expression data for each individual stage could be somewhere around the corresponding basin because of the fluctuations. This reflects the original spirit of the classic Waddington developmental landscape. To examine the transitions among individual cell types, we calculated kinetic transition paths by minimizing the transition actions between attractors | Introduction, Results, Discussion, Methods | Multistep cell fate transitions with stepwise changes of transcriptional profiles are common to many developmental, regenerative and pathological processes. The multiple intermediate cell lineage states can serve as differentiation checkpoints or branching points for channeling cells to more than one lineage. However, mechanisms underlying these transitions remain elusive. Here, we explored gene regulatory circuits that can generate multiple intermediate cellular states with stepwise modulations of transcription factors. With
unbiased searching in the network topology space, we found that a motif family containing a large set of networks can give rise to four attractors with stepwise regulations of transcription factors, which limit the reversibility of three consecutive steps of the lineage transition. We found that there is an enrichment of these motifs in a transcriptional network controlling early T cell development, and a mathematical model based on this network recapitulates the multistep transitions in early T cell lineage commitment. By calculating the energy landscape and minimum action paths for the T cell model, we quantified the stochastic dynamics of the critical factors in response to the differentiation signal with fluctuations. These results are in good agreement with experimental observations, and they suggest stable characteristics of the intermediate states in T cell differentiation. These dynamical features may help to direct the cells to the correct lineages during development. Our findings provide general design principles for multistep cell lineage transitions and new insights into early T cell development. The network motifs, encompassing a large family of topologies, can be useful for analyzing diverse biological systems with multistep transitions. | The functions of cells are dynamically controlled in many biological processes including development, regeneration and disease progression. Cell fate transition, or the switch of cellular functions, often involves multiple steps. The intermediate stages of the transition provide biological systems with opportunities to regulate the transitions in a precise manner. These transitions are controlled by key regulatory genes whose expression shows stepwise patterns, but how the interactions of these genes determine the multistep processes was unclear. Here, we present a comprehensive analysis of the design principles of gene circuits that govern multistep cell fate transitions. We found a large network family with common structural features that can generate systems with the ability to control three consecutive steps of the transition. We found that this type of network is enriched in a gene circuit controlling the development of T lymphocytes, a crucial type of immune cell. We performed mathematical modeling using this gene circuit, and we recapitulated the stepwise and irreversible loss of stem cell properties of the developing T lymphocytes. Our findings can be useful for analyzing a wide range of gene regulatory networks controlling multistep cell fate transitions. | blood cells, medicine and health sciences, immune cells, gene regulation, immunology, notch signaling, cell differentiation, developmental biology, mathematics, network analysis, computer and information sciences, white blood cells, transcriptional control, network motifs, animal cells, gene expression, t cells, signal transduction, cell biology, genetics, biology and life sciences, cellular types, physical sciences, topology, cell signaling | null |
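The record above repeatedly invokes a four-attractor system in which a production curve crosses a linear degradation line seven times, four of the intersections being stable (the PU.1 example of its Fig 6B). The following is a minimal, hypothetical sketch of that idea only — not the paper's four-TF model — using a single factor with three staggered auto-activation (positive feedback) terms; all parameter values are illustrative assumptions:

```python
import numpy as np

def hill(x, K, n=20.0):
    """Steep activating Hill function (sharp threshold at K)."""
    return x**n / (K**n + x**n)

def dxdt(x):
    # Production = basal rate + three auto-activation terms with staggered
    # thresholds (three interlinked positive feedback loops on one factor);
    # degradation is first-order. The production curve then crosses the
    # degradation line y = x seven times: four stable, three unstable points.
    production = 0.5 + hill(x, 1.0) + hill(x, 2.0) + hill(x, 3.0)
    return production - x

# Integrate from a grid of initial conditions and collect distinct end states.
attractors = set()
for x0 in np.linspace(0.05, 4.45, 45):
    x = x0
    for _ in range(10000):   # forward Euler, dt = 0.01 (T = 100)
        x += 0.01 * dxdt(x)
    attractors.add(round(x, 1))

print(sorted(attractors))
```

With these illustrative parameters the simulation recovers four distinct stable levels (roughly 0.5, 1.5, 2.5 and 3.4), mirroring the stepwise intermediate states discussed in the text; shifting the thresholds or the degradation rate collapses intermediate basins, analogous to the saddle-node bifurcations described for increasing Notch signal.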
27 | journal.pcbi.1005444 | 2,017 | redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models | Stoichiometric models have been used to study the physiology of organisms since the 1980s 1–3, and with the accumulation of knowledge and progressing techniques for genome annotation, these models have evolved into genome-scale metabolic reconstructions (GEMs), which encapsulate all known biochemistry that takes place in an organism through gene-to-protein-to-reaction (GPR) associations 4. Since the first genome-scale models were developed 5, 6, the number of annotated genomes and the corresponding genome-scale metabolic reconstructions has increased tremendously 7–9. With the increasing popularity of GEMs, different techniques to analyse these networks have been proposed 10, 11. Flux balance analysis (FBA), a constraint-based method (CBM), enables many forms of analysis based solely on knowledge of network stoichiometry and the incorporation of various constraints, such as environmental and physicochemical constraints 12. FBA has been further expanded by other methods such as thermodynamics-based flux analysis (TFA) 13–16 and others 17, 18 for the integration of available thermodynamics data with GEMs. TFA utilizes information about the properties of reaction thermodynamics and integrates them into FBA. Such properties can now be estimated by the group contribution method 19–21 and by high-level quantum chemical calculations 22. Metabolic networks are valuable scaffolds that can also be used to integrate other types of data, such as metabolic 23, 24 and regulatory and signalling 25–27 data, that can elucidate the actual state of the metabolic network in vivo. However, FBA, TFA and other steady-state approaches cannot capture the dynamic response of metabolic networks, which requires the integration of detailed enzyme kinetics and regulation 28. Hatzimanikatis and colleagues have developed a framework that utilizes FBA and TFA and generates kinetic models without sacrificing stoichiometric, thermodynamic and physiological constraints 29–31. Recently, another approach has been proposed to integrate kinetics into large-scale metabolic networks 32. As the quality and the size of the models increase with better annotation, the complexity of their mathematical representations also increases. Hatzimanikatis and colleagues 33 observed that the majority of studies and applications using metabolic models have still revolved around central metabolism and around “reduced” models instead of genome-scale models, indicating that the full potential of GEMs remains largely untapped 34–38. These reduced models have the advantage of small size, as they are built in a top-down manner, but they lack the quality of bottom-up models, since they have been reduced ad hoc, with different criteria and aims that have not been consistently and explicitly justified 39–41. An algorithmic approach called NetworkReducer 42 has recently been proposed, following a top-down reduction procedure. The main purpose of this approach is to preserve selected, so-called “protected” metabolites and reactions, while iteratively deleting reactions whose removal does not prevent the activity of the protected reactions. This algorithm has been further extended 43 to compute the minimum size of subnetworks that still preserve the selected reactions. In this study, we have developed redGEM, a systematic model reduction framework for constructing core metabolic models from GEMs. Herewith, we propose an approach that focuses on selected metabolic subsystems and yet retains the linkages and knowledge captured in genome-scale reconstructions. redGEM follows a bottom-up approach that allows us to handle the complexity and to yield comprehensive insights in connecting the metabolic model to actual cellular physiology. redGEM can be tailored to generate minimal models with conserved functions. However, our approach is not strictly focused on reducing the stoichiometry to generate a highly condensed network, but also aims to preserve the constitutive characteristics of metabolic networks. In redGEM, we use as inputs: (i) a GEM; (ii) metabolic subsystems that are of interest for the physiology under study; (iii) information about utilized substrates and medium components; and (iv) available physiological data (Fig 1). After a series of computational procedures, we generate a reduced model that is consistent with the original GEM in terms of flux profiles, essential genes and reactions, thermodynamically feasible ranges of metabolites, and ranges of Gibbs free energy of reactions. We applied redGEM to the latest GEM of E. coli, iJO1366 44, under both aerobic and anaerobic conditions with glucose and other possible carbon sources, and generated a family of reduced E. coli iJO1366 models. The wild-type biomass reaction of the iJO1366 model contains 102 biomass building blocks (BBBs). The size and complexity of this composition make it necessary to develop techniques that keep the information stored in the GEM for biosynthesis and yet reduce the size of the network significantly. Methods such as graph-search algorithms can be used to identify biosynthetic routes between two metabolites in metabolic networks 45, 46. However, these graph-theory-based approaches cannot be used for our purposes due to two main limitations: (i) they do not make use of, nor obey, mass conservation, hence the pathways they generate are not guaranteed to carry flux in the metabolic network or to be elementally balanced; and (ii) they cannot manage pathways that are not linear, such as branched pathways. To overcome these limitations, we used lumpGEM 47, an in-built tool, which identifies subnetworks that can produce biomass building blocks
starting from precursor metabolites ., These precursors are provided by redGEM through the systematically generated core network based on degree of connection parameter , D . Each subnetwork is then transformed into a lumped reaction and inserted in the reduced model ., lumpGEM forces mass conservation constraints during optimization to identify subnetworks , thus preventing the generation of lumped reactions , which cannot carry flux in the metabolic networks ., As an example , for D = 1 , by minimizing the number of non-core reactions In GEM , lumpGEM generated a 17 reactions subnetwork to synthesize histidine from core carbon metabolites ( Fig 3 ) ., Histidine is synthesized from ribose-5-phosphate , a precursor from pentose phosphate pathway ., The linear pathway from this core metabolite to histidine is composed of 10 steps ., However , due to the mass balance constraint , two metabolites , 1- ( 5-Phosphoribosyl ) -5-amino-4-imidazolecarboxamide and L-Glutamine cannot be balanced in a network that is composed of core reactions and the linear pathway from ribose-5-phophate to histidine ., These metabolites are balanced in the network by other non-core reactions ., Hence , the generated sets of reactions are not linear routes from precursor metabolites to biomass building blocks , but branched , balanced subnetworks ( for formulation of lumpGEM , see Material and Methods ) ., Using lumpGEM , we replicated all the biosynthetic pathways in databases such as EcoCyc 48 , either as a part of subnetworks or the exact pathway ., In addition , we identified subnetworks that can qualify as alternative biosynthetic pathways ., E . 
coli is well-known to be robust against deletions by having many duplicate genes and alternate pathways 49 ., Some of these routes may not be active due to energetic or regulatory constraints , but using lumpGEM we can map these possible alternate pathways completely and also derive different biosynthetic lumped reactions ., The introduction of such lumped biosynthetic reactions simplifies the core models considerably and allows the use of these models in important computational analysis methods such as dynamic FBA 50 , extreme pathway analysis 51 , 52 and elementary flux modes 53 , 54 , as well as for TFA formulations and kinetic modeling ., For the D = 1 core network , lumpGEM generated 1216 subnetworks and 254 unique lumped reactions for 79 biomass building blocks in total for the aerobic and anaerobic cases ., The remaining BBBs of the total 102 can be produced within the D = 1 core network ., For some biomass building blocks , it is possible that all the alternative Smin ( minimal subnetwork size ) subnetworks generated under aerobic conditions use molecular oxygen and thus cannot carry flux under anaerobic conditions ., This necessitates the generation of lumped reactions without any oxygen in the media ., For Smin , lumpGEM generated only 4 new lumped reactions for the anaerobic case , for 3 metabolites , namely heme O , lipoate ( protein bound ) and protoheme ., All the other lumped reactions generated for the anaerobic case are a subset of the 250 lumped reactions ( S2 Table ) for aerobic conditions ., In the subsequent studies , we used all lumped reactions in order to allow for studies under different oxygen limitations without changing the model structure ., The core model can be found in the supplementary material ( S1 File ) ., Reduced models have been used to understand and investigate cellular physiology for many years ., Before the emergence of genome-scale models ( GEMs ) , different groups with different aims built reduced models for their studies with
a top-down approach ., Conversely , GEMs provide the platform to understand all the metabolic capabilities of organisms , since GEMs encapsulate all the known biochemistry that occurs in cells ., However , the complexity of GEMs makes their use impractical for different applications , such as kinetic modeling or elementary flux modes ( EFMs ) ., The need to focus on certain parts of these networks without sacrificing the detailed stoichiometric information stored in GEMs makes it crucial to develop representative reduced models that can mimic the GEM characteristics ., Within this scope , we developed redGEM , an algorithm that uses as inputs a genome-scale metabolic model and defined metabolic subsystems , and derives a set of reduced core metabolic models ., This family of core models includes all the fluxes across the subsystems of interest that are identified through network expansion , thus capturing the detailed stoichiometric information stored in their bottom-up-built parent GEM ., Following the identification of the core , redGEM uses lumpGEM , an algorithm that captures the minimal-sized subnetworks that are capable of producing target compounds from a set of defined core metabolites ., lumpGEM expands these core networks to the biomass building blocks through elementally balanced lumped reactions ., Moreover , redGEM employs lumpGEM to include alternative lumped reactions for the synthesis of biomass building blocks , thus accounting for alternative synthesis routes that can be active under different physiological conditions ., redGEM builds reduced models ( rGEMs ) that are consistent with their parent GEM in terms of flux and concentration variability and essential genes/reactions ., These reduced models can be used in many different areas , such as kinetic modeling , MFA studies , Elementary Flux Modes ( EFM ) and FBA/TFA ., The redGEM algorithm is applicable to any compartmentalized or non-compartmentalized genome-scale model , since its procedure does
not depend on any specific organism ., As a demonstration , we have applied the redGEM algorithm to different organisms , namely P . putida , S . cerevisiae , Chinese Hamster Ovary ( CHO ) cells and human metabolism ., For instance , the redGEM algorithm has generated core networks of sizes between 168 metabolites/164 reactions and 360 metabolites/414 reactions for the iMM904 58 GEM reconstructed for S . cerevisiae , with the degree-of-connection parameter D varied from 1 to 6 ., The generated reduced model irMM904 with D = 1 has the same biomass yield as the parent GEM , 0 . 29/hr under 10 mmol/gDWhr glucose uptake ., Similar to the E . coli case , the flux and concentration variability and gene essentiality characteristics of the rGEM are in agreement with the GEM counterparts ( Ataman et al . , manuscript in preparation ) ., Moreover , reduced models are promising platforms for the comparison of central carbon ( or any other ) metabolism of different species ., This approach can help us to better investigate the metabolic capabilities and limitations of organisms and to identify the sources of physiological differences across different species ., In redGEM , we introduce and use the following definitions: We can also generate the core network from the chosen subsystems using the minimum distance between the chosen subsystems and report the connecting reactions and metabolites ., In this case , the degree of connection D is the minimum distance between Si and Sj ., redGEM uses the following inputs and parameters: The central workflow of redGEM involves 4 steps: The core carbon network is defined as all the reactions and metabolites in MS , MijD and MiiD ( all i , j pairs ) , RSi , RijD , RiiD ( all i , j pairs ) , and RT ( reactions in which only cofactor pairs , small metabolites and inorganics participate ) ., We used the lumpGEM algorithm to generate pathways for all biomass building blocks ( BBBs ) as they are defined in the GEM ., lumpGEM identifies the smallest subnetworks ( Smin )
that are stoichiometrically balanced and capable of synthesizing a biomass building block from defined core metabolites ., Moreover , it identifies alternative subnetworks for the synthesis of the same biomass building block ., Finally , lumpGEM generates overall lumped reactions , wherein the costs of core metabolites , cofactors , small metabolites and inorganics are determined for the biosynthesis ., redGEM defined the core network by the algorithm above , and then we generated all minimum-sized subnetworks ( Smin ) for each BBB ., Then lumpGEM calculated the unique lumped reactions for all the BBBs , and we used these lumped reactions for further validation and other analysis ., lumpGEM takes the following steps to build elementally balanced lumped reactions for the biomass building blocks ., In the workflow , lumpGEM solves: maximize ∑_{i = 1}^{#RnC} z_{rxn , i} such that S · v = 0 and v_{BBB , j} ≥ n_{j , GEM} · μmax , where the binary variable z_{rxn , i} indicates that non-core reaction i carries no flux , n_{j , GEM} is the stoichiometric coefficient of BBB j in the biomass reaction , and μmax is the maximal growth rate ., To identify alternative Smin subnetworks for a BBB , lumpGEM further constrains the GEM with the following integer-cut constraint after generating each subnetwork in an iterative manner 59 ., The reactions that belong to each subnetwork are denoted as RSmin: ∑_{k = 1}^{#RSmin} z_{RSmin , k} > 0 ., We validate the consistency between rGEM and GEM by performing the following consistency checks , comparing: While these are the basic consistency tests , one could define additional checks , which can be specific to the organism and problem under study ., We recommend that in all cases one should perform the checks using FBA and TFA , i . e . , with and without thermodynamic constraints ., The first release of the redGEM toolbox is available upon request to the corresponding author .
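The Smin search with integer cuts can be illustrated on a toy network. This is an illustrative sketch, not the lumpGEM implementation: the three-reaction network (R1: A→B, R2: B→C, R3: A→C, with A a core precursor and C a biomass building block) is invented for the example, and a brute-force subset search with a least-squares feasibility check stands in for the MILP solver.

```python
from itertools import combinations
import numpy as np

# Toy network, balancing only NON-core metabolites (core precursors are
# treated as free sources, as in lumpGEM). Rows: B, C; C is the target
# BBB, drained by a demand flux fixed to 1.
# Columns: R1 (A -> B), R2 (B -> C), R3 (A -> C).
S = np.array([
    [1.0, -1.0, 0.0],   # B
    [0.0,  1.0, 1.0],   # C
])
demand = np.array([0.0, 1.0])   # net requirement: produce 1 unit of C
reactions = ["R1", "R2", "R3"]

def feasible(subset):
    """True if S.v = demand has a solution with every member active."""
    A = S[:, list(subset)]
    if A.shape[1] == 0:
        return bool(np.allclose(demand, 0.0))
    v, *_ = np.linalg.lstsq(A, demand, rcond=None)
    # residual check = mass conservation; v > 0 = all members carry flux
    return bool(np.allclose(A @ v, demand) and np.all(v > 1e-6))

def smallest_subnetworks(cuts):
    """All minimum-size feasible reaction subsets, skipping cut sets."""
    for size in range(len(reactions) + 1):
        hits = [set(c) for c in combinations(range(len(reactions)), size)
                if set(c) not in cuts and feasible(c)]
        if hits:
            return hits
    return []

cuts = []
first = smallest_subnetworks(cuts)    # the Smin subnetworks
cuts += first                         # integer cut: forbid exact repeats
second = smallest_subnetworks(cuts)   # next-smallest alternatives
print([sorted(reactions[i] for i in s) for s in first])   # [['R3']]
print([sorted(reactions[i] for i in s) for s in second])  # [['R1', 'R2']]
```

Requiring strictly positive flux on every subset member excludes zero-flux "passenger" reactions, which is what the size-based objective achieves in the actual MILP; the integer cut then forces the second round to return the alternative route R1+R2.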
| Introduction, Results and discussion, Conclusion, Materials and methods | Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions ., However the complexity of these large metabolic networks often hinders their utility in various practical applications ., Although reduced models are commonly used for modeling and in integrating experimental data , they are often inconsistent across different studies and laboratories due to different criteria and detail , which can compromise transferability of the findings and also integration of experimental data from different groups ., In this study , we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner focusing on the central metabolism or subsystems of interest ., The method minimizes the loss of information using an approach that combines graph-based search and optimization methods ., The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields , flux and concentration variability and gene essentiality ., The development of these “consistently-reduced” models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models . 
| Reduced models are used commonly to understand the metabolism of organisms and to integrate experimental data for many different studies such as physiology , fluxomics and metabolomics ., Without consistent or clear criteria on how these reduced models are actually developed , it is difficult to ensure that they reflect the detailed knowledge that is kept in genome scale metabolic network models ( GEMs ) ., The redGEM algorithm presented here allows us to systematically develop consistently reduced metabolic models from their genome-scale counterparts ., We applied redGEM for the construction of a core model for E . coli central carbon metabolism ., We constructed the core model irJO1366 based on the latest genome-scale E . coli metabolic reconstruction ( iJO1366 ) ., irJO1366 contains the central carbon pathways and other immediate pathways that must be connected to them for consistency with the iJO1366 ., irJO1366 can be used to understand metabolism of the organism and also to provide guidance for metabolic engineering purposes ., The algorithm is also designed to be modular so that heterologous reactions or pathways can be appended to the core model akin to a “plug-and-play” , synthetic biology approach ., The algorithm is applicable to any compartmentalized or non-compartmentalized GEM . | cell physiology, chemical compounds, ketones, metabolic networks, enzymology, cell metabolism, pyruvate, metabolites, network analysis, enzyme chemistry, amino acid metabolism, purine metabolism, computer and information sciences, metabolic pathways, acids, chemistry, biochemistry, cell biology, biology and life sciences, cofactors (biochemistry), physical sciences, metabolism | null |
1,859 | journal.pcbi.0030084 | 2,007 | Enhancer Responses to Similarly Distributed Antagonistic Gradients in Development | With the availability of complete genome sequences and quantitative gene expression data , it becomes possible to explore the relationships between sequence features of regulatory DNAs and the transcriptional responses of their associated genes 1–7 ., Developmental genes regulated by multiple enhancer regions and their spatio–temporal dynamics of expression are of particular interest 8–11 ., The enhancers of developmental genes , such as gap and pair-rule genes , interpret maternally deposited information and participate in the formation of progressively more complex expression patterns , thus increasing the overall spatial complexity of the embryo ., In part , the information required to generate these downstream patterns ( e . g . , gap and pair-rule ) is present in the enhancer sequences ., Much attention has been paid to the investigation of transcription factor binding motifs and motif combinations , and to interpreting their role in the formation of spatial gene expression patterns ., 5 , 12 , 13 ., However , some early enhancers of Drosophila contain virtually identical sets of binding motifs , yet they produce distinct expression patterns 6 , 14 ., It has been argued extensively that binding site quality ( affinity ) and site arrangement within enhancers ( grammar ) contributes to the levels and precision of enhancer responses 6 , 15–21 ., In fact , some experimental studies of differentially arranged binding sites confirm the dependence of enhancer response on distances between binding sites and on binding site orientation 6 , 16 , 22–24 , and some structural enhancer features such as motif spacing preferences and characteristic binding site linkages ., “Composite elements” and other syntactical features were identified in many model organisms using computational analyses of binding site distributions throughout entire genomes 5 , 25 , 
26 ., Recent studies involving in vivo selection of optimal binding-site combinations in yeast also revealed a number of preferred motif combinations and structural features 27 ., Nevertheless , some phylogenetic studies indicate significant flexibility in the regulatory code 28–31 ., The analysis of unrelated , structurally divergent , but functionally similar enhancers aids in defining the balance between the stringency of the functional cis-regulatory “code” and its flexibility as demonstrated by changes in primary enhancer sequence over the course of evolution ., 6 , 18 , 32 ., Requirements for multiple cofactors that influence transcription via protein–protein interaction complicate computational predictions and studies of enhancers ., While known binding motifs are easy to find , most protein–protein interactions leave no clear footprints in the DNA sequence of enhancers—some developmental coregulators such as CtBP ( C-terminal binding protein ) and Groucho influence the transcriptional response through interactions with sequence-specific transcription factors ( e . g . 
, 33 ) ., Finally , regulatory signals from enhancers must be transmitted to the basal transcriptional machinery; this involves enhancer–promoter communication of some sort , as well as the recruitment of mediator complexes 2 , 21 , 34–36 ., Both aspects further complicate the in silico prediction and analysis of enhancer activity ., Until recently , most models explaining enhancer responses in development were largely qualitative 37 , 38 ., Davidsons group 2 , 39 and Hwas group 21 undertook quantitative modeling of enhancer–promoter interactions and investigated the responses of architecturally complex regulatory units ., The elaborate nature of developmental enhancers in Drosophila was described in quantitative models introduced by Reinitzs group 1 , 7 ., Here , we summarize some basic structural considerations and investigate mechanisms of enhancer regulation to demonstrate how such features may affect the transcriptional responses ., Our quantitative analyses involve models based on the fractional occupancy of transcription factor binding sites present within enhancers 2 , 21 , 40 , 41 ., On the one hand , the described models are similar to those developed by Hwas group 21 as they consider structural enhancer details ., On the other hand , the models include biological assumptions for developmental enhancers ( i . e . 
, quenching ) , similar to those introduced by Reinitzs group 7 ., Technically , our models use a homotypic array ( a unit containing a number of identical sites ) of binding sites as an elementary unit for modeling ., Based on quantitative analysis of transcriptional responses , we analyze some models for developmental pattern formation ., In particular , we explore the outcome of the interplay between two antagonistic transcription factors , an activator and a repressor ., We demonstrate that a pair of antagonistic gradients with similar or even identical spatial distributions is sufficient to initiate stripes of expression of a downstream gene ., Given that the antagonistic gradients may be deposited by the same localized or terminal signal ( e . g . , in the fly embryo ) 42 or by a focal signal ( e . g . , in the case of a butterfly eyespot ) 43 , the models explain how initiation from a single point in space can lead to efficient gains in spatial complexity ., The transcriptional state of enhancers of developmental genes is among major factors in developmental pattern formation 6–8 , 10 ., If a transcription factor is present in a concentration gradient , the probability of that factor occupying a binding site in a target enhancer at a given position along the gradient depends on the factors concentration at that position ( coordinate ) ., This logic suggests that in the case of activator and repressor gradients , calculating the probability of activator , but not repressor , binding ( i . e . 
, the successful transcriptional state resulting in transcription ) may serve well to model the spatial expression patterns of the early developmental genes ., Let us consider an elementary enhancer , which contains two binding sites: one for an activator and one for a repressor ., Let us assume that binding of the activator A in the absence of the repressor R brings the elementary two-site regulatory unit i ( the enhancer; see Figure 1A ) into a successful transcriptional state ., The equilibrium probability of the successful state pi depends on the binding probabilities of A ( pA ) and R ( pR ) , which depend on the concentrations of the regulators ( A and R ) and on the binding constants ( KA and KR ) of the binding sites for the corresponding transcription factors ( see Equations S1–S5 in Protocol S1 ) : pi = pA ( 1 − pR ) , where pA = KAA/ ( 1 + KAA ) and pR = KRR/ ( 1 + KRR ) ( Equation 1 ) ., Extension of this formula to multiple different activators or repressors is easily obtained with the same logic ( see Equation S6 in Protocol S1 ) ., Bintu and coworkers recently introduced a number of similar models , describing DNA–protein and protein–protein interactions on proximal promoters 21 , where the authors used an “effective dissociation constant , ” which is the inverse of the binding constant ( K ) used in this study ., Developmental enhancers usually contain homotypic or heterotypic binding site arrays for multiple activators and repressors 44 ., The probability of achieving a successful transcriptional state for the binding site array ( enhancer ) i , containing M identical , noninteracting activator sites and N identical , noninteracting repressor sites , is equal to ( see Equation S7 in Protocol S1 ) : pi = Ψ/ ( ΨAMΨRN ) ( Equation 2 ) ., Here , Ψ is the sum of the statistical weights of molecular microstates for a homotypic site array and the denominator ΨAMΨRN is the sum of the statistical weights for all microstates of the system ( i . e .
, the partition function; see Protocol S1 , “Binding site arrays” ) ., In such site arrays , bound transcription factors may cooperate or compete for binding ., Let us consider a cooperative array as an element of enhancer architecture ( Figure 1B ) ., Assuming presence of lateral diffusion 41 , 45 , equal binding affinities for all sites in the array and expressing cooperativity C as the ratio between the second and the first binding constants , one can approximate the sum of statistical weights Ψ of all possible molecular microstates for a cooperative array as follows ( see Equations S8 and S9 in Protocol S1 ) :, Binding sites for an activator and a repressor may overlap , and the corresponding proteins compete for binding ., Well-known examples in Drosophila development include Bicoid and Krüppel 46 , Caudal and Hunchback 44 , and Twist and Snail 6 ., The classic example outside Drosophila is the competition between CI and Cro in the phage lambda switch 47 ., The sum of microstates for a competitive site array , containing M overlapping A/R binding sites ( Figure 1C; also see Figure S1 and Equations S8–S12 in Protocol S1 ) , can be approximated by:, In addition to competitive interactions , this model also includes homotypic cooperative interactions between the regulators ( see Equations S10–S12 in Protocol S1 ) ., Structural elements within an enhancer ( single sites or entire site arrays ) may be distributed over extended genomic regions ( thousands of bases , e . g . 
, the Drosophila sna enhancer ) 48 , 49 ., In these cases , the distant regulatory elements within the enhancer may represent relatively independent units—modules 15 , 26 ( see Figure 1D ) ., Each independent module may include a single binding site or a binding site array ., Redundancy of the enhancer elements ( binding sites and modules ) is a well-known biological phenomenon 44 ., If the modules within an enhancer are independent from one another , bringing any one module into a successful transcriptional state may be sufficient for bringing the entire enhancer into a successful state , even if another module ( s ) is repressed ., Given the probabilities pi of successful states of all i independent modules or enhancers ( Equations 1–4 ) , the probability PEnc of the multimodule enhancer being in a successful state is equal to: PEnc = 1 − ∏i ( 1 − pi ) ( Equation 5 ) ., This is the reverse probability of the enhancer being in an inactive state , which is the product of the probabilities of each independent module being in an inactive state ( 1 − pi ) ; Reinitzs group 1 , 7 implemented similar expressions for the quenching mechanism ., While distinct modules may provide simultaneous responses to different inputs , multiple equivalent modules may allow for the boosting of an enhancers overall response to a single input 50 ( see Figure S1E and S1F ) ., In practice , however , the modules may not be completely independent from each other ., Short-range repression and other factors ( discussed below ) may be involved in distance-dependent module responses 22–24 , 48 ., Let us consider an enhancer containing two modules , a and b ., Module a contains an activator site and a repressor site; module b contains an activator site only ( see Figure 1E ) ., Potentially successful enhancer states include all combinations in which at least one activator molecule is bound ., However , the mixed state KaAAKaRR is always inactive , as the repressor and the activator sites in module a are “close” ., If module b is not
“too far” from module a , short-range repression from a may reach the activator site in b ., We can account for this possibility ( and for its extent ) by introducing a multiplier δ , depending on distance between the modules a and b ( see also Equations S14–S16 in Protocol S1 ) :, In this formula , Ψab is the sum of weights for all microstates , and Ψaboff is the sum of weights for the microstates that are always inactive ( see Protocol S1 , Equation S14 ) ., If modules a and b are “far , ” δ = 1; if they are “close , ” δ = 0 . If the distance between a and b is somewhere in between , so that a repressor bound in a partially affects the activator bound in b , we could introduce a distance function δ = f ( x ) ( 0 ≤ δ ≤ 1 ) , where δ depends on the distance x between a and b ( and perhaps other variables , such as the repressor type ) ., However , all we currently know about the distance function is that short-range repression is effective at distances less than 150–200 bases , and long-range repression may spread through entire gene loci ( i . e . , 10–15 kb 23 , 24 , 48 ) ., Without exact knowledge about the distance function , the module concept ( Equation 5 ) allows modeling of distance-dependent responses , but only in a binary close/far ( yes/no ) fashion ., Most of the enhancer response models ( Equations 1–6 ) consider inputs from two antagonistic gradients , but enhancers may be under the control of a larger number of regulators ( see Figure 1D ) ., However , gradients of some of these regulators may either have similar spatial distributions ( e . g . , Dorsal and Twist ) 51 , or non-overlapping spatial expression domains ( e . g . 
, Krüppel and Giant ) 37 ., Therefore , in many cases the combination of all inputs may be parsed down to one or more pairs of antagonistic interactions ., Based on the described quantitative models approximating enhancer responses ( see above ) , we analyzed possible spatial solutions produced by gradients of two antagonistic regulators ., The examples in Figure 2A–2C demonstrate that the spectrum of possible enhancer responses is quite rich ., One surprising result of these simulations is that even identically distributed antagonistic gradients can yield distinct spatial expression patterns such as stripes ( Figure 2B ) ., We identified conditions for the “stripe” solutions using differential analysis of the site occupancy function shown in Equation 1 ., For example , if both regulators are distributed as identical gradients and if their concentrations and binding constants are equal ( KA = KR; A = R ) , then it is sufficient to identify conditions for the maximum of a site-occupancy function y ( x ) depending on the spatial coordinate x: y ( x ) = k f ( x ) / ( 1 + k f ( x ) ) ^2 ( Equation 7 ) ., In this variant of Equation 1 , k is the product of the absolute concentration of the regulators Abs and the binding constant KA ( k = KAAbs ) ., The function f ( x ) is the distribution of the relative concentration ( 0 ≤ f ( x ) ≤ 1 ) of the transcription factors along the spatial coordinate x ( i . e . , the embryo axis ) ., The function reaches its maximum ( y′ ( x ) = 0 , x > 0 ) at f ( x ) = 1/k ., In the Gaussian , logistic , and exponential decay forms of the function f ( x ) ( see details in 52 ) , the maximum 1/k exists only if KAAbs > 1 ( i . e . , if the binding constants and/or the absolute concentrations are high ) ( see also Figure S2 ) ., In the simple case ( Equation 7 ) , the absolute value of the fractional occupancy at the maximum is not very high ( 0 .
25 ) ; adding more sites or modules ( see Figure S1 ) allows for the functions values to approach 1 ( see Figure 2B ) ., However , if the antagonistic gradients are not identical ( e . g . , if the activator gradient is “wider” than the repressor gradient ) , the solutions for the stripe expression are more robust ( Figure 2A ) ., Shifting the peak of the activator gradient relative to the repressor gradient produces even more robust stripe patterns , as in the case of classical qualitative models 37 , where a repressor “splits” or “carves out” the expression of a target gene ( Figure 2C ) ., The formation of distinct gene expression domains ( e . g . , stripes ) in response to similarly or even identically distributed gradients is of interest because this mechanism can lead to the very efficient gain of spatial complexity in just a single step: based on primary sequence , enhancers of target genes can translate two similarly distributed gradients into distinct gene expression domains or stripes ., Such similarly distributed antagonistic gradients may come about by induction due to a single maternal gradient or due to a terminal ( focal ) signal emanating from a discrete point or embryo pole ., The general pattern formation mechanism in the case described can be represented as follows: ( 1 ) maternal/terminal signal initiates two antagonistic gradients; and ( 2 ) interactions between the two gradients produce multiple stripe patterns ., In an extreme case ( e . g . 
, Figure 2B ) , the described “antagonistic” mechanism could use only a single gradient/polar signal to produce multiple stripes of target gene expression ., The interaction between two antagonistic gradients is an example of a feed-forward loop ., Due to a cascade organization of the developmental transcriptional networks , feed-forward loops are among the most common network elements ( network motifs ) ; a detailed analysis of the feed-forward networks and potential solutions can be found in a recent work by Ishihara et al 53 ., To explore the interplay of antagonistic gradients in detail , we considered particular examples , such as the regulation of rhomboid ( rho ) by gradients of Twist and Snail and the regulation of knirps by the maternal gradients of Hunchback and Bicoid 54 ., The enhancer associated with rho directs localized expression in ventral regions of the neurogenic ectoderm ( vNEs ) 51 ., The rho vNE enhancer , as well as enhancers of other vNE genes such as ventral nervous system defective ( vnd ) , is activated by the combination of Dorsal and Twist , but is repressed by Snail in the ventral mesoderm 13 , 51 ., Both Twist and Snail are targets of the nuclear Dorsal gradient , which is established by the graded activation of the Toll receptor in response to maternal determinants 55 ., The Twist and Snail expression patterns occupy presumptive mesodermal domains in the embryo , yielding slightly distinct protein distributions ., Our recent quantitative analysis indicates that the boundaries of rho and vnd expression are defined largely by the interplay of the two antagonistic Twist and Snail gradients ( see Figure 2D and 2F ) 6 , and the expression patterns of rho and vnd resemble the predicted solutions shown in Figure 2A ., The patterning mechanism in this case can be represented as follows: ( 1 ) a terminal signal ( Toll/Dorsal gradient ) initiates two similar antagonistic gradients , Twi and Sna; and ( 2 ) Twi and Sna gradients produce multiple 
( distinct ) stripe patterns ( rho , other vNE genes ) ., Another example of the interplay between an activator and a repressor gradient is the early expression of the gap gene knirps in response to maternal gradients of Bicoid and Hunchback ., Bicoid and Hunchback are deposited maternally and have similar , but distinct distributions—high in the anterior and low in more posterior regions of the embryo ( see Figure 2E ) ., The graded drop-off of the knirps repressor Hunchback at 50%–60% egg length is steeper than that of the knirps activator Bicoid ., This is similar to the theoretical case shown in Figure 2C , where a narrow repressor “splits” a wider activator expression domain , thus producing two peaks of expression of the downstream gene ., Known enhancer elements of knirps drive kni expression in the anterior and the posterior embryo domains and contain binding sites for Bicoid , Hunchback , Caudal , Tailless , and Giant 44 , 56–58 ., However , tailless , caudal , and giant are downstream of Bicoid; it is likely that these and some other genes participate in the later maintenance of kni expression ., It has been extensively argued that gap genes ( and hunchback ) stabilize their patterns along the anterior–posterior axis by mechanism of mutual repression 49 ., At later stages ( after cycle 14 ) , the inputs from Bicoid and Hunchback into knirps regulation may stabilize fluctuations in knirps expression and fluctuations in the entire gap gene network due to mutual repression ., Dynamic models from Reinitzs group based on slightly different logistic response functions support the sufficiency of Bicoid and Hunchback in the establishment of the early knirps expression 59 ., To explore the role of Bicoid and Hunchback interplay in the early expression of knirps , quantitative expression data for Bicoid , Hunchback , and Knirps were downloaded from the FlyEx database 60 , and models simulating the knirps enhancer response were generated based on Equations 1–4 ., 
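The stripe solution of Equation 7 is easy to check numerically. A minimal sketch, assuming an illustrative Gaussian gradient shape and illustrative parameter values (gradient width 0.25, k = KAAbs = 20, and 8 equivalent modules for the Equation 5 boost) that are not taken from the fits in Table 1:

```python
import numpy as np

# Equation 1 specialized to identical antagonistic gradients
# (KA = KR, A(x) = R(x) = Abs * f(x)), i.e., Equation 7:
#     y(x) = k f(x) / (1 + k f(x))**2,  with k = KA * Abs.
# An interior maximum y = 0.25 exists at f(x) = 1/k only when k > 1.
x = np.linspace(0.0, 1.0, 2001)      # embryo axis, relative units
f = np.exp(-(x / 0.25) ** 2)         # shared gradient shape (illustrative)
k = 20.0                             # KA * Abs (illustrative value)

kf = k * f
y = kf / (1.0 + kf) ** 2             # activator bound, repressor not bound

i_max = int(np.argmax(y))
print(round(y[i_max], 3))            # -> 0.25, the predicted ceiling
print(round(kf[i_max], 2))           # -> 1.0, i.e., f(x*) = 1/k

# Low near the source (both factors saturate) and low in the tail: a stripe.
assert y[0] < 0.1 and y[-1] < 0.05

# Equation 5: n equivalent independent modules boost the response.
P = 1.0 - (1.0 - y) ** 8
print(round(P.max(), 2))             # -> 0.9
```

The occupancy stays low at the source (where both factors saturate their sites), peaks at 0.25 where f(x) is near 1/k, and decays in the tail: a stripe from a single localized signal; stacking independent modules (Equation 5) lifts the peak toward 1.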
One model assumed that Bicoid and Hunchback bind independently of each other; another model assumed that there is interference ( possibly competition ) between the Bicoid and the Hunchback sites ( Equation 7: competitive binding ) ., Fitting the available quantitative data with the models ( see parameter values in Table 1 ) shows that both models are sufficient to explain the posterior expression of knirps ., However , the competitive model ( Figure 2G ) also predicts the anterior expression of knirps ., This result was especially striking , as the anterior knirps expression data were not included in some of the fitting tests ., The Bicoid and Hunchback motifs are quite different , so it is unlikely that this is a case of direct competition for overlapping binding sites ., Other mechanisms may account for the negative interaction between the two regulators; for instance , binding of Bicoid may prevent Hunchback dimerization 61 and/or efficient binding ., Shifting the knirps expression data by more than 5% along the anterior–posterior axis ( see Materials and Methods ) reduces the data-to-model fit quality for the posterior kni expression domain ( see Table 1 for exact parameter values ) ., The robustness of knirps regulation was emphasized earlier 59 , 62 , and the present analysis using site occupancy confirms that the interplay of the two antagonistic gradients , Bicoid and Hunchback , is sufficient to explain the initial formation of both the anterior and the posterior stripes of knirps expression ., To test the models describing gene response to antagonistic gradients , we introduced mutations in the rho enhancer and compared the expression patterns produced by the reporter gene in vivo with the expression patterns simulated in silico ( Equations 1–6 ) ., Specifically , the models for rho and vnd expression predicted the following 6: ( 1 ) The position of the dorsal expression border of rho is highly sensitive to Twist and/or its
cooperativity with Dorsal ., Reducing Dl–Twi cooperativity or Twist–Twist cooperativity shifts the dorsal border ventrally ., ( 2 ) The number of independent elements ( groups of closely spaced Dorsal-Twist-Snail sites , or “DTS” elements ) contributes to the expression pattern of rho and vnd according to Equation 5 ( boost ) : a higher number of DTS elements in vnd is responsible for the shift of the ventral vnd expression border relative to rho 6 ., These two specific predictions , based on the model analysis and simulations , were tested by modifying the structure of the minimal rho enhancer ., First , the distance between the Dorsal and the Twist sites in the DTS element was increased ( see Figure 3 ) ., The increased distance between the two sites reduced the cooperative potential between the Dorsal and Twist sites ., Indeed , the observed effect in vivo is consistent with the effect of the same mutation simulated in silico , causing a ventral shift of the dorsal border of the reporter gene expression ( compare Figure 3E with 3A ) ., An additional mutation eliminating the weaker Twist site from the DTS element affects Twist–Twist cooperativity in the enhancer and shifts the dorsal rho–lacZ expression border ., In fact , the combined effect produced by these two mutations in vivo ( Figure 3G; compare with 3C ) and the deletion of the weak Twist site alone ( Figure 3F; compare with 3B ) demonstrate shifts of the dorsal expression border of the rho-lacZ transgene in concordance with the models ., Last , a second DTS module was introduced into the rho enhancer in the context of the previous two mutations ., The predicted in silico effect is a “boost” in expression , resulting in the shift of both ventral and dorsal expression borders ., Again , the predicted changes in the expression pattern were observed in vivo—not only were the positions of the ventral and the dorsal border shifted ( Figure 3H; compare with 3D ) , but the overall level of expression of this
transgenic construct appears higher ( unpublished data ) ., The described in vivo tests of the in silico predictions using site-directed mutagenesis of the rho enhancer have demonstrated that though the quantitative models based on fractional site occupancy are approximations , they can produce reasonable predictions for the response of complex regulatory units ( such as fly enhancers ) to gradients of transcriptional regulators ., Using transcriptional response models and quantitative expression data , we demonstrated how two similar terminal gradients can determine stripes of expression of downstream genes ., Related examples are quite frequent in development ., For instance , the posterior stripe of hunchback is the result of activation by Tailless and repression by Huckebein 63 , 64 ., As in the case with Twist and Snail , the posterior gradient of Tailless is slightly broader than the gradient of Huckebein ., Therefore , the mechanisms of posterior hunchback expression may be similar to the mechanisms shown in Figure 2A , 2B , 2D , and 2F ., However , while the examples above involve direct transcriptional regulation in the embryonic syncytial blastoderm , extracellular morphogen gradients may produce similar outcomes if the cellular response is transcriptional in nature ., Formation of eyespot patterns in butterfly wings is an elegant example of axial ( here focal ) patterning in a cellular environment ( see Figure 4A ) ., The interplay between Notch and Distalless specifies the position of focal spots and intervein midline patterns in the butterfly wing 65 ., Subsequent Hedgehog signaling from the focal spots is believed to induce the formation of concentric rings of gene expression and the pigmentation of the eyespots in the adult butterfly wing 66 ., Known targets of the Hedgehog gradient are the butterfly homologs of engrailed and spalt 67 ., Initially , both genes are expressed around the focal spot , but at later stages an external ring of engrailed 
expression appears around the spalt expression pattern ( see Figure 4B and 4C ) ., In the case of engrailed pattern formation , a simplified mechanism 67 may include elements of the following feed-forward network: ( 1 ) focal signal ( focal spot/Hedgehog signaling ) initiates two antagonistic gradients , the activator Engrailed and the repressor Spalt; and ( 2 ) subsequent interactions between Engrailed and Spalt produce multiple ring patterns ., An extension of the model in Equation 1 , ( k is the rate of synthesis and c is the rate of decay; dR/dt = 0 ) reproduces the dynamic changes in the engrailed pattern ( Figure 4A , 4D–4E ) :, Examples of axial or focal patterning using a single source of signaling or a combination of similar antagonistic gradients are common ., The interplay between maternal hunchback and maternal nanos during development of the short germ-band insect Schistocerca is an example of axial patterning similar to the interplay between Bicoid and Hunchback 68 ., Specification of segments during insect limb development is comparable to the mechanisms of Twist/Snail interplay and the butterfly eyespot formation 69 ., Nature uses many combinations of signals and gradients in pattern formation , but the most effective mechanism/combination may be one that allows maximal informational gain in a minimal number of steps ., From this perspective , the interplay between similar or identical gradients is of significant interest ., Quantitative distribution data for Dorsal , Twist , and Snail were published previously 6 ., Quantitative expression data for mRNA levels of mutated rho enhancers were generated by in situ hybridization ( the data are available at the DVEx database: http://www . dvex . 
org ) ., Multiplex in situ hybridization probes were used for colocalization studies , including co-stainings for the endogenous mRNAs and lacZ reporter gene expression as described previously , and confocal microscopy and image acquisition were performed as described 6 ., In short , signal intensity profiles of sum projections along the dorso–ventral axis of mid-nuclear cleavage cycle of 14 embryos were acquired using the ImageJ analysis tool ( National Institutes of Health , http://rsb . info . nih . gov/ij ) ., Background signals were approximated by parabolic functions and subtracted according to existing methods 70 ., Online programs for the automated background subtraction and data alignment are available from the University of California Berkeley Web resource ( http://webfiles . berkeley . edu/∼dap5 ) ., After background subtraction , the data were resampled and aligned according to the position of Snail gradient and the distribution of endogenous rho message ., Expression datasets for anterior–posterior genes were downloaded from the FlyEx database ( with options: integrated , without background ) 60 ., In all cases , signal amplitude was normalized to the 0–1 range , and the data was resampled to 1 , 000 datapoints along the coordinate of the corresponding axis ., In all models , we used the relative concentration multiplied by a maximal absolute concentration ., This absolute concentration is an independent unknown parameter ( range , 10−8–10−9 M ) equal for all reaction components ., The minimal rho enhancer 6 was mutated via site-directed mutagenesis in pGem T-Easy ( Promega; http://www . promega . 
com ) using the following primers: Dl-Twi distance , RZ65mut: 5′-GTTGAGCACATGTTTACCCCGATTGGGGAAATTCCCGG-3′; deletion of Twist site , RZ66mut: 5′-GGCACTCGCATAGATTGAGCACATG-3′; creation of a second DTS , RZ67mut: 5′-GCAACTTGCGGAAGGGAAATCCCGCTGCAACAAAAAG-3′; and RZ68mut: 5′-CACACATCGCGACACATGTGGCGCAACTTGC-3′ ., Mutated enhancers were cloned into the insulated P-element injection vector E2G as described previously 13: constructs were introduced into the D . melanogaster germline by microinjection as described previously 71 ., Between three and six independent transgenic lines were obtained and tested for each construct; results were consistent across lines ., To fit our models with actual quantitative data , we maximized the agreement r ( Pearson association coefficient ) between the model output predictions and the observed ( measured ) expression patterns:, The best set of parameters X* from the parameter space I is defined by the binding constants , cooperativity values , and the number of binding sites ., We used a standard hill-climbing algorithm ( full neighborhood search ) for the main parameter space ( e . g . , 72 ) ., For each identified maximum , we measured the value of the site occupancy function and discarded maxima that produce site saturation values below selected thresholds , as well as such that are located beyond selected realistic parameter ranges for binding constants and cooperativity values ., All maxima producing the highest data-to-model agreement were found multiple times , suggesting that exhaustive mapping of the parameter space was achieved ., Fitting “shifted data” ( wrong data ) for Knirps was performed by exploring exactly the same parameter space and exactly the same number of seed points for each shift value ., Quantitative gene expression data for dorso–ventral genes are available at http://www . dvex . 
org; the analysis tool “E-response , ” fitting utilities , and online data-treatment programs are available at the University of California Berkeley Web resource http://webfiles . berkeley . edu/∼dap5 . | Introduction, Results/Discussion, Materials and Methods | Formation of spatial gene expression patterns in development depends on transcriptional responses mediated by gene control regions , enhancers ., Here , we explore possible responses of enhancers to overlapping gradients of antagonistic transcriptional regulators in the Drosophila embryo ., Using quantitative models based on enhancer structure , we demonstrate how a pair of antagonistic transcription factor gradients with similar or even identical spatial distributions can lead to the formation of distinct gene expression domains along the embryo axes ., The described mechanisms are sufficient to explain the formation of the anterior and the posterior knirps expression , the posterior hunchback expression domain , and the lateral stripes of rhomboid expression and of other ventral neurogenic ectodermal genes ., The considered principles of interaction between antagonistic gradients at the enhancer level can also be applied to diverse developmental processes , such as domain specification in imaginal discs , or even eyespot pattern formation in the butterfly wing . 
| The early development of the fruit fly embryo depends on an intricate but well-studied gene regulatory network ., In fly eggs , maternally deposited gene products—morphogenes—form spatial concentration gradients ., The graded distribution of the maternal morphogenes initiates a cascade of gene interactions leading to embryo development ., Gradients of activators and repressors regulating common target genes may produce different outcomes depending on molecular mechanisms , mediating their function ., Here , we describe quantitative mathematical models for the interplay between gradients of positive and negative transcriptional regulators—proteins , activating or repressing their target genes through binding the genes regulatory DNA sequences ., We predict possible spatial outcomes of the transcriptional antagonistic interactions in fly development and consider examples where the predicted cases may take place . | drosophila, developmental biology, computational biology | null |
166 | journal.pntd.0004549 | 2,016 | Transforming Clinical Data into Actionable Prognosis Models: Machine-Learning Framework and Field-Deployable App to Predict Outcome of Ebola Patients | The 2014–15 EVD outbreak in West Africa has eclipsed in magnitude all combined past EVD outbreaks since the disease was first identified in 1976 1 ., As of February 17 , 2016 ( http://www . cdc . gov/vhf/ebola/outbreaks/2014-west-africa/case-counts . html ) , a total of 28 , 639 cases have been reported ( 15 , 251 laboratory-confirmed ) and 11 , 316 total deaths ., The outbreak constitutes one of the most serious worldwide health emergencies in modern times , with severe socioeconomic costs , particularly in the West African nations of Liberia , Sierra Leone , and Guinea ., Although vaccine development is promising 2 , the prospect of future outbreaks looms ., The report of the WHO Ebola Interim Assessment Panel also points to several shortcomings in the initial response 3 , noting that “better information was needed to understand best practices in clinical management” and that “innovations in data collection should be introduced , including geospatial mapping , mHealth communications , and platforms for self-monitoring and reporting” ., Given these circumstances , the development of accurate and accessible computational methods to track the progression of the outbreak and model various aspects of the disease is beneficial not only for the research community , but also for health care personnel in the field ., In particular , prognosis prediction models based on the available patient information would be of great utility ., Such predictive models can identify the clinical symptoms and laboratory results that should be tracked most closely during the onset of EVD , and give health care workers the ability to more accurately assess patient risk and therefore manage treatment more efficiently 4 ., This data-driven prioritization could lead to higher recovery rates through stratified 
treatment 5 , especially in resource-constrained areas , and would help doctors limit the evaluation of experimental EVD vaccines and treatments 6 with potentially harmful side effects only to highest-risk patients ., These improvements in treatment , however , will only be achieved once larger datasets become available to overcome biases resulting from small samples ., Schieffelin et al . 7 presented the only publicly accessible , at the time of publication , clinical dataset from the West African EVD outbreak ( available in various formats at http://fathom . info/mirador/ebola/datarelease ) to enable clinical investigations ., Although a large amount of very useful case and resource data has been made public throughout the outbreak ( https://data . hdx . rwlabs . org/ebola ) , thanks to the efforts of numerous individuals and organizations , there is to our knowledge no other public source offering a similar level of clinical detail ., The Schieffelin et al . dataset includes epidemiologic , clinical , and laboratory records of 106 patients treated at Kenema Government Hospital in Sierra Leone during the initial stages of the outbreak ., The study also provides a simple heuristic to estimate mortality risk by defining an Ebola Prognostic Score ( EPS ) , which predicts patient outcome based on symptom counts ., EPS offers statistically significant differences between surviving and deceased patients with p < 0 . 001 ., While data from other published clinical studies are not available , their summary results suggest that more advanced prognostic prediction models could be potentially useful to the field ., Levine et al . 8 developed a diagnostics model using data from the Bong County Ebola Treatment Unit in Liberia , which predicts laboratory-confirmed EVD cases using six clinical variables ., Yan et al . 
9 carried out a multivariate analysis of 154 EVD patients from the Jui Government Hospital in Sierra Leone , and reported that age , fever , and viral load are independent predictors of mortality , while Zhang et al . 10 recently reported that age , chest pain , coma , confusion , and viral load are associated with EVD prognosis using a set of 63 laboratory-confirmed cases also from the Jui Government Hospital ., In this study , we employed the Schieffelin et al . EVD dataset to develop novel predictive models for patient prognosis , integrating a data-driven hypothesis-making approach with a customizable Machine Learning ( ML ) pipeline , and incorporating rigorous imputation methods for missing data ., We evaluated the predictors using a variety of performance metrics , identifying top predictors with and without viral load measurements , and packaged them into a mobile app for Android and iOS devices ( http://fathom . info/mirador/ebola/prognosis ) ., Our protocol exemplifies how data-driven computational methods can be useful in the context of an outbreak to extract predictive models from incomplete data , and to provide rapidly actionable knowledge to health workers in the field ., Moreover , prognosis prediction software could complement ongoing efforts to develop rapid EVD diagnostics 11 and safe data-entry devices 12 ., Given the availability of only one dataset from a single location , one Ebolavirus species ( Zaire ebolavirus ) , and very specific time span and laboratory protocols , these models need to be interpreted in an exploratory sense and require further validation with independent clinical data from other EVD treatment sites 8 9 13 14 15 ., We have made all of these resources publicly available and fully documented with the hope to encourage further methods development , independent validation , and greater data sharing in outbreak response ., Our analysis and modeling is based on the EVD clinical and laboratory data initially described by
Schieffelin et al 7 ., The Sierra Leone Ethics and Scientific Review Committee and the ethics committee at Harvard University have approved the study and public release of this clinical data , which has been de-identified to protect patient privacy ., As indicated by Schieffelin , “these committees waived the requirement to obtain informed consent during the West African Ebola outbreak” and “all clinical samples and data were collected for routine patient care and for public health interventions . ”, The larger dataset comprises 213 suspected cases evaluated for Ebola virus infection at the Kenema Government Hospital ( KGH ) in Sierra Leone between May 25 and June 18 , 2014 ., Outcome data was available for 87 of 106 Ebola-positive cases , giving a Case Fatality Rate ( CFR ) of 73% over the entire dataset ., We considered 65 patients between 10 and 50 years of age ., Within this group , not all individuals had complete clinical chart , metabolic panel , and virus load records available ( Fig 1 ) ., Sign and symptom data were obtained at time of presentation on 34 patients that were admitted to KGH and had a clinical chart ., Metabolic panels were performed on 47 patients with adequate sample volumes , with a Piccolo Blood Chemistry Analyzer and Comprehensive Metabolic Reagent Discs ( Abaxis ) , following the manufacturer’s guidelines ., Virus load was determined in 58 cases with adequate sample volumes using the Power SYBR Green RNA-to-CT 1-Step quantitative RT-PCR assay ( Life Technologies ) at Harvard University ., Both metabolic panel and PCR data used to develop our models was collected during triaging of the patients upon admission , and follow-up data , although available for some patients , was not included in our analyses ., We compiled this data into a single file in CSV format , and made it available in a public repository ( http://dx . doi . org/10 . 5281/zenodo . 
14565 ) , together with all original Excel spreadsheets and the cleaning and aggregation scripts ( http://fathom . info/mirador/ebola/datarelease ) , as well as a Dataverse hosted on the Harvard Dataverse Network ( http://dx . doi . org/10 . 7910/DVN/29296 ) ., In a separate effort , we designed the tool Mirador ( http://fathom . info/mirador/ ) to allow users to identify statistical associations in complex datasets using an interactive visualization interface ., This visual analysis is guided by an underlying statistical module that ranks the associations using pairwise Mutual Information 16 ., Mirador automatically computes a sample estimate of the Mutual Information between each pair of variables inspected by the user , and performs a bootstrap significance test 17 to determine if the variables are independent within a confidence level set through the interface ., This calculation relies on an optimal bin-width algorithm 18 , which finds the grid minimizing the Mean Integrated Squared Error between the estimates from the data and the underlying joint distributions ., The user can then rigorously test the hypothesis of association suggested by Mirador using more specialized tools such as R or SPSS , and finally incorporate them into predictive models ., We used the Maximal Information Coefficient ( MIC ) statistic developed by Reshef et al 19 , calculated with the MINE program ( http://www . exploredata . 
net/ ) , to rank the associations found with Mirador ., Since only 21 patients in the dataset contain complete clinical , laboratory , and viral load information , we applied three Multiple Imputation ( MI ) programs to impute the missing values: Amelia II , which assumes the data follows a multivariate normal distribution and uses a bootstrapped expectation-maximization algorithm to impute the missing values 20; MICE 21 ( Multivariate Imputation by Chained Equations ) , where missing values in each variable are iteratively imputed given the other variables in the data until convergence is attained; and Hmisc 22 , which is also based on the chained equations method ., All MI methods require that the missing entries satisfy the Missing Completely At Random ( MCAR ) condition in order to generate unbiased results ., Specifically , MCAR means that the distribution of the missing entries is entirely random and depends neither on the observed nor on the missing values ., Furthermore , Amelia requires the observed data to follow a multivariate normal distribution ., We used Little’s MCAR chi-square test 23 and Jamshidian and Jalal’s test for Homoscedasticity , Multivariate Normality , and MCAR 24 to rigorously test for these conditions ., After testing for the MCAR condition , we ran each MI program m times to generate m “completed” copies of the original dataset , which we aggregated into a single training set of larger size ( S4 Fig ) ., We performed a detailed comparison of the performance of the predictor when using values imputed by each of the three MI programs , which is described in the results ., The ML pipeline takes as inputs the source data and a list of covariates , and outputs a trained predictor that can be evaluated with several accuracy metrics ., It includes the following classifiers: a single-layer Artificial Neural Network ( ANN ) 25 implemented from scratch , and Logistic Regression ( LR ) , Decision Tree ( DT ) , and Support Vector Machine ( SVM )
classifiers from scikit-learn 26 ., Each classifier was trained on all possible combinations of input covariates , from the subset found with Mirador and MINE , to avoid issues with variable selection methods 27 , and to generate an ensemble of predictors that could be applied to different combinations of available clinical data ., We applied multiple rounds of cross-validation in order to train the classifiers for each selection of covariates ., We first split the records without missing values into two sets with identical CFR , then set one aside for model testing ., We combined the second set with the remaining records that include missing values , and used this data as the input for the MI programs ., Depending on the percentage of complete records reserved for testing and the number of MIs , we ended up with testing sets of 6–10 cases and training sets of 200–300 cases ., This ensured having more than 10 samples per variable during predictor training , the accepted minimum in predictive modeling 28 ., We generated 100 such testing/training set pairs by randomly reshuffling complete records between test set and training set ., Each model was initially ranked by its mean F1-score , which is the harmonic mean of precision and sensitivity ., The mean and standard deviation were calculated over the 100 cross-validation iterations for each combination of input covariates ., We then used the bootstrap method originally introduced by Harrell 29 to quantify the optimistic bias 30 in the area under the receiver operating characteristic curve ( AUC or c-statistic ) ., We generated 100 bootstrap samples with replacement for each model , and re-trained the model on these samples ., We evaluated the AUC on the bootstrap sample and the original sample , and reported the mean of the AUCboot − AUCorig difference as the estimated optimism ., Finally , we carried out standard logistic regression with variable selection , with the goal of evaluating the effect of our MI protocol on other model
selection algorithms , and comparing the resulting standard model with the top-ranking models from our pipeline ., We used the built-in step ( ) function in R to perform backward variable selection with the Akaike Information Criterion ( AIC ) , the ROCR package to compute AUC , and the Boot package to estimate the optimistic bias with bootstrap sampling ., The ML models generated by our pipeline are essentially Python scripts together with some parameter files ., The Kivy framework ( http://www . kivy . org ) allowed us to package these scripts as mobile apps that can be deployed on tablets or smartphones through Google or Apple’s app stores ., We created a prototype app including the models described in this paper , currently available as Ebola CARE ( Computational Assignment of Risk Estimates ) , shown in Fig 2 ., We have only implemented the ANN classifier in the Ebola CARE app for the time being , because the scikit-learn classifiers could not be compiled to run on Android devices , which is a requirement for our prognosis app ., Once installed , the app is entirely stand-alone , does not require Internet connectivity to run , and can be updated once better models are available ., We began by identifying the clinical and laboratory factors that provide the strongest association with EVD outcome ., Earlier reports indicate that EVD mortality rates in this outbreak are found to be significantly different among children 31 and older adults 7 , and this pattern holds in our data: CFR is higher than 90% for the 18 patients older than 50 years of age , and 75% for the 14 patients under 10 years of age; we therefore restricted our analyses to patients between 10 and 50 years of age ., Within this age range , exploratory analysis with Mirador ( http://fathom .
info/mirador/ ) led us to identify 24 clinical and laboratory factors that show plausible association with EVD outcome: virus load ( PCR ) , temperature ( temp ) , aspartate aminotransferase ( AST ) , Calcium ( Ca ) , Alkaline Phosphatase ( ALK ) , Chloride ( Cl ) , Alanine Aminotransferase ( ALT ) , Creatinine ( CRE ) , Total Carbon Dioxide ( tCO2 ) , Albumin ( Alb ) , Blood Urea Nitrogen ( BUN ) , Total Protein ( TP ) , weakness , vomit , edema , confusion , respiratory rate , back pain , dizziness , retrosternal pain , diarrhea , heart rate , diastolic pressure , and abdominal pain ., Boxplots and histograms for all factors are depicted in S1 and S2 Figs , which also present the P-values for the association between Outcome and each factor , for the Fisher exact and T-tests ( for nominal and numerical factors , respectively ) ., We applied the Maximal Information Coefficient ( MIC ) statistic developed by Reshef et al . 19 , calculated with the MINE program ( http://www . exploredata . net/ ) , to rank these 24 factors ., We used the ranking to select two informative subsets of 10 variables each ( shown in Fig 3 ) , one with PCR and the other without , by picking the top 5 laboratory results and top 5 clinical chart variables ., The PCR set comprises PCR , temp , AST , ALK , CRE , tCO2 , heart rate , diarrhea , weakness , and vomit , while the non-PCR set includes temp , AST , ALK , CRE , tCO2 , BUN , heart rate , diarrhea , weakness , and vomit ., None of these variables are capable of predicting outcome accurately in isolation ., The performance of the univariate LR classifier is highest with PCR as input , with an F1-score of 0 . 67 , and below 0 . 5 for all other variables ., This result is consistent with the recent report from Crowe et al .
32 , which highlights the importance of viral load in the prognosis of EVD ., We evaluated the impact of the MI step on the predictors’ performance , and chose MI parameters accordingly ., In all three MI modules , Amelia II , MICE and Hmisc , we can adjust the fraction of complete records to be included in the data to impute , as well as the number of imputed copies that are aggregated into a single training set ., We considered all combinations of these two parameters , when allowing 20% , 35% , and 50% as the percentages of complete records used during imputation , and 1 , 5 and 10 for the number of imputed copies ., We examined the resulting 9 combinations of parameters across the 4 predictors , LR , ANN , DT , and SVM ., Accuracy , as measured by mean F1-score , in the PCR case does not seem to depend on the number of imputed copies , the percentage of completed records , or the MI algorithm ( S3A Fig ) ., In contrast , both higher percentage of completed records and higher number of imputed copies do have a definite enhancing effect on the mean F1-score for the non-PCR case ( S3B Fig ) , while the choice of MI algorithm does not seem to have a significant impact ., Counterintuitively , the standard deviation of the F1-score in the PCR case increases with larger percentage of completed records ., However , this trend can be explained as follows: the complete records not included in the training set are used to construct the testing set , therefore higher percentages of complete records used during MI result in smaller testing sets ., The effect of a single false positive or negative is proportionally larger in smaller testing sets than in larger ones , which results in higher variation of the F1-score in the former ., We then verified the validity of the MCAR condition in both the PCR and non-PCR sets , crucial to guarantee unbiased imputations , using Little’s chi-square test and Jamshidian and Jalal’s test ., Since all the data used in our models was collected at
presentation , there is lower risk of non-random missing patterns due to patient death and withdrawal ., The tests for MCAR indeed confirm this: Little’s statistic takes a value of 45 . 28 with a P-value of 0 . 11 on the PCR set , while the non-PCR set gives a statistic value of 19 . 06 with a P-value of 0 . 32 , meaning that in both cases there is no evidence in the data against the MCAR hypothesis at the 0 . 05 significance level ., Furthermore , Jamshidian and Jalal’s test for Homoscedasticity , Multivariate Normality , and MCAR does not reject the multivariate normality or MCAR hypothesis at the 0 . 05 significance level for both the PCR and non-PCR sets , with P-values of 0 . 79 and 0 . 06 , respectively ., This last result in particular validates the use of the Amelia II package , which assumes that the data follows a normal distribution ., One weakness of the Amelia II program is that combinations of variables that are highly collinear might cause the MI computation to fail to converge ., We addressed this problem by re-running the MI using either MICE or Hmisc when Amelia fails to converge more than 5 times ., We generated our training sets with 50% of the complete records in the data to impute , and 5 imputed copies for aggregation into a single training set ., The performance difference between 5 and 10 imputed copies did not seem large enough to justify the increased computing times ., Having developed and carefully evaluated our models , we demonstrate that we are able to predict EVD prognosis with a mean F1-score of 0 .
9 or higher , for EVD patients aged 10 to 50 ., We arrived at this by exhaustively generating two separate ensembles of predictors , one with PCR data and the other without ., The predictors including PCR data are plotted on a scatter plot of the mean F1-score vs standard deviation ( Fig 4a ) computed over 100 rounds of cross-validation for each predictor ., The ensemble consists of 4 × ( 2^9−1 ) = 2044 predictors ( LR , ANN , DT , SVM ) that were trained on all combinations of the PCR set ( 9 variables ) , having PCR as a fixed input variable ., The LR and ANN classifiers are the best performers of the four prediction methods , with 156 models ( 71 ANN , 64 LR , 21 SVM ) yielding an F1-score of 0 . 9 or higher ., Similarly , we generated 4 × ( 2^10−11 ) = 4052 predictors without PCR data ( Fig 4c ) , which were trained on all combinations of the non-PCR set of variables ( 10 variables ) with at least two elements ., We obtained 45 models ( 18 ANN , 24 LR , 3 SVM ) with a mean F1-score of 0 . 9 or higher ., A number of the variables emerged as those most often included in the top-ranked models , in the PCR and non-PCR cases respectively ( Fig 4b and 4d ) ., Notably , in addition to temperature , CRE , ALK , and tCO2 levels are consistently present in the predictors including PCR , while the lack of PCR data makes AST levels and the onset of diarrhea more relevant for accurate prognosis ., The optimistic bias of the AUC for the top predictors , both in the PCR and non-PCR cases , is below 0 . 01 for most of them , with a standard deviation of 0 .
03 ( Fig 5a and 5b ) ., This analysis indicates that even though our models are over-fitted for the current data , the magnitude of bias is minor ., S1 and S2 Tables detail all the top-performing predictors and their optimism-corrected AUC scores in the PCR and non-PCR cases , respectively ., S5 Fig shows aggregated ROC curves over all the models for each predictor , for the PCR and non-PCR cases ., The aggregated AUCs are 0 . 96 ( LR ) , 0 . 95 ( ANN ) , 0 . 94 ( SVM ) , and 0 . 84 ( DT ) in the PCR models , and 0 . 88 ( LR , ANN ) , 0 . 86 ( SVM ) , and 0 . 77 ( DT ) in the non-PCR models ., The similar performance of our simple ANN predictor and scikit-learn’s LR classifier suggests that the dependency between the covariates and outcome can be modeled linearly; however , larger datasets would enable us to train more complex ANNs with potentially better performance across different groups of patients ., The comparison with variable selection shows an effect of the MI protocol similar to that observed in the top-ranked models ., The optimistic bias of the AUC for the selected PCR and non-PCR models consistently decreases to less than 0 .
01 as the number of imputed copies increases from 1 to 5 ( Fig 5c and 5d ) ., On the other hand , these models assign very small coefficients and odds ratios very close to 1 to the laboratory covariates ( Tables 1 and 2 ) ., This suggests that most of the information in these models is captured by the clinical symptoms ( temperature , diarrhea , vomit ) , although weakness consistently presents an odds ratio less than 1 , contradicting the expected dependency with outcome ., In general , the laboratory variables are the highest ranked according to MIC , and are also included in most of the top-ranked models , using either the LR or ANN classifiers ., These results lead us to think that the variable selection approach is discarding relevant information for outcome prediction , which we are able to capture in our ensemble of ML predictors ., The Ebola CARE app packages a total of 82 ANN models , selected from those with a mean F1-score above 0 . 9 , but discarding the models with a standard deviation of 0 , in order to avoid potentially overfitted models ., This set incorporates 64 PCR and 18 non-PCR models , so the app can still be used when viral load information is not available ., We entered into Ebola CARE all the patients who had complete data for at least one model in the app , and recorded the risk prediction as presented after inputting the symptoms ., Predictions for a total of 34 patients were obtained in this way ., For this subgroup of patients , the mortality rate was 79% ( 7 survived , 27 died ) , and the app only misclassified two , one in each outcome group ., In other words , the precision and sensitivity were both 0 . 96 ., However , this number likely overestimates the performance of the app , since some of these patients used in this test were also included in model training ., The data used in this study is hosted at a Dataverse in the Harvard Dataverse Network ( http://dx.doi.org/10.
7910/DVN/29296 ) , the source code of Mirador and the ML pipeline is available on Github ( https://github.com/mirador/mirador , https://github.com/broadinstitute/ebola-predictor ) , and the model files ( all training and testing sets ) are deposited on Zenodo ( http://dx.doi.org/10.5281/zenodo.19831 ) ., This work represents the first known application of ML techniques to EVD prognosis prediction ., The results suggest that a small set of clinical symptoms and laboratory tests could be sufficient to accurately prognosticate EVD outcome , and that these symptoms and tests should be given particular attention by health care professionals ., By aggregating all the high-performing models obtained in our exhaustive analysis , we can construct a composite algorithm that runs the best predictor depending on the available data ., We have developed a simple app , Ebola CARE , which can be installed on mobile tablet or phone devices , and would complement rapid EVD diagnostic kits and data-entry devices ., Our Ebola CARE app is a proof-of-concept , only applicable to Ebola Zaire patients treated in similar conditions as those in KGH ., New clinical data will enable us and other groups to independently validate the app , and to generate more generalizable models with higher statistical significance ., Within the current constraints , the results also shed light on the most informative clinical predictors for adult patients ( temperature , diarrhea , creatinine , alkaline phosphatase , aspartate aminotransferase , total carbon dioxide ) and demonstrate that PCR provides critical additional information to quantify the seriousness of the Ebola virus infection and better estimate the risk of the patients ., In general , these results are consistent with the findings from Schieffelin , Levine , Yan , and Zhang ., Current discrepancies ( for instance , Zhang reports chest pain , coma , and confusion as significantly associated with EVD prognosis whereas we do not ) could be
attributed to the small sample sizes , missing data , and different clinical protocols at the various treatment sites ., The prevalence of missing data in the dataset used in this study , and the lack of other publicly available datasets , are fundamental challenges in predictive modeling ., By combining MI with four distinct ML predictors , we offer a direct approach for dealing with the first challenge ., The use of ANN and LR classifiers in combination with a MI enrichment methodology shows promise as a way to accurately predict the outcome of EVD patients given their initial clinical symptoms and laboratory results ., New patient data is critical to validate and extend these results and protocols ., Richer datasets incorporating more diverse samples from different locations will allow us and other researchers to train better ML classifiers and to incorporate population variability ., The development of survival models could be another very important application of these techniques to assist not only in prognosis upon patient intake but also during treatment , as shown by Zhang ., Our current data includes time courses that would be useful in this kind of model , but unfortunately only for a handful of patients ., All these facts highlight the importance of immediate availability of clinical data in the context of epidemic outbreaks , so that accurate predictive tools can be quickly adopted in the field ., In summary , we have made our protocol and mobile app publicly available , fully documented ( https://github.
com/broadinstitute/ebola-predictor/wiki ) , and readily adaptable to facilitate and encourage open data sharing and further development ., Our integration of Mirador , a tool for visual exploratory analysis of complex datasets , and an ML pipeline defines a complete framework for data-driven analysis of clinical records , which could enable researchers to quickly identify associations and build predictive models ., Our app is similarly designed to be easily updated as new predictive models are developed with our pipeline , validated with better data , and packaged , to generate actionable diagnosis and help inform urgent clinical care in outbreak response . | Introduction, Methods, Results, Discussion | Assessment of the response to the 2014–15 Ebola outbreak indicates the need for innovations in data collection , sharing , and use to improve case detection and treatment ., Here we introduce a Machine Learning pipeline for Ebola Virus Disease ( EVD ) prognosis prediction , which packages the best models into a mobile app to be available in clinical care settings ., The pipeline was trained on a public EVD clinical dataset , from 106 patients in Sierra Leone ., We used a new tool for exploratory analysis , Mirador , to identify the most informative clinical factors that correlate with EVD outcome ., The small sample size and high prevalence of missing records were significant challenges ., We applied multiple imputation and bootstrap sampling to address missing data and quantify overfitting ., We trained several predictors over all combinations of covariates , which resulted in an ensemble of predictors , with and without viral load information , with an area under the receiver operator characteristic curve of 0 . 
8 or more , after correcting for optimistic bias ., We ranked the predictors by their F1-score , and those above a set threshold were compiled into a mobile app , Ebola CARE ( Computational Assignment of Risk Estimates ) ., This method demonstrates how to address small sample sizes and missing data , while creating predictive models that can be readily deployed to assist treatment in future outbreaks of EVD and other infectious diseases ., By generating an ensemble of predictors instead of relying on a single model , we are able to handle situations where patient data is partially available ., The prognosis app can be updated as new data become available , and we made all the computational protocols fully documented and open-sourced to encourage timely data sharing , independent validation , and development of better prediction models in outbreak response . | We introduce a machine-learning framework and field-deployable app to predict outcome of Ebola patients from their initial clinical symptoms ., Recent work from other authors also points out to the clinical factors that can be used to better understand patient prognosis , but there is currently no predictive model that can be deployed in the field to assist health care workers ., Mobile apps for clinical diagnosis and prognosis allow using more complex models than the scoring protocols that have been traditionally favored by clinicians , such as Apgar and MTS ., Furthermore , the WHO Ebola Interim Assessment Panel has recently concluded that innovative tools for data collection , reporting , and monitoring are needed for better response in future outbreaks ., However , incomplete clinical data will continue to be a serious problem until more robust and standardized data collection systems are in place ., Our app demonstrates how systematic data collection could lead to actionable knowledge , which in turn would trigger more and better collection , further improving the prognosis models and the app , 
essentially creating a virtuous cycle . | medicine and health sciences, clinical laboratory sciences, pathology and laboratory medicine, viral transmission and infection, pathogens, microbiology, neuroscience, artificial neural networks, viruses, diarrhea, filoviruses, mathematics, forecasting, statistics (mathematics), signs and symptoms, artificial intelligence, gastroenterology and hepatology, computational neuroscience, rna viruses, viral load, research and analysis methods, cardiology, computer and information sciences, medical microbiology, clinical laboratories, mathematical and statistical techniques, microbial pathogens, prognosis, heart rate, diagnostic medicine, ebola virus, virology, viral pathogens, biology and life sciences, physical sciences, computational biology, cognitive science, statistical methods, hemorrhagic fever viruses, organisms | null |
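The ensemble sizes reported in the results above follow from simple subset counting: with PCR fixed, all non-empty subsets of the 9 remaining variables give 4 × (2^9 − 1) = 2044 predictors across the four methods, and subsets of at least two of the 10 non-PCR variables give 4 × (2^10 − 11) = 4052. A minimal sketch of that counting, plus the F1-score used to rank models (function names here are illustrative, not the paper's actual pipeline code):

```python
import math

def ensemble_size(n_vars, methods=4, min_vars=1):
    """Count predictors: one per ML method (LR, ANN, DT, SVM) for every
    covariate subset with at least `min_vars` variables."""
    subsets = sum(math.comb(n_vars, k) for k in range(min_vars, n_vars + 1))
    return methods * subsets

# PCR ensemble: 9 free variables, PCR itself fixed in every model.
pcr_models = ensemble_size(9, min_vars=1)       # 4 * (2**9 - 1) = 2044
# Non-PCR ensemble: 10 variables, subsets of at least two elements.
non_pcr_models = ensemble_size(10, min_vars=2)  # 4 * (2**10 - 11) = 4052

def f1(precision, recall):
    """Harmonic mean of precision and recall, used to rank the models."""
    return 2 * precision * recall / (precision + recall)
```

For example, the app evaluation on 34 patients reports precision and sensitivity both equal to 0.96, which gives `f1(0.96, 0.96) == 0.96`, consistent with an F1-based ranking threshold of 0.9.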
1,164 | journal.pgen.1007873 | 2,019 | An ABCA4 loss-of-function mutation causes a canine form of Stargardt disease | Inherited retinal dystrophies are a genetically and clinically heterogeneous group of eye diseases leading to severe visual impairment in both humans and dogs 1–6 ., These diseases include various forms of retinitis pigmentosa ( RP ) , Leber congenital amaurosis ( LCA ) , age-related macular degeneration ( AMD ) , cone-rod dystrophies ( CRD ) , and Stargardt disease ( STGD ) and are caused by many different mutations leading to deterioration of neuroretinal and retinal pigment epithelial ( RPE ) function ., Over 100 years ago , progressive retinal atrophy ( PRA ) was described as a canine equivalent of human RP 7 and is today the most common inherited retinal degenerative disease in dogs 8 ., The shared phenotypic similarity of inherited retinal dystrophies in dogs and humans has made canine models attractive for gene discovery and for experimental treatments , including gene therapy 6 , 9–13 ., The development of gene therapy for RPE65-mediated LCA is an example where a canine comparative model has been instrumental for proof-of-principle trials 9 , 11 , 14–16 ., The identification of the p . C2Y mutation ( OMIM: 610598 . 0001 ) in the PRCD gene is another illustrative example of the benefits of using canine genetics to find homologous candidate genes for human retinal dystrophies; the PRCD gene was initially mapped and identified in PRA-affected dogs and subsequently in a human family with RP 17 ., This mutation segregates in multiple dog breeds , including the Labrador retriever , where no other causative genetic variants for inherited retinal degenerations have been identified ., In this study , a Labrador retriever sib-pair , one male and one female , negative for the p . 
C2Y mutation , was diagnosed with a form of retinal disease which until now had not been characterized clinically ., To identify genetic variants associated with this novel canine retinal disease , we performed whole-genome sequencing ( WGS ) of the two affected individuals and their unaffected parents ., The affected sib-pair ( LAB3 and LAB4 , see S1 Fig ) was visually impaired under both daylight and dimlight conditions when examined at 10 years of age ., Their pupils were dilated under daylight conditions and pupillary light and dazzle reflexes were abnormal , whereas menace responses were present ., On indirect ophthalmoscopy , the tapetal reflectivity varied from normal to grayish hyporeflection when the indirect ophthalmoscopy lens was tilted slightly back and forth , both in the visual streak and in the more peripheral parts of the tapetal fundus in both eyes of the affected dogs ., The visual streak is an area of high photoreceptor cell density in the canine retina , located superior to the optic disc and extending horizontally from the nasal to the temporal region 18 ., Furthermore , a mild to moderate vascular attenuation was observed , as seen in the fundus photograph , taken at the age of 10 years , of the affected male ( LAB4 ) and compared to a fundus photograph of an unaffected , age-matched Labrador retriever dog ( LAB27 ) ( Fig 1 ) ., These ophthalmoscopic findings were symmetrical between the eyes of the affected dogs , diffusely spread over the tapetal fundus and not strictly confined to the visual streak or area centralis ., The WGS of the family quartet ( LAB1 , LAB2 , LAB3 and LAB4 , see S1 Fig ) resulted in an average coverage of 18 . 2x ( S1 Table ) and the identification of 6 . 0 × 10^6 single nucleotide variants ( SNVs ) and 1 . 9 × 10^6 insertions/deletions ( INDELs ) , of which 48 , 299 SNVs and 5 , 289 INDELs were exonic ., We used conditional filtering to identify 322 SNVs ( of which 117 were nonsynonymous ) and 21 INDELs that were consistent with an autosomal recessive pattern of inheritance ( S2 Table ) ., To further reduce the number of candidate variants , we compared the positions of the variants to 23 additional dog genome sequences to identify 18 nonsynonymous SNVs in 13 different genes and four INDELs in four genes that were private to the Labrador retriever family ( S2 and S3 Tables ) ., Fourteen of these genes were not strong candidates based on reported function and predicted effect and were not considered further ., The remaining three genes , KIAA1549 , Usherin ( USH2A ) , and ATP binding cassette subfamily A member 4 ( ABCA4 ) are listed in the Retinal Information Network ( RetNet ) database as associated with human retinal diseases and thus considered as causative candidates for canine retinal degeneration 19 ., However , the variant in the KIAA1549 gene was predicted to have a neutral effect on the protein structure ( PROVEAN score -2 . 333 , Polyphen-2 score 0 . 065 ) and was therefore discarded ., The genetic variants in the USH2A ( exon 43; c . 7244C>T ) and ABCA4 ( exon 28; c . 4176insC ) genes were validated by Sanger sequencing ., Mutations in the human USH2A gene are associated with Usher syndrome and RP , resulting in hearing loss and visual impairment 20 ., The identified nonsynonymous substitution in the USH2A gene was scored as “probably damaging” using Polyphen-2 ( score of 0 . 97 ) and as “deleterious” using PROVEAN ( score of -4 .
933 ) ( S3 Table ) ., The insertion in the ABCA4 gene was predicted to result in a premature stop-codon at amino acid position 1395 ., Next , we evaluated if the genetic variants of USH2A and ABCA4 were concordant with the disease by genotyping eight additional clinically affected and fourteen unaffected Labrador retrievers ., Out of these 22 dogs , 16 were related to the family quartet used in the WGS ( S1 Fig ) ., The USH2A variant was discordant with the disease phenotype and was therefore excluded from further analysis ( S4 Table ) ., In contrast , all eight affected individuals were homozygous for the ABCA4 insertion and the 14 unaffected individuals were either heterozygous or homozygous for the wild-type allele ( S4 Table ) ., The identified variant in the ABCA4 gene is a single base pair ( bp ) insertion of a cytosine ( C ) in a cytosine mononucleotide-repeat region in exon 28 , where the canine reference sequence consists of seven cytosines ( CanFam3 . 1 Chr6:55 , 146 , 550–55 , 146 , 556 ) ( Fig 2A ) ., The single bp insertion in this region results in a non-synonymous substitution at the first codon downstream of the repeat , and subsequently leads to a premature stop codon ( p . 
F1393Lfs*1395 ) ( Fig 2C ) ., If translated , this would result in a truncation of the last 874 amino acid residues of the wild-type ABCA4 protein ( Fig 2B and 2C ) ., Both the human and the dog ABCA4 gene consists of 50 exons and encodes a ~250 kDa ABC transporter protein ( Fig 2D ) ( human and dog ABCA4 consists of 2 , 273 and 2 , 268 amino acid residues , respectively ) 21–23 ., ABCA4 is a flippase , localized to the disc membranes of photoreceptor outer segments and facilitates the clearance of all-trans-retinal from the photoreceptor discs 24–26 ., To compare retinal ABCA4 gene expression in the affected male ( LAB4 ) , his heterozygous sibling ( LAB6 ) , and a wild-type Labrador retriever ( LAB24 ) , we performed quantitative RT-PCR ( qPCR ) ., Primers were designed to amplify three different regions of the gene ., The amplicons spanned the 5´-end ( exons 2–3 ) , the identified insertion ( exons 27–28 ) and the 3´-end of the ABCA4 gene ( exons 47–48 ) ( S5 Table ) ., Each of the three primer pairs amplified a product of expected size in all three individuals ., This suggests that despite the insertion leading to a premature stop codon in exon 28 , the transcripts are correctly spliced ., Relative levels of ABCA4 mRNA were lower for the allele with the insertion in comparison to the wild-type allele ( Fig 3A ) ., This is consistent with nonsense-mediated decay ( NMD ) degrading a fraction of the transcripts with premature translation stop codon 27 ., Transcripts not targeted by NMD could potentially be translated into a truncated protein of only 1 , 394 amino acid residues including the first extracellular domain ( ECD1 ) and the first nucleotide-binding domain ( NBD1 ) ( Fig 2B ) but lacking most of the second extracellular domain ( ECD2 ) and the second nucleotide-binding domain ( NBD2 ) 28–30 ( Fig 2B–2D ) ., The NBDs are conserved across species and the NBD2 , which is also referred to as the ATP binding cassette of the ABCA4 protein , has been shown to be 
particularly critical for its function as a flippase 28 , 30 ., To investigate the presence of full-length protein , we performed western blot analysis using an anti-ABCA4 antibody recognizing a C-terminal epitope and detecting a protein product with an approximate size of ~250 kDa ., We observed a single , correctly-sized band in samples prepared from both wild-type ( LAB24 ) and heterozygous ( LAB6 ) dogs ., The intensity of staining in retinal protein samples from the heterozygous individual was markedly lower in comparison to the samples from the wild-type retina ( Fig 3B ) ., In contrast , no band was detected in the retinal sample from the affected dog ( LAB4 ) ., To confirm the presence of photoreceptor cells , we used an anti-RHO antibody and detected rhodopsin in all three samples ( Fig 3B ) ., These results suggest that no full-length ABCA4 protein product is produced as a result of the insertion leading to a frameshift and a premature stop codon ., Fluorescence histochemistry was used to analyze the ABCA4 and rhodopsin protein expression in retinas from three dogs with different ABCA4 genotypes ., In addition , we used peanut agglutinin ( PNA ) as it selectively binds to cone photoreceptors 31 ., Consistent with the western blot results , rhodopsin immunoreactivity ( IR ) was detected in the outer segments of rod photoreceptors in all three retinas ( S2 Fig ) ., In the wild-type ( LAB26 ) and the heterozygous dog ( LAB6 ) , the ABCA4 IR was seen in the outer segments of the neural retina and in the RPE ( Fig 4A and 4B ) ., The ABCA4 IR was partially overlapping with the PNA staining , observed in both the inner and outer segments of the cone photoreceptor cells ( Fig 4A and 4B ) ., In sharp contrast , ABCA4 expression was absent and only a limited PNA staining was observed in the retina of the affected dog ( LAB4; Fig 4C ) ., The observed staining pattern in the fluorescence histochemistry thus suggested loss of cone photoreceptors ., To quantify 
photoreceptor degeneration in the retina of the affected dog ( LAB4 ) , we counted nuclei in the outer and inner nuclear layers and compared the results from the three genotypes ., The photoreceptor nuclei are positioned in the outer nuclear layer ( ONL ) , and the inner nuclear layer ( INL ) is composed of the horizontal , bipolar , amacrine and Müller glia cell nuclei ., An approximately 46% reduction of the number of nuclei in the ONL was observed in the affected retina compared to the wild-type ( LAB26 ) and heterozygous ( LAB6 ) retinas ( Fig 4D ) ., Thus , the reduction of nuclei in the ONL supported a reduction of the number of photoreceptors ., The results from the IR and PNA stainings had already shown a profound reduction of cone photoreceptors , but to assess whether rods were also degenerated in the affected retina , we inferred the number of rod photoreceptors in the wild-type and heterozygous retinas by subtracting the number of cone nuclei from the total number of nuclei in the ONL ., An approximately 41% reduction of rod nuclei was observed in the affected retina , consistent with a retinal degeneration also involving rod photoreceptors ( S2 Fig ) ., The corresponding reduction of nuclei was not seen in the INL , suggesting that photoreceptors , but not neurons in the INL , were affected ., Taken together , we observed loss of ABCA4 protein , profound reduction of cone outer segment PNA staining , and a reduction of photoreceptor nuclei in the affected retina ., The observed reduction in both cone and rod nuclei implies that not only cone photoreceptors but also rod photoreceptors degenerate in the ABCA4-/- retina of these dogs ., The RPE layer of the affected retina was autofluorescent ( Fig 4C ) , indicating accumulation of lipofuscin 32 ., We estimated the intensity of autofluorescence in RPE from retinas representing the three ABCA4 genotypes ( LAB4 , LAB6 and LAB26 ) ., The autofluorescence in the affected retina was approximately seven-fold higher
compared to the retinas of the other genotypes ( Fig 4G and 4H ) ., Light microscopic histopathology ( Fig 5 ) was performed on retina from the affected dog ( LAB4 ) , a heterozygote ( LAB6 ) and an unaffected dog ( German spaniel ) ., We examined plastic-embedded thick sections taken from tapetal and non-tapetal regions superior and nasal to the optic nerve ., An accumulation of round lipophilic bodies was found in the RPE overlying the tapetal region of the affected retina ( Fig 5B ) ., In contrast to the pigmented RPE in humans , dogs have a reflective area , the tapetum lucidum , in the choroid , where the overlying RPE is not pigmented 33 ., The round lipophilic bodies predominantly seen in the affected dog are therefore not likely to be melanosomes , but rather an accumulation of lipofuscin ., This is consistent with the increased intensity of autofluorescence observed in the affected retina as described above ( Fig 4G and 4H ) ., In the nasal , non-tapetal part of the retina of the affected male , we observed multifocal RPE hyperplasia and hypertrophy , accompanied by overlying retinal atrophy in some , but not all , of these foci ( S3 Fig ) ., Consistent with the reduction of cone photoreceptors observed in the frozen sections ( Fig 4D; S2 Fig ) , cone nuclei were markedly reduced in the affected dog ( Fig 5A ) compared to heterozygote and control retinas ., Reduced ONL thickness could not be unambiguously confirmed; however , it should be noted that very short segments of retina were used for plastic embedding , and that regional ONL atrophy could therefore not be ruled out ., In conclusion , histopathologic comparison identified increased lipofuscin accumulation in the RPE , cone loss in the central superior retina , and focal RPE hypertrophy and hyperplasia in the nasal retina of the affected dog ., We used flash-electroretinography ( FERG ) to study photoreceptor function in four dogs at the age of 10 years ., The inclination of the first part of the
a-waves of the dark-adapted FERG in response to a bright stimulus was less steep and the amplitudes of the a-waves were lower in both affected dogs ( LAB3 and LAB4 ) and their heterozygous sibling ( LAB6 ) , as compared to the age-matched , unaffected dog ( LAB22 ) ( Fig 6A ) , suggesting abnormal photoreceptor function in the affected dogs ., The light-adapted FERG responses were subnormal for the affected dogs , showing profoundly impaired cone function ( Fig 6B and 6C ) ., The light-adapted responses of the heterozygous dog were closer to those of the wild-type dog , although amplitudes were slightly lower and b-wave and flicker implicit times slightly longer ( Fig 6B and 6C ) ., Furthermore , dark-adaptation , reflecting rod photoreceptor function , was clearly delayed in the affected dogs ., After 20 minutes , the time commonly used for dark-adaptation 34 , the rod responses of the affected dogs had very low amplitudes ., After one hour of dark-adaptation , the affected male ( LAB4 ) reached near normal amplitudes , whereas the amplitudes of his female sibling ( LAB3 ) remained clearly subnormal ( Fig 6D ) , showing that the rod photoreceptors were also affected , but their function was better preserved than that of the cone photoreceptors ., Optical coherence tomography ( OCT ) was performed along the visual streak in three Labrador retriever dogs ( S4 Fig ) ., The affected dog ( LAB4 ) had a thinner retina with marked reduction in ONL thickness ., Furthermore , we observed some areas of full-thickness retinal atrophy , where the retinal layers could not be distinguished ., We were unable to link the areas of alternating normal to grayish hyporeflectivity observed ophthalmoscopically ( Fig 1 ) to localized retinal lesions on OCT ., The abnormal and variable tapetal reflectivity seen on ophthalmoscopy was therefore considered to be a sign of a diffusely spread degeneration altering the translucency of the retina overlying the tapetum lucidum .,
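The conditional filtering used earlier in the WGS analysis of this family quartet (keeping variants for which both affected offspring are homozygous for the alternate allele, both unaffected parents are heterozygous carriers, and no control genome is homozygous for the alternate allele) can be sketched as follows. This is a minimal illustration with hypothetical genotype records, not the actual analysis pipeline; the variant identifiers are made up for the example:

```python
def recessive_candidates(variants):
    """Keep variants consistent with autosomal recessive inheritance:
    affected offspring homozygous alternate ('1/1'), unaffected parents
    heterozygous ('0/1'), and no control dog homozygous alternate."""
    kept = []
    for v in variants:
        affected_ok = all(g == "1/1" for g in v["affected"])
        parents_ok = all(g == "0/1" for g in v["parents"])
        controls_ok = all(g != "1/1" for g in v["controls"])
        if affected_ok and parents_ok and controls_ok:
            kept.append(v["id"])
    return kept

# Toy records: only the first satisfies the recessive model.
toy = [
    {"id": "varA",  # hypothetical candidate insertion
     "affected": ["1/1", "1/1"], "parents": ["0/1", "0/1"],
     "controls": ["0/0", "0/1"]},
    {"id": "varB",  # one affected sib is only heterozygous: discarded
     "affected": ["1/1", "0/1"], "parents": ["0/1", "0/1"],
     "controls": ["0/0", "0/0"]},
]
print(recessive_candidates(toy))  # ['varA']
```

In the study, this family-based filter reduced millions of variants to a few hundred, after which comparison against 23 additional dog genomes left only variants private to the Labrador retriever family.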
Additional examinations using confocal scanning laser ophthalmoscopy ( cSLO ) and OCT imaging of two affected dogs at the age of 10 and 12 years ( LAB10 and LAB16 , respectively ) confirmed a thinning of the outer retina along the visual streak as compared to two age-matched wild-type dogs ( LAB22 and LAB23 ) ( Figs 7 and 8 ) ., Compared to the wild-type dog ( LAB22 ) ( Fig 7A ) , a more irregular tapetal reflection with a hyporeflective visual streak and vascular attenuation was observed on the cSLO of the affected dog ( LAB10 ) ( Fig 7B ) ., The thickness of the INL was similar in both the wild-type and the affected dogs ( Fig 7C and 7D ) ., The external limiting membrane was thickened and hyperreflective ( Fig 7D ) , whereas the ellipsoid zone ( EZ ) , which corresponds to the junction between the outer and inner segments of the photoreceptors , was fragmented ( Fig 7D ) ., The total retinal thickness ( Fig 8A ) was markedly reduced in both affected Labrador retriever dogs ( LAB10 and LAB16 ) compared to the wild-type dogs ( LAB22 and LAB23 ) ., However , measurements of the inner retina ( Fig 8B ) showed similar thickness in this part of the retina in all four dogs analyzed ., Total photoreceptor length ( REC+; Fig 8C ) and the thickness of the ONL ( Fig 8D ) were markedly reduced both nasally and temporally in the affected dogs , showing that the degeneration of the outer retina is not confined only to the area centralis ., The average distance from the EZ to the RPE/Bruch’s membrane ( the innermost layer of the choroid ) was similar in both genotypes ( Fig 8E ) ., Taken together , vision of the affected dogs at the age of 10 to 12 years was impaired under both daylight and dimlight conditions , but they still retained some vision throughout their lifetime ., The clinical features included ophthalmoscopic signs of bilateral diffuse retinal degeneration , and in vivo morphology indicated a reduction of the number of photoreceptors ., The cone function was
profoundly abnormal , whereas rod function was better preserved ., A hallmark of human ABCA4-mediated diseases such as STGD , is the accumulation of autofluorescent lipofuscin in the RPE throughout the fundus 32 , 35 ., This is also seen in mouse models 36 , 37 as well as in the canine retinal degenerative disease described here ., In addition , cone photoreceptors are typically affected prior to rods 38 ., Furthermore , human RPE cells have been shown to be hypertrophic , and at more advanced stages of the disease , RPE is lost in the perifovea 39 , 40 ., Similar to the human histopathology , we observed accumulation of autofluorescent lipofuscin , regions of RPE hypertrophy and hyperplasia , as well as thinning of ONL in the affected dog ., Mutations in the human ABCA4 ( ABCR ) gene cause several clinically different diseases ranging from autosomal recessive STGD and autosomal recessive forms of CRD to RP 41–43 ., The severity of the disease phenotype is suggested to be dependent on the severity of the mutations 41 ., The gene was first cloned and characterized in 1997 21 , and to date , 873 missense and 58 loss-of-function variants have been reported in the ExAC database 44 , 45 , many of which are associated with visual impairment 46–48 ., The ABCA4 protein functions as an ATP-dependent flippase in the visual cycle , transporting N-retinylidene-phosphatidylethanolamine ( N-Ret-PE ) from the photoreceptor disc lumen to the cytoplasmic side of the disc membrane 49 , 50 ., N-Ret-PE is a reversible adduct spontaneously formed between all-trans-retinal and phosphatidylethanolamine ( PE ) , and is unable to diffuse across the membrane by itself ., Once transported by ABCA4 , N-Ret-PE is dissociated and all-trans-retinal will re-enter the visual cycle 51 ., Defective ABCA4 leads to accumulation of N-Ret-PE , which together with all-trans-retinal , will form di-retinoid-pyridinium-phosphatidylethanolamine ( A2PE ) that is further hydrolyzed to phosphatidic acid ( PA ) 
and a toxic bis-retinoid , di-retinal-pyridinium-ethanolamine ( A2E ) 52 ., This will lead to an accumulation of A2E in RPE cells when photoreceptor discs are circadially shed and phagocytosed by the RPE 36 , 53 , 54 ., A2E is a major component of RPE lipofuscin , accounts for a substantial portion of its autofluorescence , and has a potentially toxic effect on the RPE leading to photoreceptor degeneration 36 , 55–57 ., Currently , there is no standard treatment for STGD in humans and mouse is the only available animal model 58 , 59 ., Both the Abca4 knockout mouse 36 and the recently generated Abca4 p . Asn965Ser ( N965S ) knockin mouse 37 models have been significant for the functional characterization of ABCA4 and the lipofuscin fluorophore A2E ., Mice , however , lack the macula , the area primarily affected in STGD patients and no significant retinal degeneration has been observed in any of the mouse models 37 , 60 , 61 ., Unlike the mouse retina , the dog has a cone rich , fovea-like area functionally more similar to human fovea centralis 2 , 10 , 11 ., The canine eye is also comparable in size to the human eye , and dog models have successfully been used for experimental gene therapy for retinal degenerative diseases , such as LCA , RP , and rod-cone dysplasia type 1 ( rcd1 ) 12 , 14 , 16 , 62 ., For over a decade there has been interest in finding a canine model for ABCA4-mediated diseases 23 , 63 , 64 ., The loss-of-function mutation identified here can be used to develop a large animal model for human STGD ., A family quartet of Labrador retriever dogs ( sire , dam , and two affected offspring numbered LAB1 , LAB2 , LAB3 , and LAB4 , respectively ) were used in the whole-genome sequencing ( WGS ) ., In addition , 16 related individuals ( LAB5 to LAB20 , see S1 Fig ) as well as six unrelated Labrador retrievers ( LAB 21 to LAB26 ) were used to validate the WGS findings ., Whole blood samples from these dogs were collected in EDTA tubes and genomic DNA was 
extracted using 1 ml blood on a QIAsymphony SP instrument and the QIAsymphony DSP DNA Kit ( Qiagen , Hilden , Germany ) ., We obtained eyes from the affected male ( LAB4 ) and his unaffected sibling ( LAB6 ) at the age of 12 , as well as from two unrelated , unaffected female Labrador retrievers ( LAB24 and LAB26 , 11- and 10-year-old , respectively ) and one 10-year-old male German spaniel ( GS ) after euthanasia with sodium pentobarbital ( Pentobarbithal 100 mg/ml , Apoteket Produktion & Laboratorier AB , Stockholm , Sweden ) for reasons unrelated to this study ., All samples were obtained with informed dog owner consent ., Ethical approval was granted by the regional animal ethics committee ( Uppsala djursförsöksetiska nämnd; Dnr C12/15 and C148/13 ) ., Ophthalmic examination of all the dogs included in the study comprised reflex testing , testing of vision with falling cotton balls under dim and daylight conditions , as well as indirect ophthalmoscopy ( Heine 500 , Heine Optotechnik GmbH , Herrsching , Germany ) and slit-lamp biomicroscopy ( Kowa SL-15 , Kowa Company Ltd . , Tokyo , Japan ) after dilation of pupils with tropicamide ( Mydriacyl 0 . 5% , Novartis Sverige AB , Täby , Sweden ) ., Genomic DNA from four Labrador retriever dogs ( LAB1 , LAB2 , LAB3 and LAB4 ) was fragmented using the Covaris M220 instrument ( Covaris Inc . , Woburn , MA ) , according to the manufacturer’s instructions ., To obtain sufficient sequence depth , we constructed two biological replicates of libraries with insert sizes of 350 bp and 550 bp following the TruSeq DNA PCR-Free Library Prep protocol ., The libraries were multiplexed and sequenced on a NextSeq500 instrument ( Illumina , San Diego , CA ) for 100 x 2 and 150 x 2 cycles using the High Output Kit and High Output Kit v2 , respectively ., The raw base calls were de-multiplexed and converted to fastq files using bcl2fastq v . 2 . 15 .
0 ( Illumina ) ., The two sequencing runs from each individual were merged , trimmed for adapters and low-quality bases using Trimmomatic v . 0 . 32 65 , and aligned to the canine reference genome CanFam3 . 1 using Burrows-Wheeler Aligner ( BWA ) v . 0 . 7 . 8 66 ., Aligned reads were sorted and indexed using Samtools v . 1 . 3 67 and duplicates were marked using Picard v . 2 . 0 . 1 ., The BAM files were realigned and recalibrated with GATK v . 3 . 7 68 ., Multi-sample variant calling was done following GATK Best Practices 69 using publicly available genetic variation Ensembl Variation Release 88 in dogs ( Canis lupus familiaris ) ., We filtered the variants found by GATK using the default values defining two groups of analyses: trio 1 and 2 , both consisting of the same sire and dam , and one of their affected offspring ., Variants annotated in the exonic region with ANNOVAR v . 2017 . 07 . 16 70 , presenting an autosomal recessive inheritance pattern and shared between the two trios were selected for further evaluation ., To predict the effects of amino acid changes on protein function , we evaluated SNVs using PolyPhen-2 v2 . 2 . 2r398 71 and PROVEAN v . 1 . 1 . 3 72 and non-frameshift INDELS using PROVEAN v . 1 . 1 . 3 ., Frameshift INDELs were manually inspected using The Integrative Genomics Viewer ( IGV ) 73 , 74 ., The sequence data were submitted to the European Nucleotide Archive with the accession number PRJEB26319 ., To validate the WGS results , we designed primers amplifying the variants c . 7244C>T in USH2A gene and c . 
4176insC in ABCA4 gene with Primer3 75 , 76 ( S5 Table ) and sequenced the family quartet using Applied Biosystems 3500 Series Genetic Analyzer ( Applied Biosystems , Thermo Fisher Scientific , Waltham , MA ) ., To test if the variants were concordant with the disease , 22 additional ophthalmologically evaluated Labrador retrievers were genotyped by Sanger sequencing ( S1 Fig ) ., Eight of these dogs were clinically affected and fourteen were unaffected , showing no signs of retinal degeneration by seven years of age ., Neuroretinal samples were collected from the affected dog ( LAB4 ) , the heterozygous sibling ( LAB6 ) , and the unaffected female ( LAB24 ) ., The samples were immediately preserved in RNAlater ( Sigma-Aldrich , Saint Louis , MO ) , homogenized with Precellys homogenizer ( Bertin Instruments , Montigny-le-Bretonneux , France ) and total RNA was extracted with RNeasy mini kit ( Qiagen ) according to the manufacturer’s instructions ., RNA integrity and quality were inspected with Agilent 6000 RNA Nano kit with the Agilent 2100 Bioanalyzer system ( Agilent Technologies , Santa Clara , CA ) ., cDNA was synthesized using RT2 First Strand kit ( Qiagen ) with random hexamers provided in the kit ., cDNA concentration was inspected with Qubit ssDNA Assay kit ( Life Technologies , Thermo Fisher Scientific ) ., RT2 qPCR Primer Assay ( Qiagen ) was used to amplify the reference gene GAPDH ., To amplify the target gene ABCA4 , we designed custom primers with Primer3 75 , 76 targeting three different regions spanning exons 2 to 3 , 27 to 28 , and 47 to 48 ( S5 Table ) ., We amplified the cDNA fragments encoding regions of interest using RT2 SYBR Green ROX qPCR Mastermix ( Qiagen ) with StepOnePlus Real-Time PCR system ( Applied Biosystems , Thermo Fisher Scientific ) , according to the manufacturer’s instructions ., Target gene expression was normalized to expression of GAPDH , and shown relative to the unaffected female ( LAB24 ) using the ΔΔCT method ., The
results were confirmed in two independent experiments ., We extracted protein from the neuroretinal samples of the individuals used in qPCR ( see above ) by homogenization in Pierce RIPA lysis buffer ( Thermo Scientific ) supplemented with phosphatase inhibitor cocktail ( Sigma , P8340 ) using the Precellys homogenizer ( Bertin Instruments ) ., Protein concentration was determined using the Pierce BSA Protein Assay kit ( Thermo Fisher Scientific ) ., 50 μg of protein samples were resolved by SDS-PAGE , transferred to nitrocellulose membrane , and immunoblotted with the following primary antibodies: ABCA4 ( Novus Biologicals , NBP1-30032 , 1:1000 ) , GAPDH ( Thermo Scientific , MA5-15738 , 1:1000 ) , Rhodopsin ( Novus Biologicals , Littleton , CO , NBP2-25160H , 1:5000 ) , followed by Anti-Mouse IgG horseradish peroxidase-conjugated secondary antibody ( R&D Systems , HAF007 , 1:5000 ) ., Binding was detected using the Clarity western ECL substrate ( Bio-Rad , Hercules , CA ) ., Tapetal fundus from the affected male ( LAB4 ) , his unaffected heterozygous sibling ( LAB6 ) , and an unaffected 10-year-old female Labrador retriever ( LAB26 ) were fixed in 4% PFA in 1x PBS on ice for 15 minutes , washed in 1x PBS for 10 minutes on ice , and cryoprotected in 30% sucrose overnight at 4°C ., The central part of the fundus was embedded in Neg-50™ frozen section medium ( Thermo Scientific ) , and 10 μm sections from the tapetal part of the eye were collected on Superfrost Plus slides ( J1800AMNZ , Menzel-Gläser , Thermo Fisher Scientific ) ., The sections were re-hydrated in 1x PBS for 10 minutes , incubated in blocking solution ( 1% donkey serum , 0 . 02% thimerosal , and 0 . 
1% Triton X-100 in 1x PBS ) for 30 minutes at room temperature , and incubated in primary antibody ABCA4 ( 1:500 , NBP1-30032 , Novus Biologicals ) or rhodopsin ( 1:5000 , NBP2-25160 , Novus Biologicals ) , and FITC-conjugated lectin PNA ( 1:400 , L21409 , Molecular Probes ) solution at 4°C overnight ., Following overnight incubation , the slides were washed 3 x 5 minutes in 1x PBS and incubated in Alexa 568 secondary antibody ( 1:2000 , A10037 , Invitrogen , Thermo Fisher Scientific ) solution for at least 2 hours at room temperature and washed 3 x 5 minutes in 1x PBS ., The slides were mounted using ProLong Gold Antifade Mountant with DAPI ( P36931 , Molecular Probes , Thermo Fisher Scientific ) ., Fluorescence images were captured using a Zeiss Axioplan 2 microscope equipped with an AxioCam HRc camera ., Ten micrometer retinal sections were stained and mounted as described under Fluorescence histochemistry , and nuclei were counted within a 67 μm-wide region oriented perpendicular to , and covering , both the outer and inner nuclear layers ., Nuclei in the outer nuclear and inner nuclear layers were counted separately ., We inferred the number of rod photoreceptors by subtracting the number of cones , as identified by PNA staining , from the number of nuclei in the ONL ., We analyzed six images from each of the three dogs ( LAB4 , LAB6 , and LAB26 ) ., Note that cones were so rare in the affected retina that all the nuclei in the ONL represent rod photoreceptors ., Bar graphs were generated and statistical analysis of the technical replicates ( one-way ANOVA with Tukey’s post hoc multiple comparison analysis ) was performed in GraphPad Prism 7 ., Retinal sections were washed , incubated in blocking solution , and mounted as described under Fluorescence histochemistry ., The exposure times for the excitation at 488 nm and 568 nm were fixed for all images taken ( 150 ms and 80 ms , respectively ) ., Outlines of the retinal pigment epithelium
( RPE ) , as well as adjacent background regions , were drawn using the polygon selection tool in ImageJ ( v1 . 51 , NIH ) , and the area and mean fluorescence intensity were measured ., The mean intensity of the autofluorescence in the RPE was calculated by subtracting the background intensity from the adjacent regions ., We analyzed six images from each of the three individuals used in the fluorescence histochemistry ., Bar graph generation and statistical analysis were performed as described under Counting nuclei ., Light microscopic examination was performed on plastic embedded thick sections from 4% PFA fixed posterior sections from eyes of the affected male ( LAB4 ) and his heterozygous sibling ( LAB6 ) , as well as from an unaffected 10-year-old German spaniel dog ., The samples | Introduction, Results and discussion, Materials and methods | Autosomal recessive retinal degenerative diseases cause visual impairment and blindness in both humans and dogs ., Currently , no standard treatment is available , but pioneering gene therapy-based canine models have been instrumental for clinical trials in humans ., To study a novel form of retinal degeneration in Labrador retriever dogs with clinical signs indicating cone and rod degeneration , we used whole-genome sequencing of an affected sib-pair and their unaffected parents ., A frameshift insertion in the ATP binding cassette subfamily A member 4 ( ABCA4 ) gene ( c . 4176insC ) , leading to a premature stop codon in exon 28 ( p . 
F1393Lfs*1395 ) , was identified ., In contrast to unaffected dogs , no full-length ABCA4 protein was detected in the retina of an affected dog ., The ABCA4 gene encodes a membrane transporter protein localized in the outer segments of rod and cone photoreceptors ., In humans , the ABCA4 gene is associated with Stargardt disease ( STGD ) , an autosomal recessive retinal degeneration leading to central visual impairment ., A hallmark of STGD is the accumulation of lipofuscin deposits in the retinal pigment epithelium ( RPE ) ., The discovery of a canine homozygous ABCA4 loss-of-function mutation may advance the development of dog as a large animal model for human STGD . | Stargardt disease ( STGD ) is the most common inherited retinal disease causing visual impairment and blindness in children and young adults , affecting 1 in 8–10 thousand people ., For other inherited retinal diseases , the dog has become an established comparative animal model , both for identifying the underlying genetic causes and for developing new treatment methods ., To date , there is no standard treatment for STGD and the only available animal model to study the disease is the mouse ., As a nocturnal animal , the morphology of the mouse eye differs from humans and therefore the mouse model is not ideal for developing methods for treatment ., We have studied a novel form of retinal degeneration in Labrador retriever dogs showing clinical signs similar to human STGD ., To investigate the genetic cause of the disease , we used whole-genome sequencing of a family quartet including two affected offspring and their unaffected parents ., This led to the identification of a loss-of-function mutation in the ABCA4 gene ., The findings of this study may enable the development of a canine model for human STGD . 
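The ΔΔCT normalisation used in the qPCR analysis described in the methods above can be sketched as follows; the Ct values here are hypothetical and serve only to illustrate the calculation (GAPDH as the reference gene and the unaffected female LAB24 as the calibrator, as in the methods):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt  = Ct(target) - Ct(reference gene), computed per sample
    ΔΔCt = ΔCt(sample) - ΔCt(calibrator sample)
    """
    delta_ct_sample = ct_target - ct_ref
    delta_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values: ABCA4 vs GAPDH in an affected sample,
# shown relative to the unaffected calibrator (LAB24).
fold_change = relative_expression(ct_target=28.0, ct_ref=20.0,
                                  ct_target_cal=25.0, ct_ref_cal=20.0)
print(fold_change)  # 2 ** -((28-20) - (25-20)) = 0.125
```

A fold change well below 1, as in this hypothetical case, would be consistent with the reduced ABCA4 transcript expected from a frameshift allele subject to nonsense-mediated decay.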
| medicine and health sciences, diagnostic radiology, ocular anatomy, vertebrates, social sciences, neuroscience, dogs, animals, mammals, macular disorders, animal models, retinal disorders, model organisms, experimental organism systems, eyes, research and analysis methods, imaging techniques, animal cells, animal studies, stargardt disease, sensory receptors, mouse models, head, tomography, retinal degeneration, signal transduction, cellular neuroscience, psychology, eukaryota, retina, diagnostic medicine, cell biology, anatomy, radiology and imaging, neurons, ophthalmology, photoreceptors, biology and life sciences, cellular types, afferent neurons, sensory perception, ocular system, amniotes, organisms | null |
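As a minimal illustration of the autofluorescence quantification described in the methods above (mean intensity of the outlined RPE minus the mean intensity of adjacent background regions; the study performed this in ImageJ, and the intensity values below are hypothetical):

```python
def background_subtracted_intensity(rpe_means, background_means):
    """Mean autofluorescence of RPE outlines minus the mean of
    adjacent background regions, mirroring the ImageJ measurement."""
    mean_rpe = sum(rpe_means) / len(rpe_means)
    mean_background = sum(background_means) / len(background_means)
    return mean_rpe - mean_background

# Hypothetical mean intensities from six images of one retina.
rpe = [120.0, 130.0, 125.0, 118.0, 127.0, 124.0]
background = [20.0, 22.0, 18.0, 21.0, 19.0, 20.0]
print(background_subtracted_intensity(rpe, background))  # 104.0
```

Subtracting locally measured background, rather than a single global value, compensates for uneven illumination across sections imaged at fixed exposure times.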
2,176 | journal.pntd.0007739 | 2,019 | Rabies-induced behavioural changes are key to rabies persistence in dog populations: Investigation using a network-based model | Canine rabies is an ancient disease that has persisted in dog populations for millennia–well before urbanisation 1 ., Increased understanding of rabies spread in communities with relatively small populations of dogs–such as those in rural and remote areas–could give insights about rabies persistence in non-urban areas , as well as inform prevention and control strategies in such regions ., Rabies virus is neurotropic and clinical manifestations of canine rabies can be broadly classified as the dumb form ( characterised by progressive paralysis ) and the furious form ( characterised by agitated and aggressive behaviour; 2–4 ) ., Although the mechanisms of rabies-induced behavioural signs are poorly understood 5 , pathogen-influenced changes in host behaviour can optimise pathogen survival or transmission 6 ., We hypothesise that rabies-induced behavioural changes promote rabies transmission in dog populations by influencing social network structure to increase the probability of effective contact ., If so , this would enable rabies to spread in rural and remote regions ., Since 2008 , rabies has spread to previously free areas of southeast Asia ., Islands in the eastern archipelago of Indonesia , as well as Malaysia are now infected 7–10 ., Much of this regional spread of canine rabies has occurred in rural and remote areas ., Oceania is one of the few regions in the world in which all countries are rabies free ., Recent risk assessments demonstrate that Western Province , Papua New Guinea ( PNG ) and northern Australia , are at relatively high risk of a rabies incursion 11 , 12 ., Dogs in communities in these regions are owned and roam freely ., Population estimates in such communities are often low; for example , median 41 dogs ( range 10–127 ) in Torres Strait communities ( pers comm: annual 
surveys conducted by Queensland Health , and Brookes et al . 13 ) and median 100 dogs ( range 30–1000 ) in Western Province Treaty Villages ( pers comm: annual surveys conducted by the Australian Commonwealth Department of Agriculture ) ., Canine rabies might have a low probability of maintenance in domestic dogs in these communities due to their small population sizes , but if continued transmission occurs–particularly over a long duration–then spread to other communities or regional centres and regional endemicity might occur ., GPS telemetry data from small populations of dogs ( < 50 dogs ) in the Torres Strait have recently been collected 13 ., Such data has been used to describe contact heterogeneity in animal populations , and has been used in models to provide insights about disease spread and potential control strategies 14–16 ., The effect of contact heterogeneity on disease spread is well-researched and models can provide useful insights about disease control strategies in heterogeneously mixing populations 17–19 ., Most recently in the context of rabies , Laager et al . 20 developed a network-based model of rabies spread using GPS telemetry data from dogs in urban N’Djamena , Chad ., Other models of rabies-spread in which parameters that describe contact heterogeneity were derived from telemetry data include canine 21 and raccoon models 22 , 23 ., Patterns of contacts are likely to be altered by the behavioural effects of clinical rabies ., Although Hirsch et al . 
22 demonstrated that seasonal patterns of rabies incidence in raccoons could be explained by changes in social structure due to normal seasonal behavioural change of the hosts , the influence of rabies-induced behavioural changes on social structure has neither been researched nor explicitly incorporated in simulation models in any species ., Here , our objective was to investigate the probability , size and duration of rabies outbreaks and the influence of rabies-induced behavioural changes on rabies persistence in small populations of free-roaming dogs , such as those found in rural communities in PNG and northern Australia ., We also investigate the effect of pre-emptive vaccination on rabies spread in such populations ., We developed an agent-based , stochastic , mechanistic model to simulate social networks of free-roaming domestic dogs and the subsequent transmission of rabies between individual dogs within these networks following the latent infection of a single , randomly-assigned dog ( Fig 1 ) ., The structure of the social networks was based on three empirically-derived networks of spatio-temporal associations between free-roaming domestic dogs in three Torres Strait Island communities ( Table 1 ) ; Kubin , Warraber and Saibai 13 ., The progression of rabies infection in a susceptible dog was simulated in daily time-steps and followed an SEI1I2R process ( rabies infection status: susceptible S , latent E , pre-clinical infectious I1 , clinical I2 and dead R ) ., Rabies virus transmission from an individual infectious, ( j ) to an individual susceptible, ( i ) dog is described by Eq 1 , in which the daily probability of contact between a pair of such dogs was calculated based on the edge-weight between the pair ( Eij ) , which is the proportion of a 24 hour period during which that pair of dogs is spatio-temporally associated ( in the event of no network connection , Eij = 0 ) ., Transmission of rabies further depends on the probability of a bite ( Pj ) 
by the infected dog conditional on its infection status ( I1 or I2 ) , and the probability of subsequent infection of the susceptible dog ( Tj ) ., Generation of the social network and estimation of the parameters associated with the dog population dynamics and rabies epidemiology are described below , and parameter values are shown in Table 2 ., Maximum iteration duration was 3 years ., Model outputs included distributions of the predicted duration of outbreaks ( defined as the number of days from the introduction of one latently infected dog to the day on which infected dogs were no longer present ) , the total number of rabies-infected dogs during the outbreak and the effective reproductive number , Re , during the first month following incursion ( mean number of dogs infected by all dogs that were infected during the first month ) ., Initially , rabies was simulated in each of the three community networks and the predicted outputs from each model were compared between each other ., Statistical tests were used to determine the number of iterations required to achieve convergence of output summary statistics ( described below ) ., Global sensitivity analysis using the Sobol’ method ( described below ) was used to investigate the relative influence of all input parameters on model outputs ., To observe the influence of rabies-induced behavioural changes , model outputs from simulations of rabies spread in each of the three community networks with and without parameters associated with rabies-induced behavioural changes were compared ., Finally , the impact of pre-emptive vaccination was investigated by randomly assigning rabies immunity to a proportion of the population ( 10–90%; tested in 10% increments ) prior to incursion of the rabies-infected dog in each iteration ., Prior to each iteration , a modified Watts Strogatz algorithm generated a connected , undirected small-world network of 50–90 dogs with network characteristics that reflected the empirical 
networks of the dog populations in Saibai , Warraber and Kubin communities , as follows 13 , 24–26 ., Consistent with the terminology used in our previous description of these networks 13 , dogs are nodes , connections between dogs are edges , the proportion of spatio-temporal association ( within 5m for at least 30s ) between a pair of connected dogs in each 24 hour period is represented as edge-weight , and degree refers to the number of network connections for an individual dog ., Re-wiring refers to re-assignment of an individual dog’s connections in the network ., A regular ring lattice was constructed with N nodes in which N was randomly selected from a uniform distribution of 50–90 ., Each node was assigned K degrees , which was randomly selected from the respective empirical degree distribution of the community represented by the simulation ., Each node ( ni ) was connected to Ki/2 ( rounded to the nearest integer ) nearest neighbours in the ring lattice in a forward direction , then all nearest neighbours in a backward direction until Ki was achieved ., Existing edges were then re-wired ( the edge was disconnected from the nearest neighbour and reconnected to a randomly selected node ) following a Bernoulli process ( probability ρ ) to achieve an average shortest path-length expected in an equivalent-sized Erdős–Rényi graph in which nodes are connected randomly , whilst maintaining the empirical degree distribution of the community represented by the simulation 27 ., Edges were then weighted according to the mean expected duration of association between pairs of dogs as a proportion of daily time , and were randomly selected from the respective empirical edge-weight distribution of the community represented by the simulation ., Parameters that describe the empirical networks and their derivation are presented in Brookes et al .
13 ., Networks simulated with the modified Watts Strogatz algorithm were tested for similarity to the empirical networks prior to use of the algorithm in the model ( Table 1 ) ., Degree and edge-weight distributions were compared to those of the empirical networks using the Mann-Whitney U and Kolmogorov–Smirnov tests , to assess similarity of median and shape of simulated distributions , respectively ., Mean small-world indices were calculated according to Eq 2 in which C is the global clustering coefficient , L is the average shortest path length , s denotes a simulated network and r denotes an Erdős–Rényi random network of equivalent mean degree 28 ., A small-world index >1 indicates local clustering , consistent with the empirical network structures ., Network similarity tests were conducted on 1000 simulated networks for each community ., Parameters that were used to describe the dog populations and rabies epidemiology in the model are listed in Table 2 ., Variance-based GSA using the Saltelli method was used to determine which parameters most influenced the variance of outputs and was implemented in this study using the SALib module in Python 41 ., The sequence of events was: parameter sampling to create a matrix of parameter sets for each iteration ( parameter ranges are listed in Table 2 ) , simulation using the parameter sets to obtain model output ( duration of outbreaks , the total number of rabies-infected dogs and the mean monthly effective reproductive number , Re ) , and estimation of sensitivity indices ( SIs ) to apportion output variance to each parameter ., Mean monthly Re was used as the output of interest in relation to R for Sobol’ analysis , to remove the strong influence of incubation period on Re in the first month ., To separate the influence of stochasticity from the variation associated with each parameter , the random seed was also included in the Sobol’ analysis 42 ., The seed value for each iteration was selected from the parameter
set ( uniform distribution , 1–100 ) ., First-order and total-effect SIs were estimated for each parameter , representing predicted output variance attributable to each parameter without and with considering interactions with other inputs , respectively ., SIs were normalised by total output variance and plotted as centipede plots with intervals representing SI variance ., Model output variance is most sensitive to inputs with the highest indices ., The number of iterations required to achieve sufficient convergence of summary measures was estimated using the following method ., Key output measures–the number of rabid dogs and the duration of outbreaks ( days ) –were recorded from 9 , 999 iterations of the model divided equally between all three communities ., Ten sets of simulated outputs of an increasing number of iterations ( 1–5000 ) were sampled; for example , ten sets of outputs from 1 iteration , ten sets of outputs from 2 iterations , ten sets of outputs from 3 iterations , and so on ., The mean number of rabies-infected dogs and outbreak duration was calculated for samples in each set ., The coefficient of variation ( CV; standard deviation/mean ) of these sample means was then calculated for each set ., With increasing iterations , the variation in sample mean between sets decreases and the CV approaches zero ., The number of iterations was considered sufficient to indicate model output stability when , for 95% of the previous 100 iteration sizes , the CV was < 0 .
025 ., Each community simulation comprised 10 , 000 iterations ( more than sufficient to achieve convergence of summary output statistics without limiting computational time S1 Fig ) ., Predicted outputs are shown in Table 3 ., The proportion of iterations in which a second dog became infected was greater than 50% in Kubin and Warraber communities , and 43% in Saibai ., In these iterations , predicted median and upper 95% duration of outbreaks were longest in Warraber and shortest in Saibai ( median: 140 and 78 days; 95% upper range 473 and 360 days , respectively ) ., In the Warraber simulations , 0 . 001% of iterations reached the model duration limit of 1095 days ., The number of infected dogs was reflected in the Re estimates in the first month: 1 . 73 ( 95% range 0–6 . 0 ) , 2 . 50 ( 95% range 1 . 0–7 . 0 ) and 3 . 23 ( 95% range 1 . 0–8 . 0 ) in Saibai , Kubin and Warraber communities , respectively ., The rate of cases during these outbreaks was 2 . 4 cases/month ( 95% range 0 . 6–7 . 6 ) , 2 . 0 cases/month ( 95% range 0 . 4–6 . 5 ) and 2 . 6 cases/month ( 95% range 0 . 5–8 . 
0) in Saibai, Kubin and Warraber communities, respectively. Fig 2 shows plots of the Sobol’ total-effect sensitivity indices (SI) of parameters for outbreak duration, number of infected dogs and the monthly effective reproductive ratio Re. S2 Fig shows Sobol’ first-order SIs, which are low relative to the total-effect SIs for all outcomes. This indicates that interactions between parameters are highly influential on output variance in this model; we therefore focus on the influence of parameters through their total effects. As expected, the total-effect SI of the seed was highest (it was associated with >50% of the variance for all outcomes) because it determines the random value selected in the Bernoulli processes that provide stochasticity to all parameters. The influence of the seed is not presented further in these results. Incubation period, the size of the dog population and the degree of connectivity were highly influential on outbreak duration (total-effect SI 0.51, 0.55 and 0.51, respectively). All parameters were influential on the predicted number of rabid dogs (total-effect SIs >0.1). The size of the dog population, the incubation and clinical periods, and degree had the greatest influence (total-effect SIs >0.5). Dog population size and degree of association were most influential on predicted mean monthly Re (total-effect SI 0.74 and 0.40, respectively). Of the community-specific parameters (population size, degree and edge-weight distributions, birth and death rates, and initial probability of re-wiring), dog population size and degree consistently had the greatest influence on each predicted output’s variance. Of the network parameters other than degree, the probability of wandering (‘re-wiring’) during the clinical phase (furious form) was markedly less influential on predicted mean monthly Re than initial ‘re-wiring’ (total-effect SIs 0.051 and 0.19, respectively) or either parameter associated with spatio-temporal association (edge-weight; both total-effect SIs >0.15). The increased probability of a bite by a dog in the clinical period (furious form) had a greater influence on predicted mean monthly Re than the pre-clinical or clinical (dumb-form) bite probability (total-effect SI 0.19 relative to 0.11). The relative influence of these parameters on outbreak duration and the number of rabies-infected dogs was reversed and less marked. Birth and death rates consistently had a moderate influence on all outputs (total-effect SI 0.20–0.24). The proportion of outbreaks in which >1 dog became infected, and the duration, number of infected dogs and Re in the first month following incursion, in simulations without all or with combinations of the parameters for rabies-induced behavioural changes, are shown in Fig 3. Outputs from the simulation in each community with all parameters (increased bite probability, furious form; increased spatio-temporal association edge-weight, dumb form; wandering ‘re-wiring’, furious form) are included for comparison. The simulation without parameters for rabies-induced behavioural changes (Fig 3; ‘None’) propagated following <10% of incursions in all communities. In 95% of these predicted outbreaks, rabies spread to ≤3 other dogs during a median of ≤60 days. This was reflected in the low Re estimate in the first month of these incursions (≤0.75). Inclusion of one parameter associated with rabies-induced behavioural changes was still insufficient for sustained predicted outbreaks. Overall, <20% of incursions in these simulations resulted in rabies spread, to ≤6 other dogs over a median duration of ≤56 days. Re in the first month of these incursions indicated that increased spatio-temporal association, followed by an increased probability of bite, was more likely to result in rabies spread than ‘re-wiring’ to increase network contacts. This pattern was reflected in the upper 95% range of dogs infected, which was greatest when increased spatio-temporal association was included and least when ‘re-wiring’ was included. When combinations of rabies-induced behavioural changes were included, increased bite probability and spatio-temporal association together were sufficient to achieve similar proportions of predicted outbreaks in which >1 dog was infected (40–60% of incursions) as the simulation with all parameters included (Fig 3, ‘Full’). Predicted impacts and Re in the first month following incursion were also similar. Re was greater than the sum of the Re values from the scenarios with increased bite probability and spatio-temporal association alone. With combined spatio-temporal association and ‘re-wiring’, the 95% range of the number of infected dogs was greater than in simulations in which only one parameter was included (up to 11 other dogs), but Re in the first month following incursion was close to 1 in all communities, reflecting overall limited rabies spread. In the combined increased bite probability and ‘re-wiring’ simulation, propagation did not occur to >4 dogs, reflecting the Re of ≤0.8. Due to the similarity between the median outputs from each community and the greatest variation in outputs from Warraber, only vaccination simulations using the Warraber network were included in this section. Initially, all parameters were included in these vaccination simulations (births and deaths were included). Vaccination simulations were then run without population turnover (births and deaths were excluded). Fig 4 shows all outputs. In all simulations, the proportion of outbreaks in which >1 dog was infected fell as the proportion of pre-emptively vaccinated dogs increased (a greater reduction was observed in the simulations without population turnover) and was <40% when at least 70% of the population were vaccinated. The proportion of outbreaks in which more than one dog was infected was still 17% and 12% when 90% of the population were vaccinated in simulations with and without births and deaths, respectively. In outbreaks in which >1 dog was infected, the duration of outbreaks decreased as the vaccination proportion increased (although the 95% range was always predicted >195 days in all simulations). The median number of infected dogs was ≤3 once at least 60% of dogs were vaccinated in all simulations, but the 95% range was not consistently <10 dogs until 80% and 70% of the population was vaccinated in simulations with and without births and deaths, respectively. The median case rate was 1.6 cases/month (95% range 0.4–4.6 cases/month) when 70% of the population was vaccinated in simulations with births and deaths, with a median duration of 68 days (95% range 16–276 days). In simulations without births and deaths, the case rate was 1.4 cases/month (95% range 0.4–4.3 cases/month) when 70% of the population was vaccinated, with a median duration of 64 days (95% range 16–248 days). Re estimated in the first month following incursion reflected these outputs. At ≥70% pre-emptive vaccination, Re was approximately 1 or less when births and deaths were excluded. However, in the simulations with births and deaths, Re did not fall below 1 until >80% of the population were pre-emptively vaccinated. Our study is unique in that we modelled rabies spread in small populations of free-roaming dogs and incorporated the effect of rabies-induced behavioural changes. Key findings included the long duration of rabies persistence at low incidence in these populations, and the potential for outbreaks even with high levels of pre-emptive vaccination. This has implications for canine rabies surveillance, elimination and incursion-prevention strategies, not only in rural areas with small communities, but also for elimination programs in urban areas. We discuss our findings and their implications below. Without behavioural change, we could not achieve rabies propagation in the social networks in the current study; disruption of social contacts appears to be key for rabies maintenance in small populations of dogs. Social network studies have shown that dogs form contact-dense clusters 13, 20. Increased bite probability and spatio-temporal association between contacts (edge-weight in the model) were most influential on rabies propagation in our model, but it is possible that ‘re-wiring’ of dogs is also influential in larger populations, in which there is a greater probability that a dog would ‘re-wire’ to a completely new set of dogs in another cluster, thus increasing total contacts and enhancing spread (degree was also found to be highly influential on rabies spread). Ranges for these parameters were wide to reflect uncertainty, which in turn reflects the difficulty of acquiring accurate field information about
the behaviour of rabies-infected dogs. It is not ethical to allow dogs that have been identified in the clinical stages of rabies infection to continue to pose a threat to other animals and humans so that field data about contact behaviour can be collected. However, whilst these parameters were important for spread to occur, their wide range was not as influential on output variance as other parameters for which data were more certain. In the model, limiting the types of behavioural change to each rabies form was a simplification that allowed us to differentiate the effects of the types of network disruption. In reality, the association between rabies forms and behavioural changes is likely to be less distinct 33 and thus, rabies spread in small populations could be further enhanced if dogs display a range of behavioural changes. Incubation period strongly influenced outbreak size and duration and, together with the rabies-induced behavioural changes that enabled transmission, is likely to have resulted in the ‘slow-burn’ style of outbreaks (low incidence over a long duration) predicted by this model. Within iterations in which propagation occurred, the case rate was generally <3 cases/month without vaccination, and 1.5 cases/month when 70% of dogs were pre-emptively vaccinated. At such low incidence, we believe that canine rabies is likely to have a low probability of detection in communities where there is high population turnover and aggressive free-roaming dogs can be normal 29, 43. In these populations, dog deaths and fights between dogs are common. Undetected, slow-burn outbreaks in previously free regions are a great risk to humans because rabies awareness is likely to be low. They also provide more opportunity for latently infected dogs to travel between communities, either by themselves or with people, which could result in regional endemicity. Townsend et al (34) suggest that a case detection rate of at least 5% (preferably 10%) is required to assess rabies freedom following control measures; surveillance capacity in rabies-free regions such as Oceania should be evaluated and enhanced if required. Pre-emptive vaccination is another option to protect rabies-free regions; for example, an ‘immune belt’, an area in which dogs must be vaccinated, was established in the 1950s in northern Malaysia along the Thai border 44. The World Health Organization recommends repeated mass parenteral vaccination of 70% of dog populations to achieve herd immunity 45. Whilst the origin of this recommendation is unclear, it has been accepted for decades (for example, legislation allowed free-roaming of dogs in designated areas of New York State in the 1940s if at least 70% of the dog population was vaccinated 46) and previous modelling studies of pre-emptive vaccination support this threshold 20, 47–49. We found that vaccination with 70% coverage is expected to result in self-limiting outbreaks. Therefore, if inter-community dog movements are unlikely, regional spread is also unlikely. However, given predicted upper 95% ranges of 8–14 rabies-infected dogs for at least 8 months at 70% coverage, we recommend at least 90% coverage to reduce the
effective monthly reproductive ratio to <1, limit human exposure, and provide a more certain barrier to regional spread, particularly in regions where dogs are socially and culturally connected to people and, consequently, movement of dogs is likely. In places in which movements are not easily restricted, such as urban centres in which dog populations are contiguous, our study indicates that comprehensive vaccination coverage is crucial and that reducing population turnover (for example, by increasing veterinary care to improve dog health) might not have a substantial effect on reducing the vaccination coverage required. The political and operational challenges of rabies elimination are well documented 50, and lack of elimination or subsequent re-emergence is attributed to insufficient vaccination coverage (<70% of the dog population overall, patchy coverage or insufficient duration 49, 51, 52) and re-introduction of infected dogs 48, 53. Pockets of unvaccinated dogs within well-vaccinated urban areas could maintain rabies at a low incidence sufficient to re-introduce rabies as surrounding herd immunity wanes. It is also possible that with comprehensive, homogeneous 70% coverage, a low incidence of rabies, such as appears possible at 70% vaccination in our study, is sufficient for endemicity in larger populations but is practically undetectable, giving the appearance of elimination. A higher proportion of vaccinated dogs might be required for elimination, and further modelling studies incorporating behavioural change in larger empirical networks are required to test this hypothesis. Validation of a canine-rabies spread model is challenging, not only because variation between model outputs and observed data can arise from many sources, but because rabies surveillance is passive and case ascertainment is notoriously challenging 52, thus limiting the fitting of mathematical models and undermining comparison of predicted outputs to observed data.
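The vaccination findings above (Re remaining above 1 under population turnover unless coverage exceeds roughly 80%) can be illustrated with a minimal Python sketch in which coverage erodes as unvaccinated births replace vaccinated dogs. The R0 value, turnover rate and the linear relation Re = R0 × (1 − coverage) are illustrative assumptions for this sketch, not parameters or outputs of the published network model.

```python
def re_with_turnover(r0, v0, monthly_turnover, months):
    """Monthly effective reproductive ratio when initial vaccination
    coverage v0 is eroded by population turnover (assumed: no ongoing
    vaccination, and Re = r0 * (1 - coverage))."""
    coverage = v0
    trajectory = []
    for _ in range(months):
        trajectory.append(r0 * (1.0 - coverage))
        coverage *= (1.0 - monthly_turnover)  # vaccinated fraction decays
    return trajectory

# Hypothetical values: R0 = 1.5, 70% initial coverage, 3% monthly turnover.
re_static = re_with_turnover(1.5, 0.7, 0.0, 12)    # closed population
re_eroding = re_with_turnover(1.5, 0.7, 0.03, 12)  # births and deaths
```

With turnover, Re drifts upward month by month even though the two scenarios start identically, which is the qualitative reason a higher initial coverage is needed when births and deaths are included.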
Mechanistic models are therefore a valuable tool to describe possible spread and develop hypotheses about rabies persistence, surveillance and control by using plausible, generalisable disease data (in the current study, the epidemiology of rabies) and context-specific, ecological data (in the current study, empirical network data from small populations of dogs to provide contact rates). Although the opportunity for validation is limited because outbreak data from small populations of dogs are scarce (and non-existent in our study area), observed patterns of disease spread (low incidence and long duration of outbreaks) are consistent with those predicted by the current study 37, 54. Global sensitivity analysis indicated that population size (a parameter of reasonable certainty) and degree of connectivity had the greatest influence on duration, size and initial spread; this makes intuitive sense and, as expected, the largest and longest outbreaks were predicted in the Warraber network, which had the highest median degree. Of the parameters that most influenced model outputs, the parameterisation of the degree of connectivity was most likely to influence the generalisability of our study findings, because data are limited and social connectivity might vary between populations of free-roaming dogs. However, a study in N’Djaména, Chad, found that the average degree was 9 and 15 (maximum 20 and 64, respectively) in two populations of 272 and 237 dogs, respectively 20, which is not dissimilar to the degree distribution of the small Torres Strait dog populations. Reassuringly, input parameters about which there was more uncertainty, for example bite probabilities, were less influential on variation in outputs. By exploring rabies epidemiology in small populations of free-roaming dogs, in which contact heterogeneity was determined in part by their social networks and in part by the disease, our study provides insights into how rabies-induced
behavioural changes are important for endemicity of rabies in rural and remote areas. We found that rabies-induced behavioural change is crucial for the disease to spread in these populations and enables a low incidence of rabies cases over a long duration. Without movement restrictions, we predict that substantially greater than the recommended 70% vaccination coverage is required to prevent rabies emergence in currently free areas. | Introduction, Methods, Results, Discussion | Canine rabies was endemic pre-urbanisation, yet little is known about how it persists in small populations of dogs typically seen in rural and remote regions. By simulating rabies outbreaks in such populations (50–90 dogs) using a network-based model, our objective was to determine if rabies-induced behavioural changes influence disease persistence. Behavioural changes (increased bite frequency and increased number or duration of contacts, due to disease-induced roaming or paralysis, respectively) were found to be essential for disease propagation. Spread occurred in approximately 50% of model simulations and, in these, very low case rates (2.0–2.6 cases/month) over long durations (95% range 20–473 days) were observed. Consequently, disease detection is a challenge, risking human infection and spread to other communities via dog movements. Even with 70% pre-emptive vaccination, spread occurred in >30% of model simulations (in these, the median case rate was 1.5/month with a 95% range of 15–275 days duration). We conclude that the social disruption caused by rabies-induced behavioural change is the key to explaining how rabies persists in small populations of dogs. Results suggest that vaccination of substantially greater than the recommended 70% of dog populations is required to prevent rabies emergence in currently free rural areas.
| We investigated rabies spread in populations of 50–90 dogs using a simulation model in which dogs’ contacts were based on the social networks of three populations of free-roaming domestic dogs in the Torres Strait, Australia. Rabies spread would not occur unless we included rabies-induced behavioural changes (increased bite frequency and either roaming or paralysis, which increased the number or duration of contacts, respectively). The model predicted very low case rates over long durations, which would make detection challenging in regions in which there is already a high population turnover, increasing the risk of human infection and spread to other communities via dog movements. Spread also occurred in >30% of model simulations at low incidence for up to 200 days when 70% of the population was pre-emptively vaccinated, suggesting that higher vaccination coverage will be required to prevent rabies emergence in currently free rural areas, especially those in which dogs readily travel between communities. | animal types, medicine and health sciences, immunology, tropical diseases, sociology, vertebrates, social sciences, pets and companion animals, dogs, animals, mammals, simulation and modeling, preventive medicine, rabies, animal behavior, network analysis, social networks, neglected tropical diseases, vaccination and immunization, zoology, research and analysis methods, public and occupational health, infectious diseases, computer and information sciences, zoonoses, behavior, epidemiology, psychology, eukaryota, biology and life sciences, viral diseases, amniotes, organisms | null |
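The superadditive effect of combined behavioural changes reported in the record above (Re greater than the sum of the single-change scenarios) falls naturally out of a Bernoulli-chain contact process. The following Python sketch is only loosely analogous to the published model: the functional form (per-contact probability = bite probability × edge-weight) and all parameter values are invented for illustration.

```python
def p_transmit(p_bite, edge_weight, n_encounters):
    """Probability that at least one of n_encounters contacts leads to a
    transmitting bite, with each contact down-weighted by the degree of
    spatio-temporal association (edge_weight in [0, 1])."""
    p_per_contact = p_bite * edge_weight
    return 1.0 - (1.0 - p_per_contact) ** n_encounters

# Baseline vs doubling bite probability, doubling association, or both.
base       = p_transmit(0.05, 0.4, 10)
bite_only  = p_transmit(0.10, 0.4, 10)
assoc_only = p_transmit(0.05, 0.8, 10)
both       = p_transmit(0.10, 0.8, 10)
```

Because the per-contact probabilities multiply inside the Bernoulli chain, the gain from changing both behaviours exceeds the sum of the two single-change gains, mirroring the Re pattern described in the results.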
10 | journal.pcbi.1006541 | 2018 | RAVEN 2.0: A versatile toolbox for metabolic network reconstruction and a case study on Streptomyces coelicolor | Genome-scale metabolic models (GEMs) are comprehensive in silico representations of the complete set of metabolic reactions that take place in a cell 1. GEMs can be used to understand and predict how organisms react to variations in genetic and environmental parameters 2. Recent studies demonstrated the extensive applications of GEMs in discovering novel metabolic engineering strategies 3; studying microbial communities 4; finding biomarkers for human diseases and personalized and precision medicine 5, 6; and improving antibiotic production 7. With the increasing ease of obtaining whole-genome sequences, significant challenges remain to translate this knowledge into high-quality GEMs 8. To meet the increasing demand for metabolic network modelling, the original RAVEN (Reconstruction, Analysis and Visualization of Metabolic Networks) toolbox was developed to facilitate GEM reconstruction, curation and simulation 9. In addition to facilitating the analysis and visualization of existing GEMs, RAVEN particularly aimed to assist semi-automated draft model reconstruction, utilizing existing template GEMs and the KEGG database 10. Since publication, RAVEN has been used in GEM reconstruction for a wide variety of organisms, ranging from bacteria 11, archaea 12 and the human gut microbiome 13 to eukaryotic microalgae 14, parasites 15–17 and fungi 18, as well as various human tissues 19, 20 and generic mammalian models with complex metabolism 21, 22. As such, the RAVEN toolbox has functioned as one of the two major MATLAB-based packages for constraint-based metabolic modelling, together with the COBRA Toolbox 23–25. Here, we present RAVEN 2.0 with greatly enhanced reconstruction capabilities, together with additional new features (Fig 1, Table 1). A prominent enhancement of RAVEN 2.0 is the use of the MetaCyc database in assisting draft model reconstruction. MetaCyc is a pathway database that collects only experimentally verified pathways with curated reversibility information and mass-balanced reactions 26. RAVEN 2.0 can leverage this high-quality database to enhance the GEM reconstruction process. While the functionality of the original RAVEN toolbox was illustrated by reconstructing a GEM of Penicillium chrysogenum 9, we here demonstrate the new and improved capabilities and wide applicability of RAVEN 2.0 through reconstruction of a GEM for Streptomyces coelicolor. S. coelicolor is a representative species of soil-dwelling, filamentous and Gram-positive actinobacteria harbouring enriched secondary metabolite biosynthesis gene clusters 27, 28. As a well-known producer of pharmaceuticals and bioactive compounds, S. coelicolor has been exploited for antibiotic and secondary metabolite production 29. The first published GEM for S. coelicolor, iIB711 30, was improved through an iterative process resulting in the GEMs iMA789 31 and iMK1208 32. The most recent GEM, iMK1208, is a high-quality model that includes 1208 genes and 1643 reactions and was successfully used to predict metabolic engineering targets for increased production of actinorhodin 32. Here, we demonstrate how the new functions of RAVEN can be used for de novo reconstruction of a S. coelicolor GEM, using comparison to the existing high-quality model iMK1208 as a benchmark. The use of three distinct de novo reconstruction approaches enabled capturing most of the existing model, while complementary reactions found through the de novo reconstructions gave the opportunity to improve the existing model. After manual curation, we included 402 new reactions in the GEM, with 320 newly associated enzyme-coding genes, including a variety of biosynthetic pathways for known secondary metabolites (e.g.
2-methylisoborneol, albaflavenone, desferrioxamine, geosmin, hopanoid and flaviolin dimer). The updated S. coelicolor GEM is released as Sco4, which can be used as an upgraded platform for future systems biology research on S. coelicolor and related species. RAVEN 2.0 aims to provide a versatile and efficient toolbox for metabolic network reconstruction and curation (Fig 1). In comparison to other solutions for GEM reconstruction (Table 1), the strength of RAVEN is its ability to perform semi-automated reconstruction based on published models and the KEGG and MetaCyc databases, integrating knowledge from diverse sources. A brief overview of RAVEN capabilities is given here, while more technical details are stated in Material & Methods, and detailed documentation is provided for individual functions in the RAVEN package. RAVEN supports two distinct approaches to initiate GEM reconstruction for an organism of interest: (i) based on protein homology to an existing template model, or (ii) de novo using reaction databases. The first approach requires a high-quality GEM of a phylogenetically closely related organism, and the functions getBlast and getModelFromHomology are used to infer homology using bidirectional BLASTP and build a subsequent draft model. Alternatively, de novo reconstruction can be based on two databases: KEGG and MetaCyc. For KEGG-based reconstruction, the user can deploy getKEGGModelForOrganism to either rely on KEGG-supplied annotations (KEGG currently includes over 5000 genomes) or query its protein sequences for similarity to HMMs that are trained on genes annotated in KEGG. MetaCyc-based reconstruction can be initiated with getMetaCycModelForOrganism, which queries protein sequences with BLASTP for homology to enzymes curated in MetaCyc, while addSpontaneous retrieves relevant non-enzyme-associated reactions from MetaCyc. Regardless of which (combination of) approach(es) is followed, a draft model is obtained
that requires further curation to result in a high-quality reconstruction suitable for simulating flux distributions. Various RAVEN functions aid in this process, including gapReport, which runs a gap analysis and reports e.g. dead-end reactions and unconnected subnetworks that indicate missing reactions and gaps in the model, in addition to reporting metabolites that can be produced or consumed without in- or output from the model, which is indicative of unbalanced reactions. RAVEN is distributed with a gap-filling algorithm, gapFill; however, results from external gap-filling approaches can also be readily incorporated. This and further manual curation are facilitated through functions such as addRxnsGenesMets, which moves reactions from a template to a draft model; changeGeneAssoc and standardizeGrRules, which curate gene associations; and combineMetaCycKEGGModels, which can semi-automatically unify draft models reconstructed from different databases. In addition to model generation, RAVEN includes basic simulation capabilities, including flux balance analysis (FBA), random sampling of the solution space 33 and flux scanning with enforced objective function (FSEOF) 34. Models can be handled in various file formats, including the community standard SBML L3V1 FBCv2, which is compatible with many other constraint-based modelling tools, including the COBRA Toolbox 23, as well as non-MATLAB tools such as COBRApy 35 and SBML-R 36. As the SBML file format is unsuitable for tracking changes between model versions, support for flat-text and YAML formats is provided. In addition, models can be represented in a user-friendly Excel format. As a MATLAB package, RAVEN gives users the flexibility to build their own reconstruction and analysis pipelines according to their needs. The enriched capabilities of RAVEN 2.0 were evaluated by de novo generation of GEMs for S.
coelicolor using three distinct approaches, as described in Material & Methods (Fig 2). Cross-comparison of genes from the de novo reconstructions and the published S. coelicolor GEM iMK1208 indicated that the three de novo approaches are complementary and comprehensive, combined covering 88% of the genes included in iMK1208 (Fig 3). The existing model contained 146 genes that were not annotated by any of the automated approaches, signifying the valuable manual curation that has gone into previous GEMs of S. coelicolor. Nonetheless, matching of metabolites across models through their KEGG identifiers further supported that most of the previous GEM is captured by the three de novo reconstructions, while each approach has its unique contribution (Fig 3). The three draft reconstructions were consecutively merged to result in a combined draft reconstruction (S1 Data), containing 2605 reactions, of which 958 and 1104 reactions were uniquely from the MetaCyc- and KEGG-based reconstructions, respectively (Fig 2). While the MetaCyc-based reconstruction annotated more genes than the KEGG-based reconstructions (Fig 3), the number of unique reactions from MetaCyc is slightly lower than from KEGG, indicating that KEGG-based reconstruction is more likely to assign genes to multiple reactions. Of the 789 reactions from the existing high-quality model that could be mapped to either MetaCyc or KEGG reactions, 733 (92.9%) were included in the combined draft model (S1 Table). The combined de novo reconstruction has a larger number of reactions, metabolites and genes than the previous S. coelicolor GEM (Fig 2). While a larger metabolic network does not necessarily imply a better network, we took advantage of the increased coverage of the de novo reconstruction by using it to curate iMK1208, while retaining the valuable contributions from earlier GEMs. The culminating model is called Sco4, the fourth major release of the S.
coelicolor GEM. Through manual curation, a total of 398 metabolic reactions were selected from the combined model to expand the stoichiometric network of the previous GEM (S3 Table). These new reactions cover diverse subsystems, including both primary and secondary metabolism (Fig 4A), and displayed close association with existing metabolites in the previous GEM (Fig 4B). Despite both the MetaCyc- and KEGG-based reconstructions contributing roughly equally, MetaCyc-unique reactions are more involved in energy and secondary metabolism, while KEGG-unique reactions are more related to amino acid metabolism and degradation pathways (Fig 4C). The de novo reconstruction annotated genes to 11 reactions that had no gene association in the previous GEM (S4 Table). Together with 34 spontaneous reactions and 10 transport reactions identified by the MetaCyc reconstruction functions (S5 and S6 Tables), the resulting Sco4 model contains 2304 reactions, 1927 metabolites and 1522 genes (Fig 2). The process of model curation using de novo reconstructions furthermore identified erroneous annotations in the previous GEM. Seventeen metabolites were annotated with invalid KEGG identifiers (S7 Table), impeding matching with the KEGG-based reconstructions. However, by annotating the reactions and metabolites to MetaCyc, we were still able to annotate all 17 metabolites with a valid KEGG identifier, using MetaCyc-provided KEGG annotations. While the KEGG identifiers used in iMK1208 were valid previously, they have since been removed from the KEGG database. Unfortunately, no changelogs are available to trace such revisions. The quality of Sco4 was evaluated through various simulations. It displayed the same performance as iMK1208 in growth prediction on 64 different nutrient sources, with a consistent sensitivity of 90.6% (S8 Table). Experimentally measured growth rates in batch and chemostat cultivations were in good correlation with the growth rates predicted by Sco4 (Fig 5A). A recent large-scale mutagenesis study produced and analyzed 51,443 S. coelicolor mutants, where each mutant carried a single Tn5 transposition randomly inserted in the genome 37. No transposition insertions were detected in 79 so-called cold regions of the genome, harboring 132 genes, of which 65 are annotated to reactions in Sco4 (S9 Table). The 132 genes are potentially essential, as insertions into these loci would have resulted in a lethal phenotype. However, as it is unclear whether gene essentiality is truly the cause behind the cold regions, we take the more conservative assumption that genes located outside cold regions are not essential and compared the non-essential gene sets. Simulation with Sco4 indicates a specificity (or true negative rate) of 0.901, which is an increase over the 0.876 of the previous model (Fig 5B). The S. coelicolor genome project revealed a dense array of secondary metabolite gene clusters both in the core and arms of the linear chromosome (Bentley et al. 2002), and extensive efforts have been made to elucidate these biosynthetic pathways (Van Keulen and Dyson, 2014). The previous GEM of S. coelicolor included only three of these pathways (i.e. actinorhodin, calcium-dependent antibiotic and undecylprodigiosin). Through our de novo reconstruction, we captured the advances that have since been made in elucidating additional pathways: Sco4 describes the biosynthetic pathways of 6 more secondary metabolites (e.g. geosmin). These additional pathways were mainly obtained from the MetaCyc-based reconstruction (Fig 4C). The expanded description of secondary metabolism was used to predict potential metabolic engineering targets for efficient antibiotic production in S.
coelicolor. Flux scanning with enforced objective function (FSEOF) 34 was applied to all secondary metabolic pathways in Sco4, and the suggested overexpression targets were compared, with significant overlap between different classes of secondary metabolites (Fig 6, S10 Table). In addition, several targets were predicted to increase production of all modelled secondary metabolites. Three reactions, constituting the pathway from histidine to N-formimidoyl-L-glutamate and catalyzed by SCO3070, SCO3073 and SCO4932, were commonly identified as potential targets (S10 Table). The RAVEN toolbox aims to assist constraint-based modelling with a focus on network reconstruction and curation. A growing number of biological databases have been incorporated for automated GEM reconstruction (Fig 1). The generation of tissue/cell-type-specific models through task-driven model reconstruction (tINIT) has been incorporated into RAVEN 2.0 as a built-in resource for human metabolic modelling 19, 39. RAVEN 2.0 was further expanded in this study by integrating the MetaCyc database, including experimentally elucidated pathways and chemically balanced reactions, as well as associated enzyme sequences (21). This key enhancement brings new features toward high-quality reconstruction, such as the inclusion of transport and spontaneous reactions (Table 1). The performance of RAVEN 2.0 in de novo reconstruction was demonstrated by the large overlap of reactions between the automatically obtained draft model of S. coelicolor and the manually curated iMK1208 model 32. This indicates that de novo reconstruction with RAVEN is an excellent starting point towards developing a high-quality model, while a combined de novo reconstruction can be produced within hours on a personal computer. We used the de novo reconstructions to curate the existing iMK1208 model, and the resulting Sco4 model was expanded with numerous reactions, metabolites and genes, in part representing recent progress in studies on the metabolism of S. coelicolor and related species (Fig 5). We have exploited this new information from biological databases to predict novel targets for metabolic engineering toward establishing S. coelicolor as a potent host for a wide range of secondary metabolites (Fig 7). Therefore, RAVEN 2.0 can be used not only for de novo reconstruction but also for model curation and continuous updating, which is necessary for a published GEM to keep pace with incremental knowledge. We thus deposited Sco4 as an open GitHub repository for collaborative development with version control. While RAVEN 2.0 addresses several obstacles and significantly improves GEM reconstruction and curation, a number of challenges remain to be resolved. One major obstacle encountered is the matching of metabolites, whether by name or identifier (e.g. KEGG, MetaCyc, ChEBI). Incompatible metabolite nomenclature and incomplete and incorrect annotations all impede fully automatic matching and instead require intensive manual curation, especially when comparing and combining GEMs from different sources. Efforts have been made to address these issues, e.g.
by simplifying manual curation using modelBorgifier 40 ., Particularly worth noting is MetaNetX 41 , where the MNXref namespace aims to provide a comprehensive cross-reference between metabolites and reactions from a wide range of databases , assisting model comparison and integration ., Future developments in this direction will ultimately leverage this information to automatically reconcile metabolites and reactions across GEMs ., Another major challenge is evaluation and tracking of GEM quality ., Here we evaluated Sco4 with growth and gene essentiality simulations ( Fig 5 , S8 Table , S9 Table ) ; however , the GEM modelling community would benefit from these and additional quality tests according to community standards ., Exciting ongoing progress here is memote , an open-source software package under development that contains a community-maintained , standardized set of metabolic model tests 42 ., Given that the YAML export functionality in RAVEN already supports convenient tracking of model changes in a GitHub repository , this should ideally be combined with tracking model quality with memote , rendering RAVEN suitable for future GEM reconstruction and curation needs ., The RAVEN Toolbox 1 . 0 was released as an open-source MATLAB package 9 that has since seen minor updates and bugfixes ., Since 2016 , the development of RAVEN has been organized and tracked at a public GitHub repository ( https://github . com/SysBioChalmers/RAVEN ) ., This repository provides a platform for the GEM reconstruction community , with users encouraged to report bugs , request new features and contribute to the development ., The RAVEN Toolbox is based on a defined model structure ( S11 Table ) ., Design choices dictate minor differences between COBRA and RAVEN structures; however , bi-directional model conversion is supported through ravenCobraWrapper ., Through resolving previously conflicting function names , RAVEN 2 .
0 is now fully compatible with the COBRA Toolbox ., Detailed documentation on the purpose , inputs and outputs for each function is provided in the doc folder ., Novel algorithms were developed to facilitate de novo GEM reconstruction by utilizing the MetaCyc database 26 ., In this module , corresponding MATLAB structures were generated from MetaCyc data files ( version 21 . 0 ) that contained 3118 manually curated pathways with 13 , 689 metabolites and 15 , 309 reactions ( Fig 7 ) ., A total of 17 , 394 enzymes are associated with these pathways and their protein sequences are included ( protseq . fsa ) ., Information from these structures is parsed by getModelFromMetaCyc to generate a model structure containing all metabolites , reactions and enzymes ., This MetaCyc model can subsequently be used for de novo GEM reconstruction through the getMetaCycModelForOrganism function ( Fig 7 ) ., A draft model is generated from MetaCyc enzymes ( and associated reactions and metabolites ) that show homology to the query protein sequences ., A benefit is that MetaCyc reactions are mass- and charge-balanced , while curated transport enzymes in MetaCyc allow inclusion of transport reactions into the draft model ., In addition , MetaCyc provides 515 reactions that may occur spontaneously ., As such reactions have no enzyme association , they are excluded from sequence-based reconstruction and can turn into gaps in the generated models ., By cataloguing spontaneous reactions in MetaCyc , the addSpontaneousRxns function can retrieve spontaneous reactions depending on the presence of the relevant reactants in the draft model ., In addition to MetaCyc-based GEM reconstruction , RAVEN 2 . 0 can utilize the KEGG database for de novo GEM reconstruction ., The reconstruction algorithms were significantly enhanced in multiple aspects: the reformatted KEGG database in MATLAB format is updated to version 82 .
0; and the pipeline to train KEGG Orthology ( KO ) -specific hidden Markov models is expanded ., Orthologous protein sequences , associated with a particular KO , are organised into non-redundant clusters with CD-HIT 43 ., These clusters are used as input in multiple-sequence alignment with MAFFT 44 , for increased accuracy and speed ., The hidden Markov models ( HMMs ) are then trained for prokaryotic and eukaryotic species with various protein redundancy cut-offs ( 100% , 90% or 50% ) using HMMER3 45 and can now be automatically downloaded when running getKEGGModelForOrganism ., To capitalize on the complementary information from MetaCyc- and KEGG-based reconstructions , RAVEN 2 . 0 facilitates combining draft models from both approaches into one unified draft reconstruction ( Fig 2 ) ., Prior to combining , reactions shared by MetaCyc- and KEGG-based reconstructions are mapped using MetaCyc-provided cross-references to their respective KEGG counterparts ( S12 Table ) ., Additional reactions are associated by linkMetaCycKEGGRxns through matching the metabolites , aided by cross-references between MetaCyc and KEGG identifiers ( S13 Table ) ., Subsequently , the combineMetaCycKEGGModels function thoroughly queries the two models for identical reactions , discarding the KEGG versions while keeping the corresponding MetaCyc reactions ., In the combined model , the MetaCyc naming convention is preferentially used , such that unique metabolites and reactions from the KEGG-based draft model are replaced with their MetaCyc equivalents whenever possible ., The combined draft model works as a starting point for additional manual curation , to result in a high-quality reconstruction ., RAVEN 2 . 0 contains a range of additional enhancements ., Linear problems can be solved through either the Gurobi ( Gurobi Optimization Inc .
, Houston , Texas ) or MOSEK ( MOSEK ApS , Copenhagen , Denmark ) solvers ., Various file formats are supported for import and export of models , including Microsoft Excel through Apache POI ( The Apache Software Foundation , Wakefield , Massachusetts ) , the community standard SBML Level 3 Version 1 FBC Package Version 2 through libSBML 46 and YAML for easy tracking of differences between model files ., Meanwhile , backwards compatibility ensures that Excel and SBML files generated by earlier RAVEN versions can still be imported ., An improved GEM of S . coelicolor , called Sco4 as it is the fourth major published model , was generated through RAVEN 2 . 0 following the pipeline illustrated in Fig 3 ., The model is based on the complete genome sequences of S . coelicolor A3 ( 2 ) , including the chromosome and two plasmids ( GenBank accession: GCA_000203835 . 1 ) 27 ., A MetaCyc-based draft model was generated with getMetaCycModelForOrganism using default cut-offs ( bit-score ≥ 100 , positives ≥ 45% ) ., Two KEGG-based draft models were generated with getKEGGModelForOrganism by ( i ) using sco as the KEGG organism identifier , and ( ii ) querying the S .
coelicolor proteome against HMMs trained on prokaryotic sequences with 90% sequence identity ., These two models were merged with mergeModels , subsequently combined with the MetaCyc-based draft using combineMetaCycKEGGModels , followed by manual curation ., Reactions were mapped from iMK1208 to MetaCyc and KEGG identifiers in a semi-automated manner ( S1 Table ) ., Metabolites in iMK1208 were associated with MetaCyc and KEGG identifiers through examining the mapped reactions ( S2 Table ) ., Pathway gaps and invalid metabolite identifiers were thus detected and revised accordingly ., Manual curation of the combined draft and iMK1208 culminated in the Sco4 model ., Curation entailed identifying reactions from the combined draft , considering the absence of gene-associations in iMK1208; explicit subsystem and/or pathway information; support from both MetaCyc and KEGG reconstructions; additional literature information , as well as potential taxonomic conflicts ., Manual curation was particularly required for secondary metabolite biosynthetic pathways , due to high levels of sequence similarity among the synthetic domains of polyketide synthase and nonribosomal peptide synthetase 47 ., The identified new reactions were added to Sco4 , while retaining the previous manual curation underlying iMK1208 ., Spontaneous reactions were added through addSpontaneousRxns , while transport reactions annotated in the MetaCyc-based reconstruction were manually curated ., Gene essentiality was simulated on iMK1208 and Sco4 by the COBRA function singleGeneDeletion , with a more than 75% reduction in growth rate identifying essential genes ., Potential targets for metabolic engineering were predicted using flux scanning with enforced objective function ( FSEOF ) 34 ., The reconstruction and curation of Sco4 is provided as a MATLAB script in the ComplementaryScripts folder of the Sco4 GitHub repository ., The updated Sco4 model is deposited to a GitHub repository in MATLAB .
SBML L3V1 FBCv2 ., xml , Excel ., xlsx , YAML ., yml and flat-text . txt formats ( https://github . com/SysBioChalmers/Streptomyces_coelicolor-GEM ) ., Users can not only download the most recent version of the model , but also report issues and suggest changes ., Updates in the metabolic network or gene associations can readily be tracked by querying the difference in the flat-text model and YAML representations ., As such , Sco4 aims to be a community model , where improved knowledge and annotation will incrementally and constantly refine the model of S . coelicolor ., RAVEN is an open source software package available in the GitHub repository ( https://github . com/SysBioChalmers/RAVEN ) ., The updated S . coelicolor genome-scale metabolic model Sco4 is available as a public GitHub repository at ( https://github . com/SysBioChalmers/Streptomyces_coelicolor-GEM ) . | Introduction, Results, Discussion, Material and methods | RAVEN is a commonly used MATLAB toolbox for genome-scale metabolic model ( GEM ) reconstruction , curation and constraint-based modelling and simulation ., Here we present RAVEN Toolbox 2 . 0 with major enhancements , including:, ( i ) de novo reconstruction of GEMs based on the MetaCyc pathway database;, ( ii ) a redesigned KEGG-based reconstruction pipeline;, ( iii ) convergence of reconstructions from various sources;, ( iv ) improved performance , usability , and compatibility with the COBRA Toolbox ., Capabilities of RAVEN 2 . 0 are here illustrated through de novo reconstruction of GEMs for the antibiotic-producing bacterium Streptomyces coelicolor ., Comparison of the automated de novo reconstructions with the iMK1208 model , a previously published high-quality S . coelicolor GEM , exemplifies that RAVEN 2 . 0 can capture most of the manually curated model ., The generated de novo reconstruction is subsequently used to curate iMK1208 resulting in Sco4 , the most comprehensive GEM of S . 
coelicolor , with increased coverage of both primary and secondary metabolism ., This increased coverage allows the use of Sco4 to predict novel genome editing targets for optimized secondary metabolite production ., As such , we demonstrate that RAVEN 2 . 0 can be used not only for de novo GEM reconstruction , but also for curating existing models based on up-to-date databases ., Both RAVEN 2 . 0 and Sco4 are distributed through GitHub to facilitate usage and further development by the community ( https://github . com/SysBioChalmers/RAVEN and https://github . com/SysBioChalmers/Streptomyces_coelicolor-GEM ) . | Cellular metabolism is a large and complex network ., Hence , investigations of metabolic networks are aided by in silico modelling and simulations ., Metabolic networks can be derived from whole-genome sequences , through identifying what enzymes are present and connecting these to formalized chemical reactions ., To facilitate the reconstruction of genome-scale models of metabolism ( GEMs ) , we have developed RAVEN 2 . 0 ., This versatile toolbox can reconstruct GEMs quickly , through either the metabolic pathway databases KEGG and MetaCyc , or from homology with an existing GEM ., We demonstrate RAVEN's functionality through generation of a metabolic model of Streptomyces coelicolor , an antibiotic-producing bacterium ., Comparison of this de novo generated GEM with a previously manually curated model demonstrates that RAVEN captures most of the previous model , and we subsequently reconstructed an updated model of S . coelicolor: Sco4 ., Following this , we used Sco4 to predict promising targets for genetic engineering , which can be used to increase antibiotic production .
| enzymes, metabolic networks, enzymology, simulation and modeling, metabolites, network analysis, enzyme metabolism, molecular biology techniques, enzyme chemistry, research and analysis methods, secondary metabolites, sequence analysis, computer and information sciences, bioinformatics, proteins, biological databases, combined bisulfite restriction analysis, molecular biology, molecular biology assays and analysis techniques, biochemistry, sequence databases, database and informatics methods, biology and life sciences, metabolism | null |
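The merge of MetaCyc- and KEGG-based drafts and the recovery of spontaneous reactions described in the record above can be sketched as follows. This is an illustrative Python stand-in for RAVEN's MATLAB functions (combineMetaCycKEGGModels, addSpontaneousRxns); all reaction identifiers, metabolites, and cross-references below are invented placeholders, not real MetaCyc or KEGG entries.

```python
# Hypothetical cross-references from MetaCyc reaction IDs to KEGG IDs.
METACYC_TO_KEGG = {"RXN-14026": "R00754", "TRANS-RXN-123": "R02925"}

# Hypothetical spontaneous (non-enzymatic) reactions, keyed by ID, with
# their reactant sets; such reactions have no enzyme association and so
# are missed by sequence-based reconstruction.
SPONTANEOUS = {
    "SPONT-1": {"CO2", "WATER"},
    "SPONT-2": {"GLC-LACTONE", "WATER"},
}

def combine_drafts(metacyc_rxns, kegg_rxns):
    """Merge two draft reconstructions: drop KEGG reactions that have a
    MetaCyc counterpart (via cross-references), keeping the MetaCyc version."""
    covered = {METACYC_TO_KEGG[r] for r in metacyc_rxns if r in METACYC_TO_KEGG}
    return list(metacyc_rxns) + [r for r in kegg_rxns if r not in covered]

def add_spontaneous_rxns(model_rxns, model_metabolites):
    """Append spontaneous reactions whose reactants already occur in the
    model, closing gaps left by homology-based reconstruction."""
    extra = [rid for rid, reactants in sorted(SPONTANEOUS.items())
             if reactants <= set(model_metabolites) and rid not in model_rxns]
    return list(model_rxns) + extra

draft = combine_drafts(["RXN-14026", "TRANS-RXN-123"], ["R00754", "R09876"])
print(draft)  # the KEGG duplicate R00754 is discarded
print(add_spontaneous_rxns(draft, {"CO2", "WATER", "ATP"}))
```

In the real toolbox the matching additionally compares metabolites of unmapped reactions; this sketch only shows the cross-reference-driven deduplication step.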
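The essentiality criterion used in the record above (single gene deletions, with a more than 75% growth-rate reduction flagging essentiality) reduces to a simple threshold test once per-knockout growth rates are available from flux balance analysis. The sketch below is a hedged Python stand-in, not a real COBRA singleGeneDeletion call; gene names and growth values are made up.

```python
def essential_genes(growth_after_deletion, wild_type_growth, cutoff=0.75):
    """Return genes whose single deletion reduces growth by more than
    `cutoff`, i.e. residual growth below (1 - cutoff) of wild type."""
    threshold = (1.0 - cutoff) * wild_type_growth
    return sorted(g for g, mu in growth_after_deletion.items() if mu < threshold)

# Placeholder knockout growth rates (h^-1), as a real FBA run would yield.
knockouts = {"geneA": 0.0, "geneB": 0.058, "geneC": 0.010}
print(essential_genes(knockouts, wild_type_growth=0.06))  # ['geneA', 'geneC']
```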
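FSEOF, the target-prediction method applied in the record above, enforces product flux at stepwise increasing levels while maximizing growth, and reports reactions whose flux rises under that enforcement as overexpression candidates. Below is a minimal Python sketch on an invented five-reaction toy network; the closed-form fba() stands in for the linear-programming FBA solve a real implementation would perform at each step.

```python
# Toy network (hypothetical):
#   v0: uptake -> A,  v1: A -> B,  v2: B -> biomass,  v3: A -> P,  v4: P ->
def fba(enforced_product=0.0, uptake=10.0):
    """Closed-form growth-maximizing flux distribution for the toy chain:
    product flux is fixed at `enforced_product` (<= uptake), the rest of
    the carbon goes to biomass. Stands in for a real LP solve."""
    v3 = v4 = enforced_product
    v1 = v2 = uptake - enforced_product
    return [uptake, v1, v2, v3, v4]

def fseof(n_steps=5, fraction=0.9, max_product=10.0):
    """Reactions whose flux increases at every enforcement level relative
    to the unconstrained growth optimum are kept as candidate targets."""
    base = fba(0.0)
    targets = set(range(len(base)))
    for k in range(1, n_steps + 1):
        v = fba(fraction * max_product * k / n_steps)
        targets &= {i for i in range(len(v)) if v[i] > base[i] + 1e-9}
    return sorted(targets)

print(fseof())  # [3, 4]: the product-branch reactions carry increasing flux
```

With a genuine stoichiometric model, fba() would be replaced by an LP over S·v = 0 with flux bounds; the surrounding scan-and-intersect logic is the FSEOF idea itself.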
269 | journal.pcbi.1002637 | 2012 | Gene Network Homology in Prokaryotes Using a Similarity Search Approach: Queries of Quorum Sensing Signal Transduction | Comparing prokaryotic whole genome sequences to identify operons is a mature area of research 1 , 2 , 3 , 4 ., Orthologous operon identification can imply a secondary degree of relation between components , reaffirming Clusters of Orthologous Groups ( COG ) and other assignments of function as well as suggesting essentiality 5 ., This conservation of components also speaks to the conservation of signaling capacity in orthologous modular signaling operon-based units ., That is , we are interested in ascertaining the genetic modularity of signal transduction processing , in particular for those processes that operate within known , putative regulons ., Drawing partly on previous work investigating microsynteny and gene neighborhoods 3 , 6 , 7 , we developed a general similarity search approach , which we call the Local Modular Network Alignment Similarity Tool ( LMNAST ) ., LMNAST applies a BLAST-like heuristic to gene order and arrangement ., Resultant search hits help capture the conservation and phylogenetic dispersion of a given query modular network ., Using , as queries , contiguously abutting genes of prokaryotic modular signaling networks , LMNAST identifies and scores hits based on the minimum number of frank mutations in gene organization needed to arrive at a given putative system homolog when starting from the query ., Here , homology refers to similarity in relative gene order and relative transcriptional direction , after nucleotide level threshold filtering of gene elements based on BLAST 8 E-value ., For the purpose of evaluation , two small modular systems were used as test inputs: one was the E . coli lac system and the other was the LuxS regulated ( Lsr ) system ., In some ways , the two systems are quite similar ( Fig .
1 ) ., Both import and catabolize the small molecules that induce system expression ., For the lac system , this small molecule is , of course , lactose ., For the Lsr system , the small molecule is autoinducer-2 ( AI-2 ) ., AI-2 is a signaling molecule common among at least eighty bacterial species 9 ., As mediated through either the Lsr or LuxPQ systems , bacteria are believed to use AI-2 to guide population based phenotypes , a phenomenon termed quorum sensing 10 ., LuxPQ is a histidine kinase two component system , the regulon of which is distinct from Lsr and is not considered further ., Lsr is an interesting query because its distribution should help elucidate its putative , modular quorum sensing function 9 and because the known homologs differ in gene organization 10 , 11 , 12 ., Input consisted of an ordered list of gene elements ( for example , lacIZYA ) ., For each gene element a BLAST result file was generated using tblastn to search the nr/nt database for hits with E-values less than 0 . 
1 , narrowing the search space ., Each BLAST hit was assigned a character corresponding to the gene element queried ., BioPerl 13 was used to query Genbank databases and process data from retrieved files ., Nucleotide records with sufficiently proximal characters were investigated further ., The degree of similarity between a putative hit and its corresponding query was evaluated according to the number of deletions , insertions , and rearrangements required to generate the putative hit from the query ., Intra-hit gene duplications were disallowed as a simplification ., Consequently , deletion could be noted by character type inclusion ., Insertions of uncharactered elements between gene homologs were scored according to an affine gap rule whereby a portion of the deduction was scaled to the insertion length ., Rearrangements refer to altered relative order and relative gene direction ., Changed relative direction was only considered when relative order was maintained ., When this criterion was satisfied , relative order was evaluated in terms of adjacent homolog distance , disregarding insertions and deletions ., For each such structural dissimilarity there was a standard deduction in score ., Noncontiguous elements were dropped iteratively until a maximum score was reached for each putative hit ., When more than one putative hit version elicited equal scores in the same round , the version of the hit with the most characters was favored ., Putative hits with scores greater than zero were retained ., For evaluation purposes and to find a suitable balance between false positives and coverage completeness , each test query was run under both weak and stringent conditions ., Stringent criteria searches assumed accurate annotation ., Contrarily , weak criteria did not require genes to lie within the annotated coding sequence ., Moreover , characters annotated as “pseudo” or bounded outside “gene” annotation were accepted as homologous characters ., Weak criteria 
searches also allowed multiple genes to co-exist within the same annotation ., Additionally , as a concession to the possibility of longer range interactions between genes , reduced gap penalties were used in weak criteria searches ., Results described herein were derived using gap penalties of 1 and 2 with extension penalties of 0 . 3 and 1 for weak and stringent criteria searches , respectively ., Mean element homology ( meH ) is a normalized , ancillary measure of string similarity as evaluated by BLAST ., Useful for contrasting BLAST results to LMNAST hits , meH was calculated by normalizing each gene homolog's bit score to the maximum bit score for the entire corresponding BLAST result given a background subtraction of the minimum bit score ., These normalized bit scores were then averaged for all gene elements within an LMNAST hit ., A score of one indicates exact likeness whereas zero indicates the least degree of similarity ., Also , widening the query beyond the system of interest to include a nominal number of flanking genes , here termed “extended window searching , ” afforded additional contextualization of LMNAST hit results ., Finally , in evaluating certain low homology hits , nonscoring synonyms were used ., Nonscoring synonyms are elements with equivalent gene annotation but insufficient homology according to the initial E-value filter ., This is somewhat analogous to replacement in blastp ., We began evaluation of LMNAST by searching for the well characterized E . coli lac operon ., Specifically , the E . coli lac genes lacI ( BAE76127 ) , lacZ ( BAE76126 ) , lacY ( BAE76125 ) , and lacA ( BAE76124 ) ( spanning bp 360473 to 366734 of the Genbank nucleotide record AP009048 ) were used as a query ., The stringent criteria search yielded fewer hits than the corresponding weak criteria search ( 189 vs . 236 ) ., Of the hits derived from the stringent criteria search , complete and perfectly arranged lac systems were found in 26 unique E .
coli strains and S . enterica arizonae serovar 62:z4 , z23 ( meH 0 . 8 ) , the only Salmonella enterica serovar represented among all lac system hits , in keeping with its significant divergence from other serovars 14 ., A representation of E . coli hits in a phylogenetic context is available in Fig . 3a ., The average meH ( 0<meH<1 ) for these complete systems was 0 . 98 ., An extended window query with five additional genes on either side of the original search frame revealed eight complete systems with a hitchhiking , proximal cytosine deaminase after losing all other proximal genes ., Only one system with all four characters was entirely removed from the original query's proximal gene set , suggestive of negligible stability for the canonical system outside of a limited phylogenetic domain ., An additional 28 hits were bereft of one lac system character ( average meH 0 . 74 ) ., In all but three of these cases that missing gene was lacA ., Of these hits , ten had an additional frank structural change to a divergent expression pattern originating between lacI and lacZ characters ( e . g . in E . cloacae ) , likely increasing system sensitivity to lacI repression in these cases 15 ., Surprisingly , in other instances , extended window searching revealed the only proximal structural change to be a missing lacA gene ., This lacA degeneracy may be indicative of its relative functional unimportance compared to other lac system members 16 ., Some of the patterns described above can be inferred from coincidence heat charts ( Fig . 4 ) ., These matrices represent LMNAST results by the frequency of coincidence between gene characters within hits ., The shade of an index represents the frequency of hits where the row gene coincides with the column gene , normalized against the total number of hits containing the row gene , which itself is denoted by ( # ) ., For example , in Fig .
4 , the left-most matrix is a representation of a theoretical set of homolog fragments ( AB , BC , CD , ABC , BCD , and ABCD ) ., This simple set was constructed to reflect only unbiased homologous recombination , presumably resulting in chromosomal rearrangements alone ., In this set , B and C were extant in five inputs , while A and D were extant in three inputs ., All three inputs containing A also contained B , two also contained C , and one also contained D . This is reflected in the shades of the grids in the top row ., The middle matrix represents the coincidence distribution among LMNAST E . coli lac hits ., As an additional example , matrix element ( 2 , 1 ) is a rust color representing the 139 hits with a lacI character out of the 155 also containing lacZ ., Finally , the right-most matrix is the difference between the left and middle matrices ., This particular analysis suggests , for example , that lacI is relatively over-represented across all hits , and that nearly all other coincidences are under-represented; surprisingly , this includes coincidences involving the permease , lacY ., Unlike lacA , lacY is believed necessary for lactose catabolism , possibly pointing to the use of a lower affinity transporter in such cases ., On the other hand , the over-representation of lacI indicates an expected preference for the regulation of lactose catabolism ., Of the stringent criteria search results , 138 hits contained only two lac gene homologs ( average meH 0 .
28 ) ., Two gene homologs represent the natural minimum of individual characters that a homologous system may contain ., Such hits represented truncated systems , repurposed individual members , homoplasic convergence , or outright false positives ., The majority of these hits fell within clusters of shared Genbank annotation in 2D similarity plots , which compare meHs ( averaged BLAST homologies ) against LMNAST homologies , or put differently , average amino acid identities against the system's broader organizational identity ., Generically then , purely vertical displacements imply perfect conservation across species through either vertical or , more likely , recent horizontal gene transfer accompanied by amelioration , while purely horizontal displacements indicate recent gene loss and/or rearrangement ., For purposes of downstream analysis , it is interesting to speculate that the kinetics of the remaining genes are unaffected in cases of purely horizontal displacement ., For systems subject to HGT , such liberties must necessarily be taken with less confidence ., In the case of the stringent lac search , similarity plots revealed a great deal of structural variability in the lac operon homologs of E . coli and near E . coli species ( Fig . 5 ) ., Nonetheless , the canonical lac operon ( 26 ) and the paralogous evolved beta-galactosidase system ( 43 ) 17 are clearly the most dominant lac operon-homologs , perhaps partially reflecting the relative preponderance of fully sequenced E . coli strains ., Addressing the full breadth of two character homologs , 87 contained lacZ and lacY character types , all of which were adjacent , five of which were misdirected relative to one another ., Numerous truncated systems had high meH but imperfect organizational similarity ., This cohort was restricted to strains of E .
coli and closely related Shigella , Citrobacter , and Enterobacter species , reflecting a generally confined phylogenetic breadth among LMNAST lac hits ( Fig . 3b ) , and reinforcing the idea of limited lac horizontal gene transfer ( HGT ) 18 ., The remainder of the hits consisted of adjacent repurposed characters with functional valence around sugar metabolism ., This survey showed that LMNAST E . coli lac operon searches identified numerous ortholog and paralog instances ., Relative disparities in gene preservation , gene loss , and structural rearrangements bearing signaling implications were delineated ., While there was a significant degree of conformity to the standard genomic arrangement , the amount of diversity indicates that attention paid to related , non-canonical signaling units may be worthwhile ., Further testing of LMNAST was conducted with weak , stringent , and expanded window searches of the E . coli Lsr system ., The query Lsr system consists of a kinase ( LsrK: BAA15191 ) , a repressor ( LsrR: BAA15192 ) , ABC transporter genes ( LsrA: BAA15200 , LsrC: BAA15201 , LsrD: BAA15202 , and LsrB: BAE76456 ) , and phospho-AI-2 ( AI-2-P ) processing genes ( LsrF: BAE76457 , LsrG: BAE76458 ) ., Along with AI-2 , the Lsr system consists of multiple overlapping positive and negative feedback loops ., Multimeric LsrR represses system expression emanating from the intergenic region ., AI-2-P , itself catabolized by LsrF and LsrG , allosterically relieves that repression ., Thus , both expression troughs and peaks are tightly regulated 19 ., For the LMNAST search we used the Lsr genes spanning bp 1600331 to 1609003 of E . coli K12 substrain W3110 ( Genbank nucleotide record AP009048 ) ., The number of hits returned using stringent criteria totaled 419 ., Much like the lac operon , the Lsr system appeared subject to imperfect conservation ., Certainly , many fully sequenced E . coli bore exact Lsr homologs ( meH>0 . 
95 ) ., Exceptions were the truncated systems found in strains BL21 20 , REL606 20 , and E24377A , and the specific and complete excision of Lsr systems from an otherwise preserved gene order in B2 type E . coli ( Figs . 6 and S1 ) as revealed through expanded window searching ., Unlike the lac operon , numerous Lsr system homologs had perfect LMNAST homology but markedly reduced meH ( Fig . 7a ) ., This is suggestive of amelioration following recent HGT events ( which may itself be a reflection of a carefully tuned signal requiring the full complement and correct arrangement of Lsr elements ) ., Indeed , Lsr system GC content varied in accordance with the background GC content , ranging from 0 . 35 to 0 . 71 ., Finer scale GC analysis revealed a single consistent and curious feature across all hits with meH greater than 0 . 3: a sharply spiking dip in fractional GC content near the intergenic region ( Fig . S2 ) ., This dip is suggestive of a conserved DNA binding domain essential to the signal transduction process , which would also , however , be a regulatory feature outside the scope of LMNAST searches ., Imperfect LMNAST hits with meH greater than 0 . 3 , deviated from the theoretical distribution according to a bias towards the conservation of lsrB , F , and G , relative to the lsrA , C , and D importer genes ( Fig . 8 ) ., This may be attributable to the fact that lsrB , F , and G likely pass cell signaling information downstream 14 , 21 , 22 , whereas loss of Lsr importer function might be partially redundant to a low affinity rbs pathway 23 , the likely alternate AI-2 import pathway 19 , 24 , 25 ., In contrast to high meH systems , many systems with low meH ( <0 . 3 ) were involved in the metabolism of 5 carbon sugars , mainly ribitol and xylose , according to Genbank annotation ( Fig . 
7b ) ., Since AI-2 itself is mainly composed of a 5 carbon ring , such homology is simultaneously intriguing and unsurprising ., More generally among these low similarity hits , lsrK characters were commonly coincident with Lsr importer characters ( lsrA , C , D , and B ) , indicative of the functional link between such characters ., These various features were thrown more sharply into relief when measured against the proximal genetic background in an extended window search ., While a representation of hit variability preserving structural information can be had from trackback plots ( Fig . S1 ) , additional salient results from stringent Lsr extended window searching could be deduced from the more summary coincidence heat maps ( Fig . 9 ) ., The matrices indicate that lsrK and lsrA genes were strongly preserved among extended window hits ., Also , if either lsrF or lsrG were present , the remaining Lsr genes were likely present ., The complete system excision mentioned before was hinted at , especially in rows 3 and 4 , corresponding to the toxin/antitoxin hipAB system ., Intra-species variation of structural homology increased greatly when using stringent rather than weak criteria ( data not shown ) , mainly as a result of gene loss to pseudo gene conversion , mostly among transporter genes , a bias most easily explained as a matter of pure probability since there are more transporter genes than any other type , and a fact whose functional significance is blunted by the alternative AI-2 import pathway ., These initial E . coli searches motivated other orthologous Lsr system queries ., Full results for E . coli , S . Typhimurium , and B . cereus searches are available in Fig . S3 ., These additional searches helped identify other possible Lsr system homologs , HGT partners , and non-canonical system-associated gene candidates ., In Fig .
S3 , we delineate operon directionality and gene homology ., It is interesting to note that system variants exist among noted human pathogens: Yersinia pestis , Bacillus anthracis , and Haemophilus influenzae ., In some instances , lsrRK are either absent ( e . g . E . coli BL21 ) or are associated with altered intergenic regions implying altered regulatory control ( e . g . Yersinia pestis Antiqua ) ., In other cases transporter genes are distributed with altered bias due to position in the bidirectional operons ( e . g . Yersinia pseudotuberculosis PB1/+ ) ., In some cases there is no LsrFG component ( e . g . Shigella flexneri 2002017 ) ., LsrF and LsrG are AI-2-P processing enzymes that lower the intracellular AI-2-P level , thereby contributing to the repression of AI-2 induced genes ., Given even this modest degree of dispersion , it is nonetheless reasonable to suggest that the Lsr auto-induction system is , in fact , extant among scores of bacterial species and that because the organization of genes within the regulatory architecture is varied , the downstream phenotypic behaviors aligned with AI-2 regulated QS genes are likewise variable ., Thus , our results are in line with a general hypothesis that the AI-2 quorum sensing system is broadly distributed and that the specific needs of the bacteria in a given niche are met by disparate operon arrangements ., The overall phylogenetic distribution of the Lsr system mirrors that developed by Pereira et al .
in the cluster they denote as Group I 26 ., Here , however , details were fleshed out with different emphases ., The Lsr LMNAST search captured the diversity of pseudo gene conversion , structural rearrangement , and additional hitchhiking genes associated with the Lsr systems that exist in the present nr/nt database ., Moreover , inferences regarding regulatory Lsr system signals could be made that might also map to phylogenies or possibly , with much more effort , related ecological niches ., Results from the various LMNAST searches were reconciled by taking the highest scoring hit among overlaps within each nucleotide record ., In Fig . 10 , we overlaid LMNAST search results onto a phylogenetic tree 27 based on the E . coli genome and 16S data ., Interestingly , Lsr system homologs clustered mainly in gammaproteobacteria with the greatest density being among E . coli strains ., Diffusely manifesting in more distantly related bacterial species , the Lsr system appears to have been subject to several HGT events ., That is , the Lsr system is absent in numerous Enterobacteriaceae species , while HGT gain events happened at the root of the Bacillus cereus group , to R . sphaeroides and R . capsulatus separately , to Sinorhizobium meliloti , and to Spirochaeta smaragdinae ( Fig . 10 ) ., Curiously , while these bacteria occupy distinct ecological niches , they are all common to soil or water environments ., Multiple extended window searches indicated that S . enterica was the most proximal cluster for every Lsr system HGT candidate ., The sharing of a novel Lsr system-associated “mannose-6-phosphate isomerase” ( NP_460428 ) between Bacillus cereus group members , S . smaragdinae , and S . enterica , further strengthened the suggestion of HGT partnership ., The gene annotated as “mannose-6-phosphate isomerase” or “sugar phosphate isomerase , ” has recently been shown to be part of the LsrR regulon in Salmonella 28 ., Although not part of the E .
coli regulon , it was also associated with S . smaragdinae and B . cereus group orthologs ., In keeping with a possible AI-2-P processing role , it was consistently adjacent to lsrK ., Among gammaproteobacteria , parsimony suggests that two gain events of the Lsr system occurred: one deeply rooted in enterobacteriales and one in a pasteurellales ancestor ., In the enterobacteriales branch , besides Escherichia , Shigella , and Salmonella , Lsr organizational homologs were found in Enterobacter , Photorhabdus , and Xenorhabdus species , although most of these instances lacked importer genes ( lsrACDB ) ., While it is thought that regulatory proteins conserved across such long phylogenetic distances often regulate different targets 29 , the regulation of community-related functions by different manifestations of the Lsr system ( such as biofilm maturation checkpoints in E . coli 25 and possible biofilm dispersion in B . cereus 12 ) suggests a convergent tendency to leverage a quorum/environment sensing capacity inherent to the Lsr system ., Indirect influence over a broader regulon may be abetted by the involvement of AI-2 , the Lsr system substrate , in metabolic pathways 30 ., LMNAST is a program that evaluates similarity or homology on the level of gene organization , conducting a search for patterns and prevalence constrained by a BLAST E-value filter ., Program results overlaid onto phylogenetic data allow visual inspection of phylogenetic density and dispersion ., 2D homology plots display system variability among LMNAST orthologs , and when overlaid with genera/species clustering , reveal the degree of system conservation within and across genera/species when organizational homology decreases and element homology is constant ., Clustering also enables the identification of conserved system homologs ., Organizational information is lost when using coincidence heat charts , but suggestions of the underlying structural variability remain nonetheless ., This is
particularly true for coincidence representations of extended window searches ., For such searches , contextual associations with non-canonical genes may also emerge ., Trackback plots illustrate both variety and structural information , albeit in a less dense format ., These representations are especially useful in combination ., It should be noted that the results are composed almost entirely of excerpts from fully sequenced genomes ., Results are also biased by BLAST input , as characters with more element homologs ( e . g . lsrA ) appear more frequently in hits ., Generically , LMNAST identified query homologs with a variety of deletions , insertions , misordering , and misdirections ., While nearly any source of mutagenesis may result in a frank mutation affecting a system’s organizational homology , homologous recombination , insertion sequences , transposable elements , and combinations thereof are likely to be of particular consequence for LMNAST searches ., Deletions may be a result of pseudo gene conversion , of chromosomal rearrangements , or part and parcel of an insertion event—if the insertion results in a gap sufficiently large as to disconnect hit elements from one another ., In the case of such insertions , sufficiently weak criteria may be of use , with the caveat that decreased stringency increases the number of false positives ., From a signaling perspective , depending on the impacted elements and the nature of the inserted sequence , gap presence could result in system discoordination; and the longer the gap the more probable and severe the discoordination , most likely to the detriment of system function ., As for the specific test queries examined herein , while the lac operon is well characterized in its canonical form , there nonetheless exists a great deal of frank variation from the textbook case ., Of particular interest were homologous instances where structural rearrangement could influence self-regulation of component expression .,
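To make the gene-order search idea concrete , a minimal sketch , assuming a Smith-Waterman-style local alignment over gene "characters" with arbitrary match/mismatch/gap scores ( an illustration only , not the authors' LMNAST implementation; the truncated lsr hit below is invented ) :

```python
# Illustrative sketch only: Smith-Waterman-style local alignment over gene-order
# "characters" (gene names). Scores (match=2, mismatch=-1, gap=-1) are arbitrary
# choices for demonstration, not LMNAST's actual scoring.

def local_gene_order_alignment(query, hit, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between two gene-order sequences."""
    n, m = len(query), len(hit)
    # H[i][j]: best alignment score for an alignment ending at query[i-1], hit[j-1]
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = H[i - 1][j - 1] + (match if query[i - 1] == hit[j - 1] else mismatch)
            # Clamp at zero so the alignment stays local
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

lsr_query = ["lsrR", "lsrK", "lsrA", "lsrC", "lsrD", "lsrB", "lsrF", "lsrG"]
# A hypothetical hit whose importer genes lsrC and lsrD were lost
hit = ["lsrR", "lsrK", "lsrA", "lsrB", "lsrF", "lsrG"]

print(local_gene_order_alignment(lsr_query, lsr_query))  # perfect self-match: 16
print(local_gene_order_alignment(lsr_query, hit))  # deletion-tolerant score
```

Because the alignment is local and gaps are penalized rather than forbidden , a hit that has lost lsrC and lsrD to pseudo gene conversion still scores well , mirroring the tolerance to deletions and insertions described above .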
Also of note were its multiple signaling component deletions ., Such abbreviated homologs were frequently repurposed in a related context ., Complete lac operons were found among nearly all E . coli strains ., Incomplete lac operons were found to be distributed only among closely related Enterobacteriaceae species comprised almost entirely of Escherichia , Citrobacters , Enterobacters , and Serratias as expected based on limited lac operon HGT 18 ., This difference between the rates of decay for the two homology signals over phylogenetic space may be suggestive of distinct selection pressures guiding the two systems ., Also identified through LMNAST were conserved , E . coli-specific evolved beta-galactosidase systems 17 , demonstrating a capacity to find directly evolved but highly distinct ( meH∼0 . 19 ) homologs ., On par , Lsr system hit structural similarity was less well correlated with meH than lac operon results , a phenomenon presumably associated with apparent Lsr system HGT ., The Lsr system was phylogenetically dispersed more widely than the lac operon , even while its distribution remained densest among gammaproteobacteria ., Much like the lac operon , Lsr system structure was subject to significant variability ., lsrK and lsrR characters were common to many hits ., lsrF and lsrG were the least common; the inclusion of both elements nearly always coincided with the presence of all other Lsr characters as well ., Lsr-contextually associated genes and novel putative Lsr systems were also elucidated ., The dispersion of Lsr to bacteria as far afield as the S . 
smaragdinae , the first Spirochaeta to be fully sequenced 31 , is intriguing ., It suggests that while the depth of Lsr dispersion may not be significant , its exposed breadth will expand incrementally at a rate proportional to microbial genome sequencing ., While the direct regulon of such HGT systems is expected to be limited 29 , 32 , the proximity of the substrate to key metabolic pathways may allow the Lsr system to confer contextual phenotypic advantages by impacting downstream pathways with its capacity to recompartmentalize a metabolic intermediate ., Moreover , the known regulatory requirements for functional integration of the Lsr system are minimal , consisting entirely of interaction with the cAMP-CRP complex , which is deeply rooted in eubacteria ., Gene organization differences between dispersed Lsr homologs may indicate distinct signaling outcomes , in turn suggesting the appropriation of the Lsr system’s inherent quorum capacity to drive distinct phenotypes suited to a given bacterium’s needs within its particular niche ., Unlike the results for the lac operon , Lsr system results returned a large number of other-annotated , low homology systems ., This speaks to both the inherent difficulty of extrapolation based on homology and the utility of the additional , complementary homology measure yielded by LMNAST searching ., Overall , given the complexity of the results , numerous aspects may be of interest ., For example , extant variation of the queried modular systems , as captured by frank changes in gene organization , was revealed ., Several topological curiosities were also revealed ., For example , the Lsr system’s apparent dispersion through both horizontal and vertical inheritance could , in fact , suggest that quorum sensing behavior that is regulated by the Lsr system is conveyed as a root of selective advantage , as opposed to the specific regulon known to uptake small molecules that could otherwise be viewed as a carbon source ., By
considering our results in the context of common graphical tools of a complementary nature ( e . g . 2D similarity plots and coincidence heat maps ) , through LMNAST we offer a new avenue by which to explore this and other provocative questions . | Introduction, Methods, Results, Discussion | Bacterial cell-cell communication is mediated by small signaling molecules known as autoinducers ., Importantly , autoinducer-2 ( AI-2 ) is synthesized via the enzyme LuxS in over 80 species , some of which mediate their pathogenicity by recognizing and transducing this signal in a cell density dependent manner ., AI-2 mediated phenotypes are not well understood , however , as the means for signal transduction appears varied among species , while AI-2 synthesis processes appear conserved ., Approaches to reveal the recognition pathways of AI-2 will shed light on pathogenicity as we believe recognition of the signal is likely as important as , if not more important than , the signal synthesis ., LMNAST ( Local Modular Network Alignment Similarity Tool ) uses a local similarity search heuristic to study gene order , generating homology hits for the genomic arrangement of a query gene sequence ., We develop and apply this tool for the E .
coli lac and LuxS regulated ( Lsr ) systems ., Lsr is of great interest as it mediates AI-2 uptake and processing ., Both test searches generated results that were subsequently analyzed through a number of different lenses , each with its own level of granularity , from a binary phylogenetic representation down to trackback plots that preserve genomic organizational information ., Through a survey of these results , we demonstrate the identification of orthologs , paralogs , hitchhiking genes , gene loss , gene rearrangement within an operon context , and also horizontal gene transfer ( HGT ) ., We found a variety of operon structures that are consistent with our hypothesis that the signal can be perceived and transduced by homologous protein complexes , while their regulation may be key to defining subsequent phenotypic behavior . | Bacteria communicate with each other through a network of small molecules that are secreted and perceived by nearest neighbors ., In a process known as quorum sensing , bacteria communicate their cell density and certain behaviors emerge wherein the population of cells acts as a coordinated community ., One small signaling molecule , AI-2 , is synthesized by many bacteria so that in a natural ecosystem composed of many secreting cells of different species , the molecule may be present in an appreciable concentration ., The perception of the signal may be key to unlocking its importance , as some cells may recognize it at lower concentrations than others , etc ., We have created a searching algorithm that finds similar gene sets among various bacteria ., Here , we looked for signal transduction pathways similar to the one studied in E . coli ., We found exact replicas of that of E .
coli , but also found pathways with missing genes , added genes of unknown function , as well as different patterns by which the genes may be regulated ., We suspect these attributes may play a significant role in determining quorum sensing behaviors ., This , in turn , may lead to new discoveries for controlling groups of bacteria and possibly reducing the prevalence of infectious disease . | sequence analysis, bio-ontologies, biology, computational biology, signaling networks | null |
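As a rough sketch of the coincidence analysis discussed in the record above ( e . g . the observation that hits carrying lsrF or lsrG tend to carry the remaining Lsr characters ) , conditional co-occurrence can be tabulated from binary presence/absence calls; the hit table here is invented for illustration and is not the paper's data:

```python
# Toy sketch of a coincidence heat-map computation: given binary presence/absence
# of Lsr gene characters across hits, estimate the fraction of hits containing
# gene a that also contain gene b. The hit table is hypothetical.

GENES = ["lsrR", "lsrK", "lsrA", "lsrF", "lsrG"]
# Each row is one hypothetical hit; 1 = character present in the hit
HITS = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
]

def coincidence(hits):
    """C[a][b] = fraction of hits containing gene a that also contain gene b."""
    n = len(hits[0])
    C = [[0.0] * n for _ in range(n)]
    for a in range(n):
        with_a = [h for h in hits if h[a]]  # hits where gene a is present
        for b in range(n):
            if with_a:
                C[a][b] = sum(h[b] for h in with_a) / len(with_a)
    return C

C = coincidence(HITS)
# In this toy table, every hit carrying lsrF also carries lsrG:
print(C[GENES.index("lsrF")][GENES.index("lsrG")])  # 1.0
```

Each row of C answers: among hits that contain gene a , what fraction also contain gene b ., Rendering C as a heat map gives a Fig 9-style summary , at the cost of discarding gene-order information .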
809 | journal.pcbi.1005517 | 2,017 | Automatically tracking neurons in a moving and deforming brain | Optical neural imaging has ushered in a new frontier in neuroscience that seeks to understand how neural activity generates animal behavior by recording from large populations of neurons at cellular resolution in awake and behaving animals ., Population recordings have now been used to elucidate mechanisms behind zebra finch song production 1 , spatial encoding in mice 2 , and limb movement in primates 3 ., When applied to small transparent organisms , like Caenorhabditis elegans 4 , Drosophila 5 , and zebrafish 6 , nearly every neuron in the brain can be recorded , permitting the study of whole brain neural dynamics at cellular resolution ., Methods for segmenting and tracking neurons have struggled to keep up as new imaging technologies now record from more neurons over longer times in environments with greater motion ., Accounting for brain motion in particular has become a major challenge , especially in recordings of unrestrained animals ., Brains in motion undergo translations and deformations in 3D that make robust tracking of individual neurons very difficult ., The problem is compounded in invertebrates like C . elegans where the head of the animal is flexible and deforms greatly ., If left unaccounted for , brain motion not only prevents tracking of neurons , but it can also introduce artifacts that mask the true neural signal ., In this work we propose an automated approach to segment and track neurons in the presence of dramatic brain motion and deformation ., Our approach is optimized for calcium imaging in unrestrained C . 
elegans ., Neural activity can be imaged optically with the use of genetically encoded calcium-sensitive fluorescent indicators , such as GCaMP6s used in this work 7 ., Historically calcium imaging was often conducted in head-fixed or anesthetized animals to avoid challenges involved with imaging moving samples 4 , 8 , 9 ., Recently , however , whole-brain imaging was demonstrated in freely behaving C . elegans 10 , 11 ., C . elegans is a small transparent nematode , approximately 1 mm in length , with a compact nervous system of only 302 neurons ., About half of the neurons are located in the animal’s head , which we refer to as its brain ., Analyzing fluorescent images of moving and deforming brains requires algorithms to detect neurons across time and extract fluorescent signals in 3D ., Automated methods exist for segmenting and tracking fluorescently labeled cells during C . elegans embryogenesis 12 , and semi-automated methods are even able to track specific cells during embryo motion 13 , but to our knowledge these methods are not suitable for tracking neurons in adults ., Generally , several strategies exist for tracking neurons in volumetric recordings ., One approach is to find correspondences between neuron positions in consecutive time points , for example , by applying a distance minimization , and then stitching these correspondences together through time 14 ., This type of time-dependent tracking requires that neuron displacements for each time step are less than the distance between neighboring neurons , and that the neurons remain identifiable at all times ., If these requirements break down , even for only a few time points , errors can quickly accumulate ., Other common methods , like independent component analysis ( ICA ) 15 , are also exquisitely sensitive to motion and as a result they have not been successfully applied to recordings with large brain deformations ., Large inter-volume motion arises when the recorded image volume acquisition rate
is too low compared to animal motion ., Unfortunately , large inter-volume brain motion is likely to be a prominent feature of whole-brain recordings of moving brains for the foreseeable future ., In all modern imaging approaches there is a fundamental tradeoff between the following attributes: acquisition rate ( temporal resolution ) , spatial resolution , signal to noise , and the spatial extent of the recording ., As recordings seek to capture larger brain regions at single cell resolution , they necessarily compromise on temporal resolution ., For example , whole brain imaging in freely moving C . elegans has only been demonstrated at slow acquisition rates because of the requirements to scan the entire brain volume and expose each slice for a sufficiently long time ., At these rates , a significant amount of motion is present between image planes within a single brain volume ., Similarly , large brain motions also remain between sequential volumes ., Neurons can move the entire width of the worm’s head between sequential volumes when recording at 6 brain-volumes per second , as in 10 ., In addition to motion , the brain also bends and deforms as it moves ., Such changes to the brain’s conformation greatly alter the pattern of neuron positions , making constellations of neurons difficult to compare across time ., To track neurons in the presence of this motion , previous work that measured neural activity in freely moving C .
elegans utilized semi-automated methods that required human proofreading or manual annotation to validate each and every neuron-time point 10 , 11 ., This level of manual annotation becomes impractical as the length of recordings and the number of neurons increase ., For example , 10 minutes of recorded neural activity from 10 had over 360 , 000 neuron time points and required over 200 person-hours of proofreading and manual annotation ., Here , we introduce a new time-independent algorithm that uses machine learning to automatically segment and track all neurons in the head of a freely moving animal without the need for manual annotation or proofreading ., We call this technique Neuron Registration Vector Encoding , and we use it to extract neural signals in unrestrained C . elegans expressing the calcium indicator GCaMP6s and the fluorescent label RFP ., We introduce a method to track over 100 neurons in the brain of a freely moving C . elegans ., The analysis pipeline is made of five modules and an overview is shown in Fig 1 ., The first three modules , “Centerline Detection , ” “Straightening , ” and “Segmentation , ” collectively assemble the individually recorded planes into a sequence of 3D volumes and identify each neuron’s location in each volume ., The next two modules , “Registration Vector Construction” and “Clustering , ” form the core of the method and represent a significant advance over previous approaches ., Collectively , these two modules are called “Neuron Registration Vector Encoding .
”, The “Registration Vector Construction” module leverages information from across the entire recording in a time-independent way to generate feature vectors that characterize every neuron at every time point in relation to a repertoire of brain conformations ., The “Clustering” module then clusters these feature vectors to assign a consistent identity to each neuron across the entire recording ., A final module corrects for errors that can arise from segmentation or assignment ., The implementation and results of this approach are described below ., Worms expressing the calcium indicator GCaMP6s and a calcium-insensitive fluorescent protein RFP in the nuclei of all neurons were imaged during unrestrained behavior in a custom 3D tracking microscope , as described in 10 ., Only signals close to the cell nuclei are measured ., Two recordings are presented in this work: a new 8 minute recording of an animal of strain AML32 and a previously reported 4 minute recording of strain AML14 first described in 10 ., The signal of interest in both recordings is the green fluorescence intensity from GCaMP6s in each neuron ., Red fluorescence from the RFP protein serves as a reference for locating and tracking the neurons ., The microscope provides four raw image streams that serve as inputs for our neural tracking pipeline , seen in Fig 2A ., They are: ( 1 ) low-magnification dark-field images of the animal’s body posture , ( 2 ) low-magnification fluorescent images of the animal’s brain , ( 3 ) high-magnification green fluorescent images of single optical slices of the brain showing GCaMP6s activity , and ( 4 ) high-magnification red fluorescent images of single optical slices of the brain showing the location of RFP ., The animal’s brain is kept centered in the field of view by real-time feedback loops that adjust a motorized stage to compensate for the animal’s crawling ., To acquire volumetric information , the high magnification imaging plane scans back and forth along the axial
dimension , z , at 3 Hz as shown in Fig 2B , acquiring roughly 33 optical slices per volume , sequentially , for 6 brain-volumes per second ., The animal’s continuous motion causes each volume to be arbitrarily sheared ., Although the image streams operate at different volume acquisition rates and on different clocks , they are later synchronized by flashes of light that are simultaneously visible to all cameras ., Each image in each stream is given a timestamp on a common timeline for analysis ., Each of the four imaging streams is spatially aligned to the others in software using affine transformations found by imaging fluorescent beads ., An example of the high magnification RFP recording is shown in S1 Movie ., The animal’s posture contains information about the brain’s orientation and about any deformations arising from the animal’s side-to-side head swings ., The first step of the pipeline is to extract the centerline that describes the animal’s posture ., Centerline detection in C . elegans is an active field of research ., Most algorithms use intensity thresholds to detect the worm’s body and then use binary image operations to extract a centerline 16–18 ., Here we use an open active contour approach 19 , 20 to extract the centerline from dark field images with modifications to account for cases when the worm’s body crosses over itself as occurs during so-called “Omega Turns .
”, In principle any method , automated or otherwise , that detects the centerlines should be sufficient ., At rare times when the worm is coiled and the head position and orientation cannot be determined automatically , the head and the tail of the worm are manually identified ., The animal’s centerline allows us to correct for gross changes in the worm’s position , orientation , and conformation ( Fig 3a ) ., We use the centerlines determined by the low magnification behavior images to straighten the high magnification images of the worm’s brain ., An affine transform must be applied to the centerline coordinates to transform them from the dark field coordinate system into the coordinate system of the high magnification images ., Each image slice of the worm brain is straightened independently to account for motion within a single volume ., The behavior images are taken at a lower acquisition rate than the high magnification brain images , so a linear interpolation is used to obtain a centerline for each slice of the brain volume ., In each slice , we find the tangent and normal vectors at every point of the centerline ( Fig 3b ) ., The points are interpolated with a single pixel spacing along the centerline to preserve the resolution of the image ., The image intensities along each of the normal directions are interpolated and the slices are stacked to produce a straightened image in each slice ( Fig 3c ) ., In the new coordinate system , the orientation of the animal is fixed and the gross deformations from the worm’s bending are suppressed ., More subtle motion and deformation , however , remain ., We further reduce shearing between slices using standard video stabilization techniques 21 ., Specifically , bright-intensity peaks in the images are tracked between neighboring image slices ., The coordinates of these peaks are used to calculate the affine transformations between neighboring slices of the volume using least squares ., All slices are registered to
the middle slice by applying these transformations sequentially throughout the volume ., Each slice would undergo transformations for every slice in between it and the middle slice to correct shear throughout the volume ., A final rigid translation is required to align each volume to the first volume of the recording ., The translations are found by finding an offset that maximizes the cross-correlation between each volume and the initial volume ., A video of straightening is shown in S1 Movie ., Straightened images are used for the remaining steps of the analysis pipeline ., Only the final measurement of fluorescence intensity is performed in the original unstraightened coordinate system ., Before neuron identities can be matched across time , we must first segment the individual neurons within a volume to recover each neuron’s size , location , and brightness ( Fig 3d and 3e ) ., Many algorithms have been developed to segment neurons in a dense region 22 , 23 ., We segment the neurons by finding volumes of curvature in fluorescence intensity in the straightened volumes ., After an initial smoothing , we compute the 3D Hessian matrix at each point in space and threshold for points where all of the three eigenvalues of the Hessian matrix are negative ., This process selects for regions around intensity peaks in three dimensions ., In order to further divide regions into objects that are more likely to represent neurons , we use a watershed separation on the distance transform of the thresholded image ., The distance transform is found by replacing each thresholded pixel with the Euclidean distance between it and the closest zero pixel in the thresholded image ., This approach is sufficient to segment most neurons ., Occasionally neurons are missed or two neurons are incorrectly merged together ., These occasional errors are corrected automatically later in the pipeline ., Extracting neural signals requires the ability to match neurons found at different time
points ., Even after gross alignment and straightening , neurons in our images are still subject to local nonlinear deformations and there is significant movement of neurons between volumes ., This remaining motion and deformation is clearly visible , for example , in S1 Movie ., Rather than tracking neurons sequentially in time , the neurons in each volume are characterized based on how they match to neurons in a set of reference volumes ., Our algorithm compares constellations of neurons in one volume to unannotated reference volumes and assigns correspondences or “matches” between the neurons in the sample and each reference volume ., We modified a point-set registration algorithm developed by Jian and Vemuri 24 to do this ( Fig 4a ) ., The registration algorithm represents two point-sets , a sample point-set denoted by X = {x_i} and a reference point-set indicated by R = {r_i} , as Gaussian mixtures and then attempts to register them by deforming space to minimize the distance between the two mixtures ., In their implementation , each point is modeled by a 3D Gaussian with uniform covariance ., Since we are matching images of neurons rather than just points , we can use the additional information from the size and brightness of each neuron ., We add this information to the representation of each neuron by adjusting the amplitude and standard deviation of the Gaussians ., The Gaussian mixture representation of an image is given by , f ( ξ , X ) = ∑_i A_i exp ( −‖ξ − x_i‖^2 / ( 2 ( λσ_i ) ^2 ) ) , ( 1 ) , where A_i , x_i , and σ_i are the amplitude , mean , and standard deviation of the i-th Gaussian ., These parameters are derived from the brightness , centroid , and size of the segmented neuron , while ξ is the 3D spatial coordinate ., A scale factor λ is added to the standard deviation to scale the size of each Gaussian ., This will be used later during gradient descent ., The sample constellation of neurons is then represented by the Gaussian mixture f ( ξ , X ) ., Similarly , the
reference constellation’s own neurons are represented as f ( ξ , R ) ., To match a sample constellation of neurons X with a reference constellation of neurons R , we use the non-rigid transformation u : ℝ^3 ↦ ℝ^3 ., The transformation maps X to uX such that the L2 distance between f ( ξ , uX ) and f ( ξ , R ) is minimized with some constraint on the amount of deformation ., This can be written as an energy minimization problem , with the energy of the transformation , E ( u ) , written as , E ( u ) = ∫ [ f ( ξ , uX ) − f ( ξ , R ) ]^2 dξ + E_Deformation ( u ) , ( 2 ) ., Note that the point-sets X and R are allowed to have different numbers of points ., We model the deformations as a thin-plate spline ( TPS ) ., The TPS transformation equations and resulting form of E_Deformation ( u ) are shown in the methods ., The minimization of E is found by gradient descent ., Working with Gaussian mixtures as opposed to the original images allows us to model the deformations and analytically compute the gradients of Eq 2 , making gradient descent more efficient ., The gradient descent approach used here is similar to that outlined by Jian and Vemuri 25 ., Since the energy landscape has many local minima , we initially chose a large scale factor , λ , to increase the size of each Gaussian and smooth over smaller features ., Gradient descent is iterated multiple times with λ decreasing each time ., After the transformation , sample points are matched to reference points by minimizing distances between assigned pairs using an algorithm from 14 ., The matching is not greedy , and neurons in the sample that are far from any neurons in the reference are not matched ., A neuron at x_i is assigned a match v_i to indicate which neuron in the set R it was matched to ., For example if x_i matched with r_j when X is registered to R , then v_i = j ., If x_i has no match in R , then v_i = ∅ ., The modified non-rigid point-set registration algorithm described above allows us to compare one
constellation of neurons to another ., In principle , neuron tracking could be achieved by registering the constellation of neurons at each time-volume to a single common reference ., That approach is susceptible to failures in non-rigid point-set registration ., Non-rigid point-set registration works well when the conformation of the animal in the sample and the reference are similar , but it is unreliable when there are large deformations between the sample and the reference , as happens with some regularity in our recordings ., In addition , this approach is especially sensitive to any errors in segmentation , particularly in the reference ., An alternative approach would be to sequentially register neurons in each time volume to the next time-volume ., This approach , however , accumulates even small errors and quickly becomes unreliable ., Instead of either of those approaches , we use registration to compare the constellation of neurons at each time volume to a set of reference time-volumes that span a representative space of brain conformations ( Fig 4b ) , as described below ., The constellation of neurons at a particular time in our recording is given by X_t , and the position of the i-th neuron at time t is denoted by x_{i , t} ., We select a set of K reference constellations , each from a different time volume X_t in our recording , so as to achieve a representative sampling of the many different possible brain conformations the animal can attain ., These K reference volumes are denoted by {R_1 , R_2 , R_3 , … , R_K} ., We use 300 volumes spaced evenly through time as our reference constellations ., Each X_t is separately matched with each of the references , and each neuron in the sample , x_{i , t} , gets a set of matches v_{i , t} = { v_{i , t}^1 , v_{i , t}^2 , v_{i , t}^3 , … , v_{i , t}^K } , one match for each of the K references ., This set of matches is a feature vector which we call a Neuron Registration Vector ., It describes the neuron’s location in relation to its
neighbors when compared with the set of references. This vector can be used to identify neurons across different times. We find that 300 reference volumes create feature vectors that are sufficiently robust to identify neurons in our recordings. What determines the optimal number of reference volumes? As long as the reference volumes contain a representative sample of the space of brain conformations occupied during our recordings, the number of reference volumes needed to create a robust feature vector depends only on the size of this conformation space. Because the conformation space of a real brain in physiological conditions is finite, there exists some number of reference volumes beyond which adding more provides no additional information. Crucially, the worm brain seems to explore this finite conformation space quickly relative to the time scales of our recordings. As a result, the number of required reference volumes should not depend on recording length, at least for the minutes-long timescales that we consider here. The neuron registration vector provides information about a neuron's position relative to its neighbors, and how that relative position compares across many reference volumes. A neuron with a particular identity will match similarly to the set of reference volumes, and thus that neuron will have similar neuron registration vectors over time. Clustering similar registration vectors allows for the identification of that particular neuron across time (Fig 4c and 4d). To illustrate the motivation for clustering, consider a neuron with identity s that is found at different times in two sample constellations X1 and X2. When X1 and X2 have similar deformations, the neuron s from both constellations will be assigned the same set of matches when registered to the set of reference constellations, and as a result the corresponding neuron registration vectors v1 and v2 will be identical. This is
true even if the registration algorithm itself fails to correctly match neuron s in the sample to its true counterpart in the reference. As the deformations separating X1 and X2 become larger, the distance between the feature vectors v1 and v2 also becomes larger. This is because the two samples will be matched to different neurons in some of the reference volumes, as each sample is more likely to register poorly with references that are far from it in the space of deformations. Crucially, the reference volumes consist of instances of the animal in many different deformation states. So while errors in registering some samples will exist for certain references, they do not persist across all references, and thus do not affect the entire feature vector. For the biologically relevant deformations that we observe, the distance between v1 and v2 will be smaller if both are derived from neuron s than if they were derived from s and another neuron. We can therefore cluster the feature vectors to produce groups that consist of the same neuron found at many different time points. The goal of clustering is to assign each neuron at each volume to a cluster representing that neuron's identity. Clustering is performed on the list of neuron registration vectors from all neurons at all times, {vi,t}. Each match in the vector, v_{i,t}^k, is represented as a binary vector of 0s with a 1 at the v_{i,t}^k-th position. The size of this binary vector is equal to the number of neurons in Rk. The feature vector is the concatenation of the binary vectors from all matches to the K reference constellations. For computational efficiency, a two-step process was used to perform the clustering: first, agglomerative hierarchical clustering was used on the neurons from an initial subset of volumes to define the clusters. Next, neurons from all volumes at all times were assigned to the nearest cluster as defined by
correlation distance to the clusters' centers of mass. Assignments were made so as to ensure that a given cluster is assigned to at most one neuron per volume. Details of this clustering approach are described in the methods. Each cluster is given a label {S1, S2, S3, ...} which uniquely identifies a single neuron over time, and each neuron at each time xi,t is given an identifier si,t corresponding to the cluster to which that neuron-time belongs. Neurons that are not classified into one of these clusters are removed, because they are likely artifactual or represent a neuron that is segmented too poorly for inclusion. Neuron Registration Vector Encoding successfully identifies segmented neurons consistently across time. A transient segmentation error, however, would necessarily lead to missing or misidentified neurons. To identify and correct for missing and misidentified neurons, we check each neuron's locations and fill in missing neurons using a consensus comparison and interpolation in a TPS-deformed space. For each neuron identifier s and time t⋆, we use all other point-sets {Xt} to guess what that neuron's location might be. This is done by finding the TPS transformation ut→t⋆: Xt ↦ Xt⋆ that maps the identified points from Xt to the corresponding points in Xt⋆, excluding the point s. Since the correspondences between neurons have already been determined, ut→t⋆ can be found by solving for the parameters of the TPS equation (see methods). The position estimate is then given by ut→t⋆(xi,t) with i selected such that si,t = s. This results in a set of points representing the predicted locations of the neuron at time t⋆ as inferred from the other volumes. When a neuron identifier is missing for a given time, the position of that neuron s is inferred by consensus. Namely, the correct location is deemed to be the centroid of the set of inferred locations, weighted by the underlying image intensity.
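This consensus step can be sketched in a few lines. The version below is a simplified illustration, not the paper's code: the `pos`/`ids` data layout is an assumption, SciPy's thin-plate-spline RBF interpolator stands in for the transformation ut→t⋆, and a plain mean replaces the intensity-weighted centroid (since no image data is available here).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def consensus_position(target, t_star, pos, ids):
    """Predict where neuron `target` should be at volume `t_star` by
    warping every other volume onto t_star with a thin-plate spline
    fit on the shared, already-identified neurons (excluding `target`),
    then averaging the mapped guesses.

    pos[t]: (n_t, D) array of neuron coordinates at volume t
    ids[t]: length-n_t list of cluster identities at volume t
    """
    tgt_idx = {s: i for i, s in enumerate(ids[t_star])}
    guesses = []
    for t in pos:
        if t == t_star:
            continue
        src_idx = {s: i for i, s in enumerate(ids[t])}
        if target not in src_idx:
            continue
        shared = [s for s in src_idx if s in tgt_idx and s != target]
        if len(shared) < 4:  # need enough landmarks for a stable warp
            continue
        src = pos[t][[src_idx[s] for s in shared]]
        dst = pos[t_star][[tgt_idx[s] for s in shared]]
        warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")
        guesses.append(warp(pos[t][[src_idx[target]]])[0])
    if not guesses:
        raise ValueError("no volume provided a usable estimate")
    # unweighted stand-in for the paper's intensity-weighted centroid
    return np.mean(guesses, axis=0)
```

Because the thin-plate spline includes an affine polynomial term, any volume related to t⋆ by a rigid shift is warped back exactly, and the averaging over many volumes damps the error contributed by any single badly registered volume.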
This weighted centroid is also used if the currently identified location of neuron s lies more than 3 standard deviations away from the centroid of the set of locations inferred from the other volumes, implying that an error may have occurred in that neuron's classification. This is shown in Fig 5, where neuron 111 is correctly identified in volume 735, but the label for neuron 111 is incorrectly located in volume 736. In that case the weighted centroid from consensus voting was used. To assess the accuracy of the Neuron Registration Vector Encoding pipeline, we applied our automated tracking system to a 4 minute recording of whole brain activity in a moving C. elegans that had previously been hand annotated and published 10. A custom Matlab GUI was used for manually identifying and tracking neurons. Nine researchers collectively annotated 70 neurons from each of the 1519 volumes in the 4 minute video. This is far fewer than the 181 neurons predicted to be found in the head 26. The discrepancy is likely caused by a combination of imaging conditions and human nature. The short exposure time of our recordings makes it hard to resolve dim neurons, and the relatively long recordings tend to cause photobleaching, which makes the neurons even dimmer. Additionally, human researchers naturally tend to select only those neurons that are brightest and most unambiguous for annotation, and tend to skip dim neurons or those that are most densely clustered. We compared human annotations to our automated analysis in this same dataset. We performed the entire pipeline, including detecting centerlines, worm straightening, segmentation, neuron registration vector encoding and clustering, and correction. Automated tracking detected 119 neurons from the video compared to 70 from the human annotators. In each volume, we paired the automatically tracked neurons with those found by manual detection by finding the closest matches
in the unstraightened coordinate system ., A neuron was perfectly tracked if it matched with the same manual neuron at all times ., Tracking errors were flagged when a neuron matched with a manual neuron that was different than the one it matched with most often ., The locations of the detected neurons are shown in Fig 6A ., Only one neuron was incorrectly identified for more than 5% of the time volumes ( Fig 6B ) ., The locations of neurons and the corresponding error rates are shown in Fig 6B ., Neurons that were detected by the algorithm but not annotated manually are shown in gray ., Upon further inspection , it was noted that some of the mismatches between our method and the manual annotation were due to human errors in the manual annotation , meaning the algorithm is able to correct humans on some occasions ., GCaMP6s fluorescent intensity is ultimately the measurement of interest and this can be easily extracted from the tracks of the neuron locations across time ., The pixels within an approximate 2 μm radius sphere around each point are used to calculate the average fluorescent intensity of a neuron in both the red RFP and green GCaMP6s channels at each time ., This encompasses regions of the cell body , but excludes the neuron’s processes ., The pixels within this sphere of interest are identified in the straightened RFP volume , but the intensity values are found by looking-up corresponding pixels in the unstraightened coordinate system in the original red- and green-channel images , respectively ., We use the calcium-insensitive RFP signal to account for noise sources common to both the GCaMP6s and the RFP channel 10 ., These include , for example , apparent changes in intensity due to focus , motion blur , changes in local fluorophore density arising from brain deformation and apparent changes in intensity due to inhomogeneities in substrate material ., We measure neural activity as a fold change over baseline of the ratio of GCaMP6s to RFP intensity 
: Activity = ΔR/R0 = (R - R0)/R0, where R = I_GCaMP6s / I_RFP. (3) The baseline for each neuron, R0, is defined as the 20th percentile value of the ratio R for that neuron. Fig 7 shows calcium imaging traces extracted from new whole-brain recordings using the registration vector pipeline. 156 neurons were tracked for approximately 8 minutes as the worm moves. Many neurons show clear correlation with reversal behaviors in the worm. The Neuron Registration Vector Encoding method presented here is able to process longer recordings and locate more neurons with less human input than previous examples of whole-brain imaging in freely moving C. elegans 10. Fully automated image processing means that we are no longer limited by the human labor required for manual annotation. In the new recordings presented here, we are able to observe 156 of the expected 181 neurons, many more than the approximately 80 observed in previous work from our lab and others 10, 11. Automating tracking and segmentation relieves one of the major bottlenecks to analyzing longer recordings. The neuron registration vector encoding algorithm primarily relies on the local coherence of the motion of the neurons. It permits large deformations of the worm's centerline so long as deformations around the centerline remain modest. Crucially, the algorithm's time-independent approach allows it to tolerate large motion between consecutive time-volumes. These properties make it well suited for our neural recordings of C.
elegans and we suspect that our approach would be applicable to tracking neurons in moving and deforming brains from other organisms as well ., Certain classes of recordings , however , would not be well suited for Neuron Registration Vector Encoding and Clustering ., The approach will fail when the local coherence of neuron motion breaks down ., For example , if one neuron were to completely swap locations with another neuron relative to its surroundings , registration would not detect the switch and our method would fail ., In this case a time-dependent tracking approach may perform better ., In addition , proper clustering of the feature vectors requires the animal’s brain to explore a contiguous region of deformation space ., | Introduction, Results, Discussion, Methods | Advances in optical neuroimaging techniques now allow neural activity to be recorded with cellular resolution in awake and behaving animals ., Brain motion in these recordings pose a unique challenge ., The location of individual neurons must be tracked in 3D over time to accurately extract single neuron activity traces ., Recordings from small invertebrates like C . elegans are especially challenging because they undergo very large brain motion and deformation during animal movement ., Here we present an automated computer vision pipeline to reliably track populations of neurons with single neuron resolution in the brain of a freely moving C . 
elegans undergoing large motion and deformation ., 3D volumetric fluorescent images of the animal’s brain are straightened , aligned and registered , and the locations of neurons in the images are found via segmentation ., Each neuron is then assigned an identity using a new time-independent machine-learning approach we call Neuron Registration Vector Encoding ., In this approach , non-rigid point-set registration is used to match each segmented neuron in each volume with a set of reference volumes taken from throughout the recording ., The way each neuron matches with the references defines a feature vector which is clustered to assign an identity to each neuron in each volume ., Finally , thin-plate spline interpolation is used to correct errors in segmentation and check consistency of assigned identities ., The Neuron Registration Vector Encoding approach proposed here is uniquely well suited for tracking neurons in brains undergoing large deformations ., When applied to whole-brain calcium imaging recordings in freely moving C . elegans , this analysis pipeline located 156 neurons for the duration of an 8 minute recording and consistently found more neurons more quickly than manual or semi-automated approaches . | Computer algorithms for identifying and tracking neurons in images of a brain have struggled to keep pace with rapid advances in neuroimaging ., In small transparent organism like the nematode C . elegans , it is now possible to record neural activity from all of the neurons in the animal’s head with single-cell resolution as it crawls ., A critical challenge is to identify and track each individual neuron as the brain moves and bends ., Previous methods required large amounts of manual human annotation ., In this work , we present a fully automated algorithm for neuron segmentation and tracking in freely behaving C . 
elegans ., Our approach uses non-rigid point-set registration to construct feature vectors describing the location of each neuron relative to other neurons and other volumes in the recording ., Then we cluster feature vectors in a time-independent fashion to track neurons through time ., This new approach works very well when compared to a human . | fluorescence imaging, invertebrates, classical mechanics, caenorhabditis, neuroscience, animals, animal models, vector construction, caenorhabditis elegans, model organisms, microscopy, experimental organism systems, damage mechanics, dna construction, molecular biology techniques, neuroimaging, research and analysis methods, imaging techniques, animal cells, deformation, molecular biology, molecular biology assays and analysis techniques, physics, gene expression and vector techniques, calcium imaging, cellular neuroscience, transmission electron microscopy, dark field imaging, cell biology, neurons, electron microscopy, nematoda, biology and life sciences, cellular types, physical sciences, organisms | null |
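As a toy sketch of the feature-vector construction at the heart of the approach described above, each neuron's matches against the K reference constellations can be encoded as a concatenated one-hot vector. The function name, data layout, and the -1 "no match" convention below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def registration_vector(matches, ref_sizes):
    """Build a Neuron Registration Vector for one sample neuron.

    matches[k]: index of the neuron in reference constellation k that this
                sample neuron was paired with by non-rigid registration,
                or -1 if it was left unmatched in that reference
    ref_sizes[k]: number of neurons in reference constellation k
    """
    parts = []
    for m, n in zip(matches, ref_sizes):
        one_hot = np.zeros(n)
        if m >= 0:
            one_hot[m] = 1.0
        parts.append(one_hot)
    # concatenate the K one-hot blocks into a single feature vector
    return np.concatenate(parts)
```

Vectors built this way from the same physical neuron at different times agree on most blocks even when some individual registrations fail, so they land close together under correlation distance and can be grouped by clustering.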
1311 | journal.pcbi.1002186 | 2011 | Local Orientation and the Evolution of Foraging: Changes in Decision Making Can Eliminate Evolutionary Trade-offs | The evolution of behavior is to a large extent the evolution of information processing 1–4. On short timescales individuals respond to local information in the environment. For instance in foraging, a basic local information processing is that animals detect food, turn and move to food, and eat. On the long term this generates behavioral patterns. The latter shapes how individual behavior relates to patterns in the environment (e.g. resource distributions) and affects aspects of Darwinian fitness (e.g. foraging success). At present it is poorly known how local information processing mechanisms (e.g. cognition) determine larger scale pattern detection and evolve 3, 5–8. Here we study the evolution of local information processing and orientation to the environment, and its relation to environmental pattern detection. In evolutionary theory on foraging, the focus is often on how well individuals match (fitness relevant) patterns in the environment. In optimal search theory (OST) the main focus has been on what kinds of random turning strategies optimize search 9–11. A second focus has been on the value of alternating between intensive searching, once a food patch is found, to extensive search when food has not been found for a while, using combinations of correlated random walks differing in turning rates 12. Simulations show that such switching between search strategies can enhance foraging efficiency because it concentrates search effort in the right places (i.e.
it allows patches to be “detected”), so-called area-concentrated search. This is true for models in which “continuous” patchy environments are assumed 12, 13, where resource items are only locally detectable but aggregated on a scale beyond the perception of individuals, as opposed to models in which discrete and fully detectable patches are assumed (e.g. the marginal value theorem 14). Random-walk models have been used to statistically characterize animal movement trajectories, including bi-modal search patterns similar to area-concentrated search 15, 16. However, such model fitting does not necessarily reveal the underlying movement mechanisms 6, 17. Interaction with, and orientation to, the external environment can generate movement patterns similar to those generated by internal turning strategies 6, 17, 18. Moreover, Benhamou showed that local orientation via memory of where an individual last found a food item can further improve foraging efficiency relative to “random” area-restricted search without such memory 19, indicating the adaptive value of reacting to external cues. However, like the random-walk search models, an important assumption is that food is detected and consumed at the same range. Instead, if food can be detected beyond the range at which it can be eaten (as is often the case), an animal will be able to approach foraging opportunities from some distance via direct visual cues. This is probably one of the simplest ways through which animals can orient themselves relative to food. Importantly, such opportunity-based adaptation (or responsiveness) stands in direct relation to feeding opportunities in the environment. Therefore, on longer timescales, behavioral patterns emerge that are “a reflection of complexity in the environment” 20. To conceptualize how interaction of individuals with the environment can structure behavior, Hogeweg and Hesper 21 coined the TODO principle
. This envisages behavior as multi-scale information processing 22, 23 (see Figure 1): (i) TODO: individuals behaviorally adapt to local opportunities by “doing what there is to do”, and (ii) pattern formation and detection: behavioral patterns self-organize on larger spatio-temporal scales through the continual feedback between behavior and local environmental contexts. (This use of the term “information processing” differs from that in behavioral ecology, where it generally refers only to individual-level behavioral flexibility, often specifically in relation to energy-dependent behavioral choices.) A simplistic example of TODO is that as food density declines, individuals end up moving more and eating less, because there is no opportunity to eat. As such, the environment is like a “behavioral template” to which individuals can respond, allowing them to effectively “detect” patterns of opportunities in the environment beyond their own perception. Fitting models to movement data in order to elucidate underlying mechanisms requires a thorough understanding of how both internal and external structuring of behavior can generate foraging patterns. This can be done using pattern-oriented modeling 24 and other multi-level modeling approaches 25, where model fits are evaluated based on patterns on multiple levels: small-scale movement decisions, mesoscale patterns such as trajectories and space use, and more global patterns such as population distributions. The requirement of fitting models to multiple levels places the focus on the mechanisms that generate the inter-relation between small-scale processes and patterns on larger scales. A thorough understanding of how small-scale behavioral interactions generate behavioral patterns through TODO could be an important contribution to such modeling approaches. Essentially, TODO and the longer-term behavioral patterns it generates come to expression (in models) when individuals
interact with the environment and need to make behavioral decisions based on local information ., In this light , Hogeweg 26 showed that foragers with simple TODO rules could forage much more efficiently than those with much more complicated rules ., This was because foragers with simple rules could react to local opportunities and therefore automatically adapt to larger-scale patterns in the environment ( i . e . generalize their behavior ) ., More counter-intuitive and complex behavioral patterns emerge in models with more detailed environmental structure and multiple types of behavior ., Examples include “self-structuring” explanations for social dynamics in bumblebee colonies 27 , grouping patterns in chimpanzees 28 , diet learning and cultural inheritance in group foragers 29 , 30 ., At present , the role of pattern recognition through TODO is most likely underestimated in most approaches to the evolution of foraging behavior ., For instance in OST the simple orientation mechanism of turning and moving to food is generally not included ., Moreover , behavior is usually assumed to be continuous in that movement , search and food consumption occur in parallel ( although a trade-off between movement speed and search accuracy is often assumed 12 ) ., Decision-making is therefore restricted to changes in direction ., However , if movement , scanning for food and eating are at least partially mutually exclusive , then individuals must decide about what to do next ( e . g . 
search again at a certain location , or move on ) ., Such foraging behavior can be referred to as pause-travel 31 , or intermittent search 7 ., Here we focus on local orientation towards food in such a setting where individuals must make decisions , and study the role of TODO in the evolution of simple foraging behavior ., We ask: how does local information processing evolve in order to determine how individuals “do what there is to do” ?, More specifically , how does the responsiveness and orientation of individuals to feeding opportunities in the environment evolve in light of the larger spatio-temporal pattern recognition that this generates ?, To address this question , we study the evolution of foraging behavior in a model with individuals that have to choose amongst alternative behavioral actions according to information they obtain through searching ., This happens in a spatial environment with patchy and uniform patterns of feeding opportunities ., To address how local information processing ( sensing and decision making ) affects information processing on larger spatio-temporal scales ( pattern recognition and genetic adaptation , see Figure 1 ) , we compare the evolution of decision making and properties of behavioral actions in two model variants ., In a “restricted” model we limit information individuals can remember and use relative to an “extended” model ., The comparison across environments is used to understand evolutionary adaptation to prevailing ecological conditions ( patchy or uniform ) ., The comparison across models ( restricted versus extended ) is used to understand how differences in the evolutionary freedom ( or constraints ) for evolving decision making affect evolution ., This has similarities to artificial neural network approaches to the evolution of behavior , where behavior is not predefined , but emerges from neural architecture and learning processes 32–35 ., Such models have been used to show , for instance , that risk-averse 
foraging can emerge as a side-effect of an evolved reinforcement learning process 33 ., In our case there is no learning , but the “architecture” of decision making can evolve such that non-predefined behavior can evolve ., Therefore we do not prespecify a selection function , but only define that inter-birth intervals decrease with increased food intake , and allow natural selection to arise from competition in a world with finite resources ., We then study how Darwinian fitness arises as an emergent property of how micro-scale interactions generate longer-term behavioral patterns ., Thus , we study evolution as the interplay of information processing on multiple timescales ( Figure 1 ) , based on bioinformatic ( processes ) theory 22 , 23 , 36–38 ., Using this approach , we show that local information processing and opportunity-based adaptation can play a significant role in detecting patterns of resources in the environment , and the evolution of foraging ., In particular , we find that the differences in decision making capabilities affect how individuals interact with the environment ( TODO ) , and this can alleviate evolutionary trade-offs and allows for novel pattern recognition specializations ., Our model incorporates, ( i ) individual foragers and, ( ii ) a 2-dimensional environment with resource items in either a patchy or uniform distribution , adapted from van der Post and Hogeweg 29 ., Individuals have a decision making algorithm which determines the sequence and context dependency of the following behavioral actions: MOVE , FOODSCAN , MOVETOFOOD and EAT ., Each of these behavioral actions has specific properties ( such as distances , angles etc ) ., Our model is event-based , which means that actions take time ., When individuals complete an action they choose a new one ., The individual with the shortest time to complete its action is next to choose a new action ., We study two model variants ( “restricted” and “extended” ) which differ in the type 
of decision making algorithm that can evolve. Both the parameters of the decision making algorithm and the details of behavior are “genes” which change through mutation. This generates genetic variation, which may result in differences in foraging efficiency and rates of reproduction. Natural selection then arises from resource competition. For a full list of model parameters please see Tables 1 and 2. Next we discuss the model in more detail. Our environment is a 5660 by 5660 lattice, where grid points are scaled to be 1 meter apart, giving 32,035,600 grid points (32.035 square kilometers). This size was chosen to support a population of about 100–150 individuals. This was the minimal population for which: (i) parameters evolved, (ii) the population is self-sustaining, and (iii) simulations are completed in a reasonable time span. It also ensures that individuals need to move through space to find food, survive and reproduce. Resource items were placed on grid points. Resource items appeared at fixed but randomly assigned time points within a year, and remained there until eaten. If eaten, the resource item was depleted, and appeared again at its fixed time point in the year. Days are 720 minutes (12 hours of “daylight”) and years are 365 days (262800 minutes). We implement a patchy and a uniform environment, keeping the total number of food items constant and varying only the resource distribution. In the patchy environment we placed 8000 patches, each with about 2500 items depending on the overlap of randomly positioned patches. Each patch is a circle with a radius of 20 meters. Within this circle, 2 resource items are placed at each grid point. All resource items in a patch appear at the same time point, and different patches appear at random fixed times in the year. In the uniform environment resources are placed with probability 0.
535 per grid location to match the total number of resources placed in the patchy environment ( 17150000 items ) ., In the uniform environment , resource items appear at randomly assigned fixed times throughout the year ., The restricted and extended model differ in the decision making that can evolve ., Figures 1a and b show the basic decision making algorithms: the behavioral actions that are possible ( ovals ) and in the case of FOODSCAN , the information this provides ( rectangles ) ., Arrows indicate what can be done next , or what information is obtained ( after FOODSCAN ) , and an individuals last action ( + information obtained ) represents its “state” ( or memory ) ., EAT and MOVETOFOOD can only occur after food is detected ., EAT occurs when food is detected in range , otherwise individuals first MOVETOFOOD ( MTF ) and then EAT ., Without any information about food , individuals can either MOVE or do FOODSCAN ., As a starting condition , we set these to alternate so that individuals always do FOODSCAN after MOVE and vice versa ., To allow decision making to evolve we define parameters which determine the probability of moving again after MOVE ( ) and scanning again after FOODSCAN ( ) ( Figure 2a ) , searching again after EAT ( ) , or searching again after NO FOOD ( ) ( Figure 2b , see also Table 2 ) ., This is indicated by decision points ( black diamonds ) after MOVE , NO FOOD and EAT , where arrows split ., For each of these probabilities , the alternative decision has a probability of ., For the restricted model we only allow and to evolve , where is a general probability to do FOODSCAN again , irrespective of whether individuals have eaten or did not find food ( Figure 2a ) ., Thus in the restricted model , the probability to do FOODSCAN again after EAT or after NO FOOD , is determined by the same parameter ( ) ., For the extended model we allow , , and to evolve ( Figure 2b ) , where , and can be seen as context dependent forms of ., In the extended 
model , the probability to do FOODSCAN again after EAT or after NO FOOD , can therefore evolve independently ., Thus , in the restricted model individuals cannot remember and make use of the additional information “just ate” or “didn’t find food” to determine the probability to do FOODSCAN again , while in the extended model they can ., Moreover , in the restricted model , we assumed individuals always MOVETOFOOD when food is out of reach ., In the extended model we allowed this probability ( ) to evolve , and it always evolved to ( see section 2 in Text S1 and Figure S1 ) ., The parameters of specific behavioral actions determine how individuals move and sense their environment ( see Figure 2c ) ., Unless stated otherwise , we allow all these parameters to evolve: where is the area scanned ( ) , and where 1 second of scanning for 1 gives ., The closest detected item is chosen for consumption ., If there are multiple items equally close , a random closest item is chosen ., This scanning algorithm therefore represents the case where individuals eat the first item they find ., Note also that we assume that MOVE and FOODSCAN cannot occur at the same time , and thus we focus pause-travel foraging 31 or “intermittent search” behavior 7 ., Individuals gain energy through food ( energy units per item ) which is added to their energy store ( with a maximum: ) ., To survive , individuals must have energy ( ) , which means energy intake must compensate basal metabolism ( , which is subtracted from every minute ) ., Because resources become locally depleted individuals must move to eat ., We do not add explicit movement costs , but time spent moving cannot be spent eating ., Individuals reproduce when ., Energy is then halved and the other half goes to a single offspring ., The time taken to get back to defines a birth interval ., Individuals with shorter birth intervals achieve greater lifetime reproductive success ., Individuals can die with a probability of 0 . 
1 per year , and can reach a maximum age of 10 years ., This adds some stochasticity in survival and limits lifespans to 10 years ., Since resources are limited in the environment , the population grows until reproduction is at replacement rate ( carrying capacity ) ., Our model requires that the population is viable in relation to resource availability , thus energy and life-history parameters are chosen such that at low population sizes individuals can reliably gain sufficient energy to reproduce ., Moreover , to focus on movement and foraging in differently patterned environments , we set the energy required to give birth in relation to energy per food time , and the density of food items in space , such that individuals have to move to and forage from many food patches and experience the full scale of environmental patterns during a reproductive cycle ( i . e . they cannot complete reproductive cycles within a single patch ) ., Lifespan is set to allow multiple reproductive events per individual ., We expect most parameter combinations that satisfy these qualitative relationships ( see section 1 in Text S1 for more detail ) to give similar results ., When individuals reproduce , the parameters of decision making and behavioral actions are inherited by offspring , with a probability of mutation of 0 . 05 per gene ( this rate of mutation was chosen after observing that natural selection led to consistent evolutionary change with increases in foraging efficiency ) ., We allow all action durations , distances and angles to evolve except and ., The mutation “step” is defined by drawing the parameter value from a normal distribution with the mother’s parameter value as mean and standard deviation scaled to about 20% of the range of values that is relevant for that parameter ( see Table 2 ) ., Moreover , in order to keep simulations running fast enough , we limited the minimal action duration to seconds ., Most mutations are close to the mother’s parameter value , but larger jumps are possible ., This was chosen to make evolution of parameters possible without predefining their ranges ., We cannot predict what parameter settings are viable and take a “zero” state ( all parameters zero ) as initial condition ., To make sure the population does not die out initially , we use a birth algorithm in which the non-viable population is maintained at a minimum of 10 individuals , and let it evolve to a viable state ., During this time , if the population drops below this minimum then an individual is chosen to reproduce according to a probability ( ) relative to its energy ( ) : ( 2 ) Energy costs of reproduction and energy of offspring are the same as before ., Once the population grows above 10 individuals and becomes viable , this algorithm is not used anymore ., At this point the population grows to carrying capacity and becomes stable ., For our study we used the following types of simulations: , We find that in both models the population evolves to environment specific attractors ., We refer to these evolved states as “specialists”: uniform specialists in the uniform environment , and patch specialists in the patchy environment ., These four specialists differ from each other and these differences depend on the following parameters: ( i ) probabilities to SEARCH again ( , ,
) , ( ii ) probability to MOVE again ( ) , ( iii ) MOVE distance ( ) , ( iv ) turning angle ( ) , and ( v ) FOODSCAN angle ( ) ( see Figure 3 ) ., For ease of reference we name the specialists and summarize their distinguishing features as follows ( illustrated in Figure 4 ) ., Parameter values shown are means of ancestor traces between year 800 and 900 ( see also Table S1 ) : Further analysis revealed that variation of both probability to repeat move ( ) and turning angles ( ) did not impact food intake significantly ., For both parameters we found that evolved values result from evolutionary drift because of a very flat adaptive landscape ( for more detail see Text S1 section 2 and Figure S1 and Text S1 section 4 and Figure S3 and S4 ) ., Moreover , other parameters did not differ between specialists: durations evolved to minimal values ( see section 2 in Text S1 and Figure S1 ) and food scan range ( ) converged to between 2–2 . 5 meters ( see sections 2 and 3 in Text S1 and Figure S2 ) ., From here on we focus on those parameters that generated differences in foraging efficiency between the specialists , namely: , , , and ., We use the means of evolved parameter values to characterize each specialist ( see Table S1 for a complete list of average evolved parameter values ) ., The values of the evolved decision making parameters mean that in the extended model decision making evolves to: always do FOODSCAN after EAT , always MOVE after NO FOOD ( and , Figure 4c and d ) ., This generates a clear differentiation of behavior in food and non-food contexts ( Figure 4c and d , blue and yellow loops respectively ) ., Thus in a food context individuals continue to do FOODSCAN until they no longer find food ( blue loop ) ., This generates efficient FOODSCAN - EAT - FOODSCAN - EAT sequences and allows systematic depletion of resources at a given location ., During this time any movement is via MOVETOFOOD when food is out of range , always towards food ., Only when no
more food is found do individuals MOVE ., Thus in a “no food” context , individuals switch behavior and no longer repeat FOODSCAN ( yellow loop ) ., In the restricted model only the patch specialist ( R-Patchy ) has a certain degree of repeated scanning for food ( , Figure 4a ) ., However , this happens equally after EAT and NO FOOD , because differentiating behavior relative to FOOD and NO FOOD is not possible ., This specialist can therefore avoid MOVE in the presence of food only to a certain extent , and is more limited in generating time-efficient FOODSCAN-EAT sequences and in using MOVETOFOOD only when food is beyond REACH ., In contrast , the uniform specialist ( R-Uni ) of the restricted model never repeats FOODSCAN ( Figure 4b ) ., It only searches once per location and generates MOVE - FOODSCAN - EAT or MOVE - FOODSCAN - MOVETOFOOD - EAT sequences ., For behavioral actions the most obvious difference between the specialists is that between the patch specialists of the different models ( illustrated in Figure 4a and c ) ., R-Patchy’s maximum FOODSCAN angle in combination with its short move distance leads to a behavioral pattern with a large overlap in areas searched after each MOVE ., In contrast , Ext-Patchy’s smaller FOODSCAN angle with long move distance generates a pattern with long distances in which it does not scan , followed by food directed movement when food is detected ., The difference between the uniform specialists is more subtle ( Figure 4b and d ) ., The shorter MOVE of R-Uni leads to considerable overlap in areas scanned after each MOVE ., Ext-Uni’s longer MOVE leads to hardly any overlap in areas scanned after each MOVE ., To qualitatively reveal larger-scale behavioral patterns , we visualize the movement trajectories of all evolved specialists in both environments using ecological simulations ( Figure 5 ) ., Most striking is that it is difficult to distinguish between the specialists in the same environment , because they all adapt flexibly to
both environments , whether they evolved there or not ., This is because all specialists are responsive to opportunities in the environment , and have the same basic TODO ( “do what there is to do” ) : move when there is no food , turn and move to food when out of reach , and stop to eat ., In the uniform environment this generates random-walk-like patterns reflecting the random encounters with food ., In the patchy environment TODO generates a bi-modal pattern of straight movements between patches and frequent turning and remaining localized for some time within patches ., Thus irrespective of genetic adaptations , through ( automatic ) opportunity-based adaptation all specialists are able to generalize their behavior to an environment in which they did not evolve ., The large-scale behavioral patterns of individuals reflect patterns of feeding opportunities in the environment ( patchy or uniform ) ., The more accurate this reflection , the better individuals “detect” resource patterns , and this affects their foraging success ., An individual’s genotype determines how it responds to opportunities in the environment , and we find that the genetic adaptations of specialists increase their foraging success relative to the environment they evolved in ( Figure 6 ) ., Overall , differences in food intake rates of evolved specialists , as measured in ecological simulations , are as follows: ( Figure 6a ) ., ( Figure 6b ) ., where represents a minor difference , and a large difference ., In both environments , specialists from the extended model are the most successful foragers ., Interestingly , Ext-Uni is not only the best forager in the uniform environment , but the second best in the patchy environment ., In the uniform environment , Ext-Uni has about 9% greater food intake than R-Uni ( this difference is significant: Wilcoxon rank sum test , , . For Ext-Uni: ; ; .
For R-Uni: ; ; ) ., In the patchy environment , Ext-Uni has on average about 11% lower food intake than Ext-Patchy ( this difference is significant: Wilcoxon rank sum test , , . For Ext-Uni: ; ; . For R-Patchy: ; ; ) ., However , Ext-Uni has nearly 2 times greater food intake than R-Patchy , even though it did not evolve in the patchy environment ( unlike R-Patchy ) ., In contrast , Ext-Patchy is the least successful forager in the uniform environment , although average food intake is only about 3% lower than R-Patchy ( but this difference is significant: Wilcoxon rank sum test , , . For Ext-Patchy: ; ; . For R-Patchy: ; ; ) ., Overall , differences in the patchy environment are greater ( 2 fold versus a 1 . 5 fold maximum difference in the uniform environment ) , indicating more room for specialization ., To understand these results we look in detail at how changes in decision making and behavioral actions affect food intake ., The difference in decision making capabilities of the two models has a profound effect on the evolutionary landscape ., This is most clear in the patchy environment , where the enhanced information use in the extended model allows a trade-off on within- and between-patch behavior to be eliminated ., Therefore , while we find that evolved parameters in both patch specialists reflect a tendency to maximize food intake by ( i ) trying to stay in patches , and ( ii ) minimizing inter-patch travel , how this is achieved depends on how the underlying decision making capabilities shape the evolutionary landscape ., This is most clearly illustrated with a local adaptive landscape characterization around the evolutionary attractors relative to the probability to search again ( and ) and move distance ( ) ., We consider how parameters affect yearly food intake ( “fitness” ) , and how this depends on inter-patch travel , patch visit time ( i . e .
how much they manage to eat in a patch ) and size of patches visited ( Figure 7 ) ., The comparison between the extended model ( top ) and the restricted model ( bottom ) reveals a significant shift in the location of the adaptive peak ( Figure 7a top and bottom , yellow zone ) , which coincides with evolved parameter values ( indicated by black circles ) ., In the restricted model we can understand the location of the adaptive peak ( and evolved parameters ) in terms of a trade-off between inter-patch travel rate , and patch visit times ., As one increases , the other declines ( compare Figure 7b and c bottom row ) ., This is because in order to stay in patches ( and find food ) , individuals need short move distances and repeated food scans , otherwise they prematurely leave the patch ., However , this slows down inter-patch travel with redundant search ., The evolutionary attractor is therefore located where interpatch-travel time and intrapatch-travel time are such that food intake is maximized ( Figure 7a , bottom ) ., As a result R-Patchy has the slowest inter-patch travel of all specialists ( see section 5 in Text S1 and Figure S5 ) ., Moreover , this is also why R-Patchy has such a large food scan angle , because this allows it to “turn back” when it inadvertently leaves a patch ( see section 3 in Text S1 and Figure S2 ) , and why it does not evolve repeated moving ( see section 4 in Text S1 and Figure S3 ) ., In the extended model this trade-off does not arise ., Here decision making allows differentiation of behavior: food scanning is only repeated after eating and does not occur during inter-patch travel ( no food encountered ) ., Repeated food scanning can therefore evolve to maximal values , which allows individuals to move systematically from one food item to the next within patches via MOVETOFOOD ., This leads to longer patch visit times ( Figure 7c top ) and enhanced patch depletion ., Unlike in the restricted model , MOVE is now used purely for 
inter-patch travel ., Move distance ( ) is then freed from the trade-off between inter- and intra-patch travel because it no longer affects patch visit times ., The enhanced decision making in the extended model therefore eliminates the trade-off , allowing both extended model specialists to be more efficient than R-Patchy ., As a consequence of the trade-off disappearing , move distance evolves to much longer distances ( Figure 3c ) because this allows individuals to bias foraging to larger patches ( Figure 7d top ) ., ( Note that while we implement patches of a fixed size , partial depletion of patches generates smaller patches . ) , In fact there are two feedbacks which cause individuals to bias their patch visits toward larger patches: ( i ) by extending patch visiting times , an individual on average spends longer in larger patches , and ( ii ) by reduced scanning for food while moving during inter-patch travel ( i . e . due to longer move distances ) individuals are less sensitive to each food item on their way ., Thus they are more likely to find food and stop moving when local resource densities are higher ., Effectively this allows individuals to “select” larger patches ., Therefore , for the same time spent traveling , Ext-Patchy manages to find on average larger patches and eat more than Ext-Uni ( see section 5 in Text S1 and Figure S5 for more detail ) ., Long move distances also generate more neutrality for repeated move and turning angles , allowing them to evolve ( see section 4 in Text S1 and Figure S3 and S4 ) ., For the uniform specialists we also find a difference between the extended and restricted model ., Both specialists tend to maximize food intake by ( i ) not wasting time searching depleted areas , and ( ii ) not moving too far and skipping food items on the way ., However , in the extended model food intake peaks at maximal repeated search after finding food , while in the restricted model food intake peaks at minimal repeated search and
slightly shorter move distances ( Figure 8a , top and bottom respectively ) ., In both cases , local depletion of food causes that individuals who m | Introduction, Materials and Methods, Results, Discussion | Information processing is a major aspect of the evolution of animal behavior ., In foraging , responsiveness to local feeding opportunities can generate patterns of behavior which reflect or “recognize patterns” in the environment beyond the perception of individuals ., Theory on the evolution of behavior generally neglects such opportunity-based adaptation ., Using a spatial individual-based model we study the role of opportunity-based adaptation in the evolution of foraging , and how it depends on local decision making ., We compare two model variants which differ in the individual decision making that can evolve ( restricted and extended model ) , and study the evolution of simple foraging behavior in environments where food is distributed either uniformly or in patches ., We find that opportunity-based adaptation and the pattern recognition it generates , plays an important role in foraging success , particularly in patchy environments where one of the main challenges is “staying in patches” ., In the restricted model this is achieved by genetic adaptation of move and search behavior , in light of a trade-off on within- and between-patch behavior ., In the extended model this trade-off does not arise because decision making capabilities allow for differentiated behavioral patterns ., As a consequence , it becomes possible for properties of movement to be specialized for detection of patches with more food , a larger scale information processing not present in the restricted model ., Our results show that changes in decision making abilities can alter what kinds of pattern recognition are possible , eliminate an evolutionary trade-off and change the adaptive landscape . 
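The restricted versus extended decision making described in the Methods above can be sketched as a small state machine . This is a minimal illustrative sketch , not the authors' code: the parameter names ( p_move , p_scan , p_scan_after_eat , p_scan_after_nofood ) are placeholders for the symbols elided in the text , and the state labels are hypothetical .

```python
import random

def next_action(state, p, extended):
    """Pick the next behavioral action from an individual's "state" (memory).

    `state` is the last action, or the information FOODSCAN returned.
    Parameter names in `p` are placeholders for the elided symbols:
    p_move, p_scan (restricted model), and the context-dependent
    p_scan_after_eat / p_scan_after_nofood (extended model).
    """
    if state == "FOOD_IN_RANGE":       # EAT can only follow detected food
        return "EAT"
    if state == "FOOD_OUT_OF_RANGE":   # approach first, then eat
        return "MOVETOFOOD"
    if state == "MOVE":                # decision point after MOVE
        return "MOVE" if random.random() < p["p_move"] else "FOODSCAN"
    if state in ("EAT", "NO_FOOD"):    # decision points after EAT / NO FOOD
        if extended:                   # context-dependent scan probabilities
            prob = (p["p_scan_after_eat"] if state == "EAT"
                    else p["p_scan_after_nofood"])
        else:                          # one shared scan-again probability
            prob = p["p_scan"]
        return "FOODSCAN" if random.random() < prob else "MOVE"
    raise ValueError(f"unknown state: {state}")
```

With the evolved extended-model values ( scan-again probability near 1 after EAT and near 0 after NO FOOD ) , this sketch reproduces the differentiated loops described in the Results: a FOODSCAN - EAT - FOODSCAN loop in a food context , and MOVE-based travel in a no-food context .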
| Animals differ in how they sense and process information obtained from the environment ., An important part of this information processing is used to find food ., In terms of foraging , local decision making determines how successful individuals are at finding food on longer timescales ., Using an artificial-world model , we studied different kinds of decision making to understand how local information processing affects larger scale behavioral patterns and their evolution ., We compared a restricted decision making ( less memory ) to extended decision making ( more memory ) ., We then compared the evolution of decision making and behavioral actions ( moving and scanning for food ) in patchy and uniform environments ., Our results show that with restricted decision making individuals face a trade-off in the patchy environment: they try to stay in patches by not moving forward too far , but to do so they sacrifice how fast they travel between patches ., With extended decision making this trade-off completely disappears because decision making allows moving forward to be avoided in patches ., Instead moving forward can be used exclusively for faster traveling between patches and for selecting bigger patches ., Our results show how changes in local decision making can significantly alter what evolutionary forces are faced and can eliminate evolutionary trade-offs . | theoretical biology, ecology, biology, computational biology, evolutionary biology | null |
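The inheritance-with-mutation step described in the Methods above could be sketched as follows . The 0 . 05 per-gene mutation rate and the ~20% -of-range standard deviation come from the text; the gene names and ranges are illustrative placeholders , and clamping values at zero is an added assumption .

```python
import random

MUTATION_RATE = 0.05  # per-gene mutation probability, from the Methods

def make_offspring(mother_genome, gene_ranges, rng=random):
    """Inherit the mother's parameters, mutating each gene with prob. 0.05.

    Mutated values are drawn from a normal distribution centered on the
    mother's value, with standard deviation scaled to ~20% of the range of
    values relevant for that gene. Gene names and ranges are illustrative.
    """
    offspring = {}
    for gene, value in mother_genome.items():
        if rng.random() < MUTATION_RATE:
            value = rng.gauss(value, 0.2 * gene_ranges[gene])
        offspring[gene] = max(0.0, value)  # assumption: non-negative params
    return offspring
```

Because most draws fall near the mother's value but larger jumps are possible , this scheme lets parameters evolve without their ranges being predefined , as noted in the Methods .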
2,242 | journal.pcbi.1006111 | 2,018 | Computational mechanisms underlying cortical responses to the affordance properties of visual scenes | Recent advances in the use of deep neural networks for computer vision have yielded image computable models that exhibit human-level performance on scene- and object-classification tasks 1–4 ., The units in these networks often exhibit response profiles that are predictive of neural activity in mammalian visual cortex 5–11 , suggesting that they might be profitably used to investigate the computational algorithms that underlie biological vision 12–16 ., However , many of the internal operations of these models remain mysterious , and the fundamental theoretical principles that account for their predictive accuracy are not well understood 16–18 ., This presents an important challenge to the field: if deep neural networks are to fulfill their potential as a method for investigating visual perception in living organisms , it will first be necessary to develop techniques for using these networks to provide computational insights into neurobiological systems ., It is this issue—the use of deep neural networks for gaining insights into the computational processes of biological vision—that we address here ., We focus in particular on the mechanisms underlying natural scene perception ., A central aspect of scene perception is the identification of the navigational affordances of the local environment—where one can move to ( e . g . 
, a doorway or an unobstructed path ) , and where one's movement is blocked ., In a recent fMRI study , we showed that the navigational-affordance structure of scenes could be decoded from multivoxel response patterns in scene-selective visual areas 19 ., The strongest results were found in a region of the dorsal occipital lobe known as the occipital place area ( OPA ) , which is one of three patches of high-level visual cortex that respond strongly and preferentially to images of spatial scenes 20–24 ., These results demonstrated that the OPA encodes affordance-related visual features ., However , they did not address the crucial question of how these features might be computed from sensory inputs ., There was one aspect of the previous study that provided a clue as to how affordance representations might be constructed: affordance information was present in the OPA even though participants performed tasks that made no explicit reference to this information ., For example , in one experiment , participants were simply asked to report the colors of dots overlaid on the scene , and in another experiment , they were asked to perform a category-recognition task ., Despite the fact that these tasks did not require the participants to think about the spatial layout of the scene or plan a route through it , it was possible to decode navigational affordances in the OPA in both cases ., This suggested to us that affordances might be rapidly and automatically extracted through a set of purely feedforward computations ., In the current study we tested this idea by examining a biologically inspired CNN with a feedforward architecture that was previously trained for scene classification 3 ., This CNN implements a hierarchy of linear-nonlinear operations that give rise to increasingly complex feature representations , and previous work has shown that its internal representations can be used to predict neural responses to natural scene images 25 , 26 ., It has also been shown
that the higher layers of this CNN can be used to decode the coarse spatial properties of scenes , such as their overall size 25 ., By examining this CNN , we aimed to demonstrate that affordance information could be extracted by a feedforward system , and to better understand how this information might be computed ., To preview our results , we find that the CNN contains information about fine-grained spatial features that could be used to map out the navigational pathways within a scene; moreover , these features are highly predictive of affordance-related fMRI responses in the OPA ., These findings demonstrate that the CNN can serve as a candidate , image-computable model of navigational-affordance coding in the human visual system ., Using this quantitative model , we then develop a set of techniques that provide insights into the computational operations that give rise to affordance-related representations ., These analyses reveal a set of stimulus input features that are critical for predicting affordance-related cortical responses , and they suggest a set of high-level , complex features that may serve as a basis set for the population coding of navigational affordances ., By combining neuroimaging findings with a fully quantitative computational model , we were able to complement a theory of cortical representation with discoveries of its algorithmic implementation—thus providing insights at multiple levels of understanding and moving us toward a more comprehensive functional description of visual cortex ., To test for the representation of navigational affordances in the human visual system , we examined fMRI responses to 50 images of indoor environments with clear navigational paths passing through the bottom of the scene ( Fig 1A ) ., Subjects viewed these images one at a time for 1 . 5 s each while maintaining central fixation and performing a category-recognition task that was unrelated to navigation ( i . e . 
, press a button when the viewed scene was a bathroom ) ., Details of the experimental paradigm and a complete analysis of the fMRI responses can be found in a previous report 19 ., In this section , we briefly recapitulate the aspects of the results that are most relevant to the subsequent computational analyses ., To measure the navigational affordances of these stimuli , we asked an independent group of subjects to indicate with a computer mouse the paths that they would take to walk through each environment starting from the bottom of the image ( Fig 1B ) ., From these responses , we created probabilistic maps of the navigational paths through each scene ., We then constructed histograms of these navigational probability measurements in one-degree angular bins over a range of directions radiating from the starting point of the paths ., These histograms approximate a probabilistic affordance map of potential navigational paths radiating from the perspective of the viewer 27 ., We then tested for the presence of affordance-related information in fMRI responses using representational similarity analysis ( RSA ) 28 ., In RSA , the information encoded in brain responses is compared with a cognitive or computational model through correlations of their representational dissimilarity matrices ( RDMs ) ., RDMs are constructed through pairwise comparisons of the model representations or brain responses for all stimulus classes ( in this case , the 50 images ) , and they serve as a summary measurement of the stimulus-class distinctions ., The correlation between any two RDMs reflects the degree to which they contain similar information about the stimuli ., We constructed an RDM for the navigational-affordance model through pairwise comparisons of the affordance histograms ( Fig 1C ) ., Neural RDMs were constructed for several regions of interest ( ROIs ) through pairwise comparisons of their multivoxel activation patterns for each image ., We focused our initial analyses 
on three ROIs that are known to be strongly involved in scene processing: the OPA , the parahippocampal place area ( PPA ) , and the retrosplenial complex ( RSC ) 20–24 ., All three of these regions respond more strongly to spatial scenes ( e . g . , images of landscapes , city streets , or rooms ) than other visual stimuli , such as objects and faces , and thus are good candidates for supporting representations of navigational affordances ., We also examined patterns in early visual cortex ( EVC ) ., Using RSA to compare the RDMs for these regions to the navigational-affordance RDM , we found evidence that affordance information is encoded in scene-selective visual cortex , most strongly in the dorsal scene-selective region known as the OPA ( Fig 1C ) ., These effects were not observed in lower-level EVC , suggesting that navigational affordances likely reflect mid-to-high-level visual features that require several computational stages along the cortical hierarchy ., In our previous report , a whole-brain searchlight analysis confirmed that the strongest cortical locus of affordance coding overlapped with the OPA 19 ., Interestingly , affordance coding in scene regions was observed even though participants performed a perceptual-semantic recognition task in which they were not explicitly asked about the navigational affordances of the scene—suggesting that affordance information is automatically elicited during scene perception ., Together , these results suggest that scene-selective visual cortex routinely encodes complex spatial features that can be used to map out the navigational affordances of the local visual scene ., These analyses provide functional insights into visual cortex at the level of representation—that is , the identification of sensory information encoded in cortical responses ., However , an equally important question for any theory of sensory cortical function is to understand how its representations can be computed at an algorithmic level 
12–16 ., Understanding the algorithms that give rise to high-level sensory representations requires a quantitative model that implements representational transformations from visual stimuli ., Thus , we next turn to the question of how affordance representations might be computed from sensory inputs ., Visual cortex implements a complex set of highly nonlinear transformations that remain poorly understood ., Attempts at modeling these transformations using hand-engineered algorithms have long fallen short of accurately predicting mid-to-high-level sensory representations 6 , 10 , 11 , 29–31 ., However , advances in the development of artificial deep neural networks have dramatically changed the outlook for the quantitative modeling of visual cortex ., In particular , recently developed deep CNNs for tasks such as image classification have been found to predict sensory responses throughout much of visual cortex at an unprecedented level of accuracy 5–11 ., The performance of these CNNs suggests that they hold the promise of providing fundamental insights into the computational algorithms of biological vision ., However , because their internal representations were not hand-engineered to test specific theoretical operations , they are challenging to interpret ., Indeed , most of the critical parameters in CNNs are set through supervised learning for the purpose of achieving accurate performance on computer vision tasks , meaning that the resulting features are unconstrained by a priori theoretical principles ., Furthermore , the complex transformations of these internal CNN units cannot be understood through a simple inspection of their learned parameters ., Thus , neural network models have the potential to be highly informative to sensory neuroscience , but a critical challenge for moving forward is the development of techniques to probe the factors that best account for similarities between cortical responses and the internal representations of the models ., Here 
we tested a deep CNN as a potential candidate model of affordance-related responses in scene-selective visual cortex ., Given the apparent automaticity of affordance-related responses , we hypothesized that they could be modeled through a set of purely feedforward computations performed on image inputs ., To test this idea , we examined a model that was previously trained to classify images into a set of scene categories 3 ., This feedforward model contains 5 convolutional layers followed by 3 fully connected layers , the last of which contains units corresponding to a set of scene category labels ( Fig 2A ) ., The architecture of the model is similar to the AlexNet model that initiated the recent surge of interest in CNNs for computer vision 2 ., Units in the convolutional layers of this model have local connectivity , giving rise to increasingly large spatial receptive fields from layers 1 through 5 ., The dense connectivity of the final three layers means that the selectivity of their units could depend on any spatial position in the image ., Each unit in the CNN implements a linear-nonlinear operation in which it computes a weighted linear sum of its inputs followed by a nonlinear activation function ( specifically , a rectified linear threshold ) ., The weights on the inputs for each unit define a type of filter , and each convolutional layer contains a set of filters that are replicated with the same set of weights over all parts of the image ( hence the term “convolution” ) ., There are two other nonlinear operations implemented by a subset of the convolutional layers: max-pooling , in which only the maximum activation in a local pool of units is passed to the next layer , and normalization , in which activations are adjusted through division by a factor that reflects the summed activity of multiple units at the same spatial position ., Together , this small set of functional operations along with a set of architectural constraints define an untrained model 
whose many other parameters can be set through gradient descent with backpropagation—producing a trained model that performs highly complex feats of visual classification ., We passed the images from the fMRI experiment through the CNN and constructed a set of RDMs using the final outputs from each layer ., We then used RSA to compare the representations of the CNN with:, ( i ) the RDM for the navigational-affordance model and, ( ii ) the RDM for fMRI responses in the OPA ., The RSA comparisons with the affordance model showed that the CNN contained affordance-related information , which arose gradually across the lower layers and peaked in layer 5 , the highest convolutional layer ( Fig 2B ) ., Note that this was the case despite the fact that the CNN was trained to classify scenes based on their categorical identity ( e . g . , kitchen ) , not their affordance structure ., Weak effects were observed in lower convolutional layers , consistent with the pattern of findings from the fMRI experiment , in which affordance representations were not evident in EVC , and they suggest that affordances reflect mid-to-high-level , rather than low-level , visual features ., The decrease in affordance-related information in the last three fully connected layers may result from the increasingly semantic nature of representations in these layers , which ultimately encode a set of scene-category labels that are likely unrelated to the affordance-related features of the scenes ., The RSA comparisons with OPA responses showed that the CNN provided a highly accurate model of representations in this brain region , with strong effects across all CNN layers and a peak correlation in layer 5 ( Fig 2B ) ., Indeed , several layers of the CNN reached the highest accuracy we could expect for any model , given the noise ceiling of the OPA , which was calculated from the variance across subjects ( r-value for OPA noise ceiling = 0 . 
30 ) ., Together , these findings demonstrate the feasibility of computing complex affordance-related features through a set of purely feedforward transformations , and they show that the CNN is a highly predictive model of OPA responses to natural images depicting such affordances ., The above findings demonstrate that the CNN is representationally similar to the navigational-affordance RDM and also similar to the OPA RDM , but they leave open the important question of whether the CNN captures the same variance in the OPA as the navigational-affordance RDM ., In other words , can the CNN serve as a computational model for affordance-related responses in the OPA ?, To address this question , we combined the RSA approach with commonality analysis 32 , a variance partitioning technique in which the explained variance of a multiple regression model is divided into the unique and shared variance contributed by all of its predictors ., In this case , multiple regression RSA was used to construct an encoding model of OPA representations ., Thus , the OPA was the predictand and the affordance and CNN models were predictors ., Our goal was to identify the portion of the shared variance between the affordance RDM and OPA RDM that could be accounted for by the CNN RDM ( Fig 3A ) ., This analysis showed that the CNN could explain a substantial portion of the representational similarity between the navigational-affordance model and the OPA ., In particular , over half of the explained variance of the navigational-affordance RDM could be accounted for by layer 5 of the CNN ( Fig 3B ) ., This suggests that the CNN can serve as a candidate , quantitative model of affordance-related responses in the OPA ., One of the most important aspects of the CNN as a candidate model of affordance-related cortical responses is that it is image computable , meaning that its representations can be calculated for any input image ., This makes it possible to test predictions about the internal 
computations of the model by generating new stimuli and running in silico experiments ., In the next two sections , we run a series of experiments on the CNN to gain insights into the factors that underlie its predictive accuracy in explaining the representations of the navigational-affordance model and the OPA ., A fundamental issue for understanding any model of sensory computation is determining the aspects of the sensory stimulus on which it operates ., In other words , what sensory inputs drive the responses of the model ?, To answer this question , we investigated the image features that drive affordance-related responses in the CNN ., Specifically , we sought to identify classes of low-level stimulus features that are critical for explaining the representational similarity of the CNN to the navigational-affordance model and the OPA ., We expected that navigational affordances would rely on image features that convey information about the spatial structure of scenes ., Our specific hypotheses were that affordance-related representations would be relatively unaffected by color information and would rely heavily on high spatial frequencies and edges at cardinal orientations ( i . e . 
, horizontal and vertical ) ., The hypothesis that color information would be unimportant was motivated by our intuition that color is not typically a defining feature of the structural properties of scenes and by a previous finding of ours showing that affordance representations in the OPA are partially tolerant to variations in scene textures and colors 19 ., The other two hypotheses were motivated by previous work suggesting that high spatial frequencies and cardinal orientations are especially informative for the perceptual analysis of spatial scenes , and that the PPA and possibly other scene-selective regions are particularly sensitive to these low-level visual features 33–38 , but see 39 ., To test these hypotheses , we generated new sets of filtered stimuli in which specific visual features were isolated or removed ( i . e . , color , spatial frequencies , cardinal or oblique edges; Fig 4A and 4B ) ., These filtered stimuli were passed through the CNN , and new RDMs were created for each layer ., We used the commonality-analysis technique described in the previous section to quantify the portion of the original explained variance of the CNN that could be accounted for by the filtered stimuli ., This procedure was applied to the explained variance of the CNN for predicting both the navigational-affordance RDM and the OPA RDM ( Fig 4A ) ., The results for both sets of analyses showed that over half of the explained variance of the CNN could be accounted for when the inputs contained only grayscale information , high-spatial frequencies , or edges at cardinal orientations ., In contrast , when input images containing only low-spatial frequencies or oblique edges were used , a much smaller portion of the explained variance was accounted for ., The differences in explained variance across high and low spatial frequencies and across cardinal and oblique orientations were more pronounced for the RSA predictions of the affordance RDM , but a similar pattern was 
observed for the OPA RDM ., We used a bootstrap resampling procedure to statistically assess these comparisons ., Specifically , we calculated bootstrap 95% confidence intervals for the following contrasts of shared-variance scores:, 1 ) high spatial frequencies minus low spatial frequencies and, 2 ) cardinal orientations minus oblique orientations ., These analyses showed that the differences in shared variance for high vs . low spatial frequencies and for cardinal vs . oblique orientations were reliable for both the affordance RDM and the OPA RDM ( all p<0 . 05 , bootstrap ) ., Together , these results suggest that visual inputs at high-spatial frequencies and cardinal orientations are important for computing the affordance-related features of the CNN ., Furthermore , these computational operations appear to be largely tolerant to the removal of color information ., Indeed , it is striking how much explained variance these inputs account for given how much information has been discarded from their corresponding filtered stimulus sets ., In addition to examining classes of input features to the CNN , we also sought to understand how inputs from different spatial positions in the image affected the similarity between the CNN and RDMs for the navigational-affordance model and the OPA ., Our hypothesis was that these RSA effects would be driven most strongly by inputs from the lower visual field ( we use the term “visual field” here because the fMRI subjects were asked to maintain central fixation throughout the experiment ) ., This was motivated by previous findings showing that the OPA has a retinotopic bias for the lower visual field 40 , 41 and the intuitive prediction that the navigational affordances of local space rely heavily on features close to the ground plane ., To test this hypothesis , we generated sets of occluded stimuli in which everything except a small horizontal slice of the image was masked ( Fig 5 ) ., These occluded stimuli were passed through 
the CNN , and new RDMs were created for each layer ., Once again , we used the commonality-analysis technique described above to quantify the portion of the original explained variance of the CNN that could still be accounted for by these occluded stimuli ., This procedure was repeated with the un-occluded region slightly shifted on each iteration until the entire vertical extent of the image was sampled ., We used this procedure to analyze the explained variance of the CNN for predicting both the navigational-affordance RDM and the OPA RDM ( Fig 5 ) ., For comparison , we also applied this procedure to RDMs for the other ROIs ., These analyses showed that the predictive accuracy of the CNN for both the affordance model and the OPA was driven most strongly by inputs from the lower visual field ., Strikingly , as much as 70% of the explained variance of the CNN in the OPA could be accounted for by a small horizontal band of features at the bottom of the image ( Fig 5 ) ., We created a summary statistic for this visual-field bias by calculating the difference in mean shared variance across the lower and upper halves of the image ., A comparison of this summary statistic across all tested RDMs shows that the lower visual field bias was observed for the RSA predictions of the affordance model and the OPA , but not for the other ROIs ( Fig 5 ) ., Together , these results demonstrate that information from the lower visual field is critical to the performance of the CNN in predicting the affordance RDM and the OPA RDM ., These findings are consistent with previous neuroimaging work on the retinotopic biases of the OPA 40 , 41 , and they suggest that the cortical computation of affordance-related features reflects a strong bias for inputs from the lower visual field ., The analyses above examined the stimulus inputs that drive affordance-related computations in the CNN ., We next set out to characterize the high-level features that result from these computations ., 
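The sliding-occluder procedure just described lends itself to a compact sketch. The following is a minimal, hypothetical illustration (the function names, fill value, band height, and step size are assumptions, not taken from the study); in the actual analysis each occluded stimulus would then be passed through the CNN and new RDMs computed per layer.

```python
import numpy as np

def band_occlude(image, band_top, band_height, fill=0.5):
    """Return a copy of `image` in which everything outside a horizontal
    band [band_top, band_top + band_height) is replaced by `fill`.
    `image` is an (H, W, C) float array with values in [0, 1]."""
    occluded = np.full_like(image, fill)
    occluded[band_top:band_top + band_height] = image[band_top:band_top + band_height]
    return occluded

def sliding_band_stimuli(image, band_height, step):
    """Generate the sequence of band-occluded stimuli, shifting the
    un-occluded band until the entire vertical extent is sampled.
    Returns (band_top, occluded_image) pairs."""
    tops = range(0, image.shape[0] - band_height + 1, step)
    return [(top, band_occlude(image, top, band_height)) for top in tops]

# Toy example: an 8x8 single-channel image.
img = np.arange(64, dtype=float).reshape(8, 8, 1) / 63.0
stimuli = sliding_band_stimuli(img, band_height=2, step=2)
```

Each `(band_top, occluded_image)` pair would feed one iteration of the commonality analysis, yielding a shared-variance profile over vertical position in the image.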
Specifically , we sought to characterize the internal representations of the CNN that best account for the representations of the OPA and the navigational-affordance model ., To do this , we performed a set of visualization analyses to reify the complex visual motifs detected by the internal units of the CNN ., We characterized the feature selectivity of CNN units using a receptive-field mapping procedure ( Fig 6A ) 42 ., The goal was to identify natural image features that drive the internal representations of the CNN ., In this procedure , the selectivity of individual CNN units was mapped across each image by iteratively occluding the inputs to the CNN ., First , the original , un-occluded image was passed through the CNN ., Then a small portion of the image was occluded with a patch of random pixel values ( 11 pixels by 11 pixels ) ., The occluded image was passed through the CNN , and the discrepancies in unit activations relative to the original image were logged ., These discrepancy values were calculated for each unit by taking the difference in magnitude between the activation to the original image and the activation to the occluded image ., After iteratively applying this procedure across all spatial positions in the image , a two-dimensional discrepancy map was generated for each unit and each image ( Fig 6A ) ., Each discrepancy map indicates the sensitivity of a CNN unit to the visual information across all spatial positions of an image ., The spatial distribution of the discrepancy effects reflects the position and extent of a unit’s receptive field , and the magnitude of the discrepancy effects reflects the sensitivity of a unit to the underlying image features ., We focused our analyses on the units in layer 5 , which was the layer with the highest RSA correlation for both the navigational-affordance model and the OPA ., We selected 50 units in this layer based on their unit-wise RSA correlations to the navigational-affordance model and the OPA
., These units were highly informative for our effects of interest: an RDM created from just these 50 units showed comparable RSA correlations to those observed when using all units in layer 5 ( correlation with affordance RDM: r = 0 . 28; correlation with OPA RDM: r = 0 . 35 ) ., We generated receptive-field visualizations for each of these units ., These visualizations were created by identifying the top 3 images that generated the largest discrepancy values in the receptive-field mapping procedure ( i . e . , images that were strongly representative of a unit’s preferences ) ., A segmentation mask was then applied to each image by thresholding the unit’s discrepancy map at 10% of the peak discrepancy value ., Segmentations highlight the portion of the image that the unit was sensitive to ., Each segmentation is outlined in red , and regions of the image outside of the segmentation are darkened ( Fig 6B ) ., We sought to identify prominent trends across this set of receptive-field segmentations ., In a simple visual inspection of the segmentations , we detected visual motifs that were common among the units , and the results of an automated clustering procedure highlighted these trends ., Using data-driven techniques , we embedded the segmentations into a low-dimensional space and then partitioned them into clusters with similar visual motifs ., We used t-distributed stochastic neighbor embedding ( t-SNE ) to generate a two-dimensional embedding of the units based on the visual similarity of their receptive-field segmentations ( Fig 6B ) ., We then used k-means clustering to identify sets of units with similar embeddings ., The number of clusters was set at 7 based on the outcome of a cluster-evaluation procedure ., The specific cluster assignments do not necessarily indicate major qualitative distinctions between units ., Rather , they provide a data-driven means of reducing the complexity of the results and highlighting the broad themes in the data ., These 
themes can also be seen in the complete set of visualizations plotted in S1–S7 Figs ., These visualizations revealed two broad visual motifs: boundary-defining junctions and large , extended surfaces ., Boundary-defining junctions are the regions of an image where two or more extended planes meet ( e . g . , clusters 1 , 5 , 6 , and 7 in Fig 6B ) ., These were often the junctions of walls and floors , and less often ceilings ., This was the most common visual motif across all segmentations ., Large , extended surfaces were uninterrupted portions of floor and wall planes ( e . g . , cluster 3 in Fig 6B ) ., There were also units that detected more complex structural features that were often indicative of doorways and other open pathways ( e . g . , clusters 2 and 4 in Fig 6B ) ., A common thread running through all these visualizations is that they appear to reflect high-level scene features that could be reliably used to map out the spatial layout and navigational affordances of the local environment ., Boundary-defining junctions and large , extended surfaces provide critical information about the spatial geometry of the local scene , and more fine-grained structural elements , such as doorways and open pathways , are critical to the navigational layout of a scene ., Together , these results suggest a minimal set of high-level visual features that are critical for modeling the navigational affordances of natural images and predicting the affordance-related responses of scene-selective visual cortex ., Our analyses thus far have focused on a carefully selected set of indoor scenes in which the potential for navigation was clearly delimited by the spatial layout of impassable boundaries and solid open ground ., Indeed , the built environments depicted in our stimuli were designed so that humans could readily navigate through them ., However , there are many environments in which navigability is determined by a more complex set of perceptual factors ., For example , 
in outdoor scenes navigability can be strongly influenced by the material properties of the ground plane ( e . g . , grass , water ) ., We wondered whether the components of the CNN that were related to the navigational-affordance properties of our indoor scenes could be used to identify navigational properties in a broader range of images ., To address this question , we examined a set of images depicting natural landscapes , whose navigational properties had been quantified in a previous behavioral study 43 ., Specifically , these stimuli included 100 images that could be grouped into categories of low or high navigability based on subjective behavioral assessments ., The overall navigability of the images reflected subjects’ judgments of how easily they could move through the scene ., These navigability assessments were influenced by a wide range of scene features , including the spatial layout of pathways and boundaries , the presence of clutter and obstacles , and the potential for treacherous conditions ., There were also low and high categories for 13 other scene properties ( Fig 7A ) ., Each scene property was associated with 100 images ( 50 low and 50 high ) , and many images were used for multiple scene properties ( 548 images in total ) ., We sought to determine whether the units from the CNN that were highly informative for identifying the navigational affordances of indoor scenes could also discern the navigational properties in this heterogeneous set of natural landscapes ., To do this , we focused on the 50 units selected for the visualization analyses in Fig 6 , and we used the responses of these units to classify natural landscapes based on their overall navigability ( Fig 7A ) ., We found that not only were these CNN units able to classify navigability across a broad range of outdoor scenes , but they also appeared to be particularly informative for this task relative to the | Introduction, Results, Discussion, Methods | Biologically inspired deep 
convolutional neural networks ( CNNs ) , trained for computer vision tasks , have been found to predict cortical responses with remarkable accuracy ., However , the internal operations of these models remain poorly understood , and the factors that account for their success are unknown ., Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses ., We focused on responses in the occipital place area ( OPA ) , a scene-selective region of dorsal occipitoparietal cortex ., In a previous study , we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is , information about where one can and cannot move within the immediate environment ., We hypothesized that this affordance information could be extracted using a set of purely feedforward computations ., To test this idea , we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification ., We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA ., Moreover , the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes ., The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA ., We then ran a series of in silico experiments on this model to gain insights into its internal operations ., These analyses showed that the computation of affordance-related features relied heavily on visual information at high-spatial frequencies and cardinal orientations , both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex ., These computations also exhibited a strong preference for information in the lower visual field , which is consistent with known retinotopic biases in the OPA ., Visualizations of feature selectivity within the CNN suggested that
affordance-based responses encoded features that define the layout of the spatial environment , such as boundary-defining junctions and large extended surfaces ., Together , these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations ., More broadly , they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms . | How does visual cortex compute behaviorally relevant properties of the local environment from sensory inputs ?, For decades , computational models have been able to explain only the earliest stages of biological vision , but recent advances in deep neural networks have yielded a breakthrough in the modeling of high-level visual cortex ., However , these models are not explicitly designed for testing neurobiological theories , and , like the brain itself , their internal operations remain poorly understood ., We examined a deep neural network for insights into the cortical representation of navigational affordances in visual scenes ., In doing so , we developed a set of high-throughput techniques and statistical tools that are broadly useful for relating the internal operations of neural networks with the information processes of the brain ., Our findings demonstrate that a deep neural network with purely feedforward computations can account for the processing of navigational layout in high-level visual cortex ., We next performed a series of experiments and visualization analyses on this neural network ., These analyses characterized a set of stimulus input features that may be critical for computing navigationally related cortical representations , and they identified a set of high-level , complex scene features that may serve as a basis set for the cortical coding of navigational layout ., These findings suggest a computational mechanism
through which high-level visual cortex might encode the spatial structure of the local navigational environment , and they demonstrate an experimental approach for leveraging the power of deep neural networks to understand the visual computations of the brain . | medicine and health sciences, diagnostic radiology, functional magnetic resonance imaging, neural networks, engineering and technology, applied mathematics, brain, social sciences, neuroscience, magnetic resonance imaging, algorithms, simulation and modeling, mathematics, brain mapping, computational neuroscience, vision, neuroimaging, coding mechanisms, research and analysis methods, computer and information sciences, imaging techniques, visual cortex, navigation, psychology, radiology and imaging, diagnostic medicine, anatomy, biology and life sciences, sensory perception, physical sciences, computational biology | null |
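The representational similarity analysis used throughout the study above reduces to building representational dissimilarity matrices ( RDMs ) and correlating their upper triangles. A minimal numpy-only sketch ( the tie-unaware rank transform and the toy response matrix are simplifying assumptions; the study's RDMs were built from fMRI responses, CNN layer outputs, and behavioral affordance maps ):

```python
import numpy as np

def rdm(responses):
    """RDM from an (n_conditions, n_features) response matrix:
    pairwise dissimilarity 1 - Pearson r between condition patterns,
    returned as the vectorized upper triangle."""
    r = np.corrcoef(responses)              # condition-by-condition Pearson r
    iu = np.triu_indices_from(r, k=1)
    return 1.0 - r[iu]

def spearman(x, y):
    """Spearman rank correlation via Pearson r of ordinal ranks
    (ignores ties; adequate for continuous dissimilarities)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def rsa(responses_a, responses_b):
    """Second-order similarity: rank correlation of the two RDMs."""
    return spearman(rdm(responses_a), rdm(responses_b))

# Toy response matrix: 4 conditions x 3 features.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.1],
              [3.0, 1.0, 0.0],
              [0.0, 5.0, 2.0]])
```

Comparing a representation with itself, e.g. `rsa(A, A)`, yields a correlation of 1; in practice one RDM would come from a brain region or model layer and the other from the affordance model.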
12 | journal.pcbi.1001064 | 2,011 | Integrative Features of the Yeast Phosphoproteome and Protein–Protein Interaction Map | Protein phosphorylation is a reversible , ubiquitous , and fundamental post-translational modification ( PTM ) that regulates a variety of biological processes; one of its critical roles is the control of protein signaling 1–3 ., Recent advances in mass-spectrometry ( MS ) –based technologies and phosphopeptide enrichment methods have enabled the use of high-throughput in vivo phosphosite mapping 4–7 to identify thousands of phosphoproteins ., To date , around 10 , 000 phosphosites of serine , threonine , or tyrosine residues have been identified in each of many organisms , including human 8–12 , mouse 13 and yeast 14–16 ., Many public databases , such as PHOSIDA 17 , Phospho . ELM 18 , and UniProt 19 , have been developed or expanded to catalog such phosphoproteome data ., Accordingly , the numbers of phosphoproteins that have been identified in various organisms now greatly exceed the numbers known to have roles in protein signaling ., This has raised the question of whether this intracellular phosphorylation , which occurs on such a large scale , has other major roles ., In modern biology , the use of high-throughput screening methods has enabled rapid progress in the disclosure of protein–protein interaction ( PPI ) networks in many organisms 20–27 ., Topological features common to PPI networks ( e . g .
, scale-free and small-world properties ) are of prime importance in interpreting intracellular protein behavior and the evolutionary aspects of PPIs 28–31 ., PTM changes the physical characteristics of proteins ., It is therefore probable that reversible PTM has large effects on the dynamic states of intracellular protein-binding patterns and complex formation , and that it controls not only signal transduction but also many other cellular pathways ., However , the impact of PTM on the whole picture of the PPI network has not yet been described ., Here , we describe the intracellular global relationships between protein phosphorylation and physical PPI , as derived from the results of integrative and systematic data-mining of Saccharomyces cerevisiae multi-omics data ( Fig . 1 ) ., New phosphoproteome data on S . cerevisiae were initially obtained by MS–based analysis and unified with data on previously identified phosphoproteomes ., We superimposed the unified phosphoproteome data onto a S . cerevisiae PPI network with other multi-omics data on S . cerevisiae ., From the results , we infer that the tremendous numbers of phosphorylations within a cell have a large impact on PPI diversity , and that intracellular phosphorylation patterns are affected partly by simultaneous phosphorylation of physically bound proteins that is triggered by the action of single kinases ., On the basis of liquid chromatography ( LC ) -MS analysis , we initially identified 1 , 993 S . cerevisiae phosphoproteins containing 6 , 510 phosphosites ., Information on the identified phosphopeptides has been stored in PepBase ( http://pepbase . iab . keio . ac . jp ) ., We unified these new phosphoproteome data with the publicly available phosphoproteome datasets of Holt et al . 16 and UniProt 19 and obtained a total of 3 , 477 phosphoproteins containing 25 , 997 phosphosites ( Fig . 2; Supplementary Table S1 ) ., The pS/pT/pY ratios of this study , the study of Holt et al . 
, and UniProt were 72%/23%/5% , 72%/23%/5% , and 80%/18%/2% , respectively ., Among the unified phosphoproteome data , 343 phosphoproteins and 2 , 778 phosphosites were not found in the data of Holt et al . or UniProt ., Comparison with S . cerevisiae genomic information 32 revealed that 58 . 5% of the 5 , 815 known and predicted genes were phosphoprotein-encoding genes ( Supplementary Table S2 ) ., Although the use of current high-throughput technologies cannot disclose the entire phosphoproteome picture of a cell , these results imply that most intracellular proteins can be phosphorylated under the appropriate environmental conditions ., The unified phosphoproteome data were superimposed onto the PPI network to generate a “phospho-PPI” network ., PPI data were obtained via DIP ( Database of Interacting Proteins ) 33 and grouped into four categories according to the experimental method used for the PPI assay: all kinds of experimental methods ( “ALL” ) , yeast two-hybrid ( “Y2H” ) , co-immunoprecipitation ( “IMM” ) , and tandem affinity purification ( “TAP” ) ., Among all the protein nodes involved in every category of the phospho-PPI network , the proportion of phosphoproteins was also nearly 60% ( Supplementary Fig . S1 ) ., For example , the phospho-PPI network of the “ALL” category was composed of 4 , 945 proteins , including 2 , 934 phosphoproteins ( 59 . 
3% ) and 17 , 215 physical interactions ., To explore specific characteristics of the phospho-PPI network , the number counts of interacting partners of phosphoproteins and nonphosphoproteins were analyzed ( note that throughout this study , the word “nonphosphoprotein” means a protein with no phosphosite identified to date ) ., We found that , in general , phosphoproteins had more interacting partners than nonphosphoproteins ., In each phospho-PPI network of the “ALL” and “Y2H” categories with enough protein nodes for the subsequent statistical analysis , the cumulative percentage distributions of node degrees ( or the number count of interacting partners ) of phosphoproteins and nonphosphoproteins were markedly different ( Fig . 3A and D ) ., For example , in the dataset of “ALL” , 47 . 6% of nonphosphoproteins had three or more interacting partners , but this was true for 67 . 9% of phosphoproteins ., Moreover , in both datasets , about twice as many phosphoproteins as nonphosphoproteins had 10 interacting partners ( Fig . 3B and E ) ., To analyze the statistical significance of this difference in the context of phosphorylation , we prepared randomly generated phospho-PPI networks by “node label shuffling” ( NLS ) , in which the node positions of phosphoproteins and nonphosphoproteins were randomly moved within the phospho-PPI networks ( for details , see Materials and Methods ) ., This demonstrated that the node degree of phosphoproteins was significantly higher than expected from a random distribution ( Fig . 
3C and F ) ., Node degree in PPI networks has an exponential relationship with protein expression level 34–36 , perhaps because cellular proteins with more copies have a greater possibility of interacting with others by chance 36 ., Therefore , if the phosphoproteome data are biased by protein abundance and highly abundant proteins tend to be identified as phosphoproteins , there is a strong possibility that the relationship between phosphorylation and node degree is spurious , with no direct causal connection ., In fact , proteome abundance data obtained through a single-cell proteomic analysis combining high-throughput flow cytometry and a library of GFP-tagged yeast strains 37 showed that the number of phosphoproteins in the “ALL” phospho-PPI was skewed , especially among highly abundant proteins ( Fig . 4A and D ) ., However , we demonstrated that in the “ALL” phospho-PPI network there were still significant differences in the node degree levels of phosphoproteins and nonphosphoproteins of similar abundance , and that the differences could be explained independently of protein copy number ( Fig . 4B , C , E and F ) ., Similar results were derived from the phospho-PPI network generated only from the “Y2H” category ( Supplementary Fig . S2 ) ., We further compared the abilities to predict phosphoproteins by using node degree and protein abundance levels above given thresholds ., The predictive power of node degree was markedly higher than that of protein abundance , except in the case of proteins that were extremely abundant ( Supplementary Fig . 
S3 ) ., If this higher predictive ability were attributable to a spurious relationship associated with the actual intracellular proteome abundance , then the node degree of a protein given by PPI assays would appear to provide a better approximation of the intracellular protein copy number than would single-cell proteomic analysis , which is unlikely ., Protein disorder is also a typical feature of “hub” proteins in PPI networks 38–40 ., Parts of unstructured proteins lack fixed structure , and such disordered regions may have the ability to bind multiple proteins and to diversify PPI networks 38–40 ., Additionally , at the proteome level , phosphorylation occurs at high rates in the disordered regions of proteins 16 , 17 , 41–44 ., Therefore , it is highly likely that protein disorder affects the node degree difference between phosphoproteins and nonphosphoproteins ., For every S . cerevisiae protein registered in UniProt , we calculated the probability of harboring intrinsically disordered regions ( see Materials and Methods ) ., In the “ALL” phospho-PPI network , the ratio of phosphoproteins to nonphosphoproteins increased smoothly with increasing disorder probability level ( Fig . 4G ) ., However , in the same network , the node degree levels of phosphoproteins and nonphosphoproteins of the same disorder probability level were significantly different ( Fig . 4H and I ) ., Even between phosphoproteins that had a low disorder probability of <0 . 1 and nonphosphoproteins that had an extremely high disorder probability of >0 . 9 , the node degree level of the phosphoproteins was significantly higher than that of the nonphosphoproteins ( P = 0 . 0043 ) ., Similar results were observed in the “Y2H” dataset ( Supplementary Fig .
S2 ) ., These results imply that the higher node degree of phosphoproteins than of nonphosphoproteins is at least partly independent of the PPI network diversity produced by unstructured proteins ., Other factors that could influence the relationship between protein phosphorylation and interaction are protein size and protein groups with identical cellular function ., Larger proteins may have a greater chance of being phosphorylated and may provide more binding domains for interactions with other proteins ., However , similar to the results for protein abundance and disorder , statistical significance of the higher node degree of phosphoproteins was observed independently of protein length ( Supplementary Fig . S4 ) ., ( Phosphorylation probability was highly correlated with protein length; Supplementary Fig . S4 . ), In the event that both protein phosphorylation and interaction events occurring in a fraction of proteins confer a particular , identical cellular function , then the global difference in node degree levels of phosphoproteins and nonphosphoproteins would appear to be caused only by differences in function ., However , we found that , for most functional annotations of S . cerevisiae in GO Slim ( a higher level view of Gene Ontology ) , there was a higher node degree level for phosphoproteins than for nonphosphoproteins ( Supplementary Fig . 
S5 ) ., The average node degree of phosphoproteins is higher than that of nonphosphoproteins 45 , but it was unclear, 1 ) whether this characteristic was observable only in hub proteins or whether it existed broadly at the proteome level; and, 2 ) whether this was a spurious correlation that had emerged because of the presence of some third factor hidden in the complex and intertwining proteomes ., Our results show that , in many cases , this characteristic is present not only in hub proteins but also in proteins that have few interacting partners ., They also imply that these protein interactions or binding patterns are not the result of influence by a third factor but are caused by phosphorylation-dependent cellular activities ., The additive effect of kinase–substrate and phosphatase–substrate reactions is one possible model for interpreting this phenomenon in the phospho-PPI network ., If PPIs include many transient signaling reactions between kinases , phosphatases , and their substrates ( most of which are phosphorylated under certain conditions ) , then the signaling proteins may have interactions additional to the cohesive protein binding interactions in the PPI data ., Indeed , some enzyme–protein substrate interactions are surprisingly stable and can be captured in protein interaction assays 46 ., However , of the 795 yeast phosphorylation and dephosphorylation reactions for which information has previously been published 47 , only 3 . 9% , 1 . 6% , 2 . 4% , and 0 . 8% overlapped with those in our “ALL , ” “Y2H , ” “IMM , ” and “TAP” PPI datasets , respectively ( Supplementary Fig . 
S6). Note, however, that these values were significantly higher than those expected from negative controls of the corresponding PPI networks generated by “random edge rewiring” (RER), and similar, significant overlaps between physical PPI and signaling networks were obtained by another group 48; for details of RER, see Materials and Methods. On the other hand, the node degree levels of at least 600 proteins (>20% of the phosphoproteome in the “ALL” phospho-PPI network) might have been related to, and affected by, phosphorylation, as evidenced by the cumulative percentage of phosphoproteins, which was more than 20% higher than that of nonphosphoproteins (Fig. 3A). In addition, many unidentified phosphoproteins are certain to be present in the nonphosphoprotein dataset. Therefore, it is difficult to interpret such a large difference in the node degree of phosphoproteins and nonphosphoproteins only in terms of the additive effect of signaling reactions, which had such a small overlap with the PPI data. Furthermore, among the GO Slim ontology groups within the “signal transduction” and “cell cycle” categories, which especially include many signaling proteins, there were no great distinctions between the node degree levels of phosphoproteins and nonphosphoproteins (although the node degree levels for “cytokinesis” and “response to stress,” like those for most of the other ontology groups, showed marked differences between phosphoproteins and nonphosphoproteins) (Supplementary Fig. S5). In the phospho-PPI network, phosphoproteins had a greater tendency than nonphosphoproteins to interact with proteins harboring phosphoprotein binding domains (PPBDs). Of the 10 known PPBDs (14-3-3, BRCT, C2, FHA, MH2, PBD, PTB, SH2, WD-40, and WW 49), six (BRCT, C2, FHA, SH2, WD-40, and WW) were present in the member proteins of the “ALL” phospho-PPI network, and the average probabilities that phosphoproteins would interact with proteins that had all PPBDs combined or each type of PPBD were higher than those for nonphosphoproteins (Fig. 5). (The gap between node degree levels of phosphoproteins and nonphosphoproteins was normalized; see Materials and Methods.) Considering all of these results and perspectives, a reasonable and generalized model that can interpret the higher node degree of phosphoproteins is that reversible and alternative phosphorylation reactions alter the physical characteristics of proteins under various environmental conditions; the interacting or binding partners of phosphoproteins are thereby more diversified than those of nonphosphorylated proteins. Consistent with this interpretation, phosphoproteins harboring at least two phosphosites had more interacting partners than those with a single phosphosite in the phospho-PPI network (Supplementary Fig. S7), even though phosphoproteins follow a power-law distribution with regard to phosphosite number counts and only a small fraction of phosphoproteins have multiple phosphosites 50. Protein phosphorylation reactions therefore seem to make a large contribution to intracellular PPI diversity. We further analyzed the phosphorylation patterns of protein pairs forming pair-wise interactions in the phospho-PPI network and found that both interacting proteins in each pair tended to be phosphorylated. For every category of phospho-PPI network, three types of pair-wise interactions were counted, in which “Both,” “Either,” or “Neither” of the two interacting proteins were phosphorylated. The “Both” and “Neither” types of protein interactions were significantly more common in the real phospho-PPI network than expected from negative controls produced by RER, whereas the “Either” type was significantly less common than expected (Fig. 6; Supplementary Fig. S8). Notably, this outcome was independent of whether the node degrees of the phosphoproteins were higher or lower than those of the nonphosphoproteins, because RER does not change the node degree of any protein in a given network 51. PPI data contain homodimer and heterodimer information that can be captured by experimental assays such as two-hybrid assays 52. Therefore, to check the possibility that the tendency of interacting proteins to have similar phosphorylation patterns was caused by interactions between structurally and sequentially homologous proteins with similar phosphosites, we conducted the same analysis as above using “filtered” phospho-PPI networks, in which interactions between two homologous proteins were eliminated by an E-value cut-off of 1e–10 in the BLASTP program; no marked change was observed (Fig. 6; Supplementary Fig.
S8). Proteins involved in signal transduction pathways tend to be phosphorylated, and this is reflected in the PPI data, although the overlaps between such signaling reactions and PPIs are limited (see above and Supplementary Fig. S6). Another possible interpretation for the multitude of physical interactions between phosphoproteins is that physically binding proteins that are members of the same protein complex tend to be phosphorylated simultaneously by a single enzyme. To search for the protein kinases potentially responsible for the co-phosphorylation of proteins forming the same complex, we analyzed a dataset of kinase–substrate relationships together with the PPI data of the “ALL” category. In the following analysis, we used 85 and 65 kinases, each having multiple substrates, from the experimental results of an in vitro kinase–substrate assay 53 and a literature-derived collection of yeast signaling reactions 47, respectively (Supplementary Table S3). For each kinase, its multiple substrates were superimposed on the PPI network, and the number of “interacting kinate modules” (IKMs; triangle motifs composed of a kinase and its two physically interacting substrates) (Fig. 7A) 53 was counted and compared with the numbers estimated in negative controls of the PPI network produced by NLS and RER. This analysis revealed that three kinases from the in vitro assay and 12 from the literature-based collection had significantly higher IKM formability than expected from both NLS and RER (P<0.05) (Fig. 7B and C; Supplementary Table S3). Similar results were obtained using the “filtered” phospho-PPI network (Supplementary Fig. S9; Supplementary Table S3). Accordingly, we suggest that, when a protein complex and a kinase are in close proximity within the intracellular environment, there is a high chance of simultaneous phosphorylation of the member proteins participating in the complex. This is consistent with the subcellular co-localization of signaling networks recently revealed through the systematic prediction of signaling networks using phosphoproteome data with integrated protein network information derived from curated pathway databases, co-occurring terms in abstracts, physical protein interaction assays, mRNA expression profiles, and the genomic context 48, and through analysis of time-course phosphoproteome data 54. IKMs may enhance the subcellular co-localization of signaling reactions, and/or vice versa. The literature-derived signaling collection is presumably more enriched with well-investigated reactions and thus may more accurately reflect in vivo signaling. This may explain why the collection harbored more kinases with high IKM formabilities (12 out of 65) than the in vitro kinase–substrate relationship data (three out of 85). It is plausible that, in living cells, the diversity of protein interactomes (not only of protein signaling but also of protein complex formation) is essentially influenced by the large number of phosphorylation events; many reversible phosphorylations might control condition-specific protein binding interactions related to different subcellular processes and molecular machines. On the other hand, protein phosphorylation patterns also seem to depend largely on intracellular protein interaction diversity. It is possible that many of the proteins defined as nonphosphoproteins in this study can actually be phosphorylated under appropriate cellular conditions. Even where this is true, however, the set we defined here as phosphoproteins should be enriched with proteins that are frequently phosphorylated under normal
or many different cellular conditions, because frequently phosphorylated proteins have a higher chance of being identified as phosphoproteins than do rarely phosphorylated proteins. Accordingly, the features and models discussed in this study should reflect the overall characteristics of phosphoproteins and nonphosphoproteins among a number of different cellular conditions. This is supported by the finding that proteins with two or more phosphosites physically interacted with more proteins than did those with only a single phosphosite (Supplementary Fig. S7). Although the quality of current yeast PPI data is also not perfect and the data may include false positives, the observed features with statistical significance should be consequences of the actual behaviors of intracellular proteins, because the effects of such false positives on the statistical tests are supposedly random. The integrative data-mining of yeast multi-omics data has now shed light on the macroscopic and large-scale relationships between phosphoproteomes and protein interactomes. Future comprehensive analyses of the in vivo link between protein phosphorylation and physical interaction will yield more insights into the complex and intertwined molecular systems of living cells. Saccharomyces cerevisiae strain IFO 0233 cells grown continuously on glucose medium 55 were used. Pelleted cells were vacuum dried and frozen until further analysis. A Bioruptor UCW-310 (Cosmo Bio, Tokyo, Japan) was used to disrupt the pellets in 0.1 M Tris-HCl (pH 8.0) containing 8 M urea, protein phosphatase inhibitor cocktails 1 and 2 (Sigma), and protease inhibitors (Sigma). The homogenate was centrifuged at 1,500g for 10 min, and the supernatant was reduced with dithiothreitol, alkylated with iodoacetamide, and digested with Lys-C; this was followed by dilution and trypsin digestion as described 56. Digested samples were desalted using C-18 StageTips 57. Phosphopeptide enrichment by hydroxy acid–modified metal oxide chromatography (HAMMOC) was performed as reported previously 11, 58. Briefly, digested lysates (100 µg each) were loaded onto a self-packed titania-C8 StageTip in the presence of lactic acid. After the samples had been washed with 80% acetonitrile containing 0.1% TFA, phosphopeptides were eluted by a modified approach using 5% ammonium hydroxide, 5% piperidine, and 5% pyrrolidine in series 59. An LTQ-Orbitrap XL (Thermo Fisher Scientific, Bremen, Germany) coupled with a Dionex Ultimate 3000 (Germering, Germany) and an HTC-PAL autosampler (CTC Analytics AG, Zwingen, Switzerland) was used for nanoLC-MS/MS analyses. An analytical column needle with a “stone-arch” frit 60 was prepared with ReproSil C18 materials (3 µm, Dr. Maisch, Ammerbuch, Germany). The injection volume was 5 µL and the flow rate was 500 nL/min. The mobile phases consisted of (A) 0.5% acetic acid and (B) 0.5% acetic acid with 80% acetonitrile. A three-step linear gradient of 5% to 10% B in 5 min, 10% to 40% B in 60 min, 40% to 100% B in 5 min, and 100% B for 10 min was employed throughout this study. The MS scan range was m/z 300 to 1500, and the top 10 precursor ions were selected in MS scans by the Orbitrap with R = 60,000 for subsequent MS/MS scans by ion trap in automated gain control (AGC) mode; AGC values of 5.00e+05 and 1.00e+04 were set for full MS and MS/MS, respectively. The normalized collision energy was set at 35.0. A lock mass function was used for the LTQ-Orbitrap to obtain constant mass accuracy during gradient analysis. Both Mass Navigator v1.2 (Mitsui Knowledge Industry, Tokyo, Japan) and Mascot Distiller v2.2.1.0 (Matrix Science, London, UK) were used to create peak lists based on the recorded fragmentation spectra. Peptides and proteins were identified by automated database searching using Mascot Server v2.2 (Matrix Science) against UniProt/SwissProt v56.0 with a precursor mass tolerance of 3 ppm, a fragment ion mass tolerance of 0.8 Da, and strict trypsin specificity, allowing for up to two missed cleavages. Carbamidomethylation of cysteine was set as a fixed modification, and oxidation of methionine and phosphorylation of serine, threonine, and tyrosine were allowed as variable modifications. Phosphopeptide identification and phosphosite determination were performed in accordance with a procedure reported previously 11. The false discovery rate was estimated to be 1.07% using a randomized database. All annotated MS/MS spectra were stored in PepBase (http://pepbase.iab.keio.ac.jp). Saccharomyces cerevisiae phosphoproteome data were obtained from Dataset S1 of Holt et al. 16. Another collection of formerly identified phosphoproteins and their phosphosites was obtained from UniProt (release 15.14; http://www.uniprot.org/) 19. All UniProtKB/Swiss-Prot protein entries identified as having at least one phosphosite in high-throughput phosphoproteomics studies were downloaded via the Protein Knowledgebase (UniProtKB) in XML format by querying the term scope “PHOSPHORYLATION LARGE SCALE ANALYSIS AT”. Some phosphoproteins registered in UniProt had multiple synonymous UniProt accessions. For integrative analyses and comparisons of yeast multi-omics data, all identities of proteins and genes obtained from different data sources were standardized to UniProt accessions. If objects (e.g.,
gene names, ORF names, and/or locus names) in a data source did not have UniProt accessions, the objects were standardized to their corresponding UniProt accessions according to the cross-reference list prepared from the UniProtKB/Swiss-Prot protein entries obtained from UniProt (release 15.14). In cases where an object corresponded to multiple synonymous UniProt accessions, all accessions were used to identify its corresponding objects in other data sources. The phosphoproteome data newly identified in this study and the former phosphoproteome datasets obtained from Holt et al. and UniProt were unified according to their UniProt accessions. Positions of phosphosites and their amino acid residues in the unified phosphoproteome data were double-checked using the proteome sequences obtained from UniProt (release 15.14). From SGD (Saccharomyces Genome Database; http://yeastgenome.org) 32, annotations of 5,815 known and predicted genes were obtained. ORF names of genes were checked against the unified phosphoproteome data to determine whether each encoded protein had been identified as a phosphoprotein. The S. cerevisiae PPI network was obtained as XML files (Scere20081014) from DIP (Database of Interacting Proteins; http://dip.doe-mbi.ucla.edu) 33. We eliminated each interaction entry including three or more “interactors” (e.g., in which multiple prey proteins were detected for one bait protein in one experimental assay) and used only those including two “interactors.” Every node in the PPI network was labeled with its corresponding UniProt ID provided in the same XML file. According to the type of PPI assay, the PPI data were further grouped into four categories: all kinds of experimental methods (“ALL”), yeast two-hybrid (“Y2H”), co-immunoprecipitation (“IMM”), and tandem affinity purification (“TAP”). A “filtered” PPI network was also prepared for each category by eliminating interactions between two similar proteins using the BLASTP program and an E-value cut-off of 1e–10. The unified phosphoproteome data were mapped onto every category of PPI data prepared from DIP according to their UniProt accessions, and a phospho-PPI network was generated. Throughout this study, proteins that did not correspond to phosphoproteome data were termed “nonphosphoproteins.” To prepare negative controls for the PPI and phospho-PPI networks, two different processes (as diagrammed in Fig. 1) were adopted as appropriate on a case-by-case basis. “Node label shuffling” (NLS) swaps the labels of two randomly selected nodes in a given network; it repeats this operation a sufficient number of times until all pair-wise interactions in the queried network have disappeared or until the number of iterations reaches 1,000 times the number of interactions. “Random edge rewiring” (RER) randomly selects two edges in a given network and randomly rewires them. During this process, each rewiring operation is retried if a pair of nodes redundantly wired by two edges occurs in the network; the iteration termination condition is the same as that of NLS. Proteome abundance data for S. cerevisiae that had previously been acquired through a single-cell proteomics analysis combining high-throughput flow cytometry and a library of GFP-tagged strains 37 were used to analyze the characteristics of protein expression in the phospho-PPI network. These data comprised proteome abundance measured for cells grown in rich (YEPD) and synthetic complete (SD) medium. For each cell growth condition, protein names were standardized to UniProt accessions, and protein abundance levels were log-transformed (base 10) and superimposed on each of the “ALL” and “Y2H” phospho-PPI networks. In this case, protein nodes for which abundance levels were not provided in the abundance data were deleted from the phospho-PPI network. The protein disorder level of every S. cerevisiae protein registered in UniProt (release 15.14) was predicted with the POODLE-W program, which uses support vector machine–based learning on the amino acid sequences of structurally confirmed disordered proteins 61. For the analysis, we used the “disorder probability” (i.e., the probability that a given protein is unstructured) output by this program. Saccharomyces cerevisiae gene annotations belonging to the “molecular function,” “biological process,” or “cellular component” categories of GO Slim, a higher-level view of the S. cerevisiae Gene Ontology (GO), were downloaded via the SGD ftp site. Information on S. cerevisiae proteins, each of which has at least one of the 10 known phosphoprotein binding domains (PPBDs), namely 14-3-3, BRCT, C2, FHA, MH2, PBD, PTB, SH2, WD-40, and WW 49, was obtained according to the protein domain annotations of UniProt (release 15.
14), which were provided by other protein databases. To evaluate the tendencies of phosphoproteins and nonphosphoproteins to interact with proteins that had PPBDs, normalized probabilities of such interactions were defined: for each protein, the number of interacting partners that had PPBDs was divided by the number of all interacting partners. To find possible IKMs, kinases previously reported to phosphorylate multiple substrates were obtained from data on in vitro substrates recognized by most yeast protein kinases, measured with proteome chip technology (Supplementary Data 2 of Ptacek et al. 53), as well as from a literature-derived collection of documented yeast signaling reactions (Table S3 of Fiedler et al. 47). All gene names of substrates in the in vitro kinase–substrate relationship data and ORF name | Introduction, Results/Discussion, Materials and Methods | Following recent advances in high-throughput mass spectrometry (MS)–based proteomics, the numbers of identified phosphoproteins and their phosphosites have greatly increased in a wide variety of organisms. Although a critical role of phosphorylation is control of protein signaling, our understanding of the phosphoproteome remains limited. Here, we report unexpected, large-scale connections revealed between the phosphoproteome and protein interactome by integrative data-mining of yeast multi-omics data. First, new phosphoproteome data on yeast cells were obtained by MS-based proteomics and unified with publicly available yeast phosphoproteome data. This revealed that nearly 60% of the ∼6,000 yeast genes encode phosphoproteins. We mapped these unified phosphoproteome data onto a yeast protein–protein interaction (PPI) network together with other yeast multi-omics datasets containing information about proteome abundance, proteome disorders, literature-derived signaling reactomes, and in vitro substratomes of kinases. In the phospho-PPI network, phosphoproteins had more interacting partners than nonphosphoproteins, implying that a large fraction of intracellular protein interaction patterns (including those of protein complex formation) is affected by reversible and alternative phosphorylation reactions. Although highly abundant or unstructured proteins have a high chance of both interacting with other proteins and being phosphorylated within cells, the difference between the numbers of interacting partners of phosphoproteins and nonphosphoproteins was significant independently of protein abundance and disorder level. Moreover, analysis of the phospho-PPI network and yeast signaling reactome data suggested that co-phosphorylation of interacting proteins by single kinases is common within cells. These multi-omics analyses illuminate how wide-ranging intracellular phosphorylation events and the diversity of physical protein interactions are largely affected by each other. | To date, high-throughput proteome technologies have revealed that hundreds to thousands of proteins in each of many organisms are phosphorylated under the appropriate environmental conditions. A critical role of phosphorylation is control of protein signaling. However, only a fraction of the identified phosphoproteins participate in currently known protein signaling pathways, and the biological relevance of the remainder is unclear. This has raised the question of whether phosphorylation has other major roles. In this study, we identified new phosphoproteins in budding yeast by mass spectrometry and unified these new data with publicly available phosphoprotein data. We then performed integrative data-mining of large-scale yeast phosphoprotein and protein–protein interaction (complex formation) data in an exhaustive analysis that incorporated yeast protein information from several other sources. The phosphoproteome data integration surprisingly showed that nearly 60% of yeast genes encode phosphoproteins, and the subsequent data-mining analysis derived two models interpreting the mutual intracellular effects of large-scale protein phosphorylation and binding interaction. Biological interpretations of both large-scale intracellular phosphorylation and the topology of protein interaction networks are highly relevant to modern biology. This study sheds light on how in vivo protein pathways are supported by a combination of protein modification and molecular dynamics. | computational biology/systems biology | null |
2,221 | journal.pcbi.1005343 | 2017 | Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface | The cerebral cortex contains interacting neurons whose functional connections are modified through repeated patterns of activation. For example, motor and somatosensory cortices are typically organized into somatotopic regions in which localized neural populations are associated with muscles or receptive fields and show varied levels of correlated activity (e.g., 1–5). Functional relationships between such neural populations are known to change over time, reinforcing relevant pathways 6–9. These changes are the result of plasticity mechanisms acting on myriad synaptic connections between cortical neurons. Most of these connections are relatively weak but can potentiate under the right conditions. However, it is not always clear what such conditions might be, or how one can interact with them for experimental or clinical purposes. Unanswered questions include the way local synaptic plasticity rules lead to stable, emergent functional connections, and the role of neural activity (and its statistics) in shaping such connections. While recent and ongoing work elucidates various plasticity mechanisms at the level of individual synapses, it is still unknown how these combine to shape the recurrently connected circuits that support population-level neural computations. Bidirectional brain-computer interfaces (BBCIs) capable of closed-loop recording and stimulation have enabled targeted conditioning experiments that probe these issues. In a seminal experiment 10, a BBCI called the Neurochip recorded action potentials of a neuron at one site in motor cortex (MC) and delivered spike-triggered stimuli at another site for prolonged periods in freely behaving macaque monkeys. This conditioning was able to increase functional connectivity from neurons in the recorded site to those in the stimulated site (cf. 11), as measured by electromyograms (EMG) of muscle activation evoked by intracortical microstimulation (ICMS) in MC. Importantly, the relative strength of the induced changes showed a strong dependence on the spike–stimulus delay, consistent with experimentally derived excitatory spike-timing-dependent plasticity (STDP) time windows 12–14. The effects of this protocol were apparent after about a day of ongoing conditioning and lasted for several days afterwards. Similar spike-triggered stimulation showed that corticospinal connections could increase or decrease, depending on whether the postsynaptic cells were stimulated after or before the arrival of the presynaptic impulses 15. This BBCI protocol has potential clinical uses for recovery after injuries and scientific utility for probing information processing and learning in neural circuits. The observations outlined above suggest that STDP is involved in shaping neural connections by normal activity during free behavior and is the central mechanism behind the success of spike-triggered conditioning. However, this could not be verified directly, as current experiments only measure functional relationships between cortical sites. Furthermore, interactions between BBCI signals and neural activity in recurrent networks are still poorly understood, and it remains unclear how BBCI protocols can be scaled up, made more efficient, and optimized for different experimental paradigms. For example, during spike-triggered stimulation, the spikes from a single unit are used to trigger stimulation of a neural population. While STDP can explain how the directly targeted synapses may be affected (i.e., from the origin of the recorded spikes to the stimulated population), the observed functional changes must rely on a broader scope of plastic changes involving other neurons that are functionally related to the recorded ones. What are the relevant network mechanisms that govern these population-level changes? How can a BBCI make use of population activity to trigger optimal stimulation? Here we advance a modeling platform capable of capturing the effects of a BBCI on recurrent, plastic neuronal networks. Our goal is to use the simplest dynamical assumptions, in a “bottom-up” approach motivated by neuronal and synaptic physiology, that enable the reproduction of key experimental findings from 10 at the functional level in MC and related muscles. In turn, we seek to use this model to provide insights into plasticity mechanisms in recurrent MC circuits (and other cortical regions) that are not readily accessible experimentally, as well as to establish a theoretical framework upon which future BBCI protocols can be developed. We build on a well-established body of work that enables analytical estimates of synaptic changes based on network statistics 16–22, and we compare theoretical results with experiments and with numerical simulations of a probabilistic spiking network. In our model, every neuron is excitatory; the modulatory role of inhibition in MC is instead represented implicitly by non-homogeneous probabilistic activation rates. While inhibition likely plays an important role in cortical dynamics, we consider the results from our exclusive use of excitation to be a significant finding, suggesting that a few key mechanisms can account for a wide range of experimental results. Using data from previous work as well as from novel experiments, we calibrate the STDP synaptic dynamics and activity correlation timescales to those typically found in MC neural populations. The result is a spiking model with multiplicative excitatory STDP and stable connectivity
dynamics which can reproduce three key experimental findings: Furthermore, we make the following novel findings: Together, these results provide quantifiable experimental predictions. They arise from a theoretical framework that is easily scalable and serves as a potential testbed for next-generation applications of BBCIs. We discuss ways to use this framework in state-dependent conditioning protocols. When neurons in the network's three groups a, b, c are subject to external commands ν(t) = (νa(t), νb(t), νc(t)) with stationary statistics, their averaged connectivity J̄(t) evolves toward an equilibrium J̄* that reflects these inputs' correlations (cf. [24, 25]), although individual synapses may continue to fluctuate. This has been observed in a number of theoretical studies (see e.g. [18, 23]) and is consistent with the formation and dissociation of muscle assemblies in MC due to complex movements that are regularly performed [8]. The mean synaptic equilibrium J̄* strongly depends on the correlation structure Ĉ(u) of the external inputs ν(t) (see Fig 2A). Indeed, a narrow peak near the origin for correlations within groups, as is the case for the periodic external rates shown in Fig 1B, along with the absence of such peaks for cross-group correlations, contributes to strengthening synapses within groups and weakening those across groups. Under such conditions, what will be the impact of spike-triggered stimulation? Fig 2B shows the evolution of synaptic averages J̄αβ(t), analytically computed (see Methods) for a system initiated at the synaptic equilibrium associated with the external rates ν(t) from Fig 1B. The inset of Fig 2B shows the evolution of individual synapses from group a to group b from full network simulations. At 15 hours, the spike-triggered stimulation protocol is turned "on", with a set delay d† = 20 milliseconds, and synapses start changing. In ∼10 hours they reach a new equilibrium which differs from the initial one in a few striking ways, as seen in Fig 2C and 2D, where normalized differences (J̄† − J̄)/Jmax are plotted for all pre- and post-group combinations. First, as expected and in accordance with experiments [10], the mean strength of synapses from group a (recorded site) to group b (stimulated site) is considerably increased (by about 80%). As described in more detail below, this massive potentiation relies on two ingredients: correlated activity within group a and an appropriate choice of stimulation delay d†. Perhaps more surprising are collateral changes in other synapses, although they are of lesser magnitude. While this was previously unreported, it is consistent with unpublished data from the original spike-triggered stimulation experiment [10]. It is unclear how many of these changes are due to the particular external rate statistics and other model parameters; we return to this question below when realistic activity statistics are considered. We also show that spike-triggered stimulation induces novel correlation structures due to synaptic changes, as illustrated in Fig 2A, which plots the correlation functions Ĉ(u), C(u), C†(u) and CJ†(u). Here, CJ†(u) denotes the correlations one observes under baseline activity (i.e. without ongoing stimulation) but with the equilibrium connectivity J̄†* obtained after prolonged spike-triggered stimulation, i.e., at the end of spike-triggered stimulation. It is clear that every interaction involving group b is considerably changed, most strongly that with group a, which includes the neuron used to trigger stimulation. More surprising is the increased cross-correlation of group b with itself, even though connectivity within group b is not explicitly potentiated by conditioning. In fact, it is slightly depressed (Fig 2D). This happens because connections from group a to group b are considerably enhanced, which causes the mean firing rate of group b to grow and its correlations to increase. Later, we explore similar collateral changes that occur because of multi-synaptic interactions. We see in the next section how these correlations translate into functional changes in the network. Finally, Fig 2B shows a crucial feature of our model: the timescale for the convergence from the normal to the artificial equilibrium is different from that of the decay back to the normal equilibrium after the end of spike-triggered stimulation. In the conditioning experiments from [10, 15], the effect of spike-triggered stimulation was seen after about 5–24 hours of conditioning, while the changes decayed after 1 to 7 days. With the simplified external drives producing a reasonable mean firing rate of about 10 Hz for individual cells, an STDP learning rate of η = 10⁻⁸ was adequate to capture the two timescales of synaptic changes. Thus, the simple excitatory STDP mechanisms in our model give rise to distinct timescales for increase and decay of synaptic connectivity strength produced by spike-triggered conditioning, in agreement with experimental observations. The emergence of distinct timescales was previously reported and studied in related modelling contexts [30, 31]. It should be noted that a number of parameters are shown to affect the magnitude of timescale separation, such as types of synaptic delays, weight dependence of the STDP rule, the firing rate
of ongoing baseline activity, etc. We reiterate that many of these parameters are not well resolved experimentally for macaque MC, and that we have made simplifying choices (see Methods) that can be adapted to new experimental data. Nevertheless, we expect our model to robustly produce distinct synaptic timescales that can be fitted by a single time-scaling parameter. This does not contradict the findings from human psychophysical studies that feedback error-driven motor adaptation may involve two or more different and independent parallel processes with different temporal dynamics for learning and decay of the motor skill [35]. Nevertheless, the cellular mechanisms in our model may have some relation to the different timescales proposed to underlie motor adaptation at the system level [35]. Such relationships could be further investigated by direct experimentation and appropriate simulations. In summary, our model satisfies the first experimental observation from [10] we set out to reproduce (point a. in Introduction). Indeed, we find that two distinct timescales of synaptic changes (during and after conditioning) are an emergent property of our model, and tuning a single parameter is sufficient to fit the rates observed in experiments. Changes in correlations due to spike-triggered conditioning indicate that there is an activity-relevant effect of induced synaptic changes, which is measurable from spiking statistics (see Fig 2A). We now show how this is directly observable in evoked activity patterns that are consistent with intra-cortical microstimulation (ICMS) protocols employed in experiments. In [10], connectivity changes were inferred using ICMS and electromyogram (EMG) recordings of the monkey's wrist muscles, as well as evoked isometric torques. To summarize, a train of ICMS stimuli lasting 50 ms was delivered to each MC site; simultaneously, EMG activity in three target muscles was recorded. The average EMG responses for repeated trials were documented for each of three MC sites (i.e. group a, b and c) before and after spike-triggered conditioning. The experiment showed that prior to conditioning, ICMS stimulation of any MC site elicited well-resolved average EMG responses, largest in one muscle but not the two others. After conditioning, ICMS stimulation of the recording site (group a) not only elicited an EMG response in its associated muscle, but also in that of the stimulated site (group b). While it was conjectured that synaptic changes in MC were responsible for the changes, this could not be verified directly. Our model suggests that synaptic changes can indeed occur in MC-like networks, but it remains unclear if such changes can lead to the experimentally observed motor output changes. We address this by simulating EMG responses of our model, before and after spike-triggered conditioning (†). Fig 3 shows a simulated ICMS protocol before (panel A) and after (panel B) spike-triggered stimulation conditioning. For each case, synaptic matrices are chosen from full network simulations and fixed (STDP is turned off), as shown in the top row of Fig 3 (reproduced from Fig 2C). To mimic the ICMS stimulus, we add a square-pulse input rate of 100 Hz lasting 50 ms to the external rate να(t) of a target group. An example of the spiking output of our network for α = a is shown in the top row of Fig 3, where the solid black bar below the graph shows the stimulus duration. Next, we filter the spike output of all neurons within a group using the synaptic filter ε(t) described in Methods, and add them to obtain population activity time-courses. Finally, we take these summed profiles and pass them through a sigmoidal non-linearity—(1 + exp(−a(x − b)))⁻¹, where x is the filtered activity—meant to represent the transformation of neural activity to EMG signals. Here, we assume that the hypothetical motoneurons whose target muscle EMG is recorded receive inputs only from a single neural group and
that network interactions are responsible for cross-group muscle activation. We label our modelled motor output EMG measurements by Mα, α ∈ {a, b, c}. We choose the nonlinearity parameters a = 2.5 and b = 5 to qualitatively reproduce the EMG responses seen in the experiment before spike-triggered conditioning: namely, well-resolved EMG responses Mα are observed only when the relevant MC group α is stimulated. The bottom row of Fig 3 shows Mα responses of each group, averaged over 15 trials each, when ICMS stimulation is delivered to a single group at a time. Panel A shows little cross-group response to ICMS stimulation before spike-triggered stimulation conditioning. However, after conditioning, stimulation of group a evokes an emergent response in the muscle activated from group b (see circles in Fig 3), as well as a small increase for group c. These features were both present in the original experiments (see Figure 2 in [10]) and are consistent with the synaptic strengths across groups before and after conditioning. In addition to EMG, the authors of [10] also measured the effects of ICMS using a manipulandum that recorded isometric torques produced by evoked wrist motion. In our model, the newly evoked EMG responses after conditioning agree with the observation that torques evoked from the recorded site (group a) typically changed toward those previously evoked from the stimulated site (group b). As such, from now on we equate an increase in mean synaptic strength J̄αβ between groups to an increase in functional connectivity. We conclude that our model satisfies the second experimental observation from [10] we set out to reproduce (point b. in Introduction). That is, a simple interpretation of evoked network activity—a filtered output of distinct neural group spiking activity—is consistent with the functional changes in muscle activation in ICMS protocols observed before and after conditioning. Up to now, we used toy activation profiles ν(t) in the form of truncated sinusoidal bumps to drive neural activity (Fig 1B). In this section, we modify our simple model to incorporate experimentally observed cross-correlation functions, whenever possible, in an effort to eliminate artificial activation commands and capture more realistic regimes. As a result, we no longer rely on numerical simulations of spiking activity, but rather on analytically derived averaged quantities to explore a wide range of conditioning regimes. There is no longer a need to specifically define the activation functions ν(t); we instead rely solely on the cross-correlation functions Ĉ(u) and C(u). Below, we aim to construct versions of these functions that are as close to experimental data as possible. Before discussing spiking statistics, we note an important advantage of only considering mean synaptic strengths J̄(t). For the spiking simulations shown above, we used networks of N = 60 neurons with probability of connection p = 0.3, which are considerably far from realistic numbers. Nevertheless, the important quantity for mean synaptic dynamics is the averaged summed strength of synaptic inputs that a neuron receives from any given group: p(N/3)J̄αβ. Notice that many choices of p and N can lead to the same quantity, therefore creating a scaling equivalence. Moreover, additional scaling of Jmax can further accommodate different network sizes. So far, we assumed that every neuron receives an average of 6 synapses from each group. If each of these synapses were at the maximal value Jmax = 0.1, then simultaneous spiking from a pre-synaptic group would increase the post-synaptic neuron's spiking probability by 60%, a number we consider reasonable. It remains unclear if the mechanisms described above are consistent with the experimentally observed relationship between stimulation delay (d†) and efficacy of spike-triggered conditioning in macaque MC. We investigated this by comparing efficacy, as measured by the percentage of torque direction change evoked by ICMS before and after conditioning [10], to relative synaptic strength changes in our model. This is motivated by the above demonstration that synaptic strengths are well correlated with the amplitude of evoked muscle activations in an ICMS experiment (see Fig 3 and point b. in Introduction). Nevertheless, the following comparison between model and experiment is qualitative, and meant to establish a correspondence of (delay) timescales only. We use the data originally presented in Figure 4 of [10], describing the shift in mean change in evoked wrist torque direction by ICMS of the recorded site (group a), as a function of stimulation delay d†. We plot the same data in Fig 4E, with the maximal change (in degrees) normalized to one. On the same graph, we plot the (J̄†ba − J̄ba)/Jmax vs. d† curve for the value of σ that offers the best fit (in L1-norm). This amounts to finding the best "σ-slice" of the graph in Fig 4D to fit the experimental data. We found that σ ≃ 17 ms gives the best correspondence. We reiterate that this comparison is qualitative. Nevertheless, the fit between the d†-dependence of experimentally observed functional changes and modelled synaptic ones is clear. As our model's spiking activity and STDP rule are calibrated with experimentally observed parameters (see Methods), this evidence suggests that our simplified framework is consistent with experiments. Importantly, σ = 17 ms is comparable to correlation timescales between functionally related MC neurons in macaque, as reported in [41] (see also [5, 40]) and discussed earlier. It was shown that such neurons have gaussian-like correlation functions with mean peak width at half height on the order of 22 ms, corresponding roughly to σ = 10 ms. While this is slightly lower than our estimate, we note that task-specific motion is known to introduce sharp correlations and that free behaving, rest and sleep states induce longer-range statistics [42]. The cross-correlation functions reported above were recorded during a stereotyped center-out reach task experiment, in contrast to the spike-triggered conditioning experiment, which was conducted over a wide range of states, including sleep, which may lead to longer mean cross-correlation timescales [10]. A prediction of our model is that spike-triggered conditioning restricted to periods of long timescale correlations in MC, such as during sleep [42], could lead to a more robust conditioning dependence on stimulation delays (see Discussion). This finding implies that our model successfully reproduces the third and last experimental observation from [10] (point c. from Introduction): using simplified cross-correlation functions of MC neural populations calibrated from experimental measurements, our model reproduces the relationship between the magnitude of plastic changes and the stimulation delay in a spike-triggered conditioning protocol. We now explore the effects of spike-triggered stimulation on collateral synaptic strengths, i.e., other than the targeted a-to-b pathway. For a wide range of parameters, there is little change other than for the a-to-b synapses. Indeed, when the cross-correlation width σ is moderate to large, spike-triggered stimulation has little effect on collateral connections, for any stimulation delay. This is in conjunction with the robustness of a-to-b changes discussed in the previous section (see Fig 4D). Nevertheless, some localized features arise when the cross-correlation width σ is small. Fig 5A shows color plots of these changes as a function of d† and σ, for the nine combinations of pre- and post-synaptic groups. We now review the mechanisms responsible for these indirect synaptic alterations. First, b-to-b synapses become depressed, regardless of stimulation delay, for short correlation timescales. This is due to the occurrence of synchronized population spiking produced by artificial stimulation which, because of our choice of dendritic and axonal delays (see Methods), promotes depression. Such synchronized spikes induce sharp δ-peaks in the network's cross-correlation (see Methods) and the combination of transmission delays shifts this peak toward the depression side of the STDP rule. When cross-correlations are narrow (i.e. small σ), their interaction with the STDP rule—which manifests in the integral in Eq (13)—is more sensitive to the addition of such δ-peaks, resulting in overall depression. In contrast, when cross-correlations are wider, the addition of δ-peaks has a smaller effect, since a wider range of correlations contributes to the integrated STDP changes. Second, the b-to-a synapses become potentiated for short delays d† when σ is small enough. This happens because of a combination of factors. When the recorded neuron in group a spikes, the population-wide spike artificially elicited in b quickly follows and travels to the b-to-a synapses. This means that the spike of a single neuron in a effectively reaches all neurons in a, with an effect amplified by the strength of many synapses, shortly after the neuron originally fired. When cross-correlations among a-neurons are wide, the effect of this mechanism is diluted, similarly to the b-to-b synapses discussed above. However, when neurons in a are highly synchronous, this short-latency feedback produces synaptic potentiation of the b-to-a synapses. Third, the synapses from both groups a and b onto the control group c are also potentiated when σ and d† are small enough. This can be explained in two parts and involves di- and tri-synaptic mechanisms. When the recorded neuron in a fires a spike, a population-wide spike is artificially evoked in b shortly after, which travels down to b-to-c synapses and elicits a response from neurons in c. Narrow cross-correlations imply that many spikes in a fall within a favorable potentiation window of spikes in c, thereby contributing to the potentiation of a-to-c synapses. To test this mechanism, we compute the relative synaptic changes due to spike-triggered stimulation in an altered network, where b-to-c synapses are inactivated (Jcb ≡ 0). Fig 5B shows the synaptic changes for the normal network (top) and this altered one (middle) for
fixed parameters d† = 5 ms and σ = 17 ms (the same σ that best fitted experiments, see Fig 4E). We can clearly see that without b-to-c synapses, a-to-c synapses do not potentiate under spike-triggered stimulation. In turn, the strengthening of a-to-c synapses implies that spikes in a are more likely to directly elicit spikes in c, thereby repeating the same process in a different order for b-to-c synapses. Note that without a-to-c synapses, the b-to-c synapses would not potentiate. Indeed, as was the case for b-to-b synapses, the combination of transmission delays does not conspire to promote direct potentiation following a population-wide synchronous spike. This is tested by inactivating the a-to-c synapses, which prevents the potentiation of b-to-c synapses, as shown in the bottom panel of Fig 5B. Finally, there is a moderate increase of the c-to-b synapses for stimulation delays d† from about 5 to 20 ms. These are observed because of a-to-c synapses promoting spikes in c that precede the stimulation of group b. This mechanism only works if neurons in a are tightly correlated, i.e., for small σ. Using the same process described above, we tested this mechanism by inactivating a-to-c synapses, which prevents the potentiation of c-to-b synapses. We reiterate that our specific choice of synaptic transmission delays may influence the magnitude of the changes described above. This is because many of the outlined mechanisms rely on well-timed series of events. See e.g. [18, 19, 30, 31] for more details about the impact of delays. We expect that tuning our model to experimentally measured delay distributions, as they become available, will help validate and/or improve our framework's predictive power. Together, these mechanisms form our second novel finding (point 2. from the Introduction): multi-synaptic mechanisms give rise to significant changes in collateral synapses during conditioning with short delays, and these cannot be attributed to STDP mechanisms directly targeted by the BBCI, instead emerging from network activity. We note that the correlation time-scale that best fits experiments (σ = 17 ms, see Fig 4E) falls within the parameter range where these effects occur. We have assumed that spike-triggered stimulation elicits population-wide synchronous spiking of all neurons in group b (Nstim in [10]). This is valid if the neural group b represents all the neurons activated by the stimulating electrode of a BBCI, but is not necessarily representative of the larger population of neurons that share external activation statistics due to a common input νb(t). Indeed, some neurons that activate in conjunction with those close to the electrode may be far enough from it that they do not necessarily spike in direct response to a stimulating pulse. Alternatively, selective activation of neurons within a group can also be achieved via optogenetic stimulation in a much more targeted fashion [43]. We now consider the situation in which only a certain proportion of neurons from group b is activated by the spike-triggered stimulus. We denote the stimulated subgroup by b† and the unstimulated subgroup by b∘. All neurons in group b receive the same external rates νb(t) as before, but only a few (solid red dots in Fig 6A) are stimulated by the BBCI. Let Nb = N/3 be the number of neurons in group b and the parameter ρ, with 0 ≤ ρ ≤ 1, represent the proportion of stimulated neurons in b.
The sizes of groups b† and b∘ are given by Nb† = ρNb and Nb∘ = (1 − ρ)Nb, respectively. We now adapt our analytical averaged model (23) to explore the effect of stimulation on subdivided synaptic equilibria. We verified that the analytical derivations used below match the full spiking network simulations as before. Both subgroups of b receive the external rate νb(t), but only one receives spike-triggered stimulation. These changes are captured in the averaged analytical derivations by tracking the number of neurons in each sub-group in the averaging steps leading to Eqs (22) and (23) accordingly—replacing N/3 by Nb† and Nb∘ where necessary. This way, we obtain subgroup-specific synaptic averages (e.g. J̄b†a). Fig 6B shows the group-averaged connectivity strength between group a (the recording site) and both subgroups of b, before and after spike-triggered stimulation. External cross-correlations Ĉ(u) are as in Fig 4B with σ = 20 ms, and the stimulation delay is set at d† = 30 ms. The proportion of stimulated neurons in b is set to ρ = 0.5. The bottom of the same panel shows the normalized changes of mean synaptic strengths due to spike-triggered stimulation. As established for the original network (see Fig 2B), the biggest change occurs for synapses from a to the subgroup that is directly stimulated (b†). However, for subgroup b∘ we see a noticeable change in its incoming synapses from group a, in contrast to the synapses of other unstimulated groups (c), which do not appreciably change. This means that sharing activation statistics with stimulated neurons is enough to transfer the plasticity-inducing effect of conditioning to a secondary neural population. Next, we investigate how this phenomenon is affected by the proportion ρ of neurons in b that receive stimulation. Fig 6C shows the subgroup-averaged normalized changes of synapses from group a to subgroups b† and b∘, as ρ varies between 0 and 1. When more neurons get stimulated, the transferred effect on the unstimulated group is amplified. This means that the combined outcome on the entirety of group b grows even faster—supralinearly—with ρ, as shown in Fig 6C, where the combined b-averaged changes in synaptic strength, J̄ba = ρJ̄b†a + (1 − ρ)J̄b∘a, are plotted as a function of ρ. In summary, this phenomenon represents our third and final finding (point 3. in Introduction). Our model shows that neurons not directly stimulated during spike-triggered conditioning can be entrained into artificially induced plasticity changes by a subgroup of stimulated cells, and that the combined population-averaged effect grows supra-linearly with the size of the stimulated subgroup. In this study, we used a probabilistic model of spiking neurons with plastic synapses obeying a simple STDP rule to investigate the effect of a BBCI on the connectivity of recurrent cortical-like networks. Here the BBCI records from a single neuron within a population and delivers spike-triggered stimuli to a different population after a set delay. We developed | Introduction, Results, Discussion, Methods | Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer-Interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites eventually strengthen. It was also found that effective spike-stimulus delays are consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules, suggesting that STDP is key to drive these changes. However, the impact of STDP at the level of circuits, and the mechanisms governing its modification with neural implants, remain poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. Our model successfully reproduces key experimental results, both established and new, and offers mechanistic insights into spike-triggered conditioning. Using analytical calculations and numerical simulations, we derive optimal operational regimes for BBCIs, and formulate predictions concerning
the efficacy of spike-triggered conditioning in different regimes of cortical activity. | Recent developments in Bidirectional Brain-Computer Interfaces (BBCI) not only allow the reading out of neural activity from cortical neurons, but also the delivery of electrical signals. These drive neural dynamics and shape synaptic plasticity, thus opening the possibility of engineering novel neural circuits, with important applications for clinical treatments of spinal cord injuries and stroke. However, synaptic changes in recurrent networks of neurons are hard to predict: they involve complex dynamic mechanisms on multiple temporal and spatial scales. Based on experiments, we develop a computational network model with plastic synapses that serves as a predictive tool for BBCI protocols. We show how the efficacy of BBCIs is influenced by cortical activity statistics and we propose state-based stimulation strategies for driving artificially-induced synaptic plasticity. | action potentials, medicine and health sciences, neural networks, nervous system, membrane potential, electrophysiology, neuroscience, synaptic plasticity, bioassays and physiological analysis, muscle electrophysiology, neuronal plasticity, research and analysis methods, developmental neuroscience, computer and information sciences, animal cells, behavior, electrophysiological techniques, behavioral conditioning, cellular neuroscience, cell biology, anatomy, synapses, electromyography, physiology, neurons, biology and life sciences, cellular types, neurophysiology | null |
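As an illustrative coda to the Results above, a minimal sketch (assumed Python, not the authors' code) of the modelled EMG readout: binned group spike counts are passed through a causal synaptic filter and the summed activity x through the sigmoid (1 + exp(−a(x − b)))⁻¹ with a = 2.5 and b = 5 from the text. The filter time constant and the example burst are hypothetical illustration values.

```python
import numpy as np

def synaptic_filter(spike_counts, dt=1.0, tau=5.0):
    """Causal exponential filtering of a binned spike-count train.

    tau (ms) is an assumed illustration value, not a calibrated parameter.
    """
    out = np.zeros(len(spike_counts), dtype=float)
    acc = 0.0
    decay = np.exp(-dt / tau)
    for i, s in enumerate(spike_counts):
        acc = acc * decay + s
        out[i] = acc
    return out

def emg_readout(x, a=2.5, b=5.0):
    """Sigmoidal transformation of filtered population activity to EMG."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

# Example: a brief hypothetical population burst drives the EMG nonlinearity.
spikes = np.zeros(50)
spikes[10:15] = 3            # 3 spikes per bin for 5 bins
activity = synaptic_filter(spikes)
emg = emg_readout(activity)  # near 0 at rest, saturates during the burst
```

With these parameters the readout is essentially silent at baseline and saturates only when filtered activity exceeds the threshold b, which is what produces the well-resolved, group-specific EMG responses described above.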
1,172 | journal.pcbi.1000914 | 2,010 | Regression Analysis for Constraining Free Parameters in Electrophysiological Models of Cardiac Cells | Mathematical modeling has become an increasingly popular and important technique for gaining insight into biological systems, both in physiology, where models have a long history [1, 2], and in biochemistry and cell biology, where quantitative approaches have gained traction more recently [3, 4]. However, as new models proliferate and become increasingly complex, analysis of parameter sensitivity has emerged as an important issue [5, 6]. It is clear that to understand a model requires not only knowing the output generated using the published "baseline" set of parameters, but also some knowledge of how changes in the model's parameters affect its behavior. During the development of a mathematical model, the choice of parameters is a critical step. Parameters are constrained by data whenever this is possible, but direct measurements are frequently lacking. Often, however, a situation exists in which values for many parameters are unknown, but a considerable amount is known about the system's emergent phenomena. In such cases, experienced researchers narrow down the values of the unknown model parameters based on how the model "ought to behave." Parameter sets that generate grossly unrealistic output are rejected, whereas those that produce reasonable output are tentatively accepted until they fail in some important respect. The emergent phenomena considered in this process can be switching or oscillatory behavior in the case of biochemical signaling models [3, 4], or outputs such as action potential (AP) and calcium transient morphology in models of ion transport [7–10]. Computational studies, however, have revealed the limitations of this intuition-based procedure. In particular, work in theoretical neuroscience has shown that when a single output such as neuronal firing rate is considered, many different combinations of model parameters can generate equivalent behavior [11–14]. This general problem is illustrated in Figure 1A, which shows results from a popular mathematical model of the human ventricular action potential, that of ten Tusscher, Noble, Noble, and Panfilov (TNNP; [15]). Random variation of model parameters revealed that completely different parameter combinations could produce virtually identical AP morphology. This result is analogous to studies by Prinz et al.
examining firing rate in neuronal cell models [13, 14]. However, an interesting aspect of the simulation is as follows. The two hypothetical cells, although generating nearly identical APs under normal conditions, exhibited intracellular Ca2+ transients that differed with respect to both amplitude and kinetics (Figure 1B). Theoretically, then, a justifiable choice between these two parameter combinations, while impossible based only on the results shown in Figure 1A, could be made by considering the additional information in Figure 1B. Such distinctions are frequently made by researchers with experimental expertise, who either accept or reject models based on how well they recapitulate a range of observed phenomena. This process, although somewhat arbitrary and potentially subject to bias, nonetheless reflects sound reasoning, since a "good" model should successfully reproduce many biological behaviors. Based on results such as those shown in Figure 1, we sought to formalize and place on a sound mathematical footing the process of choosing parameters by comparing model output with several sets of data. In particular, our hypothesis was that examining a single model output, such as action potential duration (APD), would fail to constrain parameters, but success would be more likely if the number of physiological outputs was similar to the number of free model parameters. We demonstrate that this is true in the case of the TNNP model [15] through two methods. The first, an extension of the use of multivariable regression for parameter sensitivity analysis [16], consists of inverting a regression matrix and then using this to calculate the changes in model parameters required to generate a given change in outputs. The second method employs Bayes's theorem to estimate the probabilities that model parameters lie within certain ranges. The results, which are generally applicable across different models and different biological
systems , can be of great use when building new models , and also provide new insights into the relationships between model parameters and model results ., The overall hypothesis of our study was that if several physiologically-relevant characteristics of a models behavior were known , this information would be sufficient to constrain some or all of the models parameters ., We tested this idea using two approaches: one based on multivariable regression and the other based on Bayess theorem ., We began by generating a database of candidate models ., The parameters that define maximal conductances and rates of ion transport in the TNNP model 15 were varied randomly , and several simulations , defining how the candidate model responded to altered experimental conditions , were performed with each new set of parameters ., In general , the simulations reflected experimental tests commonly performed on ventricular myocytes , such as determining the threshold for excitation or changing the rate of pacing ., For the first approach , the results of these simulations were collected in “input” and “output” matrices X and Y , respectively ., Each column of X corresponded to a model parameter , and each row corresponded to a candidate model ( n = 300 ) ., The columns of Y were the physiological outputs extracted from the simulation results , such as action potential duration ( APD ) and Ca2+ transient amplitude ., Complete descriptions of the randomization procedure and simulation protocols are provided in the Methods and Text S1 ., Outputs are listed in Table 1 and described in detail in Text S1 ., Multivariable regression techniques were used to quantitatively relate the inputs to the outputs ., In the “forward problem , ” a matrix of regression coefficients B was derived such that the predicted output Ŷ = XB was a close approximation of the actual output Y . 
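The forward-regression step just described can be sketched in a few lines of numpy ., Everything here — the number of candidate models , the log-normal parameter scaling , and the linear stand-in for the actual TNNP simulations — is a hypothetical illustration , not the published pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_params, n_outputs = 300, 16, 32

# Randomized maximal conductances: one row per candidate model
# (log-normal scaling around the baseline parameter values).
X = rng.lognormal(mean=0.0, sigma=0.3, size=(n_models, n_params))

# Stand-in for the simulator: each output is a linear function of the
# log-scaled parameters plus a little noise (purely illustrative).
W = rng.normal(size=(n_params, n_outputs))
Y = np.log(X) @ W + 0.01 * rng.normal(size=(n_models, n_outputs))

def zscore(M):
    """Mean-center each column and scale it by its standard deviation."""
    return (M - M.mean(axis=0)) / M.std(axis=0)

Xz, Yz = zscore(np.log(X)), zscore(Y)

# Forward regression: B maps parameter changes to output changes.
B, *_ = np.linalg.lstsq(Xz, Yz, rcond=None)
Y_hat = Xz @ B

# Goodness of fit of the predicted outputs.
r2 = 1.0 - ((Yz - Y_hat) ** 2).sum() / (Yz ** 2).sum()
```

Because the toy “simulator” is linear in the log-scaled parameters , the fit is nearly perfect; with real simulation outputs the regression is only an approximation .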
This method has recently been proven useful for characterizing the parameter sensitivity of electrophysiological models 16 ., We reasoned that a similar approach could be used to address the question: if the measurable physiological characteristics of a cardiac myocyte are known , can this information be used to uniquely specify the magnitudes of the ionic currents and Ca2+ transport processes ?, Specifically , we hypothesized that if: , 1 ) Ŷ = XB was a close approximation of the true output Y , and , 2 ) B was a square matrix of full rank , then Xpredicted = YB−1 should be a close approximation of the true input matrix X . This argument is illustrated schematically in Figure 2 ., Figure 3A demonstrates the accuracy of the reverse regression method ., For four chosen conductances , the scatter plots show the “actual” values , generated by randomizing the baseline parameters in the published TNNP model , versus the “predicted” values calculated with the regression model ., The large R2 values ( >0 . 9 ) indicate that the predictions of the regression method are quite accurate ., Of the 16 conductances in the TNNP model , 12 could be predicted with R2>0 . 7 ., The four that were less well-predicted were the Na+ background conductance ( GNab ) , the rapid component of the K+ delayed rectifier conductance ( GKr ) , the sarcolemmal Ca2+ pump ( KpCa ) and the second SR Ca2+ release parameter ( Krel2 ) ., To verify that these encouraging results were not specific to the TNNP model , we performed similar analyses on additional models , the human ventricular myocyte model of Bernus et al . 
17 , and the “Phase 1” ventricular cell model of Luo and Rudy 18 ., In either case ( Figures S3 and S4 , respectively ) , the reverse regression was highly predictive of most parameters , indicating that this approach is generally applicable ., The outputs used for these analyses , listed in Text S1 , differed somewhat from those used for the TNNP simulations because the Bernus et al . 17 and Phase 1 Luo and Rudy 18 models are relatively simple and do not consider intracellular Ca2+ handling in detail ., Figure 3B illustrates how the quantity and identity of the outputs in Y affected the accuracy of the predictions ., Bar graphs show R2 values for prediction of each model parameter obtained by performing the reverse regression in three ways:, 1 ) using all 32 outputs ( blue ) ,, 2 ) matrix inversion ( green ) , with the 16 best outputs as identified by the output elimination algorithm ( see Methods ) , and, 3 ) using only the 16 rejected outputs ( red ) ., The R2 values computed using the 16 best outputs were virtually identical to those obtained when all 32 outputs were used whereas R2 values for most conductances were substantially lower when only the 16 rejected outputs were included ., These tests validate the algorithm which selected the outputs for matrix inversion ., Moreover , since the 16 best outputs performed essentially as well as the full set of 32 outputs , this result implies that the model outputs were not fully linearly independent , and the 16 rejected outputs contained redundant information ., Figure 4 displays , as heat maps , the coefficients for both the forward and reverse regression problems ., The former indicate how model parameters influence outputs , whereas the latter specify how changes in model outputs restrict the parameters ., Parameter sensitivities for selected outputs and conductances are shown as bar graphs to the right ., As previously argued for the case of forward regression 16 , these parameter sensitivities help to 
illustrate the relationships between parameters and outputs ., For instance , forward regression coefficients indicate that diastolic Ca2+ is determined primarily by a balance between SR Ca2+ uptake and SR Ca2+ leak , with other parameters making only minimal contributions ., Conversely , for reverse regression , the maximal conductance of L-type Ca2+ current ( GCa ) depends on many model outputs including action potential duration , Ca2+ transient amplitude , and , in particular , how these are altered with changes in extracellular potassium ., This result underscores the centrality of intracellular Ca2+ regulation to many cellular processes ., The results shown in Figure 3 demonstrated that most of the model parameters used to generate the dataset could be reconstructed using the reverse regression procedure ., To provide evidence that this procedure may be more broadly useful , we applied the method to a novel test case by performing simulations with the most recent version of the Hund & Rudy canine ventricular model 19 ., Specifically , we considered changes in seven parameters corresponding to the condition of heart failure , as previously modeled by Shannon et al 20 ., Figure 5A shows that implementing these parameter changes dramatically alters both AP shape and Ca2+ transient amplitude ., After performing simulations under a range of conditions with both normal , healthy cells and pathological , failing cells ( see Methods and Text S1 ) , we asked how well the reverse regression matrix could calculate the parameter changes in the failing cells ., We found that this method constrained 5 out of 7 parameters with excellent accuracy , while changes in two parameters ( GKs and Kleak ) were overestimated somewhat by the regression algorithm ., This novel test case validates our approach and suggests that it may indeed prove a useful method for developing new models based on experimental measurements ., The second approach for constraining model parameters is 
based on Bayess theorem ., In statistics , this celebrated result describes the conditional probability of one event given another in terms of:, 1 ) the conditional probability of the second event given the first , and, 2 ) the marginal probabilities of the two events:In this context , we consider event A that a model conductance lies within a given range , and event B that a model output is within a particular range ., When many simulations are performed with randomly varying parameters , the probability P ( A ) is fixed by the user , while the probabilities P ( B ) and P ( B|A ) can be estimated from the results ., This allows us to approximate P ( A|B ) , which reflects how well a model parameter is constrained by a particular simulation result ., Since our hypothesis was that multiple outputs needed to be considered to constrain model parameters , we were interested in extensions of Bayess theorem to more than two variables , e . g . P ( A|B∩C ) , where B and C are events related to two model outputs ., For instance , B and C could represent , respectively , that APD and Ca2+ transient amplitude are within particular ranges ., If the conditional probability of the parameter increases as additional outputs are considered , this validates the thinking underlying the approach ., The application of this strategy to our data set is illustrated in Figure 6 ., The two rows of histograms display distributions of GNa and GCa , which are typical of the 16 model parameters considered ., The leftmost histogram in each row shows the distribution of conductance values in the entire population , and the remaining columns show conductance values for sub-populations that satisfy constraints on one or more model outputs ., Successive columns from left to right show distributions with additional model outputs considered , as noted ., In either case , the distributions become progressively narrower , and the conditional probability is unity once 3 outputs are considered ., This 
procedure also provides insights into which specific outputs provide the greatest information about particular model parameters ., For instance , the distribution of GNa given a certain range of APD appears similar to the overall distribution of GNa because these two variables are not strongly correlated ( i . e . P ( B|A ) ≈ P ( B ) ) ., In contrast , inclusion of Vpeak , an output highly dependent on GNa , narrows the distribution significantly ., In the case of GCa , restricting APD to a particular range makes the distribution narrower , which is to be expected given the relatively strong correlation between the parameter and the output ., Thus , an approach based on Bayess theorem also supports the idea that model parameters can successfully be constrained if multiple model outputs are considered ., In this study we have presented two methods that can be used to constrain free parameters in complex mathematical models of biological systems ., The utility of these methods was demonstrated through simulations with models of ventricular myocytes 15 , 17–19 , but with modifications the strategies could also be applied to other classes of models ., For instance , these methods could be used to constrain parameters in models of the sinoatrial node 21 , 22 , but in this case more useful outputs would be metrics such as inter-beat interval , diastolic depolarization rate , and maximum diastolic potential 23 ., Our results show that model parameters are difficult to specify uniquely using a limited number of model outputs as “targets , ” but parameters can be constrained successfully if numerous model outputs are simultaneously considered 24 ., The premise underlying this strategy is therefore similar to ideas advanced by Sethna and colleagues in discussions of model “sloppiness” 25 , 26 ., Even if individual parameters are largely unknown or cannot be measured with precision , predictive models can still be built if care is taken to match the models output to diverse 
sets of experimental data ., The reverse regression method uses matrix multiplication to predict a set of parameters , in this case ionic current maximal conductances , that are most likely to recapitulate a given set of model outputs ., In a recent paper 16 , parameter randomization followed by regression was used to quantify parameter sensitivities in electrophysiological models ., The method presented here is an extension of this: we added outputs so that the regression matrix B could be inverted ., Each element of this inverted matrix , B−1 , therefore indicates how much a physiological output contributes to the prediction of a particular input conductance ( Figure 4 ) ., In experimental studies , metrics derived from data are frequently used as indirect semi-quantitative surrogates of ionic conductances ., For instance , conventional wisdom holds that action potential upstroke velocity reflects the availability of Na+ current 27 , and the prominence of the Phase 1 “notch” indicates the contribution of transient outward K+ current 28 , 29 ., Our reverse regression method is simply a mathematically more formal extension of this general strategy , whereby every output can conceivably influence the prediction of each model parameter ., When applied to the simulations with the TNNP model , reverse regression was able to generate accurate predictions of most conductances or rates of ion transport in the model ( R2>0 . 
7 for 12 of 16 parameters ) ., Of the 4 parameters that were not predicted accurately , two , namely Na+ background conductance ( GNab ) and the sarcolemmal Ca2+ ATPase ( KpCa ) are considered to be relatively unimportant for normal cellular physiology ., The parameter Krel2 ( crel in the original TNNP model ) , was also predicted poorly , most likely because it is partially redundant with the parameter Krel1 ( arel in the original TNNP model ) , which was well constrained by the analysis ., The surprise in our simulations was the poor prediction of the rapid component of the delayed rectifier current , GKr , since this current contributes to AP repolarization 30 , 31 , and block of IKr is the primary cause of drug-induced long QT syndrome 32 , 33 ., It should be noted , however , that our prediction of the conductance corresponding to the slow delayed rectifier , GKs , was accurate ., This suggests that in the TNNP model , these conductances serve similar functions and perhaps compensate for each other ., A similar conclusion can be drawn from the simulations in which we used the reverse regression procedure to reconstruct the parameters corresponding to heart failure in the Hund & Rudy 19 model ( Figure 5 ) ., Five out of the seven parameters altered in the heart failure cell were predicted accurately by the reverse regression procedure ., The two that were not predicted accurately , Kleak , and GKs , have relatively minor effects in the Hund & Rudy model , although these are more important in some other models ., Thus , these methods are not only useful for constraining parameters; they can provide novel insight into the relative importance of particular model parameters in determining physiological function ., Two important factors influencing the accuracy of the conductance predictions are the number and quality of the outputs ., Mathematically , inversion of the regression matrix B requires that the columns be linearly independent , which in turn requires 
independence of the columns of Y , i . e . the outputs ., In contrast , linear dependence would imply that the outputs contain redundant information ., Since we did not know a priori which outputs would be informative and which would be partially redundant , we implemented an algorithm to remove outputs sequentially and find a set of 16 that yielded the best results ., This resulted in the unexpected elimination of seemingly important outputs such as the maximal upstroke velocity , a metric closely related to Na+ conductance ., However , it is important to note that this result does not argue against the usefulness of upstroke velocity as a metric , it merely indicates that the information contained in this output has already been captured by the 16 that were selected ., These considerations suggest a future application of these techniques , besides their obvious utility in the construction of new mathematical models ., Since the regression analyses provide insight into which physiological measures are independent and which are partially redundant , these types of simulation studies can be used to prioritize experiments ., Experimental studies consume the valuable resources of reagents , animals , and person-hours , and computational approaches that could reliably distinguish between more informative and less informative experiments would therefore be quite valuable ., For example , the pacing cycle length at which a myocyte begins to exhibit APD alternans ( BCLalt ) is an important quantity related to the arrhythmogenic potential of the cardiac substrate 34 , 35 ., Determining this threshold , however , requires time-consuming experiments in which myocytes must be paced at many different rates ., This output was rejected by our elimination algorithm , suggesting that , at least in the TNNP model , the information provided by this difficult experiment is not different from that contained in other , perhaps simpler , measurements ., Our current work is focused on 
formalizing these ideas and developing methods to quantify the relative information content of different experimental measurements ., We should note that the outputs chosen for our analysis are physiologically meaningful metrics that are measured routinely in isolated cardiac myocytes ., We purposely excluded measures that quantify how cellular behavior changes after application of a pharmacological agent ., Since the explicit purpose of adding a drug is often to deduce the importance of the drugs primary target , we felt that including these metrics would , for an existing model , make the parameter constraint problem fairly trivial ., In future studies , however , including these outputs will undoubtedly improve the predictive power of these methods ., Similarly , the addition of more columns to the matrix Y corresponding to results from voltage-clamp experiments should also improve the accuracy of the method ., These extensions will likely be necessary if maximal conductances are essentially unknown , or if ionic current kinetic parameters are also to be constrained ., In the field of cardiac electrophysiology , a few modeling studies have examined issues of parameter sensitivity 6 , 16 , 36 , 37 , parameter estimation 38 , 39 , and model identifiability 40 ., For example , Fink and Noble recently assessed the adequacy of whole-cell voltage clamp records for uniquely determining parameters in models of ion channel gating 40 ., These analyses suggested that optimized voltage clamp protocols might be more efficient for parameter identification than protocols currently used in experiments ., More studies that address these sorts of issues have been performed in computational neuroscience ., For instance , analogous to the results shown in Figure 1A , several studies have shown that different combinations of model conductances can produce seemingly identical behavior , either in isolated neurons 11 , 13 or in models of small neuronal networks 14 ., Olypher and 
Calabrese then generalized this result by demonstrating that , close to a particular location in parameter space , infinitely many parameter combinations can produce the same level of activity as the original location , and these authors derived 2×2 sensitivity matrices to demonstrate these compensatory changes 41 ., Our reverse regression approach is essentially an extension of this idea to multiple dimensions , with the implicit assumption that considering additional linearly-independent model outputs will increase the likelihood of determining parameters uniquely ., Given that parameters in neuronal models cannot be uniquely specified using only a metric such as firing rate , a few studies have combined genetic algorithms with more sophisticated data-matching strategies such as phase-plane analysis 11 or multiple objective optimization 42 ., Our methods offer both advantages and disadvantages compared with these alternative strategies ., The primary advantage here is that reverse regression is simple and intuitive , and the outputs considered are well-defined metrics that are readily obtainable in the laboratory ., We can therefore easily relate , in a way that other techniques do not allow , the observable characteristics of the cardiac myocyte to the membrane densities of the important ion channels ., The main drawbacks of our approach are:, 1 ) that we only perform a local search around the baseline model and, 2 ) that we assume a linear relationship between changes in parameters and changes in outputs ., While linear approximations to these input-output relationships have been shown to work well in cardiac models 16 , particularly when conductances are expressed in log-transformed units , this assumption may not hold in all classes of models 43 ., This limitation is evident in the simulations shown in Figure 6 in that:, 1 ) two parameters were poorly predicted by the regression model; and, 2 ) in these simulations , the parameter search was constrained to 
only seven possibilities rather than allowing any model parameter to contribute to the phenotype ., Future studies will likely improve on these strategies and combine aspects of several approaches to refine methods for determining parameters in complex models of biological processes ., In summary , we have presented new methods for constraining free parameters in mathematical models , and demonstrated their utility through analyses of a common model of the ventricular myocyte ., The approaches we describe have potentially broad implications ., Analysis tools such as these can be used to obtain new insight into the relationships between model parameters , model outputs , and experimental data ., The ideas offer hope that , even if some model parameters cannot be directly measured , a close comparison of data to model output can still discriminate between possibilities and produce a model with strong predictive power ., This computational study aimed to extend the use of regression to develop methods for constraining free parameters in mathematical models ., The ideas were tested through simulations using the TNNP model 15 of the human ventricular action potential ( described in more detail in the Supporting Information ) ., First , regression was used to derive a matrix ( B ) whose elements indicate how changes in input parameters , namely maximal ionic conductances , affect physiologically-meaningful model outputs ., The regression matrix was then inverted , thereby deriving a new matrix ( B−1 ) that specifies the ionic conductances required to produce a given set of model outputs ., In the first stage , the input matrix X was generated by randomly scaling 16 parameters in the TNNP model ., A total of 300 random sets of parameters were generated such that X had dimensions 300×16 ., To compute the output matrix Y , several simulations were performed with each of the 300 models defined by a given parameter set ., These simulations reflected standard 
electrophysiological tests such as the response of the myocyte to changes in pacing rate or extracellular potassium concentration ., The calculation of some of these outputs is illustrated in Figure S1 ., The 32 outputs computed from these simulations , listed in Table 1 , ranged from straightforward measures such as action potential duration ( APD ) and Ca2+ transient amplitude to more abstract metrics such as the minimum cycle length required to induce APD alternans 34 ., The 16×32 matrix B relates the inputs to the outputs such that Ŷ = XB is a close approximation of the true output matrix Y . To allow for inputs and outputs expressed in different units to be compared , values in X and Y were converted into Z-scores – i . e . each column was mean-centered and normalized by its standard deviation ., The results of the “forward” regression performed in the first stage are shown in Figure S2 ., The second stage of the computational experiment aimed to determine if the input matrix X could be inferred , assuming the output matrix Y was known ., Since Ŷ = XB ≈ Y , we reasoned that YB−1 should be a close approximation of X , provided that B is an invertible matrix ., We performed an iterative procedure to determine the 16 most appropriate outputs for this matrix inversion ., First , with the full 300×32 matrix Y , “reverse regression” was performed to derive a matrix B′ such that YB′≈X ., We then removed each of the columns of Y and performed the reverse regression with the remaining 31 outputs ., The output whose removal caused the smallest change in the prediction of X ( quantified by R2 ) was deemed the least essential and was removed permanently ., This procedure was repeated to reduce the number of outputs from 31 to 30 , etc . 
, until Y had dimensions 300×16 ., A further set of simulations was performed with the 2008 version of the Hund and Rudy model of the canine action potential 19 ., In these simulations , we sought to determine whether changes in model parameters in heart failure could be determined using the reverse regression procedure ., We simulated the changes in parameters used by Shannon et al to simulate heart failure in their model of the rabbit action potential 20 ., This involved alterations to seven model parameters: GK1 , GKs , Gto , KNCX , KRyR , KSERCA , and Kleak ., Simulations were performed under three conditions: normal extracellular K+o ( 5 . 4 mM ) , hypokalemia ( K+o = 3 mM ) and hyperkalemia ( K+o = 8 mM ) ., In these simulations , a total of 33 model outputs were calculated to constrain the parameters ( see Text S1 for full list ) ., Reverse regression was performed to map the 33 outputs from the simulated failing myocyte to the predicted 7 parameter changes ., In the second approach , based on Bayess theorem , we were interested in estimating P ( A|B ) from P ( B|A ) , P ( A ) , and P ( B ) ., In this context , A is that a parameter is in a particular range , and B is that a model output is in a specified range ., To estimate P ( B|A ) from the set of 300 simulation results , we sorted the values in each column of X and Y , then computed the percentile ranges ., This allowed us to easily select , for instance , 10% of the values of a particular output centered around a given value ., To generate histograms such as those shown in Figure 4 , we first plotted the distribution of all the tested values of a given conductance ., Then we selected the conductance values corresponding only to those trials for which APD fell within a particular range , and generated the histogram of this set ., From this subset of conductances , we then selected the conductance values corresponding to those trials for which Vrest was in a certain range , etc ., To 
allow for visual comparison , each histogram was normalized to the total number of values of the subset ., To ensure that this procedure found a set of conductances that actually existed in the data set , we first identified the “best” trial for which the difference between Y and Ŷ was minimal ., The output ranges used to select the subsets of conductances all represented deviations of ±5% around these values ., A bundle containing the Matlab™ code used to generate the results presented in the manuscript has been uploaded as Protocol S1 in the Supporting Information . | Introduction, Results, Discussion, Methods | A major challenge in computational biology is constraining free parameters in mathematical models ., Adjusting a parameter to make a given model output more realistic sometimes has unexpected and undesirable effects on other model behaviors ., Here , we extend a regression-based method for parameter sensitivity analysis and show that a straightforward procedure can uniquely define most ionic conductances in a well-known model of the human ventricular myocyte ., The models parameter sensitivity was analyzed by randomizing ionic conductances , running repeated simulations to measure physiological outputs , then collecting the randomized parameters and simulation results as “input” and “output” matrices , respectively ., Multivariable regression derived a matrix whose elements indicate how changes in conductances influence model outputs ., We show here that if the number of linearly-independent outputs equals the number of inputs , the regression matrix can be inverted ., This is significant , because it implies that the inverted matrix can specify the ionic conductances that are required to generate a particular combination of model outputs ., Applying this idea to the myocyte model tested , we found that most ionic conductances could be specified with precision ( R2 > 0 . 
77 for 12 out of 16 parameters ) ., We also applied this method to a test case of changes in electrophysiology caused by heart failure and found that changes in most parameters could be well predicted ., We complemented our findings using a Bayesian approach to demonstrate that model parameters cannot be specified using limited outputs , but they can be successfully constrained if multiple outputs are considered ., Our results place on a solid mathematical footing the intuition-based procedure simultaneously matching a models output to several data sets ., More generally , this method shows promise as a tool to define model parameters , in electrophysiology and in other biological fields . | Mathematical models of biological processes generally contain many free parameters that are not known from experiments ., Choosing values for these parameters , although an important step in the construction of realistic computational models , is frequently performed using an ad hoc approach that is a combination of intuition and trial and error ., We have developed a novel method for constraining free parameters in mathematical models based on the techniques of linear algebra ., We demonstrate this methods utility through simulations with a model of a human heart cell ., The underlying premise is that if the model is only asked to recapitulate one or a few biological behaviors , the values of the parameters may be ambiguous; however , if the model must simultaneously match many features of experimental data , the free parameters can be determined uniquely ., The results demonstrate that if computational models are to be realistic , they must be compared with several sets of data at the same time ., This new method should serve as a valuable tool for investigators interested in developing realistic mathematical models of biological processes . 
| cardiovascular disorders/arrhythmias, electrophysiology, and pacing, computational biology/systems biology, physiology/cardiovascular physiology and circulation | null |
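The histogram-narrowing procedure from the Methods of the row above — conditioning a conductances distribution on successively more outputs held within ±5% windows — can be mimicked with synthetic data ., The conductance–output relations below are invented purely for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical scaled conductances for 300 candidate models.
g_na = rng.uniform(0.5, 1.5, n)
g_ca = rng.uniform(0.5, 1.5, n)

# Invented output relations: APD depends mostly on g_ca,
# peak voltage depends mostly on g_na.
apd = 200.0 + 60.0 * g_ca - 5.0 * g_na + rng.normal(0.0, 2.0, n)
v_peak = 20.0 + 25.0 * g_na + rng.normal(0.0, 1.0, n)

def within(x, target, frac=0.05):
    """Select trials whose output lies within +/- frac of the target value."""
    return np.abs(x - target) <= frac * np.abs(target)

sd_full = g_na.std()                      # P(A): unconstrained spread of g_na

sel_apd = within(apd, np.median(apd))     # condition on APD alone
sd_apd = g_na[sel_apd].std()              # barely narrows g_na

sel_both = sel_apd & within(v_peak, np.median(v_peak))
sd_both = g_na[sel_both].std()            # adding Vpeak narrows it sharply
```

As in Figure 6 of the paper , an output weakly correlated with the conductance ( here APD vs . g_na ) leaves its distribution almost unchanged , while an output strongly dependent on it ( peak voltage ) narrows the distribution markedly .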
1,071 | journal.pcbi.1000691 | 2,010 | Temporal Sensitivity of Protein Kinase A Activation in Late-Phase Long Term Potentiation | Synaptic plasticity , the activity-dependent change in the strength of neuronal connections , is a cellular mechanism proposed to underlie memory storage ., One type of synaptic plasticity is long term potentiation ( LTP ) , which typically is induced by brief periods of high-frequency synaptic stimulation ., LTP displays physiological properties suggestive of information storage and has been found in all excitatory pathways in the hippocampus , as well as other brain regions ., Late-phase LTP ( L-LTP ) is induced by 4 trains of stimulation separated by either 3–20 sec ( massed ) or 300–600 sec ( spaced ) , lasts more than 3 hours , and requires protein synthesis 1 ., Interestingly , the temporal spacing between successive trains regulates the PKA-dependence of L-LTP 2 , 3 ., A spaced protocol ( using a 300 sec inter-train interval ) requires PKA , whereas massed protocols ( using 20 sec and 3 sec intervals ) induce L-LTP that is independent of PKA ., The mechanisms underlying this temporal sensitivity of PKA dependence are not understood ., PKA is composed of two regulatory subunits bound to two catalytic subunits that form a tetrameric holoenzyme ., Sequential and co-operative binding of four cAMP to these regulatory subunits results in the release of two catalytic subunits 4 , 5 ., In the hippocampus , cAMP is produced by adenylyl cyclase types 1 and 8 , which are activated by calcium and Gsα coupled receptors 6 ., Consistent with this pathway of reactions leading to PKA , activation of dopaminergic and glutamatergic pathways is required for the induction of L-LTP in hippocampal CA1 pyramidal neurons 7–11 ., NMDA receptor activation also leads to stimulation of the calcium sensitive isoform of adenylyl cyclase 12 ., Because the induction of L-LTP involves complex networks of intracellular signaling pathways , computational models have 
been developed to gain an understanding of LTP 13–17 ., Several of these studies , which specify the model using ordinary differential equations , explain the requirement for high frequency stimulation ( e . g . 100 Hz for LTP ) versus low frequency stimulation ( e . g . 1 Hz for long term depression ) in terms of the characteristics of CaMKII 18–21 ., Even though PKA has been incorporated in some of these models , PKA activation is typically described using simplified algebraic equations 21–23 ., These models do not include the role of dopamine or β-adrenergic receptors in PKA activation nor adequately describe the temporal dynamics of PKA activation ., Consequently , these models do not evaluate the temporal sensitivity of PKA , and cannot accurately explain why PKA is required for spaced stimulation ., In contrast , several models by Bhalla 24 , 25 include not only the signaling pathways leading to PKA activation , but also those for mitogen activated protein kinase ( MAPK ) activation ., However , Bhalla did not explore the role of dopamine or PKA in late-phase LTP , and we have utilized more recent experimental data to update several of the reactions , especially those involved in PKA activation ., To evaluate the biochemical mechanisms underlying the temporal sensitivity of PKA dependence of L-LTP and the role of dopamine , we developed a single compartment model of postsynaptic signaling pathways underlying L-LTP in CA1 pyramidal neurons of the hippocampus ., Reaction rates and pathways are based on published biochemical measurements ., Simulations explore the mechanisms underlying temporal sensitivity of LTP to PKA and complementary experiments test the model predictions of the critical temporal interval separating PKA-dependent and PKA-independent LTP ., Simulation results show that the activation of PKA is greater with spaced as compared to massed stimulation ( Fig 2A1 ) ., These results are consistent with experimental results 3 showing that PKA is
required for spaced , but not massed stimulation ., The cumulative activity of PKA with spaced stimulation ( 2321 nM-sec ) is 60% greater than with massed stimulation ( 1455 nM-sec ) ., Although the massed protocol produces a higher peak PKA activity , it is not 4 times higher than the peak produced from a single spaced train of stimulation because of sub-linear summation: the PKA peak activity for massed stimulation is only 1 . 4 times higher than the peak activity in response to spaced stimulation ( Fig 2A2 ) ., Subsequent trains do not increase the peak activity of PKA , but do contribute to cumulative PKA activity over time by linear summation; therefore more PKA activity is available with spaced stimuli ., Simulations are repeated for a range of inter-train intervals to further explore the temporal sensitivity of PKA dependence ., Fig 2C shows that cumulative PKA activity increases with temporal interval , with a time constant , τ , of 8 . 5 sec ., PKA activity reaches 95% of maximal value within 3 time constants , i . e . , at 25 .
5 sec ., This temporal sensitivity is not observed if peak activity is evaluated ., Activity at a single time point , such as 10 minutes after stimulation , is often used to compare with experimental measurements that measure enzyme activity at a single time point ., Nonetheless , cumulative activity better indicates the ability of an enzyme to act on downstream targets ., Using single time point measures of activity may explain why a previous study did not observe temporal sensitivity of PKA ., This increase in PKA activity with increasing inter-train interval can partly explain the mechanism of temporal sensitivity of PKA dependence of L-LTP , but the other part of the explanation is likely a deficit in some other molecule , such as CaMKII , which is known to be sensitive to higher frequency stimuli and plays a major role in LTP ., Thus , levels of phosphorylated CaMKII were examined for 3 sec and 300 sec inter-train intervals to assess whether PKA dependence was related to a decline in phosphorylated CaMKII with longer inter-train intervals ., This peak was evaluated because experiments suggest that phosphoCaMKII anchors at the post-synaptic density ( PSD ) and is not accessible to dephosphorylation by protein phosphatase 1 28 ., This would imply that activity would be proportional to peak value , and the resulting slow decay of phosphoCaMKII precludes a reasonable calculation of the area under the curve ., Fig 2B shows that peak activity of phosphorylated CaMKII with 300 sec intervals is lower than with 3 sec intervals , which is opposite to the temporal sensitivity of PKA , suggesting that PKA activity is compensating for a frequency-dependent deficit in CaMKII ., To further compare the CaMKII temporal sensitivity with the PKA temporal sensitivity , Fig 2C explores the phosphorylated activity of CaMKII for a range of inter-train intervals ., PhosphoCaMKII decreases as temporal interval increases ( beyond 3 sec ) , in agreement with experiments 29 ., The time 
constant of this decrease is 20 . 8 sec , and phosphoCaMKII completes 95% of its decline ( 3 time constants ) at a 62 sec inter-train interval ., The sum of ( normalized ) phosphoCaMKII and PKA activity is independent of interval for all but the very shortest intervals , suggesting that PKA is required for spaced stimulation to compensate for a decrease in CaMKII ., This result leads to the prediction that PKA will be required for inter-train intervals greater than ∼62 sec ., The prediction that PKA is required for intervals greater than ∼62 sec was tested by inducing L-LTP at Schaffer collateral-CA1 synapses in mouse hippocampal slices using 4 trains of high frequency stimulation , with either 40 sec or 80 sec inter-train intervals , in the presence of either KT5720 or vehicle as control ., As shown in Fig 3A , LTP induced by stimulation trains delivered at 80 sec inter-train intervals was attenuated in KT5720-treated slices compared to vehicle controls ., At 120 min after LTP induction , the average fEPSP slopes were significantly different: 196±11% for vehicle-treated slices and 112±7% for KT5720-treated slices ( Mann-Whitney U test , p<0 . 05 ) ., This demonstrates that LTP induced by 4 trains of high frequency stimulation delivered at 80 sec inter-train intervals requires PKA ., In contrast , fEPSP slopes are not significantly different between KT5720 and control slices using 40 sec inter-train intervals ( Fig 3B ) ., At 120 min after LTP induction , the average fEPSP slopes were 167±14% for vehicle-treated slices and 167±13% for KT5720-treated slices ( Mann-Whitney U test , p>0 .
05 ) ., This indicates that LTP stimulated by 4 trains of high frequency stimulation delivered at 40 sec inter-train interval is PKA-independent ., These results , and previous experimental results on PKA dependence 3 , are summarized in Fig 3C , which demonstrates that L-LTP induced with temporal intervals of 3 sec to 40 sec are PKA-independent , whereas L-LTP induced by temporal intervals of 80 sec and 300 sec are PKA-dependent ., These experiments support the model prediction , thus verifying the model and its explanation of mechanisms underlying PKA dependence ., In the hippocampus , adenylyl cyclase type 1 is synergistically activated by both calcium-calmodulin and dopamine , which is released during 100 Hz stimulation 30 from fibers innervating hippocampal area CA1 31 ., Further support for the role of dopamine is provided by experiments that show that L-LTP induced using a 10–12 min inter-train interval is reduced when dopamine receptors are blocked 8 , 30 , 32 ., Thus , simulations were repeated with the dopamine receptor blocked , to evaluate the contribution of dopamine to L-LTP ., Fig 4 shows that cumulative PKA activity is reduced significantly with both massed and spaced stimulation intervals when dopamine receptor function is blocked ., The PKA activity for a 300 sec inter-train interval with no dopamine is similar to the PKA activity for the 3 sec inter-train interval with dopamine present , suggesting that L-LTP induction with spaced stimuli requires the higher PKA produced by spaced stimuli ., Though the lack of dopamine reduces PKA activity for the 3 sec inter-train interval , this is not functionally significant because L-LTP with massed stimulation is PKA-independent ., In other words , a 300 sec inter-train interval activates insufficient quantities of CaMKII , and additional dopamine stimulated PKA activity is required for the 300 sec interval only ., Stimulation with a 3 sec interval activates sufficient CaMKII , and thus , the model predicts 
that blocking dopamine receptors would not block L-LTP for this interval ., The sensitivity of cumulative PKA activity to different temporal intervals follows that of adenylyl cyclase ( Fig 5A ) and cAMP ( Fig 5B ) ., The first 100 Hz train produces a 600 nM increase in adenylyl cyclase activity from binding to calmodulin and Gsα ( Fig 5A2 ) ., With the massed protocol , the second 100 Hz train only produces an additional 300 nM increase in adenylyl cyclase activity , because free adenylyl cyclase is depleted with massed trains to a significant degree ., More than 80% of unbound adenylyl cyclase 1 is available for activation by the first train of stimulation ( Fig 5C ) ; unbound adenylyl cyclase 1 decreases by 20% for massed ( Fig 5C1 ) , but remains at more than 80% for spaced stimulation ( Fig 5C2 ) ., Calmodulin , which activates adenylyl cyclase 1 , also exhibits a small degree of depletion , in part because it binds to other molecules , such as protein phosphatase 2B and phosphodiesterase 1B , with extremely high affinity ., Thus , subsequent stimulation trains produce smaller increments in activated adenylyl cyclase for massed , but not for spaced stimulation ., These lower adenylyl cyclase activity increments result in lower cAMP increments with subsequent trains using massed stimulation: 300 nM for the first train and 150 nM for the second train ( Fig 5B2 ) ; thus the total cAMP produced from four trains of stimulation is less than four times the cAMP produced for one train ., Note that the temporal pattern of cAMP , which decays within 40 sec to basal levels , agrees with measurements using a fluorescent Epac-1 probe 33 , 34 , verifying this aspect of the model ., Therefore , the activation of PKA is greater with spaced as compared to massed stimulation because adenylyl cyclase activity is greater with spaced as compared to massed stimulation ., PKA is important in LTP because it phosphorylates AMPA receptors and inhibitor-1 , as well as other plasticity 
related proteins 35–38 , not all of which have been identified ., Because rates of AMPA receptor phosphorylation have not been directly measured , we chose to evaluate the effect of PKA activity on a different target , namely inhibitor-1 ., Furthermore , inhibition of protein phosphatase 1 by phosphorylated inhibitor-1 will enhance phosphorylation of many PKA targets via inhibition of dephosphorylation ., Thus , examination of the phosphorylation state of inhibitor-1 in these simulations both represents the ability of PKA to phosphorylate downstream targets , and also indicates whether free protein phosphatase 1 will be sensitive to temporal interval ., As seen in Fig 6A , the amount of phosphorylated inhibitor-1 is 50% greater for spaced than massed stimulation ., Similar to that observed with PKA activity , the peak value is higher for massed stimuli , but total phosphorylated inhibitor-1 is greater for spaced stimuli ., This shows that the temporal sensitivity of PKA activity propagates to downstream targets ., The phosphorylated inhibitor-1 binds to protein phosphatase 1 with high affinity , inhibiting its activity ., Thus , the 50% increase in phosphorylated inhibitor-1 produces a 50% decrease in protein phosphatase 1 ( Fig 6B ) ., This suggests that the enhanced activity of PKA with spaced stimulation will suppress protein phosphatase 1 activity , reinforcing the phosphorylation of plasticity related proteins ., To test whether the enhanced inhibitor-1 phosphorylation increased CaMKII phosphorylation , simulations were repeated with PKA phosphorylation of inhibitor-1 blocked ., The decrease in CaMKII phosphorylation was small ( Fig S2A ) , suggesting other mechanisms to enhance PKA activity are important ( discussed below ) ., To investigate the robustness of results ( i . e . 
, whether the results are sensitive to variation in parameters ) , simulations are repeated using parameter values 2 to 10 times larger or smaller than the control values , for parameters that are least constrained by biochemical data ., For instance , though the quantity of PKA has been estimated to be 1 . 2 µM in brain tissue , assuming the protein distributes in 70% of intracellular space 39 , the existence of localized pools of PKA suggests that the effective quantity of this enzyme in the synapse could be higher than the estimated quantity ., Similar arguments can be made for protein phosphatase 1 ., Thus , simulations are repeated using both higher and lower quantities of PKA , protein phosphatase 1 , protein phosphatase 2B , as well as Ca2+ influx ., As shown in Fig S3 , the main results from this model are qualitatively robust ., Though the PKA activity increases when enzyme quantities are increased , spaced stimulation still produces ∼60% more total activity than massed stimulation ( Fig S3A ) ., The quantity of protein phosphatase 1 has no effect on PKA activity , but does modify the decay rate of phosphoCaMKII ., Regardless of protein phosphatase 1 quantity or dephosphorylation rate , spaced stimulation produces lower phosphorylated CaMKII than massed stimulation ( Fig S3B ) ., Peak Ca2+ has a different effect on phosphoCaMKII: it changes the peak value with no change in decay , and no change in frequency sensitivity ( Fig S3D ) ., PKA was minimally affected by variation of peak Ca2+ ( Fig S3C ) ., A recent FRET imaging experiment suggests that CaMKII activity in spines is transient in response to synaptic stimulation 40 ., Thus , additional simulations evaluated whether the results are sensitive to persistence of CaMKII ., Transient phosphoCaMKII was produced by allowing protein phosphatase 1 to dephosphorylate the calmodulin bound form of phosphoCaMKII ( Fig S2A ) ., Using this more transient phosphoCaMKII in simulations , CaMKII activity is quantified
as area under the curve ( instead of peak ) ., Fig S2B shows that area under the curve increases for PKA and decreases for phosphoCaMKII with increasing inter-train interval , the latter with a time constant of 17 . 8 sec – close to the time constant for the persistent model of CaMKII ., Thus , the prediction that PKA is required to compensate for a decrease in phosphoCaMKII is robust to this variation in CaMKII dynamics ., To better understand the complex intracellular signaling networks underlying the temporal sensitivity of PKA dependence of L-LTP , we developed a computational model of the calcium and cAMP signaling pathways involved in PKA and CaMKII activation in hippocampal CA1 neurons ., The model is based on published biochemical measurements of many key signaling molecules , most notably PKA and CaMKII ., Simulations of four trains of 100 Hz stimuli separated by 300 sec or 3 sec revealed that spaced stimulation activates more PKA and less CaMKII than massed stimulation ., Thus , PKA activity may be required for spaced stimulation because more of it is active , and less phosphoCaMKII is available ., Simulations were repeated for a range of inter-train intervals , to further explore the PKA dependence of L-LTP induction ., PKA activity increases exponentially with increasing inter-train interval , compensating for the decrease in phosphoCaMKII with increasing inter-train interval ., The time constant of phosphoCaMKII decrease was 20 . 
8 sec; thus , the model predicts that L-LTP induced with an inter-train interval greater than 62 sec ( 3τ ) will be dependent on PKA , and L-LTP induced with an interval less than 62 sec will be independent of PKA ., Experiments confirm this prediction , showing that a 40 sec inter-train interval is PKA-independent and an 80 sec inter-train interval requires PKA ., The temporal sensitivity of PKA differs from that in a previously published model 29 mainly due to the different method of quantifying PKA activity ., The present study measured cumulative PKA activity as area under the curve and found an increase with temporal interval ., In the previously published model , PKA activity was quantified as the peak activity at 600 sec after the last tetanus , to compare with experimental measurements which also measured activity at 600 sec after the last tetanus ., In that study , PKA peak activity did not exhibit temporal sensitivity , and thus could not explain the temporal sensitivity of PKA dependence of LTP ., To compare the present model results with that previous model , PKA activity was quantified as activity at 600 sec after the last tetanus ., Using this quantification , temporal sensitivity of PKA activation in the present model is minimal , in agreement with Ajay and Bhalla 29 ., Nonetheless , cumulative activity is a better measure of the ability of a kinase to phosphorylate downstream substrates such as AMPA receptors or inhibitor-1 , because cumulative activity is proportional to average enzyme activity over the time course of the enzyme ., With regard to CaMKII activity , both cumulative activity ( when CaMKII phosphorylation is transient ) and peak activity ( when CaMKII phosphorylation is persistent ) were good predictors of the critical inter-train interval ., Another PKA-dependent form of L-LTP is induced by theta-burst stimulation 41 , which uses short bursts of 100 Hz stimulation ( e . g .
, 4 pulses ) repeated at 200 msec intervals ., A typical experimental induction protocol uses fifteen repetitions of 4-pulse bursts , yielding 60 pulses total , far fewer than provided by 4 trains of 100 Hz stimulation ., Model simulations show that both CaMKII and PKA are lower with this stimulation due to the lower number of pulses ., These simulations of post-synaptic mechanisms cannot explain the PKA-dependence of theta-burst L-LTP because theta-burst L-LTP involves pre-synaptic mechanisms 42 ., In our model , activated PKA is represented as the cumulative quantity of the free catalytic subunit ., Although stimulation produces about a 60% increase in free catalytic subunit , the peak quantity of free catalytic subunit is relatively small ( less than 50 nM for massed stimulation and less than 35 nM for spaced stimulation ) ., This may suggest that the quantity of free catalytic subunit would be insufficient for the PKA-dependent L-LTP ( i . e . , both the increase in inhibitor-1 phosphorylation , and the inhibition in CaMKII dephosphorylation were small ) , especially given the number of PKA targets ., The small quantity of PKA free catalytic subunit produced is due to the high affinity ( 9 nM ) of the regulatory subunit for the catalytic subunit even when all four cAMP molecules are bound ., One possible solution to the low quantity of PKA catalytic subunit is that the cAMP-saturated holoenzyme is catalytically active toward its substrates ., Binding of four cAMP to the linker region of the regulatory subunit causes a conformational change , exposing the catalytic site without complete dissociation 43 , 44 ., The L-LTP induction paradigms produce a significant amount of cAMP-saturated holoenzyme ( twice as much as free catalytic subunit ) ., If this form is active , the quantity of active PKA would be three times higher ., In addition , the action of anchoring also increases local PKA activity in the synapse ., A-kinase anchoring proteins ( AKAPs ) bind to the regulatory subunit
of the PKA holoenzyme 45 ., By tethering the PKA holoenzyme near a preferred substrate at a particular subcellular location , a small number of molecules could produce significant phosphorylation of its substrate ., In support of this concept , experiments show that hippocampal synaptic plasticity requires not only PKA activation , but also the activation of an appropriately anchored pool of PKA 42 , 46 ., As previously mentioned , the conceptual model of CaMKII activation predicts a positive feedback loop in which increased phosphorylation leads to an increased rate of subsequent phosphorylation ., Therefore , subsequent stimulus trains should produce increasing increments in CaMKII activity ., Similar to other single compartment models 19 , 20 , 47 , this positive feedback response is not observed in the model unless additional calmodulin is provided ( Fig S4A ) ., Calmodulin binds with high affinity to protein phosphatase 2B and phosphodiesterase 1B , and with intermediate affinity to adenylyl cyclase as well as CaMKII ., This binding causes a decrease in Ca4-calmodulin with subsequent trains due to competition for calmodulin between the CaMKII pathway and other pathways ., Calmodulin is a diffusible protein; thus , in a dendritic spine free calmodulin would diffuse into the spine from the dendrite to replace the bound calmodulin ., In addition , neurogranin is a calmodulin binding protein that releases calmodulin upon Ca2+ stimulation; in essence neurogranin acts as a calmodulin reservoir 48–50 ., Simulations in which additional calmodulin is provided yield a frequency sensitivity of phosphoCaMKII that agrees with experimental measurements 29 ., Not only CaMKII , but also PKA activation is limited by free available calmodulin , since the predominant adenylyl cyclases ( 1/8 ) in hippocampus are activated by calmodulin ., Calmodulin depletion results in decreasing increments of adenylyl cyclase activity , cAMP production , and PKA activation with massed
stimulation , causing sublinear summation ., Providing additional calmodulin reduces the degree of sublinear summation , though the limited quantity of adenylyl cyclase 1 and adenylyl cyclase 8 also contributes to sublinear summation ., Thus , as illustrated in Fig S4B , the incorporation of additional calmodulin does not change the main result , namely that PKA cumulative activity is higher with spaced stimulation ., One way in which LTP is expressed post-synaptically is as enhanced phosphorylation of AMPA receptors leading to insertion of new AMPA receptors ., The phosphorylation state of AMPA receptors depends on the balance of kinases and phosphatases including PKA , CaMKII and protein phosphatase 1 51–53 ., Active PKA directly phosphorylates the AMPA receptor GluR1 subunit at Ser845 , enhancing AMPA channel function 54 and leading to increased AMPA channel expression ., PKA indirectly governs the dephosphorylation activity of protein phosphatase 1 by phosphorylating inhibitor-1 , which then binds protein phosphatase 1 with very high affinity ., Other substrates of PKA are implicated in hippocampal synaptic plasticity , including phosphodiesterase type 4D3 and inositol triphosphate receptor channels 55 , 56 ., AMPA channel phosphorylation modulates expression of LTP , but transcription and translation are required for L-LTP 57 ., A target of phosphorylation by active PKA involved in transcription is the cAMP Response Element Binding Protein ( CREB ) in the nucleus ., Phosphorylated CREB increases activation of transcription and protein translation ., Members of the mitogen activated protein kinase ( MAPK ) family are targets of PKA that play a role in transcription , translation , and synaptic plasticity 58 ., One member of the MAPK family is extracellular signal-regulated kinase type II , which is phosphorylated by several signaling pathway kinases , such as PKA and also CaMKII through synGAP 59 , 60 ., Ajay and Bhalla 29 demonstrate that both extracellular
signal-regulated kinase type II activity and the magnitude of LTP induction are maximal using inter-train intervals of 300–600 sec; in this context , our results suggest that part of the temporal dependence of extracellular signal-regulated kinase type II is due to PKA ., Yet another target of PKA involved in maintenance of LTP is the atypical protein kinase C , type Mζ , which is phosphorylated at a site of convergence of both PKA and CaMKII 61 ., Thus , our hypothesis that the combination of CaMKII plus PKA is critical for L-LTP is consistent with several of these target proteins whose activity integrates multiple kinases ., Additional evidence suggests that PKA is critical for synaptic tagging 46 , 62 , 63 , which provides the synaptic specificity important for information processing ., The synaptic tag theory proposes that L-LTP associated gene products can only be captured and utilized at synapses that have been tagged by previous activity 64 ., Both CaMKII and PKA have been implicated in phosphorylation of an unidentified synaptic substrate , which appears necessary to set a tag at activated synapses to allow capture of plasticity factors ( i . e . 
CRE-driven gene products , newly synthesized AMPA receptors or mRNAs ) ., One possibility is that phosphorylation of the tag can be provided by either CaMKII , PKA , or both , depending on the temporal interval of stimulation ., To further evaluate L-LTP , it will be necessary to include some of these signaling events downstream of PKA , such as activation of extracellular signal-regulated kinase type II ., Furthermore , anchoring of proteins in spines , communication with the larger dendrites , and other spatial details all suggest that single compartment models are not sufficiently accurate ., Thus , multi-compartmental models will be critical for evaluating issues such as the distribution of synaptic inputs underlying the spread of biochemical signals from synapses to dendrites 25 or the diffusion of biochemical signals between spines 65 ., For example , preliminary simulations using a multi-compartmental stochastic model suggest that localization of dopamine receptors and PKA leads to larger phosphorylation of inhibitor-1 , and inhibition of protein phosphatase 1 , as experimentally observed 35 ., Given the complexity of non-linear interactions among signaling pathways , simulations using these novel multi-compartmental models promise to enhance understanding of the mechanisms underlying synaptic plasticity ., All research with animals was consistent with NIH guidelines and approved by the IACUC at the University of Pennsylvania ., The single compartment , computational model , illustrated in Fig 1A , consists of signaling pathways known to underlie synaptic plasticity in hippocampal CA1 pyramidal neurons ., Calcium influx through the NMDA receptor leads to calcium-calmodulin activation of adenylyl cyclase types 1 and 8 66 , phosphodiesterase type 1B , protein phosphatase 2B ( PP2B or calcineurin ) and CaMKII ., In addition , CA1 is innervated by dopamine fibers 67 , and dopamine type D1/D5 receptors , coupled to Gsα , are expressed in CA1 68 ., Dopamine levels
increase in response to 100 Hz stimulation 30 , leading to enhanced adenylyl cyclase ( type 1 ) activity 69 , 70 , and increases in cAMP , which activate PKA 71 , 72 ., The phosphorylation of inhibitor-1 by PKA transforms inhibitor-1 into a potent inhibitor of protein phosphatase 1 73 , 74 , thereby decreasing CaMKII dephosphorylation ., Though not included in the model , the phosphorylation state of the AMPA receptor is controlled by CaMKII , PKA and protein phosphatase 1 75 , 76 ., All reactions in the model are listed in Tables 1 and 2 and are described as bimolecular chemical reactions or as enzymatic reactions except for PKA ( described below ) and CaMKII reactions ( Text S1 ) ., A set of rate equations is constructed to describe the biochemical reactions of the model's pathways ., These rate equations are nonlinear ordinary differential equations with concentrations of chemical species as variables ., Equations are derived assuming all reactions are in a single compartment and the number of molecules is sufficient for mass action kinetics , as follows: For a bimolecular chemical reaction: A + B ⇌ C + D ( 1 ) in which substrates , A and B , are consumed to create products , C and D , the rate of reaction is represented by a differential equation of the form d[C]/dt = kf [A][B] - kb [C][D] ( 2 ) where kf and kb are the forward and backward rate constants of the reaction , and Kd = kf/kb is the affinity ., For an enzyme-catalyzed reaction: E + S ⇌ ES → E + P ( 3 ) where E , S , ES and P denote enzyme , substrate , enzyme-substrate complex and product , the rate of production of P , dP/dt , is given by: dP/dt = kcat [ES] ( 4 ) For enzymatic reactions , kcat , defining the last , catalytic step , is the rate at which product appears , and the affinity Km is defined as Km = ( kb + kcat ) /kf ., When kb is not known explicitly , kb is defined as 4 times kcat 14 ., PKA ( cAMP-dependent protein kinase ) is activated by the cooperative binding 77 of cAMP to two tandem cAMP-binding sites ( called A and B sites ) on each of the two regulatory subunits ., The binding
of four cAMP leads to the dissociation of the active catalytic subunits , allowing them to phosphorylate their protein targets 4 ., In the model pairs of cAMP bind with first order kinetics as measured by the fraction of free catalytic subunit as a function of cAMP concentration 78 , 79 ., The affinity of site A relative to the affinity of site B is obtained from Herberg et al . 77 , 80 ., Keeping this ratio , the affinity of these sites was adjusted to match the overall affinity of the holoenzyme 72 ( Fig S1A ) ., The only exception to the single compartment approximation is that additional calmodulin was provided to prevent calmodulin from decreasing significantly during stimulation ., Calmodulin is a diffusible protein , thus in a dendritic spine , free calmodulin would diffuse into the spine from the dendrite to replace the bound calmodulin ., In addition , neurogranin is a calmodulin binding protein that releases calmodulin upon Ca2+ stimulation; in essence neurogranin acts as a calmodulin reservoir 48–50 ., The increase in calmodulin was made proportional to the difference between initial calmodulin and free calmodulin ., The rationale is that without additional calmodulin , subsequent stimulation trains produce a smaller increment in phosphoCaMKII ( 1st: 166 nM , 4th: 154 nM ) , which is inconsistent with the positive | Introduction, Results, Discussion, Methods | Protein kinases play critical roles in learning and memory and in long term potentiation ( LTP ) , a form of synaptic plasticity ., The induction of late-phase LTP ( L-LTP ) in the CA1 region of the hippocampus requires several kinases , including CaMKII and PKA , which are activated by calcium-dependent signaling processes and other intracellular signaling pathways ., The requirement for PKA is limited to L-LTP induced using spaced stimuli , but not massed stimuli ., To investigate this temporal sensitivity of PKA , a computational biochemical model of L-LTP induction in CA1 pyramidal neurons was 
developed ., The model describes the interactions of calcium and cAMP signaling pathways and is based on published biochemical measurements of two key synaptic signaling molecules , PKA and CaMKII ., The model is stimulated using four 100 Hz tetani separated by 3 sec ( massed ) or 300 sec ( spaced ) , identical to experimental L-LTP induction protocols ., Simulations show that spaced stimulation activates more PKA than massed stimulation , and makes a key experimental prediction , that L-LTP is PKA-dependent for intervals larger than 60 sec ., Experimental measurements of L-LTP demonstrate that intervals of 80 sec , but not 40 sec , produce PKA-dependent L-LTP , thereby confirming the model prediction ., Examination of CaMKII reveals that its temporal sensitivity is opposite that of PKA , suggesting that PKA is required after spaced stimulation to compensate for a decrease in CaMKII ., In addition to explaining the temporal sensitivity of PKA , these simulations suggest that the use of several kinases for memory storage allows each to respond optimally to different temporal patterns . 
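Two quantitative ideas used throughout this record can be sketched numerically: cumulative kinase activity measured as area under the activity curve, and the rule that an exponential process completes about 95% of its change within three time constants (3 × 8.5 sec = 25.5 sec for PKA). The decaying trace below is a synthetic stand-in for an activity time course, not model output, and its 50 nM peak is an arbitrary placeholder:

```python
import math

# Illustrative check of two quantities from the text (synthetic trace,
# NOT model output): cumulative activity as area under the curve, and
# the 3-time-constant rule for exponential processes.
tau = 8.5     # PKA time constant reported in the text, sec
peak = 50.0   # hypothetical peak activity, nM (placeholder)
dt, n = 0.01, 12000                      # 120 sec of synthetic trace
trace = [peak * math.exp(-i * dt / tau) for i in range(n + 1)]

# Cumulative activity (nM-sec) via trapezoidal area under the curve;
# for a pure exponential this approaches peak * tau = 425 nM-sec.
auc = sum(0.5 * (a + b) * dt for a, b in zip(trace, trace[1:]))

# An exponential completes 1 - e^-3 of its change in 3 time constants,
# i.e. ~95% at 3 * 8.5 = 25.5 sec, matching the text.
frac = 1.0 - math.exp(-3.0)
print(round(auc, 1), round(frac, 3))   # -> 425.0 0.95
```

In the paper, this same area-under-the-curve reduction, applied to the simulated PKA traces, yields the 2321 versus 1455 nM-sec spaced/massed comparison quoted in the Results.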
| The hippocampus is a part of the cerebral cortex intimately involved in learning and memory behavior ., A common cellular model of learning is a long lasting form of long term potentiation ( L-LTP ) in the hippocampus , because it shares several characteristics with learning ., For example , both learning and long term potentiation exhibit sensitivity to temporal patterns of synaptic inputs and share common intracellular events such as activation of specific intracellular signaling pathways ., Therefore , understanding the pivotal molecules in the intracellular signaling pathways underlying temporal sensitivity of L-LTP in the hippocampus may illuminate mechanisms underlying learning ., We developed a computational model to evaluate whether the signaling pathways leading to activation of the two critical enzymes: protein kinase A and calcium-calmodulin-dependent kinase II are sufficient to explain the experimentally observed temporal sensitivity ., Indeed , the simulations demonstrate that these enzymes exhibit different temporal sensitivities , and make a key experimental prediction , that L-LTP is dependent on protein kinase A for intervals larger than 60 sec ., Measurements of hippocampal L-LTP confirm this prediction , demonstrating the value of a systems biology approach to computational neuroscience . | computational biology/computational neuroscience | null |
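The calmodulin-replenishment rule described in the sections excerpt above ( an increase proportional to the difference between initial and free calmodulin ) can be sketched as a simple relaxation update ., The rate constant and concentrations below are hypothetical , chosen only to illustrate the rule :

```python
def replenish_calmodulin(cam_free, cam_init, k_rep, dt):
    """One Euler step of the replenishment rule: the influx of free
    calmodulin is proportional to (cam_init - cam_free), mimicking
    diffusion from the dendrite and release from neurogranin."""
    return cam_free + k_rep * (cam_init - cam_free) * dt

# A stimulation-induced dip in free calmodulin (hypothetical values, in uM)
cam_init, cam = 10.0, 6.0
for _ in range(200):
    cam = replenish_calmodulin(cam, cam_init, k_rep=0.5, dt=0.1)
```

This keeps free calmodulin from declining across repeated stimulation trains , which the excerpt identifies as necessary for consistent phosphoCaMKII increments .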
1,749 | journal.pcbi.1003070 | 2,013 | Hybrid Equation/Agent-Based Model of Ischemia-Induced Hyperemia and Pressure Ulcer Formation Predicts Greater Propensity to Ulcerate in Subjects with Spinal Cord Injury | In the United States , it is estimated that approximately 250 , 000 people live with spinal cord injury ( SCI ) ., Approximately 12 , 000 new cases occur each year 1 , with total direct costs for treating all cases of SCI exceeding $7 billion annually 2 , 3 ., Pressure ulcers are common , costly and life-threatening complications for people with SCI ., The prevalence of pressure ulcers in people with SCI is estimated to range from 8% to as high as 33% 4 ., Post-SCI pressure ulcers are caused by a combination of impaired sensation , reduced mobility , muscle atrophy , as well as reduced vascularity and perfusion 5 ., The current consensus is that pressure alone or pressure in combination with shear force cause localized injury to the skin and/or underlying tissue , usually over a bony prominence 6 ., Several pathways have been identified for pressure/shear-induced ulceration , the major one being tissue ischemia ., Prolonged tissue ischemia may cause inflammation , necrosis , and the eventual formation of a pressure ulcer 7 , 8 ., Tissue inflammation is the common physiological reaction caused by tissue ischemia before necrosis occurs ., We have focused our attention on this complex biological process ., Inflammation is a central , modulating process in many complex diseases ( e . g . 
sepsis , infectious disease , trauma , and wound healing ) , and is a central driver of the physiology of people with SCI 9–12 ., However , inflammation is not an inherently detrimental process: properly regulated inflammation is required for successful immune response and wound healing 9 , 13 , 14 ., Inflammation is a prototypical complex , nonlinear biological process that has defied reductionist , linear approaches 15–18 ., Dynamic computational simulations , including ordinary differential equation ( ODE ) - and agent-based models ( ABM ) , have been employed to gain insights into inflammation ., These simulations have been useful in integrating mechanistic information and predicting qualitative and quantitative aspects of the inflammatory/wound healing response 19–22 ., The purpose of the present study was to integrate blood flow data and the process of skin injury , inflammation , and healing using a hybrid model that combines ABM and ODE into a single computational model ., Agent-based modeling is an object-oriented , rule-based , discrete-event method of constructing computational models , and this technique can be used to model complex biological systems in which the behavior of individual components/agents , as well as pattern formation and spatial considerations are important 23 ., Systems of ODE are well-suited for describing processes ( or physiological responses ) that can be approximated as well-mixed systems 22–26 ., Modeling with differential equations ( ordinary or partial ) is the most widely used method of mathematical modeling ., The main advantage of this approach is that there is a well-developed mathematical theory of differential equations which helps to analyze such equations and in some cases completely solve them 23 , 25 , 26 ., To model a complex biological system as an ABM , the system is divided into small computational units ( “agents” ) , with each agent obeying a set of rules that define the behavior of this agent ., These simple 
rules , performed stochastically by agents in the model , lead to a complex , often emergent behavior of the system as a whole ., In many cases , agents need only local information on the state of the system , rather than being affected by the global system state ., As such , ABMs are particularly well suited to representing the transition from mechanisms at one scale of organization to behavior observed at another ., The object-rule emphasis of an ABM greatly simplifies the process of model construction without loss of important features in the system , and also allows for modeling biological processes that are known to have both local and global features 23 ., Our primary goal in this study was to gain translationally-useful insights into post-SCI pressure ulcer formation using dynamic , mechanistic computational modeling ., However , several issues exist with the use of either ABMs or ODEs alone in modeling pressure ulcer formation ., It is difficult to analyze the output of ABMs in order to derive insights into qualitative regimes or primary drivers of outcome ., In addition , simulating ABMs is more computationally intensive than simulating ODE-based models ., On the other hand , real-life systems are often too complex to be modeled using only ODE , and the corresponding equation-based models may become too complicated to yield practically useful results ., Hybrid modeling is an emerging technique that involves combining diverse types of computational models into a single simulation 27–29 ., In this approach , ODE can be used to define certain agent rules ( low-level details ) , and ABM to describe the behavior of the high-level components of the system ., In the present study , we utilized ODE to model the properties of tissue ischemia , and an ABM to model the stochastic , pressure-driven ulcer formation behavior in people with and without SCI ., Using this approach , we find that a model calibrated with blood flow data predicted a higher propensity to
form ulcers in response to pressure in SCI patients vs . non-injured control subjects ., The skin blood flow data used for computing the parameters of the differential equation model were collected from 12 adults ( six with SCI and six without ) ., This study was approved by the University of Pittsburgh Institutional Review Board ( IRB# PRO08060015 ) , and was carried out after obtaining informed consent from the participants ., The age range of the subjects recruited for this study was 20–50 years old ., The actual ages in each group were: subjects with spinal cord injury ( 26 , 27 , 35 , 35 , 43 , 48 years old ) ; subjects without any neurological deficits ( 21 , 25 , 29 , 35 , 36 , 44 years old ) ., There was no statistically significant difference in age between the two cohorts of subjects ( data not shown ) ., For people with SCI , only those graded A or B on the ASIA scale 30 , a scale for classification of spinal injury , at least one year post-injury , and non-ambulatory were recruited ., The reactive hyperemic response was induced with 60 mmHg of pressure for 20 min on the sacral skin , with the participants lying on their stomach on a mat table ., A laser Doppler probe was located at the center of the indenter to collect the skin blood flow ., Instrumentation details were published previously 31 ., Sample blood flow data collected in the experiment are shown in Figure S1 ., The raw blood flow data of all tested subjects are provided in Dataset S1 and the plots of these data are shown in Dataset S2 ., The hybrid model utilized in our study comprises an ABM of skin/muscle injury , inflammation , and ulcer formation along with an ODE model of blood flow and reactive hyperemia ., The ABM portion of the model comprises interactions among oxygen , pro-inflammatory elements , anti-inflammatory elements , and skin damage , with realistic predictions of the pattern , size , and progression of pressure ulcers ., All rules of this ABM were generated based on literature
reviews and previously-described ABMs of diabetic foot ulcer formation 21 and simplified pressure ulcer formation 32 ., The ODE portion of the model simulates the ischemia-induced reactive hyperemic response , and is derived from a previous circuit model 33 ., Figure 1 shows the model representation of the pressure ulcer formation ., Figures 2A&B depict the model components and their interactions within the hybrid model , with the solid rectangles , ellipses and arrows representing the components of the ABM portion and the dashed ellipse and arrows representing the components of the ODE portion of the model ., The ABM portion of the model is based on our previously-developed models 21 , 32 ., This ABM is a simplified model that simulates inflammation and reactive hyperemic response ( as the result of applied pressure ) in a small segment of tissue ( epithelial cells in the model ) ., We implemented this ABM in SPARK ( Simple Platform for Agent-based Representation of Knowledge; freely downloadable at http://www.pitt.edu/~cirm/spark ) 34 , following an extensive process of literature search and creation of graphical diagrams that incorporate known biological influences 20 , 35 , 36 ., From such diagrams and based on our prior work on modeling of the formation of diabetic foot ulcers 21 , we constructed rules by which individual agents ( e . g .
cells or cytokines ) interact with each other and bring about biological effects ., The ABM portion of the model consists of key cells and diffusible inflammatory signals assumed to be involved in the process of formation of a pressure ulcer ., A similarly parsimonious approach was used to construct the rules and relationship among agents , with the goal of generating a high-level view of the process of pressure ulcer formation ., The components and inter-relationships among the agents and variables of the pressure ulcer ABM are presented in Figure 2 ., Importantly , our model adheres to our prior work on the importance of the positive feedback loop of tissue damage/dysfunction→inflammation→tissue damage/dysfunction 22 , 25 ., The main components of the ABM portion of the model are: structural/functional skin cell ( nominally epithelial cells ) ; inflammatory cells ( nominally macrophages ) ; blood vessels; an aggregate pro-inflammatory cytokine agent ( nominally TNF-α ) ; an aggregate anti-inflammatory/pro-healing cytokine ( nominally TGF-β1 ) ; and oxygen ., These agents interact according to the following rules ., Epithelial cells are damaged by applied pressure ., A damaged epithelial cell produces TNF-α ., Epithelial cells also are damaged by excessive amount of TNF-α ., A severely damaged epithelial cell dies ., An epithelial cell can be healed by TGF-β1 , and the healing rate is proportional to the amount of oxygen at the position of the epithelial cell ., Macrophages are attracted by TNF-α , and they also produce TNF-α and TGF-β1 ., Each macrophage has a fixed lifespan ( measured in simulation steps ) and a macrophage dies after several simulation steps ., Blood vessels create new macrophages and release oxygen ., The rate of macrophage production and oxygen release depends on the amount of blood flowing through a blood vessel ., The ODE portion of the model ( see below ) is incorporated into blood vessel rules , which specify how the oxygen is produced ., 
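The epithelial-cell rules above can be condensed into a single illustrative update function ., A minimal sketch in Python , with all rate coefficients ( k_pressure , k_tnf , k_heal ) and the TNF threshold being hypothetical placeholders rather than the model's calibrated SPARK parameters :

```python
def update_epithelial_health(health, pressure, tnf, tgf, oxygen,
                             k_pressure=0.02, k_tnf=0.05, k_heal=0.04,
                             tnf_threshold=1.0):
    """One update of an epithelial cell's health, clamped to [0, 1].

    Mirrors the stated rules: pressure damages the cell, excessive
    TNF-alpha damages it further, and TGF-beta1 heals it at a rate
    proportional to the local oxygen level.
    """
    health -= k_pressure * pressure            # mechanical damage
    if tnf > tnf_threshold:                    # excessive TNF-alpha is damaging
        health -= k_tnf * (tnf - tnf_threshold)
    health += k_heal * tgf * oxygen            # healing scaled by oxygen
    return min(1.0, max(0.0, health))

# Pressure with no healing erodes health; TGF-beta1 plus oxygen restores it
damaged = update_epithelial_health(1.0, pressure=1.0, tnf=0.0, tgf=0.0, oxygen=0.0)
healed = update_epithelial_health(0.5, pressure=0.0, tnf=0.0, tgf=1.0, oxygen=1.0)
```

In the full model , a cell whose health reaches zero dies , and a blood vessel dies if its surrounding epithelial cells die .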
Blood flow depends on the pressure applied on a blood vessel ., A blood vessel dies if the surrounding epithelial cells die ., There are also global model rules which specify how oxygen , TNF-α , and TGF-β1 diffuse and evaporate ., Physical pressure in the ABM portion of the model is applied periodically ., More specifically , the pressure is applied for a fixed period of time ., The pressure is then released for the same amount of time , and the process repeats ., A specific model parameter ( called Pressure Interval ) specifies the pressure time interval ., A detailed description of ABM rules and parameters is given in Text S1 ., Ischemia-induced hyperemia ( the reactive hyperemic response ) is a sudden increase in skin blood flow following tissue ischemia 37 ., Hyperemia is a normal physiological response that can be easily induced with non-damaging ischemic events , and it has been used in numerous fields to examine endothelial function 38 and vascular activity 39 ., We incorporated an ODE model of reactive hyperemia into the pre-existing ABM of ulceration in order to link measurable parameters of reactive hyperemia to the process of ulceration induced by repeated cycles of pressure and ischemia/reperfusion ., To do so , we adopted the ODE-based circuit model of de Mul et al 40 ., These authors suggested that the reactive hyperemic response could be modeled as the circuit shown in Figure 3 , with R ( resistance ) representing vascular resistance , C ( capacitance ) representing vessel compliance , V ( t ) representing the input blood flow pressure , and I ( current ) representing blood flow ., I2 ( t ) represents the skin blood flow ( specifically , reactive hyperemia ) as measured using a laser Doppler flowmetry system ., The ODE system derived from the circuit model has the following form:

dI1/dt = ( dV/dt - I5/C1 ) / R1
dI2/dt = ( I5/C1 - I6/C2 ) / R2

where the remaining currents are defined algebraically by

I4 = ( V - R1 * I1 ) / R4
I3 = ( V - R1 * I1 - R2 * I2 ) / R3
I5 = I1 - I2 - I4
I6 = I2 - I3

Note that here we have only two differential equations , for I1 ( t ) and I2 ( t ) ., I3 ( t ) , I4 ( t ) , I5 ( t ) , and I6 ( t ) can be algebraically eliminated ., We are interested in modeling a situation when an occlusion occurs in the input blood flow due to application of an external pressure ., De Mul et al 40 model such a situation by considering the following stepwise input blood flow function:

V ( t ) = 0 for t < 0 ( occlusion ) , V ( t ) = V0 for t ≥ 0 ( release )

Here V0 is the aortic pressure ., Based on this expression of V ( t ) , an explicit solution for I2 ( t ) can be derived with initial conditions I1 ( 0 ) = I2 ( 0 ) = 0 ., This solution has the following biexponential form:

I2 ( t ) = I2 , rest + a * exp ( -p1 * t ) + b * exp ( -p2 * t )

Here I2 , rest , a , b , p1 , and p2 are constants expressed in terms of R1 , R2 , R3 , R4 , C1 , C2 , V0 ., We used this explicit solution for I2 ( t ) for finding parameter values of the circuit model ( the ODE portion of the model ) based on available blood flow experimental data ., In our agent-based simulations , the input blood pressure was a periodic function ., In order to obtain the blood flow in these simulations , we used the ODE explicitly in our ABM ., The main components of SPARK models are Space , Data Layers , Agents , and the Observer 34 ., Space is analogous to the physical space , and provides a context within which the model evolves ., Data Layers provide a convenient way of tracking variables in space ., Data layers update in time simultaneously at all positions ., This is a computationally efficient way of handling processes such as diffusion and evaporation without employing an agent at each position to carry out the calculation ., Agents can move , perform functions , interact with each other , and also interact with the space they occupy ., Each agent has a set of behaviors and rules of action ., The Observer contains information about space and all agents in the model ., We extended SPARK with a feature for simple incorporation of ODE into an ABM ., Epithelial cells , blood vessels , and macrophages were implemented as agents in SPARK ., Oxygen , TNF-α , and TGF-β1 were implemented as data layers in SPARK ., Pressure was implemented as a global model variable that periodically changes during the model simulation process ., The ODE portion of the model is integrated into the code of blood vessel agents ., The following example shows how ODEs were added into SPARK-PL code:

equations
  I4 = ( V - R1 * I1 ) / R4
  I3 = ( V - R1 * I1 - R2 * I2 ) / R3
  I5 = I1 - I2 - I4
  I6 = I2 - I3
  Dt I1 = ( dV - I5/C1 ) / R1
  Dt I2 = ( I5/C1 - I6/C2 ) / R2

All variables in the example above are local variables of a blood vessel agent ., The equations describe the evolution of these variables in time ., Each time step , the equation is integrated on the interval t1 , t1+dt , where t1 is the current simulation time and dt is the global parameter which specifies the time step ., The output values of the equations are used in other rules defined for a blood vessel agent ., V represents the input blood pressure , which in our simulations is a periodic function depending on three parameters:

V ( t ) = Vmin for 2k * Tp ≤ t < ( 2k + 1 ) * Tp , V ( t ) = Vmax for ( 2k + 1 ) * Tp ≤ t < ( 2k + 2 ) * Tp

Here , Vmax and Vmin represent maximal and minimal blood pressures respectively; Tp is the pressure interval parameter of the model; k = 0 , 1 , 2 , etc; t is the number of simulation ticks ., In other words , we set V = Vmin when the external pressure is applied and V = Vmax when the external pressure is released ., The SPARK source codes of this hybrid model are provided in Dataset S3 ., The ODE-based portion of the model was fit to data on blood flow for two different groups of subjects: a control group ( CTRL ) and an SCI group , as follows ., We initially fixed the parameters of the agent-based portion of the model ., We chose these parameters based on a literature search ., Only the approximate scale of parameters could be selected in this fashion , since our ABM is a simple , lumped-parameter model ., With this set of parameter values , the ABM produces qualitative behavior commensurate with normal inflammation and wound healing 21 ., Raw blood flow data were filtered with low-pass filters ., The filtered data were averaged over all six
subjects in each group ., Figures 4A and 4B depict the averaged reactive hyperemia blood flow data in people with and without SCI , respectively ., We note that Figure 4A tends to oscillate more than Figure 4B ., Depending on the level and severity of injury , the reactive hyperemic response as measured with skin blood flow varied in people with SCI as compared to people without any neurological deficits ., One main difference was the rate of increase and decrease in the skin blood flow of the reactive hyperemic response 41; in other words , one subject's peak blood flow may occur at 0.5 minute , and another's may occur at 2.0 minutes ., With this variation , the blood flow oscillates more in Figure 4A as compared to Figure 4B ., Another possible explanation is that the skin blood flow as measured with the laser Doppler flowmetry system does oscillate naturally ., When the skin blood flow signal was analyzed with the Fourier transform , previous studies identified that different frequency bands represent different physiological control mechanisms of the blood flow 42 ., Therefore the oscillation of skin blood flow is inevitable ., We also note that the data in our simulation focused on the first 4 minutes ., The interesting portion of the experimental data is the time when the peak blood flow occurs ., We obtained approximately 10 minutes of raw data after releasing the pressure ., The important information includes the time of the peak and the rate of decrease after the peak; both these values can be extracted from the first 4 minutes after the pressure is released for all recorded data ., We believe that it is simpler and more reliable to fit the ODE parameters based on the most important part of the experimental data ( i . e .
the first 4 minutes ) , since the rest of the data do not contain any important information for model fitting ., We then calibrated the ODE portion of the model based on the averaged data ., Calibration was done using the following error function , which measures the distance between actual ( averaged ) data and simulated results:

Ei ( p ) = Σk ( y ( p , k ) - Mi ( k ) )^2

Here i is the group index , i . e . , i is either CTRL or SCI ., Ei ( p ) is the error for the i-th group; y ( p , k ) is the value of the model function evaluated at the point k with the parameter vector p ., Mi ( k ) is the averaged i-th data at the point k ., Calibration was performed using Matlab R2011 ( The Mathworks , Inc . , Natick , MA , USA ) ., We used the explicit expression of I2 ( t ) for finding best-fit parameters ., The values of Vmax were assumed to be 85 mmHg for the control group and 75 mmHg for the SCI group , the same pressure values as in the experiments ., For all other parameters , we defined possible lower and upper bounds ., For the control group , we set 200 as the upper bound of all parameters , and 0.01 as the lower bound for all parameters except R4 , for which we chose 190 as the lower bound since it is assumed that R4 >> R1 , R2 , R3 40 ., Then we randomly selected 1000 points in the space of parameters , ran the standard Matlab minimization function fminsearch for all these initial points , and picked the best-fit results ., The search of best-fit parameters for the SCI group was carried out in a similar way ., The only differences were that the value of Vmax = 75 , and in addition we changed the upper bounds of C1 and C2 and set them equal to the best-fit values of C1 and C2 for the control group ., This change was made to reflect the fact that C1 and C2 are smaller in SCI than in CTRL 43 ., Figures 5A and 5B show the best-fit simulation results , which minimize the error function Ei ( p ) in data from people with and without SCI , respectively ., Table 1 lists the values of the best-fit parameters for both groups , with the ratios calculated in Figure 6 to show the significant change of parameters for people with and without SCI ., The results show that vascular resistance ( R1 ) is significantly increased and that blood vessel compliance ( C1 , C2 ) is decreased in the SCI group as compared with the control group ., We next sought to determine the behavior of our simulation under a more clinically realistic setting , in which pressure to tissues alternates with periods of pressure relief ., We also sought to determine if , once partially calibrated with blood flow data from control vs .
SCI subjects , our model would predict differential propensity to ulcerate between these two groups of patients ., We simulated the application of medium-scale pressure on the skin with different frequencies , first applying a pressure on the skin for a given period of time ( pressure interval ) , releasing the pressure for the same amount of time , and then repeating the process ., Using the parameters obtained as described above , we ran the model simulations for both groups and compared the outcome ., We ran the model for 2000 steps with various values of the pressure interval parameter ., All other ABM parameters were fixed ., We assumed Vmax = V0 ( i . e . , Vmax = 85 for the control group and Vmax = 75 for the SCI group ) and Vmin = 40 for both groups ., We initially examined the minimal value of the pressure interval that would be predicted to result in substantial tissue damage ( death of some epithelial cell agents ) ., Figures 7A and 7B show the SPARK simulation results for control and SCI subjects ., Green squares represent healthy epithelial cells , red squares represent damaged epithelial cells , red circles represent blood vessels , and blue circles represent macrophages ., For the control group , the minimal value of the pressure interval was 205–210 simulation ticks ( Figure 7A ) ; in contrast , for the SCI group , the minimal value was 105–110 simulation ticks ( Figure 7B ) ., We also performed subject-specific fitting of the ODE parameters and measured the minimal value of the pressure interval resulting in substantial tissue damage for each subject ., The results are given in Table 2 ., The average subject-specific value of the minimal pressure interval was 207 for control subjects and 168 for SCI subjects ., These results agree qualitatively with our findings for the averaged data presented above: the minimal pressure interval is larger for the control group ., We next examined the predicted effect of
turning frequency on control and SCI subjects ., Figures 8A and 8B show how the predicted health of epithelial cells progresses over time for simulations of the control and SCI groups , respectively , over varying pressure on/off cycles ., When we increased the frequency ( i . e . , applied pressure for a short period of time and then relieved this pressure ) , we obtained an outcome in which a pressure ulcer did not form: when the simulated pressure is applied , the tissue is damaged somewhat , but when the pressure is relieved tissue health is restored ., Also , simulated damage/dysfunction was predicted to increase more rapidly in the SCI group vs . the control group when the pressure interval was increased ., The components of the inflammatory response are time-driven , highly interconnected , and interact in a nonlinear fashion 15–18 , 44 ., The systems biology community has integrated mathematical and simulation technologies to understand complex biological processes 45 ., More recently , we have suggested translational systems biology as a framework in which computational simulations are designed to facilitate in silico clinical trials , simulations are appropriate for in vivo and specifically clinical validation , and mechanistic simulations of whole-organism responses could guide rational therapeutic approaches 25 ., Agent-based models have emerged as a useful complement to ODE-based models for elucidating complex biological systems , including inflammation , wound healing , angiogenesis , and cancer 19 , 21 , 23 , 36 , 46–49 ., In the present study , we utilized a hybrid modeling approach that combines features of both ODE and agent-based models ., Using this approach , we integrate data regarding blood flow properties in SCI patients and compare them to data from control subjects ., Our analysis suggests that , based on an abstraction of these blood flow properties and a stochastic model of tissue inflammation and ulcer formation , and in agreement
with the literature 50 , SCI patients are predicted to be more prone to ulceration ., Our study , along with prior work 28 , 51 , 52 , suggests that such hybrid modeling methodology could have a wide application in modeling complex , multiscale biological systems ., Despite the lack of sensation and motor function after SCI , several physiological changes at the chronic stage of SCI ( more than 12 months since injury ) increase a person's susceptibility to develop pressure ulcers , including changes in body composition ( increased proportion of fatty tissue ) and vascularity 5 ., The linkage between changes in vascularity , epithelial function and pressure ulcer formation in people with SCI is not fully explored ., Therefore , this pilot hybrid model was aimed at simulating pressure ulcer development by including a key vascular response ( reactive hyperemia ) observed in human subjects ., The goal of our previous research was to find the optimal turning frequency for patients with SCI 32 ., The goal of the present model is to improve our previous model by coupling an ODE model of the reactive hyperemic response observed experimentally to an ABM based on rules derived from the literature ., This model was capable of simulating the intensity of epithelial cell damage as a function of the amount and duration of localized pressure on the skin of people with and without SCI ., Results from the best-fit parameters of the circuit model showed differences in vascular resistance ( R1 ) and blood vessel compliance ( C1 , C2 ) between the two groups ., Arterial resistance was larger , while capillary resistance was smaller , in subjects with SCI as compared to controls ., Changes in vascularity in people with SCI may be caused by denervation of the sympathetic nervous system 53 as well as physical inactivity 54 ., Our finding of increased vascular resistance in the arterial system was consistent with previous studies ., With the loss of
supraspinal control of the vascular system after a high level of injury , people with SCI were reported to have increased vascular resistance in order to maintain vascular tone , compensating for the loss of supraspinal sympathetic control 55 ., Additionally , the increased vascular resistance may result from preservation of α-adrenergic tone ., The increased vascular resistance could also result from vascular adaptation to deconditioning with the loss of motor function 56 ., One prior study found an increased activation of the endothelin-1 receptor , which increases vascular tone 56 ., The results of decreased vascular resistance in the capillary system were not consistent with observations regarding vascular resistance in the arterial system ., The capillary resistance was not investigated in previous studies; thus , findings regarding vascular resistance in the arterial system may not generalize to the capillary resistance , since vascular resistance was measured with venous occlusion plethysmography in previous studies and the measurement was not made directly on capillary blood flow ., In addition , the measurement of reactive hyperemia in our study was at the lower back using an indenter , whereas the aforementioned previous studies measured this response at the lower limbs with a cuff ., Future studies on structural changes in the capillary system and vascularity of the microcirculation might be beneficial in understanding the linkage to ulceration ., Results from the analysis of the best-fit parameters of the circuit model also showed that vessel compliance is smaller in people with SCI as compared to controls ., De Groot et al .
found that the femoral artery compliance is smaller in individuals with SCI 43 , and they suggested that this physiological change may be due to inactivity of the muscle , since arterial compliance could be enhanced with functional electrical stimulation ., Our model validation studies suggest that the minimal amount of repeated pressure required to cause endothelial cell damage would be smaller in subjects with SCI ., People with SCI are susceptible to ulcer formation , and there are several physiological changes that may contribute to the susceptibility of pressure ulcer development in this population ., For example , people with complete SCI had decreased cross-sectional area of muscle fibers 57 and increased fat mass in lower limbs 58 ., A recent study from Linder-Ganz et al . directly pointed out the relationship between physiological changes after injury and pressure ulcer formation by using a finite element model ., They found that with the use of the same seat cushion , people with SCI had greater deep muscle stress as compared to controls 59 ., To date , there is no study that investigated the direct linkage between changes in vascularity and ulcer formation in people with SCI ., The underlying mechanism linking changes in vascularity to ulcer formation remains unknown ., However , the rules and results of our model indicate that changes in vascularity may play a role in decreased tolerance of pressure and endothelial function , leading to more severe damage with the same amount and duration of pressure ., There are several limitations of this study ., This study only recruited limited numbers of subjects ( six CTRL and six SCI ) , and people with SCI and controls were not matched for comparison ., If additional subjects were used for the model calibration , the conclusion could be reached with a higher degree of confidence ., Though the ages of the subjects in the cohorts were not identical , there was no statistically significant
difference in age between the two groups of patients. In addition, previous studies [60, 61] found that the reactive hyperemic response did not differ between a healthy elderly population and healthy adults; these authors found an impaired reactive hyperemic response only among individuals in a hospitalized elderly population. Since there was no statistically significant difference in age between non-injured and SCI subjects in our studies, and since all subjects recruited were healthy and not hospitalized at the time of the study, age is unlikely to be a significant factor in our data analysis. This is a pilot study developing a hybrid model of ulcer formation with different inputs for people with and without SCI. For a more realistic simulation, the ABM portion of the model could be expanded by incorporating additional physical and biological components, such as shear force and reperfusion injury, which may contribute to pressure ulcer formation. Nevertheless, in this work we present a first attempt to construct a biological model in a single computational platform where mathematical and agent-based models work in a seamless manner, and the results of the model reveal useful insight into ulceration in people with and without SCI. In conclusion, we used a hybrid approach combining ordinary differential equations related to blood flow along with an agent-based model | Introduction, Methods, Results, Discussion | Pressure ulcers are costly and life-threatening complications for people with spinal cord injury (SCI). People with SCI also exhibit differential blood flow properties in non-ulcerated skin. We hypothesized that a computer simulation of the pressure ulcer formation process, informed by data regarding skin blood flow and reactive hyperemia in response to pressure, could provide insights into the pathogenesis and effective treatment of post-SCI pressure ulcers. Agent-Based Models
(ABM) are useful in settings such as pressure ulcers, in which spatial realism is important. Ordinary Differential Equation-based (ODE) models are useful when modeling physiological phenomena such as reactive hyperemia. Accordingly, we constructed a hybrid model that combines ODEs related to blood flow with an ABM of skin injury, inflammation, and ulcer formation. The relationship between pressure and the course of ulcer formation, as well as several other important characteristic patterns of pressure ulcer formation, was demonstrated in this model. The ODE portion of this model was calibrated to blood flow data from experimental pressure responses in non-injured human subjects or to data from people with SCI. This model predicted a higher propensity to form ulcers in response to pressure in people with SCI vs. non-injured control subjects, and thus may serve as a novel diagnostic platform for post-SCI ulcer formation. | Pressure ulcers are costly and life-threatening complications for people with spinal cord injury (SCI). To gain insight into the pathogenesis and effective treatment of post-SCI pressure ulcers, we constructed a computer simulation in a hybrid modeling platform that combines equation- and agent-based models. The model was calibrated using skin blood flow data and reactive hyperemia in response to pressure, and it predicted a higher propensity to form ulcers in response to pressure in people with SCI vs. non-injured control subjects. The methodology we present in this paper may eventually be used as a novel platform to study post-SCI ulcer formation, as well as serve as a framework for other biological contexts in which agent-based models and mathematical equations can be integrated. | systems biology, biochemical simulations, computer science, computer modeling, immunology, biology, computational biology, computerized simulations, immune response, immune system | null |
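The hybrid ODE+ABM coupling described in this record can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' actual model: the single-compartment blood-flow ODE, the grid size, the per-cell perfusion thresholds, the damage/repair rates, and the specific `k_recover` values used to contrast SCI with control subjects are all invented for illustration. It only shows the architectural idea: an ODE tracks tissue perfusion under external pressure, while an agent-based grid of "skin cells" accumulates or repairs damage depending on whether perfusion falls below each cell's local need.

```python
import numpy as np

def flow_derivative(flow, pressure, k_recover, k_occlude=0.02):
    # dF/dt: perfusion relaxes toward its baseline of 1.0 at rate k_recover,
    # while external pressure occludes vessels and drives flow down.
    # All rate constants are illustrative, not calibrated values.
    return k_recover * (1.0 - flow) - k_occlude * pressure * flow

def simulate(pressure, hours, k_recover, dt=0.01, grid=(20, 20), seed=0):
    rng = np.random.default_rng(seed)
    # Per-cell perfusion threshold: the minimum relative flow each "cell
    # agent" needs to avoid ischemic damage (crude spatial heterogeneity).
    need = rng.uniform(0.1, 0.9, size=grid)
    flow = 1.0                      # ODE state: relative perfusion
    damage = np.zeros(grid)         # ABM state: per-cell damage in [0, 1]
    for _ in range(int(hours / dt)):
        flow += dt * flow_derivative(flow, pressure, k_recover)  # ODE (Euler)
        ischemic = flow < need                                   # ABM rule
        damage += dt * np.where(ischemic, 0.10, -0.05)           # injure/repair
        damage = np.clip(damage, 0.0, 1.0)
    return flow, damage

# Contrast control vs. SCI by assuming slower flow recovery after injury
# (a hypothetical stand-in for the impaired reactive hyperemia discussed above).
flow_ctrl, dmg_ctrl = simulate(pressure=60.0, hours=8.0, k_recover=0.5)
flow_sci,  dmg_sci  = simulate(pressure=60.0, hours=8.0, k_recover=0.2)
```

With these toy parameters the SCI run settles at lower perfusion and accumulates more mean damage under the same pressure and duration, mirroring the qualitative prediction in the record; nothing quantitative should be read into the numbers.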